1, Nacos registry
1. Add the dependency to pom.xml
<!-- Nacos as the service discovery/registry center -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
2. Configure the Nacos server address in application.yml
spring:
  application:
    name: product
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
3. Inject Nacos's NamingService through @NacosInjected
4. Use the API methods provided by Nacos's NamingService (see the sketch after this list)
(1) Register instance
- registerInstance(serviceName, ip, port)
- registerInstance(serviceName, ip, port, clusterName)
- registerInstance(serviceName, instance)
(2) Get instance
- getAllInstances(serviceName)
- getAllInstances(serviceName, clusters)
(3) Monitoring service
- subscribe(serviceName, listener)
- subscribe(serviceName, clusters, listener)
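A minimal sketch combining steps 3 and 4 (the controller, mapping, IP, and port are illustrative assumptions, not part of the notes above):

import com.alibaba.nacos.api.annotation.NacosInjected;
import com.alibaba.nacos.api.exception.NacosException;
import com.alibaba.nacos.api.naming.NamingService;
import com.alibaba.nacos.api.naming.listener.NamingEvent;
import com.alibaba.nacos.api.naming.pojo.Instance;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;

@RestController
public class NacosDiscoveryController {

    // NamingService is injected by the Nacos Spring support
    @NacosInjected
    private NamingService namingService;

    @GetMapping("/discovery")
    public List<Instance> discovery(@RequestParam String serviceName) throws NacosException {
        // (1) Register an instance under the given service name (IP/port assumed)
        namingService.registerInstance(serviceName, "127.0.0.1", 8080);
        // (3) Subscribe to changes of the service's instance list
        namingService.subscribe(serviceName, event -> {
            if (event instanceof NamingEvent) {
                System.out.println(((NamingEvent) event).getInstances());
            }
        });
        // (2) Query all instances of the service
        return namingService.getAllInstances(serviceName);
    }
}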
[Dubbo]
5. Dubbo uses Nacos to implement the registry
(1) application.properties
dubbo.application.name=spring-boot-dubbo-nacos-sample
dubbo.registry.address=nacos://127.0.0.1:8848
dubbo.protocol.name=dubbo
dubbo.protocol.port=20880
(2) Declare the service through Dubbo's @Service annotation
(3) The startup class uses @DubboComponentScan to scan for @Service
2, Nacos configuration center
1. Add the dependency to pom.xml
<!-- Nacos as the configuration center -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
2. Configure the Nacos configuration center address in bootstrap.properties
spring.cloud.nacos.config.server-addr=127.0.0.1:8848
spring.cloud.nacos.config.namespace=6ca49aac-50ec-4e6f-8686-70bb11922fdd
3. Read configuration dynamically from the Nacos server using annotations
@NacosPropertySource(dataId = "example", autoRefreshed = true)
@RestController
public class NacosConfigController {

    @NacosValue(value = "${info:Local Hello World}", autoRefreshed = true)
    private String info;

    @GetMapping("/config")
    public String get() {
        return info;
    }
}
(1) @NacosPropertySource: loads the configuration source whose dataId is example; autoRefreshed enables automatic updates.
(2) @NacosValue: injects the property value; the text after the colon (Local Hello World) is the default used when the key is missing.
4. Use the API methods provided by Nacos's ConfigService (a sketch follows)
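A minimal sketch of reading and watching the example configuration through the SDK (the server address and group are assumptions carried over from the snippets above):

import com.alibaba.nacos.api.NacosFactory;
import com.alibaba.nacos.api.config.ConfigService;
import com.alibaba.nacos.api.config.listener.Listener;
import java.util.concurrent.Executor;

public class NacosConfigOpenApiDemo {

    public static void main(String[] args) throws Exception {
        // Create a ConfigService pointing at the local Nacos server
        ConfigService configService = NacosFactory.createConfigService("127.0.0.1:8848");
        // Read the "example" configuration (3s timeout)
        String content = configService.getConfig("example", "DEFAULT_GROUP", 3000);
        System.out.println(content);
        // Listen for subsequent changes to the same configuration
        configService.addListener("example", "DEFAULT_GROUP", new Listener() {
            @Override
            public Executor getExecutor() {
                return null; // use the default notification thread
            }

            @Override
            public void receiveConfigInfo(String configInfo) {
                System.out.println("config changed: " + configInfo);
            }
        });
    }
}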
3, Dubbo
[service provider]
1. Add the dependencies to pom.xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.dubbo</groupId>
    <artifactId>dubbo-spring-boot-starter</artifactId>
    <version>2.7.5</version>
</dependency>
2. Configure the Dubbo service in application.properties
spring.application.name=spring-dubbo-demo
dubbo.application.name=springboot-provider
dubbo.protocol.name=dubbo
dubbo.protocol.port=20880
# N/A means no registry is used (direct connection)
dubbo.registry.address=N/A
3. Publish the service by declaring Dubbo's @Service annotation
@Service
public class HelloServiceImpl implements HelloService {

    @Value("${dubbo.application.name}")
    private String serviceName;

    @Override
    public String sayHello(String name) {
        return serviceName;
    }
}
4. Add @DubboComponentScan on the startup class to scan for Dubbo's @Service
@DubboComponentScan
@SpringBootApplication
public class ProviderApplication {
    public static void main(String[] args) {
        SpringApplication.run(ProviderApplication.class, args);
    }
}
[service caller]
1. Add the dependency to pom.xml
<dependency>
    <groupId>org.apache.dubbo</groupId>
    <artifactId>dubbo-spring-boot-starter</artifactId>
    <version>2.7.5</version>
</dependency>
2. Configure the Dubbo application in application.properties
dubbo.application.name=springboot-consumer
3. Use Dubbo's @Reference annotation to obtain the remote service proxy object
@Reference(url = "dubbo://192.168.13.1:20880/com.gupaoedu.book.dubbo.HelloService")
private HelloService helloService;
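The proxy can then be used like a local bean; a sketch (the controller and mapping are assumptions):

@RestController
public class ConsumerController {

    // Direct connection to the provider, bypassing any registry
    @Reference(url = "dubbo://192.168.13.1:20880/com.gupaoedu.book.dubbo.HelloService")
    private HelloService helloService;

    @GetMapping("/hello")
    public String hello() {
        // The call is sent over the Dubbo protocol to the remote provider
        return helloService.sayHello("dubbo");
    }
}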
[advanced configuration]
1. Cluster fault tolerance
@Service(cluster = "failfast")
Fault tolerance modes
(1) Failover Cluster: automatic failover. After a call fails, switch to another machine in the cluster and retry (twice by default).
(2) Failfast Cluster: fail fast. Report an error immediately after a call fails; only one call is ever initiated.
(3) Failsafe Cluster: fail safe. Exceptions are simply ignored.
(4) Failback Cluster: automatic recovery after failure. Failed requests are recorded in the background and re-sent periodically.
(5) Forking Cluster: call multiple services in the cluster in parallel; return as soon as any one succeeds.
(6) Broadcast Cluster: broadcast the call to all service providers. If any provider reports an error, the call fails.
2. Load balancing
@Service(cluster = "failfast", loadbalance = "roundrobin")
Load balancing strategies
(1) Random LoadBalance: random selection; servers with better performance can be given larger weights.
(2) RoundRobin LoadBalance: round robin; the polling ratio follows the configured weights.
(3) LeastActive LoadBalance: least active calls; slower nodes receive fewer requests.
(4) ConsistentHash LoadBalance: consistent hashing; requests with the same parameters are always sent to the same provider.
3. Service degradation
(1) Create a local mock implementation that returns degraded data by default
public class MockHelloService implements HelloService {

    @Override
    public String sayHello(String s) {
        return "Sorry, service unreachable, degraded data returned";
    }
}
(2) Add the mock parameter to the @Reference annotation
@Reference(mock = "com.gupaoedu.book.springcloud.springclouddubboconsumer.MockHelloService",
        cluster = "failfast")
private HelloService helloService;
4. Host binding
(1) The IP address published by the Dubbo service is resolved in the following default order:
- Look up the IP address configured in the DUBBO_IP_TO_BIND environment variable
- Look up the IP address configured in the dubbo.protocol.host property (empty by default)
- Obtain the local IP address through InetAddress.getLocalHost().getHostAddress()
- Connect to the registry address over a Socket, then scan each network card in a for loop and obtain its IP via socket.getLocalAddress().getHostAddress()
(2) Solutions when a service consumer cannot call normally
- Configure the correct hostname-to-IP mapping in /etc/hosts
- Set the DUBBO_IP_TO_BIND or DUBBO_IP_TO_REGISTRY environment variable
- Set the host address through dubbo.protocol.host (see the example below)
Why calls fail: Dubbo resolves the IP address from the local machine's hostname; even if the resolved IP is wrong, the service still registers with ZooKeeper and starts normally, leaving consumers with an unreachable address.
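For example, binding the published host explicitly in application.properties (the IP address here is an assumption):

dubbo.protocol.host=192.168.13.100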
4, Feign
1. Import the dependency in pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
2. Define an interface and bind it to the service by specifying the service name through the @FeignClient annotation
Each method declared in the interface corresponds to one request against the remote service
@FeignClient("whalemall-coupon")
public interface CouponFeignService {

    @PostMapping("/coupon/spubounds/save")
    R saveSpuBounds(@RequestBody SpuBoundTo spuBoundTo);

    @PostMapping("/coupon/skufullreduction/saveinfo")
    R saveSkuReduction(@RequestBody SkuReductionTo skuReductionTo);
}
3. The startup class enables Spring Cloud Feign through the @EnableFeignClients annotation
@EnableFeignClients(basePackages = "com.island.whalemall.product.feign")
@EnableDiscoveryClient
@MapperScan("com.island.whalemall.product.dao")
@SpringBootApplication
public class WhalemallProductApplication {

    public static void main(String[] args) {
        SpringApplication.run(WhalemallProductApplication.class, args);
    }
}
5, Gateway
1. Add dependency to pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
2. Add the routing configuration of the Gateway in the application.yml file
spring:
  cloud:
    gateway:
      routes:
        # Goods and services
        - id: product_route
          uri: lb://whalemall-product
          predicates:
            - Path=/api/product/**
          filters:
            - RewritePath=/api/(?<segment>.*),/$\{segment}
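The same route can also be declared with the Gateway Java DSL; a sketch assuming the service name and path from the YAML above:

@Bean
public RouteLocator productRoute(RouteLocatorBuilder builder) {
    return builder.routes()
            .route("product_route", r -> r
                    .path("/api/product/**")
                    // Strip the /api prefix before forwarding, mirroring the RewritePath filter
                    .filters(f -> f.rewritePath("/api/(?<segment>.*)", "/${segment}"))
                    .uri("lb://whalemall-product"))
            .build();
}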
6, Seata
7, OSS
1. Add dependency to pom.xml
<dependency>
    <groupId>com.aliyun.oss</groupId>
    <artifactId>aliyun-sdk-oss</artifactId>
    <version>3.10.2</version>
</dependency>
2. Create an OSSClient using the OSS endpoint
// yourEndpoint is the endpoint of the Bucket's region. For East China 1 (Hangzhou) it is https://oss-cn-hangzhou.aliyuncs.com.
String endpoint = "yourEndpoint";
// An Alibaba Cloud account AccessKey grants access to all APIs and is very risky;
// it is strongly recommended to create a RAM user in the RAM console and use it for API access and daily operations.
String accessKeyId = "yourAccessKeyId";
String accessKeySecret = "yourAccessKeySecret";
// Create an OSSClient instance.
OSS ossClient = new OSSClientBuilder().build(endpoint, accessKeyId, accessKeySecret);
// Close the OSSClient.
ossClient.shutdown();
3. Create examplebucket storage space
// Endpoint and credentials as above.
String endpoint = "yourEndpoint";
String accessKeyId = "yourAccessKeyId";
String accessKeySecret = "yourAccessKeySecret";
// The Bucket name, e.g. examplebucket.
String bucketName = "examplebucket";
OSS ossClient = null;
try {
    // Create an OSSClient instance.
    ossClient = new OSSClientBuilder().build(endpoint, accessKeyId, accessKeySecret);
    // Create the storage space (bucket).
    ossClient.createBucket(bucketName);
} catch (OSSException e) {
    e.printStackTrace();
} finally {
    // Close the OSSClient.
    ossClient.shutdown();
}
4. Upload files to OSS through streaming upload
// Endpoint and credentials as above.
String endpoint = "yourEndpoint";
String accessKeyId = "yourAccessKeyId";
String accessKeySecret = "yourAccessKeySecret";
// The Bucket name, e.g. examplebucket.
String bucketName = "examplebucket";
// The object name contains the path but not the Bucket name, e.g. exampledir/exampleobject.txt.
String objectName = "exampledir/exampleobject.txt";
OSS ossClient = null;
try {
    // Create an OSSClient instance.
    ossClient = new OSSClientBuilder().build(endpoint, accessKeyId, accessKeySecret);
    String content = "Hello OSS";
    ossClient.putObject(bucketName, objectName, new ByteArrayInputStream(content.getBytes()));
} catch (OSSException e) {
    e.printStackTrace();
} finally {
    // Close the OSSClient.
    ossClient.shutdown();
}
5. Download files from OSS through streaming download
// Endpoint and credentials as above.
String endpoint = "yourEndpoint";
String accessKeyId = "yourAccessKeyId";
String accessKeySecret = "yourAccessKeySecret";
// The Bucket name, e.g. examplebucket.
String bucketName = "examplebucket";
// The object name contains the path but not the Bucket name, e.g. exampledir/exampleobject.txt.
String objectName = "exampledir/exampleobject.txt";
OSS ossClient = null;
try {
    // Create an OSSClient instance.
    ossClient = new OSSClientBuilder().build(endpoint, accessKeyId, accessKeySecret);
    // getObject returns an OSSObject instance containing the file content and metadata.
    OSSObject ossObject = ossClient.getObject(bucketName, objectName);
    // getObjectContent returns the file input stream, which can be read for the content.
    InputStream content = ossObject.getObjectContent();
    if (content != null) {
        BufferedReader reader = new BufferedReader(new InputStreamReader(content));
        while (true) {
            String line = reader.readLine();
            if (line == null) break;
            System.out.println("\n" + line);
        }
        // The stream must be closed after reading; otherwise connections leak
        // and the program eventually has no connection available.
        content.close();
    }
} catch (OSSException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    // Close the OSSClient.
    ossClient.shutdown();
}
8, RabbitMQ
1. Introduce the AMQP dependency in pom.xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
2. Configure the RabbitMQ service in application.properties
# RabbitMQ server address
spring.rabbitmq.host=localhost
# RabbitMQ port
spring.rabbitmq.port=5672
# RabbitMQ user
spring.rabbitmq.username=admin
# RabbitMQ password
spring.rabbitmq.password=123456
# Confirm that sent messages reached the broker
spring.rabbitmq.publisher-confirms=true
# Queue name for plain string messages
rabbitmq.queue.msg=spring-boot-queue-msg
# Queue name for user-object messages
rabbitmq.queue.user=spring-boot-queue-user
3. The main startup class uses @EnableRabbit to enable the functionality
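A sketch of the startup class (the class name is an assumption):

@EnableRabbit
@SpringBootApplication
public class RabbitApplication {
    public static void main(String[] args) {
        SpringApplication.run(RabbitApplication.class, args);
    }
}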
4. Using RabbitMQ
(1) Use the management component AmqpAdmin to create Exchange, Queue and Binding
// 1. Inject the management component
@Autowired
AmqpAdmin amqpAdmin;

public void declareExchangeQueueBinding() {
    // 2. Create an exchange (example names; args: name, durable, autoDelete)
    DirectExchange directExchange = new DirectExchange("hello-exchange", true, false);
    // 3. Declare the exchange through the management component
    amqpAdmin.declareExchange(directExchange);
    // 4. Create a queue (args: name, durable, exclusive, autoDelete)
    Queue queue = new Queue("hello-queue", true, false, false);
    // 5. Declare the queue
    amqpAdmin.declareQueue(queue);
    // 6. Create the binding (destination, destination type: QUEUE or EXCHANGE, exchange, routing key, optional arguments)
    Binding binding = new Binding("hello-queue", Binding.DestinationType.QUEUE,
            "hello-exchange", "hello.route", null);
    // 7. Declare the binding
    amqpAdmin.declareBinding(binding);
}
(2) Send and receive messages using RabbitTemplate
// 1. Inject RabbitTemplate
@Autowired
RabbitTemplate rabbitTemplate;

// 2. Send a message (an object payload goes through the serialization mechanism, so it must implement Serializable)
rabbitTemplate.convertAndSend("hello-exchange", "hello.route", message);

// 3. To send object messages as JSON instead, place a message converter in the container in your own RabbitConfig.java
@Bean
public MessageConverter messageConverter() {
    return new Jackson2JsonMessageConverter();
}
5. Listening for RabbitMQ messages
(1) Use @RabbitListener(queues = {"xxx","xxx",...}) (requires @EnableRabbit; can be placed on classes or methods)
// A listener method can take (Message message, T content, Channel channel),
// where T is the type of the sent message (e.g. OrderReturnReasonEntity)
// 1. Get the message body (parse it as JSON if needed)
byte[] body = message.getBody();
// 2. Get the message header properties
MessageProperties properties = message.getMessageProperties();
// 3. A message in a queue is received by only one client, and messages are received in order;
//    the next message is processed only after the current one completes
(2) Use @RabbitHandler (placed on methods) to overload and receive different message types from the same queue (see the sketch below)
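A sketch combining both annotations (the queue name comes from the earlier configuration; the User payload class is an assumption and must be serializable):

@Service
@RabbitListener(queues = {"spring-boot-queue-user"})
public class UserMessageListener {

    // Invoked when the payload deserializes to a User
    @RabbitHandler
    public void receive(User user) {
        System.out.println("user message: " + user);
    }

    // Invoked when the payload is a plain string
    @RabbitHandler
    public void receive(String message) {
        System.out.println("string message: " + message);
    }
}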
6. Rabbit message confirmation mechanism
(1) [Sender] ConfirmCallback: callback when the broker receives the message
1) Enable sender confirms in application.properties
spring.rabbitmq.publisher-confirms=true
2) Customize the RabbitTemplate and set the confirm callback
@Autowired
RabbitTemplate rabbitTemplate;

// Spring Boot executes this after the config object is created
@PostConstruct
public void initRabbitTemplate() {
    // Set the confirm callback
    rabbitTemplate.setConfirmCallback(new RabbitTemplate.ConfirmCallback() {
        @Override
        public void confirm(CorrelationData correlationData, // unique id of the message
                            boolean ack,                     // whether the broker confirmed receipt
                            String cause) {                  // failure reason, if any
        }
    });
}
(2) [Sender] ReturnCallback: callback when the message fails to reach a queue
1) Enable queue-arrival confirmation for the sender in application.properties
spring.rabbitmq.publisher-returns=true
# Execute the return callback asynchronously as soon as a message fails to reach a queue
spring.rabbitmq.template.mandatory=true
2) Customize the RabbitTemplate; the callback fires when delivery to a queue fails
@Autowired
RabbitTemplate rabbitTemplate;

// Spring Boot executes this after the config object is created
@PostConstruct
public void initRabbitTemplate() {
    // Set the return callback
    rabbitTemplate.setReturnCallback(new RabbitTemplate.ReturnCallback() {
        @Override
        public void returnedMessage(Message message,     // details of the failed delivery
                                    int replyCode,       // reply status code
                                    String replyText,    // reply text
                                    String exchange,     // exchange that handled the message
                                    String routingKey) { // routing key of the message
        }
    });
}
(3) Consumer confirmation
The first method, automatic acknowledgement, is the default: as soon as the message is received, the broker removes it from the queue.
Problem: if many messages are received and the consumer goes down after successfully processing only one, all of them are still acknowledged and deleted from the queue, so messages are lost.
The second method: manual confirmation
1) Enable manual acknowledgement in application.properties
spring.rabbitmq.listener.simple.acknowledge-mode=manual
2) Confirm receipt
long deliveryTag = message.getMessageProperties().getDeliveryTag();
// second argument: whether to acknowledge in batch (false = only this delivery)
channel.basicAck(deliveryTag, false);
3) Reject
// arguments: delivery tag, batch reject (false = only this delivery), requeue (true = put back in the queue)
channel.basicNack(deliveryTag, false, true);
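Putting manual acknowledgement together, a listener method sketch (the queue name comes from the earlier configuration; Channel is com.rabbitmq.client.Channel):

@RabbitListener(queues = {"spring-boot-queue-msg"})
public void handleMessage(Message message, Channel channel) throws IOException {
    long deliveryTag = message.getMessageProperties().getDeliveryTag();
    try {
        // ... process the message ...
        channel.basicAck(deliveryTag, false);
    } catch (Exception e) {
        // Reject and requeue on failure
        channel.basicNack(deliveryTag, false, true);
    }
}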
9, Sentinel
1. Import the dependency in pom.xml
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
    <version>2.1.1.RELEASE</version>
</dependency>
2. Configure resource protection rules through @SentinelResource
@RestController
public class HelloController {

    @SentinelResource(value = "hello", blockHandler = "blockHandlerHello")
    @GetMapping("/say")
    public String hello() {
        return "Hello World";
    }

    // Flow-control fallback protecting the Controller resource
    public String blockHandlerHello(BlockException e) {
        return "Request was rate limited";
    }
}
3. Manually configure rules
// Flow control, circuit-breaking, system protection, origin access control,
// and hot-spot parameter rules can all be configured this way
public class FlowRuleInitFunc implements InitFunc {

    @Override
    public void init() throws Exception {
        List<FlowRule> rules = new ArrayList<>();
        FlowRule rule = new FlowRule();
        // Flow-control threshold
        rule.setCount(1);
        // Resource to protect
        rule.setResource("hello");
        // Threshold type
        rule.setGrade(RuleConstant.FLOW_GRADE_QPS);
        // Limit by call origin ("default" = no distinction)
        rule.setLimitApp("default");
        rules.add(rule);
        FlowRuleManager.loadRules(rules);
    }
}
[flow control]
The main purpose of flow control is to protect the system by limiting concurrent access or the number of requests processed within a time window. Once the limit is reached, the incoming request is handled by the configured rejection policy.
private static void initFlowRules() {
    List<FlowRule> rules = new ArrayList<>();
    FlowRule rule = new FlowRule();
    rule.setResource("doSomething");
    rule.setCount(20);
    rule.setGrade(RuleConstant.FLOW_GRADE_QPS);
    rule.setLimitApp("default");
    rule.setStrategy(RuleConstant.STRATEGY_CHAIN);
    rule.setControlBehavior(RuleConstant.CONTROL_BEHAVIOR_DEFAULT);
    rule.setClusterMode(false);
    rules.add(rule);
    FlowRuleManager.loadRules(rules);
}
- resource: the resource to protect.
- count: the flow-control threshold.
- grade: the threshold type, QPS mode (1) or concurrent-thread-count mode (0).
- limitApp: whether to limit by call origin. The default is default, i.e. the origin is not distinguished.
- strategy: call-relation strategy: direct, chain, or associated.
- controlBehavior: flow-control behavior: direct rejection (the default), queueing, or slow start.
- clusterMode: whether this is a cluster-wide rule. The default is no.
[circuit breaking]
Circuit breaking means that when the current service provider cannot serve callers normally (e.g. request timeouts or service exceptions), the faulty interface is temporarily isolated and cut off from external calls to prevent an avalanche effect across the whole system. While the breaker is open, the caller's requests fail directly for a period of time, until the target service recovers.
private static void initDegradeRule() {
    List<DegradeRule> rules = new ArrayList<>();
    DegradeRule degradeRule = new DegradeRule();
    degradeRule.setResource("KEY");
    degradeRule.setCount(10);
    degradeRule.setGrade(RuleConstant.DEGRADE_GRADE_RT);
    degradeRule.setTimeWindow(10);
    degradeRule.setMinRequestAmount(5);
    degradeRule.setRtSlowRequestAmount(5);
    rules.add(degradeRule);
    // Load the rules so they take effect
    DegradeRuleManager.loadRules(rules);
}
- grade: circuit-breaking strategy. Supports second-level RT (RuleConstant.DEGRADE_GRADE_RT), second-level exception ratio (RuleConstant.DEGRADE_GRADE_EXCEPTION_RATIO), and minute-level exception count (RuleConstant.DEGRADE_GRADE_EXCEPTION_COUNT).
- timeWindow: the circuit-breaking time window, in seconds.
- rtSlowRequestAmount: in RT mode, the number of requests within 1s whose RT exceeds the threshold before the breaker trips. The default is 5.
- minRequestAmount: the minimum number of requests before circuit breaking can trigger; below this value the breaker will not trip even if the exception ratio exceeds the threshold. The default is 5.
4. Configure rules based on Sentinel Dashboard
(1) Launch Sentinel Dashboard
(2) In application.yml
spring:
  application:
    name: spring-cloud-sentinel-sample
  cloud:
    sentinel:
      transport:
        dashboard: 192.168.216.128:7777
(3) REST interface
@RestController
public class DashboardController {

    @GetMapping("/dash")
    public String dash() {
        return "Hello Dash";
    }
}
5. Customize the URL flow-control exception response
@Service
public class CustomUrlBlockHandler implements UrlBlockHandler {

    @Override
    public void blocked(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse,
                        BlockException e) throws IOException {
        httpServletResponse.setHeader("Content-Type", "application/json;charset=UTF-8");
        String message = "{\"code\":999,\"msg\":\"Too many visitors\"}";
        httpServletResponse.getWriter().write(message);
    }
}
Degraded page
spring.cloud.sentinel.servlet.block-page={url}
6. URL resource cleaning through the UrlCleaner interface
@RestController
public class UrlCleanController {

    @GetMapping("/clean/{id}")
    public String clean(@PathVariable("id") int id) {
        return "Hello,Cleaner";
    }
}
@Service
public class CustomerUrlCleaner implements UrlCleaner {

    @Override
    public String clean(String originUrl) {
        if (StringUtils.isEmpty(originUrl)) {
            return originUrl;
        }
        // Collapse /clean/{id} URLs into a single resource
        if (originUrl.startsWith("/clean/")) {
            return "/clean/*";
        }
        return originUrl;
    }
}
7. Integrate Nacos to achieve dynamic rule synchronization
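With Spring Cloud Alibaba, a Nacos rule source can be declared purely through configuration; a sketch for application.properties, assuming the addresses used elsewhere in these notes and a JSON list of flow rules stored under the given dataId:

spring.cloud.sentinel.datasource.ds1.nacos.server-addr=127.0.0.1:8848
spring.cloud.sentinel.datasource.ds1.nacos.data-id=spring-cloud-sentinel-sample-flow
spring.cloud.sentinel.datasource.ds1.nacos.group-id=DEFAULT_GROUP
spring.cloud.sentinel.datasource.ds1.nacos.data-type=json
spring.cloud.sentinel.datasource.ds1.nacos.rule-type=flow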
8. Dubbo integrates Sentinel
(1) Add dependency to pom.xml
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-apache-dubbo-adapter</artifactId>
    <version>1.7.1</version>
</dependency>
(2) The filters registered by sentinel-apache-dubbo-adapter can be enabled or disabled through custom configuration
(3) Dubbo service access Sentinel Dashboard
1) Introduce the sentinel-transport-simple-http dependency
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-transport-simple-http</artifactId>
    <version>1.7.1</version>
</dependency>
2) Add startup parameters
-Djava.net.preferIPv4Stack=true
-Dcsp.sentinel.api.port=8720
-Dcsp.sentinel.dashboard.server=192.168.216.128:7777
-Dproject.name=spring-cloud.sentinel-dubbo.provider
3) Log in to Sentinel Dashboard to perform cluster link operations
(4) Persist the Dubbo service flow-control rules
1) Add the sentinel-datasource-nacos dependency
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-datasource-nacos</artifactId>
    <version>1.7.1</version>
</dependency>
2) Configure the Nacos data source through the InitFunc extension point provided by Sentinel
public class NacosDataSourceInitFunc implements InitFunc {

    private String serverAddr = "192.168.216.128:8848";
    private String groupId = "DEFAULT_GROUP";
    private String dataId = "spring-cloud.sentinel-dubbo.provider-sentinel-flow";

    @Override
    public void init() throws Exception {
        loadNacosData();
    }

    private void loadNacosData() {
        ReadableDataSource<String, List<FlowRule>> flowRuleDataSource =
                new NacosDataSource<>(serverAddr, groupId, dataId,
                        source -> JSON.parseObject(source, new TypeReference<List<FlowRule>>() {
                        }));
        FlowRuleManager.register2Property(flowRuleDataSource.getProperty());
    }
}
3) Visit Sentinel Dashboard
10, ElasticSearch
Java can operate ElasticSearch in two ways: over TCP through ES port 9300, or over HTTP through ES port 9200.
1) 9300: TCP
spring-data-elasticsearch: transport-api.jar
- Different Spring Boot versions of transport-api.jar cannot be adapted to every ES version
- Not recommended in 7.x, and will be removed after 8
2) 9200: HTTP
- JestClient: unofficial and slow to update
- RestTemplate: simulates HTTP requests; many ES operations must be encapsulated by hand, which is cumbersome
- HttpClient: same as above
- Elasticsearch-Rest-Client: the official RestClient, which encapsulates ES operations; the API is layered and easy to use
To sum up, Elasticsearch-Rest-Client (elasticsearch-rest-high-level-client) is the best choice. How to integrate it is recorded below.
1. Import the dependencies in pom.xml
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.3.2</version>
</dependency>
<!-- Java High Level REST Client -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>6.3.2</version>
</dependency>
2. application.yml configuration
elasticsearch:
  ip: localhost:9200
3. Configure the ElasticsearchRestClient
@Configuration
public class ElasticsearchRestClient {

    /** ES address, ip:port */
    @Value("${elasticsearch.ip}")
    String ipPort;

    @Bean
    public RestClientBuilder restClientBuilder() {
        return RestClient.builder(makeHttpHost(ipPort));
    }

    @Bean(name = "highLevelClient")
    public RestHighLevelClient highLevelClient(@Autowired RestClientBuilder restClientBuilder) {
        restClientBuilder.setMaxRetryTimeoutMillis(60000);
        return new RestHighLevelClient(restClientBuilder);
    }

    private HttpHost makeHttpHost(String s) {
        String[] address = s.split(":");
        String ip = address[0];
        int port = Integer.parseInt(address[1]);
        return new HttpHost(ip, port, "http");
    }
}
4. Add a document
PUT localhost:9200/customer/_doc/1?pretty
{
  "city": "Beijing",
  "useragent": "Mobile Safari",
  "sys_version": "Linux armv8l",
  "province": "Beijing",
  "event_id": "",
  "log_time": 1559191912,
  "session": "343730"
}
5. Condition query
@Service
public class TestService {

    @Autowired
    RestHighLevelClient highLevelClient;

    private void search(RestHighLevelClient highLevelClient) throws IOException {
        SearchRequest searchRequest = new SearchRequest();
        searchRequest.indices("customer");
        searchRequest.types("_doc");

        // Match and term conditions
        MatchQueryBuilder matchQuery = QueryBuilders.matchQuery("city", "Beijing");
        TermQueryBuilder termQuery = QueryBuilders.termQuery("province", "Fujian");
        // Range query
        RangeQueryBuilder timeFilter = QueryBuilders.rangeQuery("log_time").gt(12345).lt(343750);

        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
        QueryBuilder totalFilter = QueryBuilders.boolQuery()
                .filter(matchQuery)
                .filter(timeFilter)
                .mustNot(termQuery);

        int size = 200;
        int from = 0;
        long total = 0;
        do {
            try {
                sourceBuilder.query(totalFilter).from(from).size(size);
                sourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
                searchRequest.source(sourceBuilder);
                SearchResponse response = highLevelClient.search(searchRequest);
                SearchHit[] hits = response.getHits().getHits();
                for (SearchHit hit : hits) {
                    System.out.println(hit.getSourceAsString());
                }
                total = response.getHits().getTotalHits();
                System.out.println("test:[" + total + "][" + from + "-" + (from + hits.length) + ")");
                from += hits.length;
                // from + size must be less than or equal to 10000
                if (from >= 10000) {
                    System.out.println("test: more than 10000 hits, breaking out");
                    break;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        } while (from < total);
    }
}
11, MySQL
1. Introduce the MySQL dependencies in pom.xml
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<!-- Database connection pool -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-dbcp2</artifactId>
</dependency>
2. application.properties
spring.datasource.url=jdbc:mysql://localhost:3306/spring_boot_chapter5
spring.datasource.username=root
spring.datasource.password=123456
# Even if the driver is commented out, Spring Boot tries to infer it from the data source URL
# spring.datasource.driver-class-name=com.mysql.jdbc.Driver
# Specify the database connection pool type
spring.datasource.type=org.apache.commons.dbcp2.BasicDataSource
# Maximum number of idle connections
spring.datasource.dbcp2.max-idle=10
# Maximum number of connections in total
spring.datasource.dbcp2.max-total=50
# Maximum wait time in milliseconds
spring.datasource.dbcp2.max-wait-millis=10000
# Number of connections created at pool initialization
spring.datasource.dbcp2.initial-size=5
3. Using the MyBatis framework
(1) Import the dependency in pom.xml
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>1.3.1</version>
</dependency>
(2) application.properties
# MyBatis mapper file locations (wildcard)
mybatis.mapper-locations=classpath:com/springboot/chapter/mapper/*.xml
# Package scanned for type aliases, used together with the @Alias annotation
mybatis.type-aliases-package=com.springboot.chapter.pojo
# Package scanned for typeHandlers
mybatis.type-handlers-package=com.springboot.chapter.typehandler
Configurable properties
- Properties: properties
- Settings: settings
- typeAliases: type aliases
- typeHandlers: type handlers
- objectFactory: object factory
- plugins: plug-ins
- environments: database environment
- databaseIdProvider: database vendor ID
- mappers: mapper
(3) Use the @Mapper annotation to mark the Mapper (DAO) interface
(4) The startup class uses @MapperScan to scan the mapper package (a sketch follows)
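A minimal sketch of steps (3) and (4) (the entity, table, and package names are assumptions):

@Mapper
public interface UserDao {

    // Maps a row of t_user onto the User alias configured above
    @Select("SELECT id, user_name AS userName, note FROM t_user WHERE id = #{id}")
    User getUser(@Param("id") Long id);
}

@MapperScan("com.springboot.chapter.dao")
@SpringBootApplication
public class Chapter5Application {
    public static void main(String[] args) {
        SpringApplication.run(Chapter5Application.class, args);
    }
}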
12, Redis
1. Introduce the Redis dependencies in pom.xml
<!-- Redis client driver -->
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-redis</artifactId>
    <exclusions>
        <exclusion>
            <groupId>io.lettuce</groupId>
            <artifactId>lettuce-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
2. application.properties
# Connection pool properties
spring.redis.jedis.pool.min-idle=5
spring.redis.jedis.pool.max-idle=10
spring.redis.jedis.pool.max-active=10
spring.redis.jedis.pool.max-wait=2000
# Redis server properties
spring.redis.port=6379
spring.redis.host=192.168.11.131
spring.redis.password=123456
# Redis connection timeout (ms)
spring.redis.timeout=1000
3. Use the StringRedisTemplate provided by Spring Boot to operate Redis (it wraps Jedis)
ValueOperations<String, String> ops = stringRedisTemplate.opsForValue();
// Save
ops.set("hello", "world_" + UUID.randomUUID().toString());
// Query
String hello = ops.get("hello");