Kafka, RabbitMQ, RocketMQ: Practical Application Development Summary, Part 1

Posted by The Cat on Sun, 12 Sep 2021 03:35:20 +0200

Summary of practical applications of Kafka, RabbitMQ and RocketMQ. Part 1: Kafka

  • Combined with the use cases from the official documentation, this article records examples and practical applications of the three mainstream MQs.
  • It does not cover installing or configuring the environments, but the code is fairly complete (including configuration files and the Maven POM).
  • It goes straight to the code and use cases, so it suits readers who have already learned the basics of MQ and want examples ready for a production setting.
  • For an introduction to and comparison of the various MQs, see my previous articles.
  • The design ideas behind the different MQs are actually similar; through the examples you can get a concrete feel for each one's design philosophy and the basic services it provides.
  • If you have any questions, you are welcome to ask in the comments or by private message.
  • Even keeping only the key code, covering all three MQs in one article would be far too long, so it is split into three posts. This is the first one, covering Kafka.

1: Kafka

  • Basic / core concepts

    • Broker
      Kafka's server-side process; each MQ node can be thought of as a broker
      Brokers store topic data

    • Producer
      Creates messages and publishes them to the MQ
      This role publishes messages to a Kafka topic

    • Consumer
      Consumes messages from the queue

    • ConsumerGroup (consumer group)
      The same topic is broadcast to different groups, but within a group each message is consumed by only one consumer

    • Topic
      Every message published to a Kafka cluster belongs to a category, called a topic

    • Partition
      The basic unit of data storage in Kafka. The data in a topic is split into one or more partitions; each topic has at least one partition, and each partition is ordered
      The partitions of a topic are distributed across multiple servers in the Kafka cluster
      Within a group, the number of consumers should be less than or equal to the number of partitions

    • Replication (replica, the "spare tire")
      A partition can have multiple replicas holding identical data. When a broker goes down, the system can actively fail over to a replica and keep serving
      By default the replication factor of each topic is 1 (a single copy, i.e. no extra replicas, which saves resources); a different factor can be specified when the topic is created
      If the Kafka cluster has only three broker nodes, the maximum replication factor is 3; requesting 4 will raise an error

    • ReplicationLeader, ReplicationFollower
      A partition has multiple replicas, but only one ReplicationLeader, which handles all interaction between the partition and producers/consumers
      ReplicationFollowers only act as backups, synchronizing data from the ReplicationLeader

    • ReplicationManager
      Manages all partition replica information on the broker and handles replica state transitions

    • Offset
      Each consumer instance maintains, for every partition it consumes, an offset recording how far it has read
      Kafka stores the committed offsets per consumer group (in the internal __consumer_offsets topic); see the sketch just below this list
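
      A minimal sketch of reading from an explicit offset, assuming a local broker at 127.0.0.1:9092 and the topic created in the examples below (the group id is illustrative): assign() bypasses consumer-group assignment and seek() rewinds the position.

package net.xdclass.kafkatest;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class KafkaOffsetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "offset-demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("xdclass-sp-topic-1", 0);
            consumer.assign(Collections.singletonList(tp)); //manual assignment, no group rebalance
            consumer.seek(tp, 0L);                          //rewind to the beginning of partition 0
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            records.forEach(r -> System.out.printf("offset=%d, value=%s%n", r.offset(), r.value()));
        }
    }
}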

  • Kafka features

    • Multiple subscribers
      A topic can have one or more subscribers
      Each subscriber in a group is assigned at least one partition, so the number of subscribers should be less than or equal to the number of partitions

    • High throughput, low latency: hundreds of thousands of messages can be processed per second

    • High concurrency: thousands of clients read and write at the same time

    • Fault tolerance: multiple replicas and multiple partitions; nodes in the cluster are allowed to fail (with a replication factor of n, up to n-1 nodes can fail)

    • Strong scalability: support hot expansion

  • Example code (JDK 11 + Kafka 2.8)

    • Topic management (AdminClient)
package net.xdclass.kafkatest;

import org.apache.kafka.clients.admin.*;
import org.junit.jupiter.api.Test;

import java.util.*;
import java.util.concurrent.ExecutionException;

/**
 * @Author NJUPT wly
 * @Date 2021/8/14 12:15 AM
 * @Version 1.0
 */
public class KafkaAdminTest {
    private static final String TOPIC_NAME = "xdclass-sp-topic-1";
    /**
     * Setting up the admin client
     */
    public static AdminClient initAdminClient(){
        Properties properties = new Properties();
        properties.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,"127.0.0.1:9092");

        return AdminClient.create(properties);
    }

    @Test
    public void createTopicTest(){
        AdminClient adminClient = initAdminClient();

        //Topic name, number of partitions, replication factor
        NewTopic newTopic = new NewTopic(TOPIC_NAME,5,(short) 1);
        CreateTopicsResult result =  adminClient.createTopics(Collections.singletonList(newTopic));
        try {
            //block until the topic has actually been created
            result.all().get();
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void listTopicTest() throws ExecutionException, InterruptedException {
        AdminClient adminClient = initAdminClient();

        ListTopicsOptions options =  new ListTopicsOptions();
        options.listInternal(true);

        ListTopicsResult listTopicsResult = adminClient.listTopics(options);
        Set<String> topics = listTopicsResult.names().get();
        for (String name : topics){
            System.out.println(name);
        }
    }

    @Test
    public void delTopicTest() throws ExecutionException, InterruptedException {
        AdminClient adminClient =initAdminClient();

        DeleteTopicsResult result = adminClient.deleteTopics(Collections.singletonList("xdclass-sp-topic"));
        result.all().get();

    }

    @Test
    public void detailTopicTest() throws ExecutionException, InterruptedException {
        AdminClient adminClient = initAdminClient();

        DescribeTopicsResult result = adminClient.describeTopics(Collections.singletonList(TOPIC_NAME));
        Map<String,TopicDescription> stringTopicDescriptionMap = result.all().get();
        Set<Map.Entry<String,TopicDescription>> entries = stringTopicDescriptionMap.entrySet();
        entries.forEach((entry)-> System.out.println("name: "+entry.getKey()+", desc: "+entry.getValue()));
    }

    @Test
    public void incrPartitionTest() throws ExecutionException, InterruptedException {
        AdminClient adminClient = initAdminClient();
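        //the partition count can only be increased, never decreased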
        NewPartitions newPartitions = NewPartitions.increaseTo(5);
        Map<String,NewPartitions> map = new HashMap<>();
        map.put(TOPIC_NAME,newPartitions);
        CreatePartitionsResult result = adminClient.createPartitions(map);
        result.all().get();
    }
}

  • Consumer examples
package net.xdclass.kafkatest;

import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import org.junit.jupiter.api.Test;

import java.time.Duration;
import java.util.*;

/**
 * @Author NJUPT wly
 * @Date 2021/8/15 10:43 AM
 * @Version 1.0
 */
public class KafkaConsumerTest {
    private static final String TOPIC_NAME = "xdclass-sp-topic-1";

    public static Properties getProperties(){
        Properties properties = new Properties();

        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"127.0.0.1:9092");
        properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG,"xdclass-g-1");

        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,"earliest");

//        properties.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,"true");
//        properties.setProperty(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG,"1000");

        properties.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,"false");

        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer");
        return properties;
    }

    @Test
    public void simpleConsumerTest(){
        Properties properties = getProperties();
        KafkaConsumer<String,String> kafkaConsumer = new KafkaConsumer<String, String>(properties);
        kafkaConsumer.subscribe(Collections.singleton(TOPIC_NAME));

        while (true){
            ConsumerRecords<String, String> records = kafkaConsumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records){
                System.err.printf("topic=%s, offset=%d, key=%s, value=%s%n",record.topic(),record.offset(),record.key(),record.value());
            }
//            kafkaConsumer.commitSync();
            if (!records.isEmpty()){
                kafkaConsumer.commitAsync(new OffsetCommitCallback() {
                    @Override
                    public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
                        if (exception == null){
                            System.err.println("Manual commit succeeded: "+offsets.toString());
                        } else {
                            System.err.println("Manual commit failed: "+offsets.toString());
                        }
                    }
                });
            }
        }
    }

}
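
  Note: with enable-auto-commit set to false, offsets only advance through the explicit commitAsync call above; if the process dies between poll() and the commit, the uncommitted records are redelivered after a restart, which gives at-least-once delivery semantics.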
  • Producer examples
package net.xdclass.kafkatest;

import org.apache.kafka.clients.producer.*;
import org.junit.jupiter.api.Test;

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

/**
 * @Author NJUPT wly
 * @Date 2021/8/14 4:12 PM
 * @Version 1.0
 */
public class KafkaProductTest {
    private static final String TOPIC_NAME = "xdclass-sp-topic-1";

    public static Properties getProperties(){

        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"127.0.0.1:9092");
//        properties.put("bootstrap.server","127.0.0.1:9092");
        properties.put("acks","all");
        properties.put("retries",0);
        properties.put("batch",16384);
        properties.put("linger.ms",1);
        properties.put("buffer.memory",33554432);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
        return properties;
    }

    @Test
    public void testSend(){
        Properties properties = getProperties();
        Producer<String,String> producer = new KafkaProducer<String, String>(properties);
        for (int i=0 ; i<3 ; i++){
            Future<RecordMetadata> future = producer.send(new ProducerRecord<>(TOPIC_NAME,"xdclass-key"+i,"xdclass-value"+i));
            try {
                RecordMetadata metadata = future.get();
                System.out.println("Send status"+metadata.toString());
            } catch (ExecutionException | InterruptedException e) {
                e.printStackTrace();
            }
        }
        producer.close();


    }
    @Test
    public void testSendCallBack(){
        Properties properties = getProperties();
        Producer<String,String> producer = new KafkaProducer<String, String>(properties);
        for (int i=0 ; i<3 ; i++){
            Future<RecordMetadata> future = producer.send(new ProducerRecord<>(TOPIC_NAME, "xdclass-key" + i, "xdclass-value" + i), new Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    if (e == null){
                        System.out.println("Send status"+recordMetadata.toString());
                    } else {
                        e.printStackTrace();
                    }
                }
            });
            try {
                RecordMetadata metadata = future.get();
                System.out.println("Send status"+metadata.toString());
            } catch (ExecutionException | InterruptedException e) {
                e.printStackTrace();
            }
        }
        producer.close();
    }
    @Test
    public void testSendPartition(){
        Properties properties = getProperties();
        Producer<String,String> producer = new KafkaProducer<String, String>(properties);
        for (int i=0 ; i<3 ; i++){
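            //send explicitly to partition 4 (the topic above was created with 5 partitions)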
            Future<RecordMetadata> future = producer.send(new ProducerRecord<>(TOPIC_NAME,4,"xdclass-key"+i,"xdclass-value"+i));
            try {
                RecordMetadata metadata = future.get();
                System.out.println("Send status"+metadata.toString());
            } catch (ExecutionException | InterruptedException e) {
                e.printStackTrace();
            }
        }
        producer.close();

    }

    @Test
    public void testSendP(){
        Properties properties = getProperties();
        properties.setProperty(ProducerConfig.PARTITIONER_CLASS_CONFIG,"net.xdclass.kafkatest.config.PartitionerT");
        Producer<String,String> producer = new KafkaProducer<String, String>(properties);
        for (int i=0 ; i<3 ; i++){
            Future<RecordMetadata> future = producer.send(new ProducerRecord<>(TOPIC_NAME, "xdclass" + i, "xdclass-value" + i), new Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    if (e == null){
                        System.out.println("Send status"+recordMetadata.toString());
                    } else {
                        e.printStackTrace();
                    }
                }
            });
            try {
                RecordMetadata metadata = future.get();
                System.out.println("Send status"+metadata.toString());
            } catch (ExecutionException | InterruptedException e) {
                e.printStackTrace();
            }
        }
        producer.close();
    }
}
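
  • The testSendP case above sets PARTITIONER_CLASS_CONFIG to net.xdclass.kafkatest.config.PartitionerT, a custom partitioner whose source is not shown. A minimal sketch of what such a class could look like (the routing rule here is an assumption, not the original author's):

package net.xdclass.kafkatest.config;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

import java.util.Map;

public class PartitionerT implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0; //assumption: keyless records all go to partition 0
        }
        //hash the key bytes over the partition count, like the default partitioner
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}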
  • Configuration file (application.yml)
logging:
  config: classpath:logback.xml

spring:
  kafka:
    bootstrap-servers: 127.0.0.1:9092
    producer:
      retries: 1
      batch-size: 16384
      buffer-memory: 33554432
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      acks: all
      transaction-id-prefix: xdclass-tran-

    consumer:
      auto-commit-interval: 1s
      auto-offset-reset: earliest
      enable-auto-commit: false
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

    listener:
      ack-mode: manual_immediate
      concurrency: 4
  • The classes above use Kafka's native client API. In actual development, Spring Boot's spring-kafka integration (configured by the YAML above) is just as common:
    • Listener / consumer
package net.xdclass.kafkatest;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.stereotype.Component;

/**
 * @Author NJUPT wly
 * @Date 2021/8/16 1:05 PM
 * @Version 1.0
 */
@Component
public class Listener {

    @KafkaListener(topics = {"user.register.topic"},groupId = "xdclass-test-gp2")
    public void onMessage(ConsumerRecord<?,?> record, Acknowledgment ack, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic){
        System.out.println("consumed: "+record.topic()+"-"+record.partition()+"-"+record.value());
        ack.acknowledge();
    }
}
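
  Note: the Acknowledgment parameter can be injected here because ack-mode is set to manual_immediate in the configuration file above; the container then commits the offset as soon as acknowledge() is called.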
    • Controller (sending messages)
package net.xdclass.kafkatest;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

/**
 * @Author NJUPT wly
 * @Date 2021/8/16 10:36 AM
 * @Version 1.0
 */
@RestController
public class UserController {

    private static final String TOPIC_NAME = "user.register.topic";

    @Autowired
    private KafkaTemplate<String,Object> kafkaTemplate;

    @GetMapping("/api/v1/{num}")
    public void sendMessage(@PathVariable("num")String num){
        kafkaTemplate.send(TOPIC_NAME,"This is a message"+num).addCallback(success->{
            assert success != null;
            String topic = success.getRecordMetadata().topic();
            int partition = success.getRecordMetadata().partition();
            long offset = success.getRecordMetadata().offset();
            System.out.println(topic+partition+offset);

        },failure->{
            System.out.println("fail");
        });
    }

    @GetMapping("/api/v1/tran")
    public void sendMessage(int num){
        kafkaTemplate.executeInTransaction(new KafkaOperations.OperationsCallback<String, Object, Object>() {
            @Override
            public Object doInOperations(KafkaOperations<String, Object> kafkaOperations) {
                kafkaOperations.send(TOPIC_NAME,"This is a message"+num);
                return true;
            }
        });
    }
}
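
  Note: executeInTransaction only works because transaction-id-prefix is set in the configuration file above; without it the producer factory is not transactional and the call fails with an IllegalStateException.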
  • Complete pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.5.3</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>net.xdclass</groupId>
	<artifactId>kafkatest</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>kafkatest</name>
	<description>Demo project for Spring Boot</description>
	<properties>
		<java.version>11</java.version>
	</properties>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>
		<dependency>
			<groupId>org.apache.kafka</groupId>
			<artifactId>kafka-clients</artifactId>
			<version>2.8.0</version>
		</dependency>
		<dependency>
			<groupId>org.springframework.kafka</groupId>
			<artifactId>spring-kafka</artifactId>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
				<version>2.5.0</version>
			</plugin>
		</plugins>
	</build>

</project>

Topics: Java Big Data kafka RabbitMQ Concurrent Programming