Kafka message sending and receiving: a Java example

Posted by shareaweb on Fri, 18 Feb 2022 09:06:03 +0100

The two main producer-side classes are KafkaProducer and ProducerRecord.

KafkaProducer is the class used to send messages; ProducerRecord encapsulates a single Kafka message.

Parameters that must be specified when creating a KafkaProducer, and what they mean:

bootstrap.servers: Configures how the producer establishes its initial connection to the brokers. Only the addresses of a few brokers in the cluster need to be listed here, not all of them; after connecting to one of the listed brokers, the producer discovers the other nodes in the cluster through that connection.

key.serializer: Serializer class for the key of the message to be sent. It can be set either as the class name or as the Class object of that class.

value.serializer: Serializer class for the value of the message to be sent. It can be set either as the class name or as the Class object of that class.

acks: Default: all.

acks=0: the producer does not wait for the broker to acknowledge the message; as soon as the message is placed in the buffer it is considered sent. In this case there is no guarantee that the broker actually received the message, the retries configuration has no effect, and the offset returned for the sent message is always -1.

acks=1: the message only needs to be written to the leader partition, and the broker responds to the client without waiting for confirmation from the replica partitions. In this case, if the leader partition goes down after acknowledging the message and the replica partitions have not yet synchronized it, the message is lost.

acks=all: the leader partition waits for all ISR (in-sync replica) partitions to confirm the record. This guarantees that as long as one ISR replica partition survives, the message is not lost. It is Kafka's strongest reliability guarantee and is equivalent to acks=-1.

retries: When message sending fails, the producer resends the message, just as the client would resend after receiving an error. If retries are enabled and message ordering must be preserved, also set max.in.flight.requests.per.connection=1; otherwise, while a failed message is being retried, later messages may be sent successfully and end up out of order. (Both settings appear in the sketch after this table.)
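A minimal sketch of how these reliability settings might be combined, using the ProducerConfig constants from the Kafka client library. The class name and retry count are illustrative, not part of the original example; the broker address mirrors the examples below:

package com.cc.kafka.demo.producer;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.HashMap;
import java.util.Map;

public class ReliableProducerConfig {
    public static void main(String[] args) {
        Map<String, Object> configs = new HashMap<String, Object>();
        configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.231.128:9092");
        configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
        configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        // Wait for all ISR replica partitions to confirm each record (equivalent to acks=-1)
        configs.put(ProducerConfig.ACKS_CONFIG, "all");
        // Resend automatically on transient errors (3 is an illustrative value)
        configs.put(ProducerConfig.RETRIES_CONFIG, 3);
        // With retries enabled, cap in-flight requests at 1 to preserve message order
        configs.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);

        KafkaProducer<Integer, String> producer = new KafkaProducer<Integer, String>(configs);
        // ... send records as in the producer example below ...
        producer.close();
    }
}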


Producer example:

package com.cc.kafka.demo.producer;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class MyProducer1 {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Map<String, Object> configs = new HashMap<String, Object>();
        // Specify the broker addresses used for the initial connection
        configs.put("bootstrap.servers", "192.168.231.128:9092");
        // Specify the serializer class for message keys
        configs.put("key.serializer", IntegerSerializer.class);
        // Specify the serializer class for message values
        configs.put("value.serializer", StringSerializer.class);

        KafkaProducer<Integer, String> producer = new KafkaProducer<Integer, String>(configs);

        //Used to set custom message headers
        List<Header> headers = new ArrayList<Header>();
        headers.add(new RecordHeader("bizname","cc.kafka".getBytes()));

        // Parameters: topic, partition number, key, value, headers
        ProducerRecord<Integer, String> record = new ProducerRecord<Integer, String>(
                "topic_1",
                0,
                0,
                "hello lagou 01",
                headers
        );
        // Synchronous confirmation of messages
//        final Future<RecordMetadata> future = producer.send(record);
//        final RecordMetadata metadata = future.get();
//        System.out.println("Topic of the message:" + metadata.topic());
//        System.out.println("Partition number of the message:" + metadata.partition());
//        System.out.println("Offset of the message:" + metadata.offset());

        // Asynchronous acknowledgement of messages
        producer.send(record, new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception == null) {
                    System.out.println("Subject of message:" + metadata.topic());
                    System.out.println("Partition number of message:" + metadata.partition());
                    System.out.println("Offset of message:" + metadata.offset());
                } else {
                    System.out.println("Exception message:" + exception.getMessage());
                }
            }
        });

        producer.close();
    }
}
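The synchronous path commented out above blocks on the Future returned by send until the broker responds. A minimal sketch of that variant with the checked exceptions handled in place, assuming the producer and record from the example above:

        // Synchronous send: block on the Future until the broker responds
        try {
            final Future<RecordMetadata> future = producer.send(record);
            final RecordMetadata metadata = future.get(); // throws if the send failed
            System.out.println("Topic of the message:" + metadata.topic());
            System.out.println("Partition number of the message:" + metadata.partition());
            System.out.println("Offset of the message:" + metadata.offset());
        } catch (InterruptedException | ExecutionException e) {
            System.out.println("Exception message:" + e.getMessage());
        }

With synchronous confirmation the caller learns the result of each send before continuing, at the cost of throughput; the asynchronous callback in the example trades that certainty for pipelined sends.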


Consumer example:

package com.cc.kafka.demo.consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class MyConsumer1 {
    public static void main(String[] args) {
        Map<String, Object> configs = new HashMap<String, Object>();
        // Specify the broker addresses used for the initial connection
        configs.put("bootstrap.servers", "192.168.231.128:9092");
        // Specify the deserializer class for message keys
        configs.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        // Specify the deserializer class for message values
        configs.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        // Configure consumer group ID
        configs.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer_demo1");
        // earliest: if no valid offset can be found for this consumer group, reset to the beginning of the partition
        // (latest would instead reset to the end, i.e. the newest message offset)
        configs.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        KafkaConsumer<Integer, String> consumer = new KafkaConsumer<Integer, String>(configs);

        // Subscribe first and then consume
        consumer.subscribe(Arrays.asList("topic_1"));

        // If the topic currently has no consumable messages, the poll call can be placed in a
        // while loop that pulls again every 3 seconds, so the loop does not issue poll calls
        // too densely (see the sketch after this example)

        // Pull a batch of messages from the topic partitions
        final ConsumerRecords<Integer, String> consumerRecords = consumer.poll(Duration.ofSeconds(3));

        // Traverse the batch messages pulled from the topic partition this time
        consumerRecords.forEach(new Consumer<ConsumerRecord<Integer, String>>() {
            @Override
            public void accept(ConsumerRecord<Integer, String> record) {
                System.out.println(record.topic() + "\t"
                        + record.partition() + "\t"
                        + record.offset() + "\t"
                        + record.key() + "\t"
                        + record.value());
            }
        });

        consumer.close();
    }
}
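As the comments above suggest, a long-running consumer wraps poll in a loop instead of calling it once. A minimal sketch of such a loop, which also reads back the custom bizname header set by the producer; it assumes the consumer configured above plus an extra import of org.apache.kafka.common.header.Header, and leaves the shutdown condition to the application:

        consumer.subscribe(Arrays.asList("topic_1"));
        try {
            while (true) { // replace with a real shutdown condition in production
                // Pull for up to 3 seconds; an empty batch simply loops again
                ConsumerRecords<Integer, String> records = consumer.poll(Duration.ofSeconds(3));
                for (ConsumerRecord<Integer, String> record : records) {
                    // Read back the custom header set by the producer, if present
                    for (Header header : record.headers()) {
                        System.out.println(header.key() + " = " + new String(header.value()));
                    }
                    System.out.println(record.topic() + "\t" + record.offset() + "\t" + record.value());
                }
            }
        } finally {
            consumer.close();
        }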

Topics: kafka