[original by HAVENT] Spring Boot + Spring Kafka asynchronous configuration

Posted by manchesterkid on Mon, 06 Jan 2020 09:51:35 +0100

Recently our project team adopted Kafka to manage the system logs centrally, but the Kafka cluster (3 servers) was taken down by an unexpected failure, which felt like hitting the jackpot. Every service that used Kafka to send log messages then got stuck. After investigation, we found that the Kafka crash caused every call to the log-sending code to block.

When we finally checked the code, we found that if the Kafka service cannot be reached, each send blocks for one minute (the default max.block.ms). There are two ways to solve this problem:

1. Turn on asynchronous mode (@EnableAsync)

@EnableAsync
@Configuration
public class KafkaProducerConfig {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaProducerConfig.class);

    @Value("${kafka.brokers}")
    private String servers;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Note: the value serializer must be a Serializer class, not JsonDeserializer
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, GenericMessage> producerFactory(ObjectMapper objectMapper) {
        return new DefaultKafkaProducerFactory<>(producerConfigs(), new StringSerializer(), new JsonSerializer(objectMapper));
    }

    @Bean
    public KafkaTemplate<String, GenericMessage> kafkaTemplate(ObjectMapper objectMapper) {
        return new KafkaTemplate<String, GenericMessage>(producerFactory(objectMapper));
    }

    @Bean
    public Producer producer() {
        return new Producer();
    }
}
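
A note that is not in the original post: with @EnableAsync, the @Async send() below runs on whatever TaskExecutor Spring resolves, and depending on the Spring / Spring Boot version the default may create a new thread per call. Below is a minimal sketch of an explicit executor bean that could be added to the configuration class above; the pool sizes are illustrative assumptions only.

    // org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor
    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        // Picked up by @Async methods instead of the framework default executor
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);       // illustrative values, tune for your log volume
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(500);
        executor.setThreadNamePrefix("kafka-log-");
        return executor;
    }
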
public class Producer {

    public static final Logger LOGGER = LoggerFactory.getLogger(Producer.class);

    @Autowired
    private KafkaTemplate<String, GenericMessage> kafkaTemplate;

    @Async
    public void send(String topic, GenericMessage message) {
        ListenableFuture<SendResult<String, GenericMessage>> future = kafkaTemplate.send(topic, message);
        future.addCallback(new ListenableFutureCallback<SendResult<String, GenericMessage>>() {

            @Override
            public void onSuccess(final SendResult<String, GenericMessage> result) {
                LOGGER.info("sent message= " + message + " with offset= " + result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(final Throwable throwable) {
                LOGGER.error("unable to send message= " + message, throwable);
            }
        });
    }
}
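
For completeness, here is a minimal sketch of a caller; the LogService class and the "system-log" topic name are hypothetical. Because send() is annotated with @Async, the call returns immediately, and any blocking on an unreachable broker happens on the executor thread rather than the calling thread.

@Service
public class LogService {

    @Autowired
    private Producer producer;

    public void logEvent(GenericMessage event) {
        // Returns immediately; the actual Kafka send runs on the async executor thread
        producer.send("system-log", event);
    }
}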

 

2. If you stay with synchronous mode, you can reduce the blocking time by lowering the configuration parameter ProducerConfig.MAX_BLOCK_MS_CONFIG (max.block.ms, default 60 s)

package com.havent.demo.logger.config;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.scheduling.annotation.EnableAsync;

import java.util.HashMap;
import java.util.Map;

@EnableAsync
@Configuration
@EnableKafka
public class KafkaConfiguration {
    @Value("${spring.kafka.producer.bootstrap-servers}")
    private String serverAddress;
    
    public Map<String, Object> producerConfigs() {
        System.out.println("HH > serverAddress: " + serverAddress);

        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, serverAddress);

        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        // If a request fails, the producer can retry automatically; we set 0 retries here. Enabling retries introduces the possibility of duplicate messages
        props.put(ProducerConfig.RETRIES_CONFIG, 0);

        // The producer groups records into batches to reduce the number of requests; this is the size (in bytes) of each batch
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 4096);

        /**
         * This tells the producer to wait a short time before sending a request, in the hope that more messages arrive to fill a batch that is not yet full.
         * It is similar to Nagle's algorithm in TCP: with a linger of 1 millisecond, for example, perhaps 100 messages end up in a single request,
         * at the cost of up to 1 millisecond of extra latency while waiting for more messages. Note that under high load messages tend to form
         * batches anyway, even with linger.ms=0; under low load, a value greater than 0 trades a small amount of latency for fewer, larger and
         * more efficient requests.
         */
        props.put(ProducerConfig.LINGER_MS_CONFIG, 2000);

        /**
         * Controls the total amount of memory the producer can use for buffering. If messages are produced faster than they can be sent to the server,
         * this buffer fills up. Once it is exhausted, further send calls block; the maximum blocking time is bounded by max.block.ms,
         * after which a TimeoutException is thrown.
         */
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 40960);

        // Maximum time that send() or partitionsFor() may block while waiting for metadata (e.g. the partition leader). The default is 60 seconds
        // HH warning: if Kafka cannot be reached, the program will hang for this long, so do not set the wait too high
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 100);


        // Maximum time to wait for the broker to respond to a request (request.timeout.ms)
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 100);

        // 0: do not wait for any acknowledgement, just send; lowest latency, but messages can be lost if the server fails (fire-and-forget)
        // 1: send the message and wait for the partition leader to acknowledge it; moderate reliability
        // -1 (all): send the message and wait until the leader and its in-sync replicas have acknowledged it; highest reliability
        props.put(ProducerConfig.ACKS_CONFIG, "0");

        System.out.println(props);
        return props;
    }

    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        KafkaTemplate<String, String> kafkaTemplate = new KafkaTemplate<>(producerFactory());
        return kafkaTemplate;
    }
    
}
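
If a caller really does need to wait for the send result, a hedged alternative (not from the original post) is to put an explicit bound on the wait via the future returned by KafkaTemplate.send(); the topic name and the 200 ms timeout below are illustrative. Note that send() itself can still block for up to max.block.ms while fetching metadata, which is exactly why that value is lowered above.

    public void sendWithTimeout(KafkaTemplate<String, String> kafkaTemplate, String message) {
        try {
            // Wait at most 200 ms for the broker acknowledgement instead of blocking indefinitely
            kafkaTemplate.send("system-log", message).get(200, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (ExecutionException | TimeoutException e) {
            // Log and move on rather than stalling the calling thread
            LoggerFactory.getLogger(getClass()).warn("Kafka send failed or timed out", e);
        }
    }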

 

I dedicate this to those fellow developers who are trapped by Spring Kafka's synchronous mode and see no way out...

Topics: kafka Apache Java Spring