Message queuing: Pulsar installation and deployment

Posted by studio805 on Thu, 17 Feb 2022 13:42:13 +0100

Note: the hostnames in this article have been obfuscated.

1. Preconditions

Install Java 1.8. ZooKeeper is installed separately; its installation is not covered in detail here.

2. Download

apache-pulsar-2.9.1-bin.tar.gz

https://pulsar.apache.org/en/download/

3. Initialize and create a cluster

Cluster metadata initialization
Tip: the cluster metadata only needs to be written once.
bin/pulsar initialize-cluster-metadata \
  --cluster pulsar-cluster \
  --zookeeper baoding.domain.com:8181 \
  --configuration-store baoding.domain.com:8181 \
  --web-service-url http://suzhou-bigdata01.domain.com:8180/ \
  --web-service-url-tls https://suzhou-bigdata01.domain.com:8443/ \
  --broker-service-url pulsar://suzhou-bigdata01.domain.com:8650/ \
  --broker-service-url-tls pulsar+ssl://suzhou-bigdata01.domain.com:8651/

4. Deploy bookkeeper

Deploy BookKeeper
1) Configure bookies
 vim conf/bookkeeper.conf
# zkServers=zk1:2181,zk2:2181,zk3:2181
zkServers=baoding-bigdata01.domain.com:8181,baoding-bigdata02.domain.com:8181,baoding-bigdata03.domain.com:8181

journalDirectory=/home/disk2/pulsar/data/bookkeeper/journal

ledgerDirectories=/home/disk2/pulsar/data/bookkeeper/ledgers

prometheusStatsHttpPort=8100

 mkdir -p /home/disk2/pulsar/data/bookkeeper/{journal,ledgers}
2) Start bookies
 Bookies can be started in two ways: in the foreground or as a background daemon.
 background:
 bin/pulsar-daemon start bookie

3) Verify
You can verify that the bookies work by running the bookiesanity command in the BookKeeper shell:
bin/bookkeeper shell bookiesanity

FAQ:
1) Startup error: the bookie's cookie "is not matching with" the one stored in ZooKeeper.
Solution: delete the stale cookie znode in ZooKeeper.
Log in with zkCli.sh and run:

delete /ledgers/cookies/192.168.1.1:3181


5. Deploy brokers

1) Broker configuration
vim conf/broker.conf
# Zookeeper quorum connection string
zookeeperServers=baoding-bigdata01.domain.com:8181,baoding-bigdata02.domain.com:8181,baoding-bigdata03.domain.com:8181

# Configuration Store connection string
configurationStoreServers=baoding-bigdata01.domain.com:8181,baoding-bigdata02.domain.com:8181,baoding-bigdata03.domain.com:8181

# Broker data port
brokerServicePort=8650

# Broker data port for TLS - By default TLS is disabled
brokerServicePortTls=8651

# Port to use to serve HTTP requests
webServicePort=8180

# Port to use to serve HTTPS requests - By default TLS is disabled
webServicePortTls=8443

# Hostname or IP address the service binds on, default is 0.0.0.0.
bindAddress=0.0.0.0

# Name of the cluster this broker belongs to
clusterName=pulsar-cluster

### --- Functions --- ###
# Enable Functions Worker Service in Broker
functionsWorkerEnabled=true

2) Start the broker service
background:
bin/pulsar-daemon start broker
foreground:
bin/pulsar broker

6. Client configuration
Note: the machine name configured below is different for each instance of the cluster and should be changed accordingly on every node.
vim conf/client.conf

# webServiceUrl=https://localhost:8443/
webServiceUrl=http://suzhou-bigdata01.domain.com:8180/

# URL for Pulsar Binary Protocol (for produce and consume operations)
# For TLS:
# brokerServiceUrl=pulsar+ssl://localhost:6651/
brokerServiceUrl=pulsar://suzhou-bigdata01.domain.com:8650/

7. View the list of brokers in the cluster
scp the installation to the other nodes, start the bookie and broker on each, then view the broker information for the whole cluster:
bin/pulsar-admin brokers list pulsar-cluster
"suzhou-bigdata01.domain.com:8180"
"suzhou-bigdata02.domain.com:8180"
"suzhou-bigdata03.domain.com:8180"

8. List clusters
Official documentation: https://pulsar.apache.org/docs/en/pulsar-admin/
1) List
bin/pulsar-admin clusters list
"pulsar-cluster"
2) Query cluster configuration
 bin/pulsar-admin clusters get pulsar-cluster
{
  "serviceUrl" : "http://suzhou-bigdata01.domain.com:8180/",
  "serviceUrlTls" : "https://suzhou-bigdata01.domain.com:8443/",
  "brokerServiceUrl" : "pulsar://suzhou-bigdata01.domain.com:8650/",
  "brokerServiceUrlTls" : "pulsar+ssl://suzhou-bigdata01.domain.com:8651/",
  "brokerClientTlsEnabled" : false,
  "tlsAllowInsecureConnection" : false,
  "brokerClientTlsEnabledWithKeyStore" : false,
  "brokerClientTlsTrustStoreType" : "JKS"
}

9. List all topics created under a tenant/namespace
By default, topics are created as non-partitioned persistent topics under the "public" tenant and "default" namespace. You can list all topics created there with the following command:
bin/pulsar-admin topics list public/default

Let's create a new partitioned topic:
$ ./bin/pulsar-admin topics create-partitioned-topic --partitions 3 my-partitioned-topic

To list partitioned topics, use the following command:
./bin/pulsar-admin topics list-partitioned-topics public/default

List all subscriptions for the topic:
$ ./bin/pulsar-admin topics subscriptions persistent://public/default/my-first-topic

Get statistics about a topic
./bin/pulsar-admin topics stats persistent://public/default/my-first-topic
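
The same topic operations are also available through the Java admin API. Below is a minimal sketch, assuming the pulsar-client-admin dependency is on the classpath and that the broker web service is reachable at the webServiceUrl used elsewhere in this article; the class name TopicAdminDemo is purely illustrative.

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.admin.PulsarAdminException;

public class TopicAdminDemo {
    public static void main(String[] args) throws Exception {
        // Connect to the broker's HTTP admin endpoint (same address as webServiceUrl in client.conf).
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://suzhou-bigdata01.domain.com:8180")
                .build();
        try {
            // Equivalent of "topics list public/default".
            admin.topics().getList("public/default").forEach(System.out::println);

            // Equivalent of "topics create-partitioned-topic --partitions 3 my-partitioned-topic".
            admin.topics().createPartitionedTopic("persistent://public/default/my-partitioned-topic", 3);

            // Equivalents of "topics list-partitioned-topics", "topics subscriptions" and "topics stats".
            admin.topics().getPartitionedTopicList("public/default").forEach(System.out::println);
            System.out.println(admin.topics().getSubscriptions("persistent://public/default/my-first-topic"));
            System.out.println(admin.topics().getStats("persistent://public/default/my-first-topic"));
        } catch (PulsarAdminException e) {
            e.printStackTrace();
        } finally {
            admin.close();
        }
    }
}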

10. Verify pub/sub
Eg1: 
1) Simulate producers to send messages
bin/pulsar-client produce persistent://public/default/test -n 1 -m "Hello Pulsar"
which is equivalent to:
bin/pulsar-client produce test -n 1 -m "Hello Pulsar"
2) Listen for messages received by consumers
bin/pulsar-client consume persistent://public/default/test -n 100 -s "consumer-test" -t "Exclusive"

Eg2:
bin/pulsar-client consume my-topic -s "first-subscription"
This consumes messages from the topic "my-topic" with the subscription name "first-subscription" (my-topic is created as a persistent topic under persistent://public/default).

bin/pulsar-client produce my-topic --messages "hello pulsar"
This sends a message to my-topic.

11. Function test
Pulsar Functions is a promising feature: a function receives and processes messages from one topic in real time and publishes the results to another topic, which amounts to lightweight stream processing.
The ./examples directory contains an api-examples.jar package that ships with several Function examples.

1) Deploy
Deployment simply uploads the jar containing the processing logic to the cluster. The command is as follows:
bin/pulsar-admin functions create \
--jar examples/api-examples.jar \
--className org.apache.pulsar.functions.api.examples.ExclamationFunction \
--inputs persistent://public/default/exclamation-input \
--output persistent://public/default/exclamation-output \
--name exclamation

In short, this creates a function from the examples/api-examples.jar file and specifies the class to use (a single jar may contain several functions, so the class name must be given explicitly). The function takes the exclamation-input topic as its input and writes the processed results to exclamation-output; within Pulsar the function is registered under the name exclamation. Note: if the command above fails, try replacing --className with --classname, since the capitalization of this flag differs slightly between Pulsar versions.

For reference, the Java source of ExclamationFunction is shown below. The logic is trivial: it simply appends an exclamation mark to the input.
package org.apache.pulsar.functions.api.examples;
 
import java.util.function.Function;
 
public class ExclamationFunction implements Function<String, String> {
    @Override
    public String apply(String input) {
        return String.format("%s!", input);
    }
}
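
For comparison, the same logic can also be written against the Pulsar Functions SDK interface, which hands the function a Context object (logger, user config, message metadata). This is only a sketch assuming the pulsar-functions-api dependency is available; it is not part of api-examples.jar, and the package and class names are made up.

package com.example.functions;

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class LoggingExclamationFunction implements Function<String, String> {
    @Override
    public String process(String input, Context context) {
        // The Context gives access to a logger, user-defined config and message metadata.
        context.getLogger().info("processing: {}", input);
        return String.format("%s!", input);
    }
}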

2) View the list of deployed functions
bin/pulsar-admin functions list \
--tenant public \
--namespace default

3) Start the consumer and view the real-time processing results
bin/pulsar-client consume persistent://public/default/exclamation-output \
--subscription-name my-subscription \
--num-messages 0 

4) Start the producer to generate input messages for real-time processing
bin/pulsar-client produce persistent://public/default/exclamation-input \
--num-produce 1 \
--messages "Hello world"
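
The same round trip can also be verified with the Java client from section 12 below. A rough sketch (the broker URL matches the one assumed throughout this article; the class name is illustrative):

import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class ExclamationRoundTrip {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://suzhou-bigdata01.domain.com:8650")
                .build();

        // Subscribe to the function's output topic first so the result is not missed.
        Consumer<String> consumer = client.newConsumer(Schema.STRING)
                .topic("persistent://public/default/exclamation-output")
                .subscriptionName("my-subscription")
                .subscribe();

        // Publish one message to the input topic; the function should append "!".
        Producer<String> producer = client.newProducer(Schema.STRING)
                .topic("persistent://public/default/exclamation-input")
                .create();
        producer.send("Hello world");

        Message<String> result = consumer.receive(30, TimeUnit.SECONDS);
        if (result != null) {
            System.out.println("Function output: " + result.getValue()); // expected: "Hello world!"
            consumer.acknowledge(result);
        }

        producer.close();
        consumer.close();
        client.close();
    }
}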

12. Test code

1) pom.xml:

<dependency>
    <groupId>org.apache.pulsar</groupId>
    <artifactId>pulsar-client</artifactId>
    <!--<version>2.9.1</version>-->
    <version>2.8.1</version>
</dependency>

2) Producer:

package com.baidu.matrix.pulsar.demo;

import org.apache.pulsar.client.api.CompressionType;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.MessageRouter;
import org.apache.pulsar.client.api.MessageRoutingMode;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.PulsarClientException;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.TopicMetadata;
import org.apache.pulsar.client.api.TypedMessageBuilder;
import org.apache.pulsar.shade.org.apache.commons.codec.digest.DigestUtils;
import org.apache.pulsar.shade.org.apache.commons.codec.digest.PureJavaCrc32;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

/**
 * @author leh
 * @version 1.0
 * @desc: 
 * @date 2022/2/16 2:24 PM
 * <p>
 * <p>
 * Reference: http://javakk.com/2121.html
 */

/**
 * Producer configuration items:
 *
 * "topicName" : "persistent://public/pulsar-cluster/default/my-topic", //The topic name consists of four parts: [topic type]://[tenant]/[namespace]/[topic]
 * "producerName" : "my-producer", //Producer name
 * "sendTimeoutMs" : 30000, //Send timeout, 30 s by default
 * "blockIfQueueFull" : false, //Whether send operations block when the pending-message queue is full; default false, in which case sends fail immediately once the queue is full
 * "maxPendingMessages" : 1000, //Maximum size of the queue of messages waiting for acknowledgement from the broker; when the queue is full, blockIfQueueFull takes effect
 * "maxPendingMessagesAcrossPartitions" : 50000, //Maximum number of pending messages across all partitions
 * "messageRoutingMode" : "CustomPartition", //Message routing mode: CustomPartition (custom router); RoundRobinPartition (round-robin across partitions); SinglePartition (one randomly chosen partition). Reference: http://pulsar.apache.org/docs/zh-CN/2.2.0/cookbooks-partitioned/
 * "hashingScheme" : "JavaStringHash", //Hashing scheme used to choose which partition a particular message is published to
 * "cryptoFailureAction" : "FAIL", //Action the producer takes when message encryption fails
 * "batchingMaxPublishDelayMicros" : 1000, //Maximum delay for batching outgoing messages; defaults to 1 millisecond when batching is enabled
 * "batchingMaxMessages" : 1000, //Maximum number of messages allowed in a single batch
 * "batchingEnabled" : true, //Controls whether automatic batching of messages is enabled for the producer
 * "compressionType" : "NONE", //Compression type for the producer (e.g. CompressionType.SNAPPY)
 * "initialSequenceId" : null, //Base value for the sequence IDs of messages published by this producer
 * "properties" : { } //Custom properties attached to the producer
 */

public class PulsarProducerDemo {

    public static void main(String[] args) {
        PulsarClient client = null;
        Producer<String> producer = null;

        //  bin/pulsar-client consume my-first-topic -n 10 -s "consumer-test" -t "Exclusive"
        try {
            client = PulsarClient.builder()
                    .serviceUrl("pulsar://suzhou-bigdata01.domain.com:8650,pulsar://suzhou-bigdata02.domain.com:8650,pulsar://suzhou-bigdata03.domain.com:8650")
                    .build();


            // 1. Single byte synchronous send byte []
            /*
            Producer<byte[]> producer1 = client.newProducer()
                    .topic("my-first-topic")
                    .create();

            producer1.send("Hello byte Streams Word!".getBytes());
            */

            // 2. Single String synchronous send String
            /*
            producer = client.newProducer(Schema.STRING)
                    .topic("my-first-topic")
                    .create();

            producer.send("Hello Streams Word!");
            */


            // 3. Asynchronous send (sendAsync)
            /*
            producer = client.newProducer(Schema.STRING)
                    .topic("my-first-topic")
                    .create();

            CompletableFuture<MessageId> future = producer.sendAsync("sendAsync streams processing");
            future.thenAccept(msgId -> {
                System.out.printf("Message with ID %s successfully sent asynchronously\n", msgId);
            });

            // Consumer side:
            // ----- got message -----
            // key:[null], properties:[], content:sendAsync streams processing
            */

            // 4. You can also build messages using the given keys and attributes:
            /*
            producer = client.newProducer(Schema.STRING)
                    .topic("my-first-topic")
                    .create();

            TypedMessageBuilder<String> message = producer.newMessage()
                    .key("my-key")
                    .property("application", "pulsar-java-quickstart")
                    .property("pulsar.client.version", "2.4.1")
                    .value("this message content");
            message.send();
            */

            // Consumer side:
            // ----- got message -----
            // key:[my-key], properties:[application=pulsar-java-quickstart, pulsar.client.version=2.4.1], content:value-message

            // 5. For throughput it is usually best to send messages in batches, which saves network bandwidth. Batching can be enabled when creating the producer.
            producer = client.newProducer(Schema.STRING)
                    .producerName("my-producer") //Producer name
                    .topic("my-first-topic") //topicName consists of four parts [Topic type: / / tenant name / namespace / topic name]
                    .compressionType(CompressionType.SNAPPY)
                    .enableBatching(true)
                    .blockIfQueueFull(true)
                    .batchingMaxPublishDelay(100, TimeUnit.MILLISECONDS)
                    .batchingMaxMessages(10)
                    .maxPendingMessages(512)
                    // Set message sending timeout
                    .sendTimeout(86400, TimeUnit.SECONDS)  //Sending timeout: 30s by default
                    //Set the message routing policy (which partition each message is written to)
                    .messageRoutingMode(MessageRoutingMode.CustomPartition).messageRouter(
                            new MessageRouter() {
                                @Override
                                public int choosePartition(Message<?> message, TopicMetadata metadata) {
                                    return new String(message.getData()).trim().charAt(0) % metadata.numPartitions();
                                }
                            }
                    )
                    // Set properties for producers
                    .property("author", "leh")
                    .create();

            for (int i = 0; i < 100; i++) {
                producer.send("message_" + i);
            }


            System.out.println("send ok!");

            TimeUnit.SECONDS.sleep(5);

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (producer != null) {
                    producer.close();
                }
            } catch (PulsarClientException e) {
                e.printStackTrace();
            }

            try {
                if (client != null) {
                    client.close();
                }
            } catch (PulsarClientException e) {
                e.printStackTrace();
            }


            // The close operation can also be asynchronous:
            if (producer != null) {
                producer.closeAsync()
                        .thenRun(() -> System.out.println("Producer closed"))
                        .exceptionally((ex) -> {
                            System.err.println("Failed to close producer: " + ex);
                            return null;
                        });
            }

        }
    }
}

3) Consumer:

package com.baidu.matrix.pulsar.demo;

import org.apache.pulsar.client.api.BatchReceivePolicy;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageListener;
import org.apache.pulsar.client.api.Messages;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.PulsarClientException;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.SubscriptionInitialPosition;
import org.apache.pulsar.client.api.SubscriptionType;

import java.util.concurrent.TimeUnit;

/**
 * @author leh
 * @version 1.0
 * @desc: 
 * @date 2022/2/16 5:51 PM
 */

/**
 * Reference link
 * http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/PulsarClient.html
 * http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html
 *
 * Client configuration items:
 *
 * "serviceUrl" : "pulsar://localhost:6650", //Broker cluster address; multiple addresses may be given, comma-separated
 * "operationTimeoutMs" : 30000, //Operation timeout setting
 * "statsIntervalSeconds" : 60, //Interval between stats updates (default: 60 seconds); stats are enabled by any positive value and the interval should be at least 1 second
 * "numIoThreads" : 1, //Number of threads used for handling connections to brokers (default: 1 thread)
 * "numListenerThreads" : 1, //Number of threads used for message listeners (default: 1 thread)
 * "connectionsPerBroker" : 1, //Maximum number of connections the client library will open to a single broker
 * "enableTcpNoDelay" : true, //Whether to use TCP no-delay on the connection (default: true). No-delay ensures packets are sent to the network as soon as possible, which is important for low-latency publishing; on the other hand, sending many small packets may limit overall throughput
 * "useTls" : false, //Enable TLS by using "pulsar+ssl://" in the serviceUrl
 * "tlsTrustCertsFilePath" : "", //Path to the trusted TLS certificate file
 * "tlsAllowInsecureConnection" : false, //Whether the Pulsar client accepts untrusted TLS certificates from the broker (default: false)
 * "tlsHostnameVerificationEnable" : false, //Whether to verify the hostname when the client connects to the broker over TLS
 * "concurrentLookupRequest" : 5000, //Number of concurrent lookup requests allowed on each broker connection, to prevent broker overload
 * "maxLookupRequest" : 50000, //Maximum number of lookup requests allowed on each broker connection, to prevent broker overload
 * "maxNumberOfRejectedRequestPerConnection" : 50, //Maximum number of rejected broker requests within a certain period (30 seconds); after that, the current connection is closed and the client opens a new connection so it has a chance to reach a different broker (default: 50)
 * "keepAliveIntervalSeconds" : 30 //Heartbeat (keep-alive) interval in seconds for each client-broker connection
 */


/**
 * Reference link
 * http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder
 *
 * Consumer configuration items:
 *
 * "topicNames" : [ ], //Topics the consumer subscribes to
 * "topicsPattern" : null, //Pattern of topics this consumer subscribes to; it accepts a regular expression that is compiled internally, e.g. "persistent://prop/use/ns-abc/pattern-topic-.*"
 * "subscriptionName" : "my-subscription", //Consumer's subscription name
 * "subscriptionType" : "Exclusive", //Subscription type used when subscribing to a topic: Exclusive; Failover; Shared
 *                      Exclusive subscription: only one consumer at a time may read the topic through the subscription
 *                      Shared subscription: competing consumers may read the topic simultaneously through the same subscription
 *                      Failover subscription: active/standby mode; if the active consumer dies the standby takes over, but there are never two active consumers at the same time
 * "receiverQueueSize" : 3, //Size of the consumer receive queue
 * "acknowledgementsGroupTimeMicros" : 100000, //Group acknowledgements together for the specified time (in microseconds)
 * "maxTotalReceiverQueueSizeAcrossPartitions" : 10, //Maximum total receiver queue size across partitions
 * "consumerName" : "my-consumer", //Consumer's name
 * "ackTimeoutMillis" : 10000, //Timeout for unacknowledged messages
 * "priorityLevel" : 0, //Priority level for shared-subscription consumers; the broker gives them higher priority when dispatching messages
 * "cryptoFailureAction" : "FAIL", //Action the consumer takes when message decryption fails
 * "properties" : { }, //Custom properties attached to the consumer
 * "readCompacted" : false, //If enabled, the consumer reads messages from the compacted topic rather than the topic's full message backlog
 * "subscriptionInitialPosition" : "Latest", //Initial position of the subscription: Earliest starts from the first message, Latest from the most recent one
 * "patternAutoDiscoveryPeriod" : 1, //Topic auto-discovery period when subscribing via topicsPattern
 * "subscriptionTopicsMode" : "PERSISTENT", //Which topics this consumer should subscribe to: persistent topics, non-persistent topics, or both
 * "deadLetterPolicy" : null //Dead-letter policy. Some messages may be redelivered many times; with a dead-letter policy a message has a maximum redelivery count, and once it is exceeded the message is sent to the dead-letter topic and acknowledged automatically (a builder sketch follows the consumer demo below)
 */
public class PulsarConsumerDemo {
    public static void main(String[] args) {
        PulsarClient client = null;
        Consumer<String> consumer = null;

        //  bin/pulsar-client consume my-first-topic -n 10 -s "consumer-test" -t "Exclusive"
        try {

            client = PulsarClient.builder()
                    .serviceUrl("pulsar://suzhou-bigdata01.domain.com:8650,pulsar://suzhou-bigdata02.domain.com:8650,pulsar://suzhou-bigdata03.domain.com:8650")
                    .enableTcpNoDelay(true)
                    .build();


            // 1. Single-record receive
            /*
            consumer = client.newConsumer(Schema.STRING)
                    .topic("my-first-topic")
                    .subscriptionName("my-first-subscription")
                    .subscriptionType(SubscriptionType.Exclusive)
                    .subscribe();

             while (true) {
                // blocks until a message is available
                Message<String> messageObj = consumer.receive();

                try {

                    // Do something with the message
                    System.out.printf("Message1 received: %s\n", messageObj.getValue());
                    System.out.printf("Message2 received: %s\n", new String(messageObj.getData()));

                    // Acknowledge the message so that it can be deleted by the message broker
                    consumer.acknowledge(messageObj);
                } catch (Exception e) {
                    // Message failed to process, redeliver later
                    consumer.negativeAcknowledge(messageObj);
                }
                System.out.println("next...");
            }
            */


            // 2. Batch receive
            consumer = client.newConsumer(Schema.STRING)
                    .topic("my-first-topic")
                    .subscriptionName("my-first-subscription")
                    .subscriptionType(SubscriptionType.Exclusive)
                    .subscriptionInitialPosition(SubscriptionInitialPosition.Latest) // SubscriptionInitialPosition.Earliest
                    .batchReceivePolicy(
                        BatchReceivePolicy.builder()
                        .maxNumMessages(50)
                        .maxNumBytes(5 * 1024 * 1024)
                        .timeout(100, TimeUnit.MILLISECONDS)
                        .build()
                    )
                    .subscribe();

            while (true) {
                // blocks until a message is available
                Messages<String> messages = consumer.batchReceive();

                try {
                    // Do something with the message
                    messages.forEach(messageObj -> {
                        System.out.printf("Message1 received: %s\n", messageObj.getValue());
                        // System.out.printf("Message2 received: %s\n", new String(messageObj.getData()));
                    });

                    // Acknowledge the message so that it can be deleted by the message broker
                    consumer.acknowledge(messages);
                } catch (Exception e) {
                    // Message failed to process, redeliver later
                    consumer.negativeAcknowledge(messages);
                }
                System.out.println("next...");
            }

            // 3. Message listener
            // If you don't want to block your main thread and would rather listen constantly for new messages, consider using a MessageListener
            /*
            consumer = client.newConsumer(Schema.STRING)
                    .topic("my-first-topic")
                    .subscriptionName("my-first-subscription")
                    .subscriptionType(SubscriptionType.Exclusive)
                    .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest) // SubscriptionInitialPosition.Earliest
                    .messageListener(new MessageListener<String>() {
                        @Override
                        public void received(Consumer<String> consumer, Message<String> msg) {
                            try {
                                System.out.println("Message received: " + new String(msg.getData()));
                                consumer.acknowledge(msg);
                            } catch (PulsarClientException e) {
                                e.printStackTrace();
                            }
                        }
                    })
                    .subscribe();
                    */

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (consumer != null) {
                    consumer.close();
                }
            } catch (PulsarClientException e) {
                e.printStackTrace();
            }

            try {
                if (client != null) {
                    client.close();
                }
            } catch (PulsarClientException e) {
                e.printStackTrace();
            }

            // The close operation can also be asynchronous:
            /*if (consumer != null) {
                consumer.closeAsync()
                        .thenRun(() -> System.out.println("Producer closed"))
                        .exceptionally((ex) -> {
                            System.err.println("Failed to close producer: " + ex);
                            return null;
                        });
            }*/
        }
    }
}
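
The client-level and consumer-level options listed in the javadoc comments above map onto builder methods one by one. The following is only a sketch (not part of the demo above), showing a tuned client plus a shared-subscription consumer with a dead-letter policy; the topic, subscription and dead-letter names are illustrative:

import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.DeadLetterPolicy;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.SubscriptionType;

public class TunedClientSketch {
    public static void main(String[] args) throws Exception {
        // Client-level settings corresponding to operationTimeoutMs, numIoThreads,
        // numListenerThreads, connectionsPerBroker and keepAliveIntervalSeconds.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://suzhou-bigdata01.domain.com:8650")
                .operationTimeout(30, TimeUnit.SECONDS)
                .ioThreads(2)
                .listenerThreads(2)
                .connectionsPerBroker(1)
                .keepAliveInterval(30, TimeUnit.SECONDS)
                .build();

        // Consumer-level settings corresponding to receiverQueueSize, ackTimeoutMillis and deadLetterPolicy:
        // after 3 redeliveries the message is routed to the dead-letter topic and acknowledged automatically.
        Consumer<String> consumer = client.newConsumer(Schema.STRING)
                .topic("my-first-topic")
                .subscriptionName("dlq-subscription")
                .subscriptionType(SubscriptionType.Shared)
                .receiverQueueSize(1000)
                .ackTimeout(10, TimeUnit.SECONDS)
                .deadLetterPolicy(DeadLetterPolicy.builder()
                        .maxRedeliverCount(3)
                        .deadLetterTopic("persistent://public/default/my-first-topic-dlq")
                        .build())
                .subscribe();

        // ... receive and acknowledge messages as in PulsarConsumerDemo ...
        consumer.close();
        client.close();
    }
}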

13. Common commands
https://pulsar.apache.org/docs/en/pulsar-admin/
1,clusters
1) List: bin/pulsar-admin clusters list
"pulsar-cluster"
2) View a cluster's configuration: bin/pulsar-admin clusters get pulsar-cluster
3) Create: bin/pulsar-admin clusters create: provisions a new cluster. This operation requires Pulsar super-user privileges.
4) Update: bin/pulsar-admin clusters update: update the configuration of a cluster
5) Delete: bin/pulsar-admin clusters delete [-a] [cluster-name]: delete an existing cluster
6) Others: bin/pulsar-admin clusters get-peer-clusters pulsar-cluster

2,brokers
1) bin/pulsar-admin brokers version
2) View the cluster's broker instances: bin/pulsar-admin brokers list [cluster-name]
eg:
bin/pulsar-admin brokers list pulsar-cluster
"suzhou-bigdata02.domain.com:8180"
"suzhou-bigdata01.domain.com:8180"
"suzhou-bigdata03.domain.com:8180"
3) View the leader broker: bin/pulsar-admin brokers leader-broker
{
  "serviceUrl" : "http://suzhou-bigdata02.domain.com:8180"
}

3, topics
1) Check the topic list under tenant/namespace:
bin/pulsar-admin topics list public/default : Get the list of topics under a namespace.

"persistent://public/default/exclamation-output"
"persistent://public/default/exclamation-input"
"persistent://public/default/my-partitioned-topic-partition-0"
"persistent://public/default/test"
"persistent://public/default/my-topic"
"persistent://public/default/my-first-topic"
"persistent://public/default/my-partitioned-topic-partition-1"
"persistent://public/default/my-partitioned-topic-partition-2"
2) Check partitioned topics:
bin/pulsar-admin topics list-partitioned-topics public/default :  Get the list of partitioned topics under a namespace
3) Delete a topic:
bin/pulsar-admin topics delete -d -f persistent://tenant/namespace/topic : Delete a topic. The topic cannot be deleted if it has any active subscriptions or connected producers (the -f option forces deletion).
eg: bin/pulsar-admin topics delete -d -f persistent://public/default/test
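
Most of these CLI calls have Java admin-API equivalents as well. A brief sketch under the same assumptions as the earlier admin example (pulsar-client-admin on the classpath, broker web service reachable); note that getLeaderBroker() is only available in recent client versions:

import org.apache.pulsar.client.admin.PulsarAdmin;

public class ClusterAdminDemo {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://suzhou-bigdata01.domain.com:8180")
                .build();

        // clusters list / get
        System.out.println(admin.clusters().getClusters());
        System.out.println(admin.clusters().getCluster("pulsar-cluster"));

        // brokers list / leader-broker
        System.out.println(admin.brokers().getActiveBrokers("pulsar-cluster"));
        System.out.println(admin.brokers().getLeaderBroker());

        // topics delete (force deletion even if there are active subscriptions or producers)
        admin.topics().delete("persistent://public/default/test", true);

        admin.close();
    }
}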

Topics: Java Big Data Apache