ActiveMQ Tutorial (2) - Cluster

Posted by Accurax on Sun, 30 Jun 2019 21:27:32 +0200



  • zookeeper
A distributed coordination service framework, used here to schedule multiple activemq instances. If the current master activemq instance goes down, zookeeper automatically elects one of the healthy activemq instances in the cluster as the new master so service continues.
  • activemq
The message queuing framework; we will deploy multiple activemq services.


Suppose we have three servers:

    Server 1: deploy activemq-master and zookeeper
    Server 2: deploy activemq-slave01
    Server 3: deploy activemq-slave02

Cluster building

Configure zookeeper

  • Decompress zookeeper-3.4.9.tar.gz
$ tar -zxvf zookeeper-3.4.9.tar.gz
  • Create the zookeeper configuration file from the provided sample
$ cp zoo_sample.cfg /home/zookeeper-3.4.9/conf/zoo.cfg
  • Configure zookeeper-3.4.9/conf/zoo.cfg as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/zookeeper-3.4.9/data
dataLogDir=/home/zookeeper-3.4.9/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# cluster membership (only needed when running a zookeeper cluster);
# the addresses below are examples, replace them with your own servers
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888

Key configuration options:
dataDir: the directory where zookeeper stores its data snapshots
dataLogDir: the directory where zookeeper stores its transaction log files
clientPort: the port clients connect to
tickTime: the basic time unit, in milliseconds; heartbeats between zookeeper servers, and between clients and servers, are sent every tickTime
initLimit: the maximum number of tickTime intervals a follower server (F) may take to initially connect to and sync with the leader server (L)
syncLimit: the maximum number of tickTime intervals allowed between a request to a follower and the leader's response
server.N=YYY:A:B: a cluster membership entry (server number, server address, LF communication port, election port) written in a special format. N is the server number, YYY is the server's IP address, A is the LF communication port through which the server exchanges information with the cluster leader, and B is the election port used for communication between servers when a new leader is elected (when the leader goes down, the remaining servers talk to each other and choose a new one). Normally port A is the same on every server in the cluster, and likewise port B; only in a pseudo-cluster, where all entries share one IP address, must ports A and B differ between entries.
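When the server.N entries are used, each zookeeper node must also identify itself with a myid file in its dataDir, containing just its own number. A minimal sketch, with an example path:

```shell
# Each node writes its own server number (the N in its server.N line)
# into dataDir/myid. The path below is an example dataDir.
DATA_DIR=/tmp/zk-demo/data
mkdir -p "$DATA_DIR"
echo 1 > "$DATA_DIR/myid"   # use "2" on the second server, "3" on the third
cat "$DATA_DIR/myid"
```

Without a myid file that matches its server.N line, a clustered zookeeper node refuses to start.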

Configure activemq

Next, configure activemq. Following the plan, three broker instances will be configured and handed over to zookeeper for coordination. (The file below is the stock activemq.xml with the persistence adapter swapped for replicatedLevelDB; addresses are examples to substitute with your own.)


<beans
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

    <!-- Allows accessing the server log -->
    <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
          lazy-init="false" scope="singleton"
          init-method="start" destroy-method="stop">
    </bean>

    <!-- The <broker> element is used to configure the ActiveMQ broker. -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker-test" dataDirectory="${activemq.data}">

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" >
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers from blocking producers and affecting
                         other consumers, by limiting the number of messages that
                         are retained -->
                  <pendingMessageLimitStrategy>
                    <constantPendingMessageLimitStrategy limit="1000"/>
                  </pendingMessageLimitStrategy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>

        <!-- The managementContext is used to configure how ActiveMQ is exposed in
             JMX. By default, ActiveMQ uses the MBean server that is started by
             the JVM. -->
        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <!-- Configure message persistence for the broker. The default persistence
             mechanism is the KahaDB store (identified by the kahaDB tag); here it is
             replaced by replicatedLevelDB for the cluster. The addresses below are
             examples; substitute your own. -->
        <persistenceAdapter>
            <!--<kahaDB directory="${activemq.data}/kahadb"/>-->
            <replicatedLevelDB directory="${activemq.data}/kahadb"
                replicas="3"
                bind="tcp://0.0.0.0:62621"
                zkAddress="192.168.1.101:2181"
                zkSessionTimeout="30s"
                zkPassword="password"
                hostname="192.168.1.101"
                zkPath="/activemq/leveldb-stores"
                sync="local_disk"/>
        </persistenceAdapter>

        <!-- The systemUsage controls the maximum amount of space the broker will
             use before disabling caching and/or slowing down producers. -->
        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage percentOfJvmHeap="70" />
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="50 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!-- The transport connectors expose ActiveMQ over a given protocol to
             clients and other brokers. -->
        <transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        </transportConnectors>

        <!-- destroy the spring context on shutdown to stop jetty -->
        <shutdownHooks>
            <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
        </shutdownHooks>

    </broker>

    <!-- Enable web consoles, REST and Ajax APIs and demos.
         The web console requires a login by default; you can disable this in the
         jetty.xml file. Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details -->
    <import resource="jetty.xml"/>

</beans>


Configuration details:
a. Set a uniform brokerName
Every activemq instance in the cluster must be configured with the same brokerName:

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker-test" dataDirectory="${activemq.data}">

b. Configure the persistence adapter
The persistenceAdapter supports three main persistence modes: kahaDB (the default), database persistence, and levelDB (replicated levelDB support was added in activemq v5.9.0). The addresses below are examples; substitute your own:

     <!--<kahaDB directory="${activemq.data}/kahadb"/>-->
    <replicatedLevelDB directory="${activemq.data}/kahadb"
        replicas="3"
        bind="tcp://0.0.0.0:62621"
        zkAddress="192.168.1.101:2181"
        zkSessionTimeout="30s"
        zkPassword="password"
        hostname="192.168.1.101"
        zkPath="/activemq/leveldb-stores"
        sync="local_disk"/>

directory: the path where data is stored
replicas: the number of nodes in the cluster; (replicas/2)+1 gives the minimum number of nodes that must be running normally, so a three-node cluster tolerates one node going down while the other two keep running
bind: when this node becomes master, it binds the configured address and port to carry out the master-slave replication protocol (configure each activemq service with its own ip, and make sure the ports differ if instances share a host)
zkAddress: the ip and port of zookeeper; for a zookeeper cluster, separate the addresses with commas (",")
zkSessionTimeout: the timeout for zookeeper calls
zkPassword: the password for the zookeeper service
hostname: the ip of the machine where this activemq service is deployed
zkPath: the zookeeper path where master-election information is stored
sync: the policy for persisting a message before it is considered stored; if several policies are listed, separated by commas, ActiveMQ chooses the strongest one (given "local_mem,local_disk" it will always store on the local hard disk)
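The (replicas/2)+1 majority rule above can be checked with simple integer arithmetic:

```shell
# Minimum number of live nodes the replicatedLevelDB store needs:
# integer division rounds down, so 3 replicas -> quorum of 2.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3    # prints 2: a 3-node cluster survives one node going down
quorum 5    # prints 3: a 5-node cluster survives two nodes going down
```

This is why a cluster of three brokers keeps accepting messages with one broker down, but stops as soon as two are down.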

c. Configure the transportConnector message port

    <transportConnector name="openwire" uri="tcp://0.0.0.0:51511?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>

Make sure the message port of each activemq service is distinct. For example, the three activemq services can use ports 51511, 51512, and 51513 respectively.
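A quick sanity check that no two brokers share a port, assuming the three configured ports are collected into a list:

```shell
# Compare the count of unique ports against the total count;
# they match only when every broker has its own port.
ports="51511 51512 51513"
unique=$(printf '%s\n' $ports | sort -u | wc -l)
total=$(printf '%s\n' $ports | wc -l)
if [ "$unique" -eq "$total" ]; then
  echo "ports are distinct"
else
  echo "duplicate port detected" >&2
fi
```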

  • Configure the activemq web-console port in /home/apache-activemq-5.9.1-bin/jetty.xml
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
        <property name="host" value="0.0.0.0"/>
        <property name="port" value="8161"/>
</bean>

Change the value of the port property to the port you want each instance to use.

Start up the services

  • Start zookeeper
    The zookeeper service must be started before the activemq services
$ {zookeeper_home}/bin/zkServer.sh start
  • Start activemq
    Then start each activemq service in turn
$ {activemq_home}/bin/activemq start
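Because the brokers need zookeeper at startup, it can help to poll zookeeper's client port before launching activemq. A sketch using bash's /dev/tcp; the host, port, and retry count are examples:

```shell
# Poll a TCP port until it accepts connections, or give up after N tries.
wait_for_port() {
  host=$1; port=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# wait_for_port 127.0.0.1 2181 && {activemq_home}/bin/activemq start
```

If zookeeper is unreachable, the function returns non-zero and the broker is never started.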

Managing activemq

zookeeper's strategy is to let one of the three activemq servers run as master while the other two wait on standby, staying synchronized with the master's data.

So, after starting the zookeeper server and the three activemq servers, open the web console of each instance (http://<server-ip>:8161): only one of the three will respond successfully.

Client connection to activemq

Using Spring to manage the connection factory, the configuration is as follows (the addresses are examples; replace them with the ips of your three servers):

 <bean id="activeMQConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
        <property name="userName" value="admin" />
        <property name="password" value="admin" />
        <property name="brokerURL" value="failover:(tcp://192.168.1.101:51511,tcp://192.168.1.102:51512,tcp://192.168.1.103:51513)" />
 </bean>
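The failover transport also accepts tuning options appended to the URI. For example, randomize=false makes the client try the brokers in the listed order, and timeout bounds how long a send blocks while no broker is reachable (the addresses below are examples):

```xml
<!-- try brokers in order; fail a send after 3s with no broker available -->
<property name="brokerURL"
          value="failover:(tcp://192.168.1.101:51511,tcp://192.168.1.102:51512,tcp://192.168.1.103:51513)?randomize=false&amp;timeout=3000" />
```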

Topics: Zookeeper Apache xml Jetty