>* Original content from the GreatSQL community may not be used without authorization. For reprints, please contact the editor and credit the source.
- 1. Installation Preparation
- 2. Initialize MySQL Server
- 3. Initialize the first MGR node
- 4. Configure the other two nodes
- 5. Write data to the MGR cluster
- Disclaimer
This article describes how to build a three-node MGR cluster with GreatSQL 8.0.25.
1. Installation Preparation
Prepare the following three servers:
| IP | Port | Role |
|---|---|---|
| 172.16.16.10 | 3306 | mgr1 |
| 172.16.16.11 | 3306 | mgr2 |
| 172.16.16.12 | 3306 | mgr3 |
Ensure that the three nodes can reach each other over the network, and that no firewall rules block ports 3306 and 33061.
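If the nodes run firewalld (the CentOS default), one way to open these ports is sketched below; this is only an example under that assumption, so adapt it to whatever firewall you actually use:

```
# Run on each node: open the MySQL service port and the MGR communication port
$ firewall-cmd --permanent --add-port=3306/tcp
$ firewall-cmd --permanent --add-port=33061/tcp
$ firewall-cmd --reload
```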
Download the GreatSQL binary package from: https://gitee.com/GreatSQL/GreatSQL/releases .
This article uses a CentOS x86_64 environment as an example. The downloaded binary package is named GreatSQL-8.0.25-15-Linux-glibc2.28-x86_64.tar.xz; place it in the /usr/local directory and extract it:
```
$ cd /usr/local
$ tar xf GreatSQL-8.0.25-15-Linux-glibc2.28-x86_64.tar.xz
$ cd GreatSQL-8.0.25-15-Linux-glibc2.28-x86_64
$ ls
bin    COPYING-jemalloc  include  LICENSE         LICENSE-test  mysqlrouter-log-rotate  README.router  run    support-files
cmake  docs              lib      LICENSE.router  man           README                  README-test    share  var
```
2. Initialize MySQL Server
First, prepare the /etc/my.cnf configuration file:
```
#/etc/my.cnf
[mysqld]
user = mysql
basedir=/usr/local/GreatSQL-8.0.25-15-Linux-glibc2.28-x86_64
datadir=/data/GreatSQL
port=3306
server_id=103306
log-bin
log_slave_updates=1
gtid_mode=ON
enforce_gtid_consistency=ON
```
This article only aims to get MySQL Server started and MGR deployed, so this configuration file is deliberately minimal. For production use, refer to this configuration file instead.
Then initialize MySQL Server:
```
$ mkdir -p /data/GreatSQL && chown -R mysql:mysql /data/GreatSQL
$ /usr/local/GreatSQL-8.0.25-15-Linux-glibc2.28-x86_64/bin/mysqld --defaults-file=/etc/my.cnf --initialize-insecure
```
Note: do not use the --initialize-insecure option to initialize in a production environment. With it, the root superuser account gets an empty password by default, and anyone can log in to the database with that account, which is a security risk. It is used here only for demonstration convenience.
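If you do initialize with --initialize-insecure, it is wise to set a root password immediately after the first login. The password below is only a placeholder:

```
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'YourStrongPassword!';
```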
Start MySQL Server:
```
$ /usr/local/GreatSQL-8.0.25-15-Linux-glibc2.28-x86_64/bin/mysqld --defaults-file=/etc/my.cnf &
```
If all goes well, MySQL Server will start normally. Initialize the other two nodes in the same way.
In addition, it is recommended to add GreatSQL to the system's systemd services for easier management. For the specific steps, refer to this article: Add GreatSQL to the systemd service of the system.
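For reference, a minimal unit file could look like the sketch below. The unit name, paths, and options here are assumptions for illustration; follow the linked article for the recommended setup:

```
# /usr/lib/systemd/system/greatsql.service -- hypothetical minimal sketch
[Unit]
Description=GreatSQL Server
After=network.target

[Service]
User=mysql
Group=mysql
ExecStart=/usr/local/GreatSQL-8.0.25-15-Linux-glibc2.28-x86_64/bin/mysqld --defaults-file=/etc/my.cnf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After installing the unit file, something like `systemctl enable --now greatsql` would start the service and enable it at boot.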
3. Initialize the first MGR node
Next, initialize the first MGR node, also called the bootstrap node.
Modify /etc/my.cnf and add the following MGR-related configuration options:
```
plugin_load_add='group_replication.so'
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaa1"
group_replication_local_address="172.16.16.10:33061"
group_replication_group_seeds="172.16.16.10:33061,172.16.16.11:33061,172.16.16.12:33061"
report-host=172.16.16.10
```
The report_host option reports this node's address to the other MGR nodes. Without it, when a server has multiple host names, the members may fail to resolve each other's addresses correctly and MGR cannot start. With report_host set, there is also no need to add each node's address and host name to the /etc/hosts file.
Also note that the port configured above is 33061, not 3306: it is a dedicated communication port for the MGR service, separate from MySQL's normal read-write service port. The port number can be customized (e.g. 12345); just make sure it is not blocked by the firewall.
Restart MySQL Server with this configuration file, and you should see that the group_replication plugin has been loaded successfully:
```
mysql> show plugins;
...
+---------------------------------+----------+--------------------+----------------------+---------+
| Name                            | Status   | Type               | Library              | License |
+---------------------------------+----------+--------------------+----------------------+---------+
...
| group_replication               | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
...
```
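You can also confirm that the MGR options from the configuration file were picked up:

```
mysql> SHOW GLOBAL VARIABLES LIKE 'group_replication%';
```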
If it was not loaded correctly, you can also log in to MySQL Server and load the plugin manually:
```
mysql> install plugin group_replication soname 'group_replication.so';
```
Next, create a dedicated account for the MGR service and configure the MGR service channel:
```
# Each node creates this account itself, so don't write these statements to the
# binlog (they must not be replicated to the other nodes)
mysql> set session sql_log_bin=0;
mysql> create user repl@'%' identified by 'repl';
mysql> GRANT BACKUP_ADMIN, REPLICATION SLAVE ON *.* TO `repl`@`%`;
# After creating the user, re-enable binlog logging
mysql> set session sql_log_bin=1;
# Configure the MGR service channel
# The channel name group_replication_recovery is fixed and cannot be changed
mysql> CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';
```
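Incidentally, MySQL 8.0.23 and later deprecate CHANGE MASTER TO in favor of CHANGE REPLICATION SOURCE TO, so on GreatSQL 8.0.25 the equivalent newer syntax should also work:

```
mysql> CHANGE REPLICATION SOURCE TO SOURCE_USER='repl', SOURCE_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';
```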
Then execute the following commands to set this node as the MGR bootstrap node (only the first node needs this) and start the MGR service directly:
```
mysql> set global group_replication_bootstrap_group=ON;
mysql> start group_replication;
```
Reminder: whenever the entire MGR cluster is restarted, the first node to start must likewise be set to bootstrap mode before starting the others. Do not enable bootstrap mode in any other situation.
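Once the group is up, it is also good practice to turn the bootstrap flag back off, so that a later restart of this node cannot accidentally bootstrap a second, separate group:

```
mysql> set global group_replication_bootstrap_group=OFF;
```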
Then, check the MGR service status:
```
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 4ebd3504-11d9-11ec-8f92-70b5e873a570 | 172.16.16.10 |        3306 | ONLINE       | PRIMARY     | 8.0.25         |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
```
OK, the initialization of the first node is completed.
4. Configure the other two nodes
Continue with the following /etc/my.cnf configuration template:
```
#my.cnf
[mysqld]
user = mysql
basedir=/usr/local/GreatSQL-8.0.25-15-Linux-glibc2.28-x86_64
datadir=/data/GreatSQL
port=3306
server_id=113306
log-bin
log_slave_updates=1
gtid_mode=ON
enforce_gtid_consistency=ON

#mgr
plugin_load_add='group_replication.so'
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaa1"
group_replication_local_address="172.16.16.11:33061"
group_replication_group_seeds="172.16.16.10:33061,172.16.16.11:33061,172.16.16.12:33061"
report-host=172.16.16.11
```
Reminder: among the options above, server_id, group_replication_local_address, and report_host must be changed to the correct values for each node. Within an MGR cluster, each node's server_id and server_uuid must be unique, but the value of group_replication_group_name must be the same on all nodes: it is the unique identifier of the MGR cluster.
After restarting the MySQL Server instance (report_host is read-only and takes effect only after a restart), create the dedicated MGR account and configure the MGR service channel as before:
```
mysql> set session sql_log_bin=0;
mysql> create user repl@'%' identified by 'repl';
mysql> GRANT BACKUP_ADMIN, REPLICATION SLAVE ON *.* TO `repl`@`%`;
mysql> set session sql_log_bin=1;
mysql> CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';
```
Next, start the MGR service directly (nodes other than the first do not need bootstrap mode):
```
mysql> start group_replication;
```
Check MGR node status again:
```
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 4ebd3504-11d9-11ec-8f92-70b5e873a570 | 172.16.16.10 |        3306 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | 549b92bf-11d9-11ec-88e1-70b5e873a570 | 172.16.16.11 |        3306 | ONLINE       | SECONDARY   | 8.0.25         |
| group_replication_applier | 5596116c-11d9-11ec-8624-70b5e873a570 | 172.16.16.12 |        3306 | ONLINE       | SECONDARY   | 8.0.25         |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
```
As shown above, the cluster now has three nodes in the ONLINE state: 172.16.16.10 is the PRIMARY node and the other two are SECONDARY nodes. In other words, the cluster is running in single-primary mode. In multi-primary mode, all nodes have the PRIMARY role.
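As a side note, on MySQL 8.0.13 and later the mode can be checked and even switched online with built-in functions. A sketch, using the PRIMARY's member_id from the output above:

```
-- Check whether single-primary mode is enabled
mysql> SELECT @@group_replication_single_primary_mode;
-- Switch the whole group to multi-primary mode
mysql> SELECT group_replication_switch_to_multi_primary_mode();
-- Switch back to single-primary mode, electing the given member as PRIMARY
mysql> SELECT group_replication_switch_to_single_primary_mode('4ebd3504-11d9-11ec-8f92-70b5e873a570');
```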
5. Write data to the MGR cluster
Next, connect to the PRIMARY node, create a test database and table, and write some data:
```
$ mysql -h172.16.16.10 -uroot -Spath/mysql.sock
mysql> create database mgr;
mysql> use mgr;
mysql> create table t1(c1 int unsigned not null primary key);
mysql> insert into t1 select rand()*10240;
mysql> select * from t1;
+------+
| c1   |
+------+
| 8078 |
+------+
```
Then connect to one of the SECONDARY nodes and check whether the data just written on the PRIMARY is visible:
```
$ mysql -h172.16.16.11 -uroot -Spath/mysql.sock
mysql> use mgr;
mysql> select * from t1;
+------+
| c1   |
+------+
| 8078 |
+------+
```
The data reads back correctly, confirming that replication is working.
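Conversely, in single-primary mode the SECONDARY nodes run with super_read_only enabled, so attempting a write there should fail with an error similar to:

```
mysql> insert into t1 select rand()*10240;
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
```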
At this point, the installation and deployment of the three-node MGR cluster is complete.
Disclaimer
Due to the author's limited expertise, errors and omissions in this column are inevitable. Do not copy the commands and methods in this document directly into an online production environment. Readers must fully understand and verify them in a test environment before any formal implementation, to avoid damage to the production environment.