Deployment planning
This article covers a multi-node deployment of an OceanBase cluster: from the central control machine, log in to each OceanBase node remotely to deploy and start the observer and obproxy processes.
Since there are exactly seven physical machines on hand (one central control machine plus six database nodes), OBD will deploy an OceanBase cluster with a 2-2-2 architecture (two observers in each of three zones).
- The machine information is as follows:
Machine type | Host information |
---|---|
IP | 10.144.2.112, 10.144.2.111, 10.144.2.110, 10.144.2.109, 10.144.2.108, 10.144.2.107, 10.144.2.106 |
Adapter name | bond0 |
OS | CentOS Linux release 7.9.2009 (Core) |
CPU | 64 cores |
Memory | 256G (230G+ available) |
Disk 1 | /data/1 |
Disk 2 | /data/2 |
- The machines are divided as follows:
role | machine | remarks |
---|---|---|
OBD | 10.144.2.112 | Central control machine; runs the automated deployment software |
OBSERVER | 10.144.2.111 | OceanBase database, zone1 |
10.144.2.110 | OceanBase database, zone2 | |
10.144.2.109 | OceanBase database, zone3 | |
10.144.2.108 | OceanBase database, zone1 | |
10.144.2.107 | OceanBase database, zone2 | |
10.144.2.106 | OceanBase database, zone3 | |
OBPROXY | 10.144.2.111 | OceanBase access reverse proxy |
10.144.2.110 | OceanBase access reverse proxy | |
10.144.2.109 | OceanBase access reverse proxy | |
10.144.2.108 | OceanBase access reverse proxy | |
10.144.2.107 | OceanBase access reverse proxy | |
10.144.2.106 | OceanBase access reverse proxy | |
OBCLIENT | 10.144.2.112 | OceanBase command line client |
Environment settings
Kernel parameter modification
Modify the configuration file: vi /etc/sysctl.conf
Add the following content:
net.core.somaxconn = 2048
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.ip_local_port_range = 3500 65535
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_slow_start_after_idle = 0
vm.swappiness = 0
vm.min_free_kbytes = 2097152
vm.max_map_count = 655360
fs.aio-max-nr = 1048576
Make the configuration take effect:
sysctl -p
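To spot-check that the parameters are active, you can read a few of them back with `sysctl -n` (the keys below are a sample taken from the sysctl.conf content above):

```shell
# Read back a sample of the kernel parameters set above.
for key in net.core.somaxconn vm.max_map_count fs.aio-max-nr; do
    printf '%s = %s\n' "$key" "$(sysctl -n "$key" 2>/dev/null || echo unavailable)"
done
```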
Session variable modification
The limits relevant to the OceanBase database process include the maximum thread stack size, the maximum number of open files, and the core file size.
Modify the configuration file: vi /etc/security/limits.conf
- Modify at the global level. Note that after the change, logged-in sessions must log out and log in again for it to take effect.
Set the maximum stack size at the session level to unlimited, the maximum number of file handles to 655360, and the core file size to unlimited. Update the /etc/security/limits.conf configuration file if its existing values are lower than these:
* soft nofile 655360
* hard nofile 655360
* soft nproc 655360
* hard nproc 655360
* soft core unlimited
* hard core unlimited
* soft stack unlimited
* hard stack unlimited
To verify the configuration, exit the current session, log in again, and execute the following command to check whether it has taken effect:
ulimit -a
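To check just the three limits this section sets, rather than scanning the full `ulimit -a` output:

```shell
# Spot-check the limits OceanBase cares about in the current session.
printf 'open files : %s\n' "$(ulimit -n)"   # should be 655360 after the change
printf 'stack size : %s\n' "$(ulimit -s)"   # should be unlimited
printf 'core size  : %s\n' "$(ulimit -c)"   # should be unlimited
```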
Disable SELinux
Modify the SELINUX option in the SELinux configuration file. Note: the value must be one of the three listed in the comments. If it is misspelled, the operating system will fail to boot after a restart, and you will have to enter single-user mode to fix it.
Modify the configuration file: vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
Modifying the configuration file only takes effect after the host restarts. Use the following command to make it take effect immediately:
setenforce 0
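You can verify the current mode with `getenforce`; it should print Permissive right after `setenforce 0`, and Disabled after a reboot with the config above:

```shell
# Print the active SELinux mode.
if command -v getenforce >/dev/null 2>&1; then
    getenforce
else
    echo "getenforce not available (SELinux tools not installed)"
fi
```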
Turn off the firewall
View firewall status
systemctl status firewalld
If it is inactive, nothing needs to be done. If it is active, disable it permanently:
systemctl disable firewalld
systemctl stop firewalld
systemctl status firewalld
Check whether ntpd is being used for time synchronization:
systemctl status ntpd
Unit ntpd.service could not be found.
If the above message is shown, ntpd is not in use. If an ntpd service does exist, uninstall the ntpd package and use the chrony service instead.
Check the chrony service
- View time synchronization activity:
chronyc activity
- View the time servers:
chronyc sources
- View the synchronization status:
chronyc sources -v
- View time server calibration details:
chronyc tracking
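The checks above can also be run in one pass (a sketch, assuming chronyc is installed):

```shell
# Run each chronyc check in sequence, labelling the output.
if command -v chronyc >/dev/null 2>&1; then
    for sub in activity sources tracking; do
        echo "== chronyc $sub =="
        chronyc $sub || echo "(chronyd not running)"
    done
else
    echo "chronyc is not installed"
fi
```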
Install the chrony service
If chrony is not installed, you can install it with YUM, or download and install the corresponding RPM package.
My servers already have it installed; the following installation method comes from the community tutorial and is for reference only:
yum -y install chrony
chrony configuration description
The chrony service daemon is chronyd; chronyc is a command-line tool used to monitor the performance and configuration parameters of chronyd. The main configuration file of chrony is /etc/chrony.conf. Configure it as follows:
vi /etc/chrony.conf
# "server" is followed by a time synchronization server.
# Use the pool.ntp.org project servers. Depending on the configuration, you can
# theoretically add as many servers as you want.
# Or use Alibaba Cloud's NTP servers.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server ntp.cloud.aliyuncs.com minpoll 4 maxpoll 10 iburst
server ntp.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp1.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp1.cloud.aliyuncs.com minpoll 4 maxpoll 10 iburst
server ntp10.cloud.aliyuncs.com minpoll 4 maxpoll 10 iburst

# If there is no time synchronization server in the test environment, pick one
# machine to act as the time synchronization server.
# To use the local clock, uncomment the following server line.
#server 127.127.1.0

# Based on actual time, compute the rate at which the server gains or loses
# time and record it in a file, so the best time compensation can be applied
# after the system restarts.
driftfile /var/lib/chrony/drift

# chronyd slows down or speeds up the clock as needed to adjust the time.
# In some cases the system clock may drift so far that the adjustment would
# take too long. This directive forces chronyd to step the system clock when
# the offset exceeds a threshold, but only during the first few clock updates
# after chronyd starts (a negative value disables the limit).
makestep 1.0 3

# Enable a kernel mode in which the system time is copied to the real-time
# clock (RTC) every 11 minutes.
rtcsync

# Enable hardware timestamping on all interfaces that support it,
# using the hwtimestamp directive.
#hwtimestamp eth0
#hwtimestamp eth1
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Specify a host, subnet, or network that is allowed or denied NTP access
# to this machine acting as a clock server.
#allow 192.168.0.0/16
#deny 192.168/16

# Serve time even if not synchronized to a time source.
local stratum 10

# Specify the file that contains the NTP authentication keys.
#keyfile /etc/chrony.keys

# Specify the directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
The simplest configuration file is as follows:
server 127.127.1.0
allow 10.144.0.0/16
local stratum 10
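After editing /etc/chrony.conf, restart the service so the new settings load, then confirm a source is selected (a sketch; run as root, and note that a selected source is marked with `^*` in the `chronyc sources` output):

```shell
# Apply the new chrony configuration and verify synchronization.
if command -v systemctl >/dev/null 2>&1 && command -v chronyc >/dev/null 2>&1; then
    systemctl enable chronyd 2>/dev/null
    systemctl restart chronyd 2>/dev/null
    sleep 2
    chronyc sources || echo "chronyd is not running yet"
else
    echo "systemctl/chronyc not available on this host"
fi
```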
Configure the installation user
- Add an ordinary user:
groupadd -g 2000 admin
useradd -u 2000 -g 2000 admin
- Set the password for admin:
passwd admin
Enter the password when prompted.
- Grant sudo permission to admin:
[root@localhost ~]# visudo
Locate the following content:
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
Add the following:
admin ALL=(ALL) ALL
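To confirm the grant without logging out (a sketch; `sudo -l -U` lists a user's privileges and needs to be run as root):

```shell
# List admin's sudo privileges; the output should contain "(ALL) ALL".
if command -v sudo >/dev/null 2>&1; then
    sudo -n -l -U admin 2>/dev/null || echo "re-run as root, or as admin run: sudo -v"
else
    echo "sudo is not installed"
fi
```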
Initialize directories
View disk partition
[root@localhost ~]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   1.8T  0 disk
sdb               8:16   0   1.8T  0 disk
sdc               8:32   0 446.1G  0 disk
├─sdc1            8:33   0     1G  0 part /boot/efi
├─sdc2            8:34   0     1G  0 part /boot
└─sdc3            8:35   0 444.1G  0 part
  ├─vg00-lvroot 253:0    0   100G  0 lvm  /
  └─vg00-lvswap 253:1    0    16G  0 lvm  [SWAP]
loop0             7:0    0   387M  0 loop /usr/hxm/loadrunner
View disk partitions and mounts
[root@localhost ~]# lsblk -f
NAME            FSTYPE      LABEL       UUID                                   MOUNTPOINT
sda             LVM2_member             4bEcx7-m6bx-mL41-vKEO-DvOC-zEgE-H32Add
sdb
sdc
├─sdc1          vfat                    0320-6ED4                              /boot/efi
├─sdc2          xfs                     93739afe-f19e-436e-aefb-be9522931fdf   /boot
└─sdc3          LVM2_member             22LfZ1-VwqX-MMKK-97L0-1l0T-iDlG-YPXHZ5
  ├─vg00-lvroot xfs         lvroot      c2918725-45ec-4415-a7c1-d10598630178   /
  └─vg00-lvswap swap        lvswap      b9ce8c9c-03b2-4ceb-9044-945493566db5   [SWAP]
loop0           udf         T7330-15010 2010-08-25-10-57-00-0                  /usr/hxm/loadrunner
View disk usage
[root@localhost ~]# df -hl
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 126G     0  126G   0% /dev
tmpfs                    126G  4.0K  126G   1% /dev/shm
tmpfs                    126G  116M  126G   1% /run
tmpfs                    126G     0  126G   0% /sys/fs/cgroup
/dev/mapper/vg00-lvroot  100G   43G   58G  43% /
/dev/sdc2               1014M  227M  788M  23% /boot
/dev/sdc1               1022M   12M 1011M   2% /boot/efi
tmpfs                     26G   12K   26G   1% /run/user/42
tmpfs                     26G     0   26G   0% /run/user/0
/dev/loop0               386M  386M     0 100% /usr/hxm/loadrunner
View the partition types supported by the system
Type mkfs and then press the Tab key twice to see the filesystem types the system supports:
[root@localhost ~]# mkfs
mkfs         mkfs.btrfs   mkfs.cramfs  mkfs.ext2    mkfs.ext3    mkfs.ext4
mkfs.fat     mkfs.minix   mkfs.msdos   mkfs.vfat    mkfs.xfs
Format the disks
[root@localhost ~]# mkfs -t ext4 /dev/sda
mke2fs 1.42.9 (28-Dec-2013)
/dev/sda is entire device, not just one partition!
Proceed anyway? (y,n) y
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
117211136 inodes, 468843606 blocks
23442180 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2617245696
14308 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost ~]# mkfs -t ext4 /dev/sdb
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
117211136 inodes, 468843606 blocks
23442180 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2617245696
14308 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
View disk partitions and mounts
[root@localhost ~]# lsblk -f
NAME            FSTYPE      LABEL       UUID                                   MOUNTPOINT
sda             ext4                    90551db9-8abf-4b2e-80f7-27cb6d3139ce
sdb             ext4                    31120181-6143-4385-bc62-8841e21941db
sdc
├─sdc1          vfat                    0320-6ED4                              /boot/efi
├─sdc2          xfs                     93739afe-f19e-436e-aefb-be9522931fdf   /boot
└─sdc3          LVM2_member             22LfZ1-VwqX-MMKK-97L0-1l0T-iDlG-YPXHZ5
  ├─vg00-lvroot xfs         lvroot      c2918725-45ec-4415-a7c1-d10598630178   /
  └─vg00-lvswap swap        lvswap      b9ce8c9c-03b2-4ceb-9044-945493566db5   [SWAP]
loop0           udf         T7330-15010 2010-08-25-10-57-00-0                  /usr/hxm/loadrunner
Create directories
[root@localhost ~]# mkdir -p /data/1
[root@localhost ~]# mkdir -p /data/2
Mount the directories
mount /dev/sda /data/1
This command mounts the device sda to the directory /data/1.
[root@localhost ~]# mount /dev/sda /data/1
[root@localhost ~]# mount /dev/sdb /data/2
View disk partitions and mounts
You can see that sda and sdb have been successfully mounted to the /data/1 and /data/2 directories respectively.
[root@localhost ~]# lsblk -f
NAME            FSTYPE      LABEL       UUID                                   MOUNTPOINT
sda             ext4                    90551db9-8abf-4b2e-80f7-27cb6d3139ce   /data/1
sdb             ext4                    31120181-6143-4385-bc62-8841e21941db   /data/2
sdc
├─sdc1          vfat                    0320-6ED4                              /boot/efi
├─sdc2          xfs                     93739afe-f19e-436e-aefb-be9522931fdf   /boot
└─sdc3          LVM2_member             22LfZ1-VwqX-MMKK-97L0-1l0T-iDlG-YPXHZ5
  ├─vg00-lvroot xfs         lvroot      c2918725-45ec-4415-a7c1-d10598630178   /
  └─vg00-lvswap swap        lvswap      b9ce8c9c-03b2-4ceb-9044-945493566db5   [SWAP]
loop0           udf         T7330-15010 2010-08-25-10-57-00-0                  /usr/hxm/loadrunner
Check whether the mount is successful
[root@localhost /]# ls /data/1
lost+found
[root@localhost /]# ls /data/2
lost+found
The presence of lost+found indicates that the mount succeeded.
Configure automatic mounting (permanent mounts)
Check the current mount:
[root@localhost /]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Dec 18 00:52:03 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg00-lvroot /         xfs  defaults 0 0
UUID=93739afe-f19e-436e-aefb-be9522931fdf /boot xfs defaults 0 0
UUID=0320-6ED4          /boot/efi vfat defaults,uid=0,gid=0,umask=0077,shortname=winnt 0 0
/dev/mapper/vg00-lvswap swap      swap defaults 0 0
Next, add entries so the mounts persist; here the devices are referenced directly by path rather than by UUID.
Run vim /etc/fstab to edit the file:
Append the entries for /data/1 and /data/2:
#
# /etc/fstab
# Created by anaconda on Sat Dec 18 00:52:03 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg00-lvroot /         xfs  defaults 0 0
UUID=93739afe-f19e-436e-aefb-be9522931fdf /boot xfs defaults 0 0
UUID=0320-6ED4          /boot/efi vfat defaults,uid=0,gid=0,umask=0077,shortname=winnt 0 0
/dev/mapper/vg00-lvswap swap      swap defaults 0 0
/dev/sda                /data/1   ext4 defaults 0 0
/dev/sdb                /data/2   ext4 defaults 0 0
This mounts device sda to /data/1 and device sdb to /data/2. The last two fields (dump and fsck order) can be left at 0.
Save and exit, then execute the mount -a command (-a means all: mount every filesystem listed in /etc/fstab):
[root@localhost /]# mount -a
[root@localhost /]#
The automatic mounting is now configured.
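Optionally, the two fstab entries can reference the filesystems by UUID instead of device name, which is more robust if /dev/sda and /dev/sdb get renumbered across reboots. A sketch (the fallback echo is only so the commands run on machines without these disks):

```shell
# Look up the UUIDs assigned by mkfs, then print fstab-ready lines.
uuid1=$(blkid -s UUID -o value /dev/sda 2>/dev/null || echo unknown)
uuid2=$(blkid -s UUID -o value /dev/sdb 2>/dev/null || echo unknown)
printf 'UUID=%s /data/1 ext4 defaults 0 0\n' "$uuid1"
printf 'UUID=%s /data/2 ext4 defaults 0 0\n' "$uuid2"
# Append the printed lines to /etc/fstab in place of the /dev/sd* entries,
# then run mount -a again to verify.
```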
Change directory permissions
[root@localhost /]# chown -R admin.admin /data/1 && chown -R admin.admin /data/2
[root@localhost /]#
Install OBD automatic deployment software
OBD stands for OceanBase Deployer, the command-line automated deployment tool for OceanBase Community Edition. Depending on whether the central control machine can reach the public network, both offline and online installation methods are available; choose according to your situation. This is an intranet environment, so I chose offline installation.
Download relevant software packages
The software package can be downloaded by itself through the download page of the official website.
Transfer the downloaded software packages to a designated directory on the central control machine, for example:
obproxy-3.2.0-1.el7.x86_64.rpm
Offline installation of OBD
I logged in to the central control machine as root to install. Generally, it should be installed by the application user; either is fine.
The ob-deploy software is installed under /usr/obd by default, though this may change between versions. Install the RPM package:
[root@localhost 3.1.2_ce]# rpm -ivh ob-deploy-1.2.0-15.el7.x86_64.rpm
warning: ob-deploy-1.2.0-15.el7.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID e9b4a7aa: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:ob-deploy-1.2.0-15.el7           ################################# [100%]
Installation of obd finished successfully
Please source /etc/profile.d/obd.sh to enable it
[root@localhost 3.1.2_ce]#
Check whether the installation is successful
Afterwards, you can check whether OBD is installed successfully as the application user, for example the admin user.
[admin@localhost 3.1.2_ce]$ obd cluster list
Local deploy is empty
View the ~/.obd/ directory
[admin@localhost 3.1.2_ce]$ ls ~/.obd/
cluster  lock  log  mirror  plugins  version
About the creation of the ~/.obd directory
After installing OBD, ~/.obd is not created until you run any obd command.
OBD does not create ~/.obd at installation time because OBD can run under any user, so it cannot know in advance under which user's home directory to create ~/.obd.
obd command
You can use -h to view the obd command help:
[admin@localhost 3.1.2_ce]$ obd -h
Usage: obd <command> [options]

Available commands:

cluster        Deploy and manage a cluster.
mirror         Manage a component repository for OBD.
repo           Manage local repository for OBD.
test           Run test for a running deployment.
update         Update OBD.

Options:
  --version      show program's version number and exit
  -h, --help     Show help and exit.
  -v, --verbose  Activate verbose output.
Add package to offline warehouse
Delete remote warehouse
Use the following command:
Note: the following commands should be run as the operating system user who deploys and runs OBD — here, the admin user.
/bin/rm -rf ~/.obd/mirror/remote/OceanBase.repo
- Use the following command to copy the previously downloaded packages into the local repository.
[admin@localhost 3.1.2_ce]$ obd mirror clone /home/admin/soft/3.1.2_ce/*.rpm
name: libobclient
version: 2.0.0
release:2.el7
arch: x86_64
md5: f73cae67e2ff5be0682ac2803aba33a7ed26430e
add /home/admin/soft/3.1.2_ce/libobclient-2.0.0-2.el7.x86_64.rpm to local mirror
name: obagent
version: 1.1.0
release:1.el7
arch: x86_64
md5: d2416fadeadba35944872467843d55da0999f298
add /home/admin/soft/3.1.2_ce/obagent-1.1.0-1.el7.x86_64.rpm to local mirror
name: obclient
version: 2.0.0
release:2.el7
arch: x86_64
md5: 1d2c3ee31f40b9d2fbf97f653f549d896b7e7060
add /home/admin/soft/3.1.2_ce/obclient-2.0.0-2.el7.x86_64.rpm to local mirror
name: ob-deploy
version: 1.2.0
release:15.el7
arch: x86_64
md5: b1cfa5fb95b6d7b06927e4ddbcdd7651f19311a9
add /home/admin/soft/3.1.2_ce/ob-deploy-1.2.0-15.el7.x86_64.rpm to local mirror
name: obproxy
version: 3.2.0
release:1.el7
arch: x86_64
md5: 8d5c6978f988935dc3da1dbec208914668dcf3b2
add /home/admin/soft/3.1.2_ce/obproxy-3.2.0-1.el7.x86_64.rpm to local mirror
name: oceanbase-ce
version: 3.1.2
release:10000392021123010.el7
arch: x86_64
md5: 7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed
add /home/admin/soft/3.1.2_ce/oceanbase-ce-3.1.2-10000392021123010.el7.x86_64.rpm to local mirror
name: oceanbase-ce-libs
version: 3.1.2
release:10000392021123010.el7
arch: x86_64
md5: 94fff0ab31de053051dba66039e3185fa390cad5
add /home/admin/soft/3.1.2_ce/oceanbase-ce-libs-3.1.2-10000392021123010.el7.x86_64.rpm to local mirror
name: oceanbase-ce-utils
version: 3.1.2
release:10000392021123010.el7
arch: x86_64
md5: 6ca7db146fee526f4201508f9bd30901e487b7c5
add /home/admin/soft/3.1.2_ce/oceanbase-ce-utils-3.1.2-10000392021123010.el7.x86_64.rpm to local mirror
View the RPM list in the local repository:
[admin@localhost 3.1.2_ce]$ obd mirror list local
+----------------------------------------------------------------------------------------------------------+
|                                             local Package List                                           |
+--------------------+---------+-----------------------+--------+------------------------------------------+
| name               | version | release               | arch   | md5                                      |
+--------------------+---------+-----------------------+--------+------------------------------------------+
| libobclient        | 2.0.0   | 2.el7                 | x86_64 | f73cae67e2ff5be0682ac2803aba33a7ed26430e |
| obagent            | 1.1.0   | 1.el7                 | x86_64 | d2416fadeadba35944872467843d55da0999f298 |
| obclient           | 2.0.0   | 2.el7                 | x86_64 | 1d2c3ee31f40b9d2fbf97f653f549d896b7e7060 |
| ob-deploy          | 1.2.0   | 15.el7                | x86_64 | b1cfa5fb95b6d7b06927e4ddbcdd7651f19311a9 |
| obproxy            | 3.2.0   | 1.el7                 | x86_64 | 8d5c6978f988935dc3da1dbec208914668dcf3b2 |
| oceanbase-ce       | 3.1.2   | 10000392021123010.el7 | x86_64 | 7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed |
| oceanbase-ce-libs  | 3.1.2   | 10000392021123010.el7 | x86_64 | 94fff0ab31de053051dba66039e3185fa390cad5 |
| oceanbase-ce-utils | 3.1.2   | 10000392021123010.el7 | x86_64 | 6ca7db146fee526f4201508f9bd30901e487b7c5 |
+--------------------+---------+-----------------------+--------+------------------------------------------+
Install OceanBase cluster
Edit the deployment yaml file
OBD supports a variety of yaml configuration templates, including an autodeploy mode for production environments that adapts the parameters to the available machine resources. I am doing a performance test here, so I chose autodeploy mode.
Download the template file from the official website and edit it. Download address: https://github.com/oceanbase/obdeploy/blob/master/example/autodeploy/distributed-with-obproxy-example.yaml
## Only need to configure when remote login is required
user:
  username: admin
  password: admin
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    - name: obce_zone1_01
      # Please don't use hostname, only IP can be supported
      ip: 10.144.2.106
    - name: obce_zone1_02
      ip: 10.144.2.107
    - name: obce_zone2_01
      ip: 10.144.2.108
    - name: obce_zone2_02
      # Please don't use hostname, only IP can be supported
      ip: 10.144.2.109
    - name: obce_zone3_01
      ip: 10.144.2.110
    - name: obce_zone3_02
      ip: 10.144.2.111
  global:
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data/1/oceanbase-ce
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /data/2/oceanbase-ce
    # Please set devname as the network adaptor's name whose ip is in the setting of severs.
    # if set severs as "127.0.0.1", please set devname as "lo"
    # if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
    devname: bond0
    # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    # mysql_port: 2881
    # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    # rpc_port: 2882
    # Defines the zone for an observer. The default value is zone1.
    # zone: zone1
    # The maximum running memory for an observer. When ignored, autodeploy calculates this value based on the current server available resource.
    # memory_limit: 58G
    # The percentage of the maximum available memory to the total memory. This value takes effect only when memory_limit is 0. The default value is 80.
    # memory_limit_percentage: 80
    # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G. Autodeploy calculates this value based on the current server available resource.
    # system_memory: 22G
    # The size of a data file. When ignored, autodeploy calculates this value based on the current server available resource.
    # datafile_size: 200G
    # The percentage of the data_dir space to the total disk space. This value takes effect only when datafile_size is 0. The default value is 90.
    # datafile_disk_percentage: 90
    # System log level. The default value is INFO.
    # syslog_level: INFO
    # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true. The default value for autodeploy mode is false.
    # enable_syslog_wf: false
    # Enable auto system log recycling or not. The default value is false. The default value for autodeploy mode is on.
    # enable_syslog_recycle: true
    # The maximum number of reserved log files before enabling auto recycling. When set to 0, no logs are deleted. The default value for autodeploy mode is 4.
    # max_syslog_file_count: 4
    # Cluster name for OceanBase Database. The default value is obcluster. When you deploy OceanBase Database and obproxy, this value must be the same as the cluster_name for obproxy.
    # appname: obcluster
    # Password for root. The default value is empty.
    # root_password:
    # Password for proxyro. proxyro_password must be the same as observer_sys_password. The default value is empty.
    # proxyro_password:
  obce_zone1_01:
    zone: zone1
  obce_zone1_02:
    zone: zone1
  obce_zone2_01:
    zone: zone2
  obce_zone2_02:
    zone: zone2
  obce_zone3_01:
    zone: zone3
  obce_zone3_02:
    zone: zone3
obproxy:
  depends:
    - oceanbase-ce
  servers:
    - 10.144.2.106
    - 10.144.2.107
    - 10.144.2.108
    - 10.144.2.109
    - 10.144.2.110
    - 10.144.2.111
  global:
    # The working directory for obproxy. Obproxy is started under this directory. This is a required field.
    home_path: /home/admin/obproxy
    # External port. The default value is 2883.
    # listen_port: 2883
    # The Prometheus port. The default value is 2884.
    # prometheus_listen_port: 2884
    # rs_list is the root server list for observers. The default root server is the first server in the zone.
    # The format for rs_list is observer_ip:observer_mysql_port;observer_ip:observer_mysql_port.
    # Ignore this value in autodeploy mode.
    # rs_list: 127.0.0.1:2881
    # Cluster name for the proxy OceanBase Database. The default value is obcluster. This value must be set to the same with the appname for OceanBase Database.
    # cluster_name: obcluster
    # Password for obproxy system tenant. The default value is empty.
    # obproxy_sys_password:
    # Password for proxyro. proxyro_password must be the same with proxyro_password. The default value is empty.
    # observer_sys_password:
Perform installation
[admin@localhost ~]$ obd cluster autodeploy obtest -c distributed-with-obproxy-example.yaml
oceanbase-ce-3.1.2 already installed.
obproxy-3.2.0 already installed.
Cluster param config check ok
Open ssh connection ok
Generate observer configuration ok
Generate obproxy configuration ok
oceanbase-ce-3.1.2 already installed.
obproxy-3.2.0 already installed.
+-------------------------------------------------------------------------------------------+
|                                          Packages                                         |
+--------------+---------+-----------------------+------------------------------------------+
| Repository   | Version | Release               | Md5                                      |
+--------------+---------+-----------------------+------------------------------------------+
| oceanbase-ce | 3.1.2   | 10000392021123010.el7 | 7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed |
| obproxy      | 3.2.0   | 1.el7                 | 8d5c6978f988935dc3da1dbec208914668dcf3b2 |
+--------------+---------+-----------------------+------------------------------------------+
Repository integrity check ok
Parameter check ok
Open ssh connection ok
Remote oceanbase-ce-3.1.2-7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed repository install ok
Remote oceanbase-ce-3.1.2-7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed repository lib check ok
Remote obproxy-3.2.0-8d5c6978f988935dc3da1dbec208914668dcf3b2 repository install ok
Remote obproxy-3.2.0-8d5c6978f988935dc3da1dbec208914668dcf3b2 repository lib check ok
Cluster status check ok
Initializes observer work home ok
Initializes obproxy work home ok
obtest deployed
Get local repositories and plugins ok
Open ssh connection ok
Load cluster param plugin ok
Check before start observer ok
Check before start obproxy ok
Start observer ok
observer program health check ok
Connect to observer ok
Initialize cluster
Cluster bootstrap ok
Wait for observer init ok
+------------------------------------------------+
|                    observer                    |
+--------------+---------+------+-------+--------+
| ip           | version | port | zone  | status |
+--------------+---------+------+-------+--------+
| 10.144.2.106 | 3.1.2   | 2881 | zone1 | active |
| 10.144.2.107 |         | 0    | zone1 | active |
| 10.144.2.108 | 3.1.2   | 2881 | zone2 | active |
| 10.144.2.109 |         | 0    | zone2 | active |
| 10.144.2.110 | 3.1.2   | 2881 | zone3 | active |
| 10.144.2.111 |         | 0    | zone3 | active |
+--------------+---------+------+-------+--------+
Start obproxy ok
obproxy program health check ok
Connect to obproxy ok
Initialize cluster
+------------------------------------------------+
|                     obproxy                    |
+--------------+------+-----------------+--------+
| ip           | port | prometheus_port | status |
+--------------+------+-----------------+--------+
| 10.144.2.106 | 2883 | 2884            | active |
| 10.144.2.107 | 2883 | 2884            | active |
| 10.144.2.108 | 2883 | 2884            | active |
| 10.144.2.109 | 2883 | 2884            | active |
| 10.144.2.110 | 2883 | 2884            | active |
| 10.144.2.111 | 2883 | 2884            | active |
+--------------+------+-----------------+--------+
obtest running
View cluster
[admin@localhost ~]$ obd cluster list
+------------------------------------------------------------+
|                        Cluster List                        |
+--------+---------------------------------+-----------------+
| Name   | Configuration Path              | Status (Cached) |
+--------+---------------------------------+-----------------+
| obtest | /home/admin/.obd/cluster/obtest | running         |
+--------+---------------------------------+-----------------+
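To see connection details for the deployed cluster, obd also provides a display subcommand (a sketch; obtest is the deploy name used above):

```shell
# Show the cluster's connection info (IPs, ports, users) if obd is available.
if command -v obd >/dev/null 2>&1; then
    obd cluster display obtest
else
    echo "obd is not installed on this machine"
fi
```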
Create business tenants and business test users
Create ordinary tenant
- Use OBD to create an ordinary tenant:
[admin@localhost ~]$ obd cluster tenant create obtest -n tpcc
Get local repositories and plugins ok
[WARN] No such create_tenant plugin for obproxy-3.2.0
Open ssh connection ok
Connect to observer ok
Create tenant tpcc ok
Connect to OceanBase
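The queries below are run from a MySQL-compatible client. One common way to connect is through any obproxy node (a sketch: 2883 is the default obproxy listening port from the config above, the root password is empty by default, and obclient can be substituted for mysql):

```shell
# Connect to the sys tenant via an obproxy node; falls back to a message
# when no mysql client or no reachable cluster is present.
if command -v mysql >/dev/null 2>&1; then
    mysql -h 10.144.2.106 -P 2883 -u 'root@sys#obcluster' -D oceanbase -A \
        || echo "connection failed (cluster not reachable from this host)"
else
    echo "mysql client is not installed"
fi
```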
View new resource units
MySQL [oceanbase]> SELECT unit_config_id,name,max_cpu,min_cpu,max_memory,min_memory,max_disk_size FROM __all_unit_config;
+----------------+-----------------+---------+---------+--------------+--------------+---------------+
| unit_config_id | name            | max_cpu | min_cpu | max_memory   | min_memory   | max_disk_size |
+----------------+-----------------+---------+---------+--------------+--------------+---------------+
|              1 | sys_unit_config |       5 |     2.5 |  17179869184 |  12884901888 | 1431297851392 |
|           1001 | tpcc_unit       |       9 |       9 | 141733920768 | 141733920768 | 1431297851392 |
+----------------+-----------------+---------+---------+--------------+--------------+---------------+
2 rows in set (0.002 sec)
View instances in the cluster
MySQL [oceanbase]> select a.zone,concat(a.svr_ip,':',a.svr_port) observer, cpu_total, (cpu_total-cpu_assigned) cpu_free, round(mem_total/1024/1024/1024) mem_total_gb, round((mem_total-mem_assigned)/1024/1024/1024) mem_free_gb, usec_to_time(b.last_offline_time) last_offline_time, usec_to_time(b.start_service_time) start_service_time, b.status, usec_to_time(b.stop_time) stop_time, b.build_version
    -> from __all_virtual_server_stat a join __all_server b on (a.svr_ip=b.svr_ip and a.svr_port=b.svr_port)
    -> order by a.zone, a.svr_ip
    -> ;
| zone | observer | cpu_total | cpu_free | mem_total_gb | mem_free_gb | last_offline_time | start_service_time | status | stop_time | build_version |
| zone1 | 10.144.2.106:2882 | 14 | 2.5 | 149 | 5 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:20.552658 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29) |
| zone1 | 10.144.2.107:2882 | 14 | 5 | 150 | 18 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:24.739132 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29) |
| zone2 | 10.144.2.108:2882 | 14 | 2.5 | 150 | 6 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:22.918302 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29) |
| zone2 | 10.144.2.109:2882 | 14 | 5 | 150 | 18 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:25.136243 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29) |
| zone3 | 10.144.2.110:2882 | 14 | 2.5 | 149 | 5 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:23.249400 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29) |
| zone3 | 10.144.2.111:2882 | 14 | 5 | 148 | 16 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:25.437576 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29) |
6 rows in set (0.005 sec)
View resource pool
MySQL [oceanbase]> select t1.name resource_pool_name, t2.`name` unit_config_name, t2.max_cpu, t2.min_cpu, t2.max_memory/1024/1024/1024 max_mem_gb, t2.min_memory/1024/1024/1024 min_mem_gb, t3.unit_id, t3.zone, concat(t3.svr_ip,':',t3.`svr_port`) observer, t4.tenant_id, t4.tenant_name
    -> from __all_resource_pool t1 join __all_unit_config t2 on (t1.unit_config_id=t2.unit_config_id)
    -> join __all_unit t3 on (t1.`resource_pool_id` = t3.`resource_pool_id`)
    -> left join __all_tenant t4 on (t1.tenant_id=t4.tenant_id)
    -> order by t1.`resource_pool_id`, t2.`unit_config_id`, t3.unit_id
    -> ;
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
| resource_pool_name | unit_config_name | max_cpu | min_cpu | max_mem_gb       | min_mem_gb       | unit_id | zone  | observer          | tenant_id | tenant_name |
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       1 | zone1 | 10.144.2.106:2882 |         1 | sys         |
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       2 | zone2 | 10.144.2.108:2882 |         1 | sys         |
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       3 | zone3 | 10.144.2.110:2882 |         1 | sys         |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1001 | zone1 | 10.144.2.107:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1002 | zone1 | 10.144.2.106:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1003 | zone2 | 10.144.2.109:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1004 | zone2 | 10.144.2.108:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1005 | zone3 | 10.144.2.111:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1006 | zone3 | 10.144.2.110:2882 |      1001 | tpcc        |
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
9 rows in set (0.001 sec)
View the available resources of the cluster
MySQL [oceanbase]> select a.zone,concat(a.svr_ip,':',a.svr_port) observer, cpu_total, cpu_assigned, (cpu_total-cpu_assigned) cpu_free, mem_total/1024/1024/1024 mem_total_gb, mem_assigned/1024/1024/1024 mem_assign_gb, (mem_total-mem_assigned)/1024/1024/1024 mem_free_gb
    -> from __all_virtual_server_stat a join __all_server b on (a.svr_ip=b.svr_ip and a.svr_port=b.svr_port)
    -> order by a.zone, a.svr_ip
    -> ;
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
| zone  | observer          | cpu_total | cpu_assigned | cpu_free | mem_total_gb     | mem_assign_gb    | mem_free_gb     |
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
| zone1 | 10.144.2.106:2882 |        14 |         11.5 |      2.5 | 149.000000000000 | 144.000000000000 |  5.000000000000 |
| zone1 | 10.144.2.107:2882 |        14 |            9 |        5 | 150.000000000000 | 132.000000000000 | 18.000000000000 |
| zone2 | 10.144.2.108:2882 |        14 |         11.5 |      2.5 | 150.000000000000 | 144.000000000000 |  6.000000000000 |
| zone2 | 10.144.2.109:2882 |        14 |            9 |        5 | 150.000000000000 | 132.000000000000 | 18.000000000000 |
| zone3 | 10.144.2.110:2882 |        14 |         11.5 |      2.5 | 149.000000000000 | 144.000000000000 |  5.000000000000 |
| zone3 | 10.144.2.111:2882 |        14 |            9 |        5 | 148.000000000000 | 132.000000000000 | 16.000000000000 |
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
6 rows in set (0.004 sec)
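The assigned figures can be cross-checked against the unit configurations: the nodes hosting both a sys unit and a tpcc unit report 11.5 assigned CPUs and 144 GB assigned memory, while the tpcc-only nodes report exactly one tpcc unit (9 CPUs, 132 GB). The interpretation that `cpu_assigned`/`mem_assigned` sum the min_cpu/min_memory of the units on each node is our reading of these numbers, not something the output states, but the arithmetic is consistent:

```sql
-- Assumption: cpu_assigned/mem_assigned sum the unit minimums.
-- Cross-check a node hosting both a sys and a tpcc unit:
select 2.5 + 9 as cpu_assigned_expected,    -- 11.5
       12  + 132 as mem_assign_gb_expected; -- 144
```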
View the resource allocation details
MySQL [oceanbase]> select t1.name resource_pool_name, t2.`name` unit_config_name, t2.max_cpu, t2.min_cpu, t2.max_memory/1024/1024/1024 max_mem_gb, t2.min_memory/1024/1024/1024 min_mem_gb, t3.unit_id, t3.zone, concat(t3.svr_ip,':',t3.`svr_port`) observer, t4.tenant_id, t4.tenant_name
    -> from __all_resource_pool t1 join __all_unit_config t2 on (t1.unit_config_id=t2.unit_config_id)
    -> join __all_unit t3 on (t1.`resource_pool_id` = t3.`resource_pool_id`)
    -> left join __all_tenant t4 on (t1.tenant_id=t4.tenant_id)
    -> order by t1.`resource_pool_id`, t2.`unit_config_id`, t3.unit_id
    -> ;
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
| resource_pool_name | unit_config_name | max_cpu | min_cpu | max_mem_gb       | min_mem_gb       | unit_id | zone  | observer          | tenant_id | tenant_name |
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       1 | zone1 | 10.144.2.106:2882 |         1 | sys         |
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       2 | zone2 | 10.144.2.108:2882 |         1 | sys         |
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       3 | zone3 | 10.144.2.110:2882 |         1 | sys         |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1001 | zone1 | 10.144.2.107:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1002 | zone1 | 10.144.2.106:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1003 | zone2 | 10.144.2.109:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1004 | zone2 | 10.144.2.108:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1005 | zone3 | 10.144.2.111:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1006 | zone3 | 10.144.2.110:2882 |      1001 | tpcc        |
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
9 rows in set (0.006 sec)
Query remaining available resources
MySQL [oceanbase]> select a.zone,concat(a.svr_ip,':',a.svr_port) observer, cpu_total, cpu_assigned, (cpu_total-cpu_assigned) cpu_free, mem_total/1024/1024/1024 mem_total_gb, mem_assigned/1024/1024/1024 mem_assign_gb, (mem_total-mem_assigned)/1024/1024/1024 mem_free_gb from __all_virtual_server_stat a join __all_server b on (a.svr_ip=b.svr_ip and a.svr_port=b.svr_port) order by a.zone, a.svr_ip;
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
| zone  | observer          | cpu_total | cpu_assigned | cpu_free | mem_total_gb     | mem_assign_gb    | mem_free_gb     |
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
| zone1 | 10.144.2.106:2882 |        14 |         11.5 |      2.5 | 149.000000000000 | 144.000000000000 |  5.000000000000 |
| zone1 | 10.144.2.107:2882 |        14 |            9 |        5 | 150.000000000000 | 132.000000000000 | 18.000000000000 |
| zone2 | 10.144.2.108:2882 |        14 |         11.5 |      2.5 | 150.000000000000 | 144.000000000000 |  6.000000000000 |
| zone2 | 10.144.2.109:2882 |        14 |            9 |        5 | 150.000000000000 | 132.000000000000 | 18.000000000000 |
| zone3 | 10.144.2.110:2882 |        14 |         11.5 |      2.5 | 149.000000000000 | 144.000000000000 |  5.000000000000 |
| zone3 | 10.144.2.111:2882 |        14 |            9 |        5 | 148.000000000000 | 132.000000000000 | 16.000000000000 |
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
6 rows in set (0.001 sec)
View successfully created tenants
MySQL [oceanbase]> select tenant_id,tenant_name,primary_zone from __all_tenant;
+-----------+-------------+-------------------+
| tenant_id | tenant_name | primary_zone      |
+-----------+-------------+-------------------+
|         1 | sys         | zone1;zone2,zone3 |
|      1001 | tpcc        | RANDOM            |
+-----------+-------------+-------------------+
2 rows in set (0.000 sec)
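The two primary_zone values behave differently: `zone1;zone2,zone3` gives zone1 top leader priority for the sys tenant, with zone2 and zone3 as equal-priority fallbacks, while `RANDOM` spreads the tpcc tenant's partition leaders across all three zones. If leader placement needs to change later, the tenant can be altered from the sys tenant; a sketch, assuming OceanBase 3.x MySQL-mode syntax:

```sql
-- Sketch (run as root@sys): prefer zone1 for tpcc leaders,
-- keeping zone2/zone3 as equal-priority fallbacks.
alter tenant tpcc primary_zone = 'zone1;zone2,zone3';
```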
Create business user
Log in using the root user of the business tenant
[admin@localhost ~]$ mysql -h10.144.2.106 -uroot@tpcc -P2883 -p -c -A oceanbase
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.6.25 OceanBase 3.1.2 (r10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d) (Built Dec 30 2021 02:47:29)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [oceanbase]>
Then create an ordinary user
MySQL [oceanbase]> create user benchmarksql@'%' identified by 'benchmarksql';
Query OK, 0 rows affected (0.03 sec)
View successfully created users
MySQL [oceanbase]> SELECT user FROM mysql.user;
+--------------+
| user         |
+--------------+
| root         |
| ORAAUDITOR   |
| benchmarksql |
+--------------+
3 rows in set (0.06 sec)
Grant privileges to the new user
MySQL [oceanbase]> grant all privileges on *.* to benchmarksql@'%';
Query OK, 0 rows affected (0.02 sec)
View authorization
MySQL [oceanbase]> show grants for benchmarksql;
+-----------------------------------------------+
| Grants for benchmarksql@%                     |
+-----------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'benchmarksql' |
+-----------------------------------------------+
1 row in set (0.01 sec)
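`GRANT ALL PRIVILEGES ON *.*` is convenient for a benchmark user but is a global grant. For anything longer-lived, a database-scoped grant is the safer pattern; a hypothetical narrower alternative (scoping benchmarksql to a single database):

```sql
-- Hypothetical narrower grant: limit benchmarksql to one database
-- instead of the global *.* used above.
grant all privileges on benchmark.* to benchmarksql@'%';
```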
Create business database
Log in as the new user
Exit the sys tenant session and log back in to the business tenant as the newly created user.
[admin@localhost ~]$ obclient -h10.144.2.106 -ubenchmarksql@tpcc -P2883 -pbenchmarksql -c -A oceanbase
Welcome to the OceanBase.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.6.25 OceanBase 3.1.2 (r10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d) (Built Dec 30 2021 02:47:29)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [oceanbase]>
Create the business database
MySQL [oceanbase]> create database benchmark;
Query OK, 1 row affected (0.018 sec)
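To confirm the new database is visible from the same session:

```sql
-- The returned list should now include the new benchmark database
-- alongside the tenant's built-in databases.
show databases;
```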
Create business user
Log in using the root user of the business tenant
[admin@localhost ~]$ mysql -h10.144.2.106 -uroot@tpcc -P2883 -p -c -A oceanbase
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.6.25 OceanBase 3.1.2 (r10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d) (Built Dec 30 2021 02:47:29)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [oceanbase]>
Then create an ordinary user
MySQL [oceanbase]> create user transfer@'%' identified by 'transfer';
Query OK, 0 rows affected (0.03 sec)
View successfully created users
MySQL [oceanbase]> SELECT user FROM mysql.user;
+--------------+
| user         |
+--------------+
| root         |
| ORAAUDITOR   |
| benchmarksql |
| transfer     |
+--------------+
4 rows in set (0.02 sec)
Grant privileges to the new user
MySQL [oceanbase]> grant all privileges on *.* to transfer@'%';
Query OK, 0 rows affected (0.02 sec)
View authorization
MySQL [oceanbase]> show grants for transfer;
+-------------------------------------------+
| Grants for transfer@%                     |
+-------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'transfer' |
+-------------------------------------------+
1 row in set (0.01 sec)
Create business database
Log in as the new user
Exit the sys tenant session and log back in to the business tenant as the newly created user.
[admin@localhost ~]$ obclient -h10.144.2.106 -utransfer@tpcc -P2883 -ptransfer -c -A oceanbase
Welcome to the OceanBase.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.6.25 OceanBase 3.1.2 (r10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d) (Built Dec 30 2021 02:47:29)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [oceanbase]>
Create the business database
MySQL [oceanbase]> create database transfer;
Query OK, 1 row affected (0.026 sec)
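With the transfer database created, a minimal schema for a transfer-style workload could look like the sketch below. The table name, columns, and sample rows are illustrative only and are not part of the original deployment:

```sql
-- Illustrative only: a minimal account table for a transfer scenario.
use transfer;
create table account (
  id      bigint        not null primary key,
  name    varchar(64)   not null,
  balance decimal(16,2) not null default 0
);
insert into account values (1, 'alice', 100.00), (2, 'bob', 100.00);
-- A transfer is two updates inside one transaction:
begin;
update account set balance = balance - 10 where id = 1;
update account set balance = balance + 10 where id = 2;
commit;
```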