System configuration
Check and disable system swap
This section describes how to disable swap. TiDB needs sufficient memory to run, and using swap as a buffer when memory is insufficient is not recommended because it degrades performance. Therefore, it is recommended to disable system swap permanently. Do not disable it with swapoff -a alone; otherwise, the setting will be lost after the machine restarts.
It is recommended to disable system swap by executing the following commands:
echo "vm.swappiness = 0">> /etc/sysctl.conf swapoff -a && swapon -a sysctl -p
Configure SSH mutual trust and passwordless sudo
```shell
# Add the tidb user
useradd -m -d /home/tidb tidb
# Set the login password to tidb123
echo "tidb123" | passwd --stdin tidb
```
```shell
# Add write permission to the sudoers file
chmod u+w /etc/sudoers
# Edit the sudoers file
vim /etc/sudoers
# Add the following line
tidb ALL=(ALL) NOPASSWD: ALL
```
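The snippet above covers the passwordless sudo part. A minimal sketch of the SSH mutual trust part, assuming the tidb user has also been created on the deployment host (10.0.1.1 in this example):

```shell
# Run the following as the tidb user on the control machine
su - tidb
ssh-keygen -t rsa                 # accept the defaults
ssh-copy-id tidb@10.0.1.1         # copy the public key to the deployment host
ssh tidb@10.0.1.1 "sudo -n true"  # should succeed with no password prompt
```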
Deploy TiDB
Deploy using TiUP cluster
- Applicable scenario: you want to use a single Linux server to try out a TiDB cluster with the smallest complete topology and simulate the production deployment steps.
- Time: 10 minutes
This section describes how to deploy a TiDB cluster by using the smallest topology YAML file of TiUP.
Prepare the environment
Prepare a deployment host and make sure its software meets the requirements:
- It is recommended to install Ubuntu 16.04 or a later version
- The Linux operating system has Internet access, which is required to download TiDB and the related software installation packages
Smallest TiDB cluster topology:
| Instance | Count | IP | Configuration |
|---|---|---|---|
| TiKV | 3 | 10.0.1.1 | Avoid port and directory conflicts |
| TiDB | 1 | 10.0.1.1 | Default ports and global directory configuration |
| PD | 1 | 10.0.1.1 | Default ports and global directory configuration |
| TiFlash | 1 | 10.0.1.1 | Default ports and global directory configuration |
| Monitor | 1 | 10.0.1.1 | Default ports and global directory configuration |
Deployment host software and environment requirements:
- The firewall on the deployment host is turned off, or the ports required between TiDB cluster nodes are opened
- Currently, TiUP only supports deploying TiDB clusters on the x86_64 (AMD64) architecture (TiUP will support deployment on the ARM architecture in 4.0 GA)
- On the AMD64 architecture, it is recommended to use CentOS 7.3 or a later Linux operating system
- On the ARM architecture, it is recommended to use the CentOS 7.6 1810 Linux operating system (a quick way to check the host architecture and OS version is shown below)
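Before deploying, you can confirm the host architecture and OS version; a minimal check using standard Linux commands:

```shell
# Print the CPU architecture (expect x86_64 on AMD64 hosts)
uname -m
# Print the OS name and version
cat /etc/os-release
```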
Perform the deployment
Note:
You can log in to the host as any regular Linux user or as the root user. The following steps use the root user as an example.
- Download and install TiUP:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
- Declare global environment variables:
source .bash_profile
Note:
After TiUP is installed, the absolute path of the corresponding profile file is displayed. Perform the following source operation according to the actual location.
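To confirm that TiUP is now available in the current shell, you can run, for example:

```shell
# Should print the TiUP binary location and version
which tiup
tiup --version
```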
- Install the cluster component of TiUP:
tiup cluster
- If the TiUP cluster component is already installed on the machine, update it to the latest version:
tiup update --self && tiup update cluster
- Create and start the cluster.
Edit the configuration file according to the following template and save it as topo.yaml, where:
- user: "tidb": means that the internal management of the cluster is done through the tidb system user (which will be created automatically after deployment). By default, port 22 is used to log in to the target machine through ssh
- replication. Enable placement rules: set this PD parameter to ensure the normal operation of TiFlash
- Host: set as the IP address of the deployment host
The configuration template is as follows:
```yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 10.0.1.1

tidb_servers:
  - host: 10.0.1.1

tikv_servers:
  - host: 10.0.1.1
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 10.0.1.1
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 10.0.1.1
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 10.0.1.1

monitoring_servers:
  - host: 10.0.1.1

grafana_servers:
  - host: 10.0.1.1
```
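Optionally, before deploying you can let TiUP check the target host against the topology file. A sketch, assuming a TiUP cluster component recent enough to provide the check subcommand:

```shell
# Check whether the deployment host meets the requirements described in topo.yaml
tiup cluster check ./topo.yaml --user tidb -p
```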
- Execute the cluster deployment command:
tiup cluster deploy tidb-test v5.0.0 ./topo.yaml --user tidb -p
- The parameter <cluster-name> sets the cluster name (tidb-test in this example)
- The parameter <tidb-version> sets the cluster version (v5.0.0 in this example). You can view the TiDB versions currently available for deployment with the tiup list tidb command. The general form of the command is shown below
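For reference, the concrete command above maps onto the general form as follows; only the values in angle brackets change between deployments:

```shell
# <cluster-name> = tidb-test and <tidb-version> = v5.0.0 in this guide
tiup cluster deploy <cluster-name> <tidb-version> ./topo.yaml --user tidb -p
```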
Follow the prompts, then enter "y" and the tidb user's password to complete the deployment:
```
Do you want to continue? [y/N]: y
Input SSH password:
```
- Start the cluster:
tiup cluster start tidb-test
- Access the cluster:
- Install the MySQL client. If it is already installed, you can skip this step.
yum -y install mysql
- Access the TiDB database. The password is empty:
mysql -h 10.0.1.1 -P 4000 -u root
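Once connected, you can run a quick sanity check; a minimal example that reuses the same connection parameters:

```shell
# Print the TiDB version string to confirm that the SQL layer is reachable
mysql -h 10.0.1.1 -P 4000 -u root -e "SELECT tidb_version();"
```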
- Access the Grafana monitoring of TiDB:
Visit the cluster's Grafana monitoring page at http://{grafana-ip}:3000. The default username and password are both admin.
- Access TiDB Dashboard:
Visit the cluster's TiDB Dashboard monitoring page at http://{pd-ip}:2379/dashboard. The default username is root and the password is empty.
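Since all components run on the single deployment host (10.0.1.1) in this topology, you can quickly check that both web endpoints respond; a sketch using curl:

```shell
# Expect an HTTP status line such as 200 or a redirect if the services are up
curl -sI http://10.0.1.1:3000 | head -n 1
curl -sI http://10.0.1.1:2379/dashboard/ | head -n 1
```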
- Execute the following command to confirm the list of currently deployed clusters:
tiup cluster list
- Execute the following command to view the topology and status of the cluster:
tiup cluster display <cluster-name>