1. Preparatory work
1. MinIO officially recommends at least 4 nodes
| node | IP | data |
| --- | --- | --- |
| youduk2 | xxx.xxx.xxx.xxx | /home/minio/data1 /home/minio/data2 |
| youduk3 | xxx.xxx.xxx.xxx | /home/minio/data1 /home/minio/data2 |
| youduk4 | xxx.xxx.xxx.xxx | /home/minio/data1 /home/minio/data2 |
| youduk5 | xxx.xxx.xxx.xxx | /home/minio/data1 /home/minio/data2 |
2. mkdir /home/minio/{data1,data2,run}
Create the data1 and data2 data directories, and the run directory for the startup script.
3. Upload the minio binary file to /home/minio/
4. Raise the maximum number of files the server can open, in /etc/security/limits.conf:

```
* soft nofile 65535
* hard nofile 65535
```
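The limits.conf change only applies to new login sessions. As a quick sketch, the soft limit in effect for the current shell can be checked with ulimit (after a fresh login it should match the value configured above):

```shell
#!/bin/sh
# Show the current soft limit on open file descriptors; after a fresh
# login this should match the nofile value set in limits.conf.
current_nofile=$(ulimit -n)
echo "soft nofile limit: ${current_nofile}"
```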
2. Script configuration
1. Configure the startup script.
- MINIO_ACCESS_KEY: user name; minimum length 5 characters
- MINIO_SECRET_KEY: password; it must not be too simple, or MinIO will fail to start; minimum length 8 characters
- --config-dir: specifies the directory for the cluster configuration file
```shell
#!/bin/bash
export MINIO_ACCESS_KEY=admin
export MINIO_SECRET_KEY=admin456789
/home/minio/minio server --config-dir /etc/minio \
  --address "youduk2:9029" \
  http://youduk2/home/minio/data1 http://youduk2/home/minio/data2 \
  http://youduk3/home/minio/data1 http://youduk3/home/minio/data2 \
  http://youduk4/home/minio/data1 http://youduk4/home/minio/data2 \
  http://youduk5/home/minio/data1 http://youduk5/home/minio/data2
```
MinIO listens on port 9000 by default. Add --address "youduk2:9029" to the startup script to change the port.
2. Create the MinIO service
vim /usr/lib/systemd/system/minio.service
```ini
[Unit]
Description=Minio service
Documentation=https://docs.minio.io/

[Service]
WorkingDirectory=/home/minio/run
ExecStart=/home/minio/run/run.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```
3. Modify file permissions
```shell
chmod +x /usr/lib/systemd/system/minio.service && chmod +x /home/minio/minio && chmod +x /home/minio/run/run.sh
```
4. Start the cluster

```shell
systemctl daemon-reload
systemctl start minio
systemctl enable minio
```

View the cluster status:

```shell
systemctl status minio.service
```
5. Start the other nodes
youduk3, youduk4 and youduk5 repeat steps 1, 2 and 3 of the installation and configuration above (setting --address to each node's own host name).
Then on youduk3, youduk4 and youduk5 execute:
```shell
sudo systemctl start minio.service
sudo systemctl status minio.service
journalctl -f -u minio.service
```
Test http://youduk3:9029
The page request succeeds.
Log in with the user name admin and password admin456789.
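Besides the browser check, a node can also be probed from the shell. This sketch assumes MinIO's standard unauthenticated liveness endpoint /minio/health/live, which returns HTTP 200 when the node is up; the host youduk3:9029 follows the example above:

```shell
#!/bin/sh
# Build the liveness URL for a given host:port; MinIO answers HTTP 200
# on /minio/health/live when the node is running.
health_url() {
  echo "http://$1/minio/health/live"
}

# Usage on a real node:
# curl -sf "$(health_url youduk3:9029)" && echo "node is live"
```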
3. Configuration-file installation
Environment preparation
Servers: youduk1, youduk2, youduk3, youduk4
Data directories: /home/minio/data/{data1,data2,data3,data4}
Configuration file installation
vim /usr/lib/systemd/system/minio.service
```ini
[Unit]
Description=MinIO
Documentation=https://docs.min.io
Wants=network-online.target
After=network-online.target
# Binary file location
AssertFileIsExecutable=/home/minio/minio

[Service]
WorkingDirectory=/home/minio/
User=root
Group=root
ProtectProc=invisible
# Environment file location
EnvironmentFile=-/home/minio/run/minio
#ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /home/minio/run/minio\"; exit 1; fi"
ExecStart=/home/minio/minio server $MINIO_OPTS $MINIO_VOLUMES

# Let systemd restart this service always
Restart=always

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Specifies the maximum number of threads this process can create
TasksMax=infinity

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no

[Install]
WantedBy=multi-user.target
```
chmod +x /usr/lib/systemd/system/minio.service
On each node, configure the /home/minio/run/minio file:
```shell
# Set the hosts and volumes MinIO uses at startup
# The command uses MinIO expansion notation {x...y} to denote a
# sequential series.
#
# The following example covers four MinIO hosts
# with 4 drives each at the specified hostname and drive locations.
#MINIO_VOLUMES="http://youduk2/home/minio/data1 http://youduk2/home/minio/data2 http://youduk3/home/minio/data1 http://youduk3/home/minio/data2 http://youduk4/home/minio/data1 http://youduk4/home/minio/data2 http://youduk5/home/minio/data1 http://youduk5/home/minio/data2"
MINIO_VOLUMES="http://youduk{1...4}:9029/home/minio/data/data{1...4}"

# Set all MinIO server options
#
# The following explicitly sets the MinIO Console listen address to
# port 9001 on all network interfaces. The default behavior is dynamic
# port selection.
#MINIO_OPTS="--console-address :9001 --address youduk3:9029 --config-dir /etc/minio"
MINIO_OPTS="--console-address :9001 --address youduk2:9029"
# --address youduk2:9029 — write the local node's domain name:port

# Set the root username. This user has unrestricted permissions to
# perform S3 and administrative API operations on any resource in the
# deployment.
#
# Defer to your organization's requirements for superadmin user name.
MINIO_ROOT_USER=admin

# Set the root password
#
# Use a long, random, unique string that meets your organization's
# requirements for passwords.
MINIO_ROOT_PASSWORD=admin456789

# Set to the URL of the load balancer for the MinIO deployment
# This value *must* match across all MinIO servers. If you do
# not have a load balancer, set this value to any *one* of the
# MinIO hosts in the deployment as a temporary measure.
# MINIO_SERVER_URL="http://youduk"
```
On each node, MINIO_OPTS --address must be changed to that node's own domain name and port (replace youduk2:9029 with the local machine's host name).
The browser console is accessed on port 9001, and the cluster listens on port 9029. youduk{1...4} is a server pool.
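MinIO's {x...y} notation (three dots) is expanded by MinIO itself, not by the shell. As an illustration of what the pool youduk{1...4}:9029 denotes, this hypothetical helper prints the equivalent host list:

```shell
#!/bin/sh
# Print the individual host URLs that the pool notation
# youduk{lo...hi}:9029 stands for (illustrative helper only).
expand_hosts() {
  lo=$1
  hi=$2
  for i in $(seq "$lo" "$hi"); do
    echo "http://youduk${i}:9029"
  done
}

# expand_hosts 1 4 prints four URLs, one per host in the pool.
```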
Software deployment per node
Upload the minio binary file to /home/minio/
chmod +x /home/minio/minio
The youduk1, youduk2, youduk3 and youduk4 nodes are started respectively:
```shell
systemctl daemon-reload
systemctl start minio
systemctl enable minio
```
Test address http://youduk2:9001
Test address http://youduk4:9001
Log in with the user name and password.
Test successful.
4. Cluster expansion
MinIO supports two expansion modes:
- Expand the cluster by specifying a new server pool on the command line
- Introduce the third-party component etcd to achieve dynamic expansion on top of the original cluster
1. Configure extension mode
MinIO supports expanding a distributed cluster by specifying a new server pool on the command line. The combined storage of all nodes is the storage capacity of the distributed MinIO deployment.
The deployment has a server pool consisting of four MinIO server hosts with sequential host names.
youduk1 youduk2 youduk3 youduk4
Each host has four locally attached drives with sequential mount points (no dedicated drives here; testing with directories only):
/home/minio/data/data1 /home/minio/data/data2 /home/minio/data/data3 /home/minio/data/data4
The new server pool consists of eight new MinIO hosts with sequential host names:
youduk5 youduk6 youduk7 youduk8 youduk9 youduk10 youduk11 youduk12
All new hosts have eight locally attached disks with sequential mount points:
/home/minio/data/data1 /home/minio/data/data2 /home/minio/data/data3 /home/minio/data/data4 /home/minio/data/data5 /home/minio/data/data6 /home/minio/data/data7 /home/minio/data/data8
The environment is configured and installed in the same way as above. On each node,
MINIO_VOLUMES="http://youduk{1...4}:9029/home/minio/data/data{1...4}"
is modified to
MINIO_VOLUMES="http://youduk{1...4}:9029/home/minio/data/data{1...4} http://youduk{5...12}:9029/home/minio/data/data{1...8}"
Through this expansion strategy, the cluster can be expanded on demand. After reconfiguration, restarting the cluster takes effect immediately and has no impact on the existing cluster. In the command above, the original cluster can be regarded as one server pool and the new cluster as another. New objects are placed into a pool according to the proportion of free space in each pool; within each pool, the location is determined by a deterministic hash algorithm.
Note: each added server pool must keep the same erasure-code set size as the original pool in order to maintain the same data-redundancy SLA. For example, if the first pool has 8 disks, you can expand the cluster with pools of 16, 32 or 1024 disks; you only need to ensure that each new pool's disk count is a multiple of the original pool's.
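The multiple-of-the-original-pool rule above can be sketched as a simple pre-expansion check (assuming an original erasure-set size of 8, as in the example; the helper name is hypothetical):

```shell
#!/bin/sh
# Return success (exit 0) if a new pool's drive count preserves the
# original erasure-set size, i.e. it is a non-zero multiple of it.
is_valid_expansion() {
  set_size=$1
  new_drives=$2
  [ "$new_drives" -gt 0 ] && [ $(( new_drives % set_size )) -eq 0 ]
}

# is_valid_expansion 8 16  -> success (16 is a multiple of 8)
# is_valid_expansion 8 12  -> failure (12 is not)
```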
2. etcd expansion scheme
etcd provides DNS service records for buckets.
Reference: https://www.sohu.com/a/455702322_115128