1, Server planning
Host name | Host IP | Disk | Role |
---|---|---|---|
node3 | public-ip: 172.18.112.20 cluster-ip: 172.18.112.20 | vdb | ceph-deploy,monitor,mgr,osd |
node4 | public-ip: 172.18.112.19 cluster-ip: 172.18.112.19 | vdb | monitor,mgr,osd |
node5 | public-ip: 172.18.112.18 cluster-ip: 172.18.112.18 | vdb | monitor,mgr,osd |
2, Set host name
Set the host name; each of the three hosts runs its own commands.

node3

[root@localhost ~]# hostnamectl set-hostname node3
[root@localhost ~]# hostname node3
node4
[root@localhost ~]# hostnamectl set-hostname node4
[root@localhost ~]# hostname node4
node5
[root@localhost ~]# hostnamectl set-hostname node5
[root@localhost ~]# hostname node5
The new host name appears in the prompt only after you close the current terminal window and open it again.
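If reopening the terminal is inconvenient, the change can also be checked in place; a sketch, assuming a systemd-based system such as CentOS 7:

# A fresh login shell picks up the new name in the prompt
exec bash -l
# Or inspect the static and transient host names directly
hostnamectl status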
3, Set hosts file
Run the following commands on all three machines to add the name-to-IP mappings:
echo "172.18.112.20 node3 " >> /etc/hosts echo "172.18.112.19 node4 " >> /etc/hosts echo "172.18.112.18 node5 " >> /etc/hosts
4, Create a user and set up passwordless login
Create the user (run on all three machines)
useradd -d /home/admin -m admin
echo "123456" | passwd admin --stdin
# sudo permissions
echo "admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/admin
sudo chmod 0440 /etc/sudoers.d/admin
Set up passwordless login (run only on node3)
[root@node3 ~]# su - admin
[admin@node3 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/admin/.ssh/id_rsa):
Created directory '/home/admin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/admin/.ssh/id_rsa.
Your public key has been saved in /home/admin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qfWhuboKeoHQOOMLOIB5tjK1RPjgw/Csl4r6A1FiJYA admin@admin.ops5.bbdops.com
The key's randomart image is:
+---[RSA 2048]----+
|+o..             |
|E.+              |
|*%               |
|X+X .            |
|=@.+ S .         |
|X.* o + .        |
|oBo. . o .       |
|ooo. .           |
|+o....oo.        |
+----[SHA256]-----+
[admin@node3 ~]$ ssh-copy-id admin@node3
[admin@node3 ~]$ ssh-copy-id admin@node4
[admin@node3 ~]$ ssh-copy-id admin@node5
Note: if ssh-copy-id is not available, you can copy the public key to the corresponding machine manually:
cat ~/.ssh/id_*.pub | ssh admin@host3 'cat >> .ssh/authorized_keys'
5, Configure time synchronization
Run on all three machines:

[root@node3 ~]$ timedatectl                             # View local time
[root@node3 ~]$ timedatectl set-timezone Asia/Shanghai  # Change the time zone to Asia/Shanghai
[root@node3 ~]$ yum install -y chrony                   # Install the synchronization tool
[root@node3 ~]$ chronyc -n sources -v                   # List the sync sources
[root@node3 ~]$ chronyc tracking                        # Show the synchronization status
[root@node3 ~]$ timedatectl status                      # View local time again
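The commands above install chrony and inspect its state but do not explicitly start the daemon; a minimal sketch to make sure it runs and survives reboots (using the distribution's default NTP servers):

sudo systemctl enable chronyd
sudo systemctl start chronyd
# After a moment the source list should show one server marked with '*'
chronyc sources -v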
6, Install ceph-deploy and the ceph packages
Configure the ceph yum source
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF
Install ceph-deploy
[admin@node3 ~]$ sudo yum install ceph-deploy
Initialize the mon nodes
Ceph requires packages from the EPEL repository, so all nodes being installed need yum install epel-release.
[admin@node3 ~]$ mkdir my-cluster
[admin@node3 ~]$ cd my-cluster
# new
[admin@node3 my-cluster]$ ceph-deploy new node3 node4 node5
Traceback (most recent call last):
  File "/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources
# The above error is reported because pip is not installed; install pip
[admin@node3 my-cluster]$ sudo yum install epel-release
[admin@node3 my-cluster]$ sudo yum install python-pip
# Reinitialize
[admin@node3 my-cluster]$ ceph-deploy new node3 node4 node5
[admin@node3 my-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
[admin@node3 my-cluster]$ cat ceph.conf
[global]
fsid = 3a2a06c7-124f-4703-b798-88eb2950361e
mon_initial_members = node3, node4, node5
mon_host = 172.18.112.20,172.18.112.19,172.18.112.18
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
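As noted above, epel-release is needed on node4 and node5 as well; one way to cover them from node3 is sketched below (assuming the admin user's passwordless SSH and sudo configured earlier):

for host in node4 node5; do
    # epel-release must be in place before the ceph packages are installed on each node
    ssh admin@$host "sudo yum install -y epel-release"
done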
Modify ceph.conf and add the following configuration:
public network = 172.18.112.0/24
cluster network = 172.18.112.0/24
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128
osd pool default crush rule = 0
osd crush chooseleaf type = 1
max open files = 131072
ms bind ipv6 = false

[mon]
mon clock drift allowed = 10
mon clock drift warn backoff = 30
mon osd full ratio = .95
mon osd nearfull ratio = .85
mon osd down out interval = 600
mon osd report timeout = 300
mon allow pool delete = true

[osd]
osd recovery max active = 3
osd max backfills = 5
osd max scrubs = 2
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=1024
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
filestore max sync interval = 5
osd op threads = 2
Install the Ceph software on the specified nodes
[admin@node3 my-cluster]$ ceph-deploy install --no-adjust-repos node3 node4 node5
--no-adjust-repos makes ceph-deploy use the locally configured yum source directly instead of generating the official upstream source on each node.
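Because each node then keeps its own repo configuration, the ceph.repo created in step 6 also has to exist on node4 and node5; if it was only written on node3, it can be copied over roughly like this (a sketch, assuming the admin user's SSH and sudo access):

for host in node4 node5; do
    # Stage the repo file in /tmp, then move it into place with sudo
    scp /etc/yum.repos.d/ceph.repo admin@$host:/tmp/ceph.repo
    ssh admin@$host "sudo mv /tmp/ceph.repo /etc/yum.repos.d/ceph.repo"
done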
Deploy the initial monitors and get the keys
[admin@node3 my-cluster]$ ceph-deploy mon create-initial
After this step, you will see the following keyrings in the current directory:
[admin@node3 my-cluster]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                   ceph.mon.keyring
Copy the configuration file and key to each node of the cluster
The configuration file is the generated ceph.conf, and the key is ceph.client.admin.keyring, the default keyring the ceph client uses when connecting to the cluster. Both need to be copied to all nodes; the command is as follows.
[admin@node3 my-cluster]$ ceph-deploy admin node3 node4 node5
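A quick sanity check (a sketch) that the configuration file and admin keyring actually landed on every node:

for host in node3 node4 node5; do
    echo "== $host =="
    ssh admin@$host "ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring"
done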
7, Deploy ceph-mgr
# A manager daemon was added to Ceph in the L (Luminous) release; the following command deploys one
[admin@node3 my-cluster]$ ceph-deploy mgr create node3
8, Create OSDs
# Usage: ceph-deploy osd create --data {device} {ceph-node}
ceph-deploy osd create --data /dev/vdb node3
ceph-deploy osd create --data /dev/vdb node4
ceph-deploy osd create --data /dev/vdb node5
Check osd status
[admin@node3 my-cluster]$ sudo ceph health
HEALTH_OK
[admin@node3 my-cluster]$ sudo ceph -s
  cluster:
    id:     3a2a06c7-124f-4703-b798-88eb2950361e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node5,node4,node3
    mgr: node3(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 MiB
    usage:   3.2 GiB used, 597 GiB / 600 GiB avail
    pgs:
By default, the permission of the ceph.client.admin.keyring file is 600, and its owner and group are root. If the admin user runs the ceph command directly on a cluster node, it complains that /etc/ceph/ceph.client.admin.keyring cannot be found, because the file cannot be read with insufficient permissions.
The problem does not occur with sudo ceph, but for the convenience of running ceph directly the permission can be set to 644. Run the following command as the admin user on the cluster node.
[admin@node3 my-cluster]$ ceph -s
2021-12-28 07:59:36.062 7f52d08e0700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2021-12-28 07:59:36.062 7f52d08e0700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
[errno 2] error connecting to the cluster
[admin@node3 my-cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
[admin@node3 my-cluster]$ ceph -s
  cluster:
    id:     3a2a06c7-124f-4703-b798-88eb2950361e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node5,node4,node3
    mgr: node3(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 MiB
    usage:   3.2 GiB used, 597 GiB / 600 GiB avail
    pgs:
View osds
[admin@node3 my-cluster]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.58589 root default
-3       0.19530     host node3
 3   hdd 0.19530         osd.3      up  1.00000 1.00000
-5       0.19530     host node4
 4   hdd 0.19530         osd.4      up  1.00000 1.00000
-7       0.19530     host node5
 5   hdd 0.19530         osd.5      up  1.00000 1.00000
9, Enable the MGR monitoring module (dashboard)
Method 1: command line
ceph mgr module enable dashboard
If the above operation reports an error as follows:
Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement
This happens because ceph-mgr-dashboard is not installed; install it on the mgr node:
yum install ceph-mgr-dashboard
Method 2: configuration file
# Edit the ceph.conf file
vi ceph.conf

[mon]
mgr initial modules = dashboard

# Push the configuration
[admin@node3 my-cluster]$ ceph-deploy --overwrite-conf config push node3 node4 node5

# Restart mgr
sudo systemctl restart ceph-mgr@node3
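Whichever method is used, you can check afterwards whether the module ended up enabled; a sketch:

# "dashboard" should be listed under "enabled_modules" in the JSON output
sudo ceph mgr module ls | grep -A 10 enabled_modules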
Web login configuration

By default, all HTTP connections to the dashboard are protected using SSL/TLS.
# To quickly start and run the dashboard, you can generate and install a self-signed certificate using the following built-in command:
[root@node3 my-cluster]# ceph dashboard create-self-signed-cert
Self-signed certificate created
# To create a user with the administrator role:
[root@node3 my-cluster]# ceph dashboard set-login-credentials admin admin
Username and password updated
# To view the ceph-mgr services:
[root@node3 my-cluster]# ceph mgr services
{
    "dashboard": "https://node3:8443/"
}
After the above configuration is complete, open https://node3:8443 in a browser and log in with the user name admin and password admin. The machine running the browser must be able to resolve the host name node3 locally, as shown below.
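The name node3 normally only resolves inside the cluster, so the client needs its own hosts entry; a sketch for a Linux or macOS client (on Windows the file is C:\Windows\System32\drivers\etc\hosts):

# Run on the client machine that opens the browser, not on the cluster nodes
echo "172.18.112.20 node3" | sudo tee -a /etc/hosts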