1. Install ntp
We recommend installing NTP services on all Ceph nodes (especially the Ceph Monitor nodes) to avoid failures caused by clock drift. For details, see Clock.
sudo yum install ntp ntpdate ntp-doc
// or: yum -y install ntpdate ntp
vim /etc/ntp.conf
server ntp1.aliyun.com iburst
systemctl restart ntpd
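Once ntpd has restarted, it is worth confirming that the node is actually synchronizing. A minimal check, assuming the standard tools shipped with the ntp package and systemd:

ntpq -p        # lists configured peers; a leading "*" marks the server currently synced to
timedatectl    # "NTP synchronized: yes" indicates the clock is being disciplined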
2. Install SSH server
Perform the following steps on all Ceph nodes:
sudo yum install openssh-server
3. Turn off the firewall and selinux, and configure the hosts file
- Turn off firewall
systemctl stop firewalld
systemctl disable firewalld
- Turn off selinux
setenforce 0
vim /etc/selinux/config
// Set SELINUX=disabled so the change survives a reboot
- Configure hosts file
vim /etc/hosts
192.168.0.88 controller
192.168.0.197 node1
192.168.0.245 node2
192.168.0.148 node3
- Modify the host name (run on each node with that node's own name)
hostnamectl set-hostname node0001
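A quick way to confirm the settings in this step took effect, sketched with standard CentOS 7 commands:

getenforce                      # should print Permissive (or Disabled after a reboot)
systemctl is-active firewalld   # should print inactive
ping -c 1 node1                 # the names from /etc/hosts should resolve
hostnamectl status              # shows the new static hostname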
4. Create new users at each Ceph node
//Create account
sudo useradd -d /home/ceph-admin -m ceph-admin
//Change password
sudo passwd ceph-admin
echo "ceph-admin" | passwd --stdin ceph-admin
//Ensure that the newly created user on each Ceph node has sudo permission
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph-admin
chmod 0440 /etc/sudoers.d/ceph-admin
//Verify
cat /etc/sudoers.d/ceph-admin
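Since every Ceph node needs the same user, the commands above can also be pushed out from the management node over SSH. A sketch, assuming root SSH access to the node names defined in /etc/hosts:

for host in node1 node2 node3; do
  ssh root@$host 'useradd -d /home/ceph-admin -m ceph-admin;
    echo "ceph-admin" | passwd --stdin ceph-admin;
    echo "ceph-admin ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph-admin;
    chmod 0440 /etc/sudoers.d/ceph-admin'
done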
5. Allow password-free SSH login
Because ceph-deploy does not support entering passwords, you must generate an SSH key on the management node and distribute the public key to each Ceph node. ceph-deploy attempts to generate SSH keys for the initial monitors.
- Generate an SSH key pair, but do not use sudo or root. When prompted with "Enter passphrase", just press Enter so the passphrase is empty:
ssh-keygen
Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
- Copy the public key to each Ceph node
ssh-copy-id ceph-admin@controller
ssh-copy-id ceph-admin@node1
ssh-copy-id ceph-admin@node2
ssh-copy-id ceph-admin@node3
- Configure sudo so that it does not require a TTY (on the control node)
sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers
- (Recommended practice) Modify the ~/.ssh/config file on the ceph-deploy management node so that ceph-deploy can log in to the Ceph nodes with the user name you created, without having to specify --username {username} every time you run ceph-deploy. This also simplifies the use of ssh and scp. Replace {username} with the user name you created.
Host node1
    Hostname node1
    User {username}
Host node2
    Hostname node2
    User {username}
Host node3
    Hostname node3
    User {username}
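Before continuing, it is worth verifying that both the key-based login and the passwordless sudo actually work from the management node; a minimal check using the user created above:

ssh ceph-admin@node1 'sudo whoami'   # should print "root" without prompting for any password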
6. Install the ceph-deploy tool on the management node
- Add the yum configuration file (each node needs to add the yum source)
Source for the Luminous release:
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-luminous/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc

Source for the Jewel release:
yum clean all
rm -rf /etc/yum.repos.d/*.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base.repo
vim /etc/yum.repos.d/ceph.repo
//Add the following:
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
- Update the software source and install the ceph-deploy management tool
[root@ceph01 ~]# yum clean all && yum list
[root@ceph01 ~]# yum -y install ceph-deploy
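A quick sanity check that the tool is installed and on the PATH:

ceph-deploy --version   # prints the installed ceph-deploy version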
7. Create monitor service
mkdir my-cluster && cd my-cluster
# mon is installed on the node1 node
ceph-deploy new node1
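ceph-deploy new writes its output into the current working directory; if it succeeded, a listing should show roughly the following files (names as produced by a typical ceph-deploy run):

ls
# ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring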
8. Modify the number of replicas
[ceph-admin@controller ceph]# vim ceph.conf
//Change the default number of replicas in the configuration file from 3 to 2, so that the cluster can reach the active+clean state with only two OSDs. Add the osd_pool_default_size line to the [global] section (optional configuration):
[global]
fsid = c255a5ad-b772-402a-bd81-09738693fe10
mon_initial_members = node1
mon_host = 192.168.0.197
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
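If ceph.conf is edited again after it has already been distributed to the nodes, the updated file needs to be pushed out; a sketch using the standard ceph-deploy subcommand, run from the my-cluster directory:

ceph-deploy --overwrite-conf config push node1 node2 node3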
9. Install ceph on all nodes
- Install Ceph
ceph-deploy install node1 node2 node3
- Install the Ceph monitor
ceph-deploy mon create node1
- Collect the node's keyring files
ceph-deploy gatherkeys node1
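After gatherkeys, the bootstrap and admin keyrings should be sitting in the working directory; a quick check (file names as produced by a typical jewel-era ceph-deploy run):

ls *.keyring
# ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring
# ceph.client.admin.keyring   ceph.mon.keyring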
10. Deploy osd service
- Format the data disk
mkfs.xfs -f /dev/sdb
- Mount
mkdir -p /var/local/osd0
mount /dev/sdb /var/local/osd0/
//To unmount:
fuser -km /dev/sdb
umount /dev/sdb
//Auto mount on boot:
vim /etc/fstab
/dev/sdb /var/local/osd0 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
- Add permissions to /var/local/osd0/, /var/local/osd1/ and /var/local/osd2/ on the corresponding nodes
chmod 777 -R /var/local/osd1/
- Create and activate the OSDs
ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1 node3:/var/local/osd2
//Activation
ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1 node3:/var/local/osd2
//If the configuration has changed, overwrite it during prepare:
ceph-deploy --overwrite-conf osd prepare node1:/var/local/osd0 node2:/var/local/osd1 node3:/var/local/osd2
- View the state
//Unified configuration (use ceph-deploy to copy the configuration file and admin key to all nodes, so that you do not need to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph command)
ceph-deploy admin node1 node2 node3
ceph-deploy osd list node1 node2 node3
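At this point the cluster should be usable from any node that received the admin keyring. A hedged check; the chmod is a commonly needed extra step on jewel, because ceph-deploy installs the keyring readable by root only:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph -s          # overall cluster status; should eventually report HEALTH_OK
ceph osd tree    # the three OSDs should show up with status "up"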
11. Other operations
Deleting a Ceph OSD
If you want to delete an OSD (whether it is in the up state or the down state):
A) If the OSD is in the up state, first bring it down by executing ceph osd down osd.{id} (where {id} is the OSD's numeric id).
B) If the OSD is in the down state, execute ceph osd out osd.{id} directly to mark the OSD as out.
C) Then execute ceph osd rm osd.{id} to remove the OSD.
D) Next, delete the OSD's entry from the CRUSH map by executing ceph osd crush rm osd.{id}.
E) Finally, delete the OSD's auth key with ceph auth del osd.{id}.
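Putting those steps together for a hypothetical OSD with id 2 (a sketch that follows the order given above; run it from a node that has the admin keyring):

ceph osd down osd.2       # mark the daemon down if it is still up
ceph osd out osd.2        # mark it out so data rebalances away from it
ceph osd rm osd.2         # remove the OSD from the cluster
ceph osd crush rm osd.2   # remove its entry from the CRUSH map
ceph auth del osd.2       # delete its authentication key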