Deploy ceph cluster

Posted by bugz-2849 on Thu, 27 Feb 2020 11:14:42 +0100

Environment preparation

Deployment environment
System version: CentOS Linux release 7.4.1708 (Core)
ceph version: 12.2.13 (luminous)
Hardware configuration: 5 VMs, each with 1 core and 1 GB of memory. Every node that will act as an osd must have at least one spare disk attached.

Server role

host name    role
admin        admin
node1        mon, mgr, osd
node2        osd
node3        osd


  1. Assign fixed IP addresses (all nodes)
  2. Set host names and make all nodes resolve one another by name (all nodes, root user), for example as sketched below
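    A minimal sketch of this step; the 192.168.0.x addresses are placeholders, substitute the real ones for your network:
    hostnamectl set-hostname node1    # repeat on each node with its own name
    cat /etc/hosts                    # add these lines on every node
    192.168.0.10  admin
    192.168.0.11  node1
    192.168.0.12  node2
    192.168.0.13  node3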
  3. Create user (all nodes, root user)
    Do the following on all nodes:
    1) Create user name: cephu, set password:
    useradd cephu
    passwd  cephu
    2) Modify the sudoers file with visudo, otherwise later commands will fail with "cephu is not in the sudoers file" errors.
    Add the following line to the /etc/sudoers file:
    cephu  ALL=(ALL) ALL
    3) Switch to the cephu user and grant it passwordless root privileges:
    echo "cephu ALL=(root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephu
    sudo  chmod 0440 /etc/sudoers.d/cephu
  4. Set up passwordless ssh login (admin node)
    1) Generate an ssh key pair as the cephu user (accept the defaults):
    ssh-keygen -t rsa
    2) Copy the generated public key to each ceph node, still as the cephu user:
    ssh-copy-id cephu@node1
    ssh-copy-id cephu@node2
    ssh-copy-id cephu@node3
    3) As the root user, create the ~/.ssh/config configuration file with the following settings:
    cat .ssh/config 
    Host node1
        Hostname node1
        User cephu
    Host node2
        Hostname node2
        User cephu
    Host node3
        Hostname node3
        User cephu
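    If ssh later refuses to use this file with a "Bad owner or permissions" error, tightening its mode is the usual fix (an extra step, not in the original article):
    chmod 600 ~/.ssh/config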
  5. Add the download source and install ceph-deploy (admin node, root user)
    1) Add the ceph source:
    cat /etc/yum.repos.d/ceph.repo 
    name=Ceph noarch packages
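    The repo file shown above is incomplete; a typical ceph.repo for the luminous noarch packages looks roughly like the sketch below. The download.ceph.com baseurl and gpgkey are assumptions taken from the upstream defaults, and a local mirror can be substituted:
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc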
    2) Update the source and install ceph-deploy:
    yum makecache
    sudo yum update
    vim /etc/yum.conf
    yum install ceph-deploy -y
  6. Turn off selinux and the firewall (all nodes), for example as sketched below:
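    A minimal sketch of doing both, assuming firewalld and the default selinux configuration file location:
    sudo systemctl stop firewalld && sudo systemctl disable firewalld
    sudo setenforce 0                                                          # disable selinux for the running system
    sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # make it persistent across reboots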
  7. Install ntp (all nodes, time must be synchronized)
    Pick one machine as the ntp time server; all other nodes act as ntp clients and synchronize with it. Here the admin node is the ntp server.
    yum install -y ntp
    vim /etc/ntp.conf //Comment out the four existing "server" lines and add the following two lines
    server 127.127.1.0     # local clock
    fudge 127.127.1.0 stratum 10
    systemctl start ntpd 
    systemctl status ntpd  //Confirm that the ntp service is running
    All other nodes:
    yum install -y ntpdate
    ntpdate admin
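    To confirm a client can actually reach the admin node's ntp server before relying on it, a query-only run helps (an extra check, not in the original article):
    ntpdate -q admin    # prints the offset without setting the clock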

Deploy ceph cluster

Unless otherwise stated, all of the following operations are performed on the admin node as the cephu user.

  1. Create ceph operation directory

    mkdir my-cluster	//Do not use sudo here
    cd my-cluster	//From now on, all ceph-deploy commands must be run in this directory
  2. Create clusters

    ceph-deploy new node1
    //Three files will be created successfully: ceph.conf, ceph.mon.keyring, and a log file
    //May report an error
    	[cephu@master]$ ceph-deploy new node1
    Traceback (most recent call last):
      File "/bin/ceph-deploy", line 18, in <module>
        from ceph_deploy.cli import main
      File "/usr/lib/python2.7/site-packages/ceph_deploy/", line 1, in <module>
        import pkg_resources
    ImportError: No module named pkg_resources
    //Fix: the error means python setuptools is missing; unpack the distribute source package, then:
    cd distribute-0.7.3/
    sudo python setup.py install
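    If the distribute source archive is not at hand, an alternative fix (an assumption, not from the original article) is installing setuptools directly from yum:
    sudo yum install -y python-setuptools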
  3. Install luminous (12.2.13)
    Install the main ceph and ceph-radosgw packages on node1, node2 and node3:

    ceph-deploy install --release luminous node1 node2 node3
    	If the installation fails on a node, install manually on that node with: sudo yum install ceph ceph-radosgw -y
    Verify the installation: confirm that the version reported on node1, node2 and node3 is 12.2.13:
    ceph --version
  4. Initialize mon

     ceph-deploy mon create-initial
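     This step gathers the admin and bootstrap keyrings into the working directory; a quick sanity check (an extra verification, not in the original article):
     ls *.keyring    # expect ceph.client.admin.keyring plus the ceph.bootstrap-* keyrings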
  5. Push the configuration file and admin key to each node so the ceph command can be used there without specifying the monitor address and keyring

     ceph-deploy admin node1 node2 node3
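     If ceph commands run as cephu on those nodes later fail with a permission error on the admin keyring, the same read-permission fix applied to the client further down also works here (an extra step, not in the original article), run on the affected node:
     sudo chmod +r /etc/ceph/ceph.client.admin.keyring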
  6. Install ceph-mgr: required only from luminous onwards, in preparation for using the dashboard

     ceph-deploy mgr create node1
  7. Add osds

    ceph-deploy osd create --data /dev/sdb node1
    ceph-deploy osd create --data /dev/sdb node2
    ceph-deploy osd create --data /dev/sdb node3

    View cluster status

    ssh node1 sudo ceph -s
     If "health" is displayed, three OSD UPS succeed, as shown in the following figure

Dashboard configuration, operating on node1

Note: install ceph-mgr on the same host as ceph-mon, and preferably run only one ceph-mgr.

  1. Create management domain key

    sudo ceph auth get-or-create mgr.node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
  2. Start the ceph-mgr management domain

    sudo ceph-mgr -i node1
  3. View the status of ceph

    sudo ceph status
     Confirm that the mgr status is active (node1).

  4. Open the dashboard module

    sudo ceph mgr module enable dashboard
  5. Bind the dashboard to the IP address of the ceph-mgr node on which the module is enabled

    sudo ceph config-key set mgr/dashboard/node1/server_addr <node1-ip>
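    The listening port can be set the same way if the default 7000 needs changing; the per-instance server_port key below is an assumption based on the luminous dashboard configuration keys, not a command from the original article:
    sudo ceph config-key set mgr/dashboard/node1/server_port 7000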
  6. Web login
    Enter the mgr address and port in the browser address bar: http://<node1-ip>:7000

Configure the client to use rbd

Note: before creating a block device, you need to create a storage pool. The storage pool commands need to be executed on the mon node.

  1. Create storage pool

    sudo ceph osd pool create rbd 128 128
  2. Initialize storage pool

    sudo rbd pool init rbd
  3. Prepare the client host
    A separate CentOS 7 host is used as the client; its host name is client. Modify the hosts files so that the client and the admin node can resolve each other by host name.
    1) Upgrade the client kernel to 4.x
    Before the update, the kernel version was

    uname -r  

    1.1) import key

    rpm --import 

    1.2) install the yum source of elrepo

    rpm -Uvh

    1.3) install kernel

    yum --enablerepo=elrepo-kernel install  kernel-lt-devel kernel-lt 
    4.4.214 installed

    2) View default startup order

    awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
    CentOS Linux (4.4.214-1.el7.elrepo.x86_64) 7 (Core)
    CentOS Linux (3.10.0-693.el7.x86_64) 7 (Core)
    CentOS Linux (0-rescue-ac28ee6c2ea4411f853295634d33bdd2) 7 (Core)
    The default boot order counts from 0 and a newly installed kernel is inserted at the top, so the new 4.4.214 kernel is now entry 0 and the old 3.10 kernel has moved to entry 1; select entry 0.
    grub2-set-default 0
    Then reboot (sudo reboot) and confirm that the new kernel version is in use:
    uname -r  

    3) Remove old kernel

    yum remove kernel 
  4. Install ceph for client
    1) Create user name: cephu, set password:

    useradd cephu
    passwd  cephu

    2) Modify the sudoers file with visudo, otherwise later commands will fail with "cephu is not in the sudoers file" errors.

    Add the following line to the /etc/sudoers file:
    cephu  ALL=(ALL) ALL

    3) Switch to the cephu user and grant it passwordless root privileges:

    echo "cephu ALL=(root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephu
    sudo  chmod 0440 /etc/sudoers.d/cephu

    4) Install Python setuptools

    yum  -y install python-setuptools

    5) Configure the client firewall to allow ceph (or simply turn the firewall off):

    sudo firewall-cmd --zone=public --add-service=ceph --permanent
    sudo firewall-cmd --reload

    6) On the admin node, push the configuration file and admin key to the client so the ceph command can be used there:

    ceph-deploy admin client

    7) On the client, make the admin keyring readable:

    sudo chmod +r /etc/ceph/ceph.client.admin.keyring

    8) Modify the ceph configuration file on the client; this step avoids problems when mapping the image later

    sudo vi /etc/ceph/ceph.conf   //add the following under the [global] section:
        rbd_default_features = 1
  5. Create the block device image on the client node (the size unit is MB; 4096 MB = 4 GB here)

    rbd create foo --size 4096
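    To confirm the image was created and to inspect its parameters, the standard rbd listing commands can be used (a quick verification, not part of the original steps):
    sudo rbd ls            # should list foo
    sudo rbd info foo      # shows size, object size and enabled features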
  6. Map the image to the host on the client node

    sudo rbd map foo --name client.admin
    //Possible error:
    [cephu@client ~]$ sudo rbd map foo --name client.admin
    rbd: sysfs write failed
    In some cases useful info is found in syslog - try "dmesg | tail".
    rbd: map failed: (110) Connection timed out
    //Fix: lower the crush tunables profile to hammer
    [cephu@client ~]$ sudo ceph osd crush tunables hammer
    adjusted tunables profile to hammer
    //Then run the map again
    [cephu@client ~]$ sudo rbd map docker_test --name client.admin
    //That means success
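    An alternative to lowering the crush tunables is to disable the rbd image features that older client kernels cannot handle. This is a common workaround rather than something from the original walkthrough, and it is only relevant if the image was created without the rbd_default_features = 1 setting above:
    sudo rbd feature disable foo exclusive-lock object-map fast-diff deep-flatten
    sudo rbd map foo --name client.admin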
  7. Format the block device on the client node

    sudo mkfs.ext4 -m 0 /dev/rbd/rbd/foo
  8. Mount the block device on the client node

    sudo mkdir /mnt/ceph-block-device
    sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
    cd /mnt/ceph-block-device
    After the client is rebooted, the device must be mapped again before mounting, otherwise the mount may hang; see the sketch below for making this automatic.
     The dashboard at http://<node1-ip>:7000 can be visited again to check the cluster status.
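    One way to remap the image automatically at boot is the rbdmap service shipped with ceph-common; the sketch below assumes the default /etc/ceph/rbdmap path and the admin keyring used earlier. The mount itself can then go into /etc/fstab with the noauto and _netdev options, or simply be repeated by hand after boot.
    # tell the rbdmap service which images to map at boot (pool/image  credentials)
    echo "rbd/foo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" | sudo tee -a /etc/ceph/rbdmap
    sudo systemctl enable rbdmap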

Topics: Ceph sudo osd yum