Kubernetes: configuring Rook Ceph as back-end storage

Posted by hedge on Sun, 06 Mar 2022 15:15:34 +0100

I. Rook overview

1.1 Ceph introduction

Ceph is a highly scalable distributed storage solution that provides object, file, and block storage. Each storage node runs a file system on which Ceph stores its objects, together with a Ceph OSD (Object Storage Daemon) process. The cluster also runs Ceph MON (monitor) daemons, which keep the Ceph cluster highly available.
For a fuller introduction to Ceph, see: https://www.cnblogs.com/itzgr/category/1382602.html

1.2 Rook introduction

Rook is an open source cloud-native storage orchestrator: it provides the platform, framework, and support for integrating a variety of storage solutions natively with cloud-native environments. It is currently used mainly to provide file, block, and object storage services in cloud-native environments, implemented as self-managing, self-scaling, and self-healing distributed storage services.
Rook automates deployment, bootstrapping, configuration, provisioning, scaling up and down, upgrades, migration, disaster recovery, monitoring, and resource management. To achieve all of this, Rook relies on an underlying container orchestration platform such as Kubernetes.
Rook currently supports storage back ends built on Ceph, NFS, Minio Object Store, EdgeFS, Cassandra, and CockroachDB.
Rook mechanism:
Rook provides a volume plugin that extends the Kubernetes storage system; through the kubelet agent, Pods can mount block devices and file systems managed by Rook.
The Rook Operator starts and monitors the whole underlying storage system, such as the Ceph MON and Ceph OSD Pods, and at the same time manages the CRDs, object stores, and file systems.
A Rook Agent is deployed on every Kubernetes node and runs as a Pod. Each agent Pod is configured with a Flexvolume driver, which integrates with the Kubernetes volume control framework; node-local operations such as attaching storage devices, mounting, formatting, and deleting storage are carried out by the agent.
For more information, see the official sites:
https://rook.io
https://ceph.com/

II. Rook deployment

2.1 preliminary planning

This walkthrough assumes a Kubernetes cluster has already been created:

- Cluster version: v1.21.5
- Kernel requirements:
  - RBD: the rbd kernel module must be available (verify with lsmod | grep rbd)
  - CephFS: if you want to use CephFS, the minimum kernel version is 4.17
- Data disk on each node: /dev/sdb
- Host kernel: 5.4.182-1.el7.elrepo.x86_64
- Cluster nodes: estarhaohao-centos7-master01, estarhaohao-centos7-master02, estarhaohao-centos7-master03

All the files you need can be fetched from my Gitee repository: https://gitee.com/estarhaohao/rook.git. A minimal pre-flight check is sketched right below.
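As referenced above, here is a small pre-flight check (my own convenience script, not part of the Rook repo) that can be run on every node before deploying; it assumes the data disk is /dev/sdb as planned:

uname -r                             # kernel must be >= 4.17 if CephFS will be used
lsmod | grep rbd || modprobe rbd     # make sure the rbd kernel module is available
lsblk /dev/sdb                       # the data disk must exist...
wipefs /dev/sdb                      # ...and should report no existing signatures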

2.2 obtaining YAML

[root@estarhaohao-centos7-master01 ~]# git clone https://gitee.com/estarhaohao/rook.git

2.3 configuring node labels

[root@estarhaohao-centos7-master01 ~]# kubectl label nodes  {estarhaohao-centos7-master01,estarhaohao-centos7-master02,estarhaohao-centos7-master03} app.rook.role=csi-provisioner app.rook.plugin=csi app.rook=storage ceph-mon=enabled ceph-osd=enabled ceph-mgr=enabled
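These labels only take effect if something consumes them. Below is a hypothetical excerpt showing how the placement section of the CephCluster spec in cluster.yaml could use the app.rook=storage label; the exact placement block depends on your cluster.yaml, so treat this as an assumption rather than the shipped default:

placement:
  all:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app.rook          # label applied in the step above
            operator: In
            values:
            - storage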

2.4 deploy Rook Operator

[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f common.yaml
[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f crds.yaml  # create the Rook CRDs
[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f operator.yaml
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created
[root@estarhaohao-centos7-master01 ceph]# kubectl get pod -n rook-ceph
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-56496b9f8f-dblnq   1/1     Running   0          3m37s
rook-discover-2jp7z                   1/1     Running   0          2m53s
rook-discover-hqq27                   1/1     Running   0          2m53s
rook-discover-sx8c6                   1/1     Running   0          2m53s

With the operator and discover Pods running, you can create the Ceph cluster.

2.5 creating a cluster
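
The cluster itself is described by cluster.yaml. A heavily abridged sketch of the fields most worth reviewing before applying it (the image tag shown is a placeholder; keep whatever your cluster.yaml ships with):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15.2.8      # placeholder tag; use the repo's value
  dataDirHostPath: /var/lib/rook  # matches the cleanup path used below
  mon:
    count: 3                      # one mon per node on this three-node cluster
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: sdb             # consume only /dev/sdb, per the plan above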

[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f cluster.yaml
[root@estarhaohao-centos7-master01 ceph]# kubectl get pod -n rook-ceph
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-mwfg8                                            3/3     Running     0          5m53s
csi-cephfsplugin-provisioner-6446d9c9df-4r5xq                     6/6     Running     0          5m52s
csi-cephfsplugin-provisioner-6446d9c9df-rkd4k                     6/6     Running     0          5m52s
csi-cephfsplugin-vrlwm                                            3/3     Running     0          5m53s
csi-cephfsplugin-xfm8n                                            3/3     Running     0          5m53s
csi-rbdplugin-d87pk                                               3/3     Running     0          5m54s
csi-rbdplugin-k292p                                               3/3     Running     0          5m54s
csi-rbdplugin-provisioner-6998bd5986-j7729                        6/6     Running     0          5m53s
csi-rbdplugin-provisioner-6998bd5986-rp2wk                        6/6     Running     0          5m53s
csi-rbdplugin-r56c2                                               3/3     Running     0          5m54s
rook-ceph-crashcollector-estarhaohao-centos7-master01-564fhkv28   1/1     Running     0          4m7s
rook-ceph-crashcollector-estarhaohao-centos7-master02-547djvsw2   1/1     Running     0          3m18s
rook-ceph-crashcollector-estarhaohao-centos7-master03-787cdjq4b   1/1     Running     0          4m20s
rook-ceph-mgr-a-5bbf8f48d7-pdgkt                                  1/1     Running     0          3m51s
rook-ceph-mon-a-77d85f8944-56cgc                                  1/1     Running     0          5m59s
rook-ceph-mon-b-76d6564885-vxxhd                                  1/1     Running     0          5m30s
rook-ceph-mon-c-85858494c5-xjpf9                                  1/1     Running     0          4m7s
rook-ceph-operator-56496b9f8f-dblnq                               1/1     Running     0          9m53s
rook-ceph-osd-0-5c4f45d76-n6qc6                                   1/1     Running     0          3m24s
rook-ceph-osd-1-7f7f575577-v7lg5                                  1/1     Running     0          3m21s
rook-ceph-osd-2-5677f9d654-wzzzq                                  1/1     Running     0          3m18s
rook-ceph-osd-prepare-estarhaohao-centos7-master01-fvxq9          0/1     Completed   0          3m47s
rook-ceph-osd-prepare-estarhaohao-centos7-master02-x7swq          0/1     Completed   0          3m46s
rook-ceph-osd-prepare-estarhaohao-centos7-master03-9vhfc          0/1     Completed   0          3m45s
rook-discover-2jp7z                                               1/1     Running     0          9m9s
rook-discover-hqq27                                               1/1     Running     0          9m9s
rook-discover-sx8c6                                               1/1     Running     0          9m9s

Note: if the deployment fails, run kubectl delete -f ./ on the master node, then clean up every node as follows:

rm -rf /var/lib/rook                           # remove Rook's on-disk state
dmsetup ls                                     # list device-mapper devices; look for ceph-* entries
dmsetup remove_all                             # remove the /dev/mapper/ceph-* devices
dd if=/dev/zero of=/dev/sdb bs=512k count=1    # zero the start of the data disk
wipefs -af /dev/sdb                            # wipe any remaining filesystem signatures

2.6 deploy Toolbox

The toolbox is Rook's utility container: the commands inside it are used to debug and test Rook, and ad-hoc Ceph commands are generally run from this container.

[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f toolbox.yaml 
rook-ceph-tools-8574b74c5d-65x8r  1/1     Running     0          4s
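
Once the tools Pod is running you can open a shell in it and run standard Ceph CLI commands; for example (all of these are stock Ceph commands, shown here as suggestions):

kubectl -n rook-ceph exec -it rook-ceph-tools-8574b74c5d-65x8r -- bash
ceph status       # overall cluster health
ceph df           # raw and per-pool capacity usage
ceph osd status   # per-OSD state and utilization
rados df          # per-pool object counts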

2.7 test Rook Ceph

You can define a shell alias so you don't have to type the long exec command every time, for example:
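# A convenience alias (my own suggestion) wrapping the toolbox exec used below;
# after defining it, 'ceph-tools ceph -s' is equivalent to the full command
alias ceph-tools='kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") --'

The full form of the command looks like this: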
[root@estarhaohao-centos7-master01 ceph]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph -s
  cluster:
    id:     2fb51620-1a29-4d64-9ad9-616e6435924a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 28m)
    mgr: a(active, since 27m)
    mds: myfs:1 {0=myfs-a=up:active} 1 up:standby-replay
    osd: 3 osds: 3 up (since 27m), 3 in (since 27m)

  data:
    pools:   4 pools, 97 pgs
    objects: 30 objects, 49 KiB
    usage:   3.0 GiB used, 897 GiB / 900 GiB avail
    pgs:     97 active+clean

  io:
    client:   852 B/s rd, 1 op/s rd, 0 op/s wr
[root@estarhaohao-centos7-master01 ~]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME                              STATUS  REWEIGHT  PRI-AFF
-1         0.87900  root default
-3         0.29300      host estarhaohao-centos7-master01
 0    hdd  0.29300          osd.0                              up   1.00000  1.00000
-7         0.29300      host estarhaohao-centos7-master02
 2    hdd  0.29300          osd.2                              up   1.00000  1.00000
-5         0.29300      host estarhaohao-centos7-master03
 1    hdd  0.29300          osd.1                              up   1.00000  1.00000
So far everything looks fine.

III. Ceph block storage

3.1 create StorageClass

Before providing block storage, you need to create a StorageClass and a storage pool; Kubernetes needs these two resources to interact with Rook and allocate persistent volumes (PVs).
The configuration file applied below creates a storage pool named replicapool and a StorageClass named rook-ceph-block.
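For reference, the file is roughly the following, abridged from the upstream Rook example (the storageclass.yaml in the repo is authoritative; the CSI secret parameters are omitted here):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host        # spread replicas across hosts
  replicated:
    size: 3                  # three replicas, one per node
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  # ...provisioner/node secret parameters omitted; see the repo file
reclaimPolicy: Delete
allowVolumeExpansion: true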

[root@estarhaohao-centos7-master01 rbd]# pwd
/opt/rook/cluster/examples/kubernetes/ceph/csi/rbd
[root@estarhaohao-centos7-master01 rbd]# kubectl apply -f storageclass.yaml
[root@estarhaohao-centos7-master01 rbd]# kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   64m

3.2 test rbd

[root@estarhaohao-centos7-master01 rbd]# kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   4s
[root@estarhaohao-centos7-master01 rbd]# kubectl apply -f pod.yaml
pod/csirbd-demo-pod created
[root@estarhaohao-centos7-master01 rbd]# kubectl apply -f pvc.yaml
persistentvolumeclaim/rbd-pvc created
[root@estarhaohao-centos7-master01 rbd]# kubectl get pvc rbd-pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rbd-pvc   Bound    pvc-9f69bfab-a81b-41ea-93c7-59966661c867   1Gi        RWO            rook-ceph-block   5s
[root@estarhaohao-centos7-master01 rbd]# kubectl get pod csirbd-demo-pod
NAME              READY   STATUS    RESTARTS   AGE
csirbd-demo-pod   1/1     Running   0          70s
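
For reference, pvc.yaml is roughly the following (abridged from the upstream Rook example; the repo copy is authoritative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce            # RBD block volumes are single-node writers
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block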

The Pod is running and the PVC is bound, so RBD block storage is basically OK.

IV. Ceph file storage

4.1 create CephFilesystem

By default the Ceph cluster is deployed without CephFS support. The official filesystem.yaml applied below creates a CephFilesystem that backs file storage.
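Its core content is roughly the following (abridged from the upstream example; see filesystem.yaml in the repo for the full version):

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
  - replicated:
      size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true      # matches the standby-replay MDS seen in ceph -s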

[root@estarhaohao-centos7-master01 ceph]# pwd
/opt/rook/cluster/examples/kubernetes/ceph
[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f filesystem.yaml
[root@estarhaohao-centos7-master01 ceph]# kubectl get cephfilesystems.ceph.rook.io -n rook-ceph
NAME   ACTIVEMDS   AGE
myfs   1           55m

4.2 create cephfs storageclass

The official default storageclass.yaml below creates the StorageClass for file storage.
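Abridged from the upstream example (CSI secret parameters omitted; the repo copy is authoritative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs               # the CephFilesystem created above
  pool: myfs-data0           # its data pool
  # ...provisioner/node secret parameters omitted; see the repo file
reclaimPolicy: Delete
allowVolumeExpansion: true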

[root@estarhaohao-centos7-master01 cephfs]# pwd
/opt/rook/cluster/examples/kubernetes/ceph/csi/cephfs
[root@estarhaohao-centos7-master01 cephfs]# kubectl apply -f storageclass.yaml 
[root@estarhaohao-centos7-master01 cephfs]# kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   70m
rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   56m

4.3 test cephfs

[root@estarhaohao-centos7-master01 cephfs]# kubectl apply -f pvc.yaml
[root@estarhaohao-centos7-master01 cephfs]# kubectl apply -f pod.yaml
[root@estarhaohao-centos7-master01 cephfs]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-e0d04036-a37c-4544-b71f-ac53f79c7832   1Gi        RWO            rook-cephfs    57m
[root@estarhaohao-centos7-master01 cephfs]# kubectl get pod
NAME                 READY   STATUS    RESTARTS   AGE
csicephfs-demo-pod   1/1     Running   0          57m
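
To confirm that reads and writes actually go through the CephFS mount, you can write a file from inside the demo pod; /var/lib/www/html is the mount path used by the upstream pod.yaml (adjust if yours differs):

[root@estarhaohao-centos7-master01 cephfs]# kubectl exec -it csicephfs-demo-pod -- sh -c 'echo hello-cephfs > /var/lib/www/html/test.txt && cat /var/lib/www/html/test.txt'
hello-cephfs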
 

CephFS is basically OK.

V. Ceph object storage

5.1 creating CephObjectStore

Before providing object storage, you need to create the backing store itself; the official default object.yaml below deploys a CephObjectStore.
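Its core content is roughly the following (abridged from the upstream example; see object.yaml in the repo for the full version):

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:            # 2+1 erasure coding for the object data
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: true
  gateway:                   # the RGW instance seen in the pod list below
    port: 80
    instances: 1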

[root@estarhaohao-centos7-master01 ceph]# pwd
/opt/rook/cluster/examples/kubernetes/ceph
[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f object.yaml
[root@estarhaohao-centos7-master01 ceph]# kubectl get pod -n rook-ceph | grep rgw
rook-ceph-rgw-my-store-a-57dd44d5b-lkgfw                          1/1     Running     0          2m51s

5.2 create StorageClass

The object-bucket StorageClass comes from the default storageclass-bucket-delete.yaml and can be used as-is; a sketch of its content follows.
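Abridged from the upstream example (the provisioner name matches the kubectl get sc output below):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-delete-bucket
provisioner: rook-ceph.ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store        # the CephObjectStore created above
  objectStoreNamespace: rook-ceph
  region: us-east-1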

[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f storageclass-bucket-delete.yaml
storageclass.storage.k8s.io/rook-ceph-delete-bucket created
[root@estarhaohao-centos7-master01 ceph]# kubectl get sc
NAME                      PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block           rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   85m
rook-ceph-delete-bucket   rook-ceph.ceph.rook.io/bucket   Delete          Immediate           false                  5s
rook-cephfs               rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   72m

5.3 creating a bucket

The bucket itself is requested with the official default object-bucket-claim-delete.yaml; abridged from the upstream example, it looks roughly like this:
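apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  generateBucketName: ceph-bkt             # bucket name prefix
  storageClassName: rook-ceph-delete-bucket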
[root@estarhaohao-centos7-master01 ceph]# kubectl create -f object-bucket-claim-delete.yaml
To be determined.....

Topics: Ceph