Detailed explanation of RAID configuration

Posted by phpmaven on Thu, 23 Dec 2021 06:43:12 +0100

Preface

RAID combines multiple independent physical hard disks in different ways into one hard disk group (a logical disk), providing higher storage performance than a single disk together with data redundancy

1, RAID disk array introduction

  • RAID is the abbreviation of Redundant Array of Independent Disks

  • The different ways in which disk arrays are composed are called RAID levels

  • Common RAID levels:
    RAID 0, RAID 1, RAID 5, RAID 6, RAID 1+0 (RAID 10)

1.1 Introduction to RAID 0 disk array

  • RAID 0 splits data into consecutive stripes (bits, bytes or blocks) and reads/writes them across multiple disks in parallel, so it offers a high data transfer rate but no data redundancy

  • RAID 0 only improves performance; it does not protect the data, and the failure of any one disk makes all the data in the array unavailable

  • RAID 0 cannot be used where high data reliability is required

Disk space utilization: 100%, so the cost per usable gigabyte is the lowest.
Read performance: roughly N times that of a single disk.
Write performance: roughly N times that of a single disk.
Redundancy: none. Damage to any disk makes the data unavailable.

1.2 Introduction to RAID 1 disk array

  • Data redundancy is achieved through disk mirroring: mutually backed-up copies of the data are kept on pairs of independent disks

  • When the original disk is busy, data can be read directly from the mirrored copy, so RAID 1 can improve read performance

  • RAID 1 has the highest cost per usable gigabyte of any RAID level, but it provides high data security and availability. When a disk fails, the system can automatically switch to the mirror disk for reading and writing without having to reconstruct the failed data

Disk space utilization: 50%, so the cost per usable gigabyte is the highest.
Read performance: a read can be served from one disk of the pair, so it is at best that of the faster disk.
Write performance: every write must go to both disks. Although the two writes happen in parallel, write performance is roughly that of a single disk because a write is not complete until the slower disk finishes.
Redundancy: as long as one disk of each mirrored pair survives, the array keeps working; in the extreme case it can tolerate the loss of half of its disks.

1.3 Introduction to RAID 5 disk array

  • N (N >= 3) disks form an array. Each stripe of data is split into N-1 data blocks plus one parity block, and these N blocks are distributed evenly across the N disks in a rotating pattern

  • All N disks read and write at the same time, so read performance is very high; because of the parity calculation, write performance is comparatively low

  • (N-1)/N disk utilization

  • High reliability: one disk may fail without any data being lost

Disk space utilization: (N-1)/N, that is, only the equivalent of one disk is spent on parity.
Read performance: (N-1) times that of a single disk, close to RAID 0.
Write performance: worse than a single disk for small random writes, because each such write must read the old data and old parity, compute the new parity, and then write the new data and new parity (the classic read-modify-write penalty of four I/Os).
Redundancy: at most one disk may fail.
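
RAID 5 parity is normally the bitwise XOR of the data blocks in a stripe, which is exactly why any single missing block can be rebuilt from the surviving blocks. A tiny illustration in bash, using made-up byte values rather than real disk data:

D1=0xA5; D2=0x3C          #two data blocks of one stripe (hypothetical values)
P=$(( D1 ^ D2 ))          #the parity block stored on the third disk
printf 'parity     = 0x%02X\n' "$P"
printf 'recover D1 = 0x%02X\n' $(( P ^ D2 ))   #XOR the parity with the surviving block to rebuild the lost one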

1.4 Introduction to RAID 6 disk array

  • N (N >= 4) disks form an array, with (N-2)/N disk utilization

  • Compared with RAID 5, RAID 6 adds a second independent block of parity information

  • The two independent parity blocks use different algorithms, so the data remains usable even if two disks fail at the same time

  • Compared with RAID 5 it incurs an even larger write penalty, so its write performance is poorer

1.5 Introduction to RAID 1+0 disk array

  • N disks (N even, N >= 4) are first mirrored in pairs, and the mirrored pairs are then striped together as a RAID 0

  • N/2 disk utilization

  • N/2 disks can be written at the same time, and all N disks can be read at the same time

  • High performance and reliability

Although RAID 10 wastes 50% of the raw disk space, it roughly doubles throughput and tolerates the loss of a disk: data stays safe as long as the two disks of any mirrored pair do not fail together, so if one disk is broken the whole logical disk still works normally
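
To make the utilization figures above concrete, here is a minimal bash sketch that computes the usable capacity of each level from N equal-sized disks; N and DISK_GB are example values, not the disks used in the lab below:

N=4          #number of disks
DISK_GB=20   #size of each disk in GB
echo "RAID 0 : $(( N * DISK_GB )) GB   (no redundancy)"
echo "RAID 1 : $(( N / 2 * DISK_GB )) GB   (mirrored pairs, 50% utilization)"
echo "RAID 5 : $(( (N - 1) * DISK_GB )) GB   (one disk's worth of parity)"
echo "RAID 6 : $(( (N - 2) * DISK_GB )) GB   (two disks' worth of parity)"
echo "RAID 10: $(( N / 2 * DISK_GB )) GB   (mirror + stripe, 50% utilization)"

With 4 disks of 20 GB each this prints 80, 40, 60, 40 and 40 GB respectively.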

2, Array card introduction

  • An array card (RAID controller) is a board used to implement the RAID functions

  • It usually consists of a series of components such as an I/O processor, a hard disk controller, hard disk connectors and cache

  • Different RAID cards support different RAID functions
    For example, RAID 0, RAID 1, RAID 5, RAID 10, etc.

  • Interface types of RAID cards
    IDE, SCSI, SATA and SAS

2.1 cache of array card

  • The cache is where the RAID card exchanges data with the external bus: the RAID card first transfers data into the cache, and the cache then exchanges the data with the external data bus

  • The size and speed of cache are important factors directly related to the actual transmission speed of RAID card

  • Different RAID cards ship with different cache capacities, generally ranging from a few megabytes to hundreds of megabytes

3, Building a soft RAID disk array case

Requirement Description:

  • Add 4 SCSI hard disks to the Linux server
  • Use the mdadm package to build a RAID 5 disk array and improve the performance and reliability of disk storage

Steps

  1. Install mdadm

  2. Prepare partitions for the RAID array: add four SCSI hard disks to the Linux server and use the fdisk tool to create a 20GB partition on each of them, i.e. /dev/sdb1, /dev/sdc1, /dev/sdd1 and /dev/sde1
    Change their type ID to "fd", corresponding to "Linux raid autodetect", which marks them as RAID member partitions

  3. Create the RAID device and build a file system on it

  4. Mount and use the file system

3.1 Check whether the mdadm package is installed

[root@localhost ~]# rpm -q mdadm
mdadm-4.0-5.el7.x86_64

It is generally installed by default. If it is not, install it with:
yum install -y mdadm

3.2 Partition the disks using the fdisk tool

Create a primary partition (sdb1, sdc1, sdd1 and sde1) on each of the new disk devices /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde, and change the ID tag of the partition type to "fd"

fdisk /dev/sdb
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
#Create the primary partitions sdb1, sdc1, sdd1 and sde1, and change the ID tag of the partition type to "fd"
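
If you would rather not answer the interactive fdisk prompts by hand four times, the same steps can be scripted. This is only a sketch and assumes a classic fdisk that accepts the answers n / p / 1 / default / default / t / fd / w in that order, so adjust it if your fdisk version prompts differently:

for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    #n = new partition, p = primary, 1 = partition number,
    #two empty answers accept the default first and last sector,
    #t + fd = set the type to "Linux raid autodetect", w = write and quit
    printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk "$disk"
done

Either way, verify the result with fdisk -l: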

[root@localhost ~]# fdisk -l

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xe2dcc4e9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x2167955e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect

Disk /dev/sda: 64.4 GB, 64424509440 bytes, 125829120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009b938

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048     9414655     4194304   82  Linux swap / Solaris
/dev/sda3         9414656   125829119    58207232   83  Linux

Disk /dev/sde: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xab6a3c2d

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect

Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xfb0dd633

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect

3.3 Creating the RAID device

Create the RAID 5 array:

[root@localhost ~]# mdadm -C -v /dev/md5 -l5 -n3 /dev/sd[b-d]1 -x1 /dev/sde1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

-C: create a new array.
-v: display details of the creation process.
/dev/md5: the name of the RAID 5 device to create.
-a yes: short for --auto; automatically create the device file if it does not exist (may be omitted).
-l: specify the RAID level; -l5 means create a RAID 5 array.
-n: specify how many disks are used to build the array; -n3 means the array is built from 3 disks.
/dev/sd[b-d]1: the three disk partitions used to build the array.
-x: specify the number of hot spare disks; -x1 means one spare disk is kept in reserve.
/dev/sde1: the disk partition used as the hot spare.
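
While the array is being initialized (and later while it rebuilds), its state and progress can also be watched through /proc/mdstat, the standard status interface of the Linux md driver:

[root@localhost ~]# cat /proc/mdstat
#Shows every md array, its member devices and any resync/rebuild progress
[root@localhost ~]# watch -n 5 cat /proc/mdstat
#Refresh the view every 5 seconds until the initial build finishes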

3.4 Creating and mounting the file system

[root@localhost ~]# mkfs.xfs /dev/md5
#Format the RAID device with an XFS file system
meta-data=/dev/md5               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@localhost ~]# mkdir /opt/lz_md5
#Create the mount point /opt/lz_md5
[root@localhost ~]# vim /etc/fstab 
/dev/md5             /opt/lz_md5     xfs   defaults     0 0
#Add this line to configure a permanent mount

[root@localhost ~]# mount -a
#Mount everything listed in /etc/fstab
[root@localhost ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda3      xfs        56G  3.8G   52G    7% /
devtmpfs       devtmpfs  898M     0  898M    0% /dev
tmpfs          tmpfs     912M     0  912M    0% /dev/shm
tmpfs          tmpfs     912M  9.1M  903M    1% /run
tmpfs          tmpfs     912M     0  912M    0% /sys/fs/cgroup
/dev/sda1      xfs       497M  167M  331M   34% /boot
tmpfs          tmpfs     183M   12K  183M    1% /run/user/42
tmpfs          tmpfs     183M     0  183M    0% /run/user/0
/dev/md5       xfs        40G   33M   40G    1% /opt/lz_md5

[root@localhost ~]# mdadm -D /dev/md5
#View RAID disk details
/dev/md5:
           Version : 1.2
     Creation Time : Thu Aug 12 16:37:27 2021
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Aug 12 16:50:28 2021
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 0f24eb68:2f549841:4a3ddd16:5208fd7f
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       3       8       65        -      spare   /dev/sde1
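
Before simulating a failure in the next step, it is worth (optionally, this is not part of the original walkthrough) copying some test data onto the mounted file system so you can confirm afterwards that it survived the rebuild:

[root@localhost ~]# cp /etc/passwd /opt/lz_md5/
#Put a small sample file on the array
[root@localhost ~]# md5sum /opt/lz_md5/passwd
#Record its checksum now and compare it again after the rebuild completes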

3.5 Fault recovery

[root@localhost /]# mdadm /dev/md5 -f /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md5
#Simulate a failure of /dev/sdc1
[root@localhost /]# mdadm -D /dev/md5
#The hot spare sde1 has taken over for sdc1 and is rebuilding
/dev/md5:
           Version : 1.2
     Creation Time : Thu Aug 12 16:37:27 2021
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Aug 12 17:01:27 2021
             State : clean, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 7% complete

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 0f24eb68:2f549841:4a3ddd16:5208fd7f
            Events : 21

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       3       8       65        1      spare rebuilding   /dev/sde1
       4       8       49        2      active sync   /dev/sdd1

       1       8       33        -      faulty   /dev/sdc1
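
Once the rebuild finishes, a natural follow-up (not shown above) is to remove the failed member and, after replacing the physical disk, add the new partition back as a hot spare; mdadm uses -r to remove and -a to add:

[root@localhost /]# mdadm /dev/md5 -r /dev/sdc1
#Remove the faulty partition from the array
[root@localhost /]# mdadm /dev/md5 -a /dev/sdc1
#Add the (replaced) partition back; it becomes the new hot spare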

3.6 Create the /etc/mdadm.conf configuration file

This makes it convenient to manage the soft RAID configuration, for example to stop and start the array

[root@localhost ~]# echo ' DEVICE /dev/sdc1 /dev/sdb1 /dev/sdd1 /dev/sde1' > /etc/mdadm.conf

[root@localhost ~]# cat /etc/mdadm.conf 
 DEVICE /dev/sdc1 /dev/sdb1 /dev/sdd1 /dev/sde1

[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf
[root@localhost ~]# umount /dev/md5

[root@localhost ~]# mdadm -S /dev/md5
mdadm: stopped /dev/md5

[root@localhost ~]# mdadm -As /dev/md5
mdadm: /dev/md5 has been started with 3 drives.
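#-S stops (deactivates) the array; -A -s (assemble + scan) restarts it from the records in /etc/mdadm.conf, which is why the configuration file was written first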

Summary

Choose the scheme according to your needs: if you only care about read/write speed and not data safety, use RAID 0; if you need both safety and read/write speed, use RAID 10

Topics: Linux RAID