Linux: Logical Volume Management in Practice

Posted by phamenoth on Tue, 02 Nov 2021 20:36:43 +0100

Logical Volume Management (LVM) Concepts

When we partition, format, and mount a regular hard disk, we may later find that there is not enough disk space, or that too much space was allocated. Re-planning the disk means re-partitioning and re-formatting it, which destroys the data already on it. Logical volumes are a perfect fit for this problem.

Logical volumes make disks easy to manage: capacity can be grown or shrunk conveniently, and hard disks can be added or removed.

Considered from the physical layer, logical volumes can be built on top of physical disks, RAID arrays, SAN disks, and so on.
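
For instance, a whole disk can be registered as a physical volume without being partitioned first; a minimal sketch, assuming a spare unpartitioned disk /dev/sdc exists:

[root@localhost ~]# pvcreate /dev/sdc //Register the entire disk as a physical volume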

Key Concepts in Logical Volume Management

1. Physical Volume (PV): registers a physical device so that it can be assigned to a volume group

2. Volume Group (VG): consists of one or more physical volumes and can be thought of as a pool of hard disk space. Note that each PV can belong to only one VG

3. Logical Volume (LV): takes its capacity from the free space of a volume group, and can return excess capacity to the volume group
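
These three layers map one-to-one onto the pvcreate, vgcreate and lvcreate commands used in the walkthrough below. As a quick preview (a sketch only, assuming a spare partition /dev/sdb1 of type Linux LVM already exists):

[root@localhost ~]# pvcreate /dev/sdb1 //Register the partition as a physical volume
[root@localhost ~]# vgcreate lewis /dev/sdb1 //Pool it into the volume group lewis
[root@localhost ~]# lvcreate -n lvlewis -L 2G lewis //Carve a 2G logical volume out of the pool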

Setup Steps - Create a Logical Volume

1. Partition First

The partition type is Linux LVM; the fdisk partition type code is 0x8e.

[root@localhost ~]# fdisk /dev/sdb
<Omit partial output>
Command (m for help): n
Partition type:
 p primary (0 primary, 0 extended, 4 free)
 e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-419430399, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-419430399, default 419430399): +10G //Give this partition 10G
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): t //Modify partition type
Selected partition 1
Hex code (type L to list all codes): 8e //Set partition type
Changed type of partition 'Linux' to 'Linux LVM'
Command (m for help): w //Save Exit
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

2. Have the Kernel Re-read the Partition Table

[root@localhost ~]# partprobe /dev/sdb

3. Create physical volumes

[root@localhost ~]# pvcreate /dev/sdb1
WARNING: xfs signature detected on /dev/sdb1 at offset 0. Wipe it? [y/n]: y //This warning appears only because the partition previously held an xfs file system; it will not pop up on a brand-new partition
 Wiping xfs signature on /dev/sdb1.
 Physical volume "/dev/sdb1" successfully created.

4. Create Volume Groups

Here lewis is the name of the volume group, and its capacity is the capacity of the sdb1 partition. If volume group lewis runs out of space, it can easily be expanded later by creating a new partition or adding a hard disk.

[root@localhost ~]# vgcreate lewis /dev/sdb1
 Volume group "lewis" successfully created

5. Create Logical Volume

-n Sets the name of the logical volume to lvlewis

-L Specifies the size of the logical volume: 2G

lewis refers to taking 2G of space from volume group lewis

[root@localhost ~]# lvcreate -n lvlewis -L 2G lewis
Logical volume "lvlewis" created.

6. Create the File System (Formatting)

Note the device path used for formatting: /dev/<volume group name>/<logical volume name>

[root@localhost ~]# mkfs.xfs /dev/lewis/lvlewis
meta-data=/dev/lewis/lvlewis isize=256 agcount=4, agsize=131072 blks
 = sectsz=512 attr=2, projid32bit=1
 = crc=0 finobt=0
data = bsize=4096 blocks=524288, imaxpct=25
 = sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=2560, version=2
 = sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

7. Mount

[root@localhost ~]# mkdir lewisfile //Create the mount point directory
[root@localhost ~]# vim /etc/fstab //Edit the configuration file
/dev/lewis/lvlewis /root/lewisfile xfs defaults 0 0 //Add this line
[root@localhost ~]# mount -a //Mount everything in fstab to test the new entry
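
If you just want to test the mount without touching /etc/fstab, the logical volume can also be mounted directly; a sketch using the mount point created above:

[root@localhost ~]# mount /dev/lewis/lvlewis /root/lewisfile //One-off mount, not persistent across reboots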

8. Inspection

If the line /dev/mapper/lewis-lvlewis 2.0G 33M 2.0G 2% /root/lewisfile appears in the df output, the logical volume was created and mounted successfully.

[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 8.4G 42G 17% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 57M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/centos-home 142G 23G 119G 17% /home
/dev/sda1 497M 130M 368M 27% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/0
/dev/mapper/lewis-lvlewis 2.0G 33M 2.0G 2% /root/lewisfile

Viewing Physical Volume, Volume Group, and Logical Volume Information

1. View physical volume information - pvdisplay

If no parameter follows the command, all physical volumes will be displayed.

[root@localhost ~]# pvdisplay /dev/sdb1
 --- Physical volume ---
 PV Name /dev/sdb1
 VG Name lewis
 PV Size 10.00 GiB / not usable 4.00 MiB
 Allocatable yes
 PE Size 4.00 MiB
 Total PE 2559
 Free PE 2047
 Allocated PE 512
 PV UUID hRUtFB-sTAc-V3CH-BL42-dMMZ-QxHu-G1zYCR

Primary Field Interpretation

PV Name: the partition name. VG Name: the volume group to which the physical volume is assigned.

PV Size: the size of the physical volume, including space that is not usable.

PE Size: the physical volume is allocated in units of PEs (physical extents), here 4 MiB.

Free PE: how many PEs are still available. Because space is handed out in whole 4 MiB extents, the usable size is always a multiple of 4 MiB, which is why the logical volume sizes we see can differ slightly from what was requested.
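
These fields are consistent with one another: Total PE 2559 × PE Size 4 MiB = 10236 MiB, which is 4 MiB short of the 10 GiB (10240 MiB) partition, matching the "not usable 4.00 MiB" above. Likewise, the 512 allocated PEs correspond to the 2 GiB logical volume created earlier (512 × 4 MiB = 2048 MiB).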

2. View Volume Group Information - vgdisplay

If no parameter follows the command, all volume groups will be displayed.

[root@localhost ~]# vgdisplay lewis
 --- Volume group ---
 VG Name lewis
 System ID
 Format lvm2
 Metadata Areas 1
 Metadata Sequence No 2
 VG Access read/write
 VG Status resizable
 MAX LV 0
 Cur LV 1
 Open LV 1
 Max PV 0
 Cur PV 1
 Act PV 1
 VG Size <10.00 GiB
 PE Size 4.00 MiB
 Total PE 2559
 Alloc PE / Size 512 / 2.00 GiB
 Free PE / Size 2047 / <8.00 GiB
 VG UUID 5AQA0E-b3qZ-qDJj-4AYq-pBk5-xpDO-xJ1UFe

Primary Field Interpretation

1. VG Name: the volume group name

2. VG Size: the total size of the volume group

3. Total PE: the total capacity expressed in PE units

4. Free PE / Size: how many PEs (how much space) remain free

3. View Logical Volume Information - lvdisplay

If no parameter follows the command, all logical volumes will be displayed.

[root@localhost ~]# lvdisplay /dev/lewis/lvlewis
 --- Logical volume ---
 LV Path /dev/lewis/lvlewis
 LV Name lvlewis
 VG Name lewis
 LV UUID bdaiVL-Ue2l-5pkQ-RhWv-720q-emRI-36b12v
 LV Write Access read/write
 LV Creation host, time localhost.localdomain, 2019-02-12 15:51:52 +0800
 LV Status available
 # open 1
 LV Size 2.00 GiB
 Current LE 512
 Segments 1
 Allocation inherit
 Read ahead sectors auto
 - currently set to 8192
 Block device 253:3

Primary Field Interpretation

LV Path: the path to the logical volume

LV Name: the name of the logical volume

VG Name: the name of the volume group it belongs to

LV Size: the size of the logical volume
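
As a quicker alternative to the display commands above, LVM also ships the summary commands pvs, vgs and lvs, which print one line per object (a sketch; output omitted):

[root@localhost ~]# pvs //One-line summary per physical volume
[root@localhost ~]# vgs //One-line summary per volume group
[root@localhost ~]# lvs //One-line summary per logical volume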

Resizing Volume Groups and Logical Volumes

1. Increase volume group size

A volume group grows by adding physical volumes, and physical volumes come from partitions. So create another partition, sdb2 (50G in this example), and turn it into a physical volume.

The specific operations are the same as those used for sdb1 above; a condensed sketch follows.
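
A condensed sketch of those operations, assuming sdb2 is created with the same interactive fdisk dialogue as sdb1 (partition type 8e, size +50G):

[root@localhost ~]# fdisk /dev/sdb //Create partition sdb2 interactively, type 8e, size +50G
[root@localhost ~]# partprobe /dev/sdb //Have the kernel re-read the partition table
[root@localhost ~]# pvcreate /dev/sdb2 //Register sdb2 as a physical volume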

Add partition sdb2 to volume group lewis

[root@localhost ~]# vgextend lewis /dev/sdb2
 Volume group "lewis" successfully extended

Looking at the volume group, you see that the VG Size field is 59.99 GiB, indicating success

[root@localhost ~]# vgdisplay lewis
<Omit partial output>
 VG Size 59.99 GiB
<Omit partial output>

2. Reduce volume group size

Remove the sdb2 partition from the lewis volume group to shrink it. Only do this if sdb2 holds no data; a quick check is shown below.
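
A minimal way to verify this, assuming nothing has been allocated on sdb2 (its Allocated PE should be 0):

[root@localhost ~]# pvdisplay /dev/sdb2 //Confirm Allocated PE is 0 before removing it from the volume group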

[root@localhost ~]# vgreduce lewis /dev/sdb2
 Removed "/dev/sdb2" from volume group "lewis"

3. Increase the size of logical volume based on XFS

There are four main steps

1. Check if the volume group has space - vgdisplay
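
For example, against the volume group used in this walkthrough:

[root@localhost ~]# vgdisplay lewis //Check that Free PE / Size is large enough for the planned extension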

2. Extend the logical volume - lvextend

[root@localhost ~]# lvextend -L +20G /dev/lewis/lvlewis
 Size of logical volume lewis/lvlewis changed from 2.00 GiB (512 extents) to 22.00 GiB
(5632 extents).
 Logical volume lewis/lvlewis successfully resized.

3. Grow the file system - xfs_growfs

[root@localhost ~]# xfs_growfs /dev/lewis/lvlewis
meta-data=/dev/mapper/lewis-lvlewis isize=256 agcount=4, agsize=131072 blks
 = sectsz=512 attr=2, projid32bit=1
 = crc=0 finobt=0
data = bsize=4096 blocks=524288, imaxpct=25
 = sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
 = sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 524288 to 5767168

4. Check Confirmation

[root@localhost ~]# df -h
/dev/mapper/lewis-lvlewis 22G 34M 22G 1% /root/lewisfile

4. Increase the size of a logical volume based on an ext4 file system

The steps are the same as for XFS; only the third step, growing the file system, uses a different command.

3. Grow the file system - resize2fs

[root@localhost ~]# resize2fs /dev/lewis/lvlewis
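
As an aside, recent LVM versions can combine steps 2 and 3: lvextend with the -r (--resizefs) option grows the file system right after extending the logical volume. A sketch, not used in the walkthrough above:

[root@localhost ~]# lvextend -L +20G -r /dev/lewis/lvlewis //Extend the LV and grow its file system in one step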

Delete Logical Volume

Note: Please make a backup of your data before deleting it. Follow these four steps

1. Unmount the file system and verify with df.

Also remove the corresponding mount entry from /etc/fstab (a sketch follows the umount command below).

[root@localhost ~]# umount /dev/lewis/lvlewis
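
One way to remove the fstab entry non-interactively; a sketch assuming the only line containing lvlewis is the entry added earlier, so check with grep first:

[root@localhost ~]# grep lvlewis /etc/fstab //Confirm which line will be removed
[root@localhost ~]# sed -i '/lvlewis/d' /etc/fstab //Delete the lvlewis mount entry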

2. Delete Logical Volume

[root@localhost ~]# lvremove /dev/lewis/lvlewis
Do you really want to remove active logical volume lewis/lvlewis? [y/n]: y
 Logical volume "lvlewis" successfully removed

3. Delete Volume Group

[root@localhost ~]# vgremove lewis
 Volume group "lewis" successfully removed

4. Delete physical volumes

[root@localhost ~]# pvremove /dev/sdb1 /dev/sdb2
 Labels on physical volume "/dev/sdb1" successfully wiped.
 Labels on physical volume "/dev/sdb2" successfully wiped.

Video reference: https://www.bilibili.com/video/BV1eM4y1N7MQ/?spm_id_from=trigger_reload

Topics: Linux Operation & Maintenance server