Storage pool
A storage pool is a set of files, directories, or storage devices managed by libvirt and made available to virtual machines. A storage pool is divided into storage volumes, which hold virtual machine images or are attached to virtual machines as additional storage.
Commands:
virsh commands related to storage pools
pool-autostart   Configure a pool to start automatically
pool-build       Build a pool
pool-create-as   Create a pool from a set of arguments
pool-create      Create a pool from an XML file
pool-define-as   Define a pool from a set of arguments
pool-define      Define (but do not start) a pool from an XML file, or modify an existing pool
pool-delete      Delete a pool's underlying storage
pool-destroy     Destroy (stop) a pool
pool-dumpxml     Dump pool information as XML
pool-edit        Edit the XML configuration of a storage pool
pool-info        View storage pool information
pool-list        List pools
pool-name        Convert a pool UUID to a pool name
pool-refresh     Refresh a pool
pool-start       Start a previously defined, inactive pool
pool-undefine    Undefine an inactive pool
pool-uuid        Convert a pool name to a pool UUID
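As a quick reference, here is a sketch of how some of these commands fit together around an existing pool (the pool name test1 anticipates the example below; the teardown commands are destructive):
# inspect and adjust an existing pool (sketch; assumes a pool named test1)
virsh pool-dumpxml test1      # print the pool's XML definition
virsh pool-edit test1         # edit that XML in $EDITOR and apply it
virsh pool-refresh test1      # rescan the pool for new or removed volumes
# tear a pool down again
virsh pool-destroy test1      # stop the pool (data is kept)
virsh pool-delete test1       # remove the underlying storage (destructive)
virsh pool-undefine test1     # remove the persistent definition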
Commands related to storage volumes
vol-clone        Clone a volume
vol-create-as    Create a volume from a set of arguments
vol-create       Create a volume from an XML file
vol-create-from  Create a volume, using another volume as input
vol-delete       Delete a volume
vol-download     Download the contents of a volume to a file
vol-dumpxml      Dump volume information as XML
vol-info         View storage volume information
vol-key          Return the volume key for a given volume name or path
vol-list         List volumes
vol-name         Return the volume name for a given volume key or path
vol-path         Return the volume path for a given volume name or key
vol-pool         Return the storage pool for a given volume key or path
vol-resize       Resize a volume
vol-upload       Upload file contents to a volume
vol-wipe         Wipe a volume
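Likewise, a short sketch of everyday volume operations (assumes a pool named test1 containing a hypothetical volume disk.qcow2):
virsh vol-info disk.qcow2 --pool test1        # capacity and allocation
virsh vol-path disk.qcow2 --pool test1        # absolute path of the backing file
virsh vol-resize disk.qcow2 20G --pool test1  # grow the volume to 20G
virsh vol-clone disk.qcow2 copy.qcow2 --pool test1
virsh vol-delete copy.qcow2 --pool test1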
Create storage pools and volumes in local directories
- Create the test1 pool; its target directory is /test1
format: virsh pool-define-as <pool> <type> --target <dir>
virsh pool-define-as test1 dir --target /test1
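pool-define-as writes an XML definition roughly like the sketch below; the actual result can be checked with virsh pool-dumpxml test1:
<pool type='dir'>
  <name>test1</name>
  <target>
    <!-- host directory that holds the volume files -->
    <path>/test1</path>
  </target>
</pool>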
- Build test1 pool
format: virsh pool-build <pool>
virsh pool-build test1
- Start test1 pool
format: virsh pool-start <pool>
virsh pool-start test1
- Set the test1 pool to start automatically
format: virsh pool-autostart <pool>
virsh pool-autostart test1
- View the status of test1
virsh pool-list --all
test1    active    yes
- View information about a specific storage pool
format: virsh pool-info <pool>
- View the volumes in a storage pool
virsh vol-list <pool>
- View disk devices for virtual machines
virsh domblklist <domain>
- Create a disk file in the test1 pool
format: virsh vol-create-as <pool> <diskfilename> <size> --format <disktype>
virsh vol-create-as test1 test1.qcow2 10G --format qcow2
- Install a system using the disk file in the test1 pool
virt-install -n test1 -r 1024 -l /kvm/iso/centos7.iso --disk /test1/test1.qcow2 --nographics -x 'console=ttyS0'
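Volumes from the pool can also be attached to an existing guest as additional disks; a minimal sketch, assuming a second, hypothetical volume named data.qcow2 in the test1 pool and a free target device vdb:
virsh vol-create-as test1 data.qcow2 5G --format qcow2
virsh attach-disk test1 /test1/data.qcow2 vdb --driver qemu --subdriver qcow2 --persistent
virsh domblklist test1      # vdb should now appear alongside the original disk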
Create the test2 pool as a logical (LVM) pool on a dedicated disk
- View the newly added hard disk (sdb)
[root@localhost ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   99G  0 part
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0  3.9G  0 lvm  [SWAP]
  └─centos-home 253:2    0 45.1G  0 lvm  /home
sdb               8:16   0   20G  0 disk
sr0              11:0    1  9.5G  0 rom
- Create a physical volume
# pvcreate /dev/sdb
- Create a volume group
# vgcreate test2 /dev/sdb
- View volume groups
# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  centos   1   3   0 wz--n- <99.00g   4.00m
  test2    1   0   0 wz--n- <20.00g <20.00g
- Create the test2 pool; its target directory is /dev/test2
virsh pool-define-as test2 logical --target /dev/test2
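For a logical pool the generated XML looks roughly like this sketch (the pool maps onto the test2 volume group created above):
<pool type='logical'>
  <name>test2</name>
  <source>
    <!-- LVM volume group backing the pool -->
    <name>test2</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/test2</path>
  </target>
</pool>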
- Start test2 pool
virsh pool-start test2
- Set the test2 pool to start automatically
virsh pool-autostart test2
- View test2 pool status
virsh pool-list --all
test2    active    yes
- Create a test2 storage volume, a test2.qcow2 disk file (the disk file ends up under /dev/test2/)
virsh vol-create-as test2 test2.qcow2 10G --format qcow2
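In a logical pool each volume is an LVM logical volume, so the new volume can be checked from both sides (a sketch):
virsh vol-list test2     # the volume as libvirt sees it
lvs test2                # the same volume as an LVM logical volume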
- Install a system using the test2.qcow2 disk file you just created
virt-install -n test2 -r 1024 --vcpus 1 -l /kvm/iso/centos7.iso --disk /dev/test2/test2.qcow2 --nographics -x 'console=ttyS0'
Create a shared pool backed by NFS (set up another server and add a new hard disk to it)
- Create physical volume
pvcreate /dev/sdb
- Create volume group
vgcreate test3 /dev/sdb
- View volume group details (check the total PE count)
vgdisplay test3
  Total PE    12799
- Create logical volume
lvcreate -l 12799 -n test3 test3
- Viewing logical volumes
lvs
  ...
  test3  test3  -wi-a----- <50.00g
- Format logical volume
mkfs -t xfs /dev/test3/test3
- Installing nfs and rpcbind
yum -y install nfs-utils rpcbind
- Start rpcbind
systemctl start rpcbind && systemctl enable rpcbind
- Start nfs
systemctl start nfs && systemctl enable nfs
- Create nfs shared directory
mkdir /mnt/nfs
echo "/mnt/nfs *(rw,sync,no_root_squash)" >> /etc/exports
exportfs -av
exporting *:/mnt/nfs
- Mount the logical volume to the nfs directory
mount /dev/test3/test3 /mnt/nfs/
- Permanent mount
echo "/dev/test3/test3 /mnt/nfs xfs defaults 0 0" >> /etc/fstab
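To confirm the fstab entry without rebooting, the share can be remounted and checked (standard commands, shown as a sketch):
umount /mnt/nfs      # unmount the manual mount from the previous step
mount -a             # remount everything listed in /etc/fstab
df -h /mnt/nfs       # the logical volume should be mounted on /mnt/nfs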
- The KVM host also needs rpcbind installed
rpm -q rpcbind
rpcbind-0.2.0-49.el7.x86_64
- Start rpcbind
systemctl start rpcbind && systemctl enable rpcbind
- Check that the NFS export is visible from the KVM host
format: showmount -e <NFS server IP>
showmount -e 192.168.42.2
- Create the test3 pool with type netfs (the remote NFS share is mounted at the local /mnt)
virsh pool-define-as test3 netfs --source-host 192.168.42.1 --source-path /mnt/nfs --target /mnt
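The resulting netfs pool definition looks roughly like the following sketch (verify with virsh pool-dumpxml test3):
<pool type='netfs'>
  <name>test3</name>
  <source>
    <!-- NFS server and exported directory -->
    <host name='192.168.42.1'/>
    <dir path='/mnt/nfs'/>
    <format type='nfs'/>
  </source>
  <target>
    <!-- local mount point on the KVM host -->
    <path>/mnt</path>
  </target>
</pool>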
- Build test3 pool
virsh pool-build test3
- Start the test3 pool
virsh pool-start test3
- Set the test3 pool to start automatically
virsh pool-autostart test3
- View the status of test3 pool
virsh pool-list --all
test3    active    yes
- Create a storage volume in the test3 pool: a test3.qcow2 disk file
virsh vol-create-as test3 test3.qcow2 10G --format qcow2
- On the NFS server, check whether test3.qcow2 exists
ls /mnt/nfs/
test3.qcow2
- Install a system using the test3.qcow2 disk (turn off the firewall and SELinux first)
virt-install -n test3 -r 1024 --vcpus 1 -l /kvm/iso/centos7.iso --disk /mnt/test3.qcow2 --nographics -x 'console=ttyS0'