KVM installation and live migration

Posted by rwachowiak on Sat, 18 May 2019 23:21:01 +0200

Address planning:

kvm112: 192.168.5.112

kvm113: 192.168.5.113

nfs101: 192.168.5.101

The experimental steps are as follows:

Deploy the KVM environment on 112 and 113.

Create a virtual machine vm1 locally on 112.

Create a snapshot named first for vm1.

Export an NFS shared directory on 101.

Create an NFS-based storage pool on 112 and clone vm1 into it as nfs_vm1.

Add a bridged network card to nfs_vm1 and configure an IP address so the guest can be pinged.

Live-migrate nfs_vm1 to 113.

 

OS version

# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core) 

1. Deploy the KVM environment on 112 and 113

Installation of management tools

# yum install -y qemu-img qemu-kvm libvirt libvirt-python libguestfs-tools virt-install bridge-utils

Enabling libvirtd system unit

# systemctl enable libvirtd && systemctl restart libvirtd

Loading Kernel Modules

# modprobe kvm && modprobe kvm-intel

# lsmod | grep kvm
kvm_intel             174841  0
kvm                   578518  1 kvm_intel
irqbypass              13503  1 kvm
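
Optionally, confirm that the CPU exposes hardware virtualization extensions (VT-x/AMD-V) before going further; a non-zero count means the host supports KVM, while 0 usually means virtualization is disabled in the BIOS:

# egrep -c '(vmx|svm)' /proc/cpuinfo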

2. Create a virtual machine vm1 locally on 112

Create a disk file in qcow2 format

# mkdir -pv /kvm/store

# qemu-img create -f qcow2 /kvm/store/vm1.qcow2 100G
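
Optionally verify the image: it is thin-provisioned, so the virtual size is 100G while the file itself starts out nearly empty.

# qemu-img info /kvm/store/vm1.qcow2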

Command-line installation tool: virt-install

# virt-install --help

# virt-install \
     -n vm1 \
     --vcpus 2 -r 4096 \
     --disk path=/kvm/store/vm1.qcow2,format=qcow2,size=100 \
     --location=/iso/CentOS-7-x86_64-DVD-1804.iso \
     --nographics \
     -x 'console=ttyS0'

# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list --all
 Id    Name                           State
----------------------------------------------------
 1     vm1                            running

Connect to Virtual Machine

# virsh console vm1

Exit Connection Terminal

Ctrl + ]

To create a snapshot of the virtual machine, shut it down first

# virsh shutdown vm1

virsh # snapshot-create-as --domain vm1 --name first
Domain snapshot first created
virsh # snapshot-list vm1
 Name                 Creation Time             State
------------------------------------------------------------
 first                2018-09-26 06:10:21 -0400 shutoff
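
If vm1 ever needs to be rolled back to this point, the snapshot can be restored while the guest is shut off:

virsh # snapshot-revert vm1 first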

3. Export an NFS shared directory on 101 and mount it on 112 and 113

Export the directory on 101

# yum install -y rpcbind nfs-utils

# mkdir /home/shares

# cat /etc/exports
/home/shares    192.168.5.0/24(rw,no_root_squash)

# exportfs -arv
exporting 192.168.5.0/24:/home/shares

# systemctl enable nfs rpcbind && systemctl restart nfs rpcbind
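
Check that the export is visible (the same command can later be run from 112 or 113 to confirm the clients can see it):

# showmount -e 192.168.5.101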

If the firewall is turned on

# vim nfs_firewalld.sh

#!/bin/bash
# Program: open the firewall for NFS on CentOS 7.x

yum install -y nfs-utils rpcbind

# Pin the auxiliary NFS daemons to fixed ports so they can be opened in firewalld
cat >> /etc/sysconfig/nfs << EOF
RQUOTAD_PORT=1001
MOUNTD_PORT=1002
LOCKD_UDPPORT=3001
LOCKD_TCPPORT=3001
EOF

# Restart the NFS-related services so the fixed ports take effect
for isrv in rpcbind.service nfs.service nfs-lock.service
do
    systemctl enable  ${isrv}
    systemctl restart ${isrv}
done

# Open rpcbind (111), nfsd (2049) and the fixed ports configured above
for tport in 111 2049 1001 1002 3001
do
    firewall-cmd --permanent --add-port=${tport}/tcp
    firewall-cmd --permanent --add-port=${tport}/udp
done

firewall-cmd --reload

Mount nfs shared directories on 112 and 113

# yum install -y rpcbind nfs-utils

# vim /etc/fstab

192.168.5.101:/home/shares  /kvm/nfspool  nfs defaults,_netdev,rw 0 0

# mount -a
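
Confirm the share is mounted before using it as a storage-pool target:

# df -hT /kvm/nfspool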

Note: the /kvm/{nfspool,store} directories must also be created on 113. For the later migration, the directory layout on the two hosts should be identical.

4. Define an NFS storage pool on 112, clone vm1 into it, and add a network card to nfs_vm1

Define nfs storage pool on 112

virsh # pool-define-as --name nfs_pool --type netfs --source-host 192.168.5.101 --source-path /home/shares --target /kvm/nfspool/

virsh # pool-build nfs_pool

virsh # pool-start nfs_pool

virsh # pool-autostart nfs_pool

virsh # pool-list
 Name                 State      Autostart
-------------------------------------------
 iso                  active     yes       
 nfs_pool             active     yes       
 store                active     yes       
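
The pool and its volumes can be inspected at any time; vol-list will show the cloned image once the clone below has finished:

virsh # pool-info nfs_pool
virsh # vol-list nfs_pool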

Clone vm1 into the NFS storage pool as nfs_vm1

# virsh shutdown vm1

# virt-clone -o vm1 -n nfs_vm1 --file /kvm/nfspool/nfs_vm1.qcow2
WARNING  The requested volume capacity will exceed the available pool space when the volume is fully allocated. (102400 M requested capacity > 42051 M available)
Allocating 'nfs_vm1.qcow2'                                                                                     | 100 GB  00:01:01     

Clone 'nfs_vm1' created successfully.

# virsh start nfs_vm1

# virsh console nfs_vm1

Add a network card to nfs_vm1: first create the bridge br0 on host 112, then attach a bridged interface to the guest

# nmcli con add con-name br0 type bridge ifname br0 autoconnect yes

# nmcli con add con-name br0-ens34 type bridge-slave ifname ens34 autoconnect yes master br0

# nmcli connection up br0-ens34

# nmcli connection up br0

virsh # attach-interface --domain nfs_vm1 --type bridge --source br0 --current
Interface attached successfully

virsh # attach-interface --domain nfs_vm1 --type bridge --source br0 --config
Interface attached successfully
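
Confirm the bridged interface is now attached to the guest:

virsh # domiflist nfs_vm1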

Connect to the virtual machine and configure an IP address on the new interface

# virsh console nfs_vm1

# nmcli connection modify "Wired connection 1" ipv4.addresses 192.168.5.44/24 ipv4.method manual autoconnect yes

# nmcli connection up "Wired connection 1"

# ping -c2 192.168.5.112
PING 192.168.5.112 (192.168.5.112) 56(84) bytes of data.
64 bytes from 192.168.5.112: icmp_seq=1 ttl=64 time=1.48 ms
64 bytes from 192.168.5.112: icmp_seq=2 ttl=64 time=1.69 ms

5. Live-migrate nfs_vm1 to 113

Dynamic migration (live migration)
If the source and destination hosts share a storage system, only the guest's vCPU execution state, memory contents and virtual device state need to be sent to the destination host over the network; otherwise the guest's disk storage must be transferred as well. "Shared storage" means that the image file directories of the source and destination guests reside on the same shared storage.

The specific process of KVM dynamic migration based on a shared storage system is as follows:
1. At the start of the migration the guest keeps running on the source host while its memory pages are transferred to the destination host.
2. During the migration QEMU/KVM monitors and records any changes to the pages already transferred, and once all pages have been sent it begins retransmitting the pages that changed in the meantime.
3. QEMU/KVM also estimates the transfer speed. When the remaining memory can be transmitted within the configured downtime (30 milliseconds by default), QEMU/KVM pauses the guest on the source host, transfers the remaining data, and finally resumes the guest on the destination host.
4. At this point the live migration is complete. The migrated guest is as consistent as possible with the original, unless configuration is missing on the destination host, such as a bridge.
Note that if memory usage in the guest is very high and memory is modified faster than KVM can transfer it, the migration never converges; in that case only static (offline) migration is possible.
Regarding the efficiency of live migration, many improvements have been suggested in the industry, for example using memory compression to reduce the amount of memory that has to be transferred.
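
If convergence becomes a problem in practice, libvirt allows the tolerable downtime to be raised for a domain that is currently being migrated; the 100 ms below is only an example value, and newer libvirt versions also offer an --auto-converge option on virsh migrate:

# virsh migrate-setmaxdowntime nfs_vm1 100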

Migration notes:
1. Ideally the CPUs of the source and destination servers are of the same brand.
2. A 64-bit guest can only migrate between 64-bit hosts; a 32-bit guest can migrate between 32-bit and 64-bit hosts.
3. Host names must not conflict.
4. The destination and source hosts should have the same software configuration as far as possible, e.g. the same bridge interface, storage pool, and so on.
5. The NX setting, checked with cat /proc/cpuinfo | grep nx, must be the same on both hosts.
NX, short for "No eXecute", is a CPU technology that marks memory areas as containing either processor instructions or data only. Memory marked with NX holds data only, so processor instructions cannot be stored and executed there. This prevents most buffer-overflow attacks, in which a malicious program places its own instructions in the data area of another program and runs them, thereby taking control of the machine.
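
A quick way to compare the NX setting is to run the same check on 112 and 113; the flag appears in lower case in /proc/cpuinfo, and the command prints nx once if it is present:

# grep -wo nx /proc/cpuinfo | sort -u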

 

1. Image files must be placed on shared storage

2. The destination hypervisor must be compatible with the source hypervisor, and the KVM versions must be compatible

3. The shared storage must be mounted at the same path on both hosts, otherwise the configuration files will not match

4. The CPUs should expose the same type of CPU features, i.e. both Intel or both AMD

5. The clocks of the two physical machines must be synchronized (a quick check is sketched after this list)

6. The two physical hosts must have the same network configuration
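
For item 5, a quick way to confirm the clocks are in sync on CentOS 7 (assuming chronyd, the default time service) is:

# timedatectl status
# chronyc tracking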

 

Add name resolution to the hosts file

# vim /etc/hosts

192.168.5.112   kvm112.qufujin.top kvm112
192.168.5.113   kvm113.qufujin.top kvm113

Set up passwordless SSH login between 112 and 113

# ssh-keygen -t rsa

# cat /root/ip.txt
192.168.5.112
192.168.5.113

# vim ssh.sh

#!/bin/bash
# Batch-distribute the SSH public key to every host listed in $file.

file="/root/ip.txt"
pass="pwd@123"
port="1804"

yum install -y expect

for i in $(cat $file)
do
    # expect answers the host-key confirmation and password prompts automatically
    expect -c "
    spawn ssh-copy-id -p${port} -i /root/.ssh/id_rsa.pub root@${i}
        expect {
            \"*yes/no*\" {send \"yes\r\"; exp_continue}
            \"*password*\" {send \"${pass}\r\"; exp_continue}
            \"*Password*\" {send \"${pass}\r\";}
        } "
done

# bash ssh.sh
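
Verify that key-based login works before attempting the migration (1804 is the custom SSH port used throughout this setup):

# ssh -p1804 root@192.168.5.113 hostname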

Live-migrate nfs_vm1 to 113

Note: br0 must be created on 113 beforehand

# virsh migrate --help

# virsh migrate \
    --domain nfs_vm1 \
    --live \
    --unsafe \
    --verbose \
    qemu+ssh://192.168.5.113:1804/system

Migration: [100 %]

View nfs_vm1 on 113

# virsh list
 Id    Name                           State
----------------------------------------------------
 1     nfs_vm1                        running

Save the guest configuration to a file on 113

# virsh dumpxml nfs_vm1 > /etc/libvirt/qemu/nfs_vm1.xml
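
Note that dumping the XML only writes the running configuration to a file; to make nfs_vm1 persistent on 113 so libvirt still knows about it after a reboot, register the file as well (alternatively, the migration could have been run with the --persistent flag of virsh migrate):

# virsh define /etc/libvirt/qemu/nfs_vm1.xml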
