Multipath configuration for iSCSI

Posted by saidbakr on Sat, 20 Jul 2019 02:10:34 +0200

Introduction to iSCSI

iSCSI (Internet Small Computer System Interface, pronounced /ˈskʌzi/), also known as IP-SAN, is a storage technology based on the Internet and the SCSI-3 protocol. It was proposed by the IETF and became an official standard on February 11, 2003.

iSCSI is a TCP/IP-based protocol for establishing and managing interconnections between IP storage devices, hosts, and clients, and for creating storage area networks (SANs).

SANs make it possible to apply the SCSI protocol to high-speed data transmission networks, where data is transferred at the block level between multiple data storage networks.

The SCSI architecture is based on the client/server (C/S) model and is usually used in environments where devices are close to each other and connected by a SCSI bus.

The main function of iSCSI is to carry SCSI commands between the host system (initiator) and the storage device (target) over a TCP/IP network.

Compared with traditional SCSI technology, iSCSI technology has three revolutionary changes:

  1. SCSI commands, which previously could only travel over a local bus, are sent over a TCP/IP network, so the connection distance can be extended almost without limit;

  2. The number of connected servers is unlimited (whereas the original upper limit for SCSI-3 was 15);

  3. Because of the client/server architecture, capacity can also be expanded online and deployed dynamically.

Environment overview

  1. VMware runs two CentOS 7 virtual machines, node01 and node02, with SCSI as the disk type;

  2. node01 is used to configure the iSCSI target (the disk-sharing side). It has two network cards, eth33 and eth37, for the multipath configuration, with IPs 192.168.191.130 and 192.168.191.132 respectively;

  3. node02 is used to configure the iSCSI initiator (the disk-mounting side), with only one network card, eth33, and IP 192.168.191.131;

  4. Both node01 and node02 have /dev/sdb and /dev/sdc hard disks. The following configuration shares node01's /dev/sdb with node02 as an iSCSI block device.
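
A quick way to sanity-check the environment before starting (the interface and disk names are simply the ones assumed above):

    lsblk -d -o NAME,SIZE,TYPE     # both nodes should list sdb and sdc
    ip -4 addr show                # node01 should show both 192.168.191.130 and 192.168.191.132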

Preparing for installation

  • Disable SELinux

    setenforce 0
    sed -i '/^SELINUX=.*/ s//SELINUX=disabled/' /etc/selinux/config
  • Disable the firewall

    systemctl stop firewalld.service
    systemctl disable firewalld.service
  • Install the EPEL repository

    yum install -y epel-release


Target configuration

Install scsi-target-utils

yum --enablerepo=epel -y install scsi-target-utils

Configuration

vim /etc/tgt/targets.conf
//Add a configuration
<target test12>  #test12 is an arbitrary target name
    #Share /dev/sdb as a block device
    backing-store /dev/sdb
    #Optional: restrict which iSCSI initiators may connect
    initiator-address 192.168.191.131
    #Optional: CHAP authentication; set username and password to your own values
    incominguser username password
</target>
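
If tgtd is already running when the configuration is edited, the new target can be picked up without restarting the service. tgt-admin ships with scsi-target-utils and reads /etc/tgt/targets.conf by default; treat the following as an optional convenience:

tgt-admin --update ALL
tgtadm --mode target --op show   #confirm the target is registered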

Start the service

systemctl enable tgtd.service
systemctl start tgtd.service
//Check the tgtd service status:

[root@test ~]# systemctl  status  tgtd.service 
● tgtd.service - tgtd iSCSI target daemon
   Loaded: loaded (/usr/lib/systemd/system/tgtd.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-07-19 17:05:12 CST; 11min ago
  Process: 2646 ExecStop=/usr/sbin/tgtadm --op delete --mode system (code=exited, status=0/SUCCESS)
  Process: 2640 ExecStop=/usr/sbin/tgt-admin --update ALL -c /dev/null (code=exited, status=0/SUCCESS)
  Process: 2638 ExecStop=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
  Process: 2686 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v ready (code=exited, status=0/SUCCESS)
  Process: 2658 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, status=0/SUCCESS)
  Process: 2656 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
  Process: 2650 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
 Main PID: 2649 (tgtd)
   CGroup: /system.slice/tgtd.service
           └─2649 /usr/sbin/tgtd -f

Jul 19 17:05:07 test systemd[1]: Starting tgtd iSCSI target daemon...
Jul 19 17:05:07 test tgtd[2649]: tgtd: iser_ib_init(3436) Failed to initialize RDMA; load kernel modules?
Jul 19 17:05:07 test tgtd[2649]: tgtd: work_timer_start(146) use timer_fd based scheduler
Jul 19 17:05:07 test tgtd[2649]: tgtd: bs_init_signalfd(267) could not open backing-store module directory /usr/li...-store
Jul 19 17:05:07 test tgtd[2649]: tgtd: bs_init(386) use signalfd notification
Jul 19 17:05:12 test tgtd[2649]: tgtd: device_mgmt(246) sz:14 params:path=/dev/sdb
Jul 19 17:05:12 test tgtd[2649]: tgtd: bs_thread_open(408) 16
Jul 19 17:05:12 test systemd[1]: Started tgtd iSCSI target daemon.

//The target installed from yum prints the following prompt when it starts:
    systemd: Configuration file /usr/lib/systemd/system/tgtd.service is marked executable. Please remove executable permission bits. Proceeding anyway.
    
    //Remove the executable permission bits from the unit file
    chmod 644 /usr/lib/systemd/system/tgtd.service

View the target

[root@test ~]# tgtadm --mod target  --op show 
Target 1: test12
    System information:
        Driver: iscsi
        State: ready
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 5369 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags: 
    Account information:
    ACL information:
        192.168.191.131

Initiator configuration (disk mounting side)

Install

yum -y install iscsi-initiator-utils

Configuration

# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=test12  #Same name as in /etc/tgt/targets.conf on the target side
# vim /etc/iscsi/iscsid.conf
#The following settings can be skipped if your target has no CHAP (incominguser) configuration; the defaults are fine in that case
#Line 57: uncomment node.session.auth.authmethod = CHAP
#Lines 61,62: uncomment, using the username and password set earlier on the target
node.session.auth.username = username
node.session.auth.password = password
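
After editing these files it is worth restarting the initiator daemon so the new initiator name and CHAP settings take effect (service name as shipped by iscsi-initiator-utils; a hedged suggestion rather than a step required by this setup):

systemctl restart iscsid.service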

Scan and display devices

[root@accur-test ~]# iscsiadm  -m discovery -t st -p 192.168.191.130   
192.168.191.130:3260,1 test12
[root@accur-test ~]# iscsiadm  -m discovery -t st -p 192.168.191.132 
192.168.191.132:3260,1 test12


[root@accur-test ~]# iscsiadm  -m node -o show
# BEGIN RECORD 6.2.0.874-10
node.name = test12
node.tpgt = 1
node.startup = automatic
node.leading_login = No
iface.hwaddress = <empty>
.........
iface.vlan_id = 0
iface.vlan_priority = 0
iface.vlan_state = <empty>
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
iface.bootproto = <empty>
iface.dhcp_alt_client_id_state = <empty>
iface.dhcp_alt_client_id = <empty>
iface.dhcp_dns = <empty>
iface.dhcp_learn_iqn = <empty>
.............
iface.strict_login_compliance = <empty>
iface.discovery_auth = <empty>
iface.discovery_logout = <empty>
node.discovery_address = 192.168.191.130
node.discovery_port = 3260
....................
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD
# BEGIN RECORD 6.2.0.874-10
node.name = test12
.
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD

Log in

[root@accur-test ~]# iscsiadm -m node -l  #-l is shorthand for --login
Logging in to [iface: default, target: test12, portal: 192.168.191.130,3260] (multiple)
Logging in to [iface: default, target: test12, portal: 192.168.191.132,3260] (multiple)
Login to [iface: default, target: test12, portal: 192.168.191.130,3260] successful.
Login to [iface: default, target: test12, portal: 192.168.191.132,3260] successful.

//Sign out:
[root@accur-test ~]# iscsiadm -m node -u
Logging out of session [sid: 7, target: test12, portal: 192.168.191.130,3260]
Logging out of session [sid: 8, target: test12, portal: 192.168.191.132,3260]
Logout of [sid: 7, target: test12, portal: 192.168.191.130,3260] successful.
Logout of [sid: 8, target: test12, portal: 192.168.191.132,3260] successful.

Confirm session information

[root@accur-test ~]# iscsiadm  -m session -o show
tcp: [10] 192.168.191.130:3260,1 test12 (non-flash)
tcp: [11] 192.168.191.132:3260,1 test12 (non-flash)
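
For the boot-time mount described later to work, the iSCSI sessions themselves must be restored at boot. The records shown above already have node.startup = automatic; if yours do not, it can be set per node and the login service enabled (a hedged sketch; the service and option names are from the stock iscsi-initiator-utils package):

iscsiadm -m node -T test12 -p 192.168.191.130 -o update -n node.startup -v automatic
iscsiadm -m node -T test12 -p 192.168.191.132 -o update -n node.startup -v automatic
systemctl enable iscsi.service   #logs in to nodes marked automatic at boot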

Confirm partition information

[root@accur-test ~]# cat /proc/partitions 
major minor  #blocks  name

   8        0   20971520 sda
   8        1    1048576 sda1
   8        2   19921920 sda2
   8       16   10485760 sdb
  11        0    4415488 sr0
 253        0   17821696 dm-0
 253        1    2097152 dm-1
   8       32    5242880 sdc
 253        2    5242880 dm-2
   8       48    5242880 sdd

As you can see, sdc and sdd are the disks attached through iSCSI. With the fdisk -l command you can see that these disks are exactly the same as /dev/sdb on node01. To use this disk properly, we still need to configure disk multipathing.
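
One way to confirm that sdc and sdd really are two paths to the same LUN is to compare their SCSI WWIDs. scsi_id is part of the udev tooling on CentOS 7; the path below assumes the standard install location:

/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdd
#both commands should print the same ID, which also shows up later in the multipath -ll output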

If the disks shared by the target do not show up on the initiator side, check the status of the iscsid service on the initiator (during this test there were some failed connections to the target), for example:

[root@accur-test ~]# systemctl status   iscsid.service 
● iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-07-19 16:44:40 CST; 5h 1min ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 983 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 1001 (iscsid)
   CGroup: /system.slice/iscsid.service
           ├─1000 /usr/sbin/iscsid
           └─1001 /usr/sbin/iscsid

Jul 19 21:35:31 accur-test iscsid[1000]: Connection6:0 to [target: test12, portal: 192.168.191.132,3260] through [...tdown.
Jul 19 21:35:31 accur-test iscsid[1000]: Connection5:0 to [target: test12, portal: 192.168.191.130,3260] through [...tdown.
Jul 19 21:35:38 accur-test iscsid[1000]: Connection7:0 to [target: test12, portal: 192.168.191.130,3260] through [...al now
Jul 19 21:35:38 accur-test iscsid[1000]: Connection8:0 to [target: test12, portal: 192.168.191.132,3260] through [...al now
Jul 19 21:35:59 accur-test iscsid[1000]: Connection7:0 to [target: test12, portal: 192.168.191.130,3260] through [...tdown.
Jul 19 21:35:59 accur-test iscsid[1000]: Connection8:0 to [target: test12, portal: 192.168.191.132,3260] through [...tdown.
Jul 19 21:36:02 accur-test iscsid[1000]: Connection9:0 to [target: test12, portal: 192.168.191.130,3260] through [...al now
Jul 19 21:38:37 accur-test iscsid[1000]: Connection9:0 to [target: test12, portal: 192.168.191.130,3260] through [...tdown.
Jul 19 21:38:42 accur-test iscsid[1000]: Connection10:0 to [target: test12, portal: 192.168.191.130,3260] through ...al now
Jul 19 21:38:42 accur-test iscsid[1000]: Connection11:0 to [target: test12, portal: 192.168.191.132,3260] through ...al now
Hint: Some lines were ellipsized, use -l to show in full.

Configure Multipath software

What is multipath?

An ordinary computer host has its hard disk attached directly to a bus, a one-to-one relationship. In a SAN built on Fibre Channel, or an IP-SAN built on iSCSI, the host and the storage are connected through fibre switches or through multiple network cards and IP addresses, which creates a many-to-many relationship: there are multiple paths from the host to the storage, and the I/O between them can travel over any of those paths.

This raises several questions. How many different paths does each host have to a given storage device? How should I/O traffic be distributed if several paths are used at the same time? What happens when one of the paths breaks? In addition, from the operating system's point of view each path looks like a separate physical disk, even though the paths all lead to the same physical disk, which is confusing for the user. Multipath software is designed to solve these problems.

Working together with the storage devices, multipath software mainly provides the following functions:

  1. Failure Switching and Recovery

  2. Load balancing for IO traffic

  3. Virtualization of disks

In order for hosts to access storage devices over multiple iSCSI paths, we need to install the multipath device mapper (DM-Multipath) on the host. DM-Multipath allows the multiple I/O paths between the host node and the back-end storage to be combined into a single logical device, providing link redundancy and improved performance. By accessing one logical device that contains multiple I/O paths, a host can effectively improve the reliability of the back-end storage system.

Introduction to multipath-related tools and parameters:

1. device-mapper-multipath, i.e. multipath-tools. It provides tools such as multipathd and multipath, and configuration files such as multipath.conf.

These tools create and configure multipath devices through the device mapper's ioctl interface, and the multipath device mappings they create appear under /dev/mapper.

2. device-mapper. It consists of two parts, a kernel part and a user-space part. The kernel part is made up of the device mapper core (dm.ko) and a number of target drivers (such as dm-multipath.ko).

The core performs the device mapping, while each target handles the I/O sent to its mapped device according to the mapping relationship and the target's own characteristics.

The kernel part also provides an interface through which user space can communicate with the kernel via ioctl and direct its behaviour, for example how mapped devices are created and what properties they have.

The user-space part of the Linux device mapper consists mainly of the device-mapper package, which includes the dmsetup tool and libraries that help create and configure mapped devices.

These libraries essentially abstract and encapsulate the ioctl interface to make creating and configuring mapped devices easier; multipath-tools calls them from its own programs.
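
For example, once multipath is running, dmsetup can be used to inspect the mapped devices that device-mapper-multipath has created (a small illustrative sketch; the mpatha name comes from the multipath -ll output shown later):

dmsetup ls --target multipath   #list device-mapper devices backed by the multipath target
dmsetup table mpatha            #show the kernel mapping table behind /dev/mapper/mpatha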

Install on initiator server

yum install device-mapper-multipath  -y

Enable the service at boot

systemctl enable multipathd.service

Add the configuration file

The following configuration is all you need for multipath to work properly. For more detailed options, refer to the multipath documentation (for example, the multipath.conf man page).

# vi /etc/multipath.conf
blacklist {
    devnode "^sda"
}
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
    failback immediate
    no_path_retry fail
}
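
On CentOS/RHEL, the mpathconf helper shipped with device-mapper-multipath can also generate a default /etc/multipath.conf and enable the service in one step, which may be a convenient alternative to writing the file by hand:

mpathconf --enable --with_multipathd y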

Start the service

systemctl start multipathd.service

View the multipath devices

[root@accur-test ~]# multipath -ll
mpatha (360000000000000000e00000000010001) dm-2 IET     ,VIRTUAL-DISK    
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 12:0:0:1 sdc 8:32 active ready running
  `- 13:0:0:1 sdd 8:48 active ready running

At this point, you can see the multipath disk mpatha by executing the lsblk command:

[root@accur-test ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda               8:0    0   20G  0 disk  
├─sda1            8:1    0    1G  0 part  /boot
└─sda2            8:2    0   19G  0 part  
  ├─centos-root 253:0    0   17G  0 lvm   /
  └─centos-swap 253:1    0    2G  0 lvm   [SWAP]
sdb               8:16   0   10G  0 disk  
sdc               8:32   0    5G  0 disk  
└─mpatha        253:2    0    5G  0 mpath 
sdd               8:48   0    5G  0 disk  
└─mpatha        253:2    0    5G  0 mpath 
sr0              11:0    1  4.2G  0 rom

Partition the mounted disk

# parted /dev/sdc

Create a GPT partition table:

(parted) mklabel gpt

Create one primary partition spanning the whole capacity:

(parted) mkpart primary xfs 0% 100%

Quit parted:

(parted) q

The /dev/sdc1 partition appears after the partitioning operation described above (check with lsblk).
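
Note that /dev/sdc is only one of the underlying paths. To actually send I/O through the multipath layer, the more common practice is to partition and format the mapper device instead; a minimal sketch, assuming the mpatha name shown by multipath -ll above:

parted -s /dev/mapper/mpatha mklabel gpt
parted -s /dev/mapper/mpatha mkpart primary xfs 0% 100%
#the partition should appear as /dev/mapper/mpatha1 (run kpartx -a /dev/mapper/mpatha if it does not)
mkfs.xfs /dev/mapper/mpatha1
mount /dev/mapper/mpatha1 /data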

Format the partition

# mkfs.xfs /dev/sdc1

Mount partition

# mkdir /data     (create the mount directory)
# mount -t xfs /dev/sdc1 /data
# df -h           (check the mount result)

Automatic mounting at boot

1) Automount via the fstab file
Many articles say that to mount at boot you should modify the /etc/fstab file and add this at the end:

/dev/sdc1   /data    xfs    defaults    0 0

However, during testing I found that with the entry above the system could not boot and mount successfully. Because iSCSI is a network device, the correct entry should be as follows:

/dev/sdc1  /data    xfs    defaults,_netdev    0 0
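
The fstab entry can be tested without rebooting (a quick check, assuming /data can be safely unmounted first):

umount /data        #if it is still mounted from the manual test
mount /data         #remounts /data using the fstab entry
df -h /data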

2) Implementation with a self-start script:
Add to the last line of /etc/profile:

mount -t xfs /dev/sdc1 /data

Or, as suggested at the beginning of /etc/profile, put the mount operation in a script under the /etc/profile.d directory.
Script name: mount_iscsi.sh
Content:

#!/bin/bash
# mount the iSCSI file system if it is not already mounted
mountpoint -q /data || mount -t xfs /dev/sdc1 /data

Topics: Linux network Session yum SELinux