VMmanager: Administrator guide

Pre-configuring SAN

This article discusses an example of configuring iSCSI on a SAN for external storage and cluster nodes running AlmaLinux 8; commands for Astra Linux are given where they differ. For other operating systems, the configuration procedure may be different.

Before creating a network LVM storage, configure the SAN in the following order:

  1. Configure the external storage as an iSCSI target.
  2. Configure the VMmanager cluster nodes as initiators (iSCSI initiator).
  3. Install the LVM2 software on the cluster nodes.

Key terms

  • initiator (iSCSI initiator) – the client device that sends the connection request to the target. In this example, VMmanager cluster nodes act as initiators;
  • target (iSCSI target) – a program or device that emulates a disk and handles initiator connection requests. The target can be logically divided into LUNs;
  • LUN (Logical Unit Number) – a part of the target, the address of the storage device. An equivalent of a disk partition or a separate logical volume;
  • TPG (Target Portal Group) – a group of targets united by a common functional feature. As a rule, targets on the same device are combined into one TPG;
  • IQN (iSCSI Qualified Name) – the unique identifier of the initiator or target;
  • ACL (Access Control List) — this list specifies which initiators can connect to the target and their authentication data.

External storage configuration

Connect to the server via SSH with a superuser account and perform the following steps:

  1. Make sure that you are using the latest version of the software packages:

    AlmaLinux
    dnf update
    Astra Linux
    apt update && apt upgrade
  2.  Install the target management shell:

    AlmaLinux
    dnf -y install targetcli
    Astra Linux
    apt -y install targetcli-fb
  3. Check the amount of free space on the disk:

    df -hT
    We recommend that you use a separate partition or a physical disk to create a target.
  4. Create a directory for the target. For example, /var/targetdisk01:

    mkdir /var/targetdisk01
  5.  Launch the targetcli console:

    targetcli
  6. Create a file for the target:

    cd /backstores/fileio
    create <target name> <path> <size> 
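    For example, using the directory created in step 4 (the name targetdisk1 and the size 10G are illustrative values):

    ```
    cd /backstores/fileio
    create targetdisk1 /var/targetdisk01/targetdisk1.img 10G
    ```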
  7. Set the IQN for the target:

    cd /iscsi
    create iqn.<year>-<month>.<reverse domain>:<name> 

    The response will contain the number of the created TPG.
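    For example, an IQN for a target created in March 2020 under the domain example.com (an illustrative value consistent with the sample output in step 10):

    ```
    cd /iscsi
    create iqn.2020-03.com.example:mytarget1
    ```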

  8. Create a LUN:

    cd <iqn>/<tpg>/luns
    create /backstores/fileio/<target name>

    The response will contain the number of the created LUN.
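    With the illustrative names used above, the commands might look like this:

    ```
    cd iqn.2020-03.com.example:mytarget1/tpg1/luns
    create /backstores/fileio/targetdisk1
    ```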

  9. Configure ACL for the target:

    1. Set the initiator's IQN:

      cd /iscsi/<iqn>/<tpg>/acls
      create iqn.<year>-<month>.<reverse domain>:<initiator name> 
    2. Set the user id and the initiator password:

      cd iqn.<year>-<month>.<reverse domain>:<initiator name> 
      set auth userid=<id>
      set auth password=<pass>
    3. Enable CHAP authentication for the target portal group:

      cd /iscsi/<iqn>/<tpg>
      set attribute authentication=1
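  Putting the ACL steps together with illustrative values (initiator1, iscsi_user and iscsi_pass are placeholders):

  ```
  cd /iscsi/iqn.2020-03.com.example:mytarget1/tpg1/acls
  create iqn.2020-03.com.example:initiator1
  cd iqn.2020-03.com.example:initiator1
  set auth userid=iscsi_user
  set auth password=iscsi_pass
  cd /iscsi/iqn.2020-03.com.example:mytarget1/tpg1
  set attribute authentication=1
  ```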
  10. Make sure that all settings have been successfully created:

    ls /iscsi/
    An example of the command output
    o- iscsi .......................................................... [Targets: 1]
      o- iqn.2020-03.com.example:mytarget1 ............................... [TPGs: 1]
        o- tpg1 ............................................. [no-gen-acls, no-auth]
          o- acls ........................................................ [ACLs: 1]
          | o- iqn.2020-03.com.domain:initiator1 .................. [Mapped LUNs: 1]
          |   o- mapped_lun0 ........................ [lun0 fileio/targetdisk1 (rw)]
          o- luns ........................................................ [LUNs: 1]
          | o- lun0  [fileio/targetdisk1 (/var/targetdisk01/targetdisk1.img) (default_tg_pt_gp)]
          o- portals .................................................. [Portals: 1]
            o- 0.0.0.0:3260 ................................................... [OK]
  11. To save the settings, exit the targetcli console:

    exit
  12. Enable the target service at system startup:

    systemctl enable target
  13.  If you are using firewalld, set the necessary permissions and restart the service:

    firewall-cmd --add-service=iscsi-target --permanent
    firewall-cmd --reload

Configuration of cluster nodes

Connect to the cluster nodes via SSH with a superuser account and perform the following steps:

  1. Install the required software:

    AlmaLinux
    dnf -y install iscsi-initiator-utils
    Astra Linux
    apt install -y open-iscsi
  2. Specify the previously created initiator IQN in the InitiatorName parameter of the /etc/iscsi/initiatorname.iscsi file:

    InitiatorName=iqn.<year>-<month>.<reverse domain>:<initiator name>
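    For example, if the ACL on the target was created for iqn.2020-03.com.example:initiator1 (an illustrative value), the file would contain:

    ```
    InitiatorName=iqn.2020-03.com.example:initiator1
    ```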
  3. Edit the /etc/iscsi/iscsid.conf file:

    1. Uncomment the line:

      node.session.auth.authmethod = CHAP
      
    2. Uncomment the node.session.auth.username and node.session.auth.password parameters, and specify the user id and password set in the ACL settings for the target:

      node.session.auth.username = <id>
      node.session.auth.password = <pass>
      
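    With the illustrative credentials iscsi_user and iscsi_pass used earlier, the uncommented lines would read:

    ```
    node.session.auth.authmethod = CHAP
    node.session.auth.username = iscsi_user
    node.session.auth.password = iscsi_pass
    ```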
  4. Check access to the target:

    iscsiadm -m discovery -t sendtargets -p <target ip>
    An example of successful command output
    192.0.2.123:3260,1 iqn.2020-03.com.example:mytarget1
  5. Connect to the target:

    iscsiadm -m node --login
    To disconnect from the target:
    iscsiadm -m node --targetname <target iqn> --portal <target ip> --logout
  6. Make sure the target is connected as a block device:

    lsblk
    Please note!
    To make the connection settings not depend on block device names, you can connect the target via UUID or WWID. See the official Red Hat documentation for details.
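    For example, the persistent identifiers of the connected device can be listed as follows (the output depends on your hardware):

    ```
    # show the WWN of each block device
    lsblk -o NAME,SIZE,WWN
    # list persistent symlinks by ID
    ls -l /dev/disk/by-id/
    ```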

Installing the LVM2 software

For VMmanager to be able to connect network LVM storage to the cluster nodes, install the LVM2 software on all nodes:

AlmaLinux
dnf -y install lvm2
Astra Linux
apt install -y lvm2

When you add a storage to VMmanager, the platform automatically makes the necessary settings on the block device: it creates a physical volume (PV) and a volume group (VG).

If VMmanager detects a VG created by other software on the block device when adding storage, the storage will not be connected.
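To check whether a block device already belongs to an LVM volume group before adding the storage, you can list the existing physical volumes and volume groups:

```
# list physical volumes and the VGs they belong to
pvs
# list volume groups
vgs
```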

Increasing storage size

If the LUN size has been increased in the storage, the information about the storage size will not be updated in the platform automatically. To update it:

  1. On all cluster nodes:
    1. Perform a disk scan: 
      echo 1 > /sys/block/<device_name>/device/rescan
    2. If multipath is used in the storage, perform a device map scan: 
      multipath -r
  2. On one of the cluster nodes, expand the physical volume (PV) so that the volume group (VG) picks up the new capacity: 
    pvresize /dev/mapper/<LUN_name>
  3. On the server with the platform, update the storage information: 
    docker exec -it vm_box curl -X POST -d '{}' input:1500/vm/v3/cluster/<cluster_id>/storage/<storage_id>/check -H 'internal-auth:on'
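With illustrative device names (sdb for the block device, mpatha for the multipath map), the rescan-and-resize sequence on a cluster node might look like this:

```
# rescan the disk to detect the new LUN size
echo 1 > /sys/block/sdb/device/rescan
# refresh the device maps, if multipath is used
multipath -r
# expand the PV to the new device size
pvresize /dev/mapper/mpatha
```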