VMmanager 5 KVM: Administrator guide

RBD-storage

The virtual disk of a virtual machine is an HDD image. Virtual disks are stored in a local or network storage. RBD (RADOS Block Device) is a network storage provided by a distributed Ceph cluster and designed for high reliability. For more information please refer to the article Network storages.

Installation

Note
You need to perform the operations on every node of the Ceph cluster.
  1. Install Ceph:

    rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
    rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    yum install snappy leveldb gdisk python-argparse gperftools-libs
    rpm -Uvh http://ceph.com/rpm-cuttlefish/el6/noarch/ceph-release-1-0.el6.noarch.rpm
    yum install ceph
  2. Install the YUM Priorities plug-in (required for QEMU installation from the Ceph repository):

    yum install yum-plugin-priorities
  3. Enable the plug-in in the configuration file /etc/yum/pluginconf.d/priorities.conf:

    [main]
    enabled = 1
  4. Create the file /etc/yum.repos.d/ceph-qemu.repo with the following contents:

    [ceph-qemu]
    name=Ceph Packages for QEMU
    baseurl=http://ceph.com/packages/ceph-extras/rpm/rhel6/$basearch
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
    
    [ceph-qemu-noarch]
    name=Ceph QEMU noarch
    baseurl=http://ceph.com/packages/ceph-extras/rpm/rhel6/noarch
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
    
    [ceph-qemu-source]
    name=Ceph QEMU Sources
    baseurl=http://ceph.com/packages/ceph-extras/rpm/rhel6/SRPMS
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
  5. Install QEMU from the Ceph repository:

    yum update
    yum install qemu-img qemu-kvm qemu-kvm-tools
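
After the installation you may want to confirm that the packages were picked up from the Ceph repository; the following check is a sketch using standard Ceph and QEMU tools, not a VMmanager-specific procedure:

```shell
# Show the installed Ceph version
ceph --version

# The QEMU build from the Ceph repository supports the rbd format;
# it should be listed among the supported formats
qemu-img --help | grep rbd
```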

How VMmanager works with RBD-storage

When creating the RBD-storage, VMmanager tries to connect to the Ceph cluster monitor specified on the cluster creation form, using the SSH key that is normally located in /usr/local/mgr5/etc/ssh_id_rsa. Then VMmanager performs the following operations:

  1. Creates a Ceph pool with the same name as the storage in VMmanager:

    ceph osd pool create <Storage_name> 128 128
  2. Checks for/creates a Ceph client user (the client name is specified by the CephAuthUserName parameter in the vmmgr.conf configuration file; the default value is vmmgr):

    ceph auth get-or-create client.<client_name> mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=<storage_name>'
  3. Creates a "secret" in libvirt and adds the key obtained above to it.
  4. Determines the list of cluster monitors and saves it to the rbdmonitor table of the database.
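
Step 3 above can be sketched with the standard libvirt and Ceph tools; the XML file name and the secret usage name here are illustrative, not the exact values VMmanager uses:

```shell
# Obtain the key of the client created in step 2
ceph auth get-key client.vmmgr > client.vmmgr.key

# Describe the secret in XML (the usage name is illustrative)
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.vmmgr secret</name>
  </usage>
</secret>
EOF

# Register the secret in libvirt; virsh prints the UUID of the new secret
virsh secret-define --file secret.xml

# Attach the Ceph key to the secret, using the UUID printed above
virsh secret-set-value --secret <UUID> --base64 $(cat client.vmmgr.key)
```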

When adding a new hard drive image to the RBD-storage, VMmanager connects via SSH to the first monitor of the cluster and executes the command:

qemu-img create -f rbd rbd:<storage_name>/<Image_name> 2G


When deleting a disk, VMmanager executes the command:

rbd -p <storage_name> rm <Image_name>
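
The result of both commands can be verified by listing the images in the pool with the standard rbd CLI; only the storage and image names from the commands above are assumed:

```shell
# List all images in the storage pool; a new image appears after
# qemu-img create and disappears after rbd rm
rbd -p <storage_name> ls

# Show the size and other details of a particular image
rbd -p <storage_name> info <Image_name>
```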
Note
By default VMmanager doesn't work with a cache pool, therefore the vmmgr user doesn't have permissions to work with it. If Ceph uses a cache pool, grant the vmmgr user sufficient permissions.
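
The permissions can be extended with ceph auth caps; this is a sketch, where <cache_pool_name> is a placeholder for your actual cache pool:

```shell
# Re-issue the full capability set for client.vmmgr, adding rwx on the cache pool.
# Note: "ceph auth caps" replaces the existing caps, so the original ones
# must be repeated here.
ceph auth caps client.vmmgr \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=<storage_name>, allow rwx pool=<cache_pool_name>'
```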