A virtual machine's virtual disk is its HDD image. Virtual disks can be stored in local or network storage. Ceph RBD is a fault-tolerant network storage. This article describes how to configure a Ceph cluster. For more information, refer to the article Network storages.
Ceph cluster
Consult the official Ceph documentation before you start; this article provides only general information.
The Ceph cluster must not be located on the same server as VMmanager. The "nodes" and "cluster" described here have nothing to do with "cluster nodes" in VMmanager.
System requirements
- at least 5 physical or virtual servers: a data server (OSD), a metadata server (MDS), at least two monitor servers, and an admin server (the first client);
- all servers should be located as close to each other as possible (in one rack or within one network segment);
- a high-speed network connection is recommended both between the cluster nodes and to the client nodes;
- the same operating system must be installed on all servers (a quick check is sketched below).
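Once SSH access to the nodes has been configured (see "Preparing nodes" below), one way to confirm that every node runs the same release is to compare them in a loop. This is only a sketch, using the example node names from this article:
# for i in mon-1 mon-2 osd-1 osd-2 mds-1; do echo -n "$i: "; ssh ceph@$i lsb_release -ds; done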
Preparing nodes
Complete the steps on all the cluster nodes:
- Install OpenSSH on all the cluster nodes.
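For example, on Debian/Ubuntu nodes (assumed here to match the apt-based installation used later in this article):
# apt-get update
# apt-get install openssh-server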
- Create the ceph user on each node:
# ssh user@ceph-server
# useradd -d /home/ceph -m ceph
# passwd ceph
- Grant the ceph user passwordless sudo permissions:
# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
# chmod 0440 /etc/sudoers.d/ceph
On the admin server:
- Set up SSH keys to be able to access all the cluster nodes from the admin server. Create and enter the /ceph-client directory:
# ssh root@adm
# mkdir -p /ceph-client
# cd /ceph-client
- Generate the SSH key pair:
# ssh-keygen -f ./id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ./id_rsa.
Your public key has been saved in ./id_rsa.pub.
The key fingerprint is:
23:2c:d9:5c:64:4f:ea:43:5b:48:b3:69:3d:e8:25:a4 root@adm
The key's randomart image is:
+--[ RSA 2048]----+
|  *  .           |
|   * @           |
|  E @ *          |
| = * = .         |
|  o = S          |
|   . . o         |
|                 |
|                 |
|                 |
+-----------------+
- If the cluster nodes cannot be resolved via DNS, update /etc/hosts. In our example the result is as follows:
# getent hosts | grep ceph
172.31.240.91 adm.ceph.ispsystem.net adm
172.31.240.87 mon-1.ceph.ispsystem.net mon-1
172.31.240.84 mon-2.ceph.ispsystem.net mon-2
172.31.240.28 osd-1.ceph.ispsystem.net osd-1
172.31.240.44 osd-2.ceph.ispsystem.net osd-2
172.31.240.90 mds-1.ceph.ispsystem.net mds-1
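The entries themselves can be added, for example, by appending them to /etc/hosts on the admin server (a sketch using the example addresses above):
# cat >> /etc/hosts <<EOF
172.31.240.87 mon-1.ceph.ispsystem.net mon-1
172.31.240.84 mon-2.ceph.ispsystem.net mon-2
172.31.240.28 osd-1.ceph.ispsystem.net osd-1
172.31.240.44 osd-2.ceph.ispsystem.net osd-2
172.31.240.90 mds-1.ceph.ispsystem.net mds-1
EOF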
- Define a helper variable with the node names (for convenience):
# nodes=`getent hosts | grep ceph | grep -v 'adm' | awk '{print $3}' | xargs echo`
# echo $nodes
mon-1 mon-2 osd-1 osd-2 mds-1
- Copy the public key to the cluster nodes:
# for i in $nodes; do ssh-copy-id -i /ceph-client/id_rsa.pub ceph@$i ; done
- Edit the ~/.ssh/config file. The file template looks like the following:
# cat ~/.ssh/config
Host mon-1
    Hostname mon-1.ceph.ispsystem.net
    IdentityFile /ceph-client/id_rsa
    User ceph
Host mon-2
    Hostname mon-2.ceph.ispsystem.net
    IdentityFile /ceph-client/id_rsa
    User ceph
Host osd-1
    Hostname osd-1.ceph.ispsystem.net
    IdentityFile /ceph-client/id_rsa
    User ceph
Host osd-2
    Hostname osd-2.ceph.ispsystem.net
    IdentityFile /ceph-client/id_rsa
    User ceph
Host mds-1
    Hostname mds-1.ceph.ispsystem.net
    IdentityFile /ceph-client/id_rsa
    User ceph
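Before moving on, you can verify that passwordless access works, for example with a quick loop over the nodes (a sketch that relies on the $nodes variable and the ~/.ssh/config entries above):
# for i in $nodes; do echo -n "$i: "; ssh $i hostname; done
Each node should print its hostname without prompting for a password.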
Installing the software and creating the cluster
On the admin server:
- Install ceph-deploy:
# wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
# echo deb http://ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list
# apt-get update
# apt-get install ceph-deploy
- Initialize the monitor nodes:
# ceph-deploy new mon-1 mon-2
- Please note: if you want to use a single node for your Ceph storage cluster, you will need to change the default osd crush chooseleaf type setting from 1 (node) to 0 (device), so that OSDs can peer with other OSDs on the same node. Add the following line to your Ceph configuration file:
osd crush chooseleaf type = 0
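For illustration, in the ceph.conf generated by ceph-deploy new in the working directory, the line would typically be placed in the [global] section (a sketch; the other settings are created by ceph-deploy and are not shown here):
[global]
    # ... fsid, monitor settings and other values generated by ceph-deploy new ...
    osd crush chooseleaf type = 0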
- Install Ceph on the cluster nodes:
# ceph-deploy install --stable cuttlefish $nodes
OK
OK
OK
OK
OK
- Create the cluster monitors:
# ceph-deploy mon create mon-1 mon-2
- Get the cluster monitor keys:
# ceph-deploy gatherkeys mon-1
# ls -l
...
-rw-r--r-- 1 root root 72 Jul 12 05:01 ceph.bootstrap-mds.keyring
-rw-r--r-- 1 root root 72 Jul 12 05:01 ceph.bootstrap-osd.keyring
-rw-r--r-- 1 root root 64 Jul 12 05:01 ceph.client.admin.keyring
...
- Prepare the storage (in our example, each storage node has an empty /dev/sdb disk):
# ceph-deploy osd prepare osd-1:sdb osd-2:sdb
# ceph-deploy osd activate osd-1:sdb osd-2:sdb
- If the /dev/sdb disks on the storage nodes are not empty, clear them before running osd prepare. All data on the disks will be deleted:
# ceph-deploy disk zap osd-1:sdb osd-2:sdb
- Prepare the metadata node:
# ceph-deploy mds create mds-1
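Once all the daemons are deployed, it can be useful to check the overall cluster state from the admin server. This is only a sketch: it assumes the Ceph client utilities are also installed on the admin server and uses the ceph.conf and admin keyring collected in /ceph-client earlier in this article:
# apt-get install ceph-common
# cd /ceph-client
# ceph -c ./ceph.conf -k ./ceph.client.admin.keyring health
# ceph -c ./ceph.conf -k ./ceph.client.admin.keyring -s
A healthy cluster reports HEALTH_OK; warnings here usually point to monitors or OSDs that are not up yet.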