Before connecting Ceph storage to the VMmanager cluster, you should pre-configure the Ceph cluster nodes. This article provides general information about the installation. It is recommended that you create a Ceph cluster according to the official documentation.
Before creating a cluster, make sure that the equipment used meets the system requirements. It is recommended to use Ceph version 13.2.0 or later.
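Once the Ceph packages are installed (the installation steps are described below), you can confirm that the version meets this recommendation on any node:
ceph --version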
Requirements for cluster nodes
The following physical or virtual servers should be part of the cluster:
- data server (OSD);
- at least three monitor servers (MON);
- administrative server (ADM);
- monitoring service (MGR);
- metadata server (MDS). It is necessary if you use the CephFS file system.
Servers must meet the following requirements:
- do not use a VMmanager platform server or VMmanager cluster nodes as Ceph nodes;
- it is recommended to use servers located in the same rack and the same network segment;
- it is recommended to use a high-speed network connection between the cluster nodes;
- the same operating system must be installed on all servers;
- port 6789/TCP must be open on the monitor servers, and ports 6800 to 7300/TCP on the data servers;
- all cluster nodes must have an unmounted partition or disk reserved for Ceph storage (a quick check is shown below).
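A quick way to check for an unmounted disk is to list the block devices on each node. The /dev/sdb device name used throughout this article is only an example; your disk may be named differently:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
A device that is suitable for Ceph has an empty MOUNTPOINT column.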
Example of cluster node preparation
The example describes how to create a cluster in the 172.31.240.0/20 network using the following servers:
- ceph-cluster-1 with IP address 172.31.245.51. Designation — MON, OSD, ADM, MGR.
- ceph-cluster-2 with IP address 172.31.246.77. Designation — MON, OSD.
- ceph-cluster-3 with IP address 172.31.246.82. Designation — MON, OSD.
All servers
- Install Ceph software:
  - Execute the command:
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
  - Create the file /etc/yum.repos.d/ceph.repo and add the following lines to it:
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
  - Execute the command:
yum update
- Install NTP software. This prevents problems caused by system time drift:
yum install ntp ntpdate ntp-doc
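If the nodes run systemd (for example, CentOS 7, which the el7 repository paths in this example assume), you can also enable and start the service so that it keeps running after a reboot:
systemctl enable ntpd
systemctl start ntpd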
- Create a ceph user and set the necessary permissions:
useradd -d /home/ceph -m ceph
passwd ceph
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
- Create aliases for the cluster nodes in /etc/hosts:
172.31.245.51 ceph1.example.com ceph1
172.31.246.77 ceph2.example.com ceph2
172.31.246.82 ceph3.example.com ceph3
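To make sure the aliases resolve, you can ping any node by its short name, for example:
ping -c 3 ceph2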
- Add the Ceph services to the firewalld settings:
firewall-cmd --zone=public --add-service=ceph-mon --permanent
firewall-cmd --zone=public --add-service=ceph --permanent
firewall-cmd --reload
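You can verify that the services have been added to the active zone; the output should include ceph and ceph-mon:
firewall-cmd --zone=public --list-services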
Administrative server
- Install the ceph-deploy and python-setuptools packages:
yum install ceph-deploy python-setuptools
- Create SSH keys and copy them to all nodes of the cluster:
ssh-keygen
ssh-copy-id ceph@ceph1
ssh-copy-id ceph@ceph2
ssh-copy-id ceph@ceph3
- Add the following lines to the file ~/.ssh/config:
Host ceph1
  Hostname ceph1
  User ceph
Host ceph2
  Hostname ceph2
  User ceph
Host ceph3
  Hostname ceph3
  User ceph
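To confirm that passwordless access works with this configuration, you can open a test connection to each node, for example:
ssh ceph1 hostname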
- Create the my-cluster directory for the configuration and ceph-deploy files and change to it:
mkdir my-cluster
cd my-cluster
- Create the cluster configuration file:
ceph-deploy new ceph1 ceph2 ceph3
When using Ceph storage with a single cluster node, set the value of "osd_crush_chooseleaf_type" in the ceph.conf configuration file to 0.
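If the generated ceph.conf does not contain this parameter, it can be added in the same way as the network setting in the next step, for example:
echo "osd_crush_chooseleaf_type = 0" >> ceph.conf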
- Add information about the cluster network to the ceph.conf configuration file:
echo "public_network = 172.31.240.0/20" >> ceph.conf
- Install Ceph on the cluster nodes:
ceph-deploy install ceph1 ceph2 ceph3
- Deploy the monitoring service:
ceph-deploy mgr create ceph1
- Create and initialize the monitor servers:
ceph-deploy mon create-initial
- Copy the configuration files to the cluster nodes:
ceph-deploy admin ceph1 ceph2 ceph3
- Add data servers to the cluster:
ceph-deploy osd create --data /dev/sdb ceph1
ceph-deploy osd create --data /dev/sdb ceph2
ceph-deploy osd create --data /dev/sdb ceph3
Where /dev/sdb is the unmounted disk reserved for Ceph storage.
- If the disks have been used previously, erase /dev/sdb on the data servers before creating the OSDs:
ceph-deploy disk zap ceph1 /dev/sdb
ceph-deploy disk zap ceph2 /dev/sdb
ceph-deploy disk zap ceph3 /dev/sdb
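After the data servers have been added, you can check the state of the cluster from the administrative server. HEALTH_OK in the output means that the cluster is ready; the exact output depends on your configuration:
sudo ceph health
sudo ceph -s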