A cluster is a set of servers located at a single site. The distinctive features of such servers (cluster nodes) are their shared location and the high-speed data links between them. This article describes the requirements for cluster nodes.
Node homogeneity
We recommend that all nodes in a cluster be homogeneous in terms of network settings, routing, and software versions. This provides the best conditions for migrating virtual machines between cluster nodes.
The platform does not support mixed clusters containing both Red Hat-based (CentOS, AlmaLinux) and Debian-based nodes.
Virtualization support
A KVM cluster node must support CPU-level virtualization. To check whether an Intel or AMD CPU supports virtualization, run:
grep -P 'vmx|svm' /proc/cpuinfo
If the output is not empty, the CPU supports virtualization.
To use virtualization, make sure it is enabled in the server BIOS.
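The check above can be wrapped into a small script that also reports how many CPU threads expose the virtualization flags (a sketch; vmx is the Intel VT-x flag, svm the AMD-V flag):

```shell
#!/bin/sh
# Count CPU threads that advertise hardware virtualization flags:
# vmx = Intel VT-x, svm = AMD-V.
count=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
if [ "$count" -gt 0 ]; then
    echo "virtualization supported ($count threads)"
else
    echo "no virtualization flags: unsupported CPU or disabled in BIOS"
fi
```

A count of 0 on hardware that should support KVM usually means virtualization is disabled in the BIOS.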
Hardware requirements
The cluster node must be a physical server with the following characteristics:
Motherboard
We recommend using a server motherboard. A cluster node with a desktop motherboard may not work properly.
If you experience disk subsystem performance issues, enable the maximum performance power mode in the BIOS settings.
CPU
Intel and AMD processors with the x86_64 architecture are supported. Processors with the ARM architecture are not supported.
Disks and storage
When partitioning the disk, allocate as much space as possible for the root (/) directory.
Before adding a node to an existing cluster, configure on the server all the storage used in the cluster. LXD clusters require ZFS storage to be configured. Read more in LXD.
Software requirements
Operating system
The operating system (OS) requirements depend on the type of virtualization and the cluster network configuration.
Minor OS versions and OS kernel versions may differ between the cluster nodes.
Use an unmodified OS in its minimal edition, with no third-party repositories or additional services installed.
To keep the system software homogeneous, we recommend updating the OS on the cluster nodes regularly.
AlmaLinux
- For AlmaLinux versions below 8.8-3.el8, before adding a cluster node, follow the instructions in the Knowledge Base article Almalinux repositories GPG key validation error.
- Make sure that the nftables service is configured to autostart.
CentOS
Cluster nodes with CentOS 8 are not supported. If you have CentOS 8 installed, you can migrate to AlmaLinux 8 OS according to the instructions.
The CentOS 7 operating system:
- is not supported for new product installations;
- is supported for existing product installations until its EOL on June 30, 2024.
Software
For the cluster node to work correctly, do not change the default command prompt greeting in the .bashrc file.
In order for the platform to connect to the cluster node, the dmidecode and python3 software packages must be installed on the node. If this software is not included in the OS, install it manually.
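Before connecting the node, you can verify that these tools are present (a sketch; it only checks for the dmidecode and python3 executables and suggests installing the packages of the same name if they are missing):

```shell
#!/bin/sh
# Check for the executables the platform needs on the node.
missing=""
for tool in dmidecode python3; do
    if ! command -v "$tool" >/dev/null 2>&1; then
        missing="$missing $tool"
    fi
done
if [ -n "$missing" ]; then
    echo "missing:$missing (install via dnf or apt)"
else
    echo "all required tools are present"
fi
```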
Disabling SELinux service
The SELinux service is used as an additional security feature for the operating system. We recommend disabling the SELinux service, as it slows down the operation of the platform and may prevent its correct installation.
Once SELinux is disabled, the server resources will remain protected by the built-in discretionary access control service.
To disable SELinux:
- Check the service status:
  sestatus
- If the output contains enforcing or permissive:
  - In the /etc/selinux/config file, set the SELINUX parameter to disabled:
    sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config
  - Reboot the server.
OS update
After updating the OS on the cluster node, restart the libvirt service:
systemctl restart libvirtd
System time
Before connecting the node, the system time must be synchronized with an NTP server. To do this, configure synchronization using the chrony software.
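As a sketch, a minimal /etc/chrony.conf could look like this (the pool address is an example; use the NTP servers appropriate for your environment):

```
# Use a public NTP pool; replace with your preferred servers
pool pool.ntp.org iburst
# Step the clock if the initial offset is large
makestep 1.0 3
# Keep the hardware clock in sync with the system clock
rtcsync
```

After editing the file, restart the chronyd service and check the synchronization status with chronyc tracking.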
For servers in OVH data center
When installing the OS through the OVH control panel, enable the Install original kernel option. Use the original OS kernel for the cluster servers to work correctly.
Network settings
The network configuration of the cluster nodes must meet the following requirements:
- each server must have a unique hostname;
- in clusters with the "Switching" or "Routing" configuration type, the server must have access to the Internet. This is required to download OS templates and software packages from external sources;
- the IP address must be static, assigned on a physical interface of the server or in a VLAN, and set via the network interface configuration file (without using DHCP);
- if IPv6 addresses are allocated to VMs in a cluster with the "Switching" configuration type, one of the addresses of the IPv6 network must be assigned to the node interface;
- the default gateway must respond to ping;
- network interface names must contain only Latin characters.
See the requirements for cluster nodes with two network interfaces in Main and additional network.
If the server is located in a Hetzner data center, we do not recommend using the vSwitch feature. This feature limits the total number of MAC addresses used by physical and virtual server interfaces to 32.
/etc/hosts file
Make sure that the /etc/hosts file has an entry for the server in the format:
<server IP address> <server hostname>
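For example (the address and hostname are placeholders; use your server's actual values):

```
192.0.2.10 node01.example.com
```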
/etc/resolv.conf file
Make sure that the /etc/resolv.conf file has entries in the format:
nameserver <IP address of the DNS server>
If the IP address of the systemd-resolved local service (127.0.0.53) is specified as the DNS server, check that the DNS server addresses are specified in /etc/systemd/resolved.conf:
DNS=<servers list>
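systemd-resolved accepts a space-separated list of addresses. For example, with two public resolvers (example values; use the DNS servers of your environment):

```
DNS=8.8.8.8 1.1.1.1
```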
/etc/NetworkManager/NetworkManager.conf file
If the NetworkManager service manages DNS settings, it can delete the /etc/resolv.conf file. To disable DNS management, add a line to the main section of the /etc/NetworkManager/NetworkManager.conf file:
dns=none
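After the change, the [main] section of /etc/NetworkManager/NetworkManager.conf would look like this:

```
[main]
dns=none
```

Restart the NetworkManager service for the setting to take effect.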
Incoming connection settings
KVM virtualization
Allow incoming connections to the ports:
- 22/tcp — SSH service;
- 179/tcp, 4789/udp — Virtual networks (VxLAN);
- 5900-6900/tcp — QEMU VNC and SPICE. If access is provided only through the server with VMmanager, this port range needs to be open only to the network connecting the cluster nodes. If you are going to host more than 1000 VMs on a cluster node, follow the instructions in the knowledge base article How to expand the port range for VNC and SPICE?
- 16514/tcp — libvirt virtual machines management service;
- 49152-49215/tcp — libvirt migration services.
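As an illustration, the port list above could be expressed as an nftables ruleset fragment (a sketch only; merge it into your existing ruleset rather than applying it as-is, and keep the established-connections rule so active sessions are not dropped):

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif lo accept
        tcp dport { 22, 179, 16514 } accept
        tcp dport 5900-6900 accept
        tcp dport 49152-49215 accept
        udp dport 4789 accept
    }
}
```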
LXD virtualization
Allow incoming connections to the ports:
- 22/tcp — SSH service;
- 179/tcp, 4789/udp — Virtual networks (VxLAN);
- 8443/tcp — LXD container management service.