VMmanager OVZ is an operating-system-level virtualization management system. VMmanager OVZ runs on OpenVZ. This technology is based on the Linux kernel and makes it possible to run isolated copies of a selected operating system on a physical server. All virtual servers share a single Linux kernel.
The main utilities for OpenVZ management:
- vzctl — the main utility for container management;
- vzlist — provides information about containers;
- vzmigrate — container migration utility;
- vzmemcheck, vzcpucheck, vzcalc — provide information about resource usage;
- vzsplit — generates container configuration files;
- vzquota — manages container disk quotas.
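For example, a list of all containers on the node, including stopped ones, can be obtained with:
vzlist -a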
OpenVZ configuration
vz.conf is the OpenVZ global configuration file. It is located at /etc/vz/vz.conf and consists of lines in the format PARAMETER="VALUE".
VMmanager OVZ uses the VE_PRIVATE, VE_ROOT, and TEMPLATE values from this configuration file.
TEMPLATE=directory — the directory with container template data.
VE_ROOT=directory — the root mount point of the container. The value must contain the $VEID macro, which is replaced with the container's VE ID.
VE_PRIVATE=directory — the directory with files and directories specific to a particular container. The value must contain the $VEID macro, which is replaced with the actual numerical VE ID.
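An illustrative fragment of vz.conf with typical default paths (the actual values on a node may differ):
TEMPLATE="/vz/template"
VE_ROOT="/vz/root/$VEID"
VE_PRIVATE="/vz/private/$VEID"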
VMmanager OVZ tracks changes to these paths but does not automatically move already created containers and OS templates into the new directories. We recommend that you either use the standard values of the VE_PRIVATE, VE_ROOT, and TEMPLATE parameters, or change the paths right after installing VMmanager OVZ, before you start working with containers. The main requirement for container migration between cluster nodes is that the VE_PRIVATE and VE_ROOT paths must match on all cluster nodes. All the directories must exist in the file system of the cluster node. The TEMPLATE directory must contain the cache subdirectory, where OS templates are installed. When creating a new cluster node, VMmanager changes VE_PRIVATE, VE_ROOT, and TEMPLATE in the vz.conf file in accordance with the values on the other cluster nodes, and creates the required directories in the file system.
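For example, if TEMPLATE points to the typical default /vz/template, the required cache subdirectory can be created as follows:
mkdir -p /vz/template/cache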
Container configuration file
Sample configuration files are stored in the /etc/vz/conf directory. When creating a container, VMmanager uses the /etc/vz/conf/ve-ispbasic.conf-sample template. The template name is defined in the control panel code and cannot be modified. If the template is deleted, the file is generated again.
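For reference, the individual configuration file of a container is usually stored next to the samples as /etc/vz/conf/<CTID>.conf; for example, for a container with ID 102 it can be viewed with:
cat /etc/vz/conf/102.conf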
Container creation:
A container is created based on the template /etc/vz/conf/ve-ispbasic.conf-sample:
/bin/sh -c /usr/sbin/vzctl\ create\ CTID\ --ostemplate\ <ostemplate>\ --layout\ <layout>\ --config\ ispbasic
Parameters of the VM template (from the preset table of the vemgr database) and the data specified for the virtual machine are applied to the newly created container:
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --quotaugidlimit\ 2048\ --save' pid
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --ipadd\ \ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --hostname\ ct2.org\ --save'
Run '/bin/sh -c /usr/sbin/vzlist\ -H\ -o\ layout\ 102'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --diskspace\ 3000M\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --diskinodes\ 384000\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --swap\ 1024M\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --ram\ 512M\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --lockedpages\ 131072:unlimited\ --oomguarpages\ 131072:unlimited\ --vmguarpages\ 393216\ --privvmpages\ unlimited:unlimited\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --cpus\ 1\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --cpuunits\ 100\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --cpulimit\ 100\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --numproc\ unlimited\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --numfile\ unlimited\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --numtcpsock\ unlimited\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --numothersock\ unlimited\ --save'
Run '/bin/sh -c /usr/sbin/vzctl\ set\ 102\ --tcprcvbuf\ unlimited\ --save'
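The result can be checked with vzlist; for example (the available field names may differ slightly between vzlist versions):
vzlist -o ctid,hostname,status 102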
Resource management
Resource management consists of the following elements:
- two-level disk quota;
- fair CPU scheduling;
- user beancounters.
Two-level disk quota
The first level: a server administrator can set disk quotas for containers.
The second level: a container administrator can use common utilities inside a container to configure standard disk quotas of the operating system for users and groups.
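An illustrative example of both levels for a hypothetical container 101: the first command sets the container disk quota in the barrier:limit format, the second enables per-user and per-group quotas inside the container:
vzctl set 101 --diskspace 10G:11G --save
vzctl set 101 --quotaugidlimit 2048 --save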
Fair CPU scheduling
The first level: the scheduler selects a container to allocate the CPU time quantum depending on the cpuunits parameters for containers.
The second level: a standard Linux scheduler selects a process in the selected container to allocate the time quantum depending on standard CPU priorities.
A server administrator sets different cpuunits values for containers. CPU time is distributed based on the relative ratio of these values. The cpulimit parameter sets the maximum CPU time, in percent, available to a container. The cpus parameter sets the number of CPUs available to a container.
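For example (the container IDs are illustrative), under CPU contention container 102 receives twice as much CPU time as container 101, is limited to 50% of a single CPU, and sees 2 CPUs:
vzctl set 101 --cpuunits 1000 --save
vzctl set 102 --cpuunits 2000 --cpulimit 50 --cpus 2 --save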
User Beancounters
User Beancounters are a set of per-container counters and limits maintained for each container on the node. The resources being controlled and limited can be viewed in the /proc/user_beancounters file. There are five values for every resource: current usage, maximum usage (during the container's lifetime), barrier, limit, and fault counter. If a resource hits its limit, its fault counter is increased.
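The counters can be viewed on the cluster node (for all containers) or inside a container (its own values only); the held, maxheld, barrier, limit, and failcnt columns correspond to the five values listed above:
cat /proc/user_beancounters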
Container network configuration
Venet is the default virtual network device for containers. It provides a point-to-point connection between a container and the cluster node and switches packets based on the IP header. Venet drops outgoing packets whose source address does not correspond to an IP address of the container, and incoming packets whose destination address does not correspond to an IP address of the container.
Venet is created automatically on container start.
The following command is used to add an IP address to a container:
vzctl set CTID --ipadd <IP-address> --save
When an IP address is added, a script is run inside the container to configure it. IP addresses are added as aliases (venet0:0, etc.).
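For example, adding and then removing an address (the container ID and the IP address are illustrative):
vzctl set 102 --ipadd 192.0.2.10 --save
vzctl set 102 --ipdel 192.0.2.10 --save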
vzctl
vzctl performs the following operations:
- vzctl create CTID — creates a container;
- vzctl start CTID — runs a container;
- vzctl stop CTID — stops a container;
- vzctl restart CTID — restarts a container;
- vzctl snapshot CTID — makes a container snapshot;
- vzctl status CTID — shows the container status;
- vzctl set CTID — changes container parameters;
- vzctl destroy CTID — destroys a container.
CTID — container ID.
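A typical container lifecycle with these commands may look as follows (the CTID and the OS template name are illustrative):
vzctl create 103 --ostemplate centos-7-x86_64 --config ispbasic
vzctl start 103
vzctl status 103
vzctl stop 103
vzctl destroy 103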