Artem Senyuts, DevOps at Skywell Software · 04/03/2021 · #Popular #Tips · 13 min read

(A virtualization story from our DevOps engineer Artem Senyuts, with a few small nuances)

For a long time now, wherever you look and whatever you say, you cannot do without virtualization. The advantages and disadvantages of virtual machines versus bare-metal servers have been spelled out for every kind of service over and over, so we will not return to that topic. The question of the virtualization environments themselves is much more interesting: there may not be five hundred competing products, but there is no monopoly either. So when the moment came for us to optimize our services, servers, and infrastructure as a whole, the first question was: what are we going to build on?

When we discussed this, there was already an infrastructure deployed on a cluster of two Proxmox nodes. Why exactly we decided to rebuild it is not that important. What matters is that the desire had arisen, the tasks were set, and there was even a piece of metal to have fun with. As they say: let's get started!

And I started... by banging my forehead against the wall. What to take? One of the conditions was free software, so ESXi and Hyper-V were ruled out. Proxmox? I looked at it... looked again... poked at it... No, I didn't like it. If you disagree with me, that's your business; it was a matter of personal preference, and I decided against Proxmox. I didn't even look toward OpenVZ or LXC, since the infrastructure included Windows Server virtual machines. My friendship with Xen had been on the rocks for a while. So I decided that good old KVM would be the best solution. On top of that, a fresh version of CentOS had come out a couple of months earlier (at the time), and it was interesting to try it out.
So, it was decided: CentOS 8 + KVM.

We have a SuperMicro server with enough processors, RAM, and disks for the tasks at hand, and two network cards. We will use one to work with the hypervisor itself and give the second to the virtual environment. What else? I think that's it. Addresses are handed out via DHCP, and let's not forget about reservations either.

1. Installing CentOS

The server has no optical drive, and a PXE server for network booting would have to be deployed elsewhere. Therefore, we installed from a flash drive, with the image downloaded from the official site, centos.org. I didn't bother much with writing the ISO: since I had a Windows 10 laptop at hand, I downloaded Fedora Media Writer and assembled the installation flash drive with it. So what? Fedora is not all that different from CentOS, so where's the problem?

We choose the minimal install, because we don't need a whole bunch of unnecessary junk. I decided to enable both network interfaces at once: the first on DHCP, the second disabled.

I have two RAID arrays, a larger and a smaller one. The smaller one was left for the system itself; for the larger one, I immediately created a mount point, and we will give it to the hypervisor's storage.

Then everyone can have fun as they like. I updated, installed a set of my favorite packages, "colored" the bash prompt... I looked at what exactly the system offers out of the box and got to work. (During the initial setup I worked as root, so all commands are without sudo.)

systemctl disable --now firewalld
yum remove -y firewalld
yum install -y iptables iptables-services iptables-utils
yum install -y network-scripts

This was more familiar to me.

vim /etc/ssh/sshd_config

Check X11Forwarding; uncomment and enable it (X11Forwarding yes) if necessary. It will come in handy in one of the nuances.

Nuance One:

NetworkManager is, of course, all good and convenient, but I don't want to give it control of the interfaces.
Set up a reservation on the DHCP server and go to /etc/sysconfig/network-scripts/. There we write the interface configuration files (mine are eno1 and eno2):

vim ifcfg-eno1

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
NM_CONTROLLED=no
IPV4_FAILURE_FATAL=no
NAME=eno1
UUID=[*]
DEVICE=eno1
ONBOOT=yes

vim ifcfg-eno2

DEVICE=eno2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no

The line NM_CONTROLLED=no speaks for itself. NetworkManager, get out!

All in all, the system is ready for further action.

2. Installing KVM

Everything here is quite trivial and straightforward. In my case, I was confident that the processor supported virtualization. If in doubt, you can check in one of the following ways:

lscpu

In the output, look for the line related to virtualization. Or:

cat /proc/cpuinfo | egrep "(vmx|svm)"

You should get something like this (vmx is Intel hardware virtualization, svm is AMD's). If neither option shows anything, your path leads straight to the BIOS, or you are simply out of luck with the processor. If all is well, move on.

yum install -y libvirt qemu-kvm virt-install virt-viewer

The Second Nuance:

You can, of course, manage virtual machines from the console, but why, when a convenient graphical interface has long been available? That is why we set up X forwarding.

yum install -y virt-manager

3. We Need a Network (Third Nuance)

At some point, the ability to manage the host's network interfaces was removed from KVM's tooling; the only thing left is managing virtual networks. But what if we want to release our VMs into the general network? The answer is simple: a bridge! In my case, there were 4 VLANs in the network.
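The two checks above can be wrapped into a tiny helper. This is just a sketch of my own, not part of the original setup: the `virt_flag` function name is made up, and it only inspects whatever flags text you pass it (on a real host you would feed it the contents of /proc/cpuinfo).

```shell
#!/bin/sh
# Report which hardware-virtualization flag a cpuinfo dump contains.
# vmx -> Intel VT-x, svm -> AMD-V, neither -> nothing exposed to the OS.
virt_flag() {
    if echo "$1" | grep -qw vmx; then
        echo "Intel VT-x (vmx)"
    elif echo "$1" | grep -qw svm; then
        echo "AMD-V (svm)"
    else
        echo "none"
    fi
}

# On a real host: virt_flag "$(cat /proc/cpuinfo)"
virt_flag "flags : fpu vme de pse msr vmx sse"   # prints: Intel VT-x (vmx)
```

If it prints "none", head for the BIOS, exactly as described above.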
Consequently, the procedure was simple: create a separate interface for each VLAN, then bind a corresponding bridge to each interface. In practice, it looks like this:

cd /etc/sysconfig/network-scripts/
vim ifcfg-eno2.10

# Indicates that this is a virtual (tagged) interface
VLAN=yes
# The physical interface it is bound to
PHYSDEV=eno2
# The name of our virtual interface;
# after the dot, specify the tag of the required VLAN
DEVICE=eno2.10
BRIDGE=br010
ONBOOT=yes
NM_CONTROLLED=no

vim ifcfg-br010

DEVICE=br010
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
DELAY=0
# We can create a MAC with any MAC address generator
MACADDR=**:**:**:**:**:**

That's about it... The only thing left is to restart the network (systemctl restart network) or reboot the server itself. I prefer the second method: you will need to reconnect to it anyway. Although...

The Fourth Nuance

Connecting as root is disabled on my setup, and virt-manager is very convenient. Therefore, do not forget to add your user to the kvm, qemu, and libvirt groups:

usermod -aG kvm,qemu,libvirt [user]

(Note: no spaces after the commas.) Just substitute your own username. And now, definitely reboot!

4. After Assembly, File Off the Edges

We now have a working virtualization environment. But it should be not only efficient, it should also be convenient. Think of the old joke about the rocket the Soviet Union sold to the Chinese, and take up the file!

We connect:

ssh -X [user]@[server ip]
virt-manager

And we see a quite nice window.

To begin with, our path lies in Edit > Preferences, where we bring the tabs to the following form (this will be the fifth nuance). Configuring virtual machine devices in virt-manager is done through the graphical interface; however, it can also show the corresponding part of the virtual machine's configuration file in XML format.
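With four VLANs, the per-VLAN file pair above can be generated in a loop rather than written by hand. A minimal sketch under my own assumptions: the same eno2 uplink and br0NN naming as above, the MACADDR line left out (generate it separately), and OUT pointing at a scratch directory instead of /etc/sysconfig/network-scripts/ so you can dry-run it safely.

```shell
#!/bin/sh
# Generate ifcfg-eno2.<TAG> / ifcfg-br0<TAG> pairs for a list of VLAN tags.
OUT=${OUT:-./network-scripts}   # use /etc/sysconfig/network-scripts/ for real
mkdir -p "$OUT"

for TAG in 10 20 30 40; do
cat > "$OUT/ifcfg-eno2.$TAG" <<EOF
VLAN=yes
PHYSDEV=eno2
DEVICE=eno2.$TAG
BRIDGE=br0$TAG
ONBOOT=yes
NM_CONTROLLED=no
EOF
cat > "$OUT/ifcfg-br0$TAG" <<EOF
DEVICE=br0$TAG
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
DELAY=0
EOF
done

ls "$OUT"
```

Inspect the generated files, add the MACADDR lines, then copy them into place and restart the network.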
Setting the corresponding checkbox (Enable XML editing) allows editing the configuration file directly from the manager window, without going through the file system to the file itself. I also prefer to hide the manager in the tray.

By checking all the items on this tab, we can enable the display of more graphs in the View > Graph menu, which gives more information about the state of our virtual machines.

We will make our work in the virtual machine console easier by enabling scrolling and zooming, especially on a low-resolution monitor.

Then we connect to the virtualization host itself and go to Edit > Connection Details. In the Virtual Networks tab, delete everything, unless, of course, we are going to use virtual networks later; and if we do, we can always create new ones.

In the Storage tab, select default, switch to XML, and in the target section specify the full path to the directory where the virtual machine disks will be stored. In my case, that was the mount point of the larger RAID array created during installation.

Post Scriptum

That's it. Our virtual environment is ready to welcome its first tenants. You can safely ignore the nuances described in this article; you will then get the virtualization environment in its stock form. I just wanted to optimize it a little for my own convenience, and you won't find all of these nuances in every guide to deploying a KVM-based hypervisor.

Later, shortly before writing this article, I deployed another hypervisor with a similar configuration after the shocking news from Red Hat, but this time Fedora Server was chosen as the OS. This article is 100% applicable to it as well.
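For reference, the XML of a directory-backed pool like the default one looks roughly like this. This is a sketch, not my actual file: the /vmstorage path is a made-up placeholder for the mount point of the large array.

```xml
<pool type='dir'>
  <name>default</name>
  <target>
    <!-- full path to the directory holding the VM disk images -->
    <path>/vmstorage</path>
  </target>
</pool>
```

Only the target path needs changing; virt-manager fills in the rest of the pool definition itself.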