Understanding a hypervisor the simple way
How does a hypervisor work?
A hypervisor, or virtual machine monitor (VMM), is software, firmware or hardware that creates and runs virtual machines (VMs).
The hypervisor runs on a host server and manages the physical resources (e.g., CPU, memory, disk space, network bandwidth) on which virtual machines run. These resources are carved up virtually to create VMs.
Because the hypervisor keeps each VM isolated, multiple operating systems and applications can run side by side without one VM interfering with another. Should a guest (VM) operating system require an update, a single VM can be updated, or every VM running the same operating system.
Configuring a VM takes just a few clicks; using the hypervisor, simply select the following:
- The guest OS to run on the VM
- The number of processors required
- The amount of memory
- The amount of disk space
- The number of network ports
- The IP address and login credentials
The VM will then boot and run as a guest on the hypervisor.
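As a rough illustration, the selections above map onto a handful of parameters. The sketch below models them as a plain Python dataclass with a sanity check against the host's resources; the class and field names are hypothetical, not any hypervisor's real API.

```python
from dataclasses import dataclass

@dataclass
class VMSpec:
    """Hypothetical VM specification mirroring the choices above."""
    guest_os: str
    vcpus: int
    memory_gb: int
    disk_gb: int
    nic_count: int
    ip_address: str

    def validate(self, host_cores: int, host_memory_gb: int) -> None:
        """Reject specs the host plainly cannot satisfy."""
        if self.vcpus > host_cores:
            raise ValueError("more vCPUs requested than the host has cores")
        if self.memory_gb > host_memory_gb:
            raise ValueError("more memory requested than the host has")

vm = VMSpec("ubuntu22.04", vcpus=4, memory_gb=8, disk_gb=80,
            nic_count=1, ip_address="10.0.0.15")
vm.validate(host_cores=16, host_memory_gb=64)  # passes: within host limits
```

In a real hypervisor the same parameters end up in a domain definition (e.g., libvirt XML or a Hyper-V VM configuration) rather than a dataclass, but the shape of the request is the same.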
There are two types of hypervisor, imaginatively named Type 1 and Type 2:
- Type 1 hypervisors run on bare metal.
- Type 2 hypervisors run on top of an OS.
Both have their pros and cons, which we'll delve into below.
Type 1 hypervisors have direct access to the underlying hardware, with no other software (such as an OS or drivers) in between. This makes Type 1 hypervisors more efficient and better performing than their Type 2 counterparts.
The way Type 1 hypervisors operate also makes them highly secure. Security flaws and vulnerabilities often inherited from an OS are absent from bare-metal hypervisors, because there is no underlying OS to attack.
This isolation from malicious software and activity keeps every guest (VM) safe.
Virtualized systems host at least one VM with an OS and management software; admins can manage physical systems using tools such as Microsoft System Center.
Many of the most popular hypervisors for enterprise applications are Type 1, including:
- Microsoft Hyper-V
- VMware ESX/ESXi
- Oracle VM server for x86
- Citrix XenServer
A Type 2 hypervisor is normally installed on top of an existing OS, which is why it's called a hosted hypervisor. It relies on the host machine's pre-existing OS to manage calls to CPU, memory, storage and network resources.
Type 2 hypervisors originate from the early days of x86 virtualization, when systems already ran an OS and the hypervisor was added as a higher software layer.
For this reason, Type 2 hypervisors aren't commonly used for data centre deployments; they're found more on client or end-user systems, where security and performance are less critical.
Popular Type 2 hypervisors include:
- VMware Workstation
- VMware Fusion
- Oracle VM VirtualBox
- Oracle Solaris Zones
- Microsoft Virtual PC
Hypervisor Data Protection
Hypervisors are an efficient and feature-rich solution, and the VMs they host must be protected. With the advent of tools including VMware vMotion, HA, and DRS, users gained the ability to provide VM high availability and migrate compute workloads dynamically. The only caveat was the reliance on centralized storage, which tied the compute and storage paths together.
Many hypervisors allow the creation of clone volumes, or of snapshots that can be mounted read-only or read/write.
It is normal to back up at the hypervisor level rather than per VM, because of the difficulty of managing system resources through the host machine, although per-VM backup can be done by loading an agent on each VM.
Another method is storage-based replication: replicate the storage volumes across the network, then migrate or copy the VM to another host machine. Platforms such as VMware vSphere Replication or Hyper-V Replica allow the hypervisor itself to perform the replication.
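The snapshot idea can be illustrated with a toy Python model: a snapshot is a point-in-time view of the volume's block map, so later writes to the live volume don't disturb it. This is a simplification (real hypervisors share blocks copy-on-write rather than copying the map eagerly), and all names here are illustrative.

```python
class Volume:
    """Toy block volume: maps block index -> data."""

    def __init__(self):
        self.blocks = {}
        self.snapshots = []

    def write(self, index, data):
        self.blocks[index] = data

    def read(self, index):
        return self.blocks.get(index)

    def take_snapshot(self):
        # A shallow copy of the block map gives a point-in-time view.
        # Production systems do this lazily via copy-on-write instead.
        snap = dict(self.blocks)
        self.snapshots.append(snap)
        return snap

vol = Volume()
vol.write(0, b"boot")
snap = vol.take_snapshot()
vol.write(0, b"patched")       # live volume diverges from the snapshot
assert snap[0] == b"boot"      # snapshot still shows the old data
assert vol.read(0) == b"patched"
```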
Hypervisors for VDI
The COVID-19 pandemic has changed the infrastructure landscape quite substantially, and it presents the perfect opportunity to use a hypervisor for virtual desktops.
Virtual desktop hypervisors can run hundreds or thousands of desktop machines. Persistent VDI (Virtual Desktop Infrastructure) gives each user a customized desktop whose changes are saved; non-persistent VDI provides the same clean desktop every time the machine is rebooted.
In both cases the desktop image is delivered over the network to an endpoint device, which could be a traditional PC, thin client or mobile device. Because all the processing is done on the server, the user experience is indistinguishable from a normal desktop computer.
A VDI infrastructure benefits the business by extending the lifespan of ageing desktop machines. Another key benefit is security, as no data resides on the endpoint device, and anti-virus software and protection run on the server rather than on individual client machines. Finally, management is far simpler, as the same desktop image is deployed across the entire VDI estate.
Whilst VDI provides many benefits, including reduced desktop deployment costs, the desktop data now resides centrally on storage arrays, and capacity needs can grow significantly with the number of users. Another downside can be network performance, as VDI isn't particularly good at handling motion on the desktop, although solutions are starting to emerge for collaborative desktop working with video, 3D, CGI, CAD/CAE and animation.
The world's most popular virtualization software is VMware, which holds the largest market share, followed by Microsoft Hyper-V, whose share is likely to grow with Windows Server 2022. VMware is normally licensed per CPU, with annual payments for support, software updates and additional software functionality.
Microsoft provides Hyper-V as part of its operating system, priced per core rather than per CPU as was previously the case. Pricing depends on the Windows Server 2022 edition and can be found here.
There are free hypervisors available, one of the most popular is Linux KVM.
A business might want to migrate from its current hypervisor to an alternative, but converting VMs to another hypervisor's format can be problematic.
To overcome this, StarWind provides V2V Converter, which allows migration between the following VM formats: VMDK and VHD/VHDX.
StarWind V2V Converter supports all industry-standard hypervisors including Microsoft Hyper-V, VMware ESXi, Citrix XenServer, and KVM (coming soon).
By converting VMs from and to any selected format, it allows easy migration between different hypervisors.
Best of all, it's FREE and available to download from here.
Consolidation in the data centre
Before virtualization, the drive to reduce the complexity of managing multiple standalone servers had already begun with the advent of multi-core processors.
For the first time, multiple applications could run on a single server. The next phase was the move away from DAS (direct-attached storage) to a SAN/NAS (storage area network / network-attached storage) infrastructure, sometimes referred to as storage virtualization. This meant data centre servers could be much smaller (1U/2U), while their storage became highly redundant and delivered greatly increased performance.
The next milestone was VMware's introduction of virtualization in 1999, with VMware Workstation. So the hypervisor isn't new: it has been with us for more than 20 years, but it's just as powerful and even more useful given the requirements and demands of modern business.
One of the key potential problems with a hypervisor is performance. If 200 VMs are running, each one makes I/O requests against the server and storage; this is where many virtualized deployments fail through poor planning.
How a virtualized environment is configured is based on several factors:
- Number of available CPU compute cores
- Amount of available memory per server
- Number of Ethernet network switches
- Number of storage area network switches
- Number of required network ports
- Ethernet network speed
- Storage network speed
- Single or multiple storage pools
- Number of storage controllers
- Types of storage
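As a back-of-the-envelope sketch of how those factors interact, the hypothetical Python function below estimates how many VMs a single host can carry from its cores and memory alone. The 4:1 CPU overcommit ratio is an illustrative assumption, not a recommendation, and real sizing must also account for the storage and network factors listed above.

```python
def max_vms_per_host(host_cores, host_memory_gb,
                     vcpus_per_vm, memory_gb_per_vm,
                     cpu_overcommit=4.0):
    """Rough ceiling on VM count for one host.

    cpu_overcommit reflects that mostly-idle vCPUs can share physical
    cores; memory is NOT overcommitted here. Ratios are illustrative.
    """
    by_cpu = int(host_cores * cpu_overcommit // vcpus_per_vm)
    by_memory = int(host_memory_gb // memory_gb_per_vm)
    return min(by_cpu, by_memory)

# A 32-core, 512 GB host running 2-vCPU / 16 GB VMs:
print(max_vms_per_host(32, 512, 2, 16))  # prints 32: memory is the limit here
```

Whichever resource runs out first sets the ceiling, which is exactly why undersized memory or storage bandwidth sinks so many deployments.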
It's imperative this part is done properly; contact us and we'll leverage our vendor partnerships to ensure no slip-ups.
Why use a hypervisor?
Virtualization provides huge savings in data management, power, cooling, storage utilization, computing resources and data availability.
The hypervisor is software that interacts with the physical hardware, and this can be a problem, as many virtualized deployments are a DIY affair.
To provide a completely seamless virtualized environment, the hypervisor needs to be able to interact directly with all of the following:
- Network Switches and Ports
- Data Storage
This is called "hyper-convergence" or a "hyper-converged infrastructure" (HCI), with everything controlled through the hypervisor. So rather than take the DIY approach, there are several HCI solutions that allow control of the complete software and hardware stack.
HCI solutions are faster to deploy, fully tested and approved, and work out of the box. Purchasing the right HCI solution involves knowing how many VMs will be running and what type of workload they carry, e.g. office applications, databases, OLTP.
The problem with hypervisors
A hypervisor typically resides on server hardware, which in turn sits on a network that connects to pools of storage. Each of these component parts (software, servers, networking and storage) has its own management layer, and this adds to the complexity.
Each component needs to be upgraded individually, and this takes time.
The latest version of the VMware Product Guide is over 70 pages long and includes dozens of products and more than twenty bundles and suites to choose from.
Due to this complexity, VMware publishes a knowledge base article that defines the multi-step process and order of operations required just for software upgrades. Though sold in bundles or suites, most of the tools are loosely integrated, independent software packages (many from acquisitions) with their own management consoles and software life-cycles.
All these component parts have their own security feature set and functionality.
The network bottleneck
All-flash arrays have massively changed the way applications perform, with colossal IOPS and near-instantaneous response, and throughput running into thousands of MB/s.
A typical SSD has a read speed of around 500 MB/s and a write speed of around 350 MB/s; it takes surprisingly few such drives to saturate the available network bandwidth.
(Table: SSDs required to saturate network bandwidth)
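The arithmetic behind that table is straightforward. The sketch below assumes 1 Gb/s carries roughly 125 MB/s and ignores protocol overhead, so real-world figures would be slightly lower; the function name is illustrative.

```python
import math

def drives_to_saturate(link_gbps, drive_mbps=500):
    """How many SSDs, reading in parallel, fill a network link.

    Assumes 1 Gb/s ~= 125 MB/s and no protocol overhead.
    """
    link_mbps = link_gbps * 125
    return math.ceil(link_mbps / drive_mbps)

for speed in (10, 25, 40, 100):   # common Ethernet speeds, in Gb/s
    print(f"{speed} GbE: {drives_to_saturate(speed)} drives")
# 10 GbE: 3, 25 GbE: 7, 40 GbE: 10, 100 GbE: 25
```

Three SATA SSDs can already saturate a 10 GbE link, which is why the network, not the drives, is so often the bottleneck in flash-backed virtualized environments.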
The latest generation of flash technology, NVMe, offers on average 5x the performance of a SATA SSD, so network bandwidth is critical when designing a virtualized environment.
Next is 3D XPoint memory, claimed to be up to 1,000x faster than NAND flash. It is currently under development and due to ship within the next one to two years.
The next phase of Virtualization
Whilst HCI goes above and beyond the initial deployments of a virtualized infrastructure, there are limits to how far it can scale for performance and capacity, because an HCI is built from component blocks (data storage arrays, network switches, servers, etc.), and all these component parts introduce latency and network bandwidth issues.
One of our solution partners is Nutanix, which takes a completely different, and ingenious, approach to building an HCI. Whilst Nutanix can use hypervisors from Microsoft (Hyper-V), VMware (ESXi) and Citrix (XenServer), it also offers its own free hypervisor, Acropolis (AHV), so there are no additional licensing costs.
What is Nutanix?
Nutanix is a converged storage, compute and virtualization platform that provides a distributed and massively scalable cluster ready to run any application out of the box.
With Acropolis and AHV, virtualization is tightly integrated into the Enterprise Cloud OS rather than being layered on as a standalone product that needs to be licensed, deployed and managed separately.
Common tasks such as deploying, cloning and protecting VMs are managed centrally through Nutanix Prism, rather than utilizing disparate products and policies in a piecemeal strategy.
Acropolis provides enterprise-grade VM-centric storage for virtualized applications. Unlike traditional storage solutions that were built in a pre-virtualization era, operations in Acropolis are optimized to work at a granularity of a single VM or vDisk.
Additionally, complex storage operations such as LUN provisioning, zoning and masking are non-existent in Acropolis enabling deployment of highly available storage with just a few clicks.
It takes a completely new and different approach to overcome the issues around latency, bandwidth, scalability and performance.
Nutanix uses the idea of compute nodes as building blocks allowing the cluster to scale for performance and capacity as and when required.
Like Lego, just a little more complex.
Each Nutanix compute node contains:
- Computing power
- Data storage
- Network ports
The beauty of Nutanix is that not all compute nodes need to be identical; they don't even need to be from the same vendor, although we advise that they are, for support purposes.
Does it matter which hypervisor you choose?
Well, no, not really; providing applications are running and performing as expected, it shouldn't really matter.
Nutanix virtualization offers an attractive alternative when streamlining operations and driving costs out of the data centre.
With thousands of deployments worldwide, Nutanix provides an open platform for virtualization, network virtualization, security, and application mobility.
When combined with comprehensive operational insights and virtualization management from Nutanix Prism, Nutanix provides a complete solution for virtualisation and enterprise cloud.
By adopting Nutanix, an organization can not only eliminate the direct costs associated with hypervisor licensing, but also drive down soft costs and reduce the OpEx associated with virtualization.
The Nutanix Enterprise Cloud OS offers multiple advantages versus VMware vSphere:
- Platform Security
- Application Security
- Analytics
- Automation & Orchestration