How does a hypervisor work?
A hypervisor, or virtual machine monitor (VMM), is software, hardware or firmware that is used to create and run virtual machines (VMs).
A hypervisor allows a host server to share resources with VMs by carving up and distributing the available resources, e.g. memory or processing power.
It is the hypervisor's job to carve up the underlying hardware and provide it to each VM depending on the application the VM is running. By doing this, the hypervisor manages system resources far more efficiently than a standalone server and can provide extra processing power, memory, disk space and network ports as and when required from the resources available on the host machine.
As a hypervisor can run multiple operating systems and applications, each VM is isolated and cannot interfere with another VM, as everything is controlled by the hypervisor. Should a guest (VM) operating system require an update, the IT manager can update just that VM, or all the VMs running the same operating system.
When it comes to installing your hypervisor, things can be a little more complicated and there are some limitations. To discuss this with one of our experts contact us here.
Post installation, things really are a breeze. Configuring a VM takes just a few clicks; using the hypervisor, simply select the following:
- Choose the guest OS the VM will run.
- Set the number of processors required.
- Set the amount of memory.
- Set the amount of disk space.
- Set the number of network ports.
- Set the IP address and login credentials.
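To make the selections above concrete, the sketch below is a hypothetical Python helper (not a real hypervisor API) that renders those choices into a libvirt-style domain XML fragment; the class and field names are our own illustration of what the hypervisor records behind those few clicks:

```python
from dataclasses import dataclass

@dataclass
class VMConfig:
    """Selections made in the hypervisor UI when creating a VM (illustrative)."""
    name: str
    guest_os: str
    vcpus: int
    memory_mb: int
    disk_gb: int
    nics: int

    def to_domain_xml(self) -> str:
        # Render a libvirt-style <domain> fragment from the selections.
        # A real hypervisor generates far more detail than this sketch.
        nic_xml = "\n".join(
            "    <interface type='network'/>" for _ in range(self.nics)
        )
        return (
            f"<domain type='kvm'>\n"
            f"  <name>{self.name}</name>\n"
            f"  <vcpu>{self.vcpus}</vcpu>\n"
            f"  <memory unit='MiB'>{self.memory_mb}</memory>\n"
            f"  <devices>\n"
            f"    <disk type='file' device='disk'/>\n"
            f"{nic_xml}\n"
            f"  </devices>\n"
            f"</domain>"
        )

# Hypothetical VM: Ubuntu guest, 4 vCPUs, 8 GB RAM, 100 GB disk, 2 NICs.
cfg = VMConfig("web01", "Ubuntu 22.04", vcpus=4, memory_mb=8192, disk_gb=100, nics=2)
print(cfg.to_domain_xml())
```

The point is simply that each UI choice maps onto one line of the guest definition the hypervisor keeps for the VM.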
The VM will then run, appearing to the user as a physical machine while actually running in a virtual environment.
There are two types of hypervisor, imaginatively named Type 1 and Type 2.
Type 1 hypervisors run on bare metal; Type 2 hypervisors run on top of an OS. Both have their pros and cons, which we'll delve into below.
Type 1 hypervisors have direct access to the underlying hardware, with no intervening software such as an OS or its drivers. This makes Type 1 hypervisors more efficient, and better performing, than their Type 2 counterparts.
Due to the nature of how Type 1 hypervisors operate, they're also highly secure. Security flaws and vulnerabilities that are often inherent in OSs are absent from bare-metal hypervisors, because there is no host OS present to attack.
This isolation from malicious software and activity keeps every guest (VM) safe.
In many cases, the virtualized system hosts at least one VM with an OS and management software, which enables admins to manage the physical system using system management tools such as Microsoft System Center.
Many of the most popular hypervisors for enterprise applications are Type 1, including:
- VMware ESXi
- Microsoft Hyper-V
- Citrix XenServer
- KVM
A Type 2 hypervisor is typically installed on top of an existing OS. It's called a hosted hypervisor because it relies on the host machine's pre-existing OS to manage calls to CPU, memory, storage and network resources.
Type 2 hypervisors originate from the early days of x86 virtualisation, when existing systems already ran OSs and the hypervisor was added as a higher software layer.
The purpose of Types 1 and 2 is the same, but the presence of an OS beneath a Type 2 introduces unavoidable latency: all hypervisor activity has to pass through the host OS, which becomes a bottleneck.
Additionally, if the OS software isn’t fully secure, this could compromise the whole environment!
You might've guessed that this means Type 2 hypervisors aren't commonly used for data centre deployments; they're used more for client or end-user systems, where security and performance are less critical.
Type 2 hypervisors are mainly:
- VMware Workstation
- VMware Fusion
- Oracle VM VirtualBox
- Oracle VM Server for x86
- Oracle Solaris Zones
- Microsoft Virtual PC
Hypervisor Data Protection
The hypervisor became a very efficient and feature-filled solution. With tools such as VMware vMotion, HA and DRS, users gained the ability to provide VM high availability and migrate compute workloads dynamically. The only caveat was the reliance on centralised storage, which caused the compute and storage paths to merge.
Many hypervisors allow the creation of clone volumes or allow for creation of snapshots that can be mounted as read only or read/write.
It is normal to back up at the hypervisor level rather than inside each VM, due to the difficulty of managing backup agents and their resource use on every guest, although loading an agent on each VM remains an option.
Another method is to use storage-based replication to replicate the storage volumes across the network and then migrate or copy the VM to another host machine. The last method is to let the hypervisor perform the replication, as with VMware vSphere Replication or Hyper-V Replica.
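To make the snapshot idea above concrete, here is a toy Python model (not any vendor's implementation) of a copy-on-write snapshot: after the snapshot is taken, writes go to a delta layer, while unmodified blocks are still read from the base volume, which is why the base can be mounted read-only for backup while the VM keeps running:

```python
class Volume:
    """A trivially simple block volume: block number -> data."""
    def __init__(self, blocks: dict):
        self.blocks = blocks

class CowSnapshot:
    """Copy-on-write snapshot: reads fall through to the base volume
    until a block has been overwritten in the delta layer."""
    def __init__(self, base: Volume):
        self.base = base
        self.delta = {}

    def read(self, block: int) -> bytes:
        # Prefer the delta; otherwise the block is unchanged since the snapshot.
        return self.delta.get(block, self.base.blocks.get(block, b""))

    def write(self, block: int, data: bytes) -> None:
        # Writes never touch the base, so the base stays intact
        # (e.g. mounted read-only for a backup job).
        self.delta[block] = data

base = Volume({0: b"boot", 1: b"data-v1"})
snap = CowSnapshot(base)
snap.write(1, b"data-v2")
print(snap.read(0))      # unchanged block comes from the base
print(snap.read(1))      # overwritten block comes from the delta
print(base.blocks[1])    # base remains intact
```

Real hypervisor snapshots work on disk-image chains rather than an in-memory dict, but the read/write split is the same principle.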
Hypervisors for VDI
The Covid-19 pandemic has changed the infrastructure landscape quite substantially, and it presents the perfect opportunity to use a hypervisor for virtual desktops.
Virtual desktop hypervisors allow you to run hundreds or thousands of desktop machines, delivered as either persistent VDI (Virtual Desktop Infrastructure), which gives each user a customised desktop whose changes are saved, or non-persistent VDI, which provides the same clean desktop every time the machine is rebooted.
In both instances the desktop image is delivered over a network to an endpoint device. The user experience is indistinguishable from a normal desktop computer as all the processing is done on the server. An endpoint device could be a traditional PC, thin client or mobile device.
A VDI infrastructure brings benefits to the business by extending the lifespan of ageing desktop machines. Another key benefit is security, as no data resides on the endpoint device. Also, anti-virus software and protection is run on the server, rather than individual client machines. Finally, management is far simpler as the same desktop image is deployed across the entire VDI estate.
Whilst VDI provides many benefits, including reduced desktop deployment costs, the VDI data now resides centrally on storage arrays, and this footprint can grow significantly with the number of users. Another downside can be network performance, as VDI isn't particularly good at handling motion on the desktop, although solutions are starting to emerge for collaborative desktop work using video, 3D, CGI, CAD/CAE and animation.
Are you running a work from home environment that could benefit from the unique benefits of a virtualised environment? If so, let’s chat!
The world's most popular virtualisation software is VMware, with the largest market share, followed by Microsoft Hyper-V, whose share is likely to increase with Windows Server 2016. VMware is normally licensed per CPU, with annual support/software updates and additional software functionality paid for on top.
Microsoft provides Hyper-V as part of its operating system, and it is now priced per core rather than per CPU as was previously the case. Pricing depends on the Windows Server 2016 edition and can be found here.
Depending on the datacentre environment, the cost of deploying and maintaining a virtual environment can be considerable.
There are free hypervisors available, one of the most popular is Linux KVM.
A business might want to migrate from its current hypervisor to an alternative, but converting VMs to another hypervisor's format can be problematic.
To overcome this, StarWind, a company we work with, provides V2V Converter, which allows migration between the following VM formats: VMDK and VHD/VHDX.
StarWind V2V Converter supports all industry-standard hypervisors including Microsoft Hyper-V, VMware ESXi, Citrix XenServer, and KVM (coming soon).
By converting VMs from and to any selected format, it allows easy migration between different hypervisors.
Best of all it’s FREE and available to download from here.
Consolidation in the datacentre
Before virtualisation, the drive to reduce the complexities of managing multiple standalone servers had begun with the advent of multi-core processors.
For the first time, multiple applications could be run on a single server. The next phase was to move away from DAS (direct attached storage) to a SAN/NAS (Storage Area Network/Network Attached Storage) infrastructure, sometimes referred to as storage virtualisation. This meant servers in the datacentre could be much smaller (1U/2U), while the storage behind them would be highly redundant and deliver greatly increased performance.
The next phase was virtualisation itself, introduced by VMware in 1999 with VMware Workstation. So the hypervisor isn't new; it has been with us for nearly 20 years, but it's just as powerful and even more useful given the requirements and demands of modern business.
One of the key potential problems with a hypervisor is performance. If you are running 200 VMs, each VM is going to be making I/O requests on the server and storage, and this is where many hypervisor deployments fail.
How a virtualised environment is configured is based on several factors:
- Number of available CPU compute cores
- Amount of available memory per server
- Number of Ethernet network switches
- Number of storage area network switches
- Number of required network ports
- Ethernet network speed
- Storage network speed
- Single or multiple storage pools
- Number of storage controllers
- Types of storage
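As a rough illustration of how the first two factors above interact, the sketch below estimates how many VMs a host can support from core and memory headroom alone. The function, its figures and the overcommit ratio are hypothetical; a real sizing exercise must also account for the network and storage factors listed:

```python
import math

def max_vms(host_cores: int, host_mem_gb: int,
            vm_vcpus: int, vm_mem_gb: int,
            cpu_overcommit: float = 4.0) -> int:
    """Estimate VM count from compute headroom only (illustrative).

    cpu_overcommit: vCPUs scheduled per physical core; hypervisors
    commonly allow more than 1 because most VMs idle much of the time.
    Memory is deliberately not overcommitted in this sketch.
    """
    by_cpu = math.floor(host_cores * cpu_overcommit / vm_vcpus)
    by_mem = math.floor(host_mem_gb / vm_mem_gb)
    # The tighter of the two constraints wins.
    return min(by_cpu, by_mem)

# Hypothetical host: 32 cores, 512 GB RAM; VMs sized at 4 vCPU / 16 GB.
print(max_vms(32, 512, 4, 16))  # both constraints allow 32 -> 32
```

In practice the estimate then has to be checked against network ports, switch bandwidth, storage controllers and storage pool throughput, which is exactly why the full list of factors matters.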
It's imperative this part is done properly; contact us and we'll leverage our partnerships with vendors to ensure you don't slip up.
Why use a hypervisor?
Virtualisation provides huge savings in data management, power, cooling, storage utilisation, computing resources and data availability.
The hypervisor is software that interacts with the physical hardware, and this can be a problem because many virtualised deployments are a DIY affair.
To provide a completely seamless virtualised environment, the hypervisor needs to be able to interact directly with all of the below:
- Network Switches and Ports
- Data Storage
This is called “Hyper Convergence” or a “Hyper Converged Infrastructure” and everything is controlled through the hypervisor. So rather than take the DIY approach there are several hyper-converged infrastructures (HCI) that allow control of the complete software and hardware stack.
HCIs are faster to deploy, fully tested and approved, and work out of the box. Purchasing the right HCI solution involves knowing how many VMs you will be running and what type of workload, i.e. office applications, database, OLTP, etc.
The problem with hypervisors
The hypervisor typically resides on server hardware, which in turn sits on a network that connects to pools of storage. Each of these component parts (software, servers, networking and storage) has its own management layer, and this adds to the complexity.
Each component needs to be individually upgraded, and this takes time. If you've worked with VMware, you are most likely well aware of how complex the VMware ecosystem has become.
The latest version of the VMware Product Guide is over 70 pages long and includes dozens of products and more than twenty bundles and suites to choose from.
Due to this complexity, VMware publishes a knowledge base article that defines the multi-step process and order of operations required just for software upgrades. Though sold in bundles or suites, most of the tools are loosely integrated independent software packages (many from acquisitions) with their own management consoles and software life-cycles.
All of these component parts have their own security feature set and functionality.
The network bottleneck
Flash arrays have massively changed the way applications perform, delivering colossal IOPS and instantaneous performance running into the thousands of MB/s!
If we take a typical SSD with a read speed of 500 MB/s and a write speed of 350 MB/s, the table below shows how many drives it would take to saturate the network.
**SSDs required to saturate network bandwidth**

| Controller connectivity | Available network bandwidth | Read I/O | Write I/O |
|---|---|---|---|
| Dual 4Gb FC | 8Gb ≈ 1GB/s | 2 | 3 |
| Dual 8Gb FC | 16Gb ≈ 2GB/s | 4 | 5 |
| Dual 16Gb FC | 32Gb ≈ 4GB/s | 8 | 11 |
| Dual 32Gb FC | 64Gb ≈ 8GB/s | 16 | 22 |
| Dual 1Gb ETH | 2Gb ≈ 0.25GB/s | 1 | 1 |
| Dual 10Gb ETH | 20Gb ≈ 2.5GB/s | 5 | 7 |
The latest generation of flash technology, NVMe, delivers on average 5x the performance of an SSD. Divide the drive counts in the two right-hand columns of the table above by five, and that is how many next-generation drives will saturate the network!
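The table's arithmetic can be reproduced directly. The sketch below computes how many drives of a given sustained speed it takes to fill a link, rounding up; note that the original table rounds a few write-side figures down instead, so exact counts depend on the rounding convention chosen:

```python
import math

def drives_to_saturate(link_gbps: float, drive_mb_s: float) -> int:
    """Number of drives needed to fill a link, rounding up.

    link_gbps:  raw link speed in gigabits/s (sum both ports of a dual link).
    drive_mb_s: sustained throughput of one drive in MB/s.
    """
    link_mb_s = link_gbps * 1000 / 8   # gigabits -> megabytes per second
    return math.ceil(link_mb_s / drive_mb_s)

# Dual 4Gb FC = 8 Gb/s ~= 1000 MB/s, with the article's 500/350 MB/s SSD:
print(drives_to_saturate(8, 500))      # read: 2 drives
print(drives_to_saturate(8, 350))      # write: 3 drives

# NVMe at roughly 5x SSD throughput needs about a fifth as many drives:
print(drives_to_saturate(8, 500 * 5))  # a single drive saturates dual 4Gb FC
```

A single modern drive filling a dual 4Gb FC link is exactly the network bottleneck the next paragraphs describe.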
Moving on from NVMe, we have 3D XPoint memory, currently in development and due to ship within the next 2-3 years. This memory is claimed to be up to 1,000x faster than NAND flash.
As we adopt these newer technologies the network connectivity will become more of an issue for both latency and network bandwidth.
The next phase of Virtualisation
Whilst HCI goes above and beyond the initial deployments of a virtualised infrastructure, there are limits to how far you can scale for performance and capacity, because a traditional HCI is built from component blocks (data storage arrays, network switches, servers, etc.), and all of these component parts introduce latency and network bandwidth issues.
One of our partners is Nutanix. They take a completely different, and ingenious, approach to building an HCI. Whilst Nutanix can use hypervisors from Microsoft (Hyper-V), VMware (ESXi) and Citrix (XenServer), it also offers its own free hypervisor, Acropolis, so there are no additional licensing costs.
Nutanix is a converged storage, compute and virtualisation platform that provides a distributed and massively scalable cluster ready to run any application out of the box.
With Acropolis and AHV, virtualization is tightly integrated into the Enterprise Cloud OS rather than being layered on as a standalone product that needs to be licensed, deployed and managed separately.
Common tasks such as deploying, cloning and protecting VMs are managed centrally through Nutanix Prism, rather than utilising disparate products and policies in a piecemeal strategy.
Acropolis provides enterprise-grade VM-centric storage for virtualised applications. Unlike traditional storage solutions that were built in a pre-virtualization era, operations in Acropolis are optimised to work at a granularity of a single VM or vDisk.
Additionally, complex storage operations such as LUN provisioning, zoning and masking are non-existent in Acropolis enabling deployment of highly available storage with just a few clicks.
Nutanix takes a completely new and different approach to overcome the issues around latency, bandwidth, scalability and performance.
Nutanix uses the idea of compute nodes as building blocks allowing you to scale for performance and capacity as and when required.
Like Lego, just a little more complex.
Each Nutanix compute node contains:
- Computing power
- Data storage
- Network ports
The beauty of Nutanix is that not all compute nodes need to be identical; they don't even need to be from the same vendor, although we advise that they are for support purposes.
Do you really care about which hypervisor to choose?
Well, no, not really; providing your applications are running and performing as expected, it shouldn't really matter.
Nutanix virtualisation offers an attractive alternative when streamlining datacentre operations and driving costs out of the datacentre.
With thousands of deployments worldwide, Nutanix provides an open platform for virtualisation, network virtualisation, security, and application mobility.
When combined with comprehensive operational insights and virtualisation management from Nutanix Prism, Nutanix provides a complete solution for virtualisation and enterprise cloud.
By adopting Nutanix, your organisation can not only eliminate the direct costs associated with hypervisor licensing but also drive down soft costs and reduce the OpEx associated with virtualisation.
The Nutanix Enterprise Cloud OS offers multiple advantages versus VMware vSphere:
- Platform Security
- Application Security
- Analytics
- Automation & Orchestration
If you would like to know “How to stop paying for Virtualisation” or have any additional questions, please contact us or call us on 01256 331614.
Smarter, Strategic, Thinking.