The history of virtualization spans every era of IT, with the technique coming back into vogue several times as technologies and market demands have changed.
Virtualization was conceived when the personal computer was still science fiction and anyone who needed to process data was forced to compete for time on a few hundred large mainframes. Today, virtualization techniques are used virtually everywhere: from network infrastructure to video games, from software to services, from storage to the microcode inside a processor.
The most common type of virtualization is based on creating virtual machines: programs that simulate a complete hardware system, complete with CPU, RAM, and peripherals. The virtual machines are referred to as “guests”, while the system that hosts them, and can run a number of them in parallel, is the “host.” The host runs the software that enables the creation and management of virtual machines, called the “hypervisor.”
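To make the host/guest relationship concrete, here is a minimal sketch using the libvirt Python bindings, an open source management API that can drive several hypervisors. It assumes a Linux host with libvirt installed and a local KVM/QEMU setup reachable at the qemu:///system URI:

```python
import libvirt  # pip install libvirt-python

# Connect to the hypervisor running on this host.
# "qemu:///system" assumes a local KVM/QEMU setup; other hypervisors use other URIs.
conn = libvirt.open("qemu:///system")

# Each "domain" is one guest virtual machine managed by the host's hypervisor.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"guest: {dom.name()} ({state})")

conn.close()
```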
The first hypervisor was produced by the IBM Cambridge Scientific Center in 1967 and was called CP-40. It was able to run up to 14 instances of a guest operating system, each with its own virtual S/360 hardware and 256K of dedicated memory.
The approach pioneered by CP-40 is called hardware virtualization, and it should not be confused with the far simpler hardware emulation, whose goal is not to create a set of independent virtual machines on top of the underlying system, but rather to make one hardware system imitate another.
What is the difference between virtualization and multitasking? Both aim to make the best use of the available hardware resources. But in the first case, this is achieved by running multiple operating systems on the same machine, while multitasking runs different programs on the same operating system.
Virtual machines are not only useful for maximizing the use of computing resources; they can also significantly improve the security of a complex system. The state of each virtual machine can easily be saved at any time (the so-called snapshot) for backup purposes.
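As a rough sketch of taking such a snapshot programmatically, again through the libvirt Python bindings (the guest name “demo-guest” and the snapshot details are illustrative assumptions):

```python
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("demo-guest")  # hypothetical guest name

# Describe the snapshot in libvirt's XML format and create it.
# With no special flags, libvirt captures the guest's disk state (and, for a
# running guest, its memory) so this exact state can be restored later.
snapshot_xml = """
<domainsnapshot>
  <name>before-upgrade</name>
  <description>Backup point taken before a risky change</description>
</domainsnapshot>
"""
snap = dom.snapshotCreateXML(snapshot_xml)
print(f"created snapshot: {snap.getName()}")

conn.close()
```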
Furthermore, the speed with which they can be switched on or off makes them available on demand with maximum ease. This feature has made virtualization the primary enabling technology for cloud computing, where a limited number of real servers run the many virtual servers that actually deliver the services.
Also, being independent of the underlying hardware, virtual machines can easily be exported to a different host, provided it can run the same hypervisor.
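Live migration takes this one step further, moving a guest between hosts while it is still running. A rough sketch, once more with the libvirt Python bindings; the guest name and destination host are placeholders, and both hosts are assumed to run compatible KVM/QEMU hypervisors:

```python
import libvirt

# Source and destination hosts must run compatible hypervisors.
src = libvirt.open("qemu:///system")
dst = libvirt.open("qemu+ssh://other-host.example.com/system")  # placeholder host

dom = src.lookupByName("demo-guest")  # hypothetical guest name

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied over.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```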
The Market Leaders
The number of virtual machine management products on the market is growing steadily, ranging from consolidated enterprise solutions to more innovative and sometimes more specialized products.
The corporate world is still dominated by just two companies: VMware and Microsoft. This is largely because the difficulty of moving virtual machines from one hypervisor to another pushes the market toward standardization.
VMware's ESX architecture has the largest installed base, which results in broad compatibility and a wide availability of complementary solutions. Microsoft's Hyper-V has been chosen by many companies, especially those already committed to other Microsoft products. For those who want to experiment at little cost, there are also open source solutions such as KVM, which is backed by Red Hat and particularly appreciated in the educational field.
Virtualization and Storage
Mass storage is one of the areas that has benefited most from virtualization techniques. Where it is necessary to optimize the available space and strike a balance between redundancy (which brings security and reliability), savings, and performance, virtualization wins by presenting separate hardware storage devices as logical blocks.
The mode called block virtualization abstracts storage from the physical layer. It allows maximum flexibility and scalability for those who sell storage as a service. It also simplifies migration, making it possible to transfer data from one physical device to another without even interrupting access.
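The following toy Python model shows the core idea: a mapping table translates logical block addresses into physical locations, so data can be moved between devices and remapped while clients keep using the same logical addresses. All names are illustrative, not a real storage API:

```python
# Toy model of block virtualization: clients address logical blocks,
# and a mapping table hides which physical device actually holds them.

class VirtualVolume:
    def __init__(self):
        # logical block number -> (device name, physical block number)
        self.mapping: dict[int, tuple[str, int]] = {}

    def write(self, logical_block: int, device: str, physical_block: int):
        self.mapping[logical_block] = (device, physical_block)

    def locate(self, logical_block: int) -> tuple[str, int]:
        return self.mapping[logical_block]

    def migrate_block(self, logical_block: int, new_device: str, new_physical: int):
        # Copy the data (elided here), then atomically update the map.
        # Clients never notice: the logical address they use is unchanged.
        self.mapping[logical_block] = (new_device, new_physical)

vol = VirtualVolume()
vol.write(0, "hdd-array-1", 4096)
print(vol.locate(0))            # ('hdd-array-1', 4096)
vol.migrate_block(0, "hdd-array-2", 128)
print(vol.locate(0))            # ('hdd-array-2', 128)
```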
For example, algorithms can be developed that automatically move data to faster media, such as solid-state storage, when higher performance is needed, and then bring it back to less expensive disks when access requests become more sporadic.
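A minimal sketch of such a tiering policy, under the same toy model: count accesses per logical block, promote hot blocks to a fast tier, and demote them when they cool down. The thresholds and tier names are arbitrary assumptions:

```python
from collections import Counter

# Toy tiering policy: frequently read blocks move to SSD, cold ones back to HDD.
access_counts: Counter[int] = Counter()
tier: dict[int, str] = {}          # logical block -> current device tier
PROMOTE_AT = 100                   # arbitrary threshold: reads before promotion
DEMOTE_AT = 10                     # arbitrary threshold: reads before demotion

def record_access(block: int):
    access_counts[block] += 1

def rebalance():
    for block, count in access_counts.items():
        if count >= PROMOTE_AT and tier.get(block) != "ssd":
            tier[block] = "ssd"    # hot data earns the fast, expensive tier
        elif count <= DEMOTE_AT and tier.get(block) == "ssd":
            tier[block] = "hdd"    # cold data goes back to cheap disk
    access_counts.clear()          # start a fresh measurement window

# Simulate a burst of reads on block 7, then rebalance the tiers.
for _ in range(150):
    record_access(7)
rebalance()
print(tier)                        # {7: 'ssd'}
```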
Read our other Digital Literacy Series here.