Monday, September 2, 2013

Virtualization Concept and History






What is Virtualization?

Virtualization is a broad topic. As Bob Muglia, Senior Vice President of the Server and Tools Business at Microsoft Corporation, puts it: “Virtualization is an approach to deploying computing resources that isolates different layers – hardware, software, data, network, storage – from each other.”
So, simply, we can define virtualization as:
A framework or methodology for dividing the resources of a computer into multiple execution environments by applying one or more concepts or technologies such as hardware and software partitioning, time sharing, partial or complete machine simulation, emulation, quality of service, and many others.
Bob goes on to say: “Typically today, an operating system is installed directly onto a computer’s hardware. Applications are installed directly onto the operating system. The interface is presented through a display connected directly to the local machine. Altering one layer often affects the others, making changes difficult to implement.
“By using software to isolate these layers from each other, virtualization makes it easier to implement changes. The result is simplified management, more efficient use of IT resources, and the flexibility to provide the right computing resources when and where they are needed.”
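To make this idea of carving one physical machine into several isolated execution environments a little more concrete, here is a tiny, purely illustrative Python sketch. The Host and VirtualMachine classes and the resource numbers are hypothetical and do not correspond to any vendor's API; the point is only to show a hypervisor-like layer handing out slices of the underlying hardware.

# Hypothetical sketch: a host partitions its CPU and memory among VMs.

class InsufficientResources(Exception):
    pass

class VirtualMachine:
    def __init__(self, name, cpus, memory_gb):
        self.name = name
        self.cpus = cpus
        self.memory_gb = memory_gb

class Host:
    """A physical machine whose resources are divided among execution environments."""
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.vms = []

    def create_vm(self, name, cpus, memory_gb):
        # Refuse to hand out more than the hardware actually has.
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise InsufficientResources(name)
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        vm = VirtualMachine(name, cpus, memory_gb)
        self.vms.append(vm)
        return vm

host = Host(cpus=16, memory_gb=64)
web = host.create_vm("web", cpus=4, memory_gb=8)
db = host.create_vm("db", cpus=8, memory_gb=32)
print(host.free_cpus, host.free_memory_gb)  # 4 CPUs and 24 GB left for further VMs

A real hypervisor does far more than this (scheduling, device emulation, isolation enforcement), but the basic bookkeeping of dividing one machine's resources into several environments is the essence of the definition above.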
Now, to understand the concept of virtualization more fully, let us take a closer look at the history of virtualization.
History of Virtualization
In its earliest conceived form, virtualization was better known in the 1960s as time sharing. Christopher Strachey, the first Professor of Computation at Oxford University and leader of the Programming Research Group, brought this term to life in his paper Time Sharing in Large Fast Computers. Strachey, who was a staunch advocate of maintaining a balance between practical and theoretical work in computing, was referring to what he called multiprogramming. This technique would allow one programmer to develop a program on his console while another programmer was debugging his, thus avoiding the usual wait for peripherals. Multiprogramming, as well as several other groundbreaking ideas, began to drive innovation, resulting in a series of computers that burst onto the scene. Two are considered part of the evolutionary lineage of virtualization as we currently know it: the Atlas and IBM's M44/44X.
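As a rough illustration of the time-sharing idea, the following hypothetical Python sketch interleaves two “programs” in fixed slices so that neither has to wait for the other to finish. The job names and slice size are invented for the example; real time-sharing systems scheduled jobs on hardware timers, not like this.

# Hypothetical round-robin sketch: interleave jobs in small time slices.
from collections import deque

def time_share(jobs, slice_steps=2):
    """jobs maps a job name to how many steps of work it still needs."""
    queue = deque(jobs.items())
    while queue:
        name, remaining = queue.popleft()
        done = min(slice_steps, remaining)
        print(f"{name}: ran {done} step(s)")
        if remaining > done:
            queue.append((name, remaining - done))  # back of the line

time_share({"edit program A": 3, "debug program B": 5})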
The Atlas Computer
The first of the supercomputers of the early 1960s took advantage of concepts such as time sharing, multiprogramming, and shared peripheral control, and was dubbed the Atlas computer. A project run by the Department of Electrical Engineering at Manchester University and funded by Ferranti Limited, the Atlas was the fastest computer of its time. The speed it enjoyed was partially due to a separation of operating system processes in a component called the supervisor and the component responsible for executing user programs. The supervisor managed key resources, such as the computer's processing time, and was passed special instructions, or extra codes, to help it provision and manage the computing environment for the user program's instructions. In essence, this was the birth of the hypervisor, or virtual machine monitor. In addition, Atlas introduced the concept of virtual memory, called one-level store, and paging techniques for the system memory. This core store was also logically separated from the store used by user programs, although the two were integrated. In many ways, this was the first step towards creating a layer of abstraction that all virtualization technologies have in common.
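The one-level store idea can be illustrated with a toy address translation in Python. The page size, page-table contents, and function name below are assumptions chosen for readability (Atlas's actual hardware worked quite differently); the sketch only shows the basic idea of mapping virtual pages onto physical frames.

# Hypothetical sketch of paging: translate a virtual address to a physical one.

PAGE_SIZE = 4096  # assumed page size in bytes, not Atlas's real geometry

# A toy page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 11}

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        # On a real machine this would trigger a page fault and the
        # supervisor would bring the page in from backing storage.
        raise LookupError(f"page fault at virtual address {virtual_address}")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))   # page 1, offset 4 -> frame 3, physical address 12292
# translate(20000) would raise a page fault: page 4 is not resident.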
The M44/44X Project
Determined to maintain its title as the supreme innovator of computers, and motivated by the competitive atmosphere that existed, IBM answered back with the M44/44X Project. Housed at the IBM Thomas J. Watson Research Center in Yorktown, New York, the project created an architecture similar to that of the Atlas computer. It was the first to coin the term virtual machine and became IBM's contribution to the emerging time-sharing system concepts. The main machine was an IBM 7044 (M44) scientific computer that simulated several 7044 virtual machines, or 44Xs, using a combination of hardware and software, virtual memory, and multiprogramming.
Unlike later implementations of time-sharing systems, the M44/44X virtual machines did not implement a complete simulation of the underlying hardware. Instead, the project fostered the notion that virtual machines could be as efficient as more conventional approaches. To cement that notion, IBM released successors to the M44/44X project that showed the idea was not only true, but could also lead to a successful approach to computing.
CP/CMS
A later design, the IBM 7094, was finalized by MIT researchers and IBM engineers and introduced the Compatible Time Sharing System (CTSS). The term "compatible" refers to compatibility with the standard batch-processing operating system used on the machine, the Fortran Monitor System (FMS). CTSS not only ran FMS on the main 7094 as the primary facility for the standard batch stream, but also ran an unmodified copy of FMS in each virtual machine in a background facility. The background jobs could access all peripherals, such as tapes, printers, punch-card readers, and graphic displays, in the same fashion as the foreground FMS jobs, as long as they did not interfere with foreground time-sharing processes or any supporting resources.
MIT continued to value the prospects of time sharing, and developed Project MAC as an effort to develop the next generation of advances in time-sharing technology, pressuring hardware manufacturers to deliver improved platforms for their work. IBM's response was a modified and customized version of its System/360 (S/360) that would include virtual memory and time-sharing concepts not previously released by IBM. This proposal to Project MAC was rejected by MIT, a crushing blow to the team at the Cambridge Scientific Center (CSC), whose only purpose was to support the MIT/IBM relationship through technical guidance and lab activities.
The fallout between the two, however, led to one of the most pivotal points in IBM's history. The CSC team, led by Norm Rasmussen and Bob Creasy, a defector from Project MAC, contributed to the development of CP/CMS. In the late 1960s, the CSC developed the first successful virtual machine operating system based on fully virtualized hardware, the CP-40. The CP-67 was released as a reimplementation of the CP-40; it was later adapted to the S/360-67 and then to the S/370. The success of this platform won back IBM's credibility at MIT as well as with several of IBM's largest customers. It also led to the evolution of the platform and of the virtual machine operating systems that ran on it, the most popular being VM/370. The VM/370 was capable of running many virtual machines, each with larger virtual memory running on virtual copies of the hardware, all managed by a component called the virtual machine monitor (VMM) running on the real hardware. Each virtual machine was able to run a unique installation of IBM's operating system stably and with great performance.
Virtualization Explosion (1990s and after)
Many companies, such as Sun, Microsoft, and VMware, have released enterprise-class products that enjoy wide acceptance, due in part to their existing customer bases.
CIO Magazine even published an article on up-and-coming virtualization vendors to keep your eyes on (“10 virtualization vendors to watch in 2008,” http://www.cio.com/article/print/160951 ). But why did all this happen so suddenly, and why is there such intense interest from all kinds of customers in implementing virtualization technologies in their environments?
