

CHAPTER 3

Virtual Machines and Virtualization of Clusters and Data Centers
CHAPTER OUTLINE
Summary
3.1 Implementation Levels of Virtualization
3.1.2 VMM Design Requirements and Providers
3.1.4 Middleware Support for Virtualization
3.2 Virtualization Structures/Tools and Mechanisms
3.2.2 Binary Translation with Full Virtualization
3.3.1 Hardware Support for Virtualization
3.3.3 Memory Virtualization
3.3.5 Virtualization in Multi-Core Processors
3.4 Virtual Clusters and Resource Management
3.4.2 Live VM Migration Steps and Performance Effects
3.4.4 Dynamic Deployment of Virtual Clusters
3.5 Virtualization for Data-Center Automation
3.5.2 Virtual Storage Management
3.5.4 Trust Management in Virtualized Data Centers
3.6 Bibliographic Notes and Homework Problems
References

Distributed and Cloud Computing
© 2012 Elsevier, Inc.

SUMMARY
The reincarnation of virtual machines (VMs) presents a great opportunity for parallel, cluster, grid,
cloud, and distributed computing
...
This chapter covers virtualization levels, VM architectures, virtual networking,
virtual cluster construction, and virtualized data-center design and automation in cloud computing
...


3.1 IMPLEMENTATION LEVELS OF VIRTUALIZATION
The idea of VMs dates back to the 1960s [53].
...
Hardware resources (CPU, memory, I/O devices, etc.)
...
This virtualization technology has been revitalized as the
demand for distributed and cloud computing increased sharply in recent years [41]
...
For example, computer users gained access to a much larger memory space when the concept of virtual memory was introduced.
...
In this chapter we will discuss VMs and their applications
for building distributed systems
...
With sufficient storage, any computer platform can be installed on another host computer, even if the two use processors with different instruction sets and run distinct operating systems on the same hardware.
...
... Figure 3.1(a) ... This is often done by adding additional software, called a virtualization layer, as shown in Figure 3.1(b).

This virtualization layer is known as hypervisor or virtual machine monitor (VMM) [54]
...

The main function of the software layer for virtualization is to virtualize the physical hardware of a host machine into virtual resources to be used exclusively by the VMs. ... The virtualization software creates the abstraction of VMs by interposing a virtualization layer at various levels of a computer system (see Figure 3.2).
...
[Figure 3.1: (a) A traditional computer, with the host operating system running directly on the hardware. (b) After virtualization, a virtualization layer (hypervisor or VMM) sits between the hardware running the host OS and the hosted environment.]

[Figure 3.2: Virtualization ranging from hardware to applications in five abstraction levels: application level (e.g., JVM), library support level, operating system level, hardware abstraction level, and instruction set architecture level.]
3.1.1.1 Instruction Set Architecture Level
At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host
machine
...
With this approach, it is possible to run a large body of legacy binary code written for various processors on any given new hardware host machine.
...

The basic emulation method is through code interpretation
...
One source instruction may require tens or
hundreds of native target instructions to perform its function
...
For better performance, dynamic binary translation is desired
...
The basic blocks can also be extended
to program traces or super blocks to increase translation efficiency
...
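As a rough sketch of the idea, the following toy model contrasts per-instruction interpretation with dynamic translation that caches basic blocks. The two-opcode "ISA" is invented for illustration; real translators emit native machine code, not Python closures.

```python
# Toy source "ISA": an interpreter decodes every instruction on every
# execution, while the dynamic translator converts a basic block once,
# caches the result, and reuses it on later executions.

def interpret(block, env):
    """Decode and emulate one source instruction at a time."""
    for op, reg, val in block:
        if op == "ADD":
            env[reg] = env.get(reg, 0) + val
        elif op == "MUL":
            env[reg] = env.get(reg, 0) * val
    return env

_code_cache = {}

def translate_block(block):
    """Translate a basic block once; later calls hit the code cache."""
    key = tuple(block)
    if key not in _code_cache:
        steps = []
        for op, reg, val in block:
            if op == "ADD":
                steps.append(lambda env, r=reg, v=val: env.__setitem__(r, env.get(r, 0) + v))
            elif op == "MUL":
                steps.append(lambda env, r=reg, v=val: env.__setitem__(r, env.get(r, 0) * v))
        def run(env, _steps=tuple(steps)):
            for step in _steps:
                step(env)
            return env
        _code_cache[key] = run
    return _code_cache[key]

block = [("ADD", "r1", 5), ("MUL", "r1", 3)]
assert interpret(block, {}) == {"r1": 15}
translated = translate_block(block)
assert translated({}) == {"r1": 15}
assert translate_block(block) is translated  # second lookup reuses the cache
```

Extending the cached unit from single basic blocks to traces or super blocks, as the text notes, simply amortizes the one-time translation cost over more instructions.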
A virtual instruction set architecture (V-ISA) thus
requires adding a processor-specific software translation layer to the compiler
...
3.1.1.2 Hardware Abstraction Level
Hardware-level virtualization is performed right on top of the bare hardware
...
On the other hand, the process manages
the underlying hardware through virtualization
...
The intention is to raise the hardware utilization rate by letting multiple users share the hardware concurrently.
...
More
recently, the Xen hypervisor has been applied to virtualize x86-based machines to run Linux or other
guest OS applications
...
3.1.1.3 Operating System Level
This refers to an abstraction layer between traditional OS and user applications
...
The containers behave like real servers
...
It is also used, to a lesser extent, in consolidating server
hardware by moving services on separate hosts into containers or VMs on one server
...
3.1.1.4 Library Support Level
Since most systems provide well-documented APIs, such an interface becomes another
candidate for virtualization
...
The software
tool WINE has implemented this approach to support Windows applications on top of UNIX hosts
...
This approach is detailed in Section 3.1.4.
...
3.1.1.5 User-Application Level
Virtualization at the application level virtualizes an application as a VM
...
Therefore, application-level virtualization is also known as process-level virtualization.
...
The most popular approach is to deploy high level language (HLL)
VMs
...
Any program written in the HLL and compiled for this
VM will be able to run on it
...
.NET CLR and the Java Virtual Machine (JVM) are two good examples of this class of VM.
...
The process involves wrapping the application in a layer that
is isolated from the host OS and other applications
...
An example is the LANDesk application virtualization platform which deploys software applications as self-contained, executable files in an isolated
environment without requiring installation, system modifications, or elevated security privileges
...
3.1.1.6 Relative Merits of Different Approaches
Table 3.1 compares the relative merits of virtualization at the five implementation levels.
The column
headings correspond to four technical merits
...
“Implementation Complexity” implies the cost to implement that particular virtualization level
...
Each row corresponds to a particular level of virtualization
...

Five X’s implies the best case and one X implies the worst case
...
However, the hardware and application levels are also the most
expensive to implement
...
ISA implementation offers
the best application flexibility
...
3.1.2 VMM Design Requirements and Providers
...
This layer is commonly called the Virtual Machine Monitor (VMM) and it
manages the hardware resources of a computing system
...
In this sense, the VMM acts as a traditional OS
...
Therefore, several traditional operating systems, whether the same or different, can sit on the same set of hardware simultaneously.
...
Table 3.1 Relative Merits of Virtualization at Various Levels (More "X"'s Means Higher Merit, with a Maximum of 5 X's)

Level of Implementation         Higher Performance   Application Flexibility   Implementation Complexity   Application Isolation
ISA                             X                    XXXXX                     XXX                         XXX
Hardware-level virtualization   XXXXX                XXX                       XXXXX                       XXXX
OS-level virtualization         XXXXX                XX                        XXX                         XX
Runtime library support         XXX                  XX                        XX                          XX
User application level          XX                   XX                        XXXXX                       XXXXX


There are three requirements for a VMM
...
Second, programs run in this environment should show, at worst, only minor decreases in speed
...
Any program run under a VMM should exhibit behavior identical to that of the same program run directly on the original machine.
...
The former arises when more than one VM is running on the same machine
...
The latter qualification is required because of
the intervening level of software and the effect of any other VMs concurrently existing on the same
hardware
...
However, the identical environment requirement
excludes the behavior of the usual time-sharing operating system from being classed as a VMM
...
No one would prefer a VMM over a physical machine if the VMM's efficiency is too low.
...
Such a method
provides the most flexible solutions for VMMs
...
To guarantee the efficiency of a VMM, a statistically dominant subset
of the virtual processor’s instructions needs to be executed directly by the real processor, with no
software intervention by the VMM
...
Table 3.2 compares four hypervisors and VMMs that are in use today.
...
Not all processors satisfy these requirements for a VMM
...
Table 3.2 ...

It is difficult to implement a VMM for some types of processors, such as the x86. ... If a processor is not primarily designed to support virtualization, it is necessary to modify the hardware to satisfy the three requirements for a VMM.


3.1.3 Virtualization Support at the OS Level
With the help of VM technology, a new computing mode known as cloud computing is emerging
...
However, cloud computing has at
least two challenges
...
For example, a task may need only a single CPU during some phases of execution but may need hundreds of CPUs at other times
...
Currently, new VMs originate either as fresh
boots or as replicates of a template VM, unaware of the current application state
...


3.1.3.1 ...
In a cloud computing environment, perhaps thousands of VMs need to be initialized simultaneously
...
As a
matter of fact, there is considerable repeated content among VM images
...
To reduce the performance overhead of
hardware-level virtualization, even hardware modification is needed
...

Operating system virtualization inserts a virtualization layer inside an operating system to
partition a machine’s physical resources
...
This kind of VM is often called a virtual execution environment (VE), Virtual
Private System (VPS), or simply container
...
This means a VE has its own set of processes, file system, user accounts, network interfaces
with IP addresses, routing tables, firewall rules, and other personal settings
...
Therefore, OS-level
virtualization is also called single-OS image virtualization
...
Figure 3.3 illustrates operating system virtualization from the point of view of a machine stack.
...
3.1.3.2 Advantages of OS Extensions
Compared to hardware-level virtualization, the benefits of OS extensions are twofold: (1) VMs at the
operating system level have minimal startup/shutdown costs, low resource requirements, and high
scalability; and (2) for an OS-level VM, it is possible for a VM and its host environment to synchronize state changes when necessary
...
[Figure 3.3: The OpenVZ virtualization layer inside the host OS. Several virtual private servers, each with its own root and user application software, run on the OpenVZ layer and OpenVZ templates inside the host operating system, on physical servers (hardware nodes) #1, #2, and #3 connected by a network. (Courtesy of OpenVZ User's Guide [65])]

In cloud computing, the first and second benefits can be used to overcome the defects of slow initialization of VMs at the hardware level, and being unaware of the current application state, respectively.
...
3.1.3.3 Disadvantages of OS Extensions
The main disadvantage of OS extensions is that all the VMs at operating system level on a single
container must have the same kind of guest operating system
...
For example, a Windows distribution such as Windows XP cannot run on a
Linux-based container
...
Some prefer
Windows and others prefer Linux or other operating systems
...

Figure 3
...
The virtualization layer is inserted
inside the OS to partition the hardware resources for multiple VMs to run their applications in
multiple virtual environments
...
Furthermore, the access requests from
a VM need to be redirected to the VM’s local resource partition on the physical machine
...
For example, the chroot command in a UNIX system can create several virtual root directories within a host OS.
...

There are two ways to implement virtual root directories: duplicating common resources to each
VM partition; or sharing most resources with the host environment and only creating private
resource copies on the VM on demand
...
This issue neutralizes the benefits of OS-level virtualization, compared with
hardware-assisted virtualization
...
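The virtual-root idea can be illustrated with a small path-resolution sketch. This is string-level and purely illustrative: a real chroot is enforced by the kernel during path lookup, and the jail path below is an invented example.

```python
import os.path

def resolve_in_jail(jail_root, requested_path):
    """Map a path requested inside a virtual root directory onto the host
    file system, refusing attempts to escape via '..' components."""
    root = os.path.normpath(jail_root)
    # Treat the requested path as relative to the jail's root directory.
    combined = os.path.join(root, requested_path.lstrip("/"))
    normalized = os.path.normpath(combined)
    # After normalization the result must still lie inside the jail.
    if normalized != root and not normalized.startswith(root + os.sep):
        raise PermissionError("path escapes the virtual root")
    return normalized

assert resolve_in_jail("/srv/ve1", "/etc/passwd") == "/srv/ve1/etc/passwd"
```

Each virtual environment sees its own "/", while all of them are simply disjoint directory subtrees of the host file system.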


3.1.3.4 ...
Virtualization support on the
Windows-based platform is still in the research stage
...
New hardware may need a new Linux kernel to support it.
...

However, most Linux platforms are not tied to a special kernel
...
Table 3.3 ...
Two OS tools (Linux vServer
and OpenVZ) support Linux platforms to run other platform-based applications through virtualization
...
1
...


Example 3.1 ...
OpenVZ is an open source container-based virtualization solution built on
Linux
...
The overall picture of the OpenVZ system is illustrated
in Figure 3
...
Several VPSes can run simultaneously on a physical machine
...
Table 3.3 Virtualization Support for Linux and Windows NT Platforms

Virtualization support and source of information:
- Linux vServer for Linux platforms (http://linuxvserver...)
- OpenVZ for Linux platforms (http://...openvz.../...pdf)
- FVM (Feather-Weight Virtual Machines) for virtualizing the Windows NT platforms [78]

Brief introduction on functionality and application platforms: ...


Linux servers
...

The resource management subsystem of OpenVZ consists of three components: two-level disk allocation, a two-level CPU scheduler, and a resource controller
...
This is the first level of disk allocation
...
Hence, the VM administrator is responsible for allocating disk space for each user and
group
...
The first-level CPU scheduler of OpenVZ decides which VM to
give the time slice to, taking into account the virtual CPU priority and limit settings
...
OpenVZ has a set of about 20 parameters
which are carefully chosen to cover all aspects of VM operation
...
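The two-level CPU scheduling described above can be sketched as follows. The VM names, share units, and the round-robin second level are invented for illustration; real OpenVZ uses its "cpuunits" settings and the host kernel's scheduler.

```python
import itertools

class VM:
    """A container with a CPU share weight and its own run queue."""
    def __init__(self, name, cpu_units, processes):
        self.name = name
        self.cpu_units = cpu_units                     # share weight
        self.used = 0                                  # slices consumed so far
        self.runqueue = itertools.cycle(processes)     # level 2: round-robin

def pick_vm(vms):
    """Level 1: give the slice to the VM with the lowest used/share ratio."""
    return min(vms, key=lambda vm: vm.used / vm.cpu_units)

def schedule(vms, nslices):
    timeline = []
    for _ in range(nslices):
        vm = pick_vm(vms)                              # first-level decision
        vm.used += 1
        timeline.append((vm.name, next(vm.runqueue)))  # second-level decision
    return timeline

vms = [VM("ve101", cpu_units=2, processes=["a", "b"]),
       VM("ve102", cpu_units=1, processes=["c"])]
timeline = schedule(vms, 6)
names = [name for name, _ in timeline]
# ve101 holds twice the share of ve102, so it receives two-thirds of the slices.
assert names.count("ve101") == 4 and names.count("ve102") == 2
```

The point of the two levels is that the first-level decision is made purely among VMs (priorities and limits), so one container's process load cannot starve another container.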
OpenVZ also supports checkpointing and live migration
...
This file can then be transferred to another physical machine and the VM can
be restored there
...
However, there is still a delay
in processing because the established network connections are also migrated
...
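A minimal sketch of the checkpoint/restore idea follows. The state fields are invented stand-ins; real checkpointing serializes memory pages, the process tree, and established connections.

```python
import io
import pickle

# The VM's state is "set on hold" and serialized into a file-like object
# that could be copied to another physical machine, then restored there.
vm_state = {"name": "ve101", "processes": ["httpd", "sshd"],
            "connections": [("10.0.0.5", 22)], "memory_pages": [0] * 4}

checkpoint = io.BytesIO()
pickle.dump(vm_state, checkpoint)      # freeze the state into a "file"
checkpoint.seek(0)                     # ...transfer the file elsewhere...
restored = pickle.load(checkpoint)     # restore on the destination machine
assert restored == vm_state
assert restored is not vm_state        # a true copy, not the original object
```

The migration delay the text mentions comes from the last step: the restored VM cannot resume service until state such as network connections has been re-established on the destination.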
3.1.4 Middleware Support for Virtualization
...
This type of virtualization can create execution environments for running alien programs
on a platform rather than creating a VM to run the entire operating system
...
This section provides an overview of several
library-level virtualization systems: namely the Windows Application Binary Interface (WABI),
lxrun, WINE, Visual MainWin, and vCUDA, which are summarized in Table 3.4.
...


Table 3.4 ...

- WABI (http://...sun...)
- lxrun (http://www.ugcs...edu/~steven/lxrun/)
- WINE (http://www...org/)
- Visual MainWin (http://www...com/)
- vCUDA (Example 3.2)

The WABI offers middleware to convert Windows system calls to Solaris system calls
...
Similarly, Wine offers library support for virtualizing x86 processors to run Windows applications on UNIX hosts
...
The vCUDA is explained in Example 3.2, with a graphical illustration in Figure 3.4.
...
Example 3.2 The vCUDA for Virtualization of General-Purpose GPUs
CUDA is a programming model and library for general-purpose GPUs
...
However, it is difficult to run CUDA
applications on hardware-level VMs directly
...
When CUDA applications run on a guest OS and issue a call to the CUDA API, vCUDA
intercepts the call and redirects it to the CUDA API running on the host OS
...
Figure 3.4 shows the basic concept of the vCUDA architecture [57].
...
It consists of three user
space components: the vCUDA library, a virtual GPU in the guest OS (which acts as a client), and the
vCUDA stub in the host OS (which acts as a server)
...
It is responsible for intercepting and redirecting API calls from
the client to the stub
...

The functionality of a vGPU is threefold: it abstracts the GPU structure and gives applications a uniform view of the underlying hardware; when a CUDA application in the guest OS allocates a device's memory, the vGPU returns a local virtual address to the application and notifies the remote stub to allocate the real device memory; and it is responsible for storing the CUDA API flow.
...
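The interception-and-redirection flow can be sketched as follows. All class names and the "malloc" call are invented stand-ins, not the real vCUDA or CUDA API; real vCUDA forwards actual CUDA calls from the guest to the host over a transport channel.

```python
class HostStub:
    """Runs in the host OS: owns the real device and executes forwarded calls."""
    def __init__(self):
        self.device_memory = {}
        self.next_handle = 1

    def handle(self, call, *args):
        if call == "malloc":
            (size,) = args
            h = self.next_handle
            self.next_handle += 1
            self.device_memory[h] = bytearray(size)  # real allocation here
            return h
        if call == "free":
            (h,) = args
            del self.device_memory[h]

class GuestVGPU:
    """Runs in the guest OS: intercepts calls, keeps a local view, forwards."""
    def __init__(self, stub):
        self.stub = stub
        self.local_view = {}   # guest-local virtual handles -> remote handles
        self.api_log = []      # "storing the CUDA API flow"

    def malloc(self, size):
        self.api_log.append(("malloc", size))
        remote = self.stub.handle("malloc", size)    # redirect to the host stub
        local = f"vptr{len(self.local_view)}"
        self.local_view[local] = remote
        return local                                 # local virtual address

stub = HostStub()
vgpu = GuestVGPU(stub)
ptr = vgpu.malloc(1024)
assert vgpu.local_view[ptr] in stub.device_memory   # real memory lives on the host
```

The guest application only ever sees the local virtual handle; the mapping to real device memory exists solely on the host side, which is what lets the stub manage the actual physical resource allocation.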


[Figure 3.4: Basic concept of the vCUDA architecture. The CUDA library, vCUDA library, and vGPU sit above the device driver and VMM, which manage the device (GPU, hard disk, network card). (Courtesy of Lin Shi, et al. [57])]
The vCUDA stub also manages actual physical resource
allocation
...
3.2 VIRTUALIZATION STRUCTURES/TOOLS AND MECHANISMS
In general, there are three typical classes of VM architecture
...
Figure 3.1 showed the architectures of a machine before and after virtualization.
...
After virtualization, a virtualization layer is inserted between the hardware and the operating system
...
Therefore, different operating systems such as Linux and Windows
can run on the same physical machine, simultaneously
...
The hypervisor is also known as the VMM (Virtual
Machine Monitor)
...


3.2.1 Hypervisor and Xen Architecture
The hypervisor supports hardware-level virtualization (see Figure 3
...
The hypervisor software sits directly between the physical hardware and its OS
...

The hypervisor provides hypercalls for the guest OSes and applications
...
Or it can
assume a monolithic hypervisor architecture like the VMware ESX for server virtualization
...
The device drivers and other changeable components
are outside the hypervisor
...
Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor
...


3
...
1
...
Xen is a microkernel hypervisor, which separates the policy from the mechanism
...
5
...
It just provides a mechanism by which a guest OS
can have direct access to the physical devices
...
Xen provides a virtual environment located between the hardware and the OS
...

The core components of a Xen system are the hypervisor, kernel, and applications
...
Like other virtualization systems, many guest OSes
can run on top of the hypervisor
...
[Figure 3.5: The Xen architecture. A control/I/O domain (Domain 0) and guest domains run applications on XenoLinux and XenoWindows, all above the XEN hypervisor and the hardware devices. (Courtesy of P. ... [7])]

particular controls the others
...
Domain 0 is a privileged guest OS of Xen
...
Domain 0 is designed to access hardware
directly and manage devices
...

For example, Xen is based on Linux and its security level is C2
...
If Domain
0 is compromised, the hacker can control the entire system
...
Domain 0, behaving as a VMM, allows users to
create, copy, save, read, modify, share, migrate, and roll back VMs as easily as manipulating a file,
which flexibly provides tremendous benefits for users
...

Traditionally, a machine’s lifetime can be envisioned as a straight line where the current state of
the machine is a point that progresses monotonically as the software executes
...
In such an environment,
the VM state is akin to a tree: At any point, execution can go into N different branches where multiple
instances of a VM can exist at any point in this tree at any given time
...


3.2.2 Binary Translation with Full Virtualization
Depending on implementation technologies, hardware virtualization can be classified into two categories: full virtualization and host-based virtualization
...
It relies on binary translation to trap and to virtualize the execution of certain
sensitive, nonvirtualizable instructions
...
In a host-based system, both a host OS and a guest OS are used
...
These two classes of VM architecture are introduced next
...
3.2.2.1 Full Virtualization
With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software
...
Why are only critical
instructions trapped into the VMM? This is because binary translation can incur a large performance
overhead
...
Therefore, running noncritical instructions on hardware not only can
promote efficiency, but also can ensure system security
...
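The trap-and-emulate split described above can be sketched in a toy model. The opcode names and the VMM's shadow state are invented for illustration; a real VMM discovers critical instructions by scanning binary code, not by tagging tuples.

```python
# Noncritical instructions "run directly"; critical ones trap into a
# VMM emulation routine that operates on virtual state only.

CRITICAL = {"LOAD_CR3", "OUT"}         # stand-ins for sensitive/privileged ops

def run_direct(instr, cpu):
    op, arg = instr
    if op == "ADD":
        cpu["acc"] += arg
    return cpu

def vmm_emulate(instr, cpu, vmm_state):
    """The VMM emulates the critical instruction against virtual state
    instead of letting the guest touch the real hardware."""
    op, arg = instr
    if op == "LOAD_CR3":
        vmm_state["shadow_cr3"] = arg   # update the shadow register only
    elif op == "OUT":
        vmm_state["io_log"].append(arg) # virtualized I/O port write
    return cpu

def execute(program, cpu, vmm_state):
    for instr in program:
        if instr[0] in CRITICAL:
            cpu = vmm_emulate(instr, cpu, vmm_state)   # trap into the VMM
        else:
            cpu = run_direct(instr, cpu)               # direct execution
    return cpu

vmm_state = {"shadow_cr3": None, "io_log": []}
cpu = execute([("ADD", 1), ("LOAD_CR3", 0x1000), ("ADD", 2), ("OUT", 0x3F8)],
              {"acc": 0}, vmm_state)
assert cpu["acc"] == 3
assert vmm_state["shadow_cr3"] == 0x1000 and vmm_state["io_log"] == [0x3F8]
```

The performance argument in the text maps directly onto this structure: only the (rare) critical instructions pay the cost of the `vmm_emulate` detour.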
3.2.2.2 Binary Translation of Guest OS Requests Using a VMM
This approach was implemented by VMware and many other software companies
...
As shown in Figure 3.6, VMware puts the VMM at Ring 0 and the guest OS at Ring 1.
...
When
these instructions are identified, they are trapped into the VMM, which emulates the behavior of
these instructions
...
Therefore, full virtualization combines binary translation and direct execution
...
Consequently, the guest OS is unaware that it is being virtualized
...
In particular, the full virtualization of I/O-intensive applications is a really big challenge.
...
At the time of this writing, the
performance of full virtualization on the x86 architecture is typically 80 percent to 97 percent that
of the host machine
...
3.2.2.3 Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of the host OS
...
The guest OSes are installed and run on top of the
virtualization layer
...
Certainly, some other applications
can also run with the host OS directly
...
First, the user can install this VM architecture without modifying the host OS. ... This will simplify the VM design and ease its deployment. ... Compared to the hypervisor/VMM architecture, the performance ... When an application requests hardware access, it involves four layers of mapping, which ... When the ISA of a guest OS is different from the ISA of ...

[Figure 3.6: Indirect execution of complex instructions via binary translation of guest OS requests using the VMM, plus direct execution of simple instructions on the same host (guest OS code at the user level/Ring 2; binary translation and the VMM at Ring 0). (Courtesy of VMware [71])]

Although the host-based architecture has flexibility, the performance is too low to be useful in practice.
...
3.2.3 Para-Virtualization with Compiler Support
A para-virtualized VM provides
special APIs requiring substantial OS modifications in user applications
...
No one wants to use a VM if it is much slower than using a
physical machine
...
However, para-virtualization attempts to reduce the virtualization overhead, and thus
improve performance by modifying only the guest OS kernel
...
Figure 3.7 illustrates the concept of a para-virtualized VM architecture.
...
They are assisted by an intelligent compiler to replace the nonvirtualizable
OS instructions by hypercalls as illustrated in Figure 3
...
The traditional x86 processor offers four
instruction execution rings: Rings 0, 1, 2, and 3
...
The OS, which is responsible for managing the hardware and executing privileged instructions, runs at Ring 0, while user-level applications run at Ring 3.
...


3
...
3
...
According to the x86 ring definition, the virtualization layer should also be installed at
Ring 0
...
In Figure 3
...
However, when the guest OS kernel is modified for virtualization, it
can no longer run on the hardware directly
FIGURE 3.7 Para-virtualized VM architecture, which involves modifying the guest OS kernel to replace nonvirtualizable instructions with hypercalls for the hypervisor or the VMM to carry out the virtualization process (see Figure 3.8 for more details).

FIGURE 3.8 ...

(Courtesy of VMware [71])


Although para-virtualization reduces the overhead, it incurs other problems.
...
Second, the cost of maintaining para-virtualized OSes is high, because they may require
deep OS kernel modifications
...
Compared with full virtualization, para-virtualization is
relatively easy and more practical
...
Speeding up binary translation is difficult.
...
The popular Xen, KVM, and VMware ESX
are good examples
...
3.2.3.2 KVM (Kernel-Based VM)
This is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel.
...
The KVM does
the rest, which makes it simpler than the hypervisor that controls the entire machine
...


3.2.3.3 ...
The guest OS
kernel is modified to replace the privileged and sensitive instructions with hypercalls to the hypervisor or VMM
...

The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0
...
The
privileged instructions are implemented by hypercalls to the hypervisor
...
On a UNIX system, a system call involves an interrupt or service routine.
...


Example 3.3 ...
The company has developed virtualization tools
for desktop systems and servers as well as virtual infrastructure for large data centers
...
It accesses hardware
resources such as I/O directly and has complete resource management control
...
9
...

The VMM layer virtualizes the physical hardware resources such as CPU, memory, network and disk
controllers, and human interface devices
...
The
resource manager allocates CPU, memory disk, and network bandwidth and maps them to the virtual
hardware resource set of each VM created
...
[Figure 3.9: The VMware ESX server architecture. Guest OSes and a console OS run on VMMs above the VMkernel, which provides the scheduler, memory management, and SCSI and Ethernet drivers on x86 SMP hardware (CPU, memory, disk, NIC). (Courtesy of VMware [71])]

VMware ESX Server File System
...
It also facilitates
the process for system administrators
...
3.3 VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES
To support virtualization, processors such as the x86 employ a special running mode and instructions,
known as hardware-assisted virtualization
...
To
save processor states, mode switching is completed by hardware
...


3.3.1 Hardware Support for Virtualization
Modern operating systems and processors permit multiple processes to run simultaneously
...
Therefore, all processors have at least two modes, user
mode and supervisor mode, to ensure controlled access of critical hardware
...
Other instructions are unprivileged instructions
...
Example 3
...



At the time of this writing, many hardware virtualization products were available
...
This software suite allows users
to set up multiple x86 and x86-64 virtual computers and to use one or more of these VMs simultaneously with the host operating system
...
Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts
...

One or more guest OS can run on top of the hypervisor
...
KVM can support hardware-assisted virtualization and para-virtualization by using the Intel VT-x or AMD-V and the VirtIO framework, respectively.
...


Example 3.4 ...

Figure 3
...
For processor virtualization,
Intel offers the VT-x or VT-i technique
...
This enhancement traps all sensitive instructions in the VMM automatically
...
For I/O virtualization, Intel implements VT-d and VT-c to
support this
...


[Figure 3.10: Intel hardware support for virtualization: VT-x/VT-i for the processor, EPT for memory, and VT-d and VT-c for I/O and storage. (Modified from [68], Courtesy of Lizhong Chen, USC)]

3.3.2 CPU Virtualization
Thus, unprivileged instructions of VMs run directly on the
host machine for higher efficiency
...
The critical instructions are divided into three categories: privileged instructions, controlsensitive instructions, and behavior-sensitive instructions
...
Control-sensitive instructions attempt to change
the configuration of resources used
...

A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and
unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor mode
...
In this case, the VMM acts as a unified mediator for hardware
access from different VMs to guarantee the correctness and stability of the whole system
...
RISC CPU architectures can be naturally virtualized
because all control- and behavior-sensitive instructions are privileged instructions
...
This is because about 10
sensitive instructions, such as SGDT and SMSW, are not privileged instructions
...

On a native UNIX-like system, a system call triggers the 80h interrupt and passes control to the
OS kernel
...
On a paravirtualization system such as Xen, a system call in the guest OS first triggers the 80h interrupt normally
...
At the same time, control is passed on to the hypervisor as well.
...
Certainly, the guest OS kernel may
also invoke the hypercall while it’s running
...
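The control flow just described can be sketched as follows. The function names and calls are invented illustrations; real Xen uses the 80h interrupt and hypercalls rather than Python function calls.

```python
# Toy control-flow model of syscall vs. hypercall on a para-virtualized system.
trace = []

def hypervisor_hypercall(op, *args):
    trace.append(("hypervisor", op))
    return "ok"

def guest_kernel_int80(syscall, *args):
    trace.append(("guest_kernel", syscall))
    if syscall == "write":                   # unprivileged work stays here
        return len(args[1])
    if syscall == "mmap":                    # privileged work: ask the hypervisor
        return hypervisor_hypercall("update_page_table", *args)

def user_process(syscall, *args):
    trace.append(("user", syscall))
    return guest_kernel_int80(syscall, *args)  # the 80h interrupt, in effect

user_process("write", 1, b"hi")
user_process("mmap", 0x1000)
assert trace == [("user", "write"), ("guest_kernel", "write"),
                 ("user", "mmap"), ("guest_kernel", "mmap"),
                 ("hypervisor", "update_page_table")]
```

The key point is the extra hop: on a native system the chain stops at the OS kernel, while here privileged operations take one more transition into the hypervisor.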


3.3.2.1 Hardware-Assisted CPU Virtualization

Intel and AMD add an additional mode called privilege mode level (some people call it Ring-1) to
x86 processors
...
All the privileged and sensitive instructions are trapped in the hypervisor automatically
...
It
also lets the operating system run in VMs without modification
...
Example 3.5 Intel Hardware-Assisted CPU Virtualization
Although x86 processors were not designed primarily for virtualization, great effort has been taken to virtualize them.
...
Virtualization of x86 processors is detailed in the following sections
...
11
...
[Figure 3.11: Intel hardware-assisted CPU virtualization. Applications run at Ring 3 and the guest OS (e.g., WinXP) at Ring 0 inside VMs, while the VMM runs in VMX root mode; VM entry and VM exit transitions are controlled through the VM control structure (VMCS) configuration on processors with VT-x (or VT-i), together with memory and I/O virtualization. (Modified from [68], Courtesy of Lizhong Chen, USC)]

In order to control the start and stop of a VM and allocate a memory page to maintain the CPU state for VMs, a set of additional instructions is added.
...

Generally, hardware-assisted virtualization should have high efficiency
...
Hence, virtualization systems such as VMware now use a hybrid
approach, in which a few tasks are offloaded to the hardware but the rest is still done in software
...


3.3.3 Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern operating systems
...
All modern x86 CPUs include a memory management unit (MMU)
and a translation lookaside buffer (TLB) to optimize virtual memory performance
...

That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory
...

The guest OS continues to control the mapping of virtual addresses to the physical memory
addresses of VMs
...
The VMM
is responsible for mapping the guest physical memory to the actual machine memory
...
Figure 3.12 shows the two-level memory mapping procedure.
...
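The two-stage mapping, and the shadow page table that caches its composition, can be modeled with page-granular dictionaries. All addresses and table sizes here are invented; real page tables are multi-level hardware structures.

```python
# Guest page tables map VA -> guest "physical" PA; the VMM maps PA -> machine
# MA; the shadow page table caches the composed VA -> MA translation that the
# real MMU can use directly.

guest_page_table = {0x1000: 0x5000, 0x2000: 0x6000}   # VA -> PA (guest OS)
vmm_p2m = {0x5000: 0x9000, 0x6000: 0xA000}            # PA -> MA (VMM)

shadow_page_table = {}                                 # VA -> MA, built lazily

def translate(va):
    if va in shadow_page_table:                        # fast path: one lookup
        return shadow_page_table[va]
    pa = guest_page_table[va]                          # stage 1 (guest OS)
    ma = vmm_p2m[pa]                                   # stage 2 (VMM)
    shadow_page_table[va] = ma                         # cache composed mapping
    return ma

assert translate(0x1000) == 0x9000
assert translate(0x2000) == 0xA000
assert shadow_page_table == {0x1000: 0x9000, 0x2000: 0xA000}
```

This also makes the maintenance cost visible: whenever the guest changes its VA-to-PA mapping, the cached composed entry becomes stale and the VMM must update the shadow table, which is exactly the overhead the text attributes to this technique.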
[Figure 3.12: Two-level memory mapping procedure. Processes in VM1 and VM2 map virtual memory (VA) to physical memory (PA), and the VMM maps PA to machine memory (MA). (Courtesy of R. ... [68])]

Since each page table of the guest OSes has a separate page table in the VMM corresponding to
it, the VMM page table is called the shadow page table
...
The MMU already handles virtual-to-physical translations as defined
by the OS
...
Since modern operating systems maintain a set of
page tables for every process, the shadow page tables will get flooded
...

VMware uses shadow page tables to perform virtual-memory-to-machine-memory address translation
...
When the guest OS changes the virtual memory
to a physical memory mapping, the VMM updates the shadow page tables to enable a direct
lookup
...
It provides hardware assistance to the two-stage address translation in a virtual execution
environment by using a technology called nested paging
...
Example 3.6 Extended Page Table by Intel for Memory Virtualization
Since the efficiency of the software shadow page table technique was too low, Intel developed a hardware-based EPT technique to improve it, as illustrated in Figure 3.13.
...
In addition, Intel offers a Virtual Processor
ID (VPID) to improve use of the TLB
...
In Figure 3.13, the page tables of the guest OS and the EPT are each four-level. When a virtual address needs to be translated, the CPU will first look for the L4 page table pointed to by Guest CR3. In this procedure, the CPU will check the EPT TLB to see if the translation is there. If the CPU cannot find the translation in the EPT, an EPT violation exception will be raised. If the entry corresponding to the GVA in the L4

FIGURE 3.13
Memory virtualization using EPT by Intel: a process's guest virtual address (GVA) is translated by the four-level guest page tables (L1-L4, rooted at Guest CR3) into a guest physical address (GPA), which the EPT MMU and EPT TLB in hardware translate into a host physical address (HPA).
GVA: virtual memory address of a process in the guest OS; GPA: physical memory address in the guest OS; HPA: physical memory address of the host machine.
page table is a page fault, the CPU will generate a page fault interrupt and will let the guest OS kernel handle the interrupt. In the worst case, to get the HPA corresponding to a GVA, the CPU needs to look up the EPT five times, and each time the memory needs to be accessed four times, for 20 memory accesses in all. To overcome this shortcoming, Intel increased the size of the EPT TLB to decrease the number of memory accesses.
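The worst-case walk cost stated above can be checked with simple arithmetic: the CPU must translate five guest-physical addresses through the EPT (the four guest page-table pointers starting from Guest CR3, plus the final data page), and each EPT walk touches four levels:

```python
GUEST_LEVELS = 4            # L4..L1 guest page tables
EPT_ACCESSES_PER_WALK = 4   # four-level EPT walk per guest-physical address

def worst_case_ept_accesses():
    # 4 guest page-table pointers + 1 final GPA = 5 EPT walks,
    # each costing 4 memory accesses.
    ept_walks = GUEST_LEVELS + 1
    return ept_walks * EPT_ACCESSES_PER_WALK

assert worst_case_ept_accesses() == 20
```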
3.3.4 I/O Virtualization
At the time of this writing, there are three ways to implement I/O virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is the first approach: generally, it emulates well-known, real-world devices.
FIGURE 3.14
Device emulation for I/O virtualization, implemented inside the middle layer that maps real I/O devices into the virtual devices for the guest device driver to use. The guest device driver in the guest OS talks to a virtual device; the virtualization layer emulates the virtual device, remaps guest and real I/O addresses, multiplexes and drives the physical device through the host's device driver and I/O stack, and can add other I/O features.
All the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, are replicated in software (see Chadha, et al. and Dong, et al.). This software is located in the VMM and acts as a virtual device. The full device emulation approach is shown in Figure 3.14.

A single hardware device can be shared by multiple VMs that run concurrently. However, software emulation runs much slower than the hardware it emulates.
The para-virtualization method of I/O virtualization is typically used in Xen. It is also known as the split driver model, consisting of a frontend driver and a backend driver. The frontend driver is running in Domain U and the backend driver is running in Domain 0. The frontend driver manages the I/O requests of the guest OSes, and the backend driver is responsible for managing the real I/O devices and multiplexing the I/O data of different VMs. Although para-virtualized I/O achieves better device performance than full device emulation, it comes with a higher CPU overhead.
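A minimal sketch of the split driver model just described (a toy queue model, not Xen's real shared-memory ring protocol; class and method names are assumptions): each guest's frontend places requests on its ring, and the backend in Domain 0 drains all rings and multiplexes them onto the one physical device.

```python
from collections import deque

class FrontendDriver:
    """Runs in Domain U: forwards the guest's I/O requests to its ring."""
    def __init__(self, domain_id, ring):
        self.domain_id, self.ring = domain_id, ring
    def submit(self, request):
        self.ring.append((self.domain_id, request))

class BackendDriver:
    """Runs in Domain 0: drains every ring and drives the real device."""
    def __init__(self, rings):
        self.rings = rings
        self.device_log = []          # stands in for the physical device
    def service(self):
        for ring in self.rings:
            while ring:
                dom, req = ring.popleft()
                self.device_log.append((dom, req))

ring_u1, ring_u2 = deque(), deque()
fe1 = FrontendDriver("DomU1", ring_u1)
fe2 = FrontendDriver("DomU2", ring_u2)
be = BackendDriver([ring_u1, ring_u2])
fe1.submit("read blk 7")
fe2.submit("write blk 3")
be.service()
assert be.device_log == [("DomU1", "read blk 7"), ("DomU2", "write blk 3")]
```

The CPU overhead mentioned in the text corresponds to the extra hops a request takes here: guest driver, ring, backend, real driver.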

Direct I/O virtualization lets the VM access devices directly, which can achieve close-to-native performance without high CPU costs. However, current direct I/O virtualization implementations focus on networking for mainframes. There are still challenges for commodity hardware devices. For example, when a physical device is reclaimed (required by workload migration) for later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary memory locations) that can function incorrectly or even crash the whole system. Since software-based I/O emulation suffers from high overhead, hardware-assisted I/O virtualization is critical.

Intel VT-d supports the remapping of I/O DMA transfers and device-generated interrupts.

Another way to help I/O virtualization is via self-virtualized I/O (SV-IO) [47]. The key idea is to harness the rich resources of a multicore processor: all tasks associated with virtualizing an I/O device are encapsulated in SV-IO. SV-IO defines one virtual interface (VIF) for every kind of virtualized I/O device, such as virtual network interfaces, virtual block devices (disks), virtual camera devices, and others. Each VIF consists of two message queues: one for outgoing messages to the device and the other for incoming messages from the device. In addition, each VIF has a unique ID for identifying it in SV-IO.
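The VIF structure described above can be sketched directly (an illustrative model only; the class and field names are assumptions, not the SV-IO API): each VIF carries a unique ID and a pair of message queues.

```python
import itertools

_vif_ids = itertools.count(1)   # source of unique VIF IDs within SV-IO

class VIF:
    """One virtual interface per virtualized I/O device kind."""
    def __init__(self, kind):
        self.kind = kind
        self.vif_id = next(_vif_ids)  # unique ID identifying the VIF in SV-IO
        self.outgoing = []            # messages from the guest to the device
        self.incoming = []            # messages from the device to the guest

net_vif = VIF("network")
blk_vif = VIF("block")
net_vif.outgoing.append("tx packet")
assert net_vif.vif_id != blk_vif.vif_id   # IDs are distinct per VIF
assert net_vif.outgoing == ["tx packet"]
```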
Example 3.7 VMware Workstation for I/O Virtualization
The VMware Workstation runs as an application. The application portion (VMApp) uses a driver loaded into the host operating system (VMDriver) to establish the privileged VMM, which runs directly on the hardware. A given physical processor executes in either the host world or the VMM world, with the VMDriver facilitating the transfer of control between the two worlds. The VMware Workstation employs full device emulation to implement I/O virtualization. Figure 3.15 shows the functional blocks used in sending and receiving packets via the emulated virtual NIC.

FIGURE 3.15
Functional blocks involved in sending and receiving network packets.
The virtual NIC models an AMD Lance Am79C970A controller. The guest's Lance device driver initiates a packet transmission by writing a sequence of virtual I/O ports on the emulated device. When the last OUT instruction of the sequence is encountered, the Lance emulator calls a normal write() to the VMNet driver, which passes the packet onto the network via a host NIC; the VMApp then switches back to the VMM. The switch raises a virtual interrupt to notify the guest device driver that the packet was sent.


3.3.5 Virtualization in Multi-Core Processors
Virtualizing a multi-core processor is relatively more complicated than virtualizing a uni-core processor. There are mainly two difficulties: application programs must be parallelized to use all cores fully, and software must explicitly assign tasks to the cores, which is a very complex problem.

Concerning the first challenge, new programming models, languages, and libraries are needed to make parallel programming easier. The second challenge has spawned research involving scheduling algorithms and resource management policies. What is worse, as technology scales, a new challenge called dynamic heterogeneity is emerging to mix the fat CPU core and thin GPU cores on the same chip, which further complicates multi-core or many-core resource management.


3.3.5.1 Physical versus Virtual Processor Cores
Wells, et al. [74] proposed a multicore virtualization method to allow hardware designers to get an abstraction of the low-level details of the processor cores. It is located under the ISA and remains unmodified by the operating system or VMM (hypervisor). Figure 3.16 illustrates the technique of a software-visible VCPU moving from one core to another and temporarily suspending execution of a VCPU when there are no appropriate cores on which it can run.
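The pause behavior described above can be sketched as a toy placement routine (purely illustrative; the VCPU/core names are hypothetical): each software-visible VCPU takes a free physical core, and any VCPU left over is paused until a core becomes available.

```python
def assign(vcpus, cores):
    """Map VCPUs onto physical cores; pause VCPUs that cannot be placed."""
    placement, free = {}, list(cores)
    for v in vcpus:
        placement[v] = free.pop(0) if free else "paused"
    return placement

# Four VCPUs exposed to software, but only three physical cores present:
p = assign(["V0", "V1", "V2", "V3"], ["C0", "C1", "C2"])
assert p["V3"] == "paused"   # no appropriate core, so V3 is suspended
```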
3.3.5.2 Virtual Hierarchy
The emerging many-core chip multiprocessors (CMPs) provide a new computing landscape. The idea of overlaying a virtual coherence and caching hierarchy onto a many-core processor was originally suggested by Marty and Hill [39]. Unlike a fixed physical hierarchy, a virtual hierarchy can adapt to fit how the work is space shared, for improved performance and performance isolation.

FIGURE 3.16
Multicore virtualization method exposing VCPUs to the software above the ISA: a software-visible VCPU (e.g., V3) may move between physical cores (C0, C2) or be paused when no appropriate core is available.
(Courtesy of Wells, et al. [74])
A virtual hierarchy is a cache hierarchy that can adapt to fit the workload or mix of workloads [39]. When a miss leaves a tile, it first attempts to locate the block (or sharers) within the first level. A miss at the L1 cache can invoke the L2 access. The resulting space-shared assignment of workloads to groups of cores is illustrated in Figure 3.17(a). The basic assumption is that each workload runs in its own VM. Statically distributing the directory among tiles can do much better, provided operating systems or hypervisors carefully map virtual pages to physical frames.

Figure 3.17(b) shows the case with a global second level added to the hierarchy. Each VM operates in an isolated fashion at the first level. Moreover, the shared resources of cache capacity, interconnect links, and miss handling are mostly isolated between VMs. This facilitates dynamically repartitioning resources without costly cache flushes.

A virtual hierarchy adapts to space-shared workloads like multiprogramming and server consolidation. Figure 3.17 shows a case study focused on consolidated server workloads in a tiled architecture.


FIGURE 3.17
CMP server consolidation by space-sharing of VMs into many cores forming multiple virtual clusters to execute various workloads.

3.4 VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT
A physical cluster is a collection of servers (physical machines) interconnected by a physical network such as a LAN. Here, we introduce virtual clusters and study their properties as well as explore their potential applications.



When a traditional VM is initialized, the administrator needs to manually write configuration information or specify the configuration sources. Amazon's Elastic Compute Cloud (EC2) is a good example of a web service that provides elastic computing power in a cloud. EC2 permits customers to create VMs and to manage user accounts over the time of their use. Most virtualization platforms, including XenServer and VMware ESX Server, support a bridging mode which allows all domains to appear on the network as individual hosts. With this mode, VMs can communicate with one another freely through the virtual network interface card and configure the network automatically.


3.4.1 Physical versus Virtual Clusters
Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters. The VMs in a virtual cluster are interconnected logically by a virtual network across several physical networks. Figure 3.18 illustrates the concepts of virtual clusters and physical clusters. Each virtual cluster is formed with physical machines or a VM hosted by multiple physical clusters.
The provisioning of VMs to a virtual cluster is done dynamically to have the following interesting properties:

• The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running with different OSes can be deployed on the same physical node.
• A VM runs with a guest OS, which is often different from the host OS that manages the resources in the physical machine where the VM is implemented.
• The purpose of using VMs is to consolidate multiple functionalities on the same server. This will greatly enhance server utilization and application flexibility.

FIGURE 3.18
A cloud platform with four virtual clusters over three physical clusters shaded differently.
• VMs can be colonized (replicated) in multiple servers for the purpose of promoting distributed parallelism, fault tolerance, and disaster recovery.
• The failure of any physical nodes may disable some VMs installed on the failing nodes. But the failure of VMs will not pull down the host system.

Since system virtualization has been widely used, it is necessary to effectively manage VMs running on a mass of physical computing nodes (also called virtual clusters) and consequently build a high-performance virtualized computing environment. The different node colors in Figure 3.18 refer to different virtual clusters. In a virtual cluster system, it is quite important to store the large number of VM images efficiently.
Figure 3.19 shows the concept of a virtual cluster based on application partitioning or customization. As a large number of VM images might be present, the most important thing is to determine how to store those images in the system efficiently. There are common installations for most users or applications, such as operating systems or user-level programming libraries. These software packages can be preinstalled as templates (called template VMs). New OS instances can be copied from the template VM, and user-specific components can then be installed on those instances.

Three physical clusters are shown on the left side of Figure 3.18. Four virtual clusters are created on the right, over the physical clusters. The physical machines are called host systems. In contrast, the VMs are guest systems.

FIGURE 3.19
The concept of a virtual cluster based on application partitioning.

Each VM can be installed on a remote server or replicated on multiple servers belonging to the same or different physical clusters. The boundary of a virtual cluster can change as VM nodes are added, removed, or migrated dynamically over time.


3.4.1.1 Fast Deployment and Effective Scheduling
The system should have the capability of fast deployment. Here, deployment means two things: to construct and distribute software stacks (OS, libraries, applications) to a physical node inside clusters as fast as possible, and to quickly switch runtime environments from one user's virtual cluster to another user's virtual cluster.
...

The concept of "green computing" has attracted much attention recently. However, previous approaches have focused on saving the energy cost of components in a single workstation without a global vision. Consequently, they do not necessarily reduce the power consumption of the whole cluster. The live migration of VMs allows workloads of one node to transfer to another node. However, it does not guarantee that VMs can randomly migrate among themselves. In fact, the potential overhead caused by live migrations of VMs cannot be ignored. Therefore, the challenge is to determine how to design migration strategies to implement green computing without influencing the performance of clusters.

Another advantage of virtualization is load balancing of applications in a virtual cluster. Load balancing can be achieved using the load index and frequency of user logins. Consequently, we can increase the resource utilization of nodes and shorten the response time of systems. Dynamically adjusting loads among nodes by live migration of VMs is desired when the loads on cluster nodes become quite unbalanced.
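A toy version of such a migration strategy (purely illustrative; the threshold and node names are assumptions): compare the most and least loaded nodes and plan one migration only when the imbalance justifies the migration overhead discussed above.

```python
def plan_migration(node_loads, threshold=0.3):
    """Return (hot, cold) if one VM should migrate from hot to cold, else None.

    node_loads maps node name -> load index (here, CPU utilization in [0, 1]).
    The threshold keeps us from migrating for marginal gains, since live
    migration itself costs CPU and network bandwidth.
    """
    hot = max(node_loads, key=node_loads.get)
    cold = min(node_loads, key=node_loads.get)
    if node_loads[hot] - node_loads[cold] > threshold:
        return (hot, cold)
    return None

assert plan_migration({"n1": 0.9, "n2": 0.2, "n3": 0.5}) == ("n1", "n2")
assert plan_migration({"n1": 0.5, "n2": 0.45}) is None   # not worth migrating
```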
4
...
2 High-Performance Virtual Storage
The template VM can be distributed to several physical hosts in the cluster to customize the VMs
...
It is important to efficiently manage the disk spaces occupied by template software
packages
...
Hash values are used to compare the contents of data blocks
...
New blocks are created when users modify the corresponding data
...

Basically, there are four steps to deploy a group of VMs onto a target cluster: preparing the disk image, configuring the VMs, choosing the destination nodes, and executing the VM deployment command on every host. A template is a disk image that includes a preinstalled operating system, with or without certain application software. Templates could implement the COW (Copy on Write) format. A new COW backup file is very small and easy to create and transfer, so it definitely reduces disk space consumption and shortens VM deployment time compared with copying a whole raw image file.
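The hash-based block comparison mentioned above can be sketched as content-addressed storage (an illustrative sketch; the function and store names are assumptions): identical blocks across template and user images are stored once, and each VM's profile keeps only block identifiers.

```python
import hashlib

store = {}   # content hash -> block contents (shared across all VM images)

def put_block(data: bytes) -> str:
    """Store a block by its content hash; duplicates are stored only once."""
    h = hashlib.sha256(data).hexdigest()
    store.setdefault(h, data)
    return h

# Two VM images built from the same template share the unchanged blocks:
profile_vm1 = [put_block(b"OS page"), put_block(b"libc page")]
profile_vm2 = [put_block(b"OS page"), put_block(b"app page")]

assert len(store) == 3                    # 4 references, only 3 unique blocks
assert profile_vm1[0] == profile_vm2[0]   # the shared template block
```

Modifying a block simply produces a new hash, which mirrors the copy-on-write behavior of templates: unchanged template data is never duplicated.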


To configure the VMs, one needs to record each VM configuration into a file. VMs with the same configurations could use preedited profiles to simplify the process. Most configuration items use the same settings, while some of them, such as the UUID, VM name, and IP address, are assigned with automatically calculated values. A strategy to choose the proper destination host for each VM is also needed; the deployment principle is to fulfill the VM requirement and to balance workloads among the whole host network.


3.4.2 Live VM Migration Steps and Performance Effects
In a cluster built with mixed nodes of host and guest systems, the normal method of operation is to run everything on the physical machine. When a VM fails, its role could be replaced by another VM on a different node, as long as they both run with the same guest OS. In other words, a physical node can fail over to a VM on another host. The advantage is enhanced failover flexibility. The potential drawback is that a VM must stop playing its role if its residing host node fails. However, this problem can be mitigated with VM live migration. Figure 3.20 shows the process of live migration of a VM from host A to host B.

There are four ways to manage a virtual cluster. First, you can use a guest-based manager, by which the cluster manager resides on a guest system. In this case, multiple VMs form a virtual cluster. Another example is Sun's cluster Oasis, an experimental Solaris cluster of VMs supported by a VMware VMM. Second, you can build a cluster manager on the host systems. The host-based manager supervises the guest systems and can restart a guest system on another physical machine.

These two cluster management systems are either guest-only or host-only; they do not mix. A third way is to use an independent cluster manager on both the host and guest systems. This will make infrastructure management more complex, however. Finally, you can use an integrated cluster manager on the guest and host systems. This means the manager must be designed to distinguish between virtualized resources and physical resources.

VMs can be live-migrated from one physical machine to another; in case of failure, one VM can be replaced by another VM. The major attraction of this scenario is that virtual clustering provides dynamic resources that can be quickly put together upon user demand or after a node failure. When a VM runs a live service, it is necessary to make a trade-off to ensure that the migration occurs in a manner that minimizes all three metrics: downtime, total migration time, and the network traffic imposed by migration.

Furthermore, we should ensure that the migration will not disrupt other active services residing in the same host through resource contention (e.g., CPU, network bandwidth). An inactive state is defined by the virtualization platform, under which

FIGURE 3.20
Live migration process of a VM from one host to another. Stage 0 (Pre-Migration): active VM running normally on host A; an alternate physical host may be preselected for migration; block devices are mirrored and free resources maintained. Stage 1 (Reservation): initialize a container on the target host. Stage 2 (Iterative pre-copy): enable shadow paging and copy dirty pages in successive rounds (overhead due to copying). A later stop-and-copy stage incurs the downtime during which the VM is out of service.
(Courtesy of C. Clark, et al. [14])
An active state refers to a VM that has been instantiated at the virtualization platform to perform a real task. A VM enters the suspended state if its machine file and virtual resources are stored back to the disk. As shown in Figure 3.20, live migration of a VM consists of the following six steps:

Steps 0 and 1: Start migration. This step makes preparations for the migration, including determining the migrating VM and the destination host. Although users could manually make a VM migrate to an appointed host, in most circumstances the migration is automatically started by strategies such as load balancing and server consolidation.

Step 2: Transfer memory. Since the whole execution state of the VM is stored in memory, sending the VM's memory to the destination node ensures continuity of the service provided by the VM. All of the memory data is transferred in the first round, and then the migration controller recopies the memory data changed in the previous round. These steps keep iterating until the dirty portion of the memory is small enough to handle the final copy.


Step 3: Suspend the VM and copy the last portion of the data. The migrating VM's execution is suspended when the last round's memory data is transferred. During this step, the VM is stopped and its applications will no longer run. This "service unavailable" time is the downtime of migration, which should be as short as possible.

Steps 4 and 5: Commit and activate the new host. After all the needed data is copied, the VM reloads its state on the destination host and recovers the execution of its programs. Then the network connection is redirected to the new VM and the dependency to the source host is cleared.
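Step 2's iterative pre-copy can be sketched as a small loop (a toy model with made-up page counts, not any hypervisor's implementation): all pages go over once, then only the pages dirtied in each round, until the dirty set is small enough to stop the VM for the final copy.

```python
def live_migrate(pages, dirty_rounds, stop_threshold=2):
    """Return (pages sent during pre-copy, pages left for the final copy)."""
    transferred = len(pages)                 # round 1: copy all memory pages
    for dirty in dirty_rounds:               # later rounds: dirty pages only
        if len(dirty) <= stop_threshold:
            # small enough: suspend the VM and send the remainder (downtime)
            return transferred, len(dirty)
        transferred += len(dirty)
    return transferred, 0

traffic, final_copy = live_migrate(
    pages=range(100),
    dirty_rounds=[{1, 2, 3, 4, 5}, {2, 3, 4}, {3}],
)
assert (traffic, final_copy) == (108, 1)
```

The trade-off in the text is visible here: more pre-copy rounds shrink the final copy (shorter downtime) but increase the total data transferred.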

Figure 3.21 shows the effect of live migration on the data transmission rate of a VM moved from one host to another. Before copying the VM with 512 KB files for 100 clients, the data throughput was 870 Mbit/second. Then the data rate reduces to 694 Mbit/second over 9.8 seconds of iterative copying. The system experiences only 165 ms of downtime before the VM is restored at the destination host. This is critical to achieve dynamic cluster reconfiguration and disaster recovery as needed in cloud computing.

With the emergence of widespread cluster computing more than a decade ago, many cluster configuration and management systems have been developed to achieve a range of goals. VM technology has become a popular method for simplifying management and sharing of physical computing resources. Platforms
FIGURE 3.21
Effect on data transmission rate (Mbit/sec) of live migration of a VM serving 512 KB files to 100 concurrent clients: throughput samples (over 100 ms and 500 ms) drop to 694 Mbit/sec during 9.8 seconds of iterative pre-copy, with 165 ms total downtime, over roughly 130 seconds of elapsed time.
(Courtesy of C. Clark, et al. [14])

such as VMware and Xen allow multiple VMs with different operating systems and configurations
to coexist on the same physical host in mutual isolation
...
3.4.3 Migration of Memory, Files, and Network Resources
Shared clusters offer economies of scale and more effective utilization of resources by multiplexing. When one system migrates to another physical node, we should consider the following issues.
3.4.3.1 Memory Migration
This is one of the most important aspects of VM migration. Moving the memory instance of a VM from one physical host to another can be approached in any number of ways, but traditionally the concepts behind the techniques tend to share common implementation paradigms.

Memory migration can be in a range of hundreds of megabytes to a few gigabytes in a typical system today, and it needs to be done in an efficient manner. The Internet Suspend-Resume (ISR) technique exploits temporal locality, since memory states are likely to have considerable overlap in the suspended and resumed instances of a VM. Temporal locality refers to the fact that the memory states differ only by the amount of work done since a VM was last suspended before being initiated for migration.

To exploit temporal locality, each file in the file system is represented as a tree of small subfiles. A copy of this tree exists in both the suspended and resumed VM instances, so only the changed subfiles need to be transmitted. The ISR technique deals with situations where the migration of live machines is not a necessity, so its downtime is high compared with the live migration techniques discussed later.


3.4.3.2 File System Migration
To support VM migration, a system must provide each VM with a consistent, location-independent view of the file system that is available on all hosts. A simple way to achieve this is to provide each VM with its own virtual disk which the file system is mapped to, and transport the contents of this virtual disk along with the other states of the VM.

Another way is to have a global file system across all machines where a VM could be located, which removes the need to copy files from one machine to another.

The actual file systems themselves are not mapped onto the distributed file system. The relevant VM files are explicitly copied into the local file system for a resume operation and taken out of the local file system for a suspend operation. This approach also essentially disassociates the VMM from any particular distributed file system semantics.

In smart copying, the VMM exploits spatial locality. Typically, people often move between the same small number of locations, such as their home and office. In these conditions, it is possible to transmit only the difference between the two file systems at the suspending and resuming locations, which significantly reduces the amount of data actually moved. In situations where there is no locality to exploit, a different approach is to synthesize much of the state at the resuming site, since operating system and application software account for the majority of storage space.
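The smart-copying idea above reduces to a simple block diff (a toy sketch; block IDs and contents are made up): if the resuming site already holds an earlier copy of the file system, only the blocks that differ cross the network.

```python
def diff_blocks(old: dict, new: dict) -> dict:
    """Return only the blocks in `new` that differ from `old`."""
    return {k: v for k, v in new.items() if old.get(k) != v}

# File system at the suspending site vs. stale copy at the resuming site:
suspended_site = {0: b"os", 1: b"libs", 2: b"data-v2"}
resuming_site  = {0: b"os", 1: b"libs", 2: b"data-v1"}

delta = diff_blocks(resuming_site, suspended_site)
assert delta == {2: b"data-v2"}   # only one changed block is transmitted
```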


3.4.3.3 Network Migration
A migrating VM should maintain all open network connections without relying on forwarding mechanisms on the original host. To enable remote systems to locate and communicate with a VM, each VM must be assigned a virtual IP address known to other entities. Each VM can also have its own distinct virtual MAC address, and the VMM maintains a mapping of the virtual IP and MAC addresses to their corresponding VMs. In general, a migrating VM includes all the protocol states and carries its IP address with it.
An unsolicited ARP reply from the migrating host, advertising that the IP has moved to a new location, solves the open network connection problem by reconfiguring all the peers to send future packets to the new location. Alternatively, on a switched network, the migrating OS can keep its original Ethernet MAC address and rely on the network switch to detect its move to a new port.

Live migration of VMs is being increasingly utilized in today's enterprise environments to provide efficient online system maintenance, reconfiguration, load balancing, and proactive fault tolerance. As a result, many implementations are available which support the feature using disparate functionalities. By importing the precopy mechanism, a VM can be live-migrated without being stopped, keeping its applications running during the migration. Here, we focus on VM migration within a cluster environment where a network-accessible storage system, such as a storage


area network (SAN) or network attached storage (NAS), is employed. Live migration techniques mainly use the precopy approach, which first transfers all memory pages, and then iteratively copies only the pages modified during the previous round. When the applications' writable working set becomes small, the VM is suspended and only the CPU state and the dirty pages of the last round are sent to the destination. During the precopy phase, the migration daemon continually consumes network bandwidth to transfer dirty pages in each round. An adaptive rate limiting approach is employed to mitigate this issue, but total migration time is prolonged by nearly 10 times.

In fact, these issues with the precopy approach are caused by the large amount of data transferred during the whole migration process. A checkpointing/recovery and trace/replay approach transfers the execution trace file, which is logged by a trace daemon, in iterations rather than dirty pages. Since the total size of the log files is much smaller than that of the dirty pages, total migration time and downtime are drastically reduced. However, the inequality between source and target nodes limits the application scope of this kind of live migration in clusters.

Another strategy is postcopy. Here, all memory pages are transferred only once during the whole migration process and the baseline total migration time is reduced. With the advent of multicore or many-core machines, abundant CPU resources are available. We can exploit these copious CPU resources to compress page frames, so the amount of transferred data can be significantly reduced. Decompression is simple and very fast and requires no memory for decompression.
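The CPU-for-bandwidth trade just described can be demonstrated with a standard-library compressor (zlib is used here only as an illustration of the idea, not as the algorithm any particular hypervisor uses); memory pages with high regularity, such as zero pages, shrink dramatically:

```python
import zlib

page = b"\x00" * 4096                       # a 4 KB zero page, common in practice
compressed = zlib.compress(page)

assert len(compressed) < 64                 # far smaller than the 4096-byte page
assert zlib.decompress(compressed) == page  # lossless round trip
```

A real system would pick different algorithms for pages with different kinds of regularity, trading compression ratio against the speed needed to keep up with the pre-copy rounds.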
4
...
4 Live Migration of VM Using Xen
In Section 3
...
1, we studied Xen as a VMM or hypervisor, which allows multiple commodity OSes to
share x86 hardware in a safe and orderly fashion
...
Domain 0 (or Dom0) performs tasks to
create, terminate, or migrate to another host
...


Example 3.8 Live Migration of VMs between Two Xen-Enabled Host Machines
Live migration is a useful feature and natural extension to virtualization platforms that allows for the transfer of a VM from one physical machine to another with little or no downtime of the services hosted by the VM. Xen also supports VM migration by using a mechanism called Remote Direct Memory Access (RDMA). RDMA implements a different transfer protocol whose origin and destination VM buffers must be registered before any transfer operations occur.

FIGURE 3.22
Live migration of a VM from the Dom0 domain to a Xen-enabled target host.
Data communication over RDMA does not need to involve the CPU, caches, or context switches. Figure 3.22 shows the live migration of a VM from the Dom0 domain to a Xen-enabled target host.

This design requires that we make trade-offs between two factors: compression speed and compression ratio. A single compression algorithm for all memory data is difficult to achieve the win-win status that we expect, so algorithms should be matched to pages with different kinds of regularities.

The structure of this live migration system is presented in Dom0. Shadow page tables in the VMM layer trace modifications to the memory pages of migrated VMs during the precopy phase, and corresponding flags are set in a dirty bitmap. At the start of each precopy round, the bitmap is sent to the migration daemon; then the bitmap is cleared and the shadow page tables are destroyed and re-created for the next round. The system resides in Xen's management VM. Memory pages denoted by the bitmap are extracted and compressed before they are sent to the destination. The compressed data is then decompressed on the target.
4
...
5 summarizes four virtual cluster research projects
...
The Cellular Disco at Stanford is a virtual
cluster built in a shared-memory multiprocessor system
...
The COD and VIOLIN clusters are studied in forthcoming examples
...
Table 3.5 Experimental Results on Four Research Virtual Clusters

Project Name | Design Objectives | Reported Results and References
Cluster-on-Demand at Duke Univ. | Dynamic resource allocation with a virtual cluster management system | [12]
Cellular Disco at Stanford Univ. | To deploy a virtual cluster on a shared-memory multiprocessor | ...
VIOLIN at Purdue Univ. | ... | Performance achieved with 30% resource slacks over VM clusters [55]

FIGURE 3.23
COD servers backed by a configuration database: resource policies and template definitions feed a resource manager over a physical cluster; network services (DHCP, MySQL, Confd, NIS, MyDNS) support network boot, automatic configuration, and resource negotiation; dynamic virtual clusters (vClusters) are created through virtual cluster managers (VCMs), a batch pool, a web interface and web service, database-driven network install controlled by a trampoline, and an energy-managed reserve pool (ACPI Wake-on-LAN).
(Courtesy of Jeff Chase, et al. [12])
Example 3.9 The Cluster-on-Demand (COD) Project at Duke University
Developed by researchers at Duke University, the COD (Cluster-on-Demand) project is a virtual cluster management system for dynamic allocation of servers from a computing pool to multiple virtual clusters [12]. The idea is illustrated by the prototype implementation shown in Figure 3.23. The COD
FIGURE 3.24
Number of nodes in each of three virtual clusters (Systems, Architecture, and Biogeometry) during eight days (Day 1 through Day 8) of a live deployment.
(Courtesy of J. Chase, et al. [12])
partitions a physical cluster into multiple virtual clusters (vClusters). The vClusters run a batch schedule from Sun's GridEngine on a web server cluster.

The Duke researchers used the Sun GridEngine scheduler to demonstrate that dynamic virtual clusters are an enabling abstraction for advanced resource management in computing utilities such as grids. Attractive features include resource reservation, adaptive provisioning, scavenging of idle resources, and dynamic instantiation of grid services. The COD servers are backed by a configuration database, and the system provides resource policies and template definitions in response to user requests.

Figure 3.24 shows the variation in the number of nodes in each of three virtual clusters during eight days of a live deployment. The experiments were performed with multiple SGE batch pools on a test bed of 80 rack-mounted IBM xSeries-335 servers within the Duke cluster. Dynamic provisioning and deprovisioning of virtual clusters are needed in real-life cluster applications.
Example 3.10 The VIOLIN Project at Purdue University
The Purdue VIOLIN Project applies live VM migration to reconfigure a virtual cluster environment. The project leverages the maturity of VM migration and environment adaptation technology. Figure 3.25 illustrates five virtual environments, VIOLIN 1-5, sharing two physical cluster domains.

The squares of various shadings represent the VMs deployed in the physical server nodes. A virtual execution environment is able to relocate itself across the infrastructure, and can scale its share of infrastructural resources. The adaptation overhead is maintained at 20 sec out of 1,200 sec in solving a large NEMO3D problem of 1 million particles.
FIGURE 3.25
VIOLIN adaptation scenario of five virtual environments (VIOLIN 1-5) sharing two hosted cluster domains (Domain 1 and Domain 2), shown without adaptation and with adaptation: after VIOLIN 2 finishes and after VIOLIN 4 and 5 are created, adaptation relocates and resizes the remaining virtual environments.
(Courtesy of P. Ruth, et al. [55])
Of course, the gain in shared resource utilization will benefit many users, and the performance gain varies with different adaptation scenarios. We leave readers to trace another adaptation scenario in Problem 3.17 at the end of this chapter to tell the differences.


3.5 VIRTUALIZATION FOR DATA-CENTER AUTOMATION
Data centers have grown rapidly in recent years. Google, Yahoo!, Amazon, Microsoft, HP, Apple, and IBM are all in the game. Data-center automation means that huge volumes of hardware, software, and database resources in these data centers can be allocated dynamically to millions of Internet users simultaneously, with guaranteed QoS and cost-effectiveness. According to a 2007 IDC report on the growth of virtualization and its market distribution in major IT sectors, the virtualization market grew steadily from 2006 to 2011. The majority was dominated by production consolidation and software development.

The latest virtualization development highlights high availability (HA), backup services, workload balancing, and further increases in client bases. The total business opportunities were projected to increase to $3.2 billion by 2011, with the major market share moving to the areas of HA, utility computing, production consolidation, and client bases.


3.5.1 Server Consolidation in Data Centers
In data centers, a large number of heterogeneous workloads can run on servers at various times. These can be roughly divided into chatty workloads and noninteractive workloads. Chatty workloads may burst at some point and return to a silent state at some other point. Noninteractive workloads do not require people's efforts to make progress after they are submitted. At various stages, the requirements for resources of these workloads are dramatically different.
To guarantee that a workload can cope with its load peaks, resources are commonly allocated for peak demand. In this case, the granularity of resource optimization is focused on the CPU, memory, and network interfaces, and most servers in data centers are underutilized. A large amount of hardware, space, power, and management cost of these servers is wasted. Among several server consolidation techniques, such as centralized and physical consolidation, virtualization-based server consolidation is the most powerful. The other techniques are performed with the granularity of a full server machine, which makes resource management far from well optimized.

In general, the use of VMs increases resource management complexity. This poses the challenge of improving resource utilization while guaranteeing QoS in data centers. In detail, server virtualization has the following side effects:

• Consolidation enhances hardware utilization. Many underutilized servers are consolidated into fewer servers to enhance resource utilization. Consolidation also facilitates backup services and disaster recovery.
• This approach enables more agile provisioning and deployment of resources. In a virtual environment, the images of the guest OSes and their applications are readily cloned and reused.
• The total cost of ownership is reduced. In this sense, server virtualization causes deferred purchases of new servers, a smaller data-center footprint, lower maintenance costs, and lower power, cooling, and cabling requirements.
• This approach improves availability and business continuity. The crash of a guest OS has no effect on the host OS or any other guest OS.


To automate data-center operations, one must consider resource scheduling, architectural support, power management, automatic or autonomic resource management, performance of analytical models, and so on. Scheduling and reallocations can be done in a wide range of levels in a set of data centers — at least at the VM level, the server level, and the data-center level. Ideally, scheduling and resource reallocations should be done at all levels, although current techniques typically focus on only one or two of them.

Dynamic CPU allocation is based on VM utilization and application-level QoS metrics. Another scheme uses a two-level resource management system to handle the complexity involved: a local controller at the VM level and a global controller at the server level implement autonomic resource allocation via the interaction of the local and global controllers.
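A hedged sketch of such a two-level controller interaction (a toy model with made-up policies and numbers, not any published system's algorithm): each local controller requests a CPU share based on its VM's utilization, and the global controller scales all requests to fit the host's capacity.

```python
def local_controller(utilization, current_share):
    """Per-VM policy: ask for more CPU when busy, release some when idle."""
    return current_share * (1.2 if utilization > 0.8 else 0.9)

def global_controller(requests, capacity=1.0):
    """Host-level policy: scale requests down if they exceed capacity."""
    total = sum(requests.values())
    scale = min(1.0, capacity / total)
    return {vm: share * scale for vm, share in requests.items()}

# vm1 is busy (util 0.9), vm2 is idle (util 0.3); both currently hold 0.5:
requests = {"vm1": local_controller(0.9, 0.5), "vm2": local_controller(0.3, 0.5)}
alloc = global_controller(requests)
assert sum(alloc.values()) <= 1.0 + 1e-9     # never exceeds host capacity
assert alloc["vm1"] > alloc["vm2"]           # the busy VM gets the larger share
```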

However, the use of CMP is far from well optimized. One can design a virtual hierarchy on a CMP in data centers. One can also consider a VM-aware power budgeting scheme using multiple integrated managers to achieve better power management. Consequently, one must address the trade-off between power saving and data-center performance.
3.5.2 Virtual Storage Management
The term "storage virtualization" was widely used before the renaissance of system virtualization, but it had a different meaning: previously, storage virtualization was largely used to describe the aggregation and repartitioning of disks at very coarse time scales for use by physical machines. In system virtualization environments, the stored data can be classified into two categories: VM images and application data.
The most important aspects of system virtualization are encapsulation and isolation. Only one operating system runs in each VM, while many applications run on that operating system. To achieve encapsulation and isolation, both the system software and the hardware platform, such as CPUs and chipsets, have been rapidly updated; storage, however, is lagging, and the storage systems become the main bottleneck of VM deployment.

The virtualization layer between the hardware and the operating systems complicates storage operations. On one hand, the guest OS behaves as though it operates a real hard disk, yet it cannot access the disk directly. On the other hand, many guest OSes contest the hard disk when many VMs are running on a single physical machine.

In addition, the storage primitives used by VMs are not nimble
...
In data centers, there are often thousands of VMs, which cause the VM images to
become flooded
...
The
main purposes of their research are to make management easy while enhancing performance and reducing the amount of storage occupied by the VM images
...
Content Addressable Storage (CAS) is a solution to reduce the
total size of VM images, and therefore supports a large set of VM-based systems in data centers
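The core CAS idea can be sketched in a few lines (an illustrative toy with assumed fixed-size blocks and an in-memory store, not any real system's format):

```python
# Illustrative content-addressable storage (CAS) for VM images: each
# image is split into fixed-size blocks, every block is stored once
# under the hash of its content, and an image becomes a list of hashes.

import hashlib

BLOCK_SIZE = 4096
store: dict[str, bytes] = {}  # content hash -> unique block

def put_image(image: bytes) -> list[str]:
    """Store an image; identical blocks across images are kept only once."""
    recipe = []
    for i in range(0, len(image), BLOCK_SIZE):
        block = image[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # deduplicate on content address
        recipe.append(digest)
    return recipe

def get_image(recipe: list[str]) -> bytes:
    return b"".join(store[d] for d in recipe)

base = b"A" * (4 * BLOCK_SIZE)                        # common guest OS image
clone = b"A" * (3 * BLOCK_SIZE) + b"B" * BLOCK_SIZE   # clone, one changed block
r1, r2 = put_image(base), put_image(clone)
assert get_image(r2) == clone
print(len(store))  # 2 unique blocks stored instead of 8
```

Because cloned VM images share almost all of their blocks with a common template, the store grows with unique content rather than with the number of VMs.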
Parallax is a distributed storage system customized for virtualization environments: storage features traditionally implemented inside high-end storage hardware are relocated into a federation of storage appliance VMs. These storage VMs share the same physical hosts as the VMs that they serve. Figure 3.30 provides an overview of the Parallax system architecture. For each physical machine, Parallax customizes a special storage appliance VM, which provides a virtual disk to each client VM on the same physical machine, carved out of a large common shared physical disk. The architecture of Parallax is scalable and especially suitable for use in cluster-based environments. Figure 3.26 shows a high-level view of the structure of a Parallax-based cluster.
172   CHAPTER 3 Virtual Machines and Virtualization of Clusters and Data Centers

[Figure 3.26: A Parallax-based cluster. Storage functionality such as snapshot facilities that are traditionally implemented within storage devices is pushed out into per-host storage appliance VMs, which interact with a simple shared block device and may also use local physical disks. On each physical host, the storage appliance VM runs beside the client VMs on the VMM (Xen), under a storage administration domain. (Courtesy of D. Meyer, et al. [43])]

The storage appliance VM also allows functionality that is currently implemented within data-center hardware to be pushed out and implemented on individual hosts. Parallax itself runs as a user-level application in the storage appliance VM and serves virtual disk images (VDIs) to the client VMs. A VDI is a single-writer virtual disk which may be accessed in a location-transparent manner from any of the physical hosts in the Parallax cluster.

Parallax uses Xen's block tap driver to handle block requests, and it is implemented as a tapdisk library. In the Parallax system, it is the storage appliance VM that connects to the physical hardware for block and network access; as shown in Figure 3.30, the physical device drivers are included in the storage appliance VM.
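The single-writer VDI abstraction is, at its core, a per-VM block map layered over shared storage. A deliberately simplified sketch of that idea (hypothetical classes, not Parallax's actual on-disk format or block tap interface):

```python
# Toy virtual disk: reads fall through to a shared, read-only base
# image; writes go copy-on-write into the VM's private block map, so
# many VMs can share one physical disk without interfering.

class VirtualDisk:
    def __init__(self, base: dict[int, bytes]):
        self.base = base                    # shared base image (block_no -> data)
        self.delta: dict[int, bytes] = {}   # this VM's private writes

    def read(self, block_no: int) -> bytes:
        if block_no in self.delta:          # VM-local copy wins
            return self.delta[block_no]
        return self.base.get(block_no, b"\x00" * 512)

    def write(self, block_no: int, data: bytes) -> None:
        self.delta[block_no] = data         # copy-on-write: base untouched

base_image = {0: b"boot" + b"\x00" * 508}
vm1, vm2 = VirtualDisk(base_image), VirtualDisk(base_image)
vm1.write(0, b"vm1!" + b"\x00" * 508)
assert vm2.read(0).startswith(b"boot")  # vm2 still sees the shared base
assert vm1.read(0).startswith(b"vm1!")  # vm1 sees only its private copy
```

Because the block map is just data, snapshots and location-transparent access reduce to copying or sharing the map rather than the disk contents themselves.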


3.5.3 Cloud OS for Virtualized Data Centers

Data centers must be virtualized to serve as cloud providers. Table 3.6 summarizes four virtual infrastructure (VI) managers and OSes. Nimbus, Eucalyptus,
Table 3.6 VI Managers and Operating Systems for Virtualizing Data Centers [9]

Nimbus (Linux; Apache v2 license)
  Resources virtualized, web link: VM creation, virtual cluster (www ... org/)
  Client API, language: EC2 WS, WSRF, CLI
  Hypervisors used: Xen, KVM
  Public cloud interface: EC2
  Special features: virtual networks

Eucalyptus (Linux, BSD)
  Resources virtualized, web link: virtual networking (Example 3.12); www.eucalyptus.com/ [45]
  Client API, language: EC2 WS, CLI
  Hypervisors used: Xen, KVM
  Public cloud interface: EC2
  Special features: virtual networks

OpenNebula (Linux; Apache v2 license)
  Resources virtualized, web link: virtual networks and VM management; www.opennebula.org/
  Client API, language: XML-RPC, CLI, Java
  Hypervisors used: Xen, KVM
  Public cloud interface: EC2, Elastic Host
  Special features: virtual networks, dynamic provisioning

vSphere 4 (Linux, Windows; proprietary)
  Resources virtualized, web link: data-center virtualization (Example 3.13); www.vmware.com/products/vsphere/ [66]
  Client API, language: CLI, GUI, Portal, WS
  Hypervisors used: VMware ESX, ESXi
  Public cloud interface: VMware vCloud partners
  Special features: data protection, vStorage, VMFS, DRM, HA
and OpenNebula are all open source software available to the general public; only vSphere 4 is a proprietary product.

These VI managers are used to create VMs and aggregate them into virtual clusters as elastic resources. OpenNebula has additional features to provision dynamic resources and make advance reservations. vSphere 4 uses the hypervisors ESX and ESXi from VMware. We will study Eucalyptus and vSphere 4 in the next two examples.
Example 3.12 Eucalyptus for Virtual Networking of Private Cloud

Eucalyptus is an open source software system (Figure 3.27) for building private clouds. The system primarily supports virtual networking and the management of VMs; virtual storage is not supported. The system also supports interaction with other private clouds or public clouds over the Internet.

The designers of Eucalyptus [45] implemented each high-level system component as a stand-alone web service.


[Figure 3.27: Eucalyptus for building private clouds. A Cloud Manager (CM) on the public network oversees Group Managers (GMs), one per cluster (Clusters A and B); each GM manages a set of Instance Managers (IMs) over the cluster's private network. (Courtesy of D. Nurmi, et al. [45])]

Furthermore, the designers leverage existing web-service features such as WS-Security policies for secure communication between components. The three resource managers in Figure 3.27 are specified below:

• Instance Manager (IM) controls the execution, inspection, and terminating of VM instances on the host where it runs.
• Group Manager (GM) gathers information about the VMs in its cluster, schedules VM execution on specific instance managers, and manages the virtual instance network.
• Cloud Manager (CM) is the entry point into the cloud for users and administrators. It gathers resource information, makes high-level scheduling decisions, and implements them through the group managers.

In terms of functionality, Eucalyptus works like the AWS APIs. It does provide a storage API to emulate the Amazon S3 API for storing user data and VM images. CLI and web portal services can be applied with Eucalyptus.
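The division of labor among the Instance, Group, and Cloud Managers can be sketched as a toy three-level scheduler (hypothetical classes and placement policy, not Eucalyptus's actual web-service APIs):

```python
# Toy three-level delegation: the Cloud Manager picks a cluster, the
# Group Manager picks a host, and the Instance Manager runs the VM.

class InstanceManager:
    def __init__(self, host: str, free_slots: int):
        self.host, self.free_slots = host, free_slots

    def run_instance(self, image: str) -> str:
        assert self.free_slots > 0
        self.free_slots -= 1
        return f"{image} running on {self.host}"

class GroupManager:
    def __init__(self, ims: list[InstanceManager]):
        self.ims = ims

    def capacity(self) -> int:
        return sum(im.free_slots for im in self.ims)

    def schedule(self, image: str) -> str:
        im = max(self.ims, key=lambda im: im.free_slots)  # least-loaded host
        return im.run_instance(image)

class CloudManager:
    def __init__(self, gms: list[GroupManager]):
        self.gms = gms

    def request(self, image: str) -> str:
        gm = max(self.gms, key=lambda g: g.capacity())    # least-loaded cluster
        return gm.schedule(image)

cloud = CloudManager([
    GroupManager([InstanceManager("hostA1", 2), InstanceManager("hostA2", 1)]),
    GroupManager([InstanceManager("hostB1", 4)]),
])
print(cloud.request("ubuntu-image"))  # lands on hostB1, the emptiest cluster
```

The real system exchanges these calls as secured web-service requests, but the hierarchy of decisions is the same: cluster first, then host, then instance.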
Example 3.13 VMware vSphere 4 as a Commercial Cloud OS [66]

vSphere 4 offers a hardware and software ecosystem developed by VMware and released in April 2009. Figure 3.28 shows its overall architecture.
[Figure 3.28: vSphere 4, a cloud OS for virtualizing data centers, supporting existing and future applications. Under the VMware vCenter suite, vSphere 4 provides application services (availability: VMotion, Storage vMotion, high availability, fault tolerance, data recovery; security: vShield Zones, VMsafe; scalability: DRS, hot add) and infrastructure services (vCompute: ESX, ESXi, DRS; vStorage: VMFS, thin provisioning; vNetwork: distributed switch), spanning internal and external clouds. (Courtesy of VMware, April 2010 [72])]

vSphere is primarily intended to offer virtualization support and resource management of data-center resources in building private clouds.

The vSphere 4 is built with two functional software suites: infrastructure services and application services. The infrastructure services include three packages: vCompute (supported by the ESX and ESXi hypervisors and DRS), vStorage (supported by VMFS and thin provisioning), and vNetwork (the distributed switch). These packages interact with the hardware servers, disks, and networks in the data center.

The application services are also divided into three groups: availability (VMotion, Storage vMotion, high availability, fault tolerance, and data recovery), security, and scalability. The security package supports vShield Zones and VMsafe; the scalability package supports DRS and hot add. Interested readers should refer to the vSphere 4 web site for more details regarding these component software functions.


3.5.4 Trust Management in Virtualized Data Centers

A VMM changes the computer architecture: it provides a layer of software between the operating systems and the hardware to create one or more VMs on a single physical platform. A VM entirely encapsulates the state of the guest operating system running inside it; encapsulated machine state can be copied, shared over networks, and removed like a normal file, which poses a challenge to VM security. In general, a VMM can provide secure isolation, and a VM accesses hardware resources only through the control of the VMM, so the VMM is the base of the security of a virtual system. Normally, one VM is designated as a management VM with privileges to manage the other VMs and interface with the hardware.

Once a hacker successfully enters the VMM or management VM, the whole system is in danger. A subtle hazard arises from rollback: consider a VM rolled back to a point after a random number has been chosen, but before it has been used, and then resumed. The random number, which must be "fresh" for security purposes, is reused. With a stream cipher, two different plaintexts could then be encrypted under the same keystream, potentially exposing both if they contain redundancy. Noncryptographic protocols that rely on freshness are also at risk.
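One way to see the danger of a reused random number concretely: with a toy XOR stream cipher (illustrative only), resuming a rolled-back VM twice reuses the keystream, and the XOR of the two ciphertexts then leaks the XOR of the two plaintexts:

```python
# Demonstration of the rollback hazard: two messages encrypted under
# the same "fresh" keystream leak their XOR to any eavesdropper.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(16))          # chosen before the snapshot ...

# ... the VM is rolled back and resumed twice, reusing the keystream:
c1 = xor_bytes(b"transfer $100 AA", keystream)
c2 = xor_bytes(b"transfer $999 BB", keystream)

# The eavesdropper XORs the ciphertexts; the keystream cancels out.
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(b"transfer $100 AA", b"transfer $999 BB")
print(leak)  # nonzero exactly where the two plaintexts differ
```

Given any redundancy in the messages (here, the identical prefix), an attacker can recover much of both plaintexts from the leaked XOR alone.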


3.5.4.1 VM-Based Intrusion Detection

Intrusions are unauthorized access to a computer from local or network users, and intrusion detection is used to recognize such unauthorized access. An intrusion detection system (IDS) is built on operating systems, and is based on the characteristics of intrusion actions. A typical IDS is classified by its data source as a host-based IDS (HIDS) or a network-based IDS (NIDS). A HIDS can be implemented on the monitored system itself, so when that system is attacked, the HIDS faces the same risk. A NIDS is based on the flow of network traffic, but it cannot detect fake actions inside the host.

Virtualization-based intrusion detection can isolate guest VMs on the same hardware platform. Even if some VMs are invaded successfully, they never influence other VMs, which is similar to the way in which a NIDS operates. At the same time, a VM-based IDS can monitor access to system data and services, which avoids fake actions and possesses the merits of a HIDS.
A VM-based IDS can be implemented either as an independent process in each VM or as a high-privileged VM on the VMM; integrated with the VMM, it has the same privilege to access the hardware as the VMM itself. Figure 3.29 illustrates this organization.

[Figure 3.29: The architecture of a VM-based intrusion detection system: a policy engine, with a policy module and policy framework, monitors the guest OS and its applications in a VM through an OS interface library and PTrace over the virtual machine monitor. (Courtesy of Garfinkel and Rosenblum, 2002 [17])]

The VM-based IDS contains a policy engine and a policy module, which monitor events in the guest VMs through an OS interface library. It is difficult to predict and prevent all intrusions without delay, so analysis of intrusion actions after the fact is also essential.
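The policy-engine idea reduces to filtering a stream of guest events against rules held outside the guest. A minimal sketch (assumed event format and policies, not the interface of any real VM-based IDS):

```python
# Toy policy engine: guest events observed through the VMM are checked
# against policies kept outside the guest, so a compromised guest
# cannot tamper with the rules or the alerts.

from dataclasses import dataclass

@dataclass
class Event:
    vm: str
    kind: str      # e.g. "exec", "open", "connect"
    target: str

POLICIES = [
    lambda e: e.kind == "open" and e.target == "/etc/shadow",
    lambda e: e.kind == "connect" and not e.target.startswith("10."),
]

def inspect(events: list[Event]) -> list[Event]:
    """Return the events that violate at least one policy."""
    return [e for e in events if any(p(e) for p in POLICIES)]

alerts = inspect([
    Event("vm1", "open", "/home/user/notes.txt"),
    Event("vm1", "open", "/etc/shadow"),          # flagged
    Event("vm2", "connect", "203.0.113.9:4444"),  # flagged: external host
])
print([e.target for e in alerts])
```

A real introspection-based IDS must additionally reconstruct these events from raw hardware state, which is where the OS interface library in Figure 3.29 comes in.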

At the time of this writing, most computer systems use logs to analyze attack actions, but it is hard to ensure the credibility and integrity of a log. A VM-based IDS can keep its log service outside the monitored guest; thus, when an operating system is invaded by attackers, the log service should be unaffected.

Besides IDS, honeypots and honeynets are also prevalent in intrusion detection. They attract and provide a fake system view to attackers in order to protect the real system. A honeypot is a purposely defective system that simulates an operating system to cheat and monitor the actions of an attacker. A honeypot can be implemented as a physical machine or as a VM; in the virtual case, a guest operating system and the applications running on it constitute the honeypot VM, and the host OS and VMM must be protected against attacks launched from that VM.


Example 3.14 EMC Establishment of Trusted Zones for Protection of Virtual Clusters

EMC and VMware have jointly developed security middleware for trust management in distributed systems and private clouds. The concept of trusted zones was established as part of the virtual infrastructure. Figure 3.30 illustrates the concept of creating trusted zones for virtual clusters (multiple

[Figure 3.30: Techniques for establishing trusted zones for virtual cluster insulation and VM isolation. Security functions on the left include federating identities with public clouds, controlling and isolating VMs in the virtual infrastructure, insulating the infrastructure from malware, Trojans, and cybercriminals, and segregating and controlling user access; functions on the right include insulating information from other tenants, data loss prevention, and enabling an end-to-end view of security events and compliance across infrastructures. The tenants' virtual infrastructures (each running APP/OS stacks) sit above the provider's physical infrastructure. (Courtesy of Nick, EMC [40])]

applications and OSes for each tenant) provisioned in separate virtual environments. The virtual clusters or infrastructures are shown in the upper boxes for two tenants, above the provider's physical infrastructure.

The arrowed boxes on the left, and the brief descriptions between the arrows and the zoning boxes, are security functions and actions taken at the four levels from the users to the providers. The arrowed boxes on the right are those functions and actions applied between the tenant environments, the provider, and the global communities. The main innovation here is to establish trust zones among the virtual clusters, enabling an end-to-end view of security events and compliance across the tenants. We will discuss security and trust issues in Chapter 7 when we study clouds in more detail.
3.6 BIBLIOGRAPHIC NOTES AND HOMEWORK PROBLEMS

Good reviews on virtualization technology can be found in Rosenblum, et al. [53,54] and Smith and Nair [58,59]. White papers at [71,72] report on virtualization products at VMware, including the vSphere 4 cloud operating system. Qian, et al. [50] offer additional coverage. ISA-level virtualization and binary translation techniques are treated in [3], in Smith and Nair [58], and in Buyya, et al. [9]. Intel's support of hardware-level virtualization is treated in [62]. Some materials through Section 3.5 are taken from Buyya, et al. [9].

For GPU computing on VMs, readers are referred to [57]. Pentium virtualization is treated in [53]. I/O virtualization is also treated in the literature. Sun Microsystems reports on OS-level virtualization in [64]. Virtualization in Windows NT machines is described in [77,78]. The x86 host virtualization is treated in [2]. The book [24] has good coverage of hardware support for virtualization. In particular, Duke's COD is reported in [12] and Purdue's VIOLIN in [25,55].

Hardware-level virtualization is treated in [1,7,8,13,41,47,58,73]. Wells, et al. [74] argue for multicore virtualization.

The integration of multi-core processors and virtualization on future CPU/GPU chips poses a very hot research area, called asymmetric CMP, as reported in [33,39,63,66,74]. The maturity of this co-design CMP approach will greatly impact the future development of HPC and HTC systems. Power consumption in virtualized data centers is treated in [44,46]. Kochut gives an analytical model for virtualized data centers [32]. For virtual storage, readers are referred to the literature [27,36,43,48,76,79]. Eucalyptus is reported in [45], vSphere in [72], and Parallax in [43].


Acknowledgments

This chapter is coauthored by Zhibin Yu of Huazhong University of Science and Technology (HUST), China, and by Kai Hwang of USC. Dr. Hai Jin and Dr. … of HUST also contributed to this work. The authors would like to thank Dr. … for assistance with this chapter.


References
[1] Advanced Micro Devices, AMD secure virtual machine architecture reference manual.
[2] K. Adams, O. Agesen, A comparison of software and hardware techniques for x86 virtualization, in: Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems, San Jose, CA, October 2006.
[3] V. Adve, C. Lattner, et al., LLVA: A low-level virtual instruction set architecture, in: Proceedings of the 36th International Symposium on Microarchitecture (MICRO-36), 2003.
[4] J. … Silva, A. … Silva, J. …, … .
[5] P. …, M. Gaggero, et al., …, pp. 9–12.
[6] H. Andre Lagar-Cavilla, J. Whitney, A. Scannell, et al., SnowFlock: rapid virtual machine cloning for cloud computing, in: Proceedings of EuroSys, 2009.
[7] P. Barham, B. Dragovic, K. Fraser, et al., Xen and the art of virtualization, in: Proceedings of SOSP, 2003, pp. 164–177.
[8] E. Bugnion, S. Devine, M. Rosenblum, Disco: running commodity OSes on scalable multiprocessors, in: Proceedings of SOSP, 1997.
[9] R. Buyya, J. Broberg, A. Goscinski (Eds.), Cloud Computing: Principles and Paradigms, Wiley, 2011.
[10] V. …, R. Illikkal, R. …, … .
[11] R. Chandra, N. Zeldovich, C. Sapuntzakis, M.S. Lam, The Collective: a cache-based system management architecture, in: Proceedings of NSDI, 2005, pp. 259–272.
[12] J. Chase, L. Grit, D. Irwin, J. Moore, S. Sprenkle, Dynamic virtual cluster in a grid site manager, in: IEEE Int'l Symp. on High Performance Distributed Computing (HPDC), 2003.
[13] D. Chisnall, The Definitive Guide to the Xen Hypervisor, Prentice Hall, 2007.
[14] C. Clark, K. Fraser, S. Hand, et al., Live migration of virtual machines, in: Proceedings of the Second Symposium on Networked Systems Design and Implementation (NSDI '05), 2005, pp. 273–286.
[15] Y. …, Dai, et al., … .
[16] E. …, Kistler, R. …, … .
[17] J. …, On the potential of NoC virtualization for multicore chips, in: IEEE Int'l Conf. …, pp. 801–807.
[18] T. Garfinkel, M. Rosenblum, A virtual machine introspection based architecture for intrusion detection, in: Proceedings of the Network and Distributed Systems Security Symposium (NDSS), 2003.
[19] L. Grit, D. Irwin, A. Yumerefendi, J. Chase, Virtual machine hosting for networked clusters: building the foundations for autonomic orchestration, in: First International Workshop on Virtualization Technology in Distributed Computing (VTDC), November 2006.
[20] D. Gupta, S. Lee, M. Vrable, et al., Difference engine: harnessing memory redundancy in virtual machines, in: Proceedings of OSDI, 2008, pp. 309–322.
[21] M. Hines, K. Gopalan, Post-copy based live virtual machine migration using adaptive pre-paging and dynamic self-ballooning, in: Proceedings of VEE, 2009, pp. 51–60.
[22] T. Hirofuchi, H. Nakada, et al., A live storage migration mechanism over WAN and its performance evaluation, in: Proceedings of the 4th International Workshop on Virtualization Technologies in Distributed Computing, ACM Press, Barcelona, Spain, 15 June 2009.
[23] K. Hwang, D. Li, Trusted cloud computing with secure resources and data coloring, IEEE Internet Comput. (September/October) (2010) 30–39.
[25] X. Jiang, D. Xu, VIOLIN: virtual internetworking on overlay infrastructure, in: Proceedings of the International Symposium on Parallel and Distributed Processing and Applications, 2004.
[26] H. Jin, L. Deng, S. …, X. Shi, X. …, … .
[27] K. Jin, E. Miller, The effectiveness of deduplication on virtual machine disk images, in: Proceedings of SYSTOR, The Israeli Experimental Systems Conference, 2009.
[28] S. Jones, A. Arpaci-Dusseau, R. Arpaci-Dusseau, Geiger: monitoring the buffer cache in a virtual machine environment, in: ACM ASPLOS, San Jose, CA, October 2006.
[29] F. …, … .
[30] D. …, Kim, J. …, … .
[31] A. Kivity, et al., KVM: the Linux virtual machine monitor, in: Proceedings of the Linux Symposium, Ottawa, Canada, 2007, p. 225.
[32] A. Kochut, … .
[33] R. Kumar, et al., Heterogeneous chip multiprocessors, IEEE Comput. 38 (November) (2005) 32–38.
[34] … Kyrre, Managing large networks of virtual machines, in: Proceedings of the 20th Large Installation System Administration Conference, 2006.
[35] J. Lange, P. Dinda, Transparent network services via a virtual traffic layer for virtual machines, in: Proceedings of High Performance Distributed Computing, ACM Press, Monterey, CA, 2007.
[36] A. Liguori, E. Van Hensbergen, Experiences with content addressable storage and virtual disks, in: Proceedings of the Workshop on I/O Virtualization (WIOV '08), 2008.
[37] H. Liu, H. Jin, X. Liao, L. Hu, C. Yu, Live migration of virtual machine based on full system trace and replay, in: Proceedings of the 18th International Symposium on High Performance Distributed Computing (HPDC '09), 2009.
[38] A. Mainwaring, D. Culler, Design challenges of virtual networks: fast, general-purpose communication, in: Proceedings of the Seventh ACM SIGPLAN Symposium on Principles and Practices of Parallel Programming, 1999.
[39] M. Marty, M. Hill, Virtual hierarchies to support server consolidation, in: Proceedings of the International Symposium on Computer Architecture (ISCA), 2007.
[40] M. …, Gupta, A. …, M. …, … .
[41] D. Menascé, Virtualization: concepts, applications, and performance modeling, in: Proceedings of the 31st International Computer Measurement Group Conference, 2005.
[42] A. Menon, J. Renato Santos, Y. Turner, et al., Diagnosing performance overheads in the Xen virtual machine environment, in: Proceedings of VEE, 2005.
[43] D. Meyer, et al., Parallax: virtual disks for virtual machines, in: Proceedings of EuroSys, 2008.
[44] J. Nick, Journey to the private cloud: security and compliance, technical presentation by EMC Visiting Team, May 25, Tsinghua University, Beijing, 2010.
[45] D. Nurmi, et al., The Eucalyptus open-source cloud computing system, in: Proceedings of CCGrid, 2009, pp. 124–131.
[46] P. Padala, et al., Adaptive control of virtualized resources in utility computing environments, in: Proceedings of EuroSys, 2007.
[47] L. Peterson, A. Bavier, M. Fiuczynski, S. Muir, Experiences building PlanetLab, in: Proceedings of OSDI, 2006.
[48] B. Pfaff, T. Garfinkel, M. Rosenblum, Virtualization aware file systems: getting beyond the limitations of virtual disks, in: Proceedings of NSDI, 2006, pp. 353–366.
[49] E. Pinheiro, R. Bianchini, E. Carrera, T. Heath, Dynamic cluster reconfiguration for power and performance, in: L. Benini (Ed.), Compilers and Operating Systems for Low Power, Kluwer, 2003.
[50] H. Qian, … Miller, et al., … .
[51] H. Raj, I. Ganev, K. Schwan, et al., Self-virtualized I/O: high performance, scalable I/O virtualization in multi-core systems, Technical Report GIT-CERCS-06-02, Georgia Tech, www.cercs.gatech.edu/tech-reports/tr2006/git-cercs-06-02.pdf.
[52] J. Robin, C. Irvine, Analysis of the Intel Pentium's ability to support a secure virtual machine monitor, in: Proceedings of the 9th USENIX Security Symposium, 2000.
[53] M. Rosenblum, The reincarnation of virtual machines, ACM Queue 2 (5) (2004).
[54] M. Rosenblum, T. Garfinkel, Virtual machine monitors: current technology and future trends, IEEE Comput. 38 (5) (2005) 39–47.
[55] P. Ruth, et al., Autonomic live adaptation of virtual computational environments in a multi-domain infrastructure, in: Proceedings of the IEEE International Conference on Autonomic Computing (ICAC), 2006.
[56] C. Sapuntzakis, R. Chandra, B. Pfaff, et al., Optimizing the migration of virtual computers, in: Proceedings of the 5th Symposium on Operating Systems Design and Implementation, Boston, 9–11 December 2002.
[57] L. Shi, H. Chen, J. Sun, vCUDA: GPU accelerated high performance computing in virtual machines, in: Proceedings of the IEEE International Symposium on Parallel and Distributed Processing, 2009.
[58] J. Smith, R. Nair, Virtual Machines: Versatile Platforms for Systems and Processes, Morgan Kaufmann, 2005.
[59] J. Smith, R. Nair, The architecture of virtual machines, IEEE Comput. 38 (5) (2005).
[60] Y. …, Wang, et al., … .
[61] B. …, Keahey, I. …, … .
[62] M. …, Whalley, et al., …, … Syst. … 42 (1) (2008) 94–95.
[63] M. Suleman, Y. …, Sprangle, A. …, … .
[64] Sun Microsystems, … .
[65] SWsoft, Inc., OpenVZ user's guide, www.openvz.org/….pdf, 2005.
[66] … Triviño, et al., …, Microprocess. Microsyst. 35 (2010), www.elsevier.com/… .
[67] J. …, Zhao, et al., … .
[68] R. Uhlig, G. Neiger, et al., Intel virtualization technology, IEEE Comput. 38 (5) (2005) 48–56.
[69] H. …, Tran, Autonomic virtual resource management for service hosting platforms, CLOUD (2009).
[70] A. Verma, P. Ahuja, A. Neogi, pMapper: power and migration cost aware application placement in virtualized systems, in: Proceedings of the 9th International Middleware Conference, 2008.
[71] VMware (white paper), …, www.vmware.com/….pdf.
[72] VMware (white paper), The vSphere 4 Operating System for Virtualizing Datacenters, news release, February 2009, www.vmware.com/products/vsphere/, April 2010.
[73] J. …, Walters, et al., … .
[74] P. Wells, K. Chakraborty, G. Sohi, Dynamic heterogeneity and the need for multicore virtualization, ACM SIGOPS Oper. Syst. Rev. (2009).
[75] T. …, Levin, P. …, … .
[76] J. …, Chen, W. …, … .
[77] Y. …, D. …, … .
[78] Y. …, Guo, et al., … .
[79] M. …, Zhang, et al., … .


HOMEWORK PROBLEMS

Problem 3.1
Briefly compare the implementation levels of virtualization studied in this chapter. Highlight the key points and identify the distinctions in the different approaches. Also identify example systems implemented at each level.

Problem 3.2
Explain the differences between hypervisor and para-virtualization, and give one example VMM (virtual machine monitor) that was built in each of the two categories.

Problem 3.3
Install the VMware Workstation on a Windows XP or Vista personal computer or laptop, and then install Red Hat Linux and Windows XP in the VMware Workstation. Write an installation and configuration guide for the VMware Workstation, Red Hat Linux, and Windows XP systems, including any troubleshooting tips.

Problem 3.4
Download a new kernel package from www.kernel.org. Compile it in Red Hat Linux in the VMware Workstation installed in Problem 3.3 and in Red Hat Linux on a real machine. Compare the time required for the two compilations. Which one takes longer, and why?

Problem 3.5
Install Xen on a Red Hat Linux machine using two methods: from the binary code and from the source code. Describe the dependencies of utilities and packages along with troubleshooting tips.

Problem 3.6
Install Red Hat Linux on the Xen you installed in Problem 3.5. Download nbench from www.tux.org/~mayer/linux/bmark.html. Run nbench on the VM using Xen and on a real machine, and compare the results.

Problem 3.7
Experiment with automated VM deployment. The Google vm-deployment tool can be downloaded from http://code.google.com/p/google-vm-deployment/.

Problem 3.8
Describe the approaches used to exchange data among the domains of Xen, and design experiments to compare the performance of data communication between the domains. This problem is designed to familiarize you with the Xen programming environment. It may require a longer period of time to port the Xen code, implement the application code, perform the experiments, collect the performance data, and interpret the results.

Problem 3.9
Build your own LAN by using the VMware Workstation. The topology of the LAN is shown in Figure 3.31: Machine A (192.203.168.2) sits on LAN 1, Machine B on LAN 2, and a router (external gateway 192.204.168.1) connects the two LANs. Configure the network so that Machines A and B can communicate with each other.

[Figure 3.31: The topology of the virtual LAN to be built: Machine A (192.203.168.2) on LAN 1, a router with external gateway 192.204.168.1, and Machine B on LAN 2.]

Problem 3.10
Study the relevant papers, cited in Section 3.6, on asymmetric or heterogeneous chip multiprocessors (CMPs). Write a study report to survey the area, identify the key research issues, review the current development, and identify the open research challenges lying ahead.

Problem 3.11
Study the relevant papers [17,28,30,66] on network on chip (NoC) and virtualization of NoC resources for multi-core CMP design and applications. Repeat Problem 3.10 with a survey report after the research study.

Problem 3.12
Hardware and software resource deployment is often complicated and time-consuming. Automatic VM deployment can significantly reduce the time to instantiate new services or reallocate resources. Visit the following web site for more information: www.systemimager.org (see the page Automating_Xen_VM_deployment_with_SystemImager), and report your experience with automated deployment.

Problem 3.13
Design an experiment to analyze the performance of Xen live migration for I/O read-intensive applications. The performance merits include the time consumed by the precopy phase, the downtime, the time used by the pull phase, and the total migration time.

Problem 3.14
Design an experiment to test the performance of Xen live migration for I/O write-intensive applications. Compare the results with those from Problem 3.13.

Problem 3.15
Design a VM execution environment for grid computing. The environment should enable grid users and resource providers to use services that are unique to a VM-based approach to distributed computing.

Problem 3.16
Design a large-scale virtual cluster system. This problem may require three students to work together for a semester. Assume that users can create multiple VMs at one time, and can also manipulate and configure multiple VMs at the same time. Common software such as OSes or libraries are preinstalled as templates; these templates enable users to create a new execution environment rapidly.

Problem 3.17
Figure 3.32 shows another VIOLIN adaptation scenario for changes in virtual environments, with four VIOLIN environments running in two cluster domains. Trace the three steps of VIOLIN job execution and discuss the gains in resource utilization after live migration of the virtual execution environment in the two cluster domains.

[Figure 3.32: A VIOLIN adaptation scenario: four VIOLIN environments (VIOLIN 1 through VIOLIN 4) in two cluster domains (Domain 1 and Domain 2), shown over time without adaptation and with adaptation; VIOLIN 1 runs a less CPU-demanding application. (Courtesy of P. Ruth, et al. [55])]

Problem 3.18
Based on the papers by Wells, et al. [74] and by Marty and Hill [39], answer the following two questions:
a. …
b. …
