Module 2

3.1 Implementation Levels of Virtualization

 Virtualization is a computer architecture technology that multiplexes multiple
virtual machines (VMs) on the same hardware machine.
 Its purpose is to enhance resource sharing and improve computer performance,
resource utilization, and application flexibility.
 Hardware resources (CPU, memory, I/O devices) or software resources (OS,
libraries) can be virtualized in various layers.
 The technology has been revitalized due to the sharp increase in demand for
distributed and cloud computing.
 The core idea is to separate hardware from software for better system efficiency.

3.1.1 Levels of Virtualization Implementation

 After virtualization, different user applications with their own guest operating
systems (OS) can run on the same hardware independently of the host OS.
 This is achieved by adding a virtualization layer, known as a hypervisor or
Virtual Machine Monitor (VMM), which virtualizes the physical hardware into
virtual resources for VMs.
 Virtualization layers can be implemented at various levels: instruction set
architecture (ISA), hardware, operating system, library support, and application.

3.1.1.1 Instruction Set Architecture Level

 Virtualization at this level is done by emulating a given guest ISA on the ISA of
the host machine.
 This allows legacy binary code written for various processors to run on new
hardware hosts.
 Emulation typically uses slow code interpretation, so dynamic binary translation
is preferred for better performance (a sketch of both techniques follows this list).
 A virtual instruction set architecture (V-ISA) requires a processor-specific
software translation layer in the compiler.
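
To make the performance contrast concrete, here is a minimal Python sketch (an
illustration, not from the text): a plain interpreter that decodes every guest
instruction each time it executes, and a dynamic binary translator that translates a
block of guest instructions once, caches it, and reuses it. The three-field
instruction format and the opcode names are invented for the example.

    # Hypothetical guest program: each instruction is (opcode, arg1, arg2).
    PROGRAM = [
        ("li",  "r0", 0),      # r0 = 0
        ("li",  "r1", 1),      # r1 = 1
        ("add", "r0", "r1"),   # r0 += r1
        ("add", "r0", "r1"),   # r0 += r1
    ]

    def interpret(program):
        """Decode-and-dispatch interpretation: every instruction is decoded
        each time it runs, which is why pure emulation is slow."""
        regs = {}
        for op, a, b in program:
            if op == "li":
                regs[a] = b
            elif op == "add":
                regs[a] += regs[b]
        return regs

    def translate(program):
        """Dynamic binary translation (sketched): translate the block once
        into host 'code' (here a list of Python closures) and cache it."""
        steps = []
        for op, a, b in program:
            if op == "li":
                steps.append(lambda regs, a=a, b=b: regs.__setitem__(a, b))
            elif op == "add":
                steps.append(lambda regs, a=a, b=b: regs.__setitem__(a, regs[a] + regs[b]))
        def run(regs):
            for step in steps:      # no per-instruction decoding on reuse
                step(regs)
            return regs
        return run

    block = translate(PROGRAM)      # translated once...
    print(interpret(PROGRAM))       # {'r0': 2, 'r1': 1}
    print(block({}))                # ...and reused without re-decoding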

3.1.1.2 Hardware Abstraction Level

 Performed directly on top of the bare hardware.


 Generates a virtual hardware environment for VMs and manages the underlying
hardware through virtualization.
 Aims to increase hardware utilization by multiple concurrent users.
 Implemented in IBM VM/370 in the 1960s and more recently with the Xen
hypervisor for x86 machines.

3.1.1.3 Operating System Level

 An abstraction layer between the traditional OS and user applications.
 Creates isolated containers on a single physical server, with the OS instances
utilizing the hardware and software in data centers.
 Containers behave like real servers, commonly used for virtual hosting
environments and consolidating server hardware.

3.1.1.4 Library Support Level

 Applications use APIs from user-level libraries, making the interface a
candidate for virtualization.
 Virtualization is achieved by controlling the communication link between
applications and the system via API hooks.
 Examples include WINE (Windows applications on UNIX) and vCUDA
(leveraging GPU acceleration for VMs).
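
A rough illustration of the API-hook idea (not taken from WINE or vCUDA; the
function names and path mapping below are invented): the virtualization layer
interposes on a library call, remaps its arguments, and forwards it to the real
implementation on the host.

    import functools

    # Pretend this is the "host" library that really does the work.
    def host_open_file(path):
        return f"host handle for {path}"

    # The guest application only knows this API name.
    GUEST_API = {}

    def hook(api_name):
        """Install an interception wrapper for a guest-visible API call."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                print(f"[hook] intercepted {api_name}{args}")
                return fn(*args, **kwargs)   # remap to the host implementation
            GUEST_API[api_name] = wrapper
            return wrapper
        return decorator

    @hook("OpenFile")
    def open_file(path):
        # Remap a guest-style path to the host's file system layout.
        host_path = path.replace("C:\\", "/home/guest/c_drive/").replace("\\", "/")
        return host_open_file(host_path)

    # The guest calls the API it knows; the hook controls the communication link.
    print(GUEST_API["OpenFile"]("C:\\data\\report.txt"))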

3.1.1.5 User-Application Level

 Virtualizes an application as a VM, also known as process-level virtualization.


 The virtualization layer acts as an application program on top of the OS,
exporting an abstraction of a VM that runs programs compiled for a specific
abstract machine.
 Examples include Microsoft .NET CLR and Java Virtual Machine (JVM).
 Other forms include application isolation, sandboxing, or streaming, which
wrap applications in an isolated layer for easier distribution and removal.

3.1.1.6 Relative Merits of Different Approaches

 A table compares the ISA, hardware, OS, runtime library, and user application
levels in terms of Higher Performance, Application Flexibility, Implementation
Complexity, and Application Isolation.
 Hardware-level and OS-level support generally yield the highest performance.
 The hardware and application levels are the most expensive to implement.
 User-level isolation is the most difficult to achieve, while implementation at the
ISA level offers the best application flexibility.

3.1.2 VMM Design Requirements and Providers

 Hardware-level virtualization uses a Virtual Machine Monitor (VMM) between
real hardware and traditional OSes to manage hardware resources.
 The VMM captures hardware access requests and can virtualize components
like the CPU into multiple virtual copies, allowing multiple OSes to run
simultaneously.
 Three VMM requirements: provide an environment identical to the original
machine, incur only minor speed decreases, and maintain complete control of
system resources.
 Efficiency is crucial, requiring a dominant subset of virtual processor
instructions to execute directly on the real processor without VMM
intervention.
 VMM control includes allocating resources, preventing unauthorized access,
and potentially regaining allocated resources.
 Hardware-assisted virtualization modifies hardware to support VMM
requirements for processors not designed for virtualization.
 Table 3.2 compares VMMs/hypervisors like VMware Workstation/ESX Server,
Xen, and KVM based on host CPU, host OS, guest OS, and architecture.

3.1.3 Virtualization Support at the OS Level

 Cloud computing is emerging with VM technology, shifting hardware/staffing
costs to third parties.
 Challenges in cloud computing include dynamically using variable numbers of
physical machines/VM instances and slow instantiation of new VMs.

3.1.3.1 Why OS-Level Virtualization?

 Hardware-level VM initialization is slow, especially for thousands of VMs in a
cloud environment, and VM image storage is an issue due to repeated content.
 Full hardware virtualization also suffers from slow performance, low density,
and requires para-virtualization to modify the guest OS, sometimes even
hardware modification.
 OS-level virtualization offers a feasible solution by inserting a virtualization
layer inside an operating system to partition physical resources.
 It enables multiple isolated VMs (also called virtual execution environments,
Virtual Private Systems, or containers) within a single OS kernel, appearing as
real servers to users.
 These VEs have their own processes, file systems, user accounts, network
interfaces, but share the same OS kernel (single-OS image virtualization).
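
As a rough illustration of this single-kernel container model (a sketch, not
OpenVZ or Linux vServer code; it assumes a Linux host with util-linux installed,
root privileges, and a prepared root file system at the hypothetical path /srv/ve1
containing a shell and basic tools), kernel namespaces give the virtual environment
its own hostname, PID space, and mount/file tree while it still shares the host
kernel:

    import subprocess

    # Hypothetical root file system for the virtual environment (VE).
    ROOTFS = "/srv/ve1"

    # Give the VE its own UTS (hostname), PID, and mount namespaces, then
    # chroot into its private file tree. Everything still runs on the one
    # shared OS kernel, i.e. single-OS-image virtualization.
    subprocess.run([
        "unshare", "--uts", "--pid", "--mount", "--fork",
        "chroot", ROOTFS,
        "/bin/sh", "-c", "hostname ve1; echo shell PID inside VE: $$",
    ], check=True)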

3.1.3.2 Advantages of OS Extensions

 Compared to hardware-level virtualization, OS extensions offer minimal
startup/shutdown costs, low resource requirements, and high scalability.
 VMs and their host environment can synchronize state changes.
 These benefits are achieved because all OS-level VMs share a single OS kernel,
and the virtualization layer allows VMs to access host resources without
modification.
 These advantages help overcome slow VM initialization and lack of application
state awareness in hardware-level virtualization in cloud computing.

3.1.3.3 Disadvantages of OS Extensions


 The main disadvantage is that all OS-level VMs on a single container must use
the same type of guest operating system (same OS family, though different
distributions are possible). This limits flexibility for users who prefer different
OSes like Windows or Linux.
 Implementing OS-level virtualization requires creating isolated execution
environments based on a single OS kernel and redirecting VM access requests
to the VM's local resource partition.
 Duplicating common resources for each VM partition is resource-intensive and
often makes OS-level virtualization a less preferred choice compared to
hardware-assisted virtualization.

3.1.3.4 Virtualization on Linux or Windows Platforms

 Most reported OS-level virtualization systems are Linux-based, with Windows
platform support still at the research stage.
 The Linux kernel's abstraction layer allows software processes to operate on
resources without hardware details, but new hardware may require kernel
patching.
 Table 3.3 summarizes OS-level virtualization tools: Linux vServer and OpenVZ
for Linux platforms, and FVM for Windows NT platforms.
 OpenVZ, an open-source container-based solution on Linux, supports
virtualization by modifying the Linux kernel to create isolated Virtual Private
Servers (VPSes) with their own files, users, process trees, and virtual devices.
 OpenVZ's resource management system includes two-level disk allocation, a
two-level CPU scheduler, and a resource controller with about 20 parameters to
control VM resource usage.
 OpenVZ supports checkpointing and live migration, allowing a VM's state to be
saved to disk and restored on another physical machine quickly (a few seconds),
though network connection migration incurs a delay.
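
The two-level CPU scheduling mentioned above can be pictured with a small sketch
(not OpenVZ code; the container names, cpuunits values, and run queues are made
up): the first level picks which container receives the next time slice in
proportion to its share (approximated here with lottery-style selection), and the
second level picks a process inside that container.

    import random

    containers = {
        "ve101": {"cpuunits": 1000, "runqueue": ["httpd", "cron"]},
        "ve102": {"cpuunits": 3000, "runqueue": ["postgres"]},
    }

    def pick_container():
        """Level 1: choose a container in proportion to its cpuunits share."""
        eligible = {n: c for n, c in containers.items() if c["runqueue"]}
        r = random.uniform(0, sum(c["cpuunits"] for c in eligible.values()))
        for name, c in eligible.items():
            r -= c["cpuunits"]
            if r <= 0:
                return name
        return name

    def schedule():
        """Level 2: round-robin among the chosen container's processes."""
        ve = pick_container()
        proc = containers[ve]["runqueue"].pop(0)
        containers[ve]["runqueue"].append(proc)
        return ve, proc

    for _ in range(5):
        print(schedule())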

3.1.4 Middleware Support for Virtualization

 Library-level virtualization (user-level ABI or API emulation) creates execution
environments for running "alien" programs without creating a full VM for the
entire OS.
 Key functions include API call interception and remapping.
 Table 3.4 summarizes several library-level virtualization systems:
o WABI: Converts Windows system calls on x86 to Solaris system calls on
SPARC.
o Lxrun: System call emulator enabling Linux x86 applications on UNIX
systems like SCO OpenServer.
o WINE: Library for virtualizing x86 processors to run Windows applications
under Linux, FreeBSD, and Solaris.
o Visual MainWin: Compiler support for developing Windows applications with
Visual Studio to run on Solaris, Linux, and AIX hosts.
o vCUDA: Provides virtualization support for using general-purpose GPUs to run
data-intensive applications under a special guest OS.
 vCUDA Architecture: It virtualizes the CUDA library, installed on guest
OSes. When a CUDA application makes an API call, vCUDA intercepts and
redirects it to the CUDA API on the host OS. It uses a client-server model with
a vCUDA library (client) in the guest OS and a vCUDA stub (server) in the host
OS. The vCUDA library manages vGPUs, which abstract GPU structure, handle
device memory allocation, and store API flow. The stub receives and interprets
remote requests, creating execution contexts for API calls and returning results.
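
The client-server flow described above can be sketched as follows (a toy
illustration, not the real vCUDA wire protocol; the fake_gpu_add function, the
localhost port, and the use of Python's multiprocessing connections are
assumptions): the guest-side library packages the API call, sends it to the stub in
the host, and the stub executes it and returns the result.

    import threading
    import time
    from multiprocessing.connection import Listener, Client

    ADDRESS = ("localhost", 6000)

    def fake_gpu_add(xs, ys):
        """Stand-in for a real CUDA kernel running on the host's GPU."""
        return [x + y for x, y in zip(xs, ys)]

    def vcuda_stub():
        """Host-side server stub: receives remote API calls, runs them
        against the real CUDA library (faked here), returns the results."""
        with Listener(ADDRESS, authkey=b"vcuda") as listener:
            with listener.accept() as conn:
                api_name, args = conn.recv()
                if api_name == "vectorAdd":
                    conn.send(fake_gpu_add(*args))

    def vcuda_client_call(api_name, *args):
        """Guest-side client library: intercepts the CUDA API call and
        redirects it to the stub in the host OS."""
        with Client(ADDRESS, authkey=b"vcuda") as conn:
            conn.send((api_name, args))
            return conn.recv()

    threading.Thread(target=vcuda_stub, daemon=True).start()
    time.sleep(0.2)   # give the stub a moment to start listening
    print(vcuda_client_call("vectorAdd", [1, 2, 3], [10, 20, 30]))  # [11, 22, 33]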

3.3.2 CPU Virtualization

 A VM is a duplicate of an existing computer system in which most VM
instructions execute directly on the host processor for efficiency.
 Critical instructions (privileged, control-sensitive, and behavior-sensitive)
need careful handling.
 A CPU architecture is virtualizable if the VM's privileged and unprivileged
instructions can run in the CPU's user mode while the VMM runs in supervisor mode.
 When a VM's privileged instructions execute, they trap to the VMM, which
emulates them on the VM's behalf (a simplified sketch follows).

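A highly simplified sketch of this trap-and-emulate idea (illustrative only; the
opcode names, trap mechanism, and virtual CPU structure are invented): ordinary
instructions run directly, while privileged ones raise a trap that the VMM catches
and applies to the VM's virtual state rather than the real hardware.

    class PrivilegeTrap(Exception):
        """Raised when a guest runs a privileged instruction in user mode."""

    PRIVILEGED = {"write_cr3", "halt"}          # invented privileged opcodes

    def guest_execute(instr, vcpu):
        """'Direct execution' of a guest instruction in user mode."""
        if instr[0] in PRIVILEGED:
            raise PrivilegeTrap(instr)          # hardware would trap here
        op, reg, val = instr
        vcpu["regs"][reg] = val                 # ordinary instruction runs directly

    def vmm_run(program, vcpu):
        for instr in program:
            try:
                guest_execute(instr, vcpu)
            except PrivilegeTrap:
                # The VMM emulates the privileged instruction against the
                # VM's *virtual* state instead of the real hardware.
                if instr[0] == "write_cr3":
                    vcpu["cr3"] = instr[1]
                elif instr[0] == "halt":
                    vcpu["halted"] = True
        return vcpu

    vcpu = {"regs": {}, "cr3": None, "halted": False}
    program = [("mov", "r1", 42), ("write_cr3", 0x1000, None), ("halt", None, None)]
    print(vmm_run(program, vcpu))
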
3.3.3 Memory Virtualization

 Memory virtualization ensures that multiple VMs running on the same host can
access memory independently and securely.
 Shadow Page Tables: The VMM maintains shadow page tables for each guest
OS's page table to translate virtual-to-physical addresses, enabling direct lookup
by the TLB. This method can suffer from high performance overhead and
memory cost due to frequent updates when guest OSes change mappings.
 Nested Page Tables (Hardware-Assisted): Adds another indirection layer
where the MMU handles OS-defined virtual-to-physical translations, and then
the hypervisor translates these physical addresses to machine addresses using its
own page tables.
 Intel's Extended Page Table (EPT): A hardware-based technique to improve
memory virtualization efficiency by assisting the two-stage address translation,
along with Virtual Processor ID (VPID) to improve TLB use.
 When a virtual address needs translation, the CPU first checks the guest OS's
page tables and then uses EPT to convert guest physical addresses to host
physical addresses, utilizing the EPT TLB for speed.
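
The two-stage lookup can be illustrated with a toy page-table sketch (the page
size, table contents, and addresses are invented; real EPT walks multi-level tables
in hardware): the guest's page table maps guest-virtual pages to guest-physical
pages, and a second, hypervisor-owned table maps guest-physical pages to host
machine pages.

    PAGE_SIZE = 4096

    # Guest OS page table: guest virtual page -> guest "physical" page.
    guest_page_table = {0: 5, 1: 7}

    # Hypervisor's EPT-like table: guest physical page -> host machine page.
    ept = {5: 91, 7: 112}

    def translate(guest_virtual_addr):
        vpn, offset = divmod(guest_virtual_addr, PAGE_SIZE)
        guest_physical_page = guest_page_table[vpn]   # stage 1: guest OS mapping
        host_page = ept[guest_physical_page]          # stage 2: hypervisor mapping
        return host_page * PAGE_SIZE + offset

    addr = 1 * PAGE_SIZE + 0x2A        # a virtual address in guest page 1
    print(hex(translate(addr)))        # lands in host machine page 112

A shadow page table, by contrast, folds the two mappings into one (here it would
hold {0: 91, 1: 112}) so the hardware TLB can use it directly, which is why it
must be rebuilt whenever the guest OS changes its own mappings.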

3.4 Virtual Clusters and Resource Management

 A physical cluster is a collection of physically interconnected servers, while
virtual clusters are built with VMs installed on distributed servers from one or
more physical clusters.
 VMs in a virtual cluster are logically interconnected by a virtual network across
physical networks.
 Virtual cluster nodes can be physical or virtual machines, and their boundaries
can change dynamically as VM nodes are added, removed, or migrated.
 Key design issues include live migration of VMs, memory/file migrations, and
dynamic deployment of virtual clusters.

3.4.1.1 Fast Deployment and Effective Scheduling

 Fast Deployment: Involves quickly constructing and distributing software
stacks (OS, libraries, applications) to physical nodes and rapidly switching
runtime environments between virtual clusters.
 Virtual clusters should shut down or suspend quickly when a user finishes to
save resources for other VMs.
 Live Migration of VMs: Allows workloads to transfer between nodes.
However, potential overhead can negatively affect cluster utilization,
throughput, and QoS.
 Green Computing: Designing migration strategies to implement energy
efficiency without impacting cluster performance is a challenge.
 Load Balancing: Virtualization enables load balancing of applications in a
virtual cluster, using metrics like load index and user login frequency.
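
As a sketch of load balancing driven by a load index (the index formula, node
names, metric values, and threshold are invented for illustration), a cluster
manager could periodically compare the most and least loaded nodes and migrate a
VM between them only when the gap justifies the migration overhead.

    # Invented per-node metrics; a real load index might also weigh user
    # login frequency, memory pressure, or I/O as mentioned above.
    nodes = {
        "host-a": {"cpu": 0.92, "vms": ["vm1", "vm2", "vm3"]},
        "host-b": {"cpu": 0.35, "vms": ["vm4"]},
        "host-c": {"cpu": 0.40, "vms": ["vm5", "vm6"]},
    }

    IMBALANCE_THRESHOLD = 0.3   # migrate only if the gap is worth the overhead

    def load_index(node):
        return node["cpu"]      # toy index: CPU utilization only

    def rebalance(nodes):
        busiest = max(nodes, key=lambda n: load_index(nodes[n]))
        idlest = min(nodes, key=lambda n: load_index(nodes[n]))
        if load_index(nodes[busiest]) - load_index(nodes[idlest]) > IMBALANCE_THRESHOLD:
            vm = nodes[busiest]["vms"].pop()      # candidate for live migration
            nodes[idlest]["vms"].append(vm)
            return f"migrate {vm}: {busiest} -> {idlest}"
        return "cluster balanced enough; skip migration"

    print(rebalance(nodes))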

3.4.2 Virtual Cluster Management

 Various cluster management schemes can be enhanced by VM live migration
with minimal overhead.
 VMs can be live-migrated from one physical machine to another, and in case of
failure, one VM can be replaced by another.
 Virtual clustering is crucial in cloud computing, providing dynamic resources
that can be quickly provisioned on demand or after a node failure.
 Live VM migration aims for negligible downtime, lowest network bandwidth
consumption, and reasonable total migration time, without disrupting other
services due to resource contention.

3.4.2.1 Live Migration Steps (Pre-copy approach)

 Step 1: Pre-copying all of the memory pages: All memory pages of the VM
are copied from the source host to the destination host, while the VM continues
to run on the source. Modified pages are re-copied in subsequent rounds.
 Step 2: Iterative re-copying dirty pages: Modified memory pages are
iteratively copied until the dirty rate becomes low.
 Step 3: Suspend and copy last portion: The VM's execution is suspended, and
the last memory data, CPU, and network states are transferred. This "downtime"
should be as short as possible.
 Steps 4 and 5: Commit and activate new host: On the destination, the VM
reloads states, resumes execution, and network connections are redirected. The
original VM is removed from the source.
 Experimental results show very small migration overhead, critical for dynamic
cluster reconfiguration and disaster recovery in cloud computing.
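
The pre-copy rounds can be sketched as a small simulation (illustrative only; the
dirty-page behavior, round limit, and threshold are invented): memory is copied
while the VM keeps running and dirtying pages, and only the final, much smaller
dirty set is copied during the brief stop-and-copy downtime.

    import random

    random.seed(1)
    TOTAL_PAGES = 1000
    DIRTY_THRESHOLD = 20       # stop iterating once the dirty set is this small
    MAX_ROUNDS = 10

    def dirty_pages_during_copy(pages_copied):
        """Pretend the running VM dirties some pages while we copy; roughly
        proportional to how long the copy round takes."""
        return set(random.sample(range(TOTAL_PAGES), k=max(1, pages_copied // 10)))

    # Step 1: pre-copy every page while the VM keeps running on the source.
    to_copy = set(range(TOTAL_PAGES))
    for round_no in range(1, MAX_ROUNDS + 1):
        copied = len(to_copy)
        to_copy = dirty_pages_during_copy(copied)   # Step 2: pages dirtied meanwhile
        print(f"round {round_no}: copied {copied} pages, {len(to_copy)} dirtied")
        if len(to_copy) <= DIRTY_THRESHOLD:
            break

    # Step 3: suspend the VM and transfer the last dirty pages plus CPU and
    # network state; this is the downtime that should stay as short as possible.
    print(f"downtime copy: {len(to_copy)} pages + CPU/network state")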

3.4.3 Migration of Memory, Files, and Network Resources

 When a system migrates to another physical node, memory, file system, and
network resource migration are important considerations.

3.4.3.1 Memory Migration

 A crucial aspect of VM migration, involving moving the VM's memory instance
between physical hosts.
 Memory size can range from hundreds of megabytes to gigabytes and needs
efficient handling.
 Internet Suspend-Resume (ISR): Exploits temporal locality (memory states
differ only by recent work) by representing files as trees of small subfiles,
ensuring only changed files are transmitted. ISR is for situations where live
migration isn't necessary, resulting in higher downtime.
 Pre-copy Approach: Iteratively copies modified memory pages while the VM
runs, minimizing downtime. Compression schemes can reduce data transmitted
during migration.
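
The ISR idea of shipping only changed pieces can be sketched with content hashing
(a simplification; the subfile boundaries, hash choice, and in-memory dictionaries
are assumptions): the state is split into small subfiles, and only those whose
hashes differ from the destination's copy are transmitted.

    import hashlib

    def digest(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Subfile trees at the source (current state) and destination (stale copy).
    source = {"mem/0001": b"unchanged block", "mem/0002": b"recently edited block"}
    destination = {"mem/0001": b"unchanged block", "mem/0002": b"old contents"}

    def changed_subfiles(source, destination):
        """Return only the subfiles whose content differs, exploiting the
        temporal locality ISR assumes (most blocks are unchanged)."""
        return {
            name: data
            for name, data in source.items()
            if name not in destination or digest(data) != digest(destination[name])
        }

    to_transmit = changed_subfiles(source, destination)
    print(sorted(to_transmit))     # only 'mem/0002' needs to cross the network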

3.4.3.2 File System Migration

 To support VM migration, systems must provide each VM with a consistent,
location-independent view of the file system.

3.4.4 Dynamic Deployment of Virtual Clusters

 Table 3.5 summarizes four virtual cluster research projects and their objectives:
Cellular Disco, INRIA, COD, and VIOLIN.
 Cluster-on-Demand (COD) Project (Duke University): A virtual cluster
management system for dynamically allocating servers from a computing pool
to multiple virtual clusters. It partitions a physical cluster into multiple virtual
clusters, allowing owners to specify OS and software via XML-RPC. It
responds to load changes by dynamically restructuring virtual clusters and
supports dynamic, policy-based cluster sharing.
 VIOLIN Project (Purdue University): Applies live VM migration to
reconfigure a virtual cluster environment for better resource utilization in
executing multiple cluster jobs. It aims to enable mutually isolated virtual
environments for parallel applications on a shared physical infrastructure.

3.5.2 Virtual Storage Management

 Storage Virtualization: In system virtualization, this refers to storage managed
by VMMs and guest OSes, distinct from its prior meaning of
aggregating/repartitioning disks for physical machines.
 Data is classified as VM images (special to the virtual environment) and
application data (same as traditional OS).
 Encapsulation and isolation are key aspects, where traditional OSes and
applications are encapsulated in VMs, and multiple VMs are completely
isolated on a physical machine.
 Storage becomes a bottleneck as system software and hardware evolve faster
than storage systems to support virtualization.
 Storage operations are complicated by the virtualization layer, as guest OSes
cannot directly access hard disks, and multiple guest OSes contend for shared
storage.

3.5.4 Trust Management in Virtualized Data Centers

 A VMM changes computer architecture by providing a software layer between
operating systems and hardware to create VMs.
 An Intrusion Detection System (IDS) can run on a VMM as a high-privileged
VM. This VM-based IDS includes a policy engine and module to monitor
events in different guest VMs.
 Analyzing intrusion actions is crucial after an attack, and while most systems
use logs, ensuring log credibility and integrity is challenging, especially if the
OS kernel is compromised.
 Honeypots and honeynets are used for intrusion detection, attracting attackers to
a fake system to protect the real one and analyze attack actions. A honeypot is a
deliberately defective system simulating an OS to monitor attackers.
