Virtual Machines and Virtualization of Clusters and Data Centers

Unit 2: Virtual Machines and Virtualization of Clusters and Data Centers: Implementation Levels of
Virtualization, Virtualization Structures/ Tools and Mechanisms, Virtualization of CPU, Memory and
I/O Devices, Virtual Clusters and Resource Management, Virtualization for Data-Center Automation.
1. Implementation Levels of Virtualization
 Virtualization is a computer architecture technology by which multiple virtual
machines (VMs) are multiplexed in the same hardware machine.
 The purpose of a VM is to enhance resource sharing by many users and improve
computer performance in terms of resource utilization and application
flexibility.
 Hardware resources (CPU, memory, I/O devices, etc.) or software resources
(operating system and software libraries) can be virtualized in various functional
layers.
 The idea is to separate the hardware from the software to yield better system
efficiency. Virtualization techniques can be applied to enhance the use of compute
engines, networks, and storage.
According to a 2009 Gartner report, virtualization was the top strategic technology poised to
change the computer industry.
1.1. Levels of Virtualization Implementation
A traditional computer runs with a host operating system specially tailored for its
hardware architecture, as shown in the figure below.

After virtualization, different user applications managed by their own operating systems
(guest OSes) can run on the same hardware, independent of the host OS. This is often done by
adding a software layer called a virtualization layer, as shown in the figure below.
 This virtualization layer is known as hypervisor or virtual machine monitor
(VMM).
 The VMs are shown in the upper boxes, where applications run with their own
guest OS over the virtualized CPU, memory, and I/O resources.
 The main function of the software layer for virtualization is to virtualize the physical
hardware of a host machine into virtual resources to be used by the VMs,
exclusively.
 This can be implemented at various operational levels, as we will discuss shortly.
 The virtualization software creates the abstraction of VMs by interposing a
virtualization layer at various levels of a computer system.
 Common virtualization layers include the instruction set architecture (ISA)
level, hardware level, operating system level, library support level, and application
level, as shown in the figure below.

Virtualization ranging from hardware to applications in five abstraction levels

1.1.1. Instruction Set Architecture Level


The Instruction Set Architecture (ISA) is the part of the computer architecture that is
visible to the programmer. It defines:
 The set of machine instructions the processor can execute.
 The data types, registers, addressing modes, memory architecture, interrupts, and
input/output model.
Key Components of ISA:
1. Instruction Set: List of all operations the processor can perform (e.g., ADD, SUB,
LOAD, STORE, JUMP).
2. Data Types: Supported data formats (e.g., integer, floating point).
3. Registers: Number, type, and function of CPU registers.
4. Addressing Modes: How instructions access operands in memory (e.g., direct, indirect,
indexed).
5. Memory Architecture: How memory is organized (e.g., byte-addressable, word-
addressable).
6. I/O Mechanisms: How the processor communicates with peripherals.
7. Interrupts and Exceptions: Mechanism for handling unexpected events.
Examples of ISA: x86 (Intel, AMD processors), ARM (mobile and embedded devices)
Importance of ISA Level:
 Portability: Programs compiled for a specific ISA can run on any processor
implementing that ISA.
 Compatibility: Allows newer hardware to support old software.
 Performance Optimization: ISA features affect compiler optimization and execution
speed.
 Security & Reliability: Some ISAs provide support for secure execution and fault
tolerance.
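The ISA level is also where whole-ISA emulation happens: a software interpreter fetches, decodes, and emulates each guest instruction one at a time. As a hedged illustration only, the toy ISA, register names, and memory model below are invented for this sketch and do not correspond to any real processor:

```python
# Toy interpreter illustrating ISA-level virtualization: each guest
# instruction is decoded and emulated in software. The opcodes mirror
# the examples above (ADD, LOAD, STORE, JUMP); the machine is invented.

def run(program, memory):
    """Execute a list of (opcode, *operands) tuples on a tiny register machine."""
    regs = {"R0": 0, "R1": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":            # LOAD reg, addr : reg <- memory[addr]
            regs[args[0]] = memory[args[1]]
        elif op == "ADD":           # ADD dst, src : dst <- dst + src
            regs[args[0]] += regs[args[1]]
        elif op == "STORE":         # STORE reg, addr : memory[addr] <- reg
            memory[args[1]] = regs[args[0]]
        elif op == "JUMP":          # JUMP target : unconditional branch
            pc = args[0]
            continue
        pc += 1
    return memory

mem = {0: 2, 1: 3, 2: 0}
run([("LOAD", "R0", 0), ("LOAD", "R1", 1),
     ("ADD", "R0", "R1"), ("STORE", "R0", 2)], mem)
print(mem[2])  # 5, i.e. 2 + 3
```

Because every instruction passes through software decoding, this style of virtualization is flexible (any guest ISA can run on any host) but slow, which is the trade-off noted later for emulators.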
1.1.2. Hardware Abstraction Level
 Hardware-level virtualization is performed directly on top of the bare hardware.
 On one hand, this approach generates a virtual hardware environment for each VM;
on the other, it manages the underlying hardware through the virtualization layer.
 The idea is to virtualize a computer’s resources, such as its processors, memory, and
I/O devices.
 The intention is to improve the hardware utilization rate by allowing multiple users to
share the hardware concurrently.

1.1.3. Operating System Level


 This refers to an abstraction layer between the traditional OS and user applications.
 OS-level virtualization creates isolated containers on a single physical server, with
OS instances that utilize the hardware and software in data centers.
 The containers behave like real servers.
 OS-level virtualization is commonly used in creating virtual hosting environments to
allocate hardware resources among a large number of mutually distrusting users.

1.1.4. Library Support Level


 Virtualization with library interfaces is possible by controlling the communication
link between applications and the rest of a system through API hooks.
 The software tool WINE has implemented this approach to support Windows
applications on top of UNIX hosts.

1.1.5. User-Application Level


The User-Application Level refers to the programs and interfaces that users use to
perform tasks on a computer. These are built on top of the Operating System and interact
indirectly with the hardware through system calls and APIs.
Purpose of User-Application Level:
 To provide a user-friendly interface for interacting with the system.
 To abstract away complexity of hardware and OS-level tasks.
 To perform high-level functions like document editing, web browsing, gaming, etc.

1.2. VMM Design Requirements and Providers


Overview of Hardware-Level Virtualization:
 A layer called the Virtual Machine Monitor (VMM) is inserted between the real
(physical) hardware and the operating systems.
 The VMM manages hardware resources and captures all hardware access attempts.
 Acts like a traditional OS, allowing multiple OSes to run on the same hardware
simultaneously via virtual CPUs.
Three Key Requirements for a VMM:
1. Environment Fidelity: Programs should run in an environment nearly identical to real
hardware.
2. Performance Efficiency: Minimal slowdown when programs run under the VMM.
3. Resource Control: VMM must have complete control over all hardware resources.

Exceptions to Requirement 1:
 Differences are acceptable due to:
o Resource contention (when multiple VMs run on the same system).
o Timing dependencies (because of virtualization overhead or concurrent
VMs).

Functional Equivalence vs. Performance:


 VMM must ensure functional equivalence with real machines.
 A time-sharing OS cannot be considered a VMM because it does not provide an identical environment.

Efficiency in Virtualization:
 Traditional emulators/simulators (instruction-by-instruction interpretation) are flexible
but slow.
 Efficient VMMs execute most instructions directly on the real CPU without software
handling.

VMM Resource Control Includes:


1. Allocating resources to programs.
2. Preventing access to unauthorized resources.
3. Regaining control of allocated resources when needed.

Processor Architecture Dependency:


 Not all CPUs are suitable for VMMs.
 x86 architecture poses challenges (e.g., some privileged instructions cannot be
trapped).
 If the hardware does not natively support virtualization, it must be modified to do
so; this is known as hardware-assisted virtualization.

1.3. Virtualization Support at the OS Level


1.3.1. Why OS-Level Virtualization?
 Hardware-level VMs are slow to initialize, especially problematic when thousands
are needed (e.g., in cloud computing).
 Storing full VM images consumes a lot of space with redundant content.
 Performance is slower and density is lower in hardware-level virtualization.
 May require para-virtualization (modifying the guest OS) or even hardware
modification to reduce overhead.
 OS-level virtualization solves these problems efficiently.

What is OS-Level Virtualization?


 A virtualization layer is inserted within the OS, not below it.
 It partitions physical resources to create multiple isolated environments within a
single OS kernel.
 These environments are called:
o Virtual Execution Environments (VE)
o Virtual Private Systems (VPS)
o Or more commonly, containers
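As a rough sketch of the idea, OS-level virtualization can be pictured as a single kernel carving its resources into isolated partitions. The Kernel class, the memory quota, and the per-container namespace fields below are illustrative assumptions, not the API of any real container runtime:

```python
# Minimal sketch of OS-level virtualization: one "kernel" object partitions
# its resources into isolated containers (VEs/VPSes). Purely illustrative.

class Kernel:
    def __init__(self, total_mem_mb):
        self.free_mem = total_mem_mb
        self.containers = {}

    def create_container(self, name, mem_mb):
        if mem_mb > self.free_mem:
            raise MemoryError("not enough memory to carve out a container")
        self.free_mem -= mem_mb
        # Each container gets a private view: its own process list and
        # filesystem root, invisible to the other containers.
        self.containers[name] = {"mem": mem_mb, "procs": [], "root": f"/vps/{name}"}
        return self.containers[name]

kernel = Kernel(total_mem_mb=1024)
web = kernel.create_container("web", 256)
db = kernel.create_container("db", 512)
print(kernel.free_mem)   # 256 MB left for further containers
print(web["root"])       # /vps/web : isolated filesystem view
```

Note the contrast with hardware-level virtualization: there is only one kernel here, so no full VM image must be booted or stored per container, which is why initialization is fast and density is high.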

1.3.2. Advantages of OS Extensions


Enhanced Functionality
 Adds new features or capabilities to the existing operating system.
 Example: Advanced file systems, real-time scheduling, or custom security models.
Improved Security
 Allows for additional security modules, such as intrusion detection, encryption, or
access control enhancements.
Performance Optimization
 Extensions can introduce kernel-level optimizations, improving performance for
specific applications or hardware.
Modularity and Flexibility
 Makes the OS modular, enabling developers to load, update, or remove components
without altering the whole system.
Customizability
 Enables system developers to tailor the OS for specialized applications (e.g.,
embedded systems, servers, mobile devices).
Support for New Hardware or Services
 Extensions can help support emerging hardware or software services without
waiting for a full OS upgrade.

Experimentation and Research


 Facilitates testing and innovation in OS design, useful in academic or development
environments.
Improved Compatibility
 Can be used to bridge legacy systems or applications with modern environments by
extending system APIs.

1.3.3. Disadvantages of OS Extensions


System Instability
 Poorly written or incompatible extensions can crash the system or cause
unpredictable behaviour.
Security Risks
 Extensions can introduce vulnerabilities (weaknesses), especially if they bypass
standard security checks or come from untrusted sources.
Compatibility Issues
 Updates to the base OS may break compatibility with existing extensions, leading
to malfunctions or the need for re-development.
Performance Overhead
 Some extensions may consume additional system resources (CPU, memory),
leading to slower performance.
Increased Complexity
 Managing multiple extensions adds to system complexity, making it harder to
debug or maintain the OS.
Difficult to Test
 Extensions that interact closely with the kernel or hardware may be difficult to test,
leading to undiscovered bugs.
Lack of Standardization
 Extensions may follow different programming standards, making integration or
migration across platforms harder.
Update and Maintenance Challenges
 Requires frequent updates to remain compatible with the core OS, especially after
security patches or version upgrades.

1.3.4. Virtualization on Linux or Windows Platforms


Virtualization on Linux
Common Tools and Technologies:
 KVM (Kernel-based Virtual Machine) – Native virtualization built into the Linux
kernel.
 QEMU (Quick Emulator) – Hardware emulator used with or without KVM.
 Xen – Type 1 hypervisor for server virtualization.
 LXC (Linux Containers) – OS-level virtualization for containers.
 Docker – Popular containerization platform using LXC or cgroups.
 Virt-Manager – GUI tool to manage virtual machines.
 Libvirt – API and daemon for managing virtualization.

Advantages:
 Open-source and cost-effective.
 Highly customizable and flexible.
 Supports both hardware-level (KVM/Xen) and OS-level virtualization (LXC).
 Lightweight containers for microservices (Docker).
 Efficient resource usage.

Disadvantages:
 Steeper learning curve for beginners.
 May require more manual configuration (CLI tools).
 Some tools lack the polish of commercial GUI-based platforms.

Virtualization on Windows
Common Tools and Technologies:
 Hyper-V – Native Windows Type 1 hypervisor (available in Windows Pro,
Enterprise, and Server editions).
 VirtualBox – Cross-platform, open-source hypervisor (supports Windows, Linux,
macOS).
 VMware Workstation – Commercial hypervisor for desktop virtualization.
 Windows Subsystem for Linux (WSL2) – Uses lightweight VM to run Linux on
Windows.
Advantages:
 Easy-to-use GUIs with Hyper-V Manager and VirtualBox.
 Seamless integration with Windows ecosystem (e.g., Active Directory,
PowerShell).
 Ideal for enterprise environments and test labs.
 WSL2 enables Linux environments without full VM overhead.

Disadvantages:
 Hyper-V only available on certain editions of Windows.
 More resource-intensive than Linux in some scenarios.
 Licensing costs for some tools (e.g., VMware).

1.4. Middleware Support for Virtualization


What is Middleware in Virtualization?
 Middleware is software that sits between the OS and applications (or between layers
of distributed systems).
 In the context of virtualization, middleware provides support services that help
manage and coordinate virtual machines (VMs), containers, and applications across
multiple hosts or environments.
 It abstracts the complexity of virtualization, offering scalability, interoperability,
monitoring, and communication support.
Functions of Middleware in Virtualized Environments:
1. Resource Management
o Allocates and monitors CPU, memory, disk, and network usage across VMs or
containers.
o Ensures load balancing and resource optimization.
2. Orchestration & Automation
o Automates deployment, scaling, and migration of VMs/containers.
o Works with platforms like Kubernetes, Docker Swarm, or OpenStack Heat.
3. Security & Isolation
o Enforces access control, policy management, and user isolation across virtual
environments.
4. Communication Services
o Provides message passing, RPC (Remote Procedure Calls), and service discovery
among distributed VMs or containers.

5. Monitoring and Logging


o Tracks VM/container performance and logs events.
o Tools like Prometheus, Grafana, ELK stack often integrate at the middleware
level.
6. Interoperability
o Allows different VMs or container types (e.g., Docker vs. LXC) to work together
seamlessly.
o Supports heterogeneous systems (Windows + Linux).
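As a minimal illustration of the resource-management function listed above, a middleware layer might place each incoming VM on the least-loaded host. The greedy placement policy, the host names, and the CPU units below are assumptions made for this sketch, not how any particular orchestrator actually works:

```python
# Hedged sketch of middleware-style resource management: assign each VM to
# the host with the most free CPU capacity (greedy load balancing).

def place_vms(hosts, vm_demands):
    """hosts: dict host -> free CPU units; vm_demands: list of (vm, cpu)."""
    placement = {}
    for vm, cpu in vm_demands:
        # Pick the host with the most free capacity.
        host = max(hosts, key=hosts.get)
        if hosts[host] < cpu:
            raise RuntimeError(f"no host can fit {vm}")
        hosts[host] -= cpu          # reserve the capacity
        placement[vm] = host
    return placement

hosts = {"host-a": 8, "host-b": 6}
placement = place_vms(hosts, [("vm1", 4), ("vm2", 4), ("vm3", 2)])
print(placement)  # {'vm1': 'host-a', 'vm2': 'host-b', 'vm3': 'host-a'}
```

Real orchestrators (Kubernetes, OpenStack) use far richer policies (affinity rules, memory and network constraints, rescheduling on failure), but the core loop of matching demands to monitored capacity is the same.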

2. Virtualization Structures / Tools and Mechanisms


 Before virtualization, the operating system manages the hardware; after virtualization, a
virtualization layer is inserted between the hardware and the operating system.
 In such a case, the virtualization layer is responsible for converting portions of the real
hardware into virtual hardware.
 Therefore, different operating systems such as Linux and Windows can run on the
same physical machine, simultaneously.
 Depending on the position of the virtualization layer, there are several classes of VM
architectures, namely the hypervisor architecture, paravirtualization, and host-
based virtualization.
 The hypervisor is also known as the VMM (Virtual Machine Monitor). They both
perform the same virtualization operations.

2.1. Hypervisor and Xen Architecture


 The hypervisor supports hardware-level virtualization on bare metal devices like
CPU, memory, disk and network interfaces.
 The hypervisor software sits directly between the physical hardware and its OS.
 This virtualization layer is referred to as either the VMM or the hypervisor.
 The hypervisor provides hypercalls for the guest OSes and applications.
 Depending on the functionality, a hypervisor can assume a micro-kernel architecture
like the Microsoft Hyper-V, or it can assume a monolithic hypervisor architecture
like the VMware ESX for server virtualization.
 A micro-kernel hypervisor includes only the basic and unchanging functions (such as
physical memory management and processor scheduling).
 The device drivers and other changeable components are outside the hypervisor.
 A monolithic hypervisor implements all the aforementioned functions, including those
of the device drivers.
 Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller than
that of a monolithic hypervisor.
 Essentially, a hypervisor must be able to convert physical devices into virtual resources
dedicated for the deployed VM to use.

2.1.1. The Xen Architecture:


 Xen is an open source hypervisor program developed by Cambridge University.
 Xen is a microkernel hypervisor, which separates the policy from the mechanism.
 The Xen hypervisor implements all the mechanisms, leaving the policy to be
handled by Domain 0. As shown in the figure, Xen does not include any device drivers
natively; it just provides a mechanism by which a guest OS can have direct access to
the physical devices. As a result, the size of the Xen hypervisor is kept rather small.
 Xen provides a virtual environment located between the hardware and the OS.
 A number of vendors are in the process of developing commercial Xen
hypervisors; among them are Citrix XenServer and Oracle VM.
 The core components of a Xen system are the hypervisor, kernel, and applications.
 The organization of the three components is important.
 Like other virtualization systems, many guest OSes can run on top of the hypervisor.
 The guest OS, which has control ability, is called Domain 0, and the others are called
Domain U. Domain 0 is a privileged guest OS of Xen.
 It is first loaded when Xen boots without any file system drivers being available.
Domain 0 is designed to access hardware directly and manage devices. Therefore,
one of the responsibilities of Domain 0 is to allocate and map hardware resources for
the guest domains (the Domain U domains).
 Xen is based on Linux and its security level is C2. Its management VM
is named Domain 0, which has the privilege to manage other VMs implemented on
the same host.
 If Domain 0 is compromised, the hacker can control the entire system. So, in the VM
system, security policies are needed to improve the security of Domain 0.
 Domain 0, behaving as a VMM, allows users to create, copy, save, read, modify,
share, migrate, and roll back VMs as easily as manipulating a file, which flexibly
provides tremendous benefits for users.
 It also brings a series of security problems during the software life cycle and data
lifetime.
 Traditionally, a machine’s lifetime can be envisioned as a straight line where the
current state of the machine is a point that progresses monotonically as the software
executes.
 During this time, configuration changes are made, software is installed, and patches
are applied. In a virtualized environment, by contrast, the VM state forms a tree: at any
point, execution can branch into N different paths, and multiple instances of a VM can
exist at any point in this tree at any given time. VMs are allowed to roll back to
previous states in their execution (e.g., to fix configuration errors) or rerun from the
same point many times (e.g., as a means of distributing dynamic content or
circulating a “live” system image).
The Xen architecture has a special Domain 0 for control and I/O, and several guest
domains for user applications.
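The tree of VM states described above can be sketched as a simple data structure. The Snapshot class and the labels below are illustrative, not the snapshot format of Xen or any real hypervisor:

```python
# Sketch of the VM-state tree: each snapshot is a node, a rollback resumes
# from an earlier node, and re-running from the same snapshot creates
# sibling branches. Purely illustrative.

class Snapshot:
    def __init__(self, label, parent=None):
        self.label = label
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def lineage(self):
        """Path from the root to this snapshot: the VM's execution history."""
        node, path = self, []
        while node:
            path.append(node.label)
            node = node.parent
        return list(reversed(path))

root = Snapshot("fresh-install")
patched = Snapshot("patched", parent=root)
# Roll back to 'fresh-install' and run again: the root now has two branches.
experiment = Snapshot("experiment", parent=root)
print(patched.lineage())   # ['fresh-install', 'patched']
print(len(root.children))  # 2
```

This branching is exactly what creates the security problems the text mentions: a rolled-back branch can resurrect already-patched vulnerabilities or reuse cryptographic state.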
2.2. Binary Translation with Full Virtualization
 Depending on implementation technologies, hardware virtualization can be classified
into two categories: full virtualization and host-based virtualization.
 Full virtualization does not need to modify the host OS. It relies on binary
translation to trap and to virtualize the execution of certain sensitive, nonvirtualizable
instructions.
 The guest OSes and their applications consist of noncritical (Arithmetic operations
(ADD, SUB), Logical operations (AND, OR, XOR), Data movement (MOV, LOAD,
STORE), Control flow (JMP, CALL)) and critical instructions (Changing the page
table (memory management), Disabling/enabling interrupts, Performing I/O operations,
Modifying control registers.).
 In a host-based system, both a host OS and a guest OS are used. A virtualization
software layer is built between the host OS and guest OS. These two classes of VM
architecture are introduced next.

2.3. Full Virtualization


 With full virtualization, noncritical instructions run on the hardware directly while
critical instructions are discovered and replaced with traps into the VMM to be
emulated by software.
 Both the hypervisor and VMM approaches are considered full virtualization.
 Why are only critical instructions trapped into the VMM? Because binary
translation can incur a large performance overhead.
 Noncritical instructions do not control hardware or threaten the security of the
system, but critical instructions do, so the critical ones must be trapped and emulated in software.
 Therefore, running noncritical instructions directly on hardware not only promotes
efficiency but also ensures system security.
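This trap-and-emulate split can be sketched in a few lines. The instruction names, the CRITICAL set, and the VMM behaviour below are invented for illustration, echoing the noncritical/critical examples given earlier in the text:

```python
# Minimal trap-and-emulate sketch of full virtualization: noncritical guest
# instructions "run directly" (here, a plain counter increment stands in for
# native execution), while critical ones trap into a VMM for emulation.

CRITICAL = {"DISABLE_INTERRUPTS", "SET_PAGE_TABLE", "IO_OUT"}

def vmm_emulate(instr, state):
    # The VMM applies the effect to the VM's *virtual* state only,
    # never directly to the real hardware.
    state["trapped"].append(instr)

def run_guest(instructions, state):
    for instr in instructions:
        if instr in CRITICAL:
            vmm_emulate(instr, state)        # trap into the VMM
        else:
            state["executed_natively"] += 1  # direct execution on the CPU

state = {"executed_natively": 0, "trapped": []}
run_guest(["ADD", "MOV", "DISABLE_INTERRUPTS", "SUB", "IO_OUT"], state)
print(state["executed_natively"])  # 3 noncritical instructions ran directly
print(state["trapped"])            # ['DISABLE_INTERRUPTS', 'IO_OUT']
```

Since the vast majority of a typical instruction stream is noncritical, most of the guest runs at native speed, and only the rare critical instructions pay the emulation cost.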

2.4. Binary Translation of Guest OS Requests Using a VMM

 This approach was implemented by VMware and many other software companies.
 As shown in the following figure, VMware puts the VMM at Ring 0 and the guest OS
at Ring 1.
 The VMM scans the instruction stream and identifies the privileged, control- and
behaviour-sensitive instructions.
 When these instructions are identified, they are trapped into the VMM, which
emulates the behaviour of these instructions.
 The method used in this emulation is called binary translation.
 Therefore, full virtualization combines binary translation and direct execution.
 The guest OS is completely decoupled from the underlying hardware. Consequently, the
guest OS is unaware that it is being virtualized.
 The performance of full virtualization may not be ideal, because it involves binary
translation which is rather time-consuming.
 In particular, the full virtualization of I/O-intensive applications is really a big
challenge.
 Binary translation employs a code cache to store translated hot instructions to improve
performance, but it increases the cost of memory usage.
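The code cache mentioned above amounts to memoizing translated blocks. In the sketch below, the "translation" is a stand-in string transform, not real binary translation; the point is only that a hot block is translated once and reused:

```python
# Sketch of a translated-code cache: hot guest code blocks are translated
# once and the result reused, trading memory for speed. Illustrative only.

translation_cache = {}
translations_done = 0

def translate_block(block):
    """Return the translated form of a guest block, caching the result."""
    global translations_done
    if block not in translation_cache:
        translations_done += 1                    # the costly work, done once
        translation_cache[block] = f"native({block})"
    return translation_cache[block]               # subsequent calls: cache hit

# A hot loop re-executes the same block; only the first run pays for translation.
for _ in range(1000):
    translate_block("guest_block_42")
print(translations_done)       # 1
print(len(translation_cache))  # 1 cached block occupying memory
```

The last line hints at the cost side: every distinct translated block stays resident, which is the increased memory usage the text refers to.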

2.5. Host-Based Virtualization


 An alternative VM architecture is to install a virtualization layer on top of the host
OS.
 This host OS is still responsible for managing the hardware.
 The guest OSes are installed and run on top of the virtualization layer.
 Dedicated applications may run on the VMs.
 Certainly, some other applications can also run with the host OS directly.
 This host-based architecture has some distinct advantages.
 First, the user can install this VM architecture without modifying the host OS.
 The virtualizing software can rely on the host OS to provide device drivers and other
low-level services.
 This will simplify the VM design and ease its deployment.
 Second, the host-based approach appeals to many host machine configurations.
 Compared to the hypervisor/VMM architecture, the performance of the host-based
architecture may also be low.
 When an application requests hardware access, it involves four layers of mapping which
downgrades performance significantly.
 When the ISA of a guest OS is different from the ISA of the underlying hardware, binary
translation must be adopted. Although the host-based architecture has flexibility, the
performance is too low to be useful in practice.

3. VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES


3.1. Hardware Support for Virtualization
 Modern operating systems and processors permit multiple processes (a process is a
program in execution) to run simultaneously; this is called multitasking. It is achieved
by rapidly switching between processes, giving the illusion that they are all executing
at the same time.
 If there were no protection mechanism in the processor, instructions from different
processes could access the hardware directly and cause a system crash.

 Therefore, all processors have at least two modes, user mode and supervisor mode,
to ensure controlled access of critical hardware.
 Instructions running in supervisor mode are called privileged instructions.
 Other instructions are unprivileged instructions.
 In a virtualized environment, it is more difficult to make OSes and applications run
correctly because there are more layers in the machine stack.
 Example: Hardware Support for Virtualization in the Intel x86 Processor (for self study)

3.2. CPU Virtualization


 A VM is a duplicate of an existing computer system in which a majority of the VM
instructions are executed on the host processor in native mode.
 Thus, unprivileged instructions of VMs run directly on the host machine for higher
efficiency.
 Other critical instructions should be handled carefully for correctness and stability.
 The critical instructions are divided into three categories: privileged instructions,
control sensitive instructions, and behaviour-sensitive instructions.
o Privileged instructions execute in a privileged mode and will be trapped if
executed outside this mode.
o Control-sensitive instructions attempt to change the configuration of
resources used.
o Behaviour-sensitive instructions have different behaviours depending on the
configuration of resources, including the load and store operations over the
virtual memory.

3.2.1. Hardware-Assisted CPU Virtualization


 Purpose: Simplifies virtualization compared to full or paravirtualization, which are
complex.
 Intel VT-x / AMD-V:
o Introduce a new privilege level (often called Ring -1) in x86 processors.
o The hypervisor runs at Ring -1, while the operating system continues to run
at Ring 0.
 Automatic Instruction Trapping:
o All privileged and sensitive instructions are automatically trapped by the
hypervisor.
o This eliminates the need for the binary translation used in full virtualization.
 Unmodified OS Support:
o Operating systems can run in VMs without modification, improving
compatibility.
 Additional CPU Instructions:
o New instructions are added to manage VM state and transitions.
 Popular Hypervisors Using VT-x: Xen, VMware, Microsoft Virtual PC.
 Performance Considerations:
o Hardware-assisted virtualization is generally efficient, but the mode-switching
overhead between the hypervisor and the guest OS can reduce performance.
o In some cases, binary translation may outperform hardware-assisted
virtualization.
 Hybrid Virtualization Approach:
o Used by systems like VMware; combines hardware and software virtualization
techniques, offloading some tasks to hardware while handling others in
software.
 Performance Optimization:
o Combining paravirtualization with hardware-assisted virtualization can
further enhance performance.

3.3. Memory Virtualization


 Virtual Memory in a Traditional OS:
o The OS uses page tables for one-stage mapping: virtual → machine memory.
o The MMU and TLB (translation lookaside buffer, which stores recent
virtual-to-physical address translations so the CPU can quickly reach
frequently used memory locations) optimize memory access.
 Virtual Memory in VMs:
o Requires two-stage mapping:
 Virtual → physical memory (handled by the guest OS)
 Physical → machine memory (handled by the VMM)
 MMU Virtualization:
o Must be transparent to the guest OS.
o The guest OS manages virtual-to-physical mapping; the VMM maps guest
physical memory to actual machine memory.
 Shadow Page Tables:
o The VMM maintains a shadow page table for each guest page table, enabling
direct lookup for memory translation.
o Can cause performance overhead due to high memory cost.
 Nested Page Tables:
o Add another layer of indirection, used by the hypervisor to map
physical → machine memory.
o Help reduce shadow page table overhead.
 TLB Optimization:
o The TLB maps virtual memory directly to machine memory, avoiding
repeated two-level translation.
 VMware Approach:
o Uses shadow page tables to manage memory translation, updating them when
the guest OS changes its mappings.
 AMD Barcelona Processor (2007):
o Introduced hardware-assisted memory virtualization, using nested paging to
support two-stage address translation efficiently.

Two-level memory mapping procedure
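The two-level mapping and the shadow-page-table shortcut can be sketched as composing two lookup tables into one. The page numbers below are invented, and a real MMU walks hardware page-table structures rather than Python dicts:

```python
# Sketch of two-stage memory mapping: the guest OS maps virtual -> (guest)
# physical pages, the VMM maps guest physical -> machine pages, and the VMM
# folds the two into a shadow page table so a lookup needs only one step.

guest_page_table = {0: 7, 1: 3}    # guest virtual page -> guest physical page
vmm_page_table = {7: 21, 3: 90}    # guest physical page -> machine page

def build_shadow(guest_pt, vmm_pt):
    """Compose the two mappings into one direct virtual->machine table."""
    return {vpage: vmm_pt[ppage] for vpage, ppage in guest_pt.items()}

shadow = build_shadow(guest_page_table, vmm_page_table)
print(shadow[0])   # 21: one lookup instead of two
print(shadow[1])   # 90

# When the guest OS changes a mapping, the VMM must rebuild the affected
# shadow entries; this maintenance is the overhead the text refers to.
guest_page_table[0] = 3
shadow = build_shadow(guest_page_table, vmm_page_table)
print(shadow[0])   # 90: the shadow table now reflects the new mapping
```

Nested paging (as in AMD's hardware support) removes the need for the software-maintained shadow table by letting the hardware perform both translation stages itself.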

3.4. I/O Virtualization


 I/O virtualization involves managing the routing of I/O requests between virtual devices and
the shared physical hardware.
 There are three ways to implement I/O virtualization: full device emulation, para-
virtualization, and direct I/O.
 Full device emulation is the first approach for I/O virtualization.
 Generally, this approach emulates well-known, real-world devices.
 All the functions of a device or bus infrastructure, such as device enumeration, identification,
interrupts, and DMA, are replicated in software.
 This software is located in the VMM and acts as a virtual device. The I/O access requests of
the guest OS are trapped in the VMM which interacts with the I/O devices.

 A single hardware device can be shared by multiple VMs that run concurrently.
 However, software emulation runs much slower than the hardware it emulates.
 The para-virtualization method of I/O virtualization is typically used in Xen (an open-
source hypervisor that allows multiple operating systems to run concurrently on the
same hardware, effectively creating virtual machines (VMs)).
 It is also known as the split driver model, consisting of a frontend driver and a backend
driver. The frontend driver runs in Domain U and interacts with the guest operating
system, while the backend driver runs in Domain 0 and handles the actual hardware
interaction within the privileged domain.
 They interact with each other via a block of shared memory. The frontend driver
manages the I/O requests of the guest OSes and the backend driver is responsible for
managing the real I/O devices and multiplexing the I/O data of different VMs.
 Although para-I/O-virtualization achieves better device performance than full device
emulation, it comes with a higher CPU overhead.
Example: VMware Workstation for I/O Virtualization – self study
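The split driver model can be sketched with a queue standing in for the block of shared memory. The function names are illustrative, a deque plays the role of the shared ring, and a dict plays the role of the real device behind Domain 0:

```python
# Toy sketch of Xen's split driver model: a frontend driver (Domain U)
# places I/O requests in a shared ring, and a backend driver (Domain 0)
# drains the ring and talks to the "real" device.

from collections import deque

shared_ring = deque()   # stands in for the shared-memory ring buffer
disk = {}               # stands in for the real device managed by Domain 0

def frontend_write(sector, data):
    """Guest-side driver: forwards the request, never touches hardware."""
    shared_ring.append(("write", sector, data))

def backend_service():
    """Domain 0 driver: multiplexes queued requests onto the real device."""
    while shared_ring:
        op, sector, data = shared_ring.popleft()
        if op == "write":
            disk[sector] = data

frontend_write(0, b"boot")
frontend_write(5, b"data")
backend_service()
print(disk[0])           # b'boot'
print(len(shared_ring))  # 0: the backend has drained the ring
```

Because several guests can enqueue into rings serviced by the same backend, Domain 0 naturally multiplexes the I/O of different VMs, as the text describes, at the cost of extra CPU work in Domain 0.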

3.5. Virtualization in Multi-Core Processors

 Virtualizing multi-core processors is more complex than virtualizing uni-core
processors.
 Multi-core processors offer higher performance by integrating multiple cores on one
chip.
 Two main challenges arise:
1. Application programs must be parallelized to fully utilize all cores (breaking down
a task into smaller parts that can be processed concurrently on multiple processing
units (cores)), which requires new programming models, languages, and libraries.
2. Software must explicitly assign tasks to specific cores (system does not
automatically balance the tasks across the cores), making scheduling and resource
management complex.
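The first challenge above, explicit task decomposition, can be sketched as splitting a workload into independent chunks and handing each to its own worker. Threads stand in for per-core workers in this sketch; real core-level parallelism would use separate processes, possibly pinned to specific cores with CPU affinity:

```python
# Sketch of parallelizing an application for multiple cores: the task is
# explicitly decomposed into independent chunks, one per worker.

from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Work unit for one core: sum of squares over its slice of the data."""
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]     # decompose into 4 strided parts
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))
print(total == sum(x * x for x in data))    # True: same result, split work
```

The need to write this decomposition by hand, and to decide how many parts and which worker gets which part, is precisely why the text calls for new programming models, languages, and libraries.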
 Research on scheduling algorithms (e.g., first-come, first-served) and resource management
tries to balance performance and complexity but struggles with trade-offs between
competing demands.
 Emerging dynamic heterogeneity mixes large CPU cores (fat cores) with smaller GPU cores
(thin cores) on the same chip, increasing resource management complexity.
 This heterogeneity stems from less reliable transistors and growing transistor usage
complexity.

3.5.1. Virtual Hierarchy


 Many-core CMPs (chip multiprocessors) enable space-sharing, assigning
single-threaded or multithreaded jobs to separate core groups for long periods.
 Virtual hierarchies are dynamic, software-managed structures that organize processor
cores and caches in a way that optimizes performance for the current workload, even if
the physical hardware layout stays the same. They simulate or overlay new
coherence/cache structures on top of existing physical ones to make things faster or more
efficient.
 Unlike fixed physical hierarchies, virtual hierarchies improve performance and isolate
workloads better.
 The first level of the hierarchy places data close to cores that need it, enabling faster
access and establishing a shared-cache domain.
 Virtual hierarchies reduce miss access time and minimize interference between virtual
machines (VMs) or workloads.
 The second level of the hierarchy maintains a globally shared memory (allowing distributed components to access and modify common data as if they shared a single physical RAM, even though the memory is spread across nodes), which permits dynamic resource repartitioning without costly cache flushes.
 This approach facilitates content-based page sharing and minimizes changes to existing system software (the host operating system).
 Virtual hierarchy supports space-shared multiprogramming and server consolidation
workloads effectively.
 Virtual hierarchy design can be applied both in multi-VM and single-OS environments
to optimize cache and coherence management.
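The two-level behavior described above can be illustrated with a minimal toy model (plain Python dictionaries standing in for caches; this is a bookkeeping sketch, not real coherence hardware):

```python
class VirtualHierarchy:
    """Toy model of a two-level virtual hierarchy: level 1 is a per-VM
    shared-cache domain (data kept close to the cores that use it, and
    isolated from other VMs); level 2 is a globally shared level that is
    consulted only on a level-1 miss."""
    def __init__(self):
        self.level1 = {}   # vm_id -> that VM's private cache domain
        self.level2 = {}   # globally shared backing level

    def write(self, vm_id, addr, value):
        self.level1.setdefault(vm_id, {})[addr] = value
        self.level2[addr] = value          # keep the global level coherent

    def read(self, vm_id, addr):
        domain = self.level1.setdefault(vm_id, {})
        if addr in domain:                 # fast hit in the VM's own domain
            return domain[addr], "L1-hit"
        value = self.level2[addr]          # miss: fall back to shared level
        domain[addr] = value               # cache locally for future reads
        return value, "L2-miss"
```

One VM's reads never disturb another VM's level-1 domain, which mirrors the interference isolation the text attributes to virtual hierarchies.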

3.6. VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT

 A physical cluster is a collection of servers (physical machines) interconnected by a physical network such as a LAN. When a traditional VM is initialized, the administrator needs to manually write configuration information or specify the configuration sources.
 When more VMs join a network, an inefficient configuration always causes problems
with overloading or underutilization.
 Amazon’s Elastic Compute Cloud (EC2) is a good example of a web service that
provides elastic computing power in a cloud.
 EC2 permits customers to create VMs and to manage user accounts over the time of
their use.
 Most virtualization platforms, including Xen Server and VMware ESX Server,
support a bridging mode which allows all domains to appear on the network as individual
hosts. By using this mode, VMs can communicate with one another freely through the
virtual network interface card and configure the network automatically.
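As a hedged sketch of what bridged-mode configuration looks like, the function below builds a libvirt-style `<interface type='bridge'>` fragment as a plain string; the bridge name `br0` and the use of libvirt XML are illustrative assumptions, not tied to a particular Xen or ESX deployment:

```python
def bridged_interface_xml(bridge="br0", mac=None):
    """Build a libvirt-style <interface type='bridge'> fragment: the
    network setting that lets a VM appear on the LAN as an individual
    host. The bridge name and optional MAC are caller-supplied."""
    mac_line = f"  <mac address='{mac}'/>\n" if mac else ""
    return (
        "<interface type='bridge'>\n"
        + mac_line
        + f"  <source bridge='{bridge}'/>\n"
        + "  <model type='virtio'/>\n"
        + "</interface>"
    )

print(bridged_interface_xml(mac="52:54:00:aa:bb:cc"))
```

Attaching such an interface to each VM puts all of them on the same physical LAN segment, so they can reach one another and obtain addresses (e.g., via DHCP) like ordinary hosts.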

3.6.1. Physical versus Virtual Clusters


 Virtual clusters are built with VMs installed at distributed servers from one or more
physical clusters.
 The VMs in a virtual cluster are interconnected logically by a virtual network across
several physical networks.
 The following figure illustrates the concepts of virtual clusters and physical clusters.
 Each virtual cluster is formed with physical machines or a VM hosted by multiple
physical clusters.
 Each virtual cluster's boundary is shown as a distinct outline in the figure.
 The provisioning of VMs to a virtual cluster is done dynamically to have the
following interesting properties:
• The virtual cluster nodes can be either physical or virtual machines. Multiple
VMs running with different OSes can be deployed on the same physical
node.
• A VM runs with a guest OS, which is often different from the host OS, that
manages the resources in the physical machine, where the VM is
implemented.
• The purpose of using VMs is to consolidate multiple functionalities on the
same server. This will greatly enhance server utilization and application
flexibility.
 VMs can be colonized (replicated) in multiple servers for the purpose of promoting
distributed parallelism, fault tolerance, and disaster recovery.
• The size (number of nodes) of a virtual cluster can grow or shrink
dynamically, similar to the way an overlay network varies in size in a peer-
to-peer (P2P) network.
• The failure of any physical nodes may disable some VMs installed on the
failing nodes. But the failure of VMs will not pull down the host system.
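The dynamic grow/shrink and failure-isolation properties above can be modeled with a small sketch (hypothetical VM and host names; a bookkeeping model, not a real cluster manager):

```python
class VirtualCluster:
    """Toy model: nodes join and leave dynamically, and a physical-host
    failure disables only the VMs on that host, not the whole cluster."""
    def __init__(self, name):
        self.name = name
        self.vms = {}                  # vm_id -> physical host it runs on

    def add_vm(self, vm_id, host):
        self.vms[vm_id] = host         # cluster grows

    def remove_vm(self, vm_id):
        self.vms.pop(vm_id, None)      # cluster shrinks

    def host_failed(self, host):
        """Drop VMs on the failed physical node; the rest keep running."""
        lost = [v for v, h in self.vms.items() if h == host]
        for v in lost:
            del self.vms[v]
        return lost

vc = VirtualCluster("vc1")
vc.add_vm("vm1", "hostA")
vc.add_vm("vm2", "hostA")
vc.add_vm("vm3", "hostB")
lost = vc.host_failed("hostA")   # hostA fails: only vm1 and vm2 go down
```

After the simulated failure, `vm3` on `hostB` survives, matching the observation that a physical-node failure disables only the VMs it hosted.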
 Since system virtualization has been widely used, it is necessary to effectively
manage VMs running on a mass of physical computing nodes (also called virtual
clusters) and consequently build a high-performance virtualized computing
environment.
 This involves virtual cluster deployment, monitoring and management over large-
scale clusters, as well as resource scheduling, load balancing, server consolidation,
fault tolerance, and other techniques.
 The different node colors in the above figure refer to different virtual clusters. In a virtual cluster system, it is quite important to store the large number of VM images efficiently.
 The following figure shows the concept of a virtual cluster based on application
partitioning or customization.
 The different colors in the figure represent the nodes in different virtual clusters. As a
large number of VM images might be present, the most important thing is to determine
how to store those images in the system efficiently.
 There are common installations for most users or applications, such as operating systems
or user-level programming libraries.
 These software packages can be preinstalled as templates (called template VMs). With these templates, users can build their own customized VMs quickly.
 New OS instances can be copied from the template VM. User-specific components such as programming libraries and applications can be installed to those instances.
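Template-based provisioning as described above can be sketched as follows (the template name and package lists are hypothetical; real systems clone disk images, e.g., via copy-on-write, rather than dictionaries):

```python
import copy

TEMPLATE_VMS = {
    # A hypothetical preinstalled template: OS plus common libraries.
    "ubuntu-base": {"os": "Ubuntu 22.04", "packages": ["libc", "python3"]},
}

def instantiate(template_name, user_packages=()):
    """Copy a new OS instance from a template VM, then layer the
    user-specific libraries and applications on top of it."""
    instance = copy.deepcopy(TEMPLATE_VMS[template_name])
    instance["packages"] += list(user_packages)
    return instance

vm = instantiate("ubuntu-base", ["numpy", "my-app"])
```

The deep copy keeps the template pristine, so any number of user-customized instances can be derived from one stored image.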

The concept of a virtual cluster based on application partitioning

3.6.1.1. Fast Deployment and Effective Scheduling -- self study


3.6.1.2. High-Performance Virtual Storage – self study
3.6.2. Migration of Memory, Files, and Network Resources—self study
