BCS601 Cloud Computing
Module-2
Example Scenarios
Traditional Computer: A single physical server running Windows, dedicated to one task,
such as hosting a database.
Virtualized Computer: A physical server running multiple VMs—one with Windows for
the database, another with Linux for a web server, and another for testing.
Benefits of Virtualization
Virtualization Layer
The virtualization layer is a software layer that abstracts physical hardware resources (CPU,
memory, storage, network, etc.) and presents them as virtual resources to applications and
operating systems. It acts as a bridge between the physical hardware and virtual instances,
ensuring proper allocation, isolation, and management of resources.
1. Instruction Emulation:
o The source ISA (e.g., MIPS) is emulated on the target ISA (e.g., x86) through
a software layer.
o The software layer interprets or translates the source instructions into target
machine instructions.
2. Virtual ISA (V-ISA):
o A virtual instruction set architecture acts as an abstraction, making it
possible for various source ISAs to execute on the same host machine by
translating and optimizing the instructions.
o A software layer, added to the compiler, facilitates this translation and
manages differences between ISAs.
1. Code Interpretation:
o Process: An interpreter program translates source instructions into host
(native) instructions one-by-one during execution.
o Characteristics:
▪ Simple to implement.
▪ High overhead due to the need to process each instruction individually.
o Performance: Slow, as each source instruction may require tens or even
hundreds of native instructions to execute.
2. Dynamic Binary Translation:
o Process:
▪ Instead of interpreting instructions one-by-one, this method translates
blocks of source instructions (basic blocks, traces, or superblocks)
into target instructions.
▪ The translated blocks are cached, so subsequent executions do not need
re-translation.
o Characteristics:
▪ Faster than interpretation due to caching and reuse of translated
instructions.
▪ Optimization opportunities arise from analyzing multiple instructions
in a block.
o Performance: Significantly better than interpretation but requires more
complex implementation.
3. Binary Translation and Optimization:
o Purpose: Enhance performance and reduce the overhead of translation.
o Methods:
▪ Static Binary Translation: Translates the entire binary code before
execution, which avoids runtime translation but can miss opportunities
for runtime optimizations.
▪ Dynamic Binary Translation: Translates instructions at runtime,
enabling better adaptability to runtime conditions.
▪ Dynamic Optimizations: Includes reordering, inlining, and loop
unrolling to improve the efficiency of translated code.
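To make the contrast between per-instruction interpretation and cached block translation concrete, here is a minimal sketch in Python. The three-instruction "source ISA", the register names, and the function names are invented for illustration; a real translator works on machine code and emits native instructions rather than Python closures.

# Minimal sketch of interpretation vs. dynamic binary translation (hypothetical ISA).
def interpret(program, regs):
    """Interpretation: decode and execute one source instruction per step."""
    for op, dst, src in program:
        if op == "ADD":
            regs[dst] += regs[src]
        elif op == "MOV":
            regs[dst] = regs[src]
        elif op == "MUL":
            regs[dst] *= regs[src]
    return regs

translation_cache = {}

def translate_block(block):
    """Translate a whole basic block once and cache the result for reuse."""
    key = tuple(block)
    if key not in translation_cache:
        def translated(regs, _block=block):
            for op, dst, src in _block:      # body produced once, reused many times
                if op == "ADD":
                    regs[dst] += regs[src]
                elif op == "MOV":
                    regs[dst] = regs[src]
                elif op == "MUL":
                    regs[dst] *= regs[src]
            return regs
        translation_cache[key] = translated
    return translation_cache[key]

block = [("MOV", "r1", "r0"), ("ADD", "r1", "r2"), ("MUL", "r3", "r1")]
regs = {"r0": 2, "r1": 0, "r2": 3, "r3": 4}
print(interpret(block, dict(regs)))            # per-instruction dispatch on every run
print(translate_block(block)(dict(regs)))      # cached translation, reused on later runs

The point of the cache is that the decode-and-translate work for a block is paid once; later executions of the same block reuse the stored translation, which is where dynamic binary translation gains over pure interpretation.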
ISA-level virtualization via instruction set emulation opens immense possibilities for running
diverse workloads across platforms, supporting legacy systems, and enabling hardware
independence. The shift from simple interpretation to more advanced techniques like
dynamic binary translation and optimizations has significantly improved its performance and
applicability, making it a key enabler for cross-platform software execution.
1. Performance Overhead:
o Emulating one ISA on another is inherently slower due to instruction-by-instruction interpretation or translation.
o Dynamic binary translation improves performance but still adds runtime
overhead.
2. Complexity:
o Implementing dynamic binary translation and optimizations requires advanced
techniques and significant development effort.
3. Scalability:
o Supporting highly diverse ISAs can become challenging, especially when
optimizing performance for multiple architectures.
1. Bare-Metal Hypervisors:
o A hypervisor (Type 1) operates directly on the hardware without requiring an
underlying host operating system.
o It creates and manages virtual hardware environments for virtual machines.
2. Resource Virtualization:
o The hypervisor converts physical resources such as the CPU, memory, storage, and network interfaces into virtual resources that are allocated to the virtual machines.
1. High Performance:
o Since the hypervisor runs directly on hardware, it minimizes overhead and
provides near-native performance for VMs.
2. Scalability:
o Easily supports multiple VMs, enabling efficient use of physical server
resources.
3. Fault Isolation:
o Problems in one VM (e.g., OS crashes or software bugs) do not impact other
VMs or the host system.
4. Versatility:
o Supports running different operating systems or environments on the same
physical hardware.
Operating System (OS) level virtualization is a type of virtualization that operates at the
OS kernel layer, creating isolated environments called containers or virtual environments
within a single instance of an operating system. This approach allows multiple isolated user
spaces to run on the same physical hardware while sharing the same operating system kernel.
1. Single OS Kernel:
o All containers share the same underlying OS kernel, eliminating the need for
separate kernels for each environment.
o More lightweight compared to traditional hardware-level virtualization since
there's no need to emulate hardware.
2. Isolated Environments (Containers):
o Containers behave like independent servers, with their own libraries, binaries,
and configuration files.
o Processes running inside one container are isolated from processes in other
containers.
3. Efficient Resource Utilization:
o OS-level virtualization efficiently shares hardware resources like CPU,
memory, and storage across containers.
o Reduces overhead compared to full virtualization, as there is no need for a
hypervisor or virtual hardware.
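As a rough, hypothetical illustration of the container idea (not a real container runtime such as OpenVZ or LXC), the Python sketch below runs a child process on a Linux host with its own working directory, a minimal environment, and CPU/memory limits, while the process still shares the host kernel. The directory, limits, and function names are invented for the example; real OS-level virtualization also isolates process, user, network, and IPC namespaces.

# Minimal sketch: a "container-like" child process that shares the host kernel
# but gets its own working directory, environment, and CPU/memory limits.
import os
import resource
import subprocess

def run_confined(cmd, root_dir, cpu_seconds=5, mem_bytes=256 * 1024 * 1024):
    def apply_limits():
        # Limits are applied in the child just before exec().
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    os.makedirs(root_dir, exist_ok=True)
    return subprocess.run(
        cmd,
        cwd=root_dir,                   # private view of the file tree (by convention only)
        env={"PATH": "/usr/bin:/bin"},  # minimal, container-like environment
        preexec_fn=apply_limits,
        capture_output=True,
        text=True,
    )

result = run_confined(["id"], "/tmp/toy_container")
print(result.stdout)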
1. Single OS Limitation: Since all containers share the same kernel, they must use the
same operating system. For example, you cannot run a Windows container on a Linux
host.
2. Weaker Isolation: Compared to hardware-level virtualization, OS-level virtualization
provides less isolation. If the kernel is compromised, all containers are at risk.
3. Compatibility Issues: Applications that require specific kernel modules or features
not supported by the shared kernel may face compatibility challenges.
1. API Hooks:
o Applications typically interact with the operating system via APIs exported by
user-level libraries.
o Library-level virtualization works by intercepting API calls and redirecting
them to virtualized implementations.
2. Controlled Communication:
o Virtualization happens by managing the communication link between the
application and the underlying system.
o This approach avoids direct interaction with the operating system and replaces
it with controlled, virtualized responses.
3. Application-Specific Virtualization:
o Focused on enabling specific features or compatibility, such as supporting
applications from one environment on another.
• Applications are written to use standard library calls for their functionality, such as
file access, networking, or graphics.
• Library-level virtualization intercepts these calls (using API hooks) and replaces the
original functionality with emulated or redirected behavior.
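A tiny illustration of API hooking, using Python's built-in open as the intercepted "library call": requests for paths under a chosen prefix are silently redirected into a private directory. The prefix, directory, and function names are invented; real library-level virtualization such as WINE intercepts native APIs (e.g., Win32 calls) rather than Python built-ins.

# Toy API hook: intercept a library call and redirect it to a virtualized target.
import builtins
import os

_real_open = builtins.open
VIRTUAL_ROOT = "/tmp/virtual_fs"          # hypothetical redirect target

def hooked_open(path, mode="r", *args, **kwargs):
    """Redirect file access under /data into the virtual root."""
    if isinstance(path, str) and path.startswith("/data/"):
        os.makedirs(VIRTUAL_ROOT, exist_ok=True)
        path = os.path.join(VIRTUAL_ROOT, path.lstrip("/").replace("/", "_"))
    return _real_open(path, mode, *args, **kwargs)

builtins.open = hooked_open               # install the hook

# The application keeps calling open("/data/...") as usual; the hook decides
# where the call actually goes, which is the essence of library-level interception.
with open("/data/config.txt", "w") as f:
    f.write("redirected\n")
print(os.listdir(VIRTUAL_ROOT))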
3. Application Streaming
1. Cross-Platform Compatibility:
o Applications written for an abstract VM (e.g., JVM, CLR) can run on any
system with the corresponding VM implementation.
2. Improved Security:
o Applications are isolated from the host OS and other applications, reducing the
risk of system compromise or interference.
3. Simplified Deployment:
o Applications can be distributed as self-contained packages, eliminating the
need for complex installation procedures or OS-level dependencies.
4. Resource Efficiency:
o Compared to hardware- or OS-level virtualization, application-level
virtualization has lower overhead as it focuses only on individual processes.
5. Portability:
o Virtualized applications can be easily moved between systems or platforms.
1. Performance Overhead:
o Running applications in a virtualized environment may introduce some latency
compared to native execution.
2. Limited Scope:
o Unlike OS- or hardware-level virtualization, application-level virtualization
cannot provide a full OS environment or support multiple users.
3. Compatibility Challenges:
o Not all applications can be easily virtualized, especially those with tight
integration with the underlying OS or hardware.
In the above table, the column headings correspond to four technical merits. “Higher
Performance” and “Application Flexibility” are self-explanatory. “Implementation
Complexity” implies the cost to implement that particular virtualization level. “Application
Isolation” refers to the effort required to isolate resources committed to different VMs.
The number of X’s in the table cells reflects the advantage points of each implementation
level. Five X’s implies the best case and one X implies the worst case. Overall, hardware and
OS support will yield the highest performance. However, the hardware and application levels
are also the most expensive to implement. User isolation is the most difficult to achieve. ISA
implementation offers the best application flexibility.
Efficiency is crucial for VMMs, as slow emulators or interpreters are unsuitable for real
machines. To ensure performance, most virtual processor instructions should execute directly
on physical hardware without VMM intervention.
Key Observations:
• VMware Workstation supports a wide range of guest operating systems and uses full
virtualization.
• VMware ESX Server eliminates a host OS, running directly on hardware with para-
virtualization.
• Xen supports diverse host OSs and uses a hypervisor-based architecture.
• KVM runs exclusively on Linux hosts and supports para-virtualization for multiple
architectures.
Cloud computing, enabled by VM technology, shifts the cost and responsibility of managing
computational centers to third parties, resembling the role of banks. While transformative, it
faces two significant challenges:
To address these challenges and enhance cloud computing efficiency, significant research and
development are needed.
VEs share the same OS kernel but appear as independent servers to users, each with its own
processes, file system, user accounts, network settings, and more. This approach, known as
single-OS image virtualization, is an efficient alternative to hardware-level virtualization.
Figure 3.3 illustrates operating systems virtualization from the point of view of a machine
stack.
1. Efficiency and Scalability: OS-level VMs have low startup/shutdown costs, minimal
resource requirements, and high scalability.
2. State Synchronization: VMs can synchronize state changes with the host
environment when needed.
In cloud computing, these features address the slow initialization of hardware-level VMs and
their inability to account for the current application state.
The primary disadvantage of OS-level virtualization is that all containers on a single host
share one kernel and must therefore belong to the same operating system family. For example, a Linux-based container
cannot run a Windows OS. This limitation challenges its usability in cloud computing, where
users may prefer different operating systems.
1. Duplicating resources for each VM: This incurs high resource costs and overhead.
2. Sharing most resources with the host and creating private copies on demand:
This is more efficient and commonly used.
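The second approach is essentially copy-on-write. A minimal Python model of the idea is sketched below, with invented class and resource names: every container reads shared data from a common base, and a private copy of an item is created only when that container first writes to it.

# Minimal copy-on-write model: resources are shared until a container writes.
import copy

class CowResources:
    def __init__(self, shared_base):
        self._base = shared_base      # shared, read-only view (e.g., host files/config)
        self._private = {}            # per-container copies, created on demand

    def read(self, name):
        return self._private.get(name, self._base[name])

    def write(self, name, value):
        if name not in self._private:                 # first write triggers the copy
            self._private[name] = copy.deepcopy(self._base[name])
        self._private[name] = value

base = {"/etc/hosts": "127.0.0.1 localhost", "/etc/issue": "Linux"}
c1, c2 = CowResources(base), CowResources(base)
c1.write("/etc/issue", "Container 1")
print(c1.read("/etc/issue"), "|", c2.read("/etc/issue"))  # only c1 sees its change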
Due to its limitations and overhead in some scenarios, OS-level virtualization is often
considered a secondary choice compared to hardware-assisted virtualization.
• Most Linux platforms are not tied to a specific kernel, enabling a host to run multiple
VMs simultaneously on the same hardware.
• Linux-based tools, such as Linux vServer and OpenVZ, support running applications
from other platforms through virtualization.
• On Windows, FVM is a specific tool developed for OS-level virtualization on the
Windows NT platform.
Key Features:
1. Isolation:
o Each VPS has its own files, user accounts, process tree, virtual network,
virtual devices, and interprocess communication (IPC) mechanisms.
2. Resource Management:
o Disk Allocation: Two levels:
▪ First level: The OpenVZ server administrator assigns disk space limits
to each VM.
▪ Second level: VM administrators manage disk quotas for users and
groups.
o CPU Scheduling:
▪ First level: OpenVZ's scheduler allocates time slices based on virtual
CPU priority and limits.
▪ Second level: Uses the standard Linux CPU scheduler.
o Resource Control: OpenVZ has ~20 parameters to control VM resource
usage.
3. Checkpointing and Live Migration:
o Allows saving the complete state of a VM to a disk file, transferring it to
another machine, and restoring it there.
o The process takes only a few seconds, although network connection re-
establishment causes minor delays.
Advantages:
Challenges:
1. WABI:
o Middleware that translates Windows system calls into Solaris system calls,
allowing Windows applications to run on Solaris systems.
2. Lxrun:
o A system call emulator enabling Linux applications designed for x86 hosts to
run on UNIX systems.
3. WINE:
o Provides library support to virtualize x86 processors, enabling Windows
applications to run on UNIX-based systems.
4. Visual MainWin:
o A compiler support system that allows developers to use Visual Studio to
create Windows applications capable of running on some UNIX hosts.
5. vCUDA:
o A virtualization solution for CUDA, enabling applications requiring GPU
acceleration to utilize GPU resources remotely. (Discussed in detail in
Example 3.2.)
Key Benefits:
Challenges:
1. Purpose:
o Virtualizes the CUDA library for guest OSes, enabling CUDA applications to
execute GPU-based tasks indirectly through the host OS.
2. Architecture:
o Follows a client-server model with three main components:
▪ vCUDA Library:
▪ Resides in the guest OS as a substitute for the standard CUDA
library.
▪ Intercepts and redirects API calls to the host OS.
▪ Manages virtual GPUs (vGPUs).
▪ Virtual GPU (vGPU):
▪ Abstracts GPU hardware, provides a uniform interface, and
manages device memory allocation.
▪ Tracks and stores CUDA API flow.
▪ vCUDA Stub:
▪ Resides in the host OS.
▪ Receives and interprets requests from the guest OS.
▪ Creates execution contexts for CUDA API calls and manages
the physical GPU resources.
3. Functionality of vGPU:
o Abstracts the GPU structure, giving applications a consistent view of
hardware.
o Handles memory allocation by mapping virtual addresses in the guest OS to
real device memory in the host OS.
o Stores the flow of CUDA API calls for proper execution.
4. Workflow:
o CUDA applications on the guest OS send API calls to the vCUDA library.
o The vCUDA library redirects these calls to the vCUDA stub on the host OS.
o The vCUDA stub processes the requests, executes them on the physical GPU,
and returns results to the guest OS.
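The client-server flow can be pictured with a short, hypothetical sketch: a guest-side stub serializes a "CUDA-like" call, sends it over a socket to a host-side stub, which executes it and returns the result. The call name, wire format, and handler are invented for illustration; the real vCUDA library forwards genuine CUDA API calls and manages vGPU state.

# Hypothetical sketch of vCUDA-style API forwarding: the guest-side library
# redirects calls to a host-side stub, which executes them and returns results.
import json
import socket
import threading

def host_stub(conn):
    """Host-side stub: receive a request, 'execute' it, return the result."""
    handlers = {"vector_add": lambda a, b: [x + y for x, y in zip(a, b)]}
    req = json.loads(conn.recv(65536).decode())
    result = handlers[req["api"]](*req["args"])     # stands in for a real GPU launch
    conn.sendall(json.dumps({"result": result}).encode())

def guest_call(conn, api, *args):
    """Guest-side vCUDA-like library: serialize the API call and forward it."""
    conn.sendall(json.dumps({"api": api, "args": args}).encode())
    return json.loads(conn.recv(65536).decode())["result"]

guest_end, host_end = socket.socketpair()
threading.Thread(target=host_stub, args=(host_end,), daemon=True).start()
print(guest_call(guest_end, "vector_add", [1, 2, 3], [10, 20, 30]))   # [11, 22, 33]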
Benefits of vCUDA:
Challenges:
• Relies heavily on the client-server architecture and the efficiency of API call
redirection.
• Performance may depend on the complexity of GPU tasks and overhead of
virtualization.
There are three typical classes of VM architectures, differentiated by the placement of the
virtualization layer in the system stack. Virtualization transforms a machine’s architecture by
inserting a virtualization layer between the hardware and the operating system. This layer
converts real hardware into virtual hardware, enabling different operating systems (e.g.,
Linux and Windows) to run simultaneously on the same physical machine.
Classes of VM Architectures:
Key Points:
• The virtualization layer is crucial for translating real hardware into virtual hardware.
• These architectures enable flexibility in running multiple operating systems on the
same machine.
• Hypervisors (or VMMs) and other approaches vary in performance, complexity, and
implementation.
The hypervisor (or Virtual Machine Monitor, VMM) enables hardware-level virtualization by
acting as an intermediate layer between physical hardware (e.g., CPU, memory, disk, network
interfaces) and the operating systems (OS). It facilitates the creation of virtual resources that
guest OSes and applications can utilize.
1. Micro-Kernel Hypervisor:
o Only includes essential and unchanging functionalities, such as physical
memory management and processor scheduling.
o Device drivers and other changeable components are kept outside the
hypervisor.
o Examples: Microsoft Hyper-V.
o Advantages: Smaller code size, reduced complexity, and easier
maintainability.
2. Monolithic Hypervisor:
o Integrates all functionalities, including device drivers, within the hypervisor
itself.
o Examples: VMware ESX for server virtualization.
o Advantages: Comprehensive functionality but with a larger codebase and
potential complexity.
• Supports virtualized access to physical hardware through hypercalls for guest OSes
and applications.
• Converts physical devices into virtual resources for use by virtual machines (VMs).
• Plays a critical role in resource management and scheduling for multiple VMs.
These architectures allow efficient use of physical hardware while enabling multiple OSes to
run simultaneously.
A key feature of Xen is Domain 0 (Dom0), a privileged guest OS that manages hardware
access and resource allocation for other guest domains (Domain U). Since Dom0 controls the
entire system, its security is critical. If compromised, an attacker can control all virtual
machines.
Xen allows users to manage VMs flexibly: creating, copying, migrating, and rolling back
instances. However, this flexibility also introduces security risks, as VMs can revert to
previous states, potentially reintroducing vulnerabilities. Unlike traditional machines with a
linear execution timeline, Xen VMs form a tree-like execution state, enabling multiple
instances and rollbacks, which benefits system management but also creates security
challenges.
Note: A key feature of Xen is Domain 0 (Dom0), a privileged virtual machine responsible
for managing hardware, I/O operations, and other guest VMs (Domain U). Dom0 is the first
OS to load and has direct hardware access, allowing it to allocate resources and manage
devices for unprivileged guest domains.
Full Virtualization
Host-Based Virtualization
While host-based virtualization offers flexibility, it is generally less efficient than full
virtualization with a VMM.
Challenges of Para-Virtualization
1. Compatibility & Portability Issues: Since para-virtualization modifies the guest OS,
supporting unmodified OSes becomes difficult.
2. High Maintenance Costs: OS kernel modifications require ongoing updates and
support.
3. Variable Performance Gains: The performance improvement depends on the
workload and system architecture.
Para-Virtualization Architecture
• Guest OS Modification: The OS kernel is modified, but user applications may also
need changes.
• Hypercalls: Privileged instructions that would normally run at Ring 0 are replaced
with hypercalls to the hypervisor.
• Intelligent Compiler: A specialized compiler assists in identifying and replacing
nonvirtualizable instructions with hypercalls, optimizing performance.
• Improved Efficiency: Compared to full virtualization, para-virtualization
significantly reduces overhead, making VM execution closer to native
performance.
• Limitation: Since the guest OS is modified, it cannot run directly on physical
hardware without a hypervisor.
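The idea of replacing a privileged operation with a hypercall can be modeled abstractly. In the illustrative Python sketch below (all names invented), the para-virtualized "guest kernel" never updates page-table state itself; it asks the "hypervisor" to validate and apply the change, which is roughly what a hypercall does.

# Illustrative model of para-virtualization: the guest replaces a privileged
# operation (installing a page-table mapping) with a hypercall that the
# hypervisor validates and applies. All names are invented for the sketch.
class Hypervisor:
    def __init__(self, machine_pages):
        self.machine_pages = set(machine_pages)   # pages this guest may use
        self.page_table = {}

    def hypercall_map_page(self, guest_virtual, machine_page):
        # The hypervisor, not the guest, performs the privileged update,
        # after checking that the guest owns the target machine page.
        if machine_page not in self.machine_pages:
            raise PermissionError("guest does not own this machine page")
        self.page_table[guest_virtual] = machine_page

class ParaVirtGuestKernel:
    def __init__(self, hypervisor):
        self.hv = hypervisor

    def map_page(self, guest_virtual, machine_page):
        # In a fully virtualized guest this would be a privileged instruction;
        # a para-virtualized kernel issues a hypercall instead.
        self.hv.hypercall_map_page(guest_virtual, machine_page)

hv = Hypervisor(machine_pages={0x2000, 0x3000})
guest = ParaVirtGuestKernel(hv)
guest.map_page(0x1000, 0x2000)
print(hv.page_table)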
Due to the inefficiency of binary translation, many virtualization solutions, including Xen,
KVM, and VMware ESX, use para-virtualization.
Example 3.3 VMware ESX Server for Para-Virtualization
VMware pioneered the virtualization market, providing solutions for desktops, servers, and
data centers. VMware ESX is a bare-metal hypervisor designed for x86 symmetric
multiprocessing (SMP) servers, enabling efficient virtualization by directly managing
hardware resources.
Para-Virtualization in ESX
• The VMkernel interacts directly with the hardware, bypassing the need for a host OS.
• Para-virtualized drivers (e.g., VMXNET for networking, PVSCSI for disk I/O)
improve performance.
• Provides better efficiency than full virtualization while supporting unmodified guest
OSes via hardware-assisted virtualization (Intel VT, AMD-V).
1. VMware Workstation
o A host-based virtualization software suite for x86 and x86-64 systems.
o Runs multiple VMs simultaneously on a host OS.
2. Xen Hypervisor
o Works on IA-32, x86-64, Itanium, and PowerPC 970 architectures.
o Modifies Linux to function as a hypervisor, controlling guest OSes.
3. KVM (Kernel-Based Virtual Machine)
o Integrated into the Linux kernel as a virtualization infrastructure.
Example 3.4: Hardware Support for Virtualization in the Intel x86 Processor
Figure 3.10 provides an overview of Intel’s full virtualization techniques. For processor
virtualization, Intel offers the VT-x or VT-i technique. VT-x adds a privileged mode (VMX
Root Mode) and some instructions to processors. This enhancement traps all sensitive
instructions in the VMM automatically. For memory virtualization, Intel offers EPT (extended
page tables), which maps guest physical addresses to machine physical addresses in hardware
to improve performance. For I/O virtualization, Intel implements VT-d and VT-c.
1. Privileged Instructions
o Execute only in privileged mode.
o If executed in user mode, they trigger a trap.
2. Control-Sensitive Instructions
o Modify system resources (e.g., changing memory configuration).
3. Behavior-Sensitive Instructions
o Change behavior based on system configuration (e.g., memory load/store
operations).
• RISC CPUs (e.g., PowerPC, SPARC) are naturally virtualizable since all sensitive
instructions are privileged.
• x86 CPUs were not originally designed for virtualization, as some sensitive
instructions (e.g., SGDT, SMSW) are not privileged.
o These instructions bypass the VMM, making virtualization difficult without
software-based techniques like binary translation.
Performance Considerations
• High efficiency expected, but switching between hypervisor and guest OS causes
overhead.
• Hybrid Approach (used by VMware):
o Offloads some tasks to hardware while keeping others in software.
• Combining Para-Virtualization with Hardware-Assisted Virtualization further boosts
performance.
• Memory Management Unit (MMU) and Translation Lookaside Buffer (TLB) help
optimize performance.
• Guest OS controls virtual-to-physical mapping but cannot directly access machine
memory.
• VMM (Hypervisor) handles actual memory allocation to prevent conflicts.
4. VMware's Approach
• Software-based shadow page tables were inefficient and caused high performance
overhead.
• Frequent memory lookups and context switches slowed down virtualized
environments.
• Hardware-assisted memory virtualization that eliminates the need for shadow page
tables.
• Works with Virtual Processor ID (VPID) to optimize Translation Lookaside Buffer
(TLB) usage.
• Reduces memory lookup time and improves performance significantly.
• Intel increased the size of EPT TLB to store more translations and reduce memory
accesses.
• This dramatically improves memory access speed and virtualization efficiency.
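The two-level lookup that EPT performs in hardware can be modeled in a few lines. The simplified Python sketch below uses invented page numbers: the guest page table maps guest-virtual pages to guest-physical pages, a hypervisor-owned table maps guest-physical pages to machine pages, and a tiny "TLB" caches the combined result, which is the translation that VPID tagging helps keep valid across VM switches.

# Simplified model of two-level memory translation (guest page table + EPT-like table).
GUEST_PT = {0x1: 0x10, 0x2: 0x11}      # guest-virtual page -> guest-physical page
EPT      = {0x10: 0x7A, 0x11: 0x7B}    # guest-physical page -> machine page
tlb = {}                                # caches the combined translation

def translate(guest_virtual_page):
    if guest_virtual_page in tlb:        # TLB hit: no table walks needed
        return tlb[guest_virtual_page]
    guest_physical = GUEST_PT[guest_virtual_page]   # first-level walk (guest-controlled)
    machine_page = EPT[guest_physical]              # second-level walk (hypervisor-controlled)
    tlb[guest_virtual_page] = machine_page
    return machine_page

print(hex(translate(0x1)))   # walks both tables, then caches the result
print(hex(translate(0x1)))   # served from the TLB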
• Uses a frontend driver (inside the guest OS) and a backend driver (inside the
hypervisor or privileged VM).
• How it works:
o Frontend driver (Domain U): Handles I/O requests from the guest OS.
o Backend driver (Domain 0): Manages real I/O devices and routes data
between VMs.
o Communication occurs via a shared memory block.
• Advantages:
o Better performance than full device emulation.
• Disadvantages:
o Higher CPU overhead due to the need for additional processing.
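A toy Python model of the split-driver idea is sketched below: the frontend places read requests on a shared queue, and the backend (standing in for Domain 0) services them against a stand-in device and posts responses. The queues stand in for the shared memory ring; the names and the single-threaded polling are simplifications, not Xen's actual ring protocol.

# Toy model of split drivers: frontend (guest) and backend (Dom0)
# exchange I/O requests and responses through a shared ring (here, two queues).
from collections import deque

request_ring, response_ring = deque(), deque()
fake_disk = {0: b"block zero", 1: b"block one"}   # stands in for the real device

def frontend_read(block_number):
    """Frontend driver in Domain U: enqueue the request, then pick up the reply."""
    request_ring.append({"op": "read", "block": block_number})
    backend_poll()                      # in reality Dom0 runs independently
    return response_ring.popleft()

def backend_poll():
    """Backend driver in Domain 0: service queued requests on the real device."""
    while request_ring:
        req = request_ring.popleft()
        if req["op"] == "read":
            response_ring.append(fake_disk[req["block"]])

print(frontend_read(1))     # b'block one'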
• Intel VT-d technology supports I/O DMA remapping and device interrupt remapping.
• Helps unmodified, specialized, or virtualization-aware guest OSes run efficiently.
Summary
I/O virtualization continues to evolve, with hardware-assisted methods like VT-d and SV-
IO improving efficiency and reducing overhead.
Disadvantages
3. Dynamic Heterogeneity
• The integration of different types of cores (fat CPU cores & thin GPU cores) on the
same chip makes resource management more complex.
• As transistor reliability decreases and complexity increases, system designers must
adapt scheduling techniques dynamically.
Conclusion:
Virtual clusters provide flexibility, efficient resource usage, and better fault tolerance.
However, they require careful management for fast deployment, effective load balancing, and
optimized storage. Strategies like automated configuration and optimized migration help
improve performance while reducing overhead.
In a mixed-node cluster, virtual machines (VMs) typically run on physical hosts, but if a host
fails, its VM role can be taken over by another VM on a different host. This enables flexible
failover compared to traditional physical-to-physical failover. However, if a host fails, its
VMs also fail, which can be mitigated through live VM migration.
1. Guest-Based Manager: The cluster manager runs inside the guest OS (e.g.,
OpenMosix, Sun’s Oasis).
2. Host-Based Manager: The cluster manager runs on the host OS, supervising VMs
(e.g., VMware HA).
3. Independent Manager: Both guest and host have separate cluster managers, increasing
complexity.
4. Integrated Cluster Management: A unified manager controls both virtual and physical
resources.
1. Start Migration: Identify the VM and destination host, often triggered by load
balancing or server consolidation strategies.
2. Memory Transfer: The VM’s memory is copied to the destination host in multiple
rounds, ensuring minimal disruption.
3. Suspend and Final Copy: The VM pauses briefly to transfer the last memory portion,
CPU, and network states.
4. Commit and Activate: The destination host loads the VM state and resumes execution.
5. Redirect Network & Cleanup: The network redirects to the new VM, and the old VM
is removed.
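Steps 2 and 3 amount to an iterative pre-copy loop. The simplified Python sketch below uses an invented page set and a random model of page dirtying: each round copies only the pages dirtied since the previous round, and the VM is suspended only for the final, small copy.

# Simplified model of iterative pre-copy live migration.
import random

def live_migrate(all_pages, max_rounds=5, stop_threshold=8):
    """Copy memory in rounds; suspend the VM only for the final small set."""
    dirty = set(all_pages)                       # round 1 copies everything
    for round_number in range(1, max_rounds + 1):
        copied = len(dirty)
        # While this round's pages were being copied, the still-running VM
        # dirtied some pages again; model that with a random subset.
        dirty = set(random.sample(sorted(all_pages), k=max(1, copied // 4)))
        print(f"round {round_number}: copied {copied} pages, {len(dirty)} re-dirtied")
        if len(dirty) <= stop_threshold:
            break
    print(f"suspend VM, copy final {len(dirty)} pages plus CPU/network state, resume on target")

live_migrate(all_pages=range(1024))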
Performance Effects:
• The first memory copy takes 63 seconds, reducing network throughput from 870 Mbit/s to
765 Mbit/s.
• Additional memory copy rounds further reduce throughput to 694 Mbit/s over 9.8 seconds.
• The total downtime is only 165 milliseconds, ensuring minimal service disruption.
Live VM migration enhances cloud computing by enabling seamless workload balancing and
minimizing downtime during host failures. Platforms like VMware and Xen support these
migrations, allowing multiple VMs to run efficiently on a shared physical infrastructure.
Figure: Live migration of a VM between two Xen-enabled hosts. The VM is initially active on
Host A. Stage 1 (Reservation): initialize a container on the target host. Stage 2 (Iterative
pre-copy): enable shadow paging and copy dirty pages in successive rounds; overhead is due to
copying. Stage 3: suspend the VM on Host A (VM out of service) and copy the remaining state.
Stage 4: Commitment.
Shared clusters reduce costs and improve resource utilization. When migrating a system to a
new physical node, key factors include memory migration, file system migration, and
network migration.
1. Memory Migration
• Precopy Approach:
o Transfers all memory pages first, then iteratively copies only modified pages.
o Reduces downtime but increases total migration time.
• Checkpoint/Recovery & Trace/Replay (CR/TR-Motion):
o Transfers execution logs instead of dirty pages, minimizing migration time.
o Limited by differences in source and target system performance.
• Postcopy Approach:
o Transfers memory pages once but has higher downtime due to fetch delays.
• Memory Compression:
o Uses spare CPU resources to compress memory pages before transfer, reducing data size.
3. Network Migration
Key Takeaways
Example 3.8 Live Migration of VMs between Two Xen-Enabled Hosts
What is Live Migration?
Live migration is the process of moving a running Virtual Machine (VM) from one physical
machine to another without stopping its operations. This means users experience little to no
downtime while the VM is being transferred.
Trade-offs in Migration
• The compression algorithm must be fast and effective for different types of memory
data.
• Using a single compression method for all memory pages is not efficient because
different memory types require different strategies.
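One way to act on that observation is to choose an encoding per page rather than one method for all pages. The Python sketch below is purely illustrative (the threshold and method names are invented): all-zero pages are recorded without payload, compressible pages use a fast general-purpose compressor, and pages that do not compress well are sent as-is.

# Hypothetical per-page compression chooser for migration traffic.
import os
import zlib

PAGE_SIZE = 4096

def encode_page(page: bytes):
    """Choose a cheap encoding per page instead of one method for all pages."""
    if page == b"\x00" * PAGE_SIZE:
        return ("zero", b"")                        # zero pages need no payload
    compressed = zlib.compress(page, 1)             # fast setting: speed matters in migration
    if len(compressed) < len(page) * 0.7:           # invented threshold for the sketch
        return ("zlib", compressed)
    return ("raw", page)                            # incompressible: send unmodified

pages = [b"\x00" * PAGE_SIZE,                       # zero page
         b"abcd" * (PAGE_SIZE // 4),                # highly compressible page
         os.urandom(PAGE_SIZE)]                     # effectively incompressible page
for method, payload in map(encode_page, pages):
    print(method, len(payload))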
Conclusion
Live migration in Xen, enhanced by RDMA, allows seamless VM transfer with minimal
impact on performance. Techniques like precopying, dirty bitmaps, and compression improve
efficiency while ensuring smooth operation.
3.4.4 Dynamic Deployment of Virtual Clusters
What is Dynamic Deployment of Virtual Clusters?
Dynamic deployment allows virtual clusters (vClusters) to change in size, move, or adapt to
resource demands. This helps in efficient resource management, improved performance, and
cost savings in cloud computing.
Note:
• Virtual clusters help efficiently manage and allocate resources.
• COD and VIOLIN show that dynamic adaptation can significantly improve resource
utilization.
• Live migration allows VMs to be moved with minimal downtime.
• These techniques enable scalable, flexible, and cost-effective cloud computing
solutions.
3.5 VIRTUALIZATION FOR DATA-CENTER AUTOMATION
Data centers have expanded rapidly, with major IT companies like Google, Amazon, and
Microsoft investing heavily in automation. This automation dynamically allocates hardware,
software, and database resources to millions of users while ensuring cost-effectiveness and
Quality of Service (QoS). The rise of virtualization and cloud computing has driven this
transformation, with market growth from $1.04 billion in 2006 to a projected $3.2 billion by
2011.
Data-center workloads roughly fall into two categories:
• Chatty workloads (e.g., web video services) that have fluctuating demand.
• Noninteractive workloads (e.g., high-performance computing) that require consistent
resource allocation.
To meet peak demand, resources are often statically allocated, leading to underutilized
servers and wasted costs in hardware, space, and power. Server consolidation—particularly
virtualization-based consolidation—optimizes resource management by reducing physical
servers and improving hardware utilization.
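As a toy illustration of consolidation, the Python sketch below packs VMs onto as few hosts as possible using a first-fit-decreasing heuristic over a single CPU dimension. The demands, host capacity, and function names are invented; production consolidation also weighs memory, I/O, affinity, and migration cost.

# Toy server-consolidation sketch: first-fit decreasing on a single CPU dimension.
def consolidate(vm_demands, host_capacity):
    """Pack VM CPU demands onto as few hosts as possible (greedy heuristic)."""
    hosts = []                                    # each host is a list of placed demands
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)               # fits on an existing host
                break
        else:
            hosts.append([demand])                # open a new physical host
    return hosts

vms = [0.6, 0.3, 0.5, 0.2, 0.4, 0.1, 0.7]         # fractional CPU demand per VM
placement = consolidate(vms, host_capacity=1.0)
print(len(placement), "hosts:", placement)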
By leveraging virtualization and multicore processing (CMP), data centers can enhance
efficiency, but optimization in memory access, VM reassignment, and power management
remains a challenge.
• Virtual Disk Images (VDIs): Provides single-writer virtual disks, accessible from any
physical host in the cluster.
• Efficient Block Virtualization: Uses Xen’s block tap driver and tapdisk library for
handling block storage requests across VMs.
• Storage Appliance VM: Acts as an intermediary between client VMs and physical
hardware, facilitating live upgrades of block device drivers.
Parallax enhances flexibility, scalability, and ease of storage management in virtualized data
centers by integrating advanced block storage virtualization techniques.
To function as cloud providers, data centers must be virtualized using Virtual Infrastructure
(VI) managers and Cloud OSes. Table 3.6 outlines four such platforms:
1. Nimbus (Open-source)
2. Eucalyptus (Open-source)
3. OpenNebula (Open-source)
4. vSphere 4 (Proprietary, VMware)
• VM Creation & Management: All platforms support virtual machines and virtual
clusters for elastic cloud resources.
• Virtual Networking: Nimbus, Eucalyptus, and OpenNebula offer virtual network
support, enabling flexible communication between VMs.
Example 3.12 Eucalyptus for Virtual Networking of Private Cloud
Eucalyptus is an open-source software system designed for private cloud infrastructure and
Infrastructure as a Service (IaaS). It enables virtual networking and VM management,
but does not support virtual storage.
Eucalyptus provides a flexible and scalable solution for private cloud networking but lacks
some security and general-purpose cloud features.
Example 3.13 VMware vSphere 4 as a Commercial Cloud OS
Users must understand vCenter interfaces to manage applications effectively. More details
are available on the vSphere 4 website.
A Virtual Machine Monitor (VMM) creates and manages Virtual Machines (VMs) by acting
as a software layer between the operating system and hardware. It provides secure isolation
and manages access to hardware resources, making it the foundation of security in virtualized
environments. However, if a hacker compromises the VMM or management VM, the entire
system is at risk. Security issues also arise from random number reuse, which can lead to
encryption vulnerabilities and TCP hijacking attacks.
Intrusion Detection Systems (IDS) help identify unauthorized access. An IDS can be host-based
(HIDS), running on the system being monitored, or network-based (NIDS), monitoring network
traffic.
Garfinkel and Rosenblum proposed a VMM-based IDS that monitors guest VMs using a
policy framework and trace-based security enforcement. However, logs used for analysis can
be compromised if the operating system is attacked.
Besides IDS, honeypots and honeynets are used to detect attacks by tricking attackers into
interacting with fake systems. Honeypots can be physical or virtual, and in virtual honeypots,
the host OS and VMM must be protected to prevent attacks from guest VMs.
Example 3.14 EMC Establishment of Trusted Zones for Protection of Virtual Clusters Provided
to Multiple Tenants
EMC and VMware collaborated to develop security middleware for trust management in
distributed systems and private clouds. The concept of trusted zones was introduced to
enhance security in virtual clusters, where multiple applications and OS instances for
different tenants operate in separate virtual environments.
The trusted zones ensure secure isolation of VMs while allowing controlled interactions
among tenants, providers, and global communities. This approach strengthens security in
private cloud environments.