Unit 2
Virtualization is a way to use one computer as if it were many. Before virtualization, most
computers were only doing one job at a time, and a lot of their power was wasted. Virtualization
lets you run several virtual computers on one real computer, so you can use its full power and do
more tasks at once.
A virtual machine is like a complete computer system running in software on your hardware. It uses a share of your system's physical resources (CPU, RAM, disk space, etc.), but its space is completely separated from your main system. Two virtual machines cannot interfere with each other's working and functioning, nor can they access each other's space, which gives the illusion that each is running on totally different hardware.
Use of Virtualization in Cloud Computing
1. Scalability: Virtualization lets cloud users cost-effectively scale server resources when the workload increases, purchasing just the resources they require at the time of demand.
2. Backup and disaster recovery: Thanks to cloud virtualization, users may set up virtual resources as backups in a variety of places across the globe. This increases availability and uptime.
Using hypervisor software, which links directly to the hardware, enables the division of a single system into several virtual computers. It is a highly accessible service that provides a shared pool of resources that users can access conveniently.
Virtualization refers to the use of various techniques that allow one system to mimic or emulate
other systems or services. These techniques are applied in many areas of computing, providing
flexibility, efficiency, and scalability. The classification of these virtualization techniques helps
to better understand their characteristics and usage in different scenarios.
Virtualization Areas
Emulation Targets
Virtualization techniques are mainly applied to emulate different services or entities. Specifically, virtualization is used to emulate three primary areas:
• Execution environments: Virtualizing execution environments gives applications or operating systems a runtime that behaves like dedicated hardware.
• Storage: Virtualizing storage presents physical storage spread across devices as a single logical resource.
• Networks: Virtualizing networks allows different networks to operate as though they are independent of the underlying physical infrastructure.
Among these, execution virtualization is the oldest, most popular, and most developed area,
making it the focus of most virtualization studies. Therefore, this area requires a deeper
investigation and further categorization.
How is it Done?
1. Process-Level Virtualization
• Emulation: This technique mimics a different system entirely, allowing applications meant
for one system to run on another by imitating its architecture.
• High-Level Virtual Machines (VMs): These VMs operate at a high level and abstract the
details of the hardware. They are particularly suited for programming languages like Java,
where the code runs on a virtual machine instead of directly on hardware.
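The high-level VM idea above can be sketched as a tiny stack-based interpreter. This is a minimal illustration only; the instruction set (PUSH/ADD/MUL/RESULT) is invented here and is far simpler than real JVM bytecode.

```python
# A minimal sketch of a process-level (high-level) virtual machine: a tiny
# stack-based interpreter, conceptually similar to how the JVM executes
# bytecode instead of native instructions. The instruction set
# (PUSH/ADD/MUL/RESULT) is invented for illustration only.

def run(bytecode):
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "RESULT":
            return stack.pop()
    return None

# The same "program" runs unchanged on any host that has the interpreter:
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",), ("RESULT",)]
print(run(program))  # (2 + 3) * 4 = 20
```

Because the program targets the interpreter rather than any physical CPU, it is hardware-agnostic, which is exactly the property that makes this model suitable for languages like Java.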
2. System-Level Virtualization
• Partial Virtualization: Only part of the hardware resources are virtualized, meaning some
applications may need to run directly on the host.
Virtualization Models
The virtualization models describe the environment created by the virtualization techniques:
• Application-Level Virtualization
This model allows individual applications to run in a virtual environment separate from the
underlying system. This is commonly achieved through emulation.
• Operating System-Level Virtualization
This model allows multiple user environments to run on a single OS, managed by the multiprogramming technique. It ensures that resources are shared but isolated between environments.
• Hardware-Level Virtualization
In system-level virtualization, hardware-level models are prevalent. These models allow multiple
operating systems to share a single hardware platform through full, paravirtualization, or partial
virtualization techniques.
2.3 Hypervisors
• Type I Hypervisors
• Type II Hypervisors
Type I Hypervisors:
The hypervisor runs directly on the underlying host system. It is also known as a "Native
Hypervisor" or "Bare metal hypervisor". It does not require any base server operating system. It
has direct access to hardware resources. Examples of Type 1 hypervisors include VMware ESXi,
Citrix XenServer, and Microsoft Hyper-V hypervisor.
Pros: These hypervisors are very efficient because they have direct access to the physical hardware resources (like CPU, memory, network, and physical storage). Security is also strengthened, because there is no third-party layer in between that an attacker could compromise.
Cons: One problem with Type-1 hypervisors is that they usually need a dedicated separate machine to perform their operation, instruct the different VMs, and control the host hardware resources.
Type II Hypervisors:
A host operating system runs on the underlying host system, so this is also known as a "Hosted Hypervisor". Such hypervisors do not run directly on the underlying hardware; rather, they run as an application in a host system (physical machine).
Basically, the hypervisor software is installed on an operating system, and the hypervisor asks that operating system to make hardware calls.
Pros: Such hypervisors allow quick and easy access to a guest operating system while the host machine keeps running. These hypervisors usually come with additional useful features for guest machines; such tools enhance the coordination between the host machine and the guest machine.
Cons: Because there is no direct access to the physical hardware resources, these hypervisors lag behind Type-1 hypervisors in performance. There are also potential security risks: if an attacker gains access to the host operating system, they can also access every guest operating system.
The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) on bare metal devices like CPU, memory, disk and network interfaces. The hypervisor software sits directly between the physical hardware and its OS. This virtualization layer is referred to as either the VMM or the hypervisor. The hypervisor provides hypercalls for the guest OSes and applications. Depending on the functionality, a hypervisor can assume a micro-kernel architecture like the Microsoft Hyper-V, or it can assume a monolithic hypervisor architecture like the VMware ESX for server virtualization.
A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical
memory management and processor scheduling). The device drivers and other changeable
components are outside the hypervisor. A monolithic hypervisor implements all the
aforementioned functions, including those of the device drivers. Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
Essentially, a hypervisor must be able to convert physical devices into virtual resources dedicated
for the deployed VM to use.
Xen is an open source hypervisor program developed by Cambridge University. Xen is a micro-
kernel hypervisor, which separates the policy from the mechanism. The Xen hypervisor
implements all the mechanisms, leaving the policy to be handled by Domain 0, as shown in Figure
3.5. Xen does not include any device drivers natively [7]. It just provides a mechanism by which
a guest OS can have direct access to the physical devices. As a result, the size of the Xen hypervisor
is kept rather small. Xen provides a virtual environment located between the hardware and the OS.
A number of vendors are in the process of developing commercial Xen hypervisors, among them
are Citrix XenServer [62] and Oracle VM [42].
The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important. Like other virtualization systems, many guest
OSes can run on top of the hypervisor. However, not all guest OSes are created equal, and one in
particular controls the others. The guest OS, which has control ability, is called Domain 0, and the
others are called Domain U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen
boots without any file system drivers being available. Domain 0 is designed to access hardware
directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to allocate and
map hardware resources for the guest domains (the Domain U domains).
With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
Both the hypervisor and VMM approaches are considered full virtualization. Why are only critical
instructions trapped into the VMM? This is because binary translation can incur a large
performance overhead. Noncritical instructions do not control hardware or threaten the security of
the system, but critical instructions do. Therefore, running noncritical instructions on hardware
not only can promote efficiency, but also can ensure system security.
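The trap-and-emulate split described above can be sketched as follows. This is a toy model under stated assumptions: the instruction names and the VMM's bookkeeping are illustrative, not real x86 semantics.

```python
# Hedged sketch of trap-and-emulate in full virtualization: noncritical
# instructions "run directly", while critical (privileged/sensitive) ones
# trap into a VMM routine that emulates them in software. The instruction
# names here are invented for illustration.

CRITICAL = {"disable_interrupts", "load_page_table"}

class VMM:
    def __init__(self):
        self.log = []

    def emulate(self, instr, vm_state):
        # Emulate the effect on the VM's *virtual* state only,
        # never on the real hardware.
        self.log.append(f"trapped: {instr}")
        vm_state[instr] = "emulated"

def execute(instr, vm_state, vmm):
    if instr in CRITICAL:
        vmm.emulate(instr, vm_state)      # trap into the VMM
    else:
        vm_state[instr] = "direct"        # run on hardware directly

vmm, state = VMM(), {}
for i in ["add", "load_page_table", "mov"]:
    execute(i, state, vmm)
print(state)
# {'add': 'direct', 'load_page_table': 'emulated', 'mov': 'direct'}
```

The key efficiency point is visible in the dispatch: only the small set of critical instructions pays the cost of a trap; everything else runs at native speed.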
This approach was implemented by VMware and many other software companies. As shown in
Figure 3.6, VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the
instruction stream and identifies the privileged, control- and behavior sensitive instructions. When
these instructions are identified, they are trapped into the VMM, which emulates the behavior of
these instructions. The method used in this emulation is called binary translation. Therefore, full
virtualization combines binary translation and direct execution. The guest OS is completely
decoupled from the underlying hardware. Consequently, the guest OS is unaware that it is being
virtualized.
The performance of full virtualization may not be ideal, because it involves binary translation
which is rather time-consuming. In particular, the full virtualization of I/O-intensive applications is really a big challenge. Binary translation employs a code cache to store translated hot
instructions to improve performance, but it increases the cost of memory usage. At the time of this
writing, the performance of full virtualization on the x86 architecture is typically 80 percent to 97
percent that of the host machine.
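The role of the code cache can be sketched as simple memoization: translating a block is the expensive step, so translated hot blocks are stored and reused. The "translation" below is a stand-in string rewrite, not real binary translation.

```python
# Sketch of a binary-translation code cache: translating a block is
# expensive, so translated "hot" blocks are cached and reused on later
# executions. The translation itself is a placeholder string rewrite.

translation_cache = {}
translations_done = 0

def translate(block):
    global translations_done
    translations_done += 1            # the expensive step we want to avoid
    return block.replace("priv_", "emul_")

def run_block(block):
    if block not in translation_cache:          # cache miss: translate once
        translation_cache[block] = translate(block)
    return translation_cache[block]             # cache hit: reuse

for b in ["priv_io", "add", "priv_io", "priv_io"]:
    run_block(b)
print(translations_done)  # only 2 translations for 4 block executions
```

The trade-off noted in the text is visible here: the cache saves repeated translation work at the cost of extra memory to hold the translated blocks.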
An alternative VM architecture is to install a virtualization layer on top of the host OS. This host
OS is still responsible for managing the hardware. The guest OSes are installed and run on top of
the virtualization layer. Dedicated applications may run on the VMs. Certainly, some other
applications
can also run with the host OS directly. This host-based architecture has some distinct advantages,
as enumerated next. First, the user can install this VM architecture without modifying the host
OS. The virtualizing software can rely on the host OS to provide device drivers and other low-
level services. This will simplify the VM design and ease its deployment.
Second, the host-based approach appeals to many host machine configurations. Compared to the
hypervisor/VMM architecture, the performance of the host-based architecture may also be low.
When an application requests hardware access, it involves four layers of mapping which
downgrades performance significantly. When the ISA of a guest OS is different from the ISA of
the underlying hardware, binary translation must be adopted. Although the host-based architecture
has flexibility, the performance is too low to be useful in practice.
Figure 3.7 illustrates the concept of a paravirtualized VM architecture. The guest operating
systems are para-virtualized. They are assisted by an intelligent compiler to replace the
nonvirtualizable OS instructions by hypercalls as illustrated in Figure 3.8. The traditional x86
processor offers four instruction execution rings: Rings 0, 1, 2, and 3. The lower the ring number,
the higher the privilege of instruction being executed. The OS is responsible for managing
the hardware and the privileged instructions to execute at Ring 0, while user-level applications
run at Ring 3. The best example of para-virtualization is the KVM to be described below.
Although para-virtualization reduces the overhead, it has incurred other problems. First, its
compatibility and portability may be in doubt, because it must support the unmodified OS as well.
Second, the cost of maintaining para-virtualized OSes is high, because they may require deep OS
kernel modifications. Finally, the performance advantage of para-virtualization varies greatly due
to workload variations. Compared with full virtualization, para-virtualization is relatively easy and
more practical. The main problem in full virtualization is its low performance in binary translation.
To speed up binary translation is difficult. Therefore, many virtualization products employ the
para-virtualization architecture. The popular Xen, KVM, and VMware ESX are good examples.
KVM (Kernel-Based Virtual Machine) is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel. Memory
management and scheduling activities are carried out by the existing Linux kernel. The KVM does
the rest, which makes it simpler than the hypervisor that controls the entire machine. KVM is a
hardware-assisted para-virtualization tool, which improves performance and supports unmodified
guest OSes such as Windows, Linux, Solaris, and other UNIX variants.
Unlike the full virtualization architecture which intercepts and emulates privileged and sensitive
instructions at runtime, para-virtualization handles these instructions at compile time. The guest
OS kernel is modified to replace the privileged and sensitive instructions with hypercalls to the
hypervisor or VMM. Xen assumes such a para-virtualization architecture.
The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0. This implies that
the guest OS may not be able to execute some privileged and sensitive instructions. The privileged
instructions are implemented by hypercalls to the hypervisor. After replacing the instructions with
hypercalls, the modified guest OS emulates the behavior of the original guest OS. On a UNIX
system, a system call involves an interrupt or service routine. The hypercalls apply a dedicated
service routine in Xen.
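The compile-time rewriting described above can be sketched like this. The hypercall names and their service routines are hypothetical, invented purely to illustrate the mechanism.

```python
# Sketch of para-virtualization: the guest kernel's privileged instructions
# are rewritten into hypercalls ahead of time, so at runtime the hypervisor
# services them via dedicated routines (no runtime trapping or binary
# translation). Instruction and hypercall names are illustrative only.

HYPERCALLS = {
    "set_page_table": lambda: "hypervisor updated shadow page table",
    "mask_interrupts": lambda: "hypervisor masked virtual interrupts",
}

def paravirtualize(kernel_code):
    # "Compile time": replace each privileged instruction with a hypercall.
    return [("HYPERCALL", i) if i in HYPERCALLS else ("NATIVE", i)
            for i in kernel_code]

def run(modified_code):
    results = []
    for kind, instr in modified_code:
        if kind == "HYPERCALL":
            results.append(HYPERCALLS[instr]())   # dedicated service routine
        else:
            results.append(f"ran {instr} directly")
    return results

code = ["add", "set_page_table", "mask_interrupts"]
for line in run(paravirtualize(code)):
    print(line)
```

This also shows why para-virtualization requires a modified guest kernel: the rewriting happens in the kernel source, not transparently at runtime.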
Example 3.3 VMware ESX Server for Para-Virtualization
Now, what is the role of the hypervisor? It is a bridge between the hardware and the virtual operating system that allows smooth functioning. Talking of the implementation levels of virtualization in Cloud Computing, there are a total of five levels that are commonly used.
Let us now look closely at each of these levels of virtualization implementation in Cloud
Computing.
1) Instruction Set Architecture Level (ISA)
ISA virtualization can work through ISA emulation. This is used to run many legacy codes
written for a different hardware configuration. These codes run on any virtual machine using the
ISA. With this, a binary code that originally needed some additional layers to run is now capable
of running on the x86 machines. It can also be tweaked to run on the x64 machine. With ISA, it is
possible to make the virtual machine hardware agnostic. For the basic emulation, an interpreter is
needed, which interprets the source code and then converts it into a hardware format that can be
read. This then allows processing. This is one of the five implementation levels of virtualization
in Cloud Computing.
2) Hardware Abstraction Level (HAL)
True to its name, HAL lets virtualization happen at the level of the hardware. It makes use of a hypervisor for its functioning. The virtual machine is formed at this level and manages the hardware using the virtualization process. It allows the virtualization of each of the hardware components, such as the input-output devices, the memory, and the processor. Multiple users can then share the same hardware and run multiple virtualization instances at the very same time. This is mostly used in cloud-based infrastructure.
3) Operating System Level
At the level of the operating system, the virtualization model is capable of creating an abstraction layer between the operating system and the application. This is an isolated container on the operating system and the physical server, which uses the software and hardware. Each of these containers then functions like a separate server.
This virtualization level is used when there are several users and no one wants to share the hardware. Every user gets their own virtual environment with a dedicated virtual hardware resource, so there is no question of any conflict.
4) Library Level
Working with the operating system directly can be cumbersome, which is when applications use the APIs from user-level libraries instead. These APIs are well documented, and this is why the library virtualization level is preferred in these scenarios. API hooks make it possible, as they control the communication link from the application to the system.
5) Application Level
Application-level virtualization is used when there is a desire to virtualize only one application, and it is the last of the implementation levels of virtualization in Cloud Computing. One does not need to virtualize the entire environment of the platform. This is generally used when you run applications written in high-level languages. The virtualization layer runs as an ordinary application program, and the virtualized application sits above it. It lets programs compiled for the application-level virtual machine run seamlessly.
To support virtualization, processors such as the x86 employ a special running mode and
instructions, known as hardware-assisted virtualization. In this way, the VMM and guest OS run
in different modes and all sensitive instructions of the guest OS and its applications are trapped in
the VMM. To save processor states, mode switching is completed by hardware. For the x86
architecture, Intel and AMD have proprietary technologies for hardware-assisted virtualization.
At the time of this writing, many hardware virtualization products were available. The VMware
Workstation is a VM software suite for x86 and x86-64 computers. This software suite allows
users to set up multiple x86 and x86-64 virtual computers and to use one or more of these VMs
simultaneously with the host operating system. The VMware Workstation assumes the host-based
virtualization. Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts.
Actually, Xen modifies Linux as the lowest and most privileged layer, or a hypervisor.
One or more guest OS can run on top of the hypervisor. KVM (Kernel-based Virtual Machine) is
a Linux kernel virtualization infrastructure. KVM can support hardware-assisted virtualization and
paravirtualization by using the Intel VT-x or AMD-v and VirtIO framework, respectively. The
VirtIO framework includes a paravirtual Ethernet card, a disk I/O controller, a balloon device for
adjusting guest memory usage, and a VGA graphics interface using VMware drivers.
2. CPU Virtualization
A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and
unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor mode. When
the privileged instructions including control- and behavior-sensitive instructions of a VM are
executed, they are trapped in the VMM. In this case, the VMM acts as a unified mediator
for hardware access from different VMs to guarantee the correctness and stability of the whole
system. However, not all CPU architectures are virtualizable. RISC CPU architectures can be
naturally virtualized because all control- and behaviour-sensitive instructions are privileged
instructions. On the contrary, x86 CPU architectures are not primarily designed to support
virtualization. This is because about 10 sensitive instructions, such as SGDT and SMSW, are not
privileged instructions. When these instructions execute in virtualization, they cannot be trapped
in the VMM.
3. Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional execution environment, the operating system maintains
mappings of virtual memory to machine memory using page tables, which is a one-stage mapping
from virtual memory to machine memory. All modern x86 CPUs include a memory management
unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance.
However, in a virtual execution environment, virtual memory virtualization involves sharing the
physical system memory in RAM and dynamically allocating it to the physical memory of the
VMs.
That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The
guest OS continues to control the mapping of virtual addresses to the physical memory addresses
of VMs. But the guest OS cannot directly access the actual machine memory. The VMM is
responsible for mapping the guest physical memory to the actual machine memory. Figure 3.12
shows the two-level memory mapping procedure.
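The two-stage mapping can be sketched with two toy page tables; the page numbers below are arbitrary illustrative values.

```python
# Sketch of two-stage memory mapping in memory virtualization:
# the guest OS maps virtual pages to "physical" (guest-physical) pages,
# and the VMM maps guest-physical pages to machine pages. All page
# numbers are toy values for illustration.

guest_page_table = {0: 7, 1: 3}   # guest virtual page  -> guest physical page
vmm_page_table   = {7: 42, 3: 9}  # guest physical page -> machine page

def translate(virtual_page):
    physical = guest_page_table[virtual_page]   # stage 1: guest OS mapping
    machine = vmm_page_table[physical]          # stage 2: VMM mapping
    return machine

print(translate(0))  # virtual page 0 -> physical page 7 -> machine page 42
```

Note that the guest OS only ever sees the first table; the second table, and therefore the actual machine memory, stays under the VMM's exclusive control.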
4. I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and the
shared physical hardware. At the time of this writing, there are three ways to implement I/O
virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is
the first approach for I/O virtualization. Generally, this approach emulates well-known, real-world
devices.
All the functions of a device or bus infrastructure, such as device enumeration, identification,
interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as
a virtual device. The I/O access requests of the guest OS are trapped in the VMM which interacts
with the I/O devices. The full device emulation approach is shown in Figure 3.14.
A single hardware device can be shared by multiple VMs that run concurrently. However, software
emulation runs much slower than the hardware it emulates. The para-virtualization method of I/O virtualization is typically used in Xen. It is also known as the split driver model, consisting of a frontend driver and a backend driver. The frontend driver is running in Domain U and the backend driver is running in Domain 0. They interact with each other via a block of shared memory. The
frontend driver manages the I/O requests of the guest OSes and the backend driver is responsible
for managing the real I/O devices and multiplexing the I/O data of different VMs. Although para-
I/O-virtualization achieves better device performance than full device emulation, it comes with a
higher CPU overhead.
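The split driver model can be sketched with a shared list standing in for the shared-memory ring. The domain names and the "device work" performed are illustrative; real Xen uses ring buffers and event channels.

```python
# Sketch of Xen's split driver model for para-virtualized I/O: a frontend
# driver in Domain U posts requests into shared memory (modeled here as a
# list), and a backend driver in Domain 0 drains them, multiplexing the
# requests of several VMs onto one real device. All names are illustrative.

shared_ring = []   # stands in for the shared-memory ring between domains

def frontend_submit(domain, request):
    # Domain U side: queue the guest's I/O request.
    shared_ring.append((domain, request))

def backend_drain():
    # Domain 0 side: service queued requests against the real device.
    completed = []
    while shared_ring:
        domain, request = shared_ring.pop(0)
        completed.append(f"dom0 did '{request}' for {domain}")
    return completed

frontend_submit("DomU-1", "read block 5")
frontend_submit("DomU-2", "write block 9")
for line in backend_drain():
    print(line)
```

The multiplexing in `backend_drain` is where the extra CPU overhead mentioned above comes from: Domain 0 must copy and dispatch every guest's I/O traffic.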
Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native
performance without high CPU costs. However, current direct I/O virtualization implementations
focus on networking for mainframes. There are a lot of challenges for commodity hardware
devices. For example, when a physical device is reclaimed (required by workload migration) for
later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary memory
locations) that can function incorrectly or even crash the whole system. Since software-based I/O
virtualization requires a very high overhead of device emulation, hardware-assisted I/O
virtualization is critical. Intel VT-d supports the remapping of I/O DMA transfers and device-
generated interrupts. The architecture of VT-d provides the flexibility to support multiple usage
models that may run unmodified, special-purpose, or “virtualization-aware” guest OSes.
Concerning the first challenge, new programming models, languages, and libraries are needed to
make parallel programming easier. The second challenge has spawned research involving
scheduling algorithms and resource management policies. Yet these efforts cannot balance well
among performance, complexity, and other issues. What is worse, as technology scales, a new
challenge called dynamic heterogeneity is emerging to mix the fat CPU core and thin GPU cores
on the same chip, which further complicates the multi-core or many-core resource management.
The dynamic heterogeneity of hardware infrastructure mainly comes from less reliable transistors
and increased complexity in using the transistors.
Remote desktop virtualization is also a key component of digital workspaces. Virtual desktop workloads run on desktop virtualization servers, which typically execute on virtual machines (VMs) either at on-premises data centers or in the public cloud. Since the user device is basically a display, keyboard, and mouse, a lost or stolen device presents a reduced risk to the organization. All user data and programs exist on the desktop virtualization server, not on client devices.
1. Resource Utilization: Since IT resources for desktop virtualization are concentrated in a data
center, resources are pooled for efficiency. The need to push OS and application updates to end-
user devices is eliminated, and virtually any desktop, laptop, tablet, or smartphone can be used to
access virtualized desktop applications.
IT organizations can thus deploy less powerful and less expensive client devices since they are
basically only used for input and output.
2. Remote Workforce Enablement: Since each virtual desktop resides on central servers, new user desktops can be provisioned in minutes and become instantly available for new users to access. Additionally, IT support resources can focus on issues on the virtualization servers with
little regard to the actual end-user device being used to access the virtual desktop. Finally, since
all applications are served to the client over a network, users have the ability to access their
business applications virtually anywhere there is internet connectivity. If a user leaves the
organization, the resources that were used for their virtual desktop can then be returned to centrally
pooled infrastructure.
3. Security: IT professionals rate security as their biggest challenge year after year. By removing
OS and application concerns from user devices, desktop virtualization enables centralized security
control, with hardware security needs limited to virtualization servers, and an emphasis on identity
and access management with role-based permissions that limit users only to those applications and
data they are authorized to access. Additionally, if an employee leaves an organization there is no
need to remove applications and data from user devices; any data on the user device is ephemeral
by design and does not persist when a virtual desktop session ends.
Remote desktop virtualization is typically based on a client/server model, where the organization’s
chosen operating system and applications run on a server located either in the cloud or in a data
center. In this model all interactions with users occur on a local device of the user’s choosing,
reminiscent of the so-called ‘dumb’ terminals popular on mainframes and early Unix systems.
Remote desktop virtualization is commonly offered in three models:
1. Virtual Desktop Infrastructure (VDI)
2. Remote Desktop Services (RDS)
3. Desktop-as-a-Service (DaaS)
VDI simulates the familiar desktop computing model as virtual desktop sessions that run on VMs either in an on-premises data center or in the cloud. Organizations that adopt this model manage the desktop virtualization server as they would any other application server on-premises. Since all end-user computing is moved from users back into the data center, the initial deployment of servers to run VDI sessions can be a considerable investment, tempered by eliminating the need to constantly refresh end-user devices.
RDS is often used where a limited number of applications need to be virtualized, rather than a full Windows, Mac, or Linux desktop. In this model, applications are streamed to the local device, which runs its own OS. Because only applications are virtualized, RDS systems can offer a higher density of users per VM.
DaaS shifts the burden of providing desktop virtualization to service providers, which greatly
alleviates the IT burden in providing virtual desktops. Organizations that wish to move IT
expenses from capital expense to operational expenses will appreciate the predictable monthly
costs that DaaS providers base their business model on.
In server virtualization, a server OS and its applications are abstracted into a VM from the
underlying hardware by a hypervisor. Multiple VMs can run on a single server, each with its own
server OS, applications, and all the application dependencies required to execute as if it were
running on bare metal.
Desktop virtualization abstracts client software (OS and applications) from a physical thin client
which connects to applications and data remotely, typically via the internet. This abstraction
enables users to utilize any number of devices to access their virtual desktop. Desktop virtualization can greatly increase an organization's need for bandwidth, depending on the number of concurrent users during peak periods.
Application virtualization insulates executing programs from the underlying device, where
desktop virtualization abstracts the entire desktop – OS and applications – which are then
accessible by virtually any client device.
However, application virtualization is not for every application. Compute- and graphics-intensive applications can suffer from slowdowns that cause visible lag during rendering, and a solid broadband connection is necessary to deliver a user experience comparable to local device applications.
Network virtualization can be implemented in two ways:
1. Physical switch OS – the operating system of the physical switch must itself provide the network virtualization functionality.
2. Hypervisor – the hypervisor provides network virtualization functionality using either built-in networking or third-party software.
The basic functionality of the OS is to provide the application or the executing process with a simple set of instructions. System calls that are made to the OS through the libc library are comparable to the service primitives given at the interface between the application and the network through the SAP (Service Access Point). The hypervisor is used to create a virtual switch and configure virtual networks on it. The third-party software is installed onto the hypervisor and replaces the native networking functionality of the hypervisor. A hypervisor allows us to have various VMs all working optimally on a single piece of computer hardware.
1. Physical Network
Physical components: Network adapters, switches, bridges, repeaters, routers and hubs.
Grants connectivity among physical servers running a hypervisor.
2. VM Network
OS Based Virtualization
Traditional Virtualization Architecture (Using VMs)
• Each Virtual Machine (VM) operates as an isolated environment, having its own OS.
• This results in higher resource consumption (CPU, memory, storage).
• The Virtual Machine Management layer is responsible for creating and managing the VMs and allocating physical hardware resources to them.
Virtualization Architecture
• The Host Operating System (OS) runs above the physical hardware and provides the
environment for the hypervisor.
• The Hypervisor: Manages VMs & Allocates resources from the physical hardware to the
VMs.
• The Hardware (Virtualization Host) is the physical machine that provides necessary CPU,
memory, storage and I/O to run the hypervisor and VMs.
• Multiple VMs run simultaneously on the same physical hardware.
How OS-Based Virtualization Works
OS-Based Virtualization works as follows:
• The host OS kernel is shared among all containers, unlike full virtualization (e.g., VMs)
where each VM has its own kernel.
• The kernel enforces isolation between containers using namespaces (for process, network,
filesystem isolation) and cgroups (control groups) for resource allocation (CPU, memory,
disk I/O, network).
• cgroups limit and prioritize resource usage (CPU, memory, disk, network) per container.
• The kernel ensures that a container cannot exceed its allocated resources (unless explicitly
allowed).
• Namespaces prevent processes in one container from seeing or interfering with processes
in another.
• Programs inside a container cannot access resources outside unless explicitly granted (e.g.,
mounted volumes, network ports).
• The overhead comes from kernel-level isolation mechanisms (namespaces, cgroups), but
it’s minimal compared to full virtualization.
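The cgroup behavior described above can be sketched as a toy model in Python. This is purely illustrative: real cgroups are enforced by the Linux kernel (via the /sys/fs/cgroup filesystem), not by user code, and every class and function name below is hypothetical.

```python
# Toy model of cgroup-style resource control: a hard memory limit per
# container, and CPU time divided in proportion to "shares".
# Illustrative only; real enforcement happens inside the Linux kernel.

class ContainerCgroup:
    """Tracks one container's memory usage against its limit."""

    def __init__(self, name, mem_limit_mb, cpu_shares):
        self.name = name
        self.mem_limit_mb = mem_limit_mb   # hard memory ceiling
        self.cpu_shares = cpu_shares       # relative CPU weight
        self.mem_used_mb = 0

    def allocate_memory(self, mb):
        """Grant the request only if it stays within the container's limit."""
        if self.mem_used_mb + mb > self.mem_limit_mb:
            raise MemoryError(f"{self.name}: cgroup memory limit exceeded")
        self.mem_used_mb += mb

def cpu_slice(cgroups, total_ms):
    """Divide a scheduling period among containers in proportion to shares."""
    total_shares = sum(cg.cpu_shares for cg in cgroups)
    return {cg.name: total_ms * cg.cpu_shares / total_shares for cg in cgroups}

web = ContainerCgroup("web", mem_limit_mb=256, cpu_shares=512)
db = ContainerCgroup("db", mem_limit_mb=512, cpu_shares=1024)

web.allocate_memory(200)                 # fits within the 256 MB limit
slices = cpu_slice([web, db], total_ms=100)
print(slices)                            # db receives twice web's CPU share
```

A request that would push a container past its limit fails, mirroring how the kernel refuses (or reclaims) memory beyond a cgroup's allocation unless explicitly allowed.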
Operating System Based Services
Some major operating system based services are mentioned below:
• Backup and Recovery: Host operating systems can be utilized to back up and restore
virtual machines. Backup software tools can be used to ensure data safety and system
recovery.
• Security Management: Host operating systems help manage the security of virtual
machines. This includes configuring firewalls, installing antivirus software and applying
other essential security settings.
• Integration with Directory Services: Host operating systems can be integrated with
directory services like Active Directory, enabling centralized management of users and
groups.
Operating System Based Operations
Various major operations of Operating System Based Virtualization are described below:
1. Hardware capabilities, such as the network connection and CPU, can be employed.
2. The host OS can interact with connected peripherals such as a webcam, printer,
keyboard or scanner.
3. The host OS can be used to read or write data in files, folders and network shares.
Features of OS-Based Virtualization
• Resource isolation: Operating system based virtualization provides a high level of
resource isolation which allows each container to have its own set of resources, including
CPU, memory and I/O bandwidth.
• Lightweight: Containers are lighter compared to traditional virtual machines as they share
the same host operating system. This results in faster startup and lower resource usage.
• Portability: Containers are highly portable. They can be easily moved from one
environment to another without the need to modify the underlying application.
• Scalability: Containers can be easily scaled up or down based on the application
requirements. This makes it easier for applications to be highly responsive to changes in
demand.
• Security: Containers provide a high level of security by isolating the containerized
application from the host operating system and other containers running on the same
system.
• Reduced Overhead: Containers incur less overhead than traditional virtual machines as
they do not need to emulate a full hardware environment.
• Easy Management: Containers are easy to manage as they can be started, stopped and
monitored using simple commands.
Pros of OS-Based Virtualization
• Resource Efficiency: Operating system based virtualization allows for greater resource
efficiency as containers do not need to emulate a complete hardware environment, which
reduces resource overhead.
• High Scalability: Containers can be quickly and easily scaled up or down depending on
the demand, which makes it easy to respond to changes in the workload.
• Easy Management: Containers are easy to manage as they can be managed through
simple commands, which makes it easy to deploy and maintain large numbers of
containers.
• Reduced Costs: Operating system based virtualization can significantly reduce costs, as
it requires fewer resources and infrastructure than traditional virtual machines.
• Faster Deployment: Containers can be deployed quickly, reducing the time required to
launch new applications or update existing ones.
• Portability: Containers are highly portable, making it easy to move them from one
environment to another without requiring changes to the underlying application.
Cons of OS-Based Virtualization
• Security: Operating system based virtualization may pose security risks as containers
share the same host operating system, which means that a security breach in one container
could potentially affect all other containers running on the same system.
• Limited Isolation: Containers may not provide complete isolation between applications,
which can lead to performance degradation or resource contention.
• Complexity: Operating system based virtualization can be complex to set up and manage,
requiring specialized skills and knowledge.
• Dependency Issues: Containers may have dependency issues with other containers or the
host operating system, which can lead to compatibility issues and hinder deployment.
• Limited Hardware Access: Containers may have limited access to hardware resources
which can limit their ability to perform certain tasks or applications that require direct
hardware access.
2.11 Application virtualization:
Application virtualization is a process that deceives a standard app into believing that it interfaces
directly with an operating system's capabilities when, in fact, it does not.
This ruse requires a virtualization layer inserted between the app and the OS. This layer, or
framework, must intercept a subset of the app's operations and run them virtually, without
impacting the underlying OS. The virtualization layer replaces a portion of the runtime
environment normally supplied by the OS, transparently diverting file and registry changes to a
single executable file.
By diverting the app's processes into one file instead of many dispersed across the OS, the app
easily operates on a different device, and formerly incompatible apps can now run adjacently.
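The redirection idea above can be sketched in Python: writes an app believes go to system locations are transparently diverted into a private per-app store, so nothing outside that store is touched and two apps cannot conflict. This is a simplified illustration, not any vendor's actual implementation; the class name and file paths are hypothetical.

```python
# Sketch of the file-redirection layer behind application virtualization.
# OS paths the app asks for are remapped into a private sandbox directory.
import os
import tempfile

class VirtualFileLayer:
    """Diverts file reads/writes into a sandbox instead of the real OS paths."""

    def __init__(self, sandbox_root):
        self.sandbox_root = sandbox_root

    def _redirect(self, path):
        # Map an absolute OS path into the app's private store.
        return os.path.join(self.sandbox_root, path.lstrip("/\\"))

    def write(self, path, data):
        real = self._redirect(path)
        os.makedirs(os.path.dirname(real), exist_ok=True)
        with open(real, "w") as f:
            f.write(data)

    def read(self, path):
        with open(self._redirect(path)) as f:
            return f.read()

sandbox = tempfile.mkdtemp()
layer = VirtualFileLayer(sandbox)

# The app "writes to" a shared system path, but the change actually
# lands in its own sandbox, so the underlying OS is left untouched.
layer.write("/etc/myapp/config.ini", "theme=dark")
print(layer.read("/etc/myapp/config.ini"))  # -> theme=dark
```

Because every app gets its own sandbox root, formerly incompatible apps that fight over the same configuration paths can run side by side.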
Used in conjunction with application virtualization is desktop virtualization—the abstraction of
the physical desktop environment and its related app software from the end-user device that
accesses it.
Executing Application Virtualization:
Application and desktop virtualization can be delivered as Desktop as a Service (DaaS), managed
by a hypervisor (also known as a virtual machine monitor, or VMM). A VMM infrastructure of
software, firmware, and/or hardware creates and operates virtual machines (VMs). A host (server)
connects to multiple guests (endpoints).
Application and desktop virtualization enable centralized management of the complete desktop
environment ecosystem. Organizations need only patch a few images of applications and
virtualized desktops rather than a myriad of endpoints, deploying updates consistently,
completely, and rapidly.
Since software and security updates are stored on images in data center servers, endpoint device
exposure to vulnerabilities such as nascent malware or app manipulations is significantly reduced.
Benefits of Application Virtualization:
These server images facilitate regulatory compliance with standards such as the Payment Card
Industry Data Security Standard (PCI DSS) and the Health Insurance Portability and
Accountability Act (HIPAA). Since data is not processed or stored on endpoint devices, the risk
of a data breach is greatly reduced should a device become compromised. The endpoint device is
merely a display terminal.
Application and desktop virtualization both support incident management, resolving many
adverse desktop events by merely refreshing a virtualized image, and restoring the desktop
environment to its previous state.
Other virtualization benefits include:
• Allows the running of legacy apps (e.g., those developed for end-of-life OS platforms
like Windows 7 and XP).
• Enables cross-platform operations (e.g., running Windows apps on iOS, Android,
macOS, and Chrome OS).
• Prevents conflicts with other virtualized apps (e.g., conflicting anti-malware software).
• Permits users to run multiple app instances; if not virtualized, many apps can detect a
running instance and won't allow new ones.
However, some apps pose challenges to virtualization. For example, an application requiring a
device driver (which integrates into the OS and is thus OS-specific) can affect the use of
peripherals like printers. Also, 16-bit applications and apps requiring extensive OS integration
(e.g., some anti-virus programs) are problematic to virtualize. Latency introduced by
virtualization can also slow graphics-intensive apps during rendering.