
CS2V31 CLOUD COMPUTING

UNIT II VIRTUALIZATION BASICS AND INFRASTRUCTURE 9


Virtual Machine Basics – Taxonomy of Virtualization – Hypervisor – key concepts – Virtualization
Structures – Types of Virtualizations: Full virtualization, Para virtualization and Hardware virtualization –
Implementation Levels of Virtualization: Virtualization of CPU – Memory – I/O Devices - Desktop
Virtualization – Network Virtualization – Storage Virtualization – Operating System-level Virtualization
– Application Virtualization.

2.1 Virtual Machine Basics:


A virtual machine (VM) is a software-based emulation of a physical computer, allowing multiple
operating systems and applications to run on a single physical server. A VM may be implemented
in specialized software, hardware, or a combination of both.

Virtualization is a way to use one computer as if it were many. Before virtualization, most
computers were only doing one job at a time, and a lot of their power was wasted. Virtualization
lets you run several virtual computers on one real computer, so you can use its full power and do
more tasks at once.

A virtual machine is, in effect, a simulated computer system operating on your hardware. It uses a
share of your system's hardware (CPU, RAM, disk space, etc.), but its space is completely separated
from your main system. Two virtual machines do not interfere with each other's working and
functioning, nor can they access each other's space, which gives the illusion that each is a totally
different hardware system.
Use of Virtualization in Cloud Computing

Some uses and advantages of virtualization in cloud computing are:

1. Efficient utilization of hardware resources: Virtualization technology allows for the
efficient utilization of hardware resources by dividing a single physical instance of a
resource into multiple virtual machines. This reduces costs and provides flexibility in
managing workloads.

2. Scalability: Because of virtualization, cloud users can cost-effectively grow their server
resources as the workload increases, purchasing only the resources they require at the time
of demand.

3. Backup and disaster recovery: Thanks to cloud virtualization, users can set up virtual
resources as backups in a variety of places across the globe. This increases
availability and uptime.

4. Additional Security: Virtualization technology provides a secure environment for
running applications and storing data. It enables the creation of isolated virtual machines
that can be used to run software applications without affecting other virtual machines.

Differences between virtualization and cloud computing

Virtualization:
• Virtualization is the foundation of cloud computing; it allows the creation of several dedicated environments or resources from a single physical hardware device.
• Using hypervisor software, which links directly to the hardware, it enables the division of a single system into several virtual computers.
• It is less scalable and flexible when compared to cloud computing.
• Its configuration is template-based.

Cloud Computing:
• Cloud computing is a client-server computing architecture where resources are used in a centralized pattern.
• It is a highly accessible service that provides a shared pool of resources that users can access conveniently.
• It is highly scalable and flexible.
• Its configuration is image-based.

2.2 TAXONOMY OF VIRTUALIZATION TECHNIQUES

Virtualization refers to the use of various techniques that allow one system to mimic or emulate
other systems or services. These techniques are applied in many areas of computing, providing
flexibility, efficiency, and scalability. The classification of these virtualization techniques helps
to better understand their characteristics and usage in different scenarios.

Virtualization Areas

Emulation Targets: Virtualization techniques are mainly applied to emulate different services or
entities. Specifically, virtualization is used to emulate three primary areas:

• Execution Environments: These are environments where programs, including operating
systems and applications, can run as if they were on actual physical hardware.
• Storage: Virtualizing storage separates physical storage locations from the logical view
users or programs interact with.

• Networks: Virtualizing networks allows different networks to operate as though they are
independent of the underlying physical infrastructure.

Among these, execution virtualization is the oldest, most popular, and most developed area,
making it the focus of most virtualization studies. Therefore, this area requires a deeper
investigation and further categorization.

How is it Done?

1. Process-Level Virtualization

• Process-level virtualization allows virtualization within the execution environment. This
form of virtualization is hosted on an existing operating system and provides a virtualized
process space where applications can run.

The main techniques used here include:

• Emulation: This technique mimics a different system entirely, allowing applications meant
for one system to run on another by imitating its architecture.

• High-Level Virtual Machines (VMs): These VMs operate at a high level and abstract the
details of the hardware. They are particularly suited for programming languages like Java,
where the code runs on a virtual machine instead of directly on hardware.

• Multiprogramming: This involves running multiple processes on a single system in a
time-sharing manner, where the OS switches between different processes. This is an
example of virtualization at the operating system level.

2. System-Level Virtualization

System-level virtualization operates directly on the hardware without needing an existing
operating system to host it. It is more efficient in accessing the hardware resources.

Key techniques under system-level virtualization are:

• Hardware-Assisted Virtualization: This technique uses the hardware's support to improve
virtualization performance. It often involves using a hypervisor that interacts directly with
the hardware to manage virtual machines.
• Full Virtualization: Here, the guest operating system runs in isolation as though it were
directly using the hardware, without needing modification.

• Paravirtualization: In contrast to full virtualization, paravirtualization requires
modification of the guest operating system so that it can interact more efficiently with the
virtual environment.

• Partial Virtualization: Only part of the hardware resources are virtualized, meaning some
applications may need to run directly on the host.

Virtualization Models

The virtualization models describe the environment created by the virtualization techniques:

• Application-Level Virtualization

This model allows individual applications to run in a virtual environment separate from the
underlying system. This is commonly achieved through emulation.

• Programming Language-Level Virtualization

Programming language-level virtualization makes it possible for applications written in a
particular language to run in a virtual machine designed for that language. High-level VMs such
as the Java Virtual Machine (JVM) are examples of this type of virtualization model.

• Operating System-Level Virtualization

This model allows multiple user environments to run on a single OS, managed by
the multiprogramming technique. It ensures that resources are shared but isolated between
environments.

• Hardware-Level Virtualization

In system-level virtualization, hardware-level models are prevalent. These models allow multiple
operating systems to share a single hardware platform through full virtualization, paravirtualization,
or partial virtualization techniques.

2.3 Hypervisors

A hypervisor is a critical piece of software that enables virtualization by allowing multiple
operating systems (guest VMs) to run on a single physical machine (host).
A hypervisor, also known as a Virtual Machine Manager (VMM), is essential in hardware
virtualization. The hypervisor creates a virtual hardware environment where guest operating
systems are installed. There are two types of hypervisors:

• Type I Hypervisors

• Type II Hypervisors

Type I Hypervisors :

The hypervisor runs directly on the underlying host system. It is also known as a "Native
Hypervisor" or "Bare metal hypervisor". It does not require any base server operating system. It
has direct access to hardware resources. Examples of Type 1 hypervisors include VMware ESXi,
Citrix XenServer, and Microsoft Hyper-V hypervisor.

Pros & Cons of Type-1 Hypervisor:

Pros: Such hypervisors are very efficient because they have direct access to the physical
hardware resources (CPU, memory, network, and physical storage). This also strengthens
security, because there is no intervening host OS or third-party layer for an attacker to
compromise.

Cons: One problem with Type-1 hypervisors is that they usually need a dedicated machine of
their own on which to perform their operation, manage the different VMs, and control the host
hardware resources.

Type II Hypervisors:

A host operating system runs on the underlying host system. This type is also known as a "Hosted
Hypervisor". Such hypervisors do not run directly on the underlying hardware; rather, they run
as an application on a host system (physical machine).
Basically, the hypervisor software is installed on an operating system, and the hypervisor asks
that operating system to make hardware calls on its behalf.

Examples of Type 2 hypervisors include VMware Player and Parallels Desktop. Hosted
hypervisors are often found on endpoints like PCs. The Type-2 hypervisor is very useful for
engineers and security analysts (for checking malware, malicious source code, and newly
developed applications).

Pros & Cons of Type-2 Hypervisor:

Pros: Such hypervisors allow quick and easy access to a guest operating system alongside the
running host machine. They usually come with additional useful features for guest machines;
such tools enhance the coordination between the host machine and the guest machine.

Cons: Because there is no direct access to the physical hardware resources, these hypervisors
lag in performance compared to Type-1 hypervisors. There are also potential security risks: an
attacker who compromises a weakness in the host operating system can then gain access to the
guest operating systems as well.

2.4 Virtualization Structures


In general, there are three typical classes of VM architecture. Figure 3.1 shows the architectures
of a machine before and after virtualization. Before virtualization, the operating system manages
the hardware. After virtualization, a virtualization layer is inserted between the hardware and the
operating system. In such a case, the virtualization layer is responsible for converting portions of
the real hardware into virtual hardware. Therefore, different operating systems such as Linux and
Windows can run on the same physical machine, simultaneously. Depending on the position of
the virtualization layer, there are several classes of VM architectures, namely
the hypervisor architecture, para-virtualization, and host-based virtualization. The hypervisor is
also known as the VMM (Virtual Machine Monitor). They both perform the same virtualization
operations.

1. Hypervisor and Xen Architecture

The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) on bare metal devices
like CPU, memory, disk and network interfaces. The hypervisor software sits directly between the
physical hardware and its OS. This virtualization layer is referred to as either the VMM or the
hypervisor. The hypervisor provides hypercalls for the guest OSes and applications. Depending
on the functionality, a hypervisor can assume a micro-kernel architecture like the Microsoft
Hyper-V, or it can assume a monolithic hypervisor architecture like the VMware ESX for server
virtualization.

A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical
memory management and processor scheduling). The device drivers and other changeable
components are outside the hypervisor. A monolithic hypervisor implements all the
aforementioned functions, including those of the device drivers. Therefore, the size of the
hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
Essentially, a hypervisor must be able to convert physical devices into virtual resources dedicated
for the deployed VM to use.

1.1 The Xen Architecture

Xen is an open source hypervisor program developed by Cambridge University. Xen is a micro-
kernel hypervisor, which separates the policy from the mechanism. The Xen hypervisor
implements all the mechanisms, leaving the policy to be handled by Domain 0, as shown in Figure
3.5. Xen does not include any device drivers natively [7]. It just provides a mechanism by which
a guest OS can have direct access to the physical devices. As a result, the size of the Xen hypervisor
is kept rather small. Xen provides a virtual environment located between the hardware and the OS.
A number of vendors are in the process of developing commercial Xen hypervisors, among them
are Citrix XenServer [62] and Oracle VM [42].

The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important. Like other virtualization systems, many guest
OSes can run on top of the hypervisor. However, not all guest OSes are created equal, and one in
particular controls the others. The guest OS, which has control ability, is called Domain 0, and the
others are called Domain U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen
boots without any file system drivers being available. Domain 0 is designed to access hardware
directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to allocate and
map hardware resources for the guest domains (the Domain U domains).

2. Binary Translation with Full Virtualization

Depending on implementation technologies, hardware virtualization can be classified into two
categories: full virtualization and host-based virtualization. Full virtualization does not need to
modify the host OS. It relies on binary translation to trap and to virtualize the execution of certain
sensitive, nonvirtualizable instructions. The guest OSes and their applications consist of
noncritical and critical instructions. In a host-based system, both a host OS and a guest OS are
used. A virtualization software layer is built between the host OS and guest OS. These two classes
of VM architecture are introduced next.

2.1 Full Virtualization

With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
Both the hypervisor and VMM approaches are considered full virtualization. Why are only critical
instructions trapped into the VMM? This is because binary translation can incur a large
performance overhead. Noncritical instructions do not control hardware or threaten the security of
the system, but critical instructions do. Therefore, running noncritical instructions on hardware
not only can promote efficiency, but also can ensure system security.

2.2 Binary Translation of Guest OS Requests Using a VMM

This approach was implemented by VMware and many other software companies. As shown in
Figure 3.6, VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the
instruction stream and identifies the privileged, control- and behavior-sensitive instructions. When
these instructions are identified, they are trapped into the VMM, which emulates the behavior of
these instructions. The method used in this emulation is called binary translation. Therefore, full
virtualization combines binary translation and direct execution. The guest OS is completely
decoupled from the underlying hardware. Consequently, the guest OS is unaware that it is being
virtualized.
The performance of full virtualization may not be ideal, because it involves binary translation,
which is rather time-consuming. In particular, the full virtualization of I/O-intensive applications
is a really big challenge. Binary translation employs a code cache to store translated hot
instructions to improve performance, but this increases the cost of memory usage. At the time of
this writing, the performance of full virtualization on the x86 architecture is typically 80 percent
to 97 percent that of the host machine.
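
To make the mechanism concrete, here is a minimal sketch in Python (a toy model; the instruction names and the translator itself are illustrative, not any real product's implementation). The VMM scans each basic block, leaves noncritical instructions to run directly, rewrites critical ones into software-emulation calls, and keeps translated blocks in a code cache so hot code is translated only once:

    # Toy model of binary translation with a code cache.
    CRITICAL = {"CLI", "STI", "LGDT"}        # pretend these are the critical ops

    def translate(block):
        """Rewrite one basic block: critical ops become VMM emulation calls."""
        out = []
        for op in block:
            if op in CRITICAL:
                out.append(("VMM_EMULATE", op))  # emulated in software
            else:
                out.append(("DIRECT", op))       # runs on hardware as-is
        return out

    CODE_CACHE = {}                              # translated hot blocks

    def run_block(block_id, block):
        if block_id not in CODE_CACHE:           # translate only on first use
            CODE_CACHE[block_id] = translate(block)
        for kind, op in CODE_CACHE[block_id]:
            print(f"{kind:12s} {op}")

    run_block("b0", ["MOV", "ADD", "CLI", "MOV", "STI"])

The code cache is the memory-for-speed trade-off the text refers to: translations of frequently executed blocks are reused instead of being recomputed.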

2.3 Host-Based Virtualization

An alternative VM architecture is to install a virtualization layer on top of the host OS. This host
OS is still responsible for managing the hardware. The guest OSes are installed and run on top of
the virtualization layer. Dedicated applications may run on the VMs. Certainly, some other
applications can also run on the host OS directly. This host-based architecture has some distinct advantages,
as enumerated next. First, the user can install this VM architecture without modifying the host
OS. The virtualizing software can rely on the host OS to provide device drivers and other low-
level services. This will simplify the VM design and ease its deployment.

Second, the host-based approach appeals to many host machine configurations. Compared to the
hypervisor/VMM architecture, the performance of the host-based architecture may also be low.
When an application requests hardware access, it involves four layers of mapping which
downgrades performance significantly. When the ISA of a guest OS is different from the ISA of
the underlying hardware, binary translation must be adopted. Although the host-based architecture
has flexibility, the performance is too low to be useful in practice.

3. Para-Virtualization with Compiler Support

Para-virtualization needs to modify the guest operating systems. A para-virtualized VM
provides special APIs, requiring substantial OS modifications in user applications. Performance
degradation is a critical issue of a virtualized system: no one wants to use a VM if it is much
slower than a physical machine. The virtualization layer can be inserted at different positions
in a machine software stack. Para-virtualization attempts to reduce the virtualization
overhead, and thus improve performance, by modifying only the guest OS kernel.

Figure 3.7 illustrates the concept of a paravirtualized VM architecture. The guest operating
systems are para-virtualized. They are assisted by an intelligent compiler to replace the
nonvirtualizable OS instructions by hypercalls as illustrated in Figure 3.8. The traditional x86
processor offers four instruction execution rings: Rings 0, 1, 2, and 3. The lower the ring number,
the higher the privilege of instruction being executed. The OS is responsible for managing
the hardware and the privileged instructions to execute at Ring 0, while user-level applications
run at Ring 3. The best example of para-virtualization is the KVM to be described below.

3.1 Para-Virtualization Architecture


When the x86 processor is virtualized, a virtualization layer is inserted between the hardware and
the OS. According to the x86 ring definition, the virtualization layer should also be installed at
Ring 0. Different instructions at Ring 0 may cause some problems. In Figure 3.8, we show that
para-virtualization replaces nonvirtualizable instructions with hypercalls that communicate
directly with the hypervisor or VMM. However, when the guest OS kernel is modified for
virtualization, it can no longer run on the hardware directly.

Although para-virtualization reduces the overhead, it has incurred other problems. First, its
compatibility and portability may be in doubt, because it must support the unmodified OS as well.
Second, the cost of maintaining para-virtualized OSes is high, because they may require deep OS
kernel modifications. Finally, the performance advantage of para-virtualization varies greatly due
to workload variations. Compared with full virtualization, para-virtualization is relatively easy and
more practical. The main problem in full virtualization is its low performance in binary translation.
To speed up binary translation is difficult. Therefore, many virtualization products employ the
para-virtualization architecture. The popular Xen, KVM, and VMware ESX are good examples.

3.2 KVM (Kernel-Based VM)

This is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel. Memory
management and scheduling activities are carried out by the existing Linux kernel. The KVM does
the rest, which makes it simpler than the hypervisor that controls the entire machine. KVM is a
hardware-assisted para-virtualization tool, which improves performance and supports unmodified
guest OSes such as Windows, Linux, Solaris, and other UNIX variants.
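
On a Linux host, KVM's presence can be verified through the /dev/kvm device node and its KVM_GET_API_VERSION ioctl (both real kernel interfaces; the helper function here is only a sketch with minimal error handling):

    import fcntl, os

    KVM_GET_API_VERSION = 0xAE00     # _IO(0xAE, 0x00) from <linux/kvm.h>

    def kvm_available():
        """Return the KVM API version if /dev/kvm is usable, else None."""
        if not os.path.exists("/dev/kvm"):
            return None              # KVM module not loaded or no HW support
        fd = os.open("/dev/kvm", os.O_RDWR)
        try:
            return fcntl.ioctl(fd, KVM_GET_API_VERSION)  # 12 on current kernels
        finally:
            os.close(fd)

    print("KVM API version:", kvm_available())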

3.3 Para-Virtualization with Compiler Support

Unlike the full virtualization architecture which intercepts and emulates privileged and sensitive
instructions at runtime, para-virtualization handles these instructions at compile time. The guest
OS kernel is modified to replace the privileged and sensitive instructions with hypercalls to the
hypervisor or VMM. Xen assumes such a para-virtualization architecture.

The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0. This implies that
the guest OS may not be able to execute some privileged and sensitive instructions. The privileged
instructions are implemented by hypercalls to the hypervisor. After replacing the instructions with
hypercalls, the modified guest OS emulates the behavior of the original guest OS. On a UNIX
system, a system call involves an interrupt or service routine. The hypercalls apply a dedicated
service routine in Xen.
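
The contrast can be sketched as follows (a toy model in Python; the class and hypercall names are hypothetical, and real hypercalls are kernel-level entry points, not Python calls). An unmodified kernel would fault when issuing a privileged instruction from Ring 1, while the para-virtualized kernel has had that instruction replaced with a hypercall at build time:

    def xen_hypercall(name, *args):
        """Stand-in for the hypervisor's dedicated hypercall service routine."""
        print(f"hypervisor handles {name}{args}")

    class NativeGuestKernel:
        def set_page_table(self, base):
            # a privileged instruction (e.g., MOV CR3) faults at Ring 1
            raise PermissionError("privileged instruction trapped")

    class ParavirtGuestKernel:
        def set_page_table(self, base):
            # privileged instruction replaced by a hypercall at compile time
            xen_hypercall("mmu_update", base)

    ParavirtGuestKernel().set_page_table(0x1000)   # handled by the hypervisor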

2.5 Implementation Levels of Virtualization


Setting up virtualization in cloud computing is not simple. Your computer runs on an operating
system that is configured for particular hardware, and running a different operating system on
that same hardware is neither feasible nor easy. To do this, you need a hypervisor.

Now, what is the role of the hypervisor? It is a bridge between the hardware and the virtual
operating system, which allows smooth functioning. Talking of the implementation levels of
virtualization in cloud computing, there are a total of five levels that are commonly used.

Let us now look closely at each of these levels of virtualization implementation in Cloud
Computing.
1) Instruction Set Architecture Level (ISA)

ISA virtualization works through ISA emulation. It is used to run legacy code written for a
different hardware configuration; such code runs on any virtual machine that emulates the ISA.
With this, binary code that originally needed additional layers to run becomes capable of running
on x86 machines, and it can also be tweaked to run on x64 machines. With ISA, it is possible to
make the virtual machine hardware agnostic. For basic emulation, an interpreter is needed, which
interprets the source instructions and converts them into a hardware format that the host can read
and process. This is one of the five implementation levels of virtualization in cloud computing.
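
At its core, ISA emulation is a fetch-decode-execute loop. The interpreter below runs a made-up three-instruction ISA in Python (purely illustrative; real emulators are far more elaborate):

    def emulate(program):
        """Interpret a toy foreign ISA one instruction at a time on the host."""
        regs = {"r0": 0, "r1": 0}
        pc = 0                              # program counter
        while pc < len(program):
            op, *args = program[pc]         # fetch + decode
            if op == "LOADI":               # execute: load immediate
                regs[args[0]] = args[1]
            elif op == "ADD":               # execute: register add
                regs[args[0]] += regs[args[1]]
            elif op == "PRINT":
                print(args[0], "=", regs[args[0]])
            pc += 1

    emulate([("LOADI", "r0", 2), ("LOADI", "r1", 40),
             ("ADD", "r0", "r1"), ("PRINT", "r0")])   # prints r0 = 42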

2) Hardware Abstraction Level (HAL)

True to its name, HAL lets virtualization happen at the level of the hardware. It makes
use of a hypervisor for its functioning. The virtual machine is formed at this level, and it
manages the hardware using the virtualization process. It allows the virtualization of each
hardware component, such as the input-output devices, the memory, and the processor.
Multiple users can share the same hardware by using multiple virtualization instances at the
very same time. This is mostly used in cloud-based infrastructure.

3) Operating System Level

At the level of the operating system, the virtualization model creates an abstract layer
between the operating system and the applications. This takes the form of an isolated container
on the operating system and the physical server, which uses the software and hardware; each
such container then functions as a server of its own.

This virtualization level is used when there are several users and no one wants to share the
hardware. Every user gets their own virtual environment with a dedicated virtual hardware
resource, so there is no question of any conflict.

4) Library Level

Working with the operating system directly can be cumbersome, so applications instead use
APIs from libraries at the user level. These APIs are well documented, which is why the library
virtualization level is preferred in these scenarios. API hooks make this possible, as they control
the link of communication from the application to the rest of the system.
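
A small Python sketch conveys the hooking idea (the sandbox path and redirection policy are hypothetical, and real library-level virtualization hooks native libraries rather than Python built-ins). The built-in open is wrapped so that the application's file writes are transparently diverted to a private directory:

    import builtins, os

    SANDBOX = "/tmp/libv_sandbox"            # hypothetical private location
    os.makedirs(SANDBOX, exist_ok=True)

    _real_open = builtins.open               # keep a handle on the real API

    def hooked_open(path, mode="r", *args, **kwargs):
        """Divert writes into the sandbox; the app never notices the hook."""
        if "w" in mode or "a" in mode:
            path = os.path.join(SANDBOX, os.path.basename(path))
        return _real_open(path, mode, *args, **kwargs)

    builtins.open = hooked_open              # install the API hook

    with open("settings.ini", "w") as f:     # actually lands in the sandbox
        f.write("theme=dark\n")
    print(os.listdir(SANDBOX))               # ['settings.ini']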

5) Application Level

Application-level virtualization is used when there is a desire to virtualize only a single
application; it is the last of the implementation levels of virtualization in cloud computing. One
does not need to virtualize the entire environment of the platform. It is generally used when you
run applications written in high-level languages: the application sits above the virtualization
layer, which in turn sits above the OS. It lets high-level-language programs, compiled for the
application-level virtual machine, run seamlessly.

2.6 VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES

To support virtualization, processors such as the x86 employ a special running mode and
instructions, known as hardware-assisted virtualization. In this way, the VMM and guest OS run
in different modes and all sensitive instructions of the guest OS and its applications are trapped in
the VMM. To save processor states, mode switching is completed by hardware. For the x86
architecture, Intel and AMD have proprietary technologies for hardware-assisted virtualization.

1. Hardware Support for Virtualization


Modern operating systems and processors permit multiple processes to run simultaneously. If there
is no protection mechanism in a processor, all instructions from different processes will access the
hardware directly and cause a system crash. Therefore, all processors have at least two modes,
user mode and supervisor mode, to ensure controlled access of critical hardware. Instructions
running in supervisor mode are called privileged instructions. Other instructions are unprivileged
instructions. In a virtualized environment, it is more difficult to make OSes and applications run
correctly because there are more layers in the machine stack. Example 3.4 discusses Intel’s
hardware support approach.

At the time of this writing, many hardware virtualization products were available. The VMware
Workstation is a VM software suite for x86 and x86-64 computers. This software suite allows
users to set up multiple x86 and x86-64 virtual computers and to use one or more of these VMs
simultaneously with the host operating system. The VMware Workstation assumes the host-based
virtualization. Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts.
Actually, Xen modifies Linux as the lowest and most privileged layer, or a hypervisor.

One or more guest OS can run on top of the hypervisor. KVM (Kernel-based Virtual Machine) is
a Linux kernel virtualization infrastructure. KVM can support hardware-assisted virtualization and
paravirtualization by using the Intel VT-x or AMD-V and the VirtIO framework, respectively. The
VirtIO framework includes a paravirtual Ethernet card, a disk I/O controller, a balloon device for
adjusting guest memory usage, and a VGA graphics interface using VMware drivers.
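
On a Linux host, support for these technologies can be read from the CPU feature flags: vmx marks Intel VT-x and svm marks AMD-V. A small check (assuming a Linux /proc filesystem):

    def hw_virt_support(path="/proc/cpuinfo"):
        """Report whether the CPU advertises VT-x (vmx) or AMD-V (svm)."""
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    if "vmx" in flags:
                        return "Intel VT-x"
                    if "svm" in flags:
                        return "AMD-V"
                    return None   # no hardware-assisted virtualization
        return None

    print(hw_virt_support())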

2. CPU Virtualization

A VM is a duplicate of an existing computer system in which a majority of the VM instructions
are executed on the host processor in native mode. Thus, unprivileged instructions of VMs run
directly on the host machine for higher efficiency. Other critical instructions should be handled
carefully for correctness and stability. The critical instructions are divided into three
categories: privileged instructions, control-sensitive instructions, and behavior-sensitive
instructions. Privileged instructions execute in a privileged mode and will be trapped if executed
outside this mode. Control-sensitive instructions attempt to change the configuration of resources
used. Behavior-sensitive instructions have different behaviors depending on the configuration of
resources, including the load and store operations over the virtual memory.

A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and
unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor mode. When
the privileged instructions including control- and behavior-sensitive instructions of a VM are
executed, they are trapped in the VMM. In this case, the VMM acts as a unified mediator
for hardware access from different VMs to guarantee the correctness and stability of the whole
system. However, not all CPU architectures are virtualizable. RISC CPU architectures can be
naturally virtualized because all control- and behavior-sensitive instructions are privileged
instructions. On the contrary, x86 CPU architectures are not primarily designed to support
virtualization. This is because about 10 sensitive instructions, such as SGDT and SMSW, are not
privileged instructions. When these instructions execute in virtualization, they cannot be trapped
in the VMM.
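
A toy model of this problem (the instruction names are real x86 mnemonics, but the CPU behavior is heavily simplified): privileged instructions fault in user mode, handing control to the VMM for emulation, whereas sensitive-but-unprivileged instructions execute silently, so the VMM never gets a chance to intervene:

    PRIVILEGED = {"HLT", "LGDT"}               # trap when executed in user mode
    SENSITIVE_UNPRIVILEGED = {"SGDT", "SMSW"}  # read system state, never trap

    def vmm_emulate(op):
        print(f"trap -> VMM safely emulates {op}")

    def run_in_vm(op):
        if op in PRIVILEGED:
            vmm_emulate(op)                    # the fault gives the VMM control
        elif op in SENSITIVE_UNPRIVILEGED:
            print(f"{op} runs directly and exposes HOST state to the guest")
        else:
            print(f"{op} runs directly (harmless)")

    for op in ["ADD", "HLT", "SMSW"]:
        run_in_vm(op)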

2.1 Hardware-Assisted CPU Virtualization

This technique attempts to simplify virtualization because full or paravirtualization is complicated.


Intel and AMD add an additional mode called privilege mode level (some people call it Ring-1)
to x86 processors. Therefore, operating systems can still run at Ring 0 and the hypervisor can run
at Ring -1. All the privileged and sensitive instructions are trapped in the hypervisor automatically.
This technique removes the difficulty of implementing binary translation of full virtualization. It
also lets the operating system run in VMs without modification.
3. Memory Virtualization

Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional execution environment, the operating system maintains
mappings of virtual memory to machine memory using page tables, which is a one-stage mapping
from virtual memory to machine memory. All modern x86 CPUs include a memory management
unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance.
However, in a virtual execution environment, virtual memory virtualization involves sharing the
physical system memory in RAM and dynamically allocating it to the physical memory of the
VMs.

That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The
guest OS continues to control the mapping of virtual addresses to the physical memory addresses
of VMs. But the guest OS cannot directly access the actual machine memory. The VMM is
responsible for mapping the guest physical memory to the actual machine memory. Figure 3.12
shows the two-level memory mapping procedure.
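
A toy version of the two-stage mapping, with page tables reduced to Python dictionaries (real MMUs use multi-level page tables, and hypervisors fold the two stages into shadow or nested page tables):

    # Stage 1 is maintained by the guest OS, stage 2 by the VMM.
    guest_page_table = {0x0000: 0x2000, 0x1000: 0x3000}  # guest VA -> guest PA
    vmm_p2m_table    = {0x2000: 0x8000, 0x3000: 0x5000}  # guest PA -> machine

    def translate(guest_va):
        guest_pa = guest_page_table[guest_va]   # stage 1: guest OS mapping
        return vmm_p2m_table[guest_pa]          # stage 2: VMM mapping

    print(hex(translate(0x1000)))   # 0x5000: the machine address actually used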
4. I/O Virtualization

I/O virtualization involves managing the routing of I/O requests between virtual devices and the
shared physical hardware. At the time of this writing, there are three ways to implement I/O
virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is
the first approach for I/O virtualization. Generally, this approach emulates well-known, real-world
devices.

All the functions of a device or bus infrastructure, such as device enumeration, identification,
interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as
a virtual device. The I/O access requests of the guest OS are trapped in the VMM which interacts
with the I/O devices. The full device emulation approach is shown in Figure 3.14.
A single hardware device can be shared by multiple VMs that run concurrently. However, software
emulation runs much slower than the hardware it emulates. The para-virtualization method of I/O
virtualization is typically used in Xen. It is also known as the split driver model, consisting of a
frontend driver and a backend driver. The frontend driver runs in Domain U and the backend
driver runs in Domain 0. They interact with each other via a block of shared memory. The
frontend driver manages the I/O requests of the guest OSes and the backend driver is responsible
for managing the real I/O devices and multiplexing the I/O data of different VMs. Although para-
I/O-virtualization achieves better device performance than full device emulation, it comes with a
higher CPU overhead.
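
A toy sketch of the split driver model (the shared-memory ring is reduced to a Python deque, and the drivers to plain functions): the frontend in Domain U queues guest I/O requests, and the backend in Domain 0 drains the ring and multiplexes the requests onto the real device:

    from collections import deque

    shared_ring = deque()          # stands in for the shared-memory block

    def frontend_submit(domain, request):
        """Frontend driver in Domain U: forward a guest I/O request."""
        shared_ring.append((domain, request))

    def backend_service():
        """Backend driver in Domain 0: multiplex requests onto the device."""
        while shared_ring:
            domain, request = shared_ring.popleft()
            print(f"Domain 0 issues {request!r} to the physical disk for {domain}")

    frontend_submit("DomU-1", "read block 42")
    frontend_submit("DomU-2", "write block 7")
    backend_service()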

Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native
performance without high CPU costs. However, current direct I/O virtualization implementations
focus on networking for mainframes. There are a lot of challenges for commodity hardware
devices. For example, when a physical device is reclaimed (required by workload migration) for
later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary memory
locations) that can function incorrectly or even crash the whole system. Since software-based I/O
virtualization requires a very high overhead of device emulation, hardware-assisted I/O
virtualization is critical. Intel VT-d supports the remapping of I/O DMA transfers and device-
generated interrupts. The architecture of VT-d provides the flexibility to support multiple usage
models that may run unmodified, special-purpose, or “virtualization-aware” guest OSes.

5. Virtualization in Multi-Core Processors

Virtualizing a multi-core processor is relatively more complicated than virtualizing a uni-core
processor. Though multicore processors are claimed to have higher performance by integrating
multiple processor cores in a single chip, multi-core virtualization has raised some new challenges
to computer architects, compiler constructors, system designers, and application programmers.
There are mainly two difficulties: Application programs must be parallelized to use all cores fully,
and software must explicitly assign tasks to the cores, which is a very complex problem.

Concerning the first challenge, new programming models, languages, and libraries are needed to
make parallel programming easier. The second challenge has spawned research involving
scheduling algorithms and resource management policies. Yet these efforts cannot balance well
among performance, complexity, and other issues. What is worse, as technology scales, a new
challenge called dynamic heterogeneity is emerging to mix the fat CPU core and thin GPU cores
on the same chip, which further complicates the multi-core or many-core resource management.
The dynamic heterogeneity of hardware infrastructure mainly comes from less reliable transistors
and increased complexity in using the transistors.

2.7 Desktop Virtualization:


Desktop virtualization is a method of simulating a user workstation so it can be accessed from a
remotely connected device. By abstracting the user desktop in this way, organizations can allow
users to work from virtually anywhere with a network connection, using any desktop, laptop,
tablet, or smartphone to access enterprise resources, without regard to the device or operating
system employed by the remote user.

Remote desktop virtualization is also a key component of digital workspaces. Virtual desktop
workloads run on desktop virtualization servers, which typically execute on virtual machines
(VMs) either at on-premises data centers or in the public cloud. Since the user device is basically
a display, keyboard, and mouse, a lost or stolen device presents a reduced risk to the organization.
All user data and programs exist on the desktop virtualization server, not on client devices.

Benefits of Desktop Virtualization:

1. Resource Utilization: Since IT resources for desktop virtualization are concentrated in a data
center, resources are pooled for efficiency. The need to push OS and application updates to end-
user devices is eliminated, and virtually any desktop, laptop, tablet, or smartphone can be used to
access virtualized desktop applications.

IT organizations can thus deploy less powerful and less expensive client devices since they are
basically only used for input and output.
2. Remote Workforce Enablement: Since each virtual desktop resides in central servers, new
user desktops can be provisioned in minutes and become instantly available for new users to
access. Additionally, IT support resources can focus on issues on the virtualization servers, with
little regard to the actual end-user device being used to access the virtual desktop. Finally, since
all applications are served to the client over a network, users have the ability to access their
business applications virtually anywhere there is internet connectivity. If a user leaves the
organization, the resources that were used for their virtual desktop can then be returned to centrally
pooled infrastructure.

3. Security: IT professionals rate security as their biggest challenge year after year. By removing
OS and application concerns from user devices, desktop virtualization enables centralized security
control, with hardware security needs limited to virtualization servers, and an emphasis on identity
and access management with role-based permissions that limit users only to those applications and
data they are authorized to access. Additionally, if an employee leaves an organization there is no
need to remove applications and data from user devices; any data on the user device is ephemeral
by design and does not persist when a virtual desktop session ends.

Working principles of Desktop Virtualization:

Remote desktop virtualization is typically based on a client/server model, where the organization’s
chosen operating system and applications run on a server located either in the cloud or in a data
center. In this model all interactions with users occur on a local device of the user’s choosing,
reminiscent of the so-called ‘dumb’ terminals popular on mainframes and early Unix systems.

Types of Desktop Virtualization:

The three most popular types of desktop virtualization are

1. Virtual desktop infrastructure (VDI)

2. Remote desktop services (RDS)

3. Desktop-as-a-Service (DaaS).

VDI simulates the familiar desktop computing model as virtual desktop sessions that run on VMs
either in an on-premises data center or in the cloud. Organizations that adopt this model manage
the desktop virtualization server as they would any other application server on-premises. Since all
end-user computing is moved from users back into the data center, the initial deployment of servers
to run VDI sessions can be a considerable investment, tempered by eliminating the need to
constantly refresh end-user devices.

RDS is often used where a limited number of applications need to be virtualized, rather than a full
Windows, Mac, or Linux desktop. In this model, applications are streamed to the local device,
which runs its own OS. Because only the applications are virtualized, RDS systems can offer a
higher density of users per VM.

DaaS shifts the burden of providing desktop virtualization to service providers, which greatly
alleviates the IT burden in providing virtual desktops. Organizations that wish to move IT
expenses from capital expense to operational expenses will appreciate the predictable monthly
costs that DaaS providers base their business model on.

Desktop Virtualization vs. Server Virtualization:

In server virtualization, a server OS and its applications are abstracted into a VM from the
underlying hardware by a hypervisor. Multiple VMs can run on a single server, each with its own
server OS, applications, and all the application dependencies required to execute as if it were
running on bare metal.

Desktop virtualization abstracts the client software (OS and applications) from a physical thin
client, which connects to applications and data remotely, typically via the internet. This abstraction
enables users to utilize any number of devices to access their virtual desktop. Desktop
virtualization can greatly increase an organization's need for bandwidth, depending on the number
of concurrent users during peak periods.

Desktop Virtualization vs. App Virtualization:

Application virtualization insulates executing programs from the underlying device, whereas
desktop virtualization abstracts the entire desktop – OS and applications – which is then
accessible from virtually any client device.

Application virtualization simplifies the installation of each individual application: it is
installed once on a server and then virtualized out to the various end-user devices on which it
executes. Client devices are sent a packaged, pre-configured executable, which eases deployment.
A virtualized application exists as a single instance on the application server, so maintenance is
greatly simplified: only one instance needs to be updated, and if an application is retired, deleting
it from the application server also deletes it for all users, wherever they are. Further, since
virtualized applications are packaged in their own 'containers', they cannot interact with each
other or cause other applications to fail. Finally, since virtualized applications are independent of
the underlying device OS, they can be used on any endpoint, whether Windows, iOS, or
Linux/Android.

However, application virtualization is not for every application. Compute- and graphics-intensive
applications can suffer from slowdowns that cause visible lag during rendering, and a solid
broadband connection is necessary to deliver a user experience comparable to local device
applications.

2.8 Network Virtualization:


Network Virtualization is a process of logically grouping physical networks and making them
operate as single or multiple independent networks called Virtual Networks.

Tools for Network Virtualization :

1. Physical switch OS – the operating system of the physical switch must itself provide network
virtualization functionality.
2. Hypervisor – uses either built-in networking or third-party software to provide the
functionalities of network virtualization.

The basic functionality of the OS is to provide the application or the executing process with a
simple set of instructions. System calls generated by the OS and executed through the libc library
are comparable to the service primitives provided at the interface between the application and the
network through the SAP (Service Access Point). The hypervisor is used to create a virtual switch
and configure virtual networks on it. Third-party software can be installed onto the hypervisor,
replacing the hypervisor's native networking functionality. A hypervisor allows us to
have various VMs all working optimally on a single piece of computer hardware.

Functions of Network Virtualization :


• It enables the functional grouping of nodes in a virtual network.
• It enables the virtual network to share network resources.
• It allows communication between nodes in a virtual network without routing of frames.
• It restricts management traffic.
• It enforces routing for communication between virtual networks.

Network Virtualization in Virtual Data Center:

1. Physical Network

Physical components: Network adapters, switches, bridges, repeaters, routers and hubs.

Grants connectivity among physical servers running a hypervisor, between physical servers
and storage systems, and between physical servers and clients.

2. VM Network

Consists of virtual switches. Provides connectivity to the hypervisor kernel. Connects to the
physical network. Resides inside the physical server.

Advantages of Network Virtualization :


Improves manageability -
• Grouping and regrouping of nodes are eased.
• Configuration of VM is allowed from a centralized management workstation using
management software.
Reduces CAPEX -
• The requirement to set up separate physical networks for different node groups is reduced.
Improves utilization -
• Multiple VMs are enabled to share the same physical network, which enhances the
utilization of network resources.
Enhances performance -
• Network broadcast is restricted and VM performance is improved.
Enhances security -
• Sensitive data is isolated from one VM to another VM.
• Access to nodes is restricted in a VM from another VM.

Disadvantages of Network Virtualization :


• The network must be managed in the abstract, rather than as concrete physical devices.
• It needs to coexist with physical devices in a cloud-integrated hybrid environment.
• Increased complexity.
• Upfront cost.
• Possible learning curve.

Examples of Network Virtualization :


Virtual LAN (VLAN) -
• The performance and speed of busy networks can be improved by VLAN.
• VLAN can simplify additions or any changes to the network.
Network Overlays -
• An encapsulation protocol called VXLAN provides a framework for overlaying
virtualized layer 2 networks over layer 3 networks (a sketch of the VXLAN header
follows this list).
• The Generic Network Virtualization Encapsulation protocol (GENEVE) provides a new
approach to encapsulation, designed to provide control-plane independence between the
endpoints of the tunnel.
Network Virtualization Platform: VMware NSX -
• VMware NSX Data Center delivers networking and security components such as
switching, firewalling, and routing that are defined and consumed in software.
• It brings the operational model of a virtual machine (VM) to the network.
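
As promised above, here is a sketch of the VXLAN encapsulation header from RFC 7348 (the header layout is real; the example VNI is arbitrary). The header is just 8 bytes: a flags byte with the I bit set to mark a valid VNI, reserved bits, and a 24-bit VXLAN Network Identifier:

    import struct

    def vxlan_header(vni):
        """Build the 8-byte VXLAN header for a 24-bit VNI (RFC 7348)."""
        assert 0 <= vni < 2**24, "VNI is a 24-bit value"
        flags_word = 0x08 << 24        # I flag set; other bits reserved
        vni_word = vni << 8            # 24-bit VNI; low 8 bits reserved
        return struct.pack("!II", flags_word, vni_word)

    print(vxlan_header(5001).hex())    # '0800000000138900'

The outer UDP/IP headers added around this carry the frame across the layer 3 network, which is what makes the overlay independent of the physical topology.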

Applications of Network Virtualization :


• Network virtualization may be used in the development of application testing to mimic
real-world hardware and system software.
• It helps us to integrate several physical networks into a single network, or to separate a
single physical network into multiple logical networks.
• In the field of application performance engineering, network virtualization allows the
simulation of connections between applications, services, dependencies, and end-users for
software testing.
• It helps us to deploy applications in a quicker time frame, thereby supporting a faster go-
to-market.
• Network virtualization helps the software testing teams to derive actual results with
expected instances and congestion issues in a networked environment.

2.9 Storage Level Virtualization


Storage virtualization is a process of pooling physical storage devices so that IT may address a
single "virtual" storage unit. It offered considerable economic and operational savings over bare
metal storage but is now mostly overshadowed by the cloud paradigm.
What is Storage Virtualization?
In a storage server, the RAID levels and controllers supply a basic, built-in form of storage
virtualization and are an important component of such servers. Applications and operating
systems on the device can directly access the discs for writing; local storage is configured by the
controllers into RAID groups, and the operating system sees the storage according to that
configuration. Because the storage is abstracted, however, the controller is in charge of figuring
out how to write or retrieve the data that the operating system requests.
Types of Storage Virtualization
Below are some types of Storage Virtualization.
• Kernel-level virtualization: A separate version of the Linux kernel runs alongside the
main kernel, allowing one host to execute several servers at the kernel level.
• Hypervisor virtualization: A layer known as a hypervisor is installed between the
operating system and the hardware. It enables the effective operation of several operating
systems.
• Hardware-assisted virtualization: Similar to full virtualization and para-virtualization,
but it requires hardware support.
• Para-virtualization: The foundation of para-virtualization is a hypervisor, which handles
software emulation and trapping.
Methods of Storage Virtualization
• Network-based storage virtualization: The most popular type of virtualization used by
businesses is network-based storage virtualization. All of the storage devices in an FC or
iSCSI SAN are connected to a network device, such as a smart switch or specially designed
server, which displays the network's storage as a single virtual pool.
• Host-based storage virtualization: Host-based storage virtualization is software-based
and most often seen in HCI systems and cloud storage. In this type of virtualization, the
host, or a hyper-converged system made up of multiple hosts, presents virtual drives of
varying capacity to the guest machines, whether they are VMs in an enterprise
environment, physical servers or computers accessing file shares or cloud storage.
• Array-based storage virtualization: The most common form of array-based
virtualization is when a storage array serves as the main storage controller and is equipped
with virtualization software. This allows the array to share storage resources with other
arrays and to present various physical storage types that can be used as storage tiers.
How Storage Virtualization Works?
• Physical storage hardware is replicated in a virtual volume during storage virtualization.
• A single server is utilized to aggregate several physical discs into a grouping that creates a
basic virtual storage system.
• Operating systems and programs can access and use the storage because a virtualization
layer separates the physical discs from the virtual volume.
• The physical discs are separated into objects called logical volumes (LV), logical unit
numbers (LUNs), or RAID groups, which are collections of tiny data blocks.
• RAID arrays can serve as virtual storage in a more complex setting: many physical drives
simulate a single storage device that copies data to several discs in the background while
striping it (a toy sketch of this block-to-disk mapping follows this list).
• The virtualization program has to take an extra step in order to access data from the
physical discs.
• Block-level and file-level storage environments can both be used to create virtual storage.
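
A toy model of that block-level mapping (RAID 0 style striping reduced to arithmetic; the disk count and stripe size are arbitrary choices): the virtualization layer translates each logical block address (LBA) on the virtual volume into a physical disk and block:

    NUM_DISKS = 3
    STRIPE_SIZE = 4                    # blocks per stripe unit

    def map_lba(lba):
        """Map a virtual-volume LBA to (disk index, physical block)."""
        stripe, offset = divmod(lba, STRIPE_SIZE)
        disk = stripe % NUM_DISKS                      # round-robin striping
        physical = (stripe // NUM_DISKS) * STRIPE_SIZE + offset
        return disk, physical

    for lba in (0, 4, 8, 12):
        disk, block = map_lba(lba)
        print(f"virtual LBA {lba:2d} -> disk {disk}, block {block}")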
Advantages of Storage Virtualization
Below are some Advantages of Storage Virtualization.
• Advanced features like redundancy, replication, and disaster recovery are all possible with
the storage devices.
• It enables organizations to tailor storage to their own business needs.
• Data is kept in more practical places, farther away from any particular host, so the data is
not necessarily compromised in the event of a host failure.
• IT operations may now provision, divide, and secure storage in a more flexible way by
abstracting the storage layer.
Disadvantages of Storage Virtualization
Below are some Disadvantages of Storage Virtualization.
• Storage Virtualization still has limitations which must be considered.
• Data security is still a problem. Virtual environments can draw new types of cyberattacks,
despite the fact that some may contend that virtual computers and servers are more secure
than physical ones.
• The deployment of storage virtualization is not always easy, and technological obstacles
such as scalability must still be overcome.
• Virtualization breaks the end-to-end view of your data; the virtualized storage solution
must be integrated with existing tools and systems.
2.10 Operating system based Virtualization
Operating System-based Virtualization is also known as Containerization. It allows multiple
isolated user-space instances called containers to run on a single operating system (OS) kernel.

Traditional Virtualization Architecture (Using VMs)
• Each Virtual Machine (VM) operates as an isolated environment, having its own OS.
• This results in higher resource consumption (CPU, memory, storage).
• A Virtual Machine Management layer (the hypervisor) is responsible for creating and
controlling the VMs.

Virtualization Architecture
• The Host Operating System (OS) runs above the physical hardware and provides the
environment for the hypervisor.
• The Hypervisor: Manages VMs & Allocates resources from the physical hardware to the
VMs.
• The Hardware (Virtualization Host) is the physical machine that provides necessary CPU,
memory, storage and I/O to run the hypervisor and VMs.
• Multiple VMs run simultaneously on the same physical hardware.
How OS-Based Virtualization Works
OS-Based Virtualization works as follows:
• The host OS kernel is shared among all containers, unlike full virtualization (e.g., VMs)
where each VM has its own kernel.
• The kernel enforces isolation between containers using namespaces (for process, network,
and filesystem isolation) and cgroups (control groups) for resource allocation (CPU,
memory, disk I/O, network); a minimal namespace demonstration follows this list.
• cgroups limit and prioritize resource usage (CPU, memory, disk, network) per container.
• The kernel ensures that a container cannot exceed its allocated resources (unless explicitly
allowed).
• Namespaces prevent processes in one container from seeing or interfering with processes
in another.
• Programs inside a container cannot access resources outside unless explicitly granted (e.g.,
mounted volumes, network ports).
• The overhead comes from kernel-level isolation mechanisms (namespaces, cgroups), but
it’s minimal compared to full virtualization.
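
The namespace demonstration promised above (Linux only, requires root; unshare(2) and sethostname(2) are real system calls, reached here via ctypes and the standard library): the process detaches into a new UTS namespace, so changing its hostname is invisible to the rest of the system:

    import ctypes, socket

    CLONE_NEWUTS = 0x04000000          # new UTS namespace (hostname/domain)
    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    # Detach this process into its own UTS namespace (needs CAP_SYS_ADMIN).
    if libc.unshare(CLONE_NEWUTS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (need root?)")

    socket.sethostname("container-1")  # visible only inside this namespace
    print("hostname inside namespace:", socket.gethostname())

Container runtimes combine this with PID, mount, and network namespaces, plus cgroup limits, to build the full isolation described above.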
Operating System Based Services
Some major operating system based services are mentioned below:
• Backup and Recovery: Host operating systems can be utilized to back up and restore
virtual machines. Backup software tools can be used to ensure data safety and system
recovery.
• Security Management: Host operating systems help manage the security of virtual
machines. This includes configuring firewalls, installing antivirus software and applying
other essential security settings.
• Integration with Directory Services: Host operating systems can be integrated with
directory services like Active Directory, enabling centralized management of users and
groups.
Operating System Based Operations
Various major operations of Operating System Based Virtualization are described below:
1. Hardware capabilities can be employed, such as the network connection and CPU.
2. The Host OS can interact with connected peripherals such as a webcam, printer,
keyboard, or scanner.
3. The Host OS can be used to read or write data in files, folders, and network shares.
Features of OS- Based Virtualization
• Resource isolation: Operating system based virtualization provides a high level of
resource isolation which allows each container to have its own set of resources, including
CPU, memory and I/O bandwidth.
• Lightweight: Containers are lighter compared to traditional virtual machines as they share
the same host operating system. This results in faster startup and lower resource usage.
• Portability: Containers are highly portable. They can be easily moved from one
environment to another without the need to modify the underlying application.
• Scalability: Containers can be easily scaled up or down based on the application
requirements. This makes it easier for applications to be highly responsive to changes in
demand.
• Security: Containers provide a high level of security by isolating the containerized
application from the host operating system and other containers running on the same
system.
• Reduced Overhead: Containers incur less overhead than traditional virtual machines as
they do not need to emulate a full hardware environment.
• Easy Management: Containers are easy to manage as they can be started, stopped and monitored using simple commands (see the sketch after this list).
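As an illustration of this "simple commands" workflow, the sketch below drives a container's full lifecycle with the Docker Python SDK. It assumes Docker is installed with its daemon running and the docker package available; the image nginx:alpine and the name demo-web are arbitrary.

import docker

client = docker.from_env()  # talk to the local Docker daemon

# Start: launch a detached container from a small web-server image.
web = client.containers.run("nginx:alpine", detach=True, name="demo-web")

# Monitor: list running containers and read this one's recent logs.
for c in client.containers.list():
    print(c.name, c.status)
print(web.logs(tail=10).decode())

# Stop and clean up: the entire lifecycle is a handful of calls.
web.stop()
web.remove()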
Pros of OS-Based Virtualization
• Resource Efficiency: Operating system based virtualization allows for greater resource
efficiency as containers do not need to emulate a complete hardware environment, which
reduces resource overhead.
• High Scalability: Containers can be quickly and easily scaled up or down depending on
the demand, which makes it easy to respond to changes in the workload.
• Easy Management: Containers are easy to manage as they can be managed through
simple commands, which makes it easy to deploy and maintain large numbers of
containers.
• Reduced Costs: Operating system based virtualization can significantly reduce costs, as
it requires fewer resources and infrastructure than traditional virtual machines.
• Faster Deployment: Containers can be deployed quickly, reducing the time required to
launch new applications or update existing ones.
• Portability: Containers are highly portable, making it easy to move them from one
environment to another without requiring changes to the underlying application.
Cons of OS-Based Virtualization
• Security: Operating system based virtualization may pose security risks as containers
share the same host operating system, which means that a security breach in one container
could potentially affect all other containers running on the same system.
• Limited Isolation: Containers may not provide complete isolation between applications,
which can lead to performance degradation or resource contention.
• Complexity: Operating system based virtualization can be complex to set up and manage,
requiring specialized skills and knowledge.
• Dependency Issues: Containers may have dependency issues with other containers or the
host operating system, which can lead to compatibility issues and hinder deployment.
• Limited Hardware Access: Containers may have limited access to hardware resources
which can limit their ability to perform certain tasks or applications that require direct
hardware access.
2.11 Application virtualization:
Application virtualization is a process that deceives a standard app into believing that it interfaces directly with the operating system's capabilities when, in fact, it does not.
This ruse requires a virtualization layer inserted between the app and the OS. This layer, or
framework, must run an app's subsets virtually and without impacting the subjacent OS. The
virtualization layer replaces a portion of the runtime environment typically supplied by the OS,
transparently diverting files and registry log changes to a single executable file.
By diverting the app's processes into one file instead of many dispersed across the OS, the app
easily operates on a different device, and formerly incompatible apps can now run adjacently.
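The following toy Python sketch illustrates the redirection idea only: it swaps in a wrapper that diverts file writes into a per-app sandbox directory, so the app believes it is writing to the real filesystem. Real application-virtualization layers hook OS APIs (and, on Windows, the registry) at a far lower level; the sandbox path and file names here are purely illustrative.

import builtins
import os

SANDBOX = "/tmp/app_sandbox"      # stands in for the app's single container file
_real_open = builtins.open

def virtual_open(path, mode="r", *args, **kwargs):
    # Divert writes into the sandbox; reads fall back to the real
    # file when no sandboxed copy exists (copy-on-write behaviour).
    sandboxed = os.path.join(SANDBOX, str(path).lstrip("/"))
    if "w" in mode or "a" in mode:
        os.makedirs(os.path.dirname(sandboxed), exist_ok=True)
        return _real_open(sandboxed, mode, *args, **kwargs)
    if os.path.exists(sandboxed):
        return _real_open(sandboxed, mode, *args, **kwargs)
    return _real_open(path, mode, *args, **kwargs)

builtins.open = virtual_open      # the "virtualization layer" is now in place

# The app thinks it wrote /etc/myapp.conf; the file really lands in
# /tmp/app_sandbox/etc/myapp.conf, leaving the host OS untouched.
with open("/etc/myapp.conf", "w") as f:
    f.write("setting=1\n")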
Used in conjunction with application virtualization is desktop virtualization—the abstraction of
the physical desktop environment and its related app software from the end-user device that
accesses it.
Executing Application Virtualization:
Application and desktop virtualization are typically delivered as Desktop as a Service (DaaS), managed by a hypervisor (also known as a virtual machine monitor, or VMM). A VMM infrastructure, implemented in software, firmware, and/or hardware, creates and operates virtual machines (VMs). A host (server) connects to multiple guests (endpoints).
Application and desktop virtualization enable centralized management of the complete desktop environment ecosystem. Organizations need to patch only a few images of applications and virtualized desktops rather than a myriad of endpoints, thereby deploying updates consistently, completely, and rapidly.
Since software and security updates are stored on images in data center servers, endpoint device
exposure to vulnerabilities such as nascent malware or app manipulations is significantly reduced.
Benefits of Application Virtualization:
These server images facilitate regulatory compliance with standards such as the Payment Card Industry Data Security Standard (PCI DSS) and the Health Insurance Portability and Accountability Act (HIPAA). Since data is neither processed nor stored on endpoint devices, a compromised device exposes no data; the endpoint is merely a display terminal.
Application and desktop virtualization both support incident management, resolving many adverse desktop events by simply refreshing a virtualized image and restoring the desktop environment to its previous state.
Other virtualization benefits include:
• Allows the running of legacy apps (e.g., those developed for end-of-life OS platforms like Windows 7 and XP).
• Enables cross-platform operations (e.g., running Windows apps on iOS, Android, macOS, and Chrome OS).
• Prevents conflicts with other virtualized apps (e.g., conflicting anti-malware software).
• Permits users to run multiple app instances; if not virtualized, many apps can detect a running instance and won't allow new ones.
However, some apps pose challenges to virtualization. For example, an application requiring a
device driver (which integrates into the OS and is thus OS-specific) can affect the use of
peripherals like printers. Also, 16-bit applications and apps requiring extensive OS-integration are
problematic to virtualize (e.g., some anti-virus programs). Latency caused by virtualization can
drag graphics-intensive apps during the rendering process.

How Server Virtualization and Application Virtualization Differ
Although the two processes share key features—such as lowering costs, bolstering data security,
and central control—they fulfill separate functions.
Server virtualization refers to the use of one or several servers clustered into multiple server groups. If a data center has 20 physical servers, they can be virtualized into two groups of 10, for example, or into one group of 5 and another of 15. To the workloads they host, these virtual servers behave no differently from 5, 10, or 15 physical servers operating individually.
Conversely, one physical server can be partitioned into separate multiple virtual servers, helping
to maximize organizational resources and facilitating recovery from unexpected server outages.
With virtual servers, further cost reductions are realized by reducing organizational needs for
multiple servers, which leads to lower maintenance and lower environmental and power
expenditures.
Virtualizing apps means that they run free of local dependencies, delivered through another operating system or a browser. An example would be virtualizing Microsoft PowerPoint to run on Ubuntu inside an Opera browser.
The implementation of both environments differs, as well. Desktop virtualization impacts
network architecture, transmission protocol, and the data center while server virtualization only
affects changes to the server.
Advantages of Application Virtualization:
To lower costs and improve productivity, organizations must evolve their digital workspace. This
means migrating networking assets from on-premises to the cloud. DaaS is a result of this paradigm shift.
Today's digital workspace aggregates the devices, apps, and services that users require. These
workspaces must be managed securely and unified to allow for common access across the
enterprise.
DaaS solutions can be deployed with VMware Horizon, a desktop virtualization product that
streamlines the delivery, protection, and management of desktops and apps. With unsurpassed simplicity, speed, flexibility, and scale, Horizon significantly reduces costs compared to conventional VDI while assuring a consistent and engaging UX across any device, anywhere, at any time.
Horizon supports workplace mobility and allows users to access multiple OS-specific apps from
the cloud quickly and simultaneously across any device.
Specific Horizon products include:
• VMware Horizon: a single platform that streamlines quick and efficient delivery and management of virtual desktops and published apps in the cloud, on-premises, or in a multi-cloud or hybrid configuration, to any device at any location.
• Horizon Cloud on Microsoft Azure: combines the advantages of software-as-a-service (SaaS) with intrinsically flexible IaaS for an enhanced digital workspace at a lower cost.
• Horizon Apps: provides end-users fast, easy access to SaaS apps, their published apps, and mobile apps from a unified workspace.
• Horizon Cloud on IBM Cloud: eliminates the cost and nuisance of managing on-premises infrastructure while easily and quickly delivering cloud-hosted apps and desktops to any endpoint.
• App Volumes: for application and UEM usage, App Volumes quickly delivers apps to desktop environments and allows IT to scalably provision apps to users in an instant.