Unit 2 For Students
• Service requester: The service requester (or web service client) locates entries in the broker registry using
various find operations and then binds to the service provider to invoke one of its web services. The service
requester asks the service provider to run a specific service; it can be an entire system, an application, or
another service. The service contract specifies the rules that the service provider and consumer must follow
when interacting with each other. Service providers and consumers can belong to different departments,
organizations, and even industries.
How does service-oriented architecture work?
• Throughout the life of a web service, there may be a variety of clients requesting
resources. Different clients can consume different representations of the same resource.
Therefore, a representation can take various forms, such as an image, a text file, XML,
or JSON. However, all clients will use the same URI with appropriate Accept header
values to access the same resource in different representations (a small sketch follows
this list).
• Code on demand: This is an optional constraint. According to it, servers can also send executable code to the client. Examples
of code on demand include compiled components such as Java applets and scripts such as JavaScript.
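• To make the content-negotiation point concrete, the sketch below requests the same resource URI with different Accept header values using Python's requests library. The URI and the representations offered are assumptions made up for illustration, not part of any real service.

```python
# Minimal sketch of content negotiation: the same URI is requested with
# different Accept headers, and the server chooses the representation.
# The endpoint https://api.example.com/books/42 is hypothetical.
import requests

URI = "https://api.example.com/books/42"

# Ask for a JSON representation of the resource.
json_resp = requests.get(URI, headers={"Accept": "application/json"})
print(json_resp.headers.get("Content-Type"))  # e.g. application/json

# Ask for an XML representation of the exact same resource.
xml_resp = requests.get(URI, headers={"Accept": "application/xml"})
print(xml_resp.headers.get("Content-Type"))   # e.g. application/xml
```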
REST API
Rules of REST API
• There are certain rules which should be kept in mind while creating REST API
endpoints.
• REST is resource (noun) based rather than action (verb) based. This means
that the URI of a REST API should identify a noun. Example: /api/users is a
good endpoint, while /api?type=users is a bad way of designing a REST API.
• HTTP verbs are used to identify the action. Some of the HTTP verbs are GET,
POST, PUT, PATCH, and DELETE.
• A web application should be organized into resources (such as users) and should
then use HTTP verbs like GET, PUT, POST, and DELETE to operate on those resources.
As a developer, it should be clear what needs to be done just by looking at the
endpoint and the HTTP method used (see the sketch after this list).
• GET: to request information from a resource.
• POST: to send information to a specific resource.
• PUT: to update the information of a particular resource.
• DELETE: to delete a resource.
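• As a concrete illustration of resource-oriented endpoints and HTTP verbs, the sketch below calls a hypothetical /api/users resource with Python's requests library. The host, payload fields, and IDs are assumptions for illustration only.

```python
# Sketch: the endpoint names a resource (/api/users); the HTTP verb names the action.
# BASE and all payloads/IDs below are hypothetical.
import requests

BASE = "https://example.com/api"

# GET: request information from the users resource.
users = requests.get(f"{BASE}/users").json()

# POST: send information to create a new user.
created = requests.post(f"{BASE}/users", json={"name": "Asha"}).json()

# PUT: update the information of a particular user.
requests.put(f"{BASE}/users/42", json={"name": "Asha K."})

# DELETE: delete a user resource.
requests.delete(f"{BASE}/users/42")
```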
What is a system of systems?
• There are four types of system of systems: directed, acknowledged, collaborative and virtual. In
most cases, an SoS is a combination of these types and may change over time. The type of an SoS is
based on the degree of independence of its constituents, as noted below:
• Directed. The SoS is created and managed to fulfill a specific purpose. The constituent systems
can operate independently, but their independent operation is treated as less important than the
central managed purpose.
• Acknowledged. The SoS has a specific purpose, but the constituent systems maintain independent
ownership, objectives and development. Changes made in this SoS type are based on cooperative
agreements between the constituent systems and the SoS.
• Collaborative. Component systems freely interact with each other to fulfill a defined purpose.
Management authorities have little impact over the behavior of the component systems.
• Virtual. The SoS does not have central management authority or a centrally agreed-upon purpose.
Typically, the acquisition of a virtual SoS is unplanned and is made up of component systems that
may not have been designed to be integrated. Once its use is over, the components are normally
disassembled and no longer operate in an SoS.
Example
• Embedded automotive systems are an example of a system of systems,
as they have numerous onboard computing, control and
communication-based systems that all work together to improve
safety, fuel efficiency and emissions. Safety systems could be
considered their own SoS, with airbag deployment, collision impact
warnings, seatbelt pretensioners, antilock and differential braking, as
well as traction and stability control all working together to increase
vehicle safety.
What Is Publish-Subscribe Pattern?
• The strongest use case for push APIs is for instances in which you
have time-sensitive data that changes often, and clients need to be
updated as soon as that data changes.
• Push APIs allow the server to send updates to the client whenever new
data becomes available, without a need for the client to explicitly
request it. When the server receives new information, it pushes the
data to the client application, no request needed. This makes it a more
efficient communication approach for data that changes often.
Advantages of publish/subscribe pattern
• Decoupled/loosely coupled components
• Pub/Sub allows you to separate the communication and application logic easily, thereby creating isolated
components. This results in:
• Ease of development
• Since pub/sub is not dependent on a particular programming language, protocol, or technology, any supported message broker
can be easily integrated using any programming language. Additionally, pub/sub can be used as a bridge to enable
communication between components built using different languages by managing inter-component communications.
• This leads to easy integrations with external systems without having to create functionality to facilitate communications or
worry about security implications. We can simply publish a message to a topic and let the external application subscribe to the
topic, eliminating the need for direct interaction with the underlying application.
• Increased scalability & reliability
• This messaging pattern is considered elastic—we do not have to pre-define a set number of publishers or subscribers. They
can be added to a required topic depending on the usage.
• The separation between communication and logic also leads to easier troubleshooting as developers can focus on the specific
component without worrying about it affecting the rest of the application.
• Pub/sub also improves the scalability of an application by allowing the message broker architecture, filters, and users to be
changed without affecting the underlying components. With pub/sub, even with complex architectural changes, a new messaging
implementation is simply a matter of changing the topic, provided the message formats are compatible.
• Testability improvements
• With the modularity of the overall application, tests can be targeted towards each module, creating a more streamlined testing
pipeline. This drastically reduces the test case complexity by targeting tests for each component of the application.
• The pub/sub pattern also helps to easily understand the origin and destination of the data and the information
flow. It is particularly helpful in testing issues related to:
• Data corruption
• Formatting
• Security
Disadvantages of pub/sub pattern
• Pub/Sub is a robust messaging service, yet it is not the best option for all
requirements. Some of the shortcomings of this pattern are:
• Unnecessary complexity in smaller systems
• Pub/sub needs to be properly configured and maintained. Where scalability and a
decoupled nature are not vital factors for your app, implementing pub/sub will be a
waste of resources and lead to unnecessary complexity for smaller systems.
• Media streaming
• Pub/sub is not suitable when dealing with media such as audio or video as they
require smooth synchronous streaming between the host and the receiver. Because it
does not support synchronous end-to-end communications, pub/sub messaging is ill-
suited for:
• Video conferencing
• VOIP
• General media streaming applications
Elements of a Publish/Subscribe System
• A publisher submits a piece of information e (i.e., an event) to the pub/sub system by
executing the publish(e) operation.
• An event is structured as a set of attribute-value pairs.
• Each attribute has a name, a simple character string, and a type.
• The type is generally one of the common primitive data types defined in programming
languages or query languages (e.g. integer, real, string, etc.).
• On the subscriber’s side, interest in specific events is expressed through subscriptions.
• A subscription is a filter over a portion of the event content (or the whole of it).
• Expressed through a set of constraints that depend on the subscription language.
• A subscriber installs and removes a subscription from the pub/sub system by executing
the subscribe() and unsubscribe() operations respectively.
• An event e matches a subscription if it satisfies all the declared constraints on the
corresponding attributes.
• The task of verifying whether an event e matches a subscription is called matching (a
minimal sketch follows).
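• The sketch below is a minimal, in-memory illustration of these elements: events as attribute-value pairs, subscriptions as sets of constraints, and a matching step inside publish(). The class and function names are my own choices for illustration, not taken from any specific pub/sub product.

```python
# Minimal in-memory publish/subscribe sketch (illustrative names only).
# An event is a set of attribute-value pairs; a subscription is a filter
# expressed as constraints over those attributes.

class PubSub:
    def __init__(self):
        self._subscriptions = []  # list of (constraints, callback)

    def subscribe(self, constraints, callback):
        """constraints: dict mapping attribute name -> predicate over its value."""
        sub = (constraints, callback)
        self._subscriptions.append(sub)
        return sub

    def unsubscribe(self, sub):
        self._subscriptions.remove(sub)

    def _matches(self, event, constraints):
        # An event matches if it satisfies ALL declared constraints.
        return all(attr in event and pred(event[attr])
                   for attr, pred in constraints.items())

    def publish(self, event):
        for constraints, callback in self._subscriptions:
            if self._matches(event, constraints):   # the "matching" task
                callback(event)

bus = PubSub()
bus.subscribe({"type": lambda v: v == "stock", "price": lambda v: v > 100},
              lambda e: print("notified:", e))
bus.publish({"type": "stock", "symbol": "ABC", "price": 120.5})  # matches
bus.publish({"type": "stock", "symbol": "XYZ", "price": 80.0})   # filtered out
```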
Virtualization
• Virtualization refers to the representation of physical computing
resources in simulated form, created through software.
• This special layer of software (installed over active physical
machines) is referred to as the virtualization layer.
• This layer transforms the physical computing resources into a virtual
form which users use to satisfy their computing needs.
• Virtualization is the logical separation of physical resources from the
direct access of the users who use them to fulfill their service needs,
although, in the end, the physical resources are still responsible for
providing those services.
• Virtualization provides a level of logical abstraction that liberates user-
installed software (from the operating system and other system software
to application software) from being tied to a specific set of
hardware. Rather, users install everything on the logical
operating environment (rather than the physical one) created
through virtualization.
• In cloud computing, resource virtualization, which adds a layer of
software over physical computing resources to create virtual resources,
acts as a layer of abstraction. This abstraction makes it easier to offer
more flexible, reliable and powerful services.
• Any kind of computing resource can be virtualized.
• Apart from basic computing devices like the processor and primary memory,
other resources like storage, network devices (switches, routers, etc.),
communication links and peripheral devices (keyboard,
mouse, printer, etc.) can also be virtualized.
• It should be noted that in the case of core computing resources a
virtualized component can only be operational when a physical
resource empowers it from the back end. For example, a virtual
processor can only work when there is a physical processor linked
with it.
MACHINE OR SERVER LEVEL VIRTUALIZATION
• Machine virtualization (also called server virtualization) is the concept
of creating virtual machines (or virtual computers) on an actual physical
machine. The parent system on which the virtual machines run is
called the host system, and the virtual machines themselves are
referred to as guest systems.
• In a conventional computing system, there has always been a one-to-one
relationship between the physical computer and the operating system. At a
time, a single OS can run on it. Hardware virtualization
eliminates this limitation of having a one-to-one relationship between
physical hardware and operating system. It facilitates the running of
multiple computer systems, each with its own operating system, on a
single physical machine.
What is virtual layer?
• From the figure above, it can be seen that virtual machines are created over the virtualization layer.
• This virtualization layer is actually a set of control programs that creates the environment for the
virtual machines to run on.
• This layer provides the virtual machines with access to the system resources.
• It also controls and monitors the execution of the virtual machines over it.
• This software layer is referred to as the Hypervisor or Virtual Machine Monitor (VMM).
• The hypervisor abstracts the underlying software and/or hardware environments and presents
virtual system resources to its users.
• This layer also facilitates the existence of multiple VMs that are not bound to share the same
(underlying) OS kernel. For this reason, it becomes possible to run different operating systems
in the virtual machines created over a hypervisor.
• The hypervisor layer provides an administrative system console through which the virtual system
environment (like the number of virtual components to produce or the capacity of the components) can be
managed (a hedged sketch of such an interface follows).
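• As a concrete but hedged illustration of such an administrative interface, the sketch below uses the libvirt Python bindings, a management API that several hypervisors (for example KVM/QEMU and Xen) expose, to list virtual machines. It assumes the libvirt-python package is installed and that a local qemu:///system hypervisor is reachable; none of this is mandated by the text above.

```python
# Sketch: querying a hypervisor's management interface with libvirt
# (assumes libvirt-python is installed and qemu:///system is available).
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():   # every virtual machine defined on this host
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name()}: {state}")
finally:
    conn.close()
```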
Machine Virtualization Techniques
• There are two different techniques of server or machine virtualization:
• the hosted approach, and
• the bare metal approach.
Hosted Approach
• In this approach, an operating system is first installed on the physical
machine to activate it.
• This OS installed on the host machine is referred to as the host operating
system.
• The hypervisor is then installed over this host OS.
• This type of hypervisor is referred to as Type 2 hypervisor or Hosted
hypervisor.
• The host OS works as the first layer of software over the physical resources.
• The hypervisor is the second layer of software, and guest operating systems run
as the third layer of software.
• Products like VMware Workstation and Microsoft Virtual PC are the most
common examples of Type 2 hypervisors.
• Benefits: In this approach, the host OS supplies the hardware drivers
for the underlying physical resources. This eases the installation and
configuration of the hypervisor and makes Type-2 hypervisors
compatible with a wide variety of hardware platforms.
• Drawbacks: A hosted hypervisor does not have direct access to the
hardware resources and hence all of the requests from virtual
machines must go through the host OS. This may degrade the
performance of the virtual machines. Another drawback of hosted
virtualization is the lack of support for real-time operating systems.
Since the underlying host OS controls the scheduling of jobs, it
becomes unrealistic to run a real-time OS inside a VM using hosted
virtualization.
Bare Metal Approach: Removal of the Host OS
• In this approach of machine virtualization, the hypervisor is directly
installed over the physical machine.
• The hypervisor is the first layer over the hardware resources; hence, the
technique is referred to as the bare metal approach.
• The VMM or the hypervisor communicates directly with the system
hardware.
• In this approach, the hypervisor acts as a low-level virtual machine
monitor and is also called a Type 1 hypervisor or native hypervisor.
• VMware's ESX and ESXi Servers, Microsoft's Hyper-V, and Xen
are some examples of bare-metal hypervisors.
• Benefits: Since the bare metal hypervisor can directly access the
hardware resources, in most cases it provides better performance
in comparison to a hosted hypervisor. For bigger applications like
enterprise data centers, bare-metal virtualization is more suitable
because it usually provides advanced features for resource and security
management. Administrators get more control over the host
environment.
• Drawbacks: As any hypervisor usually has a limited set of device
drivers built into it, bare metal hypervisors have limited
hardware support and cannot run on a wide variety of hardware
platforms.
Machine reference model
• Modern computing systems can be expressed in terms of the reference model described in
Figure below. It defines the interfaces between the levels of abstractions, which hide
implementation details. Virtualization techniques actually replace one of the layers and
intercept the calls that are directed towards it.
• At the bottom layer, the model for the hardware is expressed in terms of the Instruction
Set Architecture (ISA), which defines the instruction set for the processor, registers,
memory, and interrupt management. ISA is the interface between hardware and software,
and it is important to the operating system (OS) developer (System ISA) and developers
of applications that directly manage the underlying hardware (User ISA).
• The application binary interface (ABI) separates the operating system layer from the
applications and libraries, which are managed by the OS. The ABI covers details such as low-
level data types, alignment, and call conventions and defines a format for executable
programs. The ABI defines how data structures or computational routines are accessed in
machine code, which is a low-level, hardware-dependent format. Adhering to an ABI is the
job of the compiler.
• The highest level of abstraction is represented by the application programming interface
(API), which interfaces applications to libraries and/or the underlying operating system.
How data structures or computational routines are accessed in source code is defined by the API.
• For this purpose, the instruction set exposed by the hardware has been
divided into different security classes that define who can operate with
them.
• The first distinction can be made between privileged and nonprivileged
instructions.
• Nonprivileged instructions are those instructions that can be used without
interfering with other tasks because they do not access shared resources.
This category contains, for example, all the floating, fixed-point, and
arithmetic instructions.
• Privileged instructions are those that are executed under specific
restrictions and are mostly used for sensitive operations, which expose
(behavior-sensitive) or modify (control-sensitive) the privileged state.
Virtualization and Protection Rings
• Protection rings are a mechanism to protect data and functionality from
faults (fault tolerance) and malicious behavior (computer security).
• Computer operating systems provide different levels of access to resources.
• A protection ring is one of two or more hierarchical levels or layers of
privilege within the architecture of a computer system. Rings are arranged
in a hierarchy from most privileged (most trusted, usually numbered zero)
to least privileged (least trusted, usually with the highest ring number). On
most operating systems, Ring 0 is the level with the most privileges and
interacts most directly with the physical hardware such as the CPU and
memory.
• x86 CPU hardware actually provides four protection rings: 0, 1, 2, and 3.
Only rings 0 (Kernel) and 3 (User) are typically used.
• In any modern operating system, the CPU is actually spending time in two very
distinct modes:
• 1.Kernel Mode
• In Kernel mode, the executing code has complete and unrestricted access to the
underlying hardware. It can execute any CPU instruction and reference any
memory address. Kernel mode is generally reserved for the lowest-level, most
trusted functions of the operating system. Crashes in kernel mode are catastrophic;
they will halt the entire PC.
• 2. User Mode
• In User mode, the executing code has no ability to directly access hardware or
reference memory. Code running in user mode must delegate to system APIs to
access hardware or memory. Due to the protection afforded by this sort of
isolation, crashes in user mode are always recoverable. Most of the code running
on your computer will execute in user mode.
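• To make the user-mode point concrete: even a simple file read from a Python program (user mode) is delegated to the kernel through system calls. The sketch below uses the standard os module on Linux; the file path is just an example and the observation about strace is an aside, not part of the original notes.

```python
# User-mode code never touches the disk hardware directly; it asks the
# kernel to do so through system calls (open/read/close on Linux).
import os

fd = os.open("/etc/hostname", os.O_RDONLY)  # open(2) system call -> kernel mode
data = os.read(fd, 4096)                    # read(2) system call -> kernel mode
os.close(fd)                                # close(2) system call -> kernel mode

print(data.decode().strip())
# Running this script under the Linux tool strace shows each transition
# into kernel mode as a logged system call.
```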
Hypervisor mode
• The x86 family of CPUs provide a range of protection levels also known as rings in which code
can execute. Ring 0 has the highest level privilege and it is in this ring that the operating system
kernel normally runs. Code executing in ring 0 is said to be running in system space, kernel mode
or supervisor mode. All other code such as applications running on the operating system operates
in less privileged rings, typically ring 3.
• Under hypervisor virtualization a program known as a hypervisor (also known as a type 1 Virtual
Machine Monitor or VMM) runs directly on the hardware of the host system in ring 0. The task of
this hypervisor is to handle resource and memory allocation for the virtual machines in addition to
providing interfaces for higher level administration and monitoring tools.
• Clearly, with the hypervisor occupying ring 0 of the CPU, the kernels for any guest operating
systems running on the system must run in less privileged CPU rings. Unfortunately, most
operating system kernels are written explicitly to run in ring 0 for the simple reason that they need
to perform tasks that are only available in that ring, such as the ability to execute privileged CPU
instructions and directly manipulate memory.
• A number of different solutions to this problem have been devised in recent years, each of which is
described below:
Full Virtualization
• In an x86 architecture, the guest OSs of the VMs are not executed in ring 0.
• Two issues arise out of this. The first is that these guest OSs must be supervised by software running in ring 0.
• Hence the VMM is run in ring 0.
• Where do we run the guest OS?
• These must run deprivileged in either ring 1 or 2.
• Typically the application is run in ring 3 and guest OS in ring 1.
• This methodology is termed full virtualization.
• However, deprivileging an OS comes with an associated cost.
• Since the guest operating systems are written for execution on top of the hardware (ring 0), many of their instructions may be unsafe, or even
potentially harmful, when run in user mode.
• Simple traps for all these ‘sensitive’ instructions may not work.
• To ensure safety of the virtualized environment, all those instructions that can cause problems must be intercepted and rewritten, if
required. Hence complete binary translation of the kernel code of the guest OSs is required to ensure the safety of the processor and
the machine while allowing the user code to run natively on the hardware.
• Further, a guest OS could easily determine that it is not running at privilege level 0. Since this is not desirable, the VMM must take
an appropriate action.
• Another problem of deprivileging the OS is that, as a normal OS, the guest OS expects to enjoy unrestrained access to the
full memory, but as a mere application running in a higher ring it cannot enjoy the same privilege. Hence the VMM must
ensure that this is taken care of. This method is called full virtualization with binary translation.
• In this form of virtualization, one or more guest operating systems in virtual machines share hardware resources from the host system. The
presence of the hypervisor beneath them is not known to the guests.
• Full virtualization is the only option that requires no hardware assist or
operating system assist to virtualize sensitive and privileged instructions.
The hypervisor translates all operating system instructions on the fly and
caches the results for future use, while user level instructions run
unmodified at native speed.
• However, the issue that restricts full virtualization with binary translation is
the performance.
• Translation takes time, and translating all the kernel code of the guest OS is
expensive in terms of performance. This problem is resolved by using
dynamic binary translation. In dynamic binary translation, code is translated
one block at a time. These blocks may or may not contain critical instructions. For each
block, dynamic BT translates the critical instructions, if any, into
instruction sequences that trap to the VMM for further emulation. Full
virtualization technology uses and exploits dynamic binary translation.
• Real-time systems also cannot be virtualized this way, since such systems cannot
tolerate the delays caused by the translation.
• VMware's ESXi Server and Microsoft Virtual Server are a few examples
of full virtualization solutions.
Para virtualization
• The problem with full virtualization is that the guest OS is unaware of the fact that it has been deprivileged, and hence its behaviour
continues to be the same.
• In para virtualization, the guest OS is modified or patched for virtualization.
• Hypervisor sits as the base OS or in ring 0 in case of x86 and guest OS resides on top of VMM.
• Here, since the guest OS is aware that it is running above a VMM rather than on top of the physical machine, many problems of full
virtualization are taken care of.
• The modified kernel of the guest OS is able to communicate with the underlying hypervisor via special calls.
• These special calls are provided by specific APIs depending on the hypervisor employed.
• These special calls are equivalent to system calls generated by an application to a non virtualized OS.
• Xen Hypervisor is an example that uses paravirtualization technology.
• The guest OS is modified and thus runs its kernel-level operations in Ring 1.
• The guest OS is now fully aware of how to process both privileged and sensitive instructions. Hence there is no longer any need for
translation of instructions.
• The guest OS uses a specialized call, called a "hypercall", to talk to the VMM.
• The VMM executes the privileged instructions. Thus the VMM is responsible for handling the virtualization requests and passing them to the
hardware.
• Unmodified versions of available operating systems (like Windows and Linux) cannot be used in para-
virtualization. Since it involves modifications of the OS, para-virtualization is sometimes referred to as
OS-Assisted Virtualization. This technique relieves the hypervisor from handling the entire virtualization
task to some extent. The best known example of a para-virtualization hypervisor is the open-source Xen project,
which uses a customized Linux kernel.
• Advantages
• Para-virtualization allows calls from the guest OS to communicate directly with the hypervisor (without any binary
translation of instructions). The use of a modified OS reduces the virtualization overhead of the hypervisor as
compared to full virtualization.
• In para-virtualization, the system is not restricted by the device drivers provided by the virtualization
software layer. In fact, in para-virtualization, the virtualization layer (hypervisor) does not contain any device
drivers at all. Instead, the guest operating systems contain the required device drivers.
• Limitations
• Unmodified versions of available operating systems (like Windows or Linux) are not compatible with para-
virtualization hypervisors. Modifications are possible in open-source operating systems (like Linux) by the
user. But for proprietary operating systems (like Windows), it depends upon the owner. Only if the owner agrees
to supply the required modified version of the OS for a hypervisor does that OS become available for the
para-virtualization system.
• Security is compromised in this approach, as the guest OS has comparatively more control of the underlying
hardware. Hence, users of a VM with wrong intentions have more chances of causing harm to the
physical machine.
• Legacy processors were not designed for virtualization. Hence we observed that whatever method is applied for
implementing virtualization, each has its own problems.
• However, if the processors are made virtualization-aware, the VMM design will be more efficient and simple.
• Many issues mentioned in the earlier methods can be easily taken care of with such a processor.
• This is the reason why hardware vendors rapidly embraced virtualization and developed new features to simplify virtualization
techniques.
• The two giants in the hardware arena, Intel and AMD, came up with designs of a new CPU execution mode that allows the
VMM to run in a new root mode below ring 0.
• This is the way to handle the privileged mode.
• In this new design, both privileged and sensitive calls automatically trap to the hypervisor.
• Hence, in this new design, there is no need for either binary translation or paravirtualization.
• Examples of this new design are Intel VT-x (2005) and AMD-V (2006).
• Intel VT-x has two modes of operation: VMX root and VMX non-root.
• While "VMX root" mode operation executes the hypervisor / VMM in ring 0, "VMX non-root" mode operation executes the
guest OS, also in ring 0, thereby removing the need to deprivilege the guest OS. Both modes support all privilege rings and are
otherwise identical. An unmodified guest OS runs in ring 0 in non-root mode and traps instructions to root mode. The privileged and sensitive
calls automatically trap to the hypervisor. The VMM controls the execution of the guest OS.
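• A quick way to see whether a Linux host's CPU advertises these hardware-assisted virtualization extensions is to look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. The sketch below does this in Python; it is Linux-specific and only an illustrative check, not part of any virtualization product.

```python
# Linux-only sketch: detect hardware virtualization support by checking
# the CPU flags exposed in /proc/cpuinfo (vmx = Intel VT-x, svm = AMD-V).
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
    return None

print(hw_virt_support() or "no hardware virtualization extensions detected")
```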
Levels of Virtualization Implementation
• The main function of the software layer for virtualization is to
virtualize the physical hardware of a host machine into virtual
resources to be used exclusively by the VMs. This can be
implemented at various operational levels.
• The virtualization software creates the abstraction of VMs by
interposing a virtualization layer at various levels of a computer
system. Common virtualization layers include the instruction set
architecture (ISA) level, hardware level, operating system level,
library support level, and application level.
• Instruction Set Architecture (ISA) Level
• At the ISA level, virtualization works by emulating a given ISA with the ISA of the host
machine. For instance, MIPS binary code can operate on an x86-based host machine with
the help of ISA emulation.
Thus, this strategy makes it possible to run a large volume of legacy binary code written
for various processors on any given different hardware host machine.
The basic emulation method is code interpretation: an interpreter
program translates the source instructions to target instructions one by one.
• Hardware-level virtualization is performed right on top of the bare hardware. On the one
hand, this approach generates a virtual hardware environment for a VM. On the other
hand, the process manages the underlying hardware through virtualization. The idea is to
virtualize a computer’s resources, such as its processors, memory, and I/O devices. The
intention is to upgrade the hardware utilization rate by multiple users concurrently. The
idea was implemented in the IBM VM/370 in the 1960s. More recently, the Xen
hypervisor has been applied to virtualize x86-based machines to run Linux or other guest
OS applications.
• At the operating system level, the virtualization model creates an abstraction layer
between the operating system and the user applications.
It is an isolated container on the operating system and the physical server,
which uses the software and hardware. Each such container then operates
as if it were a separate server.
Therefore, when there are numerous users and no one wants to share the
hardware, this level of virtualization comes into use.
Each user gets one virtual environment with a dedicated virtual hardware
resource, so in this manner there is no issue of any conflict.
Any hardware which is in the virtualized environment is processed within this
operating system.
• The essential requirement at the operating system level is that all the
user systems in the virtualized environment must run operating systems of the
same family. Otherwise, the service cannot be transferred to the users.
• Library support level
• Most applications use APIs exported by user-level libraries rather than issuing lengthy system calls
to the OS.
Since most systems provide well-documented APIs, such an interface becomes another candidate for
virtualization.
Therefore, virtualization with library interfaces is possible by controlling the communication link between
applications and the rest of the system through API hooks.
Activities within the library support level:
• Use of the application programming interface (API).
• At this level, the emulator acts as a tool that provides the guest operating system with the
resources it wants. In short, users use the emulator to run applications of other operating
systems.
• User-Application Level
• Application-level virtualization is used where there is a desire to virtualize only one application, and it is the
last of the implementation levels of virtualization in cloud computing.
One does not need to virtualize the complete environment of the platform.
It is typically used when running virtual machines for high-level languages: programs compiled for a
high-level language run on the application-level virtual machine
seamlessly.
Activities within the user-application level:
• A virtual machine runs as an application on the user system with the help of the virtualization layer.
• Also, users can access the services even if the user and host environments are of different types.
THE TOOLS OF VIRTUALIZATION
• VMware: It was a virtual machine 'VM' tool that assisted in executing an unmodified OS as a host- or
user-level application. The OS used with VMware could be stopped, reinstalled, restarted,
or crashed without having any influence on the applications that ran on the host CPU. VMware
provided separation of the guest OS from the actual host OS. As a consequence, if the guest OS
failed, the physical hardware and the host computer did not suffer from the failure. VMware
was used to create the illusion of standard hardware on the inner side of the Virtual Machine 'VM'.
Hence, VMware was used to execute numerous unmodified OSs simultaneously on the
same hardware engine by executing each OS in its own virtual machine. Despite
that, unlike a simulator, the virtual machine executed the code
directly on the physical hardware without any software interpreting the code.
• Xen: It was the most common open-source virtualization tool that supported both Full
Virtualization 'FV' and Para-Virtualization 'PV'. Xen was an extremely famed virtualization
solution, initially developed at the University of Cambridge. It was the only bare-metal solution
that was obtainable as open source. It contained several elements that cooperated to supply the
virtualization environment, comprising the Xen Hypervisor 'XH', the Domain-0 Guest shortened to Dom-0,
and the Domain-U Guest shortened to Dom-U, which could be either a 'PV' guest or an 'FV' guest. The
Xen Hypervisor 'XH' was the layer that resided directly on the hardware underneath any OS. It
was responsible for CPU scheduling and memory partitioning for the different 'VMs'. It delegated
the administration of the Domain-U Guests 'Dom-U' to the Domain-0 Guest 'Dom-0' [23].
• Qemu: This virtualization tool was used to perform virtualization
in OSs such as Linux and Windows. It was counted as a
renowned open-source emulator that offered swift emulation with the
assistance of dynamic translation. It had several valuable commands for
managing virtual machines 'VMs'. Qemu was a major open-source
tool supporting various hardware architectures. Indeed, it was an
example of Native Virtualization 'NV'.
• Docker: Docker is open-source. It relies on containers to
automatically deploy Linux applications. All the necessities, such as
code, runtime, system tools, and system libraries, are included in
Docker containers. Docker used the Linux Containers (LXC) library until
version 0.9, but after this version Docker uses libcontainer for the
virtualization capabilities provided by the Linux kernel. It
implements isolated containers via a high-level application program
interface (API). A separate guest operating system (OS) is not required in Docker.
A Docker container uses the same Linux kernel as the host, but its
user space is isolated from the host OS. Docker is natively available
and compatible only with Linux.
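• As a hedged illustration of driving Docker through its high-level API, the sketch below uses the Docker SDK for Python. It assumes the SDK is installed (pip install docker), the Docker daemon is running, and the alpine image can be pulled; the command run inside the container is arbitrary.

```python
# Sketch using the Docker SDK for Python (package name: docker).
# Assumes the Docker daemon is running and the image can be pulled.
import docker

client = docker.from_env()  # talk to the local Docker daemon

# Run a throwaway container; everything it needs (code, runtime, libraries)
# ships inside the image, while the host's Linux kernel is shared.
output = client.containers.run(
    "alpine:latest",
    ["echo", "hello from a container"],
    remove=True,  # clean up the container after it exits
)
print(output.decode().strip())
```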
• Kernel-Based Virtual Machine (KVM): KVM is also open-source and
requires central processing unit (CPU) virtualization technology (Intel VT or AMD-
V). It provides full virtualization 'FV' for Linux on x86 hardware that includes the
virtualization extensions. KVM's kernel component is included in
Linux, while KVM's userspace components are included in the Quick
Emulator (QEMU). However, for some devices KVM also supports the
para-virtualization 'PV' mechanism. By using KVM, an end user can turn
Linux into a hypervisor that can run multiple, isolated virtual
environments called guests. The main limitation of KVM is that it cannot
perform emulation itself. Instead, it exposes the KVM interface, sets
up the virtual machine address space, and feeds the simulated input/output
via QEMU.
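• To connect KVM and QEMU as described above, the sketch below builds a typical qemu-system-x86_64 command line with KVM acceleration enabled and launches it from Python. The flags shown (-enable-kvm, -m, -smp, -hda) are standard QEMU options, while guest.img and the resource sizes are placeholders for illustration.

```python
# Sketch: QEMU provides the userspace device emulation, while the
# -enable-kvm flag hands CPU virtualization to the in-kernel KVM module.
# guest.img is a placeholder disk image; adjust memory/CPUs as needed.
import subprocess

cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",        # use the KVM kernel module instead of pure emulation
    "-m", "2048",         # 2 GiB of guest RAM
    "-smp", "2",          # 2 virtual CPUs
    "-hda", "guest.img",  # placeholder guest disk image
]
subprocess.run(cmd, check=True)  # blocks until the guest VM is shut down
```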
• OpenVZ: It was also an open-source virtualization tool that relied on the
control group (cgroup) concept. OpenVZ provided Container-Based
Virtualization 'CBV' for the Linux platform. It allowed several isolated
execution environments, named Virtual Environments 'VEs' or containers, that
share a single operating system kernel. It also provided superior performance and
scalability when compared with other virtualization tools.
What Is Disaster Recovery?