Unit 2 For Students


What is service-oriented architecture?

• Service-oriented architecture (SOA) is a method of software development that uses software components called services to create business applications. Each service provides a business capability, and services can also communicate with each other across platforms and languages. Developers use SOA to reuse services in different systems or combine several independent services to perform complex tasks.
• For example, multiple business processes in an organization require the user
authentication functionality. Instead of rewriting the authentication code for
all business processes, you can create a single authentication service and
reuse it for all applications. Similarly, almost all systems across a healthcare
organization, such as patient management systems and electronic health
record (EHR) systems, need to register patients. These systems can call a
single, common service to perform the patient registration task.
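The reuse idea above can be sketched in a few lines. This is a minimal illustration, not a real SOA framework: the class and application names (AuthenticationService, PatientPortal, BillingApp) are hypothetical, and the credential store is just a dictionary standing in for a user database.

```python
# One shared authentication service reused by several applications,
# instead of each application re-implementing login.

class AuthenticationService:
    """A single service every application calls for authentication."""

    def __init__(self, credential_store):
        self._store = credential_store  # stand-in for a real user database

    def authenticate(self, username, password):
        return self._store.get(username) == password


class PatientPortal:
    def __init__(self, auth_service):
        self.auth = auth_service  # reuses the shared service

    def login(self, username, password):
        return "welcome" if self.auth.authenticate(username, password) else "denied"


class BillingApp:
    def __init__(self, auth_service):
        self.auth = auth_service  # same service, different application

    def login(self, username, password):
        return self.auth.authenticate(username, password)


auth = AuthenticationService({"alice": "s3cret"})
portal = PatientPortal(auth)
billing = BillingApp(auth)
```

Both applications depend only on the service's interface, so replacing the credential store later would not touch either application.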
What are the benefits of service-oriented
architecture?
• Service-oriented architecture (SOA) has several benefits over the traditional monolithic
architectures in which all processes run as a single unit. Some major benefits of SOA include the
following:
• Faster time to market
• Developers reuse services across different business processes to save time and costs. They can
assemble applications much faster with SOA than by writing code and performing integrations
from scratch.
• Efficient maintenance
• It’s easier to create, update, and debug small services than large code blocks in monolithic
applications. Modifying any service in SOA does not impact the overall functionality of the
business process.
• Greater adaptability
• SOA is more adaptable to advances in technology. You can modernize your applications efficiently
and cost effectively. For example, healthcare organizations can use the functionality of older
electronic health record systems in newer cloud-based applications.
What are the basic principles of service-oriented
architecture?
• There are no well-defined standard guidelines for implementing service-oriented architecture (SOA). However, some basic
principles are common across all SOA implementations.
• Interoperability
• Each service in SOA includes description documents that specify the functionality of the service and the related terms and
conditions. Any client system can run a service, regardless of the underlying platform or programming language. For instance,
business processes can use services written in both C# and Python. Since there are no direct interactions, changes in one service do
not affect other components using the service.
• Loose coupling
• Services in SOA should be loosely coupled, having as little dependency as possible on external resources such as data models or
information systems. They should also be stateless without retaining any information from past sessions or transactions. This way, if
you modify a service, it won’t significantly impact the client applications and other services using the service.
• Abstraction
• Clients or service users in SOA need not know the service's code logic or implementation details. To them, services should appear
like a black box. Clients get the required information about what the service does and how to use it through service contracts and
other service description documents.
• Granularity
• Services in SOA should have an appropriate size and scope, ideally packing one discrete
business function per service. Developers can then use multiple services to create a composite service for performing complex
operations.
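Granularity and composition can be sketched as follows: each function packs one discrete business function, and a composite service chains them. The service names (register_patient, schedule_checkup, onboard_patient) are hypothetical illustrations.

```python
# Fine-grained services, each with exactly one business function,
# composed into a higher-level composite service.

def register_patient(record, registry):
    """Fine-grained service: registers one patient, nothing else."""
    registry[record["id"]] = record
    return record["id"]

def schedule_checkup(patient_id, calendar):
    """Another single-purpose service."""
    calendar.append(patient_id)
    return f"checkup booked for {patient_id}"

def onboard_patient(record, registry, calendar):
    """Composite service: chains the fine-grained services above."""
    pid = register_patient(record, registry)
    return schedule_checkup(pid, calendar)

registry, calendar = {}, []
result = onboard_patient({"id": "p1", "name": "Ann"}, registry, calendar)
```

Because each piece stays small, either step can be reused on its own or recombined into other composites.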
What are the components in service-oriented
architecture?
• There are four main components in service-oriented architecture (SOA).
• Service
• Services are the basic building blocks of SOA. They can be private—available only to internal users of an organization—or
public—accessible over the internet to all. Individually, each service has three main features.
• Service implementation
The service implementation is the code that builds the logic for performing the specific service function, such as user authentication
or bill calculation.
• Service contract
• The service contract defines the nature of the service and its associated terms and conditions, such as the prerequisites for using the
service, service cost, and quality of service provided.

• Service interface
• In SOA, other services or systems communicate with a service through its service interface. The interface defines how you can
invoke the service to perform activities or exchange data. It reduces dependencies between services and the service requester. For
example, even users with little or no understanding of the underlying code logic can use a service through its interface.
• Service provider: The service provider creates a web service and possibly publishes its interface and access information to the service registry. Each provider must decide which services to expose, how to make trade-offs between security and easy availability, how to price the services, or how to exploit free services for other value. The provider also has to decide which category to list the service in for a given broker service and what sort of trading partner agreements are required to use the service.
• Service broker: The service registry is a central location where service providers can publish their service
descriptions and where service requesters can find those service descriptions. The service broker, also known
as service registry, is responsible for making the web service interface and implementation access information
available to any potential service requestor. The implementer of the broker decides the scope of the broker.
Public brokers are available through the Internet, while private brokers are only accessible to a limited
audience, for example, users of a company intranet. The Universal Description, Discovery and Integration
(UDDI) specification defines a way to publish and discover information about web services. The registry is an
optional component of the web services architecture because service requesters and providers can
communicate without it in many situations. For example, the organization that provides a service can
distribute the service description directly to the users of the service in a number of ways, including offering
the service as a download from an FTP site.

• Service requester: The service requestor or web service client locates entries in the broker registry using
various find operations and then binds to the service provider to invoke one of its web services. The service
requester requests the service provider to run a specific service. It can be an entire system, application, or
other service. The service contract specifies the rules that the service provider and consumer must follow
when interacting with each other. Service providers and consumers can belong to different departments,
organizations, and even industries.
How does service-oriented architecture work?

• In service-oriented architecture (SOA), services function independently and provide functionality or data exchanges to their consumers. The consumer requests information and sends input data to the service. The service processes the data, performs the task, and sends back a response. For example, if an application uses an authorization service, it gives the service the username and password. The service verifies the username and password and returns an appropriate response.
• Communication protocols
• Services communicate using established rules that determine data transmission over a network.
These rules are called communication protocols. Some standard protocols to implement SOA
include the following:
• Simple Object Access Protocol (SOAP)
• RESTful HTTP
• Apache Thrift
• Apache ActiveMQ
• Java Message Service (JMS)
• You can even use more than one protocol in your SOA implementation.
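To make the first protocol in the list concrete, the sketch below builds a SOAP-style request envelope with Python's standard library. The SOAP 1.1 envelope namespace is real; the operation name (GetPatient) and its parameter are hypothetical.

```python
# Building a SOAP request message: the payload is wrapped in a
# standard XML Envelope/Body structure before being sent over HTTP.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope namespace

def build_soap_request(operation, params):
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)          # the operation being invoked
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

message = build_soap_request("GetPatient", {"patientId": 42})
```

A real client would POST this message to the service's endpoint; a RESTful-HTTP implementation of the same call would instead be a plain `GET /patients/42`.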
Implementing Service-Oriented Architecture

• When it comes to implementing service-oriented architecture (SOA), there is a wide range of technologies that can be used, depending on your end goal and what you’re trying to accomplish.
• Typically, Service-Oriented Architecture is implemented with web services,
which makes the “functional building blocks accessible over standard
internet protocols.”
• An example of a web service standard is SOAP, which stands for Simple Object Access Protocol. In a nutshell, SOAP “is a messaging protocol specification for exchanging structured information in the implementation of web services in computer networks.”
• Other options for implementing Service-Oriented Architecture include Jini, CORBA, or REST.
Why Service-Oriented Architecture Is Important
• Use Service-Oriented Architecture to create reusable code: Not only does this cut down on development time, but there’s also no reason to reinvent the coding wheel every time you need to create a new service or process. Service-Oriented Architecture also allows for using multiple coding languages because everything runs through a central interface.
• Use Service-Oriented Architecture to promote interaction: With Service-Oriented Architecture, a standard form of communication is put in place, allowing the various systems and platforms to function independently of each other. With this interaction, Service-Oriented Architecture is also able to work around firewalls, allowing “companies to share services that are vital to operations.”
• Use Service-Oriented Architecture for scalability: It’s important to be able to scale a business to meet the needs of the client; however, certain dependencies can get in the way of that scalability. Using Service-Oriented Architecture cuts back on the client-service interaction, which allows for greater scalability.
• Use Service-Oriented Architecture to reduce costs: With Service-Oriented
Architecture, it’s possible to reduce costs while still “maintaining a desired level
of output.” Using Service-Oriented Architecture allows businesses to limit the
amount of analysis required when developing custom solutions.
Disadvantages of SOA

• High overhead: Input parameters of services are validated whenever services interact. This decreases performance because it increases load and response time.
• High investment: A huge initial investment is required for SOA.
• Complex service management: When services interact, they exchange messages to perform tasks, and the number of messages may run into the millions. Handling such a large number of messages becomes a cumbersome task.
Web Services in Cloud Computing
• A web service is a standardized method for propagating messages between client
and server applications on the World Wide Web.
• A web service is a software module that aims to accomplish a specific set of tasks.
• Web services can be found and implemented over a network in cloud computing.
• The web service would be able to provide the functionality to the client that
invoked the web service.
• A web service is a set of open protocols and standards that allow data exchange
between different applications or systems.
• Any software, application, or cloud technology that uses a standardized web protocol (HTTP or HTTPS) to connect, interoperate, and exchange data messages over the Internet (usually in XML, the Extensible Markup Language) is considered a web service.
Features of web service
• Web services are self-contained: On the client side, no additional software is required.
A programming language with Extensible Markup Language (XML) and HTTP client
support is enough to get you started. On the server side, a web server and a SOAP server
are required. It is possible to enable an existing application for web services without
writing a single line of code.
• Web services are self-describing: Neither the client nor the server knows or cares about
anything besides the format and content of the request and response messages (loosely
coupled application integration). The definition of the message format travels with the
message; no external metadata repositories or code generation tools are required.
• Web services can be published, located, and invoked across the Internet: This
technology uses established lightweight Internet standards such as HTTP and it leverages
the existing infrastructure. Other standards that are required include SOAP, Web
Services Description Language (WSDL), and UDDI.
• Web services are language-independent and interoperable: The client and server can
be implemented in different environments. Existing code does not have to change in order
to be web services-enabled.
• Web services are inherently open and standard-based: XML and HTTP are the major technical foundation
for web services. A large part of the Web service technology has been built using open-source projects.
• Web services are dynamic: Dynamic e-business can become a reality using web services because, with UDDI and WSDL, you can automate the web service description and discovery.
• Web services are composable: Simple web services can be aggregated to more complex ones, either using
workflow techniques or by calling lower-layer web services from a web service implementation. Web services
can be chained together to perform higher-level business functions. This chaining shortens development time
and enables best-of-breed implementations.
• Web services are loosely coupled: Traditionally, application design has depended on tight interconnections
at both ends. Web services require a simpler level of coordination that supports a more flexible
reconfiguration for an integration of the services.
• Web services provide programmatic access: The approach provides no graphical user interface; it operates
at the code level. Service consumers need to know the interfaces to web services, but do not need to know the
implementation details of services.
• Web services provide the ability to wrap existing applications: Already existing stand-alone applications
can easily integrate into the SOA by implementing a web service as an interface.
Benefits of web service
• Web services are a technology for deploying, and providing access to, business functions over the World Wide Web. Use web services to integrate your applications into the Web.
• Web services can help your business in these ways:
• Reducing the cost of doing business
• Making it possible to deploy solutions more rapidly
• Opening up new opportunities
• The key to achieving all these benefits is a common program-to-program communication model, built on existing and emerging standards such as HTTP, XML, SOAP, and WSDL.
Interactions between a service provider, a service
requester, and a service registry
• The interactions between the service provider, service requester, and
service registry involve the following operations:
• Publish
• When a service registry is used, a service provider publishes its service
description in a service registry for the service requester to find.
• Find
• When a service registry is used, a service requester finds the service
description in the registry.
• Bind
• The service requester uses the service description to bind with the service
provider and interact with the web service implementation.
• Web service description
A web service description is a document by which the service provider communicates the specifications for invoking the web service to the service requester. Web service descriptions are expressed in the XML application known as Web Services Description Language (WSDL).
• UDDI
• UDDI stands for Universal Description, Discovery, and Integration.
• UDDI is a specification for a distributed registry of web services.
• UDDI is a platform-independent, open framework.
• UDDI can communicate via SOAP, CORBA, or Java RMI.
• UDDI uses the Web Services Description Language (WSDL) to describe interfaces to web services.
• UDDI is seen with SOAP and WSDL as one of the three foundation standards of web
services.
• UDDI is an open industry initiative, enabling businesses to discover each other and
define how they interact over the Internet.
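The publish, find, and bind operations above can be sketched with an in-memory stand-in for the registry. Everything here is a hypothetical illustration: the ServiceRegistry class plays the UDDI broker role, and the "greeting" service and its description are toy examples, not a real UDDI API.

```python
# In-memory sketch of publish / find / bind between provider,
# broker (registry), and requester.

class ServiceRegistry:
    """Broker role: stores service descriptions published by providers."""

    def __init__(self):
        self._descriptions = {}

    def publish(self, name, description):   # provider publishes its description
        self._descriptions[name] = description

    def find(self, name):                   # requester finds the description
        return self._descriptions[name]


registry = ServiceRegistry()

# Provider: publishes a toy service description, including binding info
# (here, the callable itself stands in for endpoint + interface details).
registry.publish("greeting", {"endpoint": lambda who: f"hello, {who}"})

# Requester: finds the description, then binds to and invokes the service.
description = registry.find("greeting")
result = description["endpoint"]("world")
```

As the text notes, the registry is optional: the provider could hand the description directly to the requester, which would then bind in exactly the same way.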
Web services architecture
• The service provider sends a WSDL file to UDDI. The service
requester contacts UDDI to find out who is the provider for the data it
needs, and then it contacts the service provider using the SOAP
protocol. The service provider validates the service request and sends
structured data in an XML file, using the SOAP protocol. This XML
file would be validated again by the service requester using an XSD
file.
REST
• The Representational State Transfer (REST) is an architectural style for
distributed hypermedia systems.
• It defines a set of principles that a system must comply with in order to get some
benefits such as loose coupling between the client and the server, scalability,
reliability, and better performance.
• REST is stateless and relies on client-server communication. The style was built to be used in network applications.
• It refers to a basic method of organizing interactions between disparate systems.
• REST prescribes high-level design constraints and leaves the implementation details to us.
• Data and functionality are considered resources in the Rest architecture, and they
are accessible via Uniform Resource Identifiers and acted upon by well-defined
operations.
• REST is a popular framework for developing and designing web services.
Resources

• A RESTful resource is anything that is addressable over the Web. By addressable, we mean resources that can be accessed and transferred between clients and servers. Subsequently, a resource is a logical, temporal mapping to a concept in the problem domain for which we are implementing a solution.
• Here are some examples of the REST resources:
• A news story
• The temperature in NY at 4:00 p.m. EST
• A tax return stored in the IRS database
• A list of code revision history in a repository such as SVN or CVS
• A student in a classroom in a school
• A search result for a particular item in a Web index, such as Google
• Resources are identified by logical URLs. Both state and functionality are represented using resources.
• Resources are the key element of a true RESTful design, as opposed to the "methods" or "services" used in RPC and SOAP web services, respectively. You do not issue "getProductName" and "getProductPrice" RPC calls in REST; rather, you view the product data as a resource -- and this resource should contain all the required information (or links to it).
• A web of resources, meaning that a single resource should not be overwhelmingly large or contain overly fine-grained details. Whenever relevant, a resource should contain links to additional information -- just as in web pages.
• The system has a client-server architecture, but of course one component's server can be another component's client.
• There is no connection state; interaction is stateless (although the servers and resources can of course be
stateful). Each new request should carry all the information required to complete it, and must not rely on
previous interactions with the same client.
• Resources should be cacheable whenever possible (with an expiration date/time). The protocol must allow the server to explicitly specify which resources may be cached, and for how long.
• Since HTTP is universally used as the REST protocol, the HTTP cache-control headers are used for this
purpose.
• Clients must respect the server's cache specification for each resource.
• Proxy servers can be used as part of the architecture, to improve performance and scalability. Any standard
HTTP proxy can be used.
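The caching rules above can be sketched as a client-side cache that honors a server-specified freshness lifetime (as `Cache-Control: max-age` does in HTTP). This is a simulation, not real HTTP: the fetch function and its responses are hypothetical stand-ins, and the freshness check is simplified to a wall-clock expiry.

```python
# A client that serves a resource from its cache while the cached copy
# is still fresh, avoiding a second round trip to the server.
import time

class CachingClient:
    def __init__(self, fetch):
        self._fetch = fetch      # simulated transport: returns (body, max_age)
        self._cache = {}         # uri -> (body, expires_at)

    def get(self, uri):
        entry = self._cache.get(uri)
        if entry and time.time() < entry[1]:
            return entry[0]      # still fresh: no request is sent
        body, max_age = self._fetch(uri)
        self._cache[uri] = (body, time.time() + max_age)
        return body

calls = []
def fake_fetch(uri):
    calls.append(uri)                 # counts actual "round trips"
    return ("data for " + uri, 60)    # server says: cacheable for 60 seconds

client = CachingClient(fake_fetch)
first = client.get("/products/1")
second = client.get("/products/1")    # served from cache
```

With real HTTP, the same behavior falls out of the standard cache-control headers, so any intermediate proxy can cache on the client's behalf as well.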
URI (Uniform Resource Identifier)

• A URI is a string of characters used to identify a resource over the Web. In simple words, the URI in a RESTful web service is a hyperlink to a resource, and it is the only means for clients and servers to exchange representations.
• The client uses a URI to locate the resources over the Web and then sends a request to the server and reads the response. In a RESTful system,
the URI is not meant to change over time as it may break the contract
between a client and a server. More importantly, even if the underlying
infrastructure or hardware changes (for example, swapping the
database servers) for a server hosting REST APIs, the URIs for
resources are expected to remain the same as long as the web service
is up and running.
• The representation of resources is what is sent back and forth between clients and servers
in a RESTful system. A representation is a temporal state of the actual data located in
some storage device at the time of a request. In general terms, it is a binary stream
together with its metadata that describes how the stream is to be consumed by the client.
The metadata can also contain extra information about the resource, for example,
validation, encryption information, or extra code to be executed at runtime.

• Throughout the life of a web service, there may be a variety of clients requesting
resources. Different clients can consume different representations of the same resource.
Therefore, a representation can take various forms, such as an image, a text file, XML, or JSON. However, all clients will use the same URI with appropriate Accept header values to access the same resource in different representations.

• For human-generated requests through a web browser, a representation is typically in the form of an HTML page. For automated requests from other web services, readability is not as important and a more efficient representation, such as JSON or XML, can be used.
• In REST-speak, a client and server exchange representations of a resource, which reflect
its current state or its desired state. REST, or Representational state transfer, is a way for
two machines to transfer the state of a resource via representations.
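The content negotiation described above can be sketched as one handler that serves different representations of the same resource depending on the Accept header. The resource data and the handler function are hypothetical; a real server would also set the matching Content-Type header.

```python
# Same URI, same resource, different representations chosen by the
# client's Accept header.
import json

RESOURCE = {"id": 7, "name": "Ada"}   # the single underlying resource

def get_representation(accept_header):
    """Pick a representation of RESOURCE based on the Accept header."""
    if accept_header == "application/json":
        return json.dumps(RESOURCE)
    if accept_header == "application/xml":
        return (f"<student><id>{RESOURCE['id']}</id>"
                f"<name>{RESOURCE['name']}</name></student>")
    # plain-text fallback for anything else
    return f"id={RESOURCE['id']} name={RESOURCE['name']}"

as_json = get_representation("application/json")
as_xml = get_representation("application/xml")
```

Note that the URI never changes: only the representation sent back and forth does, which is exactly the "representational state transfer" in the name.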
A RESTful system consists of a

• client, who requests the resources.
• server, who has the resources.
• It is important to create a REST API according to industry standards, which results in ease of development and increases client adoption.
Key components of a REST architecture:
• Uniform Interface: This key constraint differentiates a REST API from a non-REST API. It suggests that there should be a uniform way of interacting with a given server irrespective of the device or type of application (website, mobile app).
• The four guiding principles of the uniform interface are:
• Resource-Based: Individual resources are identified in requests. For example: API/users.
• Manipulation of Resources Through Representations: The client holds a representation of a resource that contains enough information to modify or delete the resource on the server, provided it has permission to do so. Example: a user typically receives a user ID when requesting the list of users and then uses that ID to delete or modify that particular user.
• Self-descriptive Messages: Each message includes enough information to describe how to process it, so that the server can easily analyze the request.
• Hypermedia as the Engine of Application State (HATEOAS): Each response needs to include links so that the client can discover other resources easily.
• Stateless: The state necessary to handle a request is contained within the request itself, and the server does not store anything related to the session. In REST, the client must include all information the server needs to fulfill the request, whether as part of the query parameters, headers, or URI. Statelessness enables greater availability since the server does not have to maintain, update, or communicate session state. The drawback is that the client may need to send a lot of data with every request, which reduces the scope for network optimization and requires more bandwidth.
• Cacheable: Every response should state whether it is cacheable and for how long responses can be cached on the client side. The client will then return data from its cache for subsequent requests, with no need to send the request to the server again. Well-managed caching partially or completely eliminates some client-server interactions, further improving availability and performance. However, there is a chance the client may receive stale data.
• Client-Server: A REST application should have a client-server architecture. A client is someone who requests resources and is not concerned with data storage, which remains internal to each server; a server is someone who holds the resources and is not concerned with the user interface or user state. They can evolve independently: the client doesn’t need to know anything about the business logic, and the server doesn’t need to know anything about the frontend UI.
• Layered system: An application architecture needs to be composed of multiple layers. Each layer knows nothing about any layer other than the immediately adjacent one, and there can be many intermediate servers between the client and the end server. Intermediary servers may improve system availability by enabling load balancing and by providing shared caches.
• In this type of “layered” architecture, clients are able to communicate with pre-authorized intermediaries while still receiving a response from the server, despite the “additional” communications.
• The security, application, and business logic layers of a RESTful web service can be designed to cooperate across multiple servers to fulfill client requests. The client cannot see these layers.

• Code on demand: This is an optional constraint. According to it, servers can also provide executable code to the client. Examples of code on demand include compiled components such as Java applets and client-side scripts such as JavaScript.
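Of the constraints above, HATEOAS is the easiest to show concretely: the server's response carries not just data but links to related actions, so the client can discover what it may do next. The URI scheme, link names, and handler below are hypothetical illustrations.

```python
# A HATEOAS-style response: resource data plus hypermedia links the
# client can follow, instead of hard-coding every URI on the client side.

def user_response(user_id):
    return {
        "id": user_id,
        "name": "example user",
        "_links": {                      # hypermedia controls in the payload
            "self":   {"href": f"/api/users/{user_id}"},
            "update": {"href": f"/api/users/{user_id}", "method": "PUT"},
            "delete": {"href": f"/api/users/{user_id}", "method": "DELETE"},
            "orders": {"href": f"/api/users/{user_id}/orders"},
        },
    }

response = user_response(42)
```

A client that only follows the advertised links stays correct even if the server later restructures its URIs.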
REST API
Rules of REST API
• There are certain rules which should be kept in mind while creating REST API
endpoints.
• REST is based on the resource or noun instead of action or verb based. It means
that a URI of a REST API should always end with a noun. Example: /api/users is a
good example, but /api?type=users is a bad example of creating a REST API.
• HTTP verbs are used to identify the action. Some of the HTTP verbs are GET, PUT, POST, DELETE, and PATCH.
• A web application should be organized into resources, such as users, and should then use HTTP verbs like GET, PUT, POST, and DELETE to modify those resources. As a developer, it should be clear what needs to be done just by looking at the endpoint and the HTTP method used.
• GET: to request information from a resource.
• POST: to send information to a specific resource.
• PUT: to update the information of a particular resource.
• DELETE: to delete a resource.
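The rules above can be sketched as a tiny in-memory dispatcher: noun-based paths name the resource, and the HTTP verb selects the action. The store and `handle` function are hypothetical stand-ins for a real web framework's routing.

```python
# Noun-based endpoints (/api/users, /api/users/<id>) with the HTTP verb
# choosing the action; status codes follow common REST conventions.

users = {}          # the /api/users resource collection, kept in memory
next_id = [1]

def handle(method, path, body=None):
    if path == "/api/users" and method == "POST":
        uid = next_id[0]
        next_id[0] += 1
        users[uid] = body
        return (201, uid)                     # 201 Created
    uid = int(path.rsplit("/", 1)[-1])        # /api/users/<id>
    if method == "GET":
        return (200, users[uid])              # request information
    if method == "PUT":
        users[uid] = body                     # update the resource
        return (200, uid)
    if method == "DELETE":
        del users[uid]
        return (204, None)                    # 204 No Content

status, uid = handle("POST", "/api/users", {"name": "Ann"})
```

Note that the path stays a noun throughout; only the verb changes between creating, reading, updating, and deleting the same resource.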
What is a system of systems?

• A system of systems (SoS) is the collection of multiple, independent systems in context as part of a larger, more complex system. A system is a group of interacting, interrelated, and interdependent components that form a complex and unified whole.
• A system is a group of interacting elements (or subsystems) having an internal structure
which links them into a unified whole. The boundary of a system is to be defined, as well
as the nature of the internal structure linking its elements (physical, logical, etc.). Its
essential properties are autonomy, coherence, permanence, and organization.
• A complex system is made of many components interacting in a network structure. Most often, the components are physically and functionally heterogeneous and organized in a hierarchy of subsystems that contribute to the system function. This leads to both structural and dynamic complexity. Structural complexity derives from
• heterogeneity of components across different technological domains due to increased integration among systems, and
• the scale and dimensionality of connectivity, through a large number of components (nodes) highly interconnected by dependences and interdependences. Dynamic complexity manifests through the emergence of (unexpected) system behavior in response to changes in the environmental and operational conditions of its components.
• An SoS is “a collection of task-oriented or dedicated systems that pool their resources and capabilities together to obtain a new, more complex ‘meta-system’ which offers more functionality and performance than simply the sum of the constituent systems.”
• One interpretation of the concept of a system of systems is given by Maier, who identifies five properties:
• operational independence, i.e., each system is independent and it achieves
its purposes by itself,
• managerial independence, i.e., each system is managed in large part for its
own purposes rather than the purposes of the systems of systems,
• geographic distribution, i.e., a system of systems is distributed over a large
geographic extent,
• emergent behavior, i.e., a system of systems has capabilities and properties
that do not reside in the component systems, and
• evolutionary development, i.e., a system of systems evolves with time and
experience.
What are the types of system of systems?

• There are four types of system of systems: directed, acknowledged, collaborative and virtual. In
most cases, an SoS is a combination of these types and may change over time. The type of a SoS is
based on the degree of independence of its constituents as noted below:
• Directed. The SoS is created and managed to fulfill a specific purpose. The constituent systems can operate independently, but their independent operation is subordinated to the SoS’s central purpose.
• Acknowledged. The SoS has a specific purpose, but the constituent systems maintain independent
ownership, objectives and development. Changes made in this SoS type are based on cooperative
agreements between the system and the SoS.
• Collaborative. Component systems freely interact with each other to fulfill a defined purpose.
Management authorities have little impact over the behavior of the component systems.
• Virtual. The SoS does not have central management authority or a centrally agreed-upon purpose.
Typically, the acquisition of a virtual SoS is unplanned and is made up of component systems that
may not have been designed to be integrated. Once its use is over, the components are normally
disassembled and no longer operate in an SoS.
Example
• Embedded automotive systems are an example of a system of systems,
as they have numerous onboard computing, control and
communication-based systems that all work together to improve
safety, fuel efficiency and emissions. Safety systems could be
considered their own SoS, with airbag deployment, collision impact
warnings, seatbelt pretensioners, antilock and differential braking, as
well as traction and stability control all working together to increase
vehicle safety.
What Is Publish-Subscribe Pattern?

• The publish-subscribe pattern is a messaging pattern that allows different services in a system to communicate in a decoupled manner.
• The publish-subscribe model's fundamental semantic feature lies in how messages flow from publishers to subscribers. In this model, publishers do not directly target subscribers. Instead, subscribers express their interest by issuing subscriptions for specific messages, independent of the publishers that produce them. Once subscribed, they are asynchronously notified of all notifications submitted by any publisher that match their subscription.
• Asynchronous communication is key in this model, ensuring that subscribers don’t
wait for notifications to arrive and can perform concurrent operations instead. This
makes it possible to handle numerous notifications without worrying about
potential blockage, making the publish-subscribe model an efficient solution for
information-driven applications.
How does pub/sub messaging work?
• The publish-subscribe (pub/sub) system has four key components.
• Messages
• A message is communication data sent from sender to receiver. Message data types can be anything
from strings to complex objects representing text, video, sensor data, audio, or other digital content.
• Topics
• Every message has a topic associated with it. The topic acts like an intermediary channel between
senders and receivers. It maintains a list of receivers who are interested in messages about that
topic.
• Subscribers
• A subscriber is the message recipient. Subscribers have to register (or subscribe) to topics of
interest. They can perform different functions or do something different with the message in
parallel.
• Publishers
• The publisher is the component that sends messages. It creates messages about a topic and sends
them once only to all subscribers of that topic. This interaction between the publisher and
subscribers is a one-to-many relationship. The publisher doesn’t need to know who is using the
information it is broadcasting, and the subscribers don’t need to know where the message comes
from.
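The four components above can be sketched as a minimal in-memory broker in Python. The `Broker` class, topic names, and callback-style subscribers below are illustrative assumptions, not the API of any particular messaging product:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory pub/sub broker: a topic maps to subscriber callbacks."""
    def __init__(self):
        self._topics = defaultdict(list)

    def subscribe(self, topic, callback):
        # A subscriber registers interest in a topic.
        self._topics[topic].append(callback)

    def publish(self, topic, message):
        # One-to-many delivery: every subscriber of the topic receives the
        # message; the publisher never learns who the subscribers are.
        for callback in self._topics[topic]:
            callback(message)

received = []
broker = Broker()
broker.subscribe("patient.registered", lambda m: received.append(("billing", m)))
broker.subscribe("patient.registered", lambda m: received.append(("ehr", m)))
broker.publish("patient.registered", {"id": 42})
print(received)  # both subscribers got the same message
```

Note that neither subscriber knows about the other, and the publisher knows neither: the topic name is the only shared contract.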
Pull API Architecture
• If your client app needs to find out if there is any new information on the
server side, the easiest—if not always the best—way to accomplish this is to
call and ask, which is basically how pull APIs work. They are a type of
network communication where the client application initiates the
communication by requesting updates. The server receives the request,
verifies it, processes it, and sends an appropriate response back to the client.
• This whole process of polling updates usually takes hundreds of
milliseconds. If there are many requests, it can slow down or even overload
your server layer, resulting in a poor user experience, customer complaints,
and even economic losses. Therefore, to control traffic, many companies
often use rate limits to limit how often a client can ask for the same
information.
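A pull-style client can be sketched as the polling loop below; `fetch` stands in for a hypothetical server endpoint, and the `interval` pause plays the role of a crude client-side rate limit:

```python
import time

STORE = ["update-1", "update-2", "update-3"]  # pretend server-side state

def fetch(since):
    # Hypothetical server endpoint: return everything newer than `since`.
    return STORE[since:]

def poll_for_updates(interval=0.01, max_polls=3):
    """Client-initiated pull: repeatedly ask the server whether anything
    changed, waiting `interval` seconds between requests (rate limiting)."""
    seen, updates = 0, []
    for _ in range(max_polls):
        new = fetch(seen)      # one request/response round trip per poll
        updates.extend(new)
        seen += len(new)
        time.sleep(interval)   # don't hammer the server
    return updates

print(poll_for_updates())  # later polls return nothing new: wasted requests
```

The wasted round trips in the later iterations illustrate why frequent polling scales poorly compared with push delivery.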
• While some of the drawbacks of pull architecture can be mitigated by a
system that can distribute processing, even this quickly becomes a very
expensive, resource-intensive option. If you need the ability to
communicate frequent updates, push architecture is likely a better choice.
Push API architecture

• The strongest use case for push APIs is time-sensitive data that
changes often, where clients need to be updated as soon as that data
changes.
• Push APIs allow the server to send updates to the client whenever new
data becomes available, without a need for the client to explicitly
request it. When the server receives new information, it pushes the
data to the client application, no request needed. This makes push a
more efficient communication style for data that changes often.
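By contrast, push delivery can be sketched as clients registering a callback once and being called back whenever data changes. `PushServer` and its method names are hypothetical, chosen for illustration:

```python
class PushServer:
    """Sketch of push-style delivery: the server holds connections and
    notifies clients as soon as new data arrives (no client polling)."""
    def __init__(self):
        self._clients = []

    def connect(self, callback):
        # The client registers once and then simply waits.
        self._clients.append(callback)

    def on_new_data(self, data):
        # New data arrived: push it to every connected client immediately.
        for cb in self._clients:
            cb(data)

inbox = []
server = PushServer()
server.connect(inbox.append)
server.on_new_data("price: 101.5")  # client is notified without asking
print(inbox)
```

Unlike the polling loop, no request is wasted: the client does nothing until the server has something to say.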
Advantages of publish/subscribe pattern
• Decoupled/loosely coupled components
• Pub/Sub allows you to separate the communication and application logic easily, thereby creating isolated
components. This results in:
• Creating more modularized, robust, and secure software components or modules
• Improving code quality and maintainability
• Greater system-wide visibility
• The simplicity of the pub/sub pattern means that users can understand the flow of the application easily.
• The pattern also allows creating decoupled components that help us get a bird’s eye view of the information flow. We can
know exactly where information is coming from and where it is delivered without explicitly defining origins or destinations
within the source code.
• Real-time communication
• Pub/sub delivers messages to subscribers instantaneously with push-based delivery, making it the ideal choice for near real-
time communication requirements. This eliminates the need for any polling to check for messages in queues and reduces the
delivery latency of the application.

• Ease of development
• Since pub/sub is not dependent on programming language, protocol, or a specific technology, any supported message broker
can be easily integrated into it using any programming language. Additionally, Pub/Sub can be used as a bridge to enable
communications between components built using different languages by managing inter-component communications.
• This leads to easy integrations with external systems without having to create functionality to facilitate communications or
worry about security implications. We can simply publish a message to a topic and let the external application subscribe to the
topic, eliminating the need for direct interaction with the underlying application.
• Increased scalability & reliability
• This messaging pattern is considered elastic—we do not have to pre-define a set number of publishers or subscribers. They
can be added to a required topic depending on the usage.

• The separation between communication and logic also leads to easier troubleshooting as developers can focus on the specific
component without worrying about it affecting the rest of the application.

• Pub/sub also improves the scalability of an application by allowing the message broker architecture, filters, and users to be
changed without affecting the underlying components. With pub/sub, a new messaging implementation is simply a matter of
changing the topic, provided the message formats are compatible, even with complex architectural changes.

• Testability improvements
• With the modularity of the overall application, tests can be targeted towards each module, creating a more streamlined testing
pipeline. This drastically reduces the test case complexity by targeting tests for each component of the application.

• The pub/sub pattern also helps to easily understand the origin and destination of the data and the information
flow. It is particularly helpful in testing issues related to:

• Data corruption
• Formatting
• Security
Disadvantages of pub/sub pattern

• Pub/Sub is a robust messaging service, yet it is not the best option for all
requirements. Some of the shortcomings of this pattern are:
• Unnecessary complexity in smaller systems
• Pub/sub needs to be properly configured and maintained. Where scalability and a
decoupled nature are not vital factors for your app, implementing pub/sub wastes
resources and leads to unnecessary complexity in smaller systems.
• Media streaming
• Pub/sub is not suitable when dealing with media such as audio or video as they
require smooth synchronous streaming between the host and the receiver. Because it
does not support synchronous end-to-end communications, pub/sub messaging is ill-
suited for:
• Video conferencing
• VOIP
• General media streaming applications
Elements of a Publish/Subscribe System
• A publisher submits a piece of information e (i.e., an event) to the pub/sub system by
executing the publish(e) operation.
• An event is structured as a set of attribute-value pairs.
• Each attribute has a name, a simple character string, and a type.
• The type is generally one of the common primitive data types defined in programming
languages or query languages (e.g. integer, real, string, etc.).
• On the subscriber’s side, interest in specific events is expressed through subscriptions.
• A subscription is a filter over a portion of the event content (or the whole of it).
• Expressed through a set of constraints that depend on the subscription language.
• A subscriber installs and removes a subscription from the pub/sub system by executing
the subscribe() and unsubscribe() operations respectively.
• An event e matches a subscription if it satisfies all the declared constraints on the
corresponding attributes.
• The task of verifying whether an event e matches a subscription is called matching.
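The matching operation can be sketched as below, assuming a simple subscription language in which each constraint is an (operator, value) pair over a named attribute; the operator set is an illustrative choice:

```python
import operator

# Assumed subscription language: "=", "<", ">" constraints per attribute.
OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt}

def matches(event, subscription):
    """An event (a dict of attribute-value pairs) matches a subscription
    if every declared constraint holds on the corresponding attribute."""
    for attr, (op, value) in subscription.items():
        if attr not in event or not OPS[op](event[attr], value):
            return False
    return True

event = {"symbol": "ACME", "price": 97.25, "volume": 1200}
sub = {"symbol": ("=", "ACME"), "price": ("<", 100.0)}
print(matches(event, sub))                               # True: all constraints hold
print(matches({"symbol": "ACME", "price": 150.0}, sub))  # False: price too high
```

A content-based pub/sub system runs this check for every published event against every installed subscription, which is why efficient matching algorithms matter at scale.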
Virtualization
• Virtualization refers to the representation of physical computing
resources in simulated form, created through software.
• This special layer of software (installed over active physical
machines) is referred to as the virtualization layer.
• This layer transforms the physical computing resources into virtual
form, which users use to satisfy their computing needs.
• Virtualization is thus the logical separation of physical resources from
the users' direct access while still fulfilling their service needs; at the
end, the physical resources are still responsible for providing those
services.
• Virtualization provides a level of logical abstraction that liberates user-
installed software (from the operating system up through system and
application software) from being tied to a specific set of
hardware. Instead, users install everything into the logical
operating environment (rather than the physical one) created
through virtualization.
• In cloud computing, resource virtualization, which adds a layer of
software over physical computing resources to create virtual resources,
acts as a layer of abstraction. This abstraction makes it easier to offer
more flexible, reliable and powerful services.
• Any kind of computing resource can be virtualized.
• Apart from basic computing devices like the processor and primary memory,
other resources like storage, network devices (switches, routers, etc.),
communication links and peripheral devices (keyboard,
mouse, printer, etc.) can also be virtualized.
• It should be noted that, in the case of core computing resources, a
virtualized component can only be operational when a physical
resource empowers it from the back end. For example, a virtual
processor can only work when there is a physical processor linked
to it.
MACHINE OR SERVER LEVEL VIRTUALIZATION
• Machine virtualization (also called server virtualization) is the concept
of creating a virtual machine (or virtual computer) on an actual physical
machine. The parent system on which the virtual machines run is
called the host system, and the virtual machines themselves are
referred to as guest systems.
• In conventional computing systems, there has always been a one-to-one
relationship between the physical computer and the operating system:
at a time, a single OS can run on the machine. Hardware virtualization
eliminates this limitation of a one-to-one relationship between
physical hardware and operating system. It facilitates running
multiple computer systems, each with its own operating system, on a
single physical machine.
What is virtual layer?
• From the figure above, it can be seen that virtual machines are created over the virtualization layer.
• This virtualization layer is actually a set of control programs that creates the environment for the
virtual machines to run on.
• This layer provides the access to the system resources to the virtual machines.
• It also controls and monitors the execution of the virtual machines over it.
• This software layer is referred to as the Hypervisor or Virtual Machine Monitor (VMM).
• The hypervisor abstracts the underlying software and/or hardware environments and represents
virtual system resources to its users.
• This layer also facilitates the existence of multiple VMs that are not bound to share the same
(underlying) OS kernel. For this reason, it becomes possible to run different operating systems
in the virtual machines created over a hypervisor.
• The hypervisor layer provides an administrative system console through which the virtual system
environment (like number of virtual components to produce or capacity of the components) can be
managed
Machine Virtualization Techniques
• There are two different techniques of server or machine virtualization:
• the hosted approach, and
• the bare-metal approach.
Hosted Approach
• In this approach, an operating system is first installed on the physical
machine to activate it.
• This OS installed over the host machine is referred as host operating
system.
• The hypervisor is then installed over this host OS.
• This type of hypervisor is referred to as Type 2 hypervisor or Hosted
hypervisor.
• The host OS works as the first layer of software over the physical resources.
• Hypervisor is the second layer of software and guest operating systems run
as the third layer of software.
• Products like VMWare Workstation and Microsoft Virtual PC are the most
common examples of type 2 hypervisors.
• Benefits: In this approach, the host OS supplies the hardware drivers
for the underlying physical resources. This eases the installation and
configuration of the hypervisor, and it makes type-2 hypervisors
compatible with a wide variety of hardware platforms.
• Drawbacks: A hosted hypervisor does not have direct access to the
hardware resources and hence, all of the requests from virtual
machines must go through the host OS. This may degrade the
performance of the virtual machines. Another drawback of the hosted
virtualization is the lack of support for real-time operating systems.
Since the underlying host OS controls the scheduling of jobs it
becomes unrealistic to run a real-time OS inside a VM using hosted
virtualization
Bare Metal Approach: Removal of the Host OS
• In this approach of machine virtualization, the hypervisor is directly
installed over the physical machine.
• The hypervisor is the first layer over the hardware resources; hence, the
technique is referred to as the bare-metal approach.
• The VMM or the hypervisor communicates directly with system
hardware.
• In this approach, the hypervisor acts as a low-level virtual machine
monitor and is also called a Type 1 or Native Hypervisor.
• VMware's ESX and ESXi Servers, Microsoft's Hyper-V, and Xen
are some examples of bare-metal hypervisors.
• Benefits: Since the bare-metal hypervisor can directly access the
hardware resources, in most cases it provides better performance
than a hosted hypervisor. For bigger applications like
enterprise data centers, bare-metal virtualization is more suitable
because it usually provides advanced features for resource and security
management, and administrators get more control over the host
environment.
• Drawbacks: Since any hypervisor usually has only a limited set of device
drivers built into it, bare-metal hypervisors have limited
hardware support and cannot run on a wide variety of hardware
platforms.
Machine reference model
• Modern computing systems can be expressed in terms of the reference model described in
Figure below. It defines the interfaces between the levels of abstractions, which hide
implementation details. Virtualization techniques actually replace one of the layers and
intercept the calls that are directed towards it.

• At the bottom layer, the model for the hardware is expressed in terms of the Instruction
Set Architecture (ISA), which defines the instruction set for the processor, registers,
memory, and interrupt management. ISA is the interface between hardware and software,
and it is important to the operating system (OS) developer (System ISA) and developers
of applications that directly manage the underlying hardware (User ISA).
• The application binary interface (ABI) separates the operating system layer from the
applications and libraries, which are managed by the OS. ABI covers details such as low-
level data types, alignment, and call conventions and defines a format for executable
programs. The ABI defines how data structures or computational routines are accessed in
machine code, which is a low-level, hardware-dependent format. Adhering to an ABI is the
job of the compiler.
• The highest level of abstraction is represented by the application programming interface
(API), which interfaces applications to libraries and/or the underlying operating system.
How data structures are accessed in source code is defined by the API.
• For this purpose, the instruction set exposed by the hardware has been
divided into different security classes that define who can operate with
them.
• The first distinction can be made between privileged and nonprivileged
instructions.
• Nonprivileged instructions are those instructions that can be used without
interfering with other tasks because they do not access shared resources.
This category contains, for example, all the floating-point, fixed-point, and
arithmetic instructions.
• Privileged instructions are those that are executed under specific
restrictions and are mostly used for sensitive operations, which expose
(behavior-sensitive) or modify (control-sensitive) the privileged state.
Virtualization and Protection Rings
• Protection Rings, are a mechanism to protect data and functionality from
faults (fault tolerance) and malicious behavior (computer security).
• Computer operating systems provide different levels of access to resources.
• A protection ring is one of two or more hierarchical levels or layers of
privilege within the architecture of a computer system. Rings are arranged
in a hierarchy from most privileged (most trusted, usually numbered zero)
to least privileged (least trusted, usually with the highest ring number). On
most operating systems, Ring 0 is the level with the most privileges and
interacts most directly with the physical hardware such as the CPU and
memory.
• x86 CPU hardware actually provides four protection rings: 0, 1, 2, and 3.
Only rings 0 (Kernel) and 3 (User) are typically used.
• In any modern operating system, the CPU is actually spending time in two very
distinct modes:
• 1.Kernel Mode
• In Kernel mode, the executing code has complete and unrestricted access to the
underlying hardware. It can execute any CPU instruction and reference any
memory address. Kernel mode is generally reserved for the lowest-level, most
trusted functions of the operating system. Crashes in kernel mode are catastrophic;
they will halt the entire PC.
• 2. User Mode
• In User mode, the executing code has no ability to directly access hardware or
reference memory. Code running in user mode must delegate to system APIs to
access hardware or memory. Due to the protection afforded by this sort of
isolation, crashes in user mode are always recoverable. Most of the code running
on your computer will execute in user mode.
Hypervisor mode

• The x86 family of CPUs provide a range of protection levels also known as rings in which code
can execute. Ring 0 has the highest level privilege and it is in this ring that the operating system
kernel normally runs. Code executing in ring 0 is said to be running in system space, kernel mode
or supervisor mode. All other code such as applications running on the operating system operates
in less privileged rings, typically ring 3.
• Under hypervisor virtualization a program known as a hypervisor (also known as a type 1 Virtual
Machine Monitor or VMM) runs directly on the hardware of the host system in ring 0. The task of
this hypervisor is to handle resource and memory allocation for the virtual machines in addition to
providing interfaces for higher level administration and monitoring tools.
• Clearly, with the hypervisor occupying ring 0 of the CPU, the kernels for any guest operating
systems running on the system must run in less privileged CPU rings. Unfortunately, most
operating system kernels are written explicitly to run in ring 0 for the simple reason that they need
to perform tasks that are only available in that ring, such as the ability to execute privileged CPU
instructions and directly manipulate memory.
• A number of different solutions to this problem have been devised in recent years, each of which is
described below:
Full Virtualization
• In an x86 architecture, the guest OSs and VMs are not executed in ring 0.
• Two issues arise from this. The first is that these guest OSs must be supervised by a monitor running in ring 0.
• Hence the VMM is run in ring 0.
• Where do we run the guest OS?
• These must run deprivileged in either ring 1 or 2.
• Typically the application is run in ring 3 and guest OS in ring 1.
• This methodology is termed as full virtualization.
• However, deprivileging an OS comes with an associated cost.
• Since the guest operating systems are written for execution on top of the hardware (ring 0), many instructions may be unsafe or even
potentially harmful to be run in user mode.
• Simple traps for all these ‘sensitive’ instructions may not work.
• To ensure safety of the virtualized environment, all those instructions that can cause problems must be intercepted and rewritten, if
required. Hence complete binary translation of the kernel code of the guest OSs is required to ensure the safety of the processor and
the machine while allowing the user code to run natively on the hardware.
• Further, a guest OS could easily determine that it is not running at privilege level 0. Since this is not desirable, the VMM must take
an appropriate action.
• Another problem of deprivileging the OS is that, like any normal OS, the guest OS expects unrestrained access to the
full memory; but as a mere application running in a higher ring, it cannot enjoy the same privilege. The VMM must therefore
ensure that this is taken care of. This method is called full virtualization with binary translation.
• In this virtualization, one or more guest operating systems (virtual machines) share hardware resources from the host system. The
presence of the hypervisor beneath is not known to the guests.
• Full virtualization is the only option that requires no hardware assist or
operating system assist to virtualize sensitive and privileged instructions.
The hypervisor translates all operating system instructions on the fly and
caches the results for future use, while user level instructions run
unmodified at native speed.
• However, the issue that restricts full virtualization with binary translation is
the performance.
• Translation takes time and translating all the kernel codes of the guest OS is
expensive in terms of performance. This problem is resolved by using
dynamic binary translation. In dynamic binary translation, code is processed
a block at a time. These blocks may or may not contain critical instructions. For each
block, dynamic BT translates the critical instructions, if any, into
privileged instructions, which will trap to the VMM for further emulation. Full
virtualization technology uses and exploits dynamic binary translation.
• Real-time systems also cannot be virtualized since such systems cannot
tolerate the delays caused by the translation
• VMware’s virtualization product VMWare ESXi Server and Microsoft
Virtual Server are few examples of full virtualization solution.
Para virtualization
• The problem with full virtualization is that the guest OS is unaware of the fact that it has been deprivileged, and hence its behaviour
continues to be the same.
• In para virtualization, the guest OS is modified or patched for virtualization.
• Hypervisor sits as the base OS or in ring 0 in case of x86 and guest OS resides on top of VMM.
• Here, since the guest OS is aware that it is running above the VMM rather than on top of the physical machine, many problems of full
virtualization are taken care of.
• The modified kernel of the guest OS is able to communicate with the underlying hypervisor via special calls.
• These special calls are provided by specific APIs depending on the hypervisor employed.
• These special calls are equivalent to system calls generated by an application to a non virtualized OS.
• Xen Hypervisor is an example that uses paravirtualization technology.
• The guest OS is modified and thus runs kernel-level operations at Ring 1.
• The guest OS is now fully aware of how to process both privileged and sensitive instructions. Hence the necessity for translation of
instructions is not present any more.
• Guest OS uses a specialized call, called “hypercall” to talk to the VMM.
• The VMM executes the privileged instructions. Thus the VMM is responsible for handling the virtualization requests and passing them on to the
hardware.
• The unmodified versions of available operating systems (like Windows, Linux) cannot be used in para-
virtualization. Since it involves modifications of the OS the para-virtualization is sometimes referred to as
OS-Assisted Virtualization too. This technique relieves the hypervisor from handling the entire virtualization
tasks to some extent. Best known example of para virtualization hypervisor is the open-source Xen project
which uses a customized Linux kernel.
• Advantages
• Para-virtualization allows calls from guest OS to directly communicate with hypervisor (without any binary
translation of instructions). The use of modified OS reduces the virtualization overhead of the hypervisor as
compared to the full virtualization.
• In para-virtualization, the system is not restricted by the device drivers provided by the virtualization
software layer. In fact, in para-virtualization, the virtualization layer (hypervisor) does not contain any device
drivers at all. Instead, the guest operating systems contain the required device drivers.
• Limitations
• Unmodified versions of available operating systems (like Windows or Linux) are not compatible with para-
virtualization hypervisors. Modifications are possible in Open source operating systems (like Linux) by the
user. But for proprietary operating systems (like Windows), it depends upon the owner: only if the owner agrees to
supply the required modified version of the OS for a hypervisor does that OS become available for the
para-virtualization system.
• Security is compromised in this approach, as the guest OS has comparatively more control over the underlying
hardware. Hence, a malicious user of some VM has a greater chance of causing harm to the
physical machine.
• Legacy processors are not designed for virtualization. Hence we observed that whatever the methods that may be applied for
implementing virtualization, each has its own problems.
• However, if the processors are made virtualization-aware, the VMM design will be more efficient and simple.
• Many issues mentioned in the earlier methods can be easily taken care of with such a processor.
• This is the reason why hardware vendors rapidly embraced virtualization and developed new features to simplify virtualization
techniques.
• The two giants in the hardware arena, Intel and AMD, came up with designs of a new CPU execution mode that allows the
VMM to run in a new root mode below ring 0.
• This is the way to handle the privileged mode.
• In this new design, both privileged and sensitive calls automatically trap to the hypervisor.
• Hence, in this new design, no need for either binary translation or paravirtualization.
• Examples of this new design are Intel VT-x (2005) and AMD-V (2006).
• Intel VT-x has two modes of operation: VMX root and VMX non-root.
• While “VMX root” mode executes the hypervisor/VMM in ring 0, “VMX non-root” mode executes the
guest OS, also in ring 0, thereby removing the need to deprivilege the guest OS. Both modes support all privilege rings and are
identical. An unmodified guest OS runs in ring 0 in non-root mode and traps instructions to root mode. The privileged and sensitive
calls automatically trap to the hypervisor, and the VMM controls the execution of the guest OS.
Levels of Virtualization Implementation
• The main function of the software layer for virtualization is to
virtualize the physical hardware of a host machine into virtual
resources to be used by the VMs, exclusively. This can be
implemented at various operational levels.
• The virtualization software creates the abstraction of VMs by
interposing a virtualization layer at various levels of a computer
system. Common virtualization layers include the instruction set
architecture (ISA) level, hardware level, operating system level,
library support level, and application level
• Instruction Set Architecture Level (ISA)
• At the ISA level, virtualization works by emulating a given ISA with the ISA of the host
machine. For instance, MIPS binary code can operate on an x86-based host machine with
the help of ISA emulation.
Thus, this strategy makes it possible to run a large volume of legacy binary code written
for several processors on any given, different hardware host machine.
The first emulation method is code interpretation: an interpreter
program maps the source instructions to target instructions one by one.
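The one-instruction-at-a-time interpretation described above can be illustrated with a toy emulator for an invented two-opcode "source ISA"; both the opcodes and the program format are assumptions made for the example:

```python
def interpret(program, acc):
    """Toy ISA emulation by interpretation: each source instruction is
    decoded and executed one by one on the host."""
    for opcode, operand in program:
        if opcode == "ADD":
            acc += operand
        elif opcode == "MUL":
            acc *= operand
        else:
            raise ValueError(f"unknown opcode: {opcode}")
    return acc

# (3 + 4) * 2 expressed in the toy ISA
prog = [("ADD", 4), ("MUL", 2)]
print(interpret(prog, 3))  # 14
```

Real ISA emulators (e.g. running MIPS binaries on x86) follow the same decode-and-execute loop, just with a full instruction set and machine state.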

• Hardware Abstraction Level

• Hardware-level virtualization is performed right on top of the bare hardware. On the one
hand, this approach generates a virtual hardware environment for a VM. On the other
hand, the process manages the underlying hardware through virtualization. The idea is to
virtualize a computer’s resources, such as its processors, memory, and I/O devices. The
intention is to improve the hardware utilization rate through multiple concurrent users. The
idea was implemented in the IBM VM/370 in the 1960s. More recently, the Xen
hypervisor has been applied to virtualize x86-based machines to run Linux or other guest
OS applications.
• At the operating system level, the virtualization model creates an abstract layer between the operating
system and the user applications.
• Each virtual environment is an isolated container on the operating system and the physical server,
which uses the host's software and hardware. Each of these containers then operates in
the form of a server.
• This virtualization level comes into use when there are numerous users and
no one wants to share the hardware.
• Each user gets one virtual environment with dedicated virtual hardware
resources; hence, in this manner, there is no issue of any conflict.
• Any hardware in such a virtualized environment is processed through the host
operating system.
• The essential requirement at the operating system level is that all the
user systems in the virtualized environment must run the same family of
operating system; otherwise, the service cannot be delivered to the users.
• Library support level
• Most applications use APIs exported by user-level libraries rather than issuing lengthy system calls
to the OS.
• Since most systems provide well-documented APIs, such an interface becomes another candidate for
virtualization.
• Virtualization with library interfaces is therefore possible by intercepting the communication link between
applications and the system through API hooks.
• Activities within the Library Support Level:
• Use of the application programming interface (API).
• At this level, an emulator works as a tool and provides the guest operating system with the
resources it needs. In short, users use the emulator to run applications of other operating
systems.
• User-Application Level
• Application-level virtualization works where there is a desire to virtualize only one application; it is the
last of the implementation levels of virtualization in cloud computing.
• One does not need to virtualize the complete environment of the platform.
• It generally applies when you run virtual machines that use high-level languages: programs compiled
for a high-level-language virtual machine run seamlessly at the application level.
• Activities within the User-Application Level:
• A virtual machine operates as an application on the user system with the help of the virtualization layer.
• Users can also access the services even if the user and host environments are of different types.
THE TOOLS OF THE VIRTUALIZATION
• VMware: a virtual machine ('VM') tool that assisted in executing an unmodified OS as a host- or
user-level application. The OS used with VMware could be stopped, reinstalled, restarted,
or crashed without having any influence on the applications running on the host CPU. VMware
provided separation of the guest OS from the actual host OS. As a consequence, if the guest OS
failed, the physical hardware and the host computer did not suffer from the failure. VMware
was utilized to create the illusion of standard hardware inside the Virtual Machine ('VM').
Hence, VMware was utilized to execute numerous unmodified OSs simultaneously on the
same hardware engine by executing each OS in its own Virtual Machine. Unlike
code running on a simulator, the Virtual Machine executed the code
directly on the physical hardware, without any software interpreting it.
• Xen Xen is the most common open-source virtualization tool supporting both full
virtualization ‘FV’ and para-virtualization ‘PV’. Xen is a widely known virtualization
solution, originally developed at the University of Cambridge, and the only bare-metal solution
available as open source. It contains several elements that cooperate to supply the
virtualization environment: the Xen hypervisor ‘XH’, the Domain-0 guest, shortened to Dom-0,
and Domain-U guests, shortened to Dom-U, which can be either PV guests or FV guests. The
Xen hypervisor ‘XH’ is the layer that resides directly on the hardware, beneath any OS. It
is responsible for CPU scheduling and memory partitioning of the different VMs, and it delegates
the administration of Domain-U guests ‘Dom-U’ to the Domain-0 guest ‘Dom-0’ [23].
• Qemu This virtualization tool is used to perform virtualization
on operating systems such as Linux and Windows. It is a
renowned open-source emulator that offers fast emulation with the
help of dynamic translation, and it provides several useful commands for
managing virtual machines ‘VMs’. Qemu is the major open-source
tool supporting a wide range of hardware architectures, and it is an
example of native virtualization ‘NV’.
• Docker Docker is open source. It relies on containers to
distribute Linux applications automatically. All of an application’s requirements, such as
code, runtime, system tools, and system libraries, are included in the
Docker container. Docker used the Linux Containers (LXC) library until
version 0.9; after that version, Docker uses libcontainer for the
virtualization capabilities provided by the Linux kernel, implementing
isolated containers via a high-level application programming
interface (API). A separate guest operating system (OS) is not required in Docker:
a Docker container uses the same Linux kernel as the host but runs
with its user space isolated from the host OS. Docker is natively available
and compatible only with Linux.
• Kernel-Based Virtual Machine (KVM) KVM is also open source and
requires central processing unit (CPU) virtualization technology (Intel VT or AMD-V).
It provides full virtualization ‘FV’ for Linux on x86 hardware that includes the
virtualization extensions; the KVM kernel component is included in
Linux, while the KVM user-space components are included in the Quick
Emulator (QEMU). For some devices, KVM also supports the
para-virtualization ‘PV’ mechanism. Using KVM, an end user can turn
Linux into a hypervisor that runs multiple isolated virtual
environments called guests. The main limitation of KVM is that it cannot
perform emulation by itself. Instead, it exposes the KVM interface, sets
up the virtual machine address space, and feeds the simulated input/output
through QEMU.
• OpenVZ OpenVZ is another open-source virtualization tool, based on the
control-group concept. OpenVZ provides container-based
virtualization ‘CBV’ for the Linux platform. It allows several isolated
execution environments, called Virtual Environments ‘VEs’ or containers, to share a
single operating-system kernel. It also provides superior performance and
scalability compared with the other virtualization tools.
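The container workflow Docker implements can be made concrete by assembling a `docker run` invocation. The sketch below only builds the command list in Python; the `docker` CLI itself is assumed to be installed separately, and the image name, container name, and bind mount are purely illustrative.

```python
def build_docker_run(image, name=None, env=None, volumes=None):
    """Assemble an isolated-container launch command for the docker CLI."""
    cmd = ["docker", "run", "--rm", "-d"]          # detached, removed on exit
    if name:
        cmd += ["--name", name]
    for key, value in (env or {}).items():         # per-container environment variables
        cmd += ["-e", f"{key}={value}"]
    for host_path, container_path in (volumes or {}).items():
        cmd += ["-v", f"{host_path}:{container_path}"]  # bind mounts into the container
    cmd.append(image)
    return cmd

cmd = build_docker_run("nginx:alpine", name="web",
                       env={"MODE": "prod"},
                       volumes={"/srv/site": "/usr/share/nginx/html"})
print(" ".join(cmd))
# docker run --rm -d --name web -e MODE=prod -v /srv/site:/usr/share/nginx/html nginx:alpine
```

Everything the container needs beyond the shared kernel (code, runtime, libraries) travels in the image, which is why a single command line like this is enough to launch an isolated application.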
What Is Disaster Recovery?
• Disaster recovery (DR) is an organization’s ability to restore access
and functionality to IT infrastructure after a disaster event, whether
natural or caused by human action (or error). DR is considered a
subset of business continuity, explicitly focusing on ensuring that the
IT systems that support critical business functions are operational as
soon as possible after a disruptive event occurs.
What Are the Types of Disaster Recovery?
• Businesses can choose from a variety of disaster recovery methods, or combine several:
• Back-up: This is the simplest type of disaster recovery and entails storing data off site or on a removable
drive. However, just backing up data provides only minimal business continuity help, as the IT infrastructure
itself is not backed up.
• Cold Site: In this type of disaster recovery, an organization sets up a basic infrastructure in a second, rarely
used facility that provides a place for employees to work after a natural disaster or fire. It can help with
business continuity because business operations can continue, but it does not provide a way to protect or
recover important data, so a cold site must be combined with other methods of disaster recovery.
• Hot Site: A hot site maintains up-to-date copies of data at all times. Hot sites are time-consuming to set up
and more expensive than cold sites, but they dramatically reduce downtime.
• Disaster Recovery as a Service (DRaaS): In the event of a disaster or ransomware attack, a DRaaS provider
moves an organization’s computer processing to its own cloud infrastructure, allowing a business to continue
operations seamlessly from the vendor’s location, even if an organization’s servers are down. DRaaS plans are
available through either subscription or pay-per-use models. There are pros and cons to choosing a local
DRaaS provider: latency will be lower after transferring to DRaaS servers that are closer to an organization’s
location, but in the event of a widespread natural disaster, a DRaaS that is nearby may be affected by the same
disaster.
• Backup as a Service: Similar to backing up data at a remote location, with Backup as a Service a third-
party provider backs up an organization’s data, but not its IT infrastructure.
• Datacenter disaster recovery: The physical elements of a data center can protect data and contribute to
faster disaster recovery in certain types of disasters. For instance, fire suppression tools will help data and
computer equipment survive a fire. A backup power source will help businesses sail through power outages
without grinding operations to a halt. Of course, none of these physical disaster recovery tools will help in the
event of a cyber attack.
• Virtualization: Organizations can back up certain operations and data or
even a working replica of an organization’s entire computing environment
on off-site virtual machines that are unaffected by physical disasters. Using
virtualization as part of a disaster recovery plan also allows businesses to
automate some disaster recovery processes, bringing everything back online
faster. For virtualization to be an effective disaster recovery tool, frequent
transfer of data and workloads is essential, as is good communication within
the IT team about how many virtual machines are operating within an
organization.
• Point-in-time copies: Point-in-time copies, also known as point-in-time
snapshots, make a copy of the entire database at a given time. Data can be
restored from this backup, but only if the copy is stored off site or on a
virtual machine that is unaffected by the disaster.
• Instant recovery: Instant recovery is similar to point-in-time copies, except
that instead of copying a database, instant recovery takes a snapshot of an
entire virtual machine.
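The point-in-time idea above can be sketched in a few lines: capture a deep copy of the whole data set at a chosen moment, then restore from that copy after a failure. This is a toy illustration of the concept using an in-memory dictionary, not a real snapshot mechanism.

```python
import copy

database = {"orders": [101, 102], "customers": ["alice"]}

# Point-in-time copy: capture the entire state at this moment.
snapshot = copy.deepcopy(database)

# Normal operation continues and new data is written...
database["orders"].append(103)

# ...then a "disaster" corrupts the live data.
database.clear()

# Restore from the point-in-time copy. Everything written after the snapshot
# is lost — exactly the data-loss window that an RPO tries to bound.
database = copy.deepcopy(snapshot)
print(database["orders"])  # [101, 102] — the post-snapshot write (103) is gone
```

Note that the restore only works because the snapshot object survived the failure; in practice that means storing it off site or on an unaffected virtual machine, as the slide says.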
Virtual disaster recovery
• The general idea of virtual disaster recovery is that combining server
and storage virtualization allows companies to store backups in places
that are not tied to their own physical location.
• This protects data and systems from fires, floods and other types of
natural disasters, as well as other emergencies.
• Many vendor systems feature redundant design with availability
zones, so that if data in one zone is compromised, another zone can
keep backups alive.
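The zone-redundancy idea can be sketched as a lookup that falls back to the first healthy replica when one zone is compromised. The zone names and the health map below are purely illustrative.

```python
def pick_backup_zone(zones, healthy):
    """Return the first zone whose replica is still available, else None."""
    for zone in zones:
        if healthy.get(zone):
            return zone
    return None

zones = ["zone-a", "zone-b", "zone-c"]        # replicas of the backup kept in each zone
healthy = {"zone-a": False, "zone-b": True, "zone-c": True}  # zone-a is compromised

print(pick_backup_zone(zones, healthy))  # zone-b — another zone keeps the backup alive
```

With every zone down the function returns `None`, which is the case a DR plan must still cover, for example with an off-region copy.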
Good disaster recovery plan
• The first step in disaster recovery planning is to look at the business
requirements and match applications to service-level objectives. In the
disaster recovery realm, the standard measurements are recovery time
objective (RTO) and recovery point objective (RPO).
• RTO specifies the amount of outage time that can be tolerated by the
application before services must be resumed. Mission-critical applications
have low, or even zero, RTOs (indicating services must continue at all
times).
• RPO describes the amount of data loss that can be tolerated by an
application. This may be zero (i.e., no data loss) or measured in minutes or
hours. Some non-core apps (such as those used for reporting) may be able to
tolerate an RPO of up to 24 hours, especially where the data can be regenerated
from other sources.