Unit1 Cloud Computing

The document discusses cloud architecture, including its components such as hypervisors, management software, and network infrastructure, which are essential for cloud computing. It outlines various system models like clusters, grid computing, and peer-to-peer networks, emphasizing their characteristics and applications. Additionally, it highlights the benefits of cloud computing, such as scalability and cost-effectiveness, while addressing challenges like security and resource management.


CCS335-CLOUD COMPUTING
UNIT-1

CLOUD ARCHITECTURE MODELS AND INFRASTRUCTURE


Cloud architecture: system models for distributed and cloud computing – NIST cloud computing reference architecture – Cloud deployment models – Cloud service models; Cloud infrastructure: architectural design of compute and storage clouds – Design challenges.

1.1 CLOUD ARCHITECTURE

Cloud infrastructure is a term used to describe the components needed for cloud computing, which
includes hardware, abstracted resources, storage, and network resources.
An abstraction technology or process—like virtualization—is used to separate resources from
physical hardware and pool them into clouds; automation software and management tools allocate
these resources and provision new environments so users can access what they need—when they
need it.
Components of Cloud Infrastructure:

1. Hypervisor:
A hypervisor is firmware or a low-level program that is key to enabling virtualization. It is used to divide and allocate cloud resources among several customers. Because it monitors and manages cloud services and resources, the hypervisor is also called a VMM (Virtual Machine Monitor, or Virtual Machine Manager).
2. Management Software:
Management software helps in maintaining and configuring the infrastructure. Cloud
management software monitors and optimizes resources, data, applications and services.

3. Deployment Software :
Deployment software helps in deploying and integrating the application on the cloud. So,
typically it helps in building a virtual computing environment.
4. Network:
The network is one of the key components of cloud infrastructure; it is responsible for connecting cloud services over the Internet. A network is essential for the transmission of data and resources, both externally and internally.
5. Server:
The server represents the computing portion of the cloud infrastructure. It is responsible for managing and delivering cloud services to various users and partners, maintaining security, and so on.
6. Storage:
Storage represents the storage facility provided to different organizations for storing and managing data. Because it keeps multiple copies of the data, it allows another copy to be used if one resource fails.
Along with these, virtualization is also considered an important component of cloud infrastructure, because it abstracts the available data storage and computing power away from the actual hardware. Users interact with their cloud infrastructure through a GUI (Graphical User Interface).
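To make the pooling-and-allocation role of the hypervisor and management software concrete, here is a toy sketch. It is only an illustration: the Host/Pool classes and the first-fit placement policy are invented for this example and do not correspond to any real hypervisor API.

```python
# Toy sketch of pooled resource allocation, as a management layer might do it.
# All names are invented for illustration; real platforms expose far richer APIs.

class Host:
    """A physical server whose CPU/RAM the hypervisor carves into VMs."""
    def __init__(self, name, cpus, ram_gb):
        self.name, self.free_cpus, self.free_ram = name, cpus, ram_gb

class Pool:
    """Abstracted pool: callers ask for capacity, not for specific hardware."""
    def __init__(self, hosts):
        self.hosts = hosts

    def provision_vm(self, cpus, ram_gb):
        # First-fit placement: use the first host with enough free capacity.
        for h in self.hosts:
            if h.free_cpus >= cpus and h.free_ram >= ram_gb:
                h.free_cpus -= cpus
                h.free_ram -= ram_gb
                return {"host": h.name, "cpus": cpus, "ram_gb": ram_gb}
        return None  # pool exhausted

pool = Pool([Host("h1", cpus=8, ram_gb=32), Host("h2", cpus=4, ram_gb=16)])
vm = pool.provision_vm(cpus=6, ram_gb=24)
print(vm)  # the VM lands on h1; the user never sees which host was chosen
```

The point of the sketch is the separation the text describes: users request abstract capacity, and the management layer decides which physical hardware backs it.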
Distributed and cloud computing systems are built over a large number of autonomous computer
nodes. These node machines are interconnected by SANs, LANs, or WANs in a hierarchical manner. With today's networking technology, a few LAN switches can easily connect hundreds of
machines as a working cluster. A WAN can connect many local clusters to form a very large cluster
of clusters. In this sense, one can build a massive system with millions of computers connected to
edge networks.

Massive systems are considered highly scalable, and can reach web-scale connectivity, either
physically or logically. In Table 1.2, massive systems are classified into four groups: clusters,
P2P networks, computing grids, and Internet clouds over huge data centers. In terms of node
number, these four system classes may involve hundreds, thousands, or even millions of computers
as participating nodes. These machines work collectively, cooperatively, or collaboratively at
various levels. The table entries characterize these four system classes in various technical and
application aspects.
From the application perspective, clusters are most popular in supercomputing applications. In
2009, 417 of the Top 500 supercomputers were built with cluster architecture. It is fair to say that
clusters have laid the necessary foundation for building large-scale grids and clouds. P2P networks
appeal most to business applications. However, the content industry was reluctant to accept P2P
technology for lack of copyright protection in ad hoc networks. Many national grids built in the
past decade were underutilized for lack of reliable middleware or well-coded applications.
Potential advantages of cloud computing include its low cost and simplicity for both providers and
users.

1.2 SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING

1. CLUSTERS
2. GRID COMPUTING
3. PEER TO PEER NETWORK
4. INTERNET CLOUDS

1.2.1 Clusters of Cooperative Computers


A computing cluster consists of interconnected stand-alone computers which work cooperatively
as a single integrated computing resource. In the past, clustered computer systems have
demonstrated impressive results in handling heavy workloads with large data sets.
In cluster computing, the computers are connected so that they act as a single computing resource. Tasks are performed concurrently by the individual computers, known as nodes, which are connected to the network. The activities performed by any single node are known to all the nodes of the cluster, which can increase performance, transparency, and processing speed. Cluster computing arose partly to reduce cost, and a cluster can be resized by adding or removing nodes.
1.2.1.1 Cluster Architecture
Figure 1.15 shows the architecture of a typical server cluster built around a low-latency, high-
bandwidth interconnection network. This network can be as simple as a SAN (e.g., Myrinet) or a
LAN (e.g., Ethernet). To build a larger cluster with more nodes, the interconnection network can
be built with multiple levels of Gigabit Ethernet, Myrinet, or InfiniBand switches. Through
hierarchical construction using a SAN, LAN, or WAN, one can build scalable clusters with an
increasing number of nodes. The cluster is connected to the Internet via a virtual private network
(VPN) gateway. The gateway IP address locates the cluster. The system image of a computer is
decided by the way the OS manages the shared cluster resources. Most clusters have loosely
coupled node computers. All resources of a server node are managed by their own OS. Thus, most
clusters have multiple system images as a result of having many autonomous nodes under different
OS control.

1.2.1.2 Single-System Image


Greg Pfister has indicated that an ideal cluster should merge multiple system images into a single-system image (SSI). Cluster designers desire a cluster operating system or some middleware to support SSI at various levels, including the sharing of CPUs, memory, and I/O across all cluster nodes. An SSI is an illusion created by software or hardware that presents a collection of resources as one integrated, powerful resource. SSI makes the cluster appear like a single machine to the user. A cluster with multiple system images is nothing but a collection of independent computers.
1.2.1.3 Hardware, Software, and Middleware Support
Clusters exploring massive parallelism are commonly known as MPPs. Almost all HPC clusters
in the Top 500 list are also MPPs. The building blocks are computer nodes (PCs, workstations,
servers, or SMP), special communication software such as PVM or MPI, and a network interface
card in each computer node. Most clusters run under the Linux OS. The computer nodes are
interconnected by a high-bandwidth network (such as Gigabit Ethernet, Myrinet, InfiniBand, etc.).
Special cluster middleware supports are needed to create SSI or high availability (HA). Both
sequential and parallel applications can run on the cluster, and special parallel environments are
needed to facilitate use of the cluster resources. For example, distributed memory has multiple images. Users may want all distributed memory to be shared by all servers by forming distributed shared memory (DSM). Many SSI features are expensive or difficult to achieve at various cluster operational levels. Instead of achieving SSI, many clusters are loosely coupled machines. Using virtualization, one can build many virtual clusters dynamically, upon user demand.
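The MPI/PVM message-passing style mentioned above can be sketched without a real cluster. In this minimal illustration, Python threads and queues stand in for compute nodes and the interconnect, purely to show the send/receive structure of a scatter-and-reduce job; the node count and workload are invented for the example.

```python
# Minimal sketch of MPI-style message passing among cluster nodes.
# Real clusters use MPI (e.g. MPI_Send/MPI_Recv) over a fast interconnect;
# here threads with queues simulate nodes and the network.
import threading, queue

NUM_NODES = 4
inboxes = [queue.Queue() for _ in range(NUM_NODES)]  # one "NIC" per node
results = {}

def node(rank):
    if rank == 0:
        # Rank 0 scatters work and gathers partial sums (master/worker style).
        for r in range(1, NUM_NODES):
            inboxes[r].put(list(range(r * 10, r * 10 + 5)))  # send a chunk
        total = 0
        for _ in range(1, NUM_NODES):
            total += inboxes[0].get()                        # receive a reply
        results["total"] = total
    else:
        chunk = inboxes[rank].get()                          # receive work
        inboxes[0].put(sum(chunk))                           # send partial sum

threads = [threading.Thread(target=node, args=(r,)) for r in range(NUM_NODES)]
for t in threads: t.start()
for t in threads: t.join()
print(results["total"])  # prints 330
```

Each "node" here runs autonomously and cooperates only through explicit messages, which is exactly the loosely coupled, multiple-system-image situation the section describes.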
1.2.1.4 Major Cluster Design Issues
Unfortunately, a cluster-wide OS for complete resource sharing is not available yet. Middleware
or OS extensions were developed at the user space to achieve SSI at selected functional levels.
Without this middleware, cluster nodes cannot work together effectively to achieve cooperative computing. The software environments and applications must rely on the middleware to achieve
high performance. The cluster benefits come from scalable performance, efficient message
passing, high system availability, seamless fault tolerance, and cluster-wide job management.
1.2.2 Grid Computing Infrastructures
Grid computing was introduced in the 1990s. As in the other computing structures, it includes different computers or nodes; in this case, however, the nodes are placed in different geographical locations but are connected to the same network using the Internet.
The other computing methods seen so far use homogeneous nodes located in the same place. In grid computing, by contrast, the nodes are placed in different organizations. It reduced some of the problems of cluster computing, but the distance between the nodes raised a new problem.

1.2.2.1 Computational Grids


Like an electric utility power grid, a computing grid offers an infrastructure that couples computers, software/middleware, special instruments, people, and sensors together. The grid is often constructed across LAN, WAN, or Internet backbone networks at a regional, national, or global scale. Enterprises or organizations present grids as integrated computing resources. They can also be viewed as virtual platforms to support virtual organizations. The computers used in a grid are primarily workstations, servers, clusters, and supercomputers. Personal computers, laptops, and PDAs can be used as access devices to a grid system.
Figure 1.16 shows an example computational grid built over multiple resource sites owned by
different organizations. The resource sites offer complementary computing resources, including
workstations, large servers, a mesh of processors, and Linux clusters to satisfy a chain of
computational needs. The grid is built across various IP broadband networks including LANs and WANs already used by enterprises or organizations over the Internet. The grid is presented to users
as an integrated resource pool as shown in the upper half of the figure.
1.2.2.2 Grid Families
Grid technology demands new distributed computing models, software/middleware support,
network protocols, and hardware infrastructures. National grid projects are followed by industrial
grid platform development by IBM, Microsoft, Sun, HP, Dell, Cisco, EMC, Platform Computing,
and others. New grid service providers (GSPs) and new grid applications have emerged rapidly,
similar to the growth of Internet and web services in the past two decades. In Table 1.4, grid
systems are classified in essentially two categories: computational or data grids and P2P grids.
Computing or data grids are built primarily at the national level.

1.2.3 Peer-to-Peer Network Families


An example of a well-established distributed system is the client-server architecture. In this
scenario, client machines (PCs and workstations) are connected to a central server for compute, e-
mail, file access, and database applications. The P2P architecture offers a distributed model of networked systems. First, a P2P network is client-oriented instead of server-oriented. In this section, P2P systems are introduced at the physical level and overlay networks at the logical level.

1.2.3.1 P2P Systems


In a P2P system, every node acts as both a client and a server, providing part of the system
resources. Peer machines are simply client computers connected to the Internet. All client
machines act autonomously to join or leave the system freely. This implies that no master-slave
relationship exists among the peers. No central coordination or central database is needed. In other
words, no peer machine has a global view of the entire P2P system. The system is self-organizing
with distributed control.
Figure 1.17 shows the architecture of a P2P network at two abstraction levels. Initially, the peers
are totally unrelated. Each peer machine joins or leaves the P2P network voluntarily. Only the
participating peers form the physical network at any time. Unlike the cluster or grid, a P2P network
does not use a dedicated interconnection network. The physical network is simply an ad hoc
network formed at various Internet domains randomly using the TCP/IP and NAI protocols. Thus,
the physical network varies in size and topology dynamically due to the free membership in the
P2P network.
1.2.3.2 Overlay Networks
Data items or files are distributed among the participating peers. Based on communication or file-sharing needs, the peer IDs form an overlay network at the logical level. This overlay is a virtual network formed by mapping each physical machine with its ID, logically, through a virtual mapping as shown in Figure 1.17. When a new peer joins the system, its peer ID is added as a node in the overlay network. When an existing peer leaves the system, its peer ID is removed from the overlay network automatically. Therefore, it is the P2P overlay network that characterizes the logical connectivity among the peers.
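The virtual mapping just described, with peer IDs added on join and removed on leave, can be sketched in a few lines. The hashing scheme below is invented for illustration (structured systems such as Chord use consistent hashing with successor pointers; nothing here models a real protocol).

```python
# Sketch of a logical overlay: physical machines are mapped to peer IDs,
# and joining/leaving peers add/remove nodes in the overlay.
import hashlib

class Overlay:
    def __init__(self):
        self.peers = {}  # peer ID -> physical address (the virtual mapping)

    def peer_id(self, address):
        # Map a physical machine to a logical ID by hashing its address.
        return int(hashlib.sha1(address.encode()).hexdigest(), 16) % 2**32

    def join(self, address):
        pid = self.peer_id(address)
        self.peers[pid] = address   # new peer ID added to the overlay
        return pid

    def leave(self, address):
        # Peer ID removed from the overlay automatically on departure.
        self.peers.pop(self.peer_id(address), None)

net = Overlay()
a = net.join("10.0.0.1:7000")
b = net.join("10.0.0.2:7000")
net.leave("10.0.0.1:7000")
print(sorted(net.peers.values()))  # only the remaining peer is in the overlay
```

Note that the physical network (IP addresses) and the logical overlay (peer IDs) change independently, which is the separation of levels the text emphasizes.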

There are two types of overlay networks: unstructured and structured. An unstructured
overlay network is characterized by a random graph. There is no fixed route to send messages or
files among the nodes. Often, flooding is applied to send a query to all nodes in an unstructured
overlay, thus resulting in heavy network traffic and nondeterministic search results. Structured overlay networks follow certain connectivity topologies and rules for inserting and removing nodes (peer IDs) from the overlay graph. Routing mechanisms are developed to take advantage of the structured overlays.
1.2.3.5 P2P Application Families
Based on application, P2P networks are classified into four groups, as shown in Table 1.5. The
first family is for distributed file sharing of digital contents (music, videos, etc.) on the P2P
network. This includes many popular P2P networks such as Gnutella, Napster, and BitTorrent, among others. Collaboration P2P networks include MSN or Skype chatting, instant messaging, and collaborative design, among others. The third family is for distributed P2P computing in
specific applications. For example, SETI@home provides 25 Tflops of distributed computing
power, collectively, over 3 million Internet host machines. Other P2P platforms, such as JXTA,
.NET, and FightingAID@home, support naming, discovery, communication, security, and
resource aggregation in some P2P applications.
1.2.3.6 P2P Computing Challenges
P2P computing faces three types of heterogeneity problems in hardware, software, and network
requirements. There are too many hardware models and architectures to select from;
incompatibility exists between software and the OS; and different network connections and protocols make it too complex to apply in real applications. We need system scalability as the workload increases. System scaling is directly related to performance and bandwidth. P2P networks do have these properties. Data location is also important, as it affects collective performance. Data locality, network proximity, and interoperability are three design objectives in distributed P2P applications.

P2P performance is affected by routing efficiency and self-organization by participating peers.


Fault tolerance, failure management, and load balancing are other important issues in using overlay
networks. Lack of trust among peers poses another problem. Peers are strangers to one another.
Security, privacy, and copyright violations are major worries by those in the industry in terms of
applying P2P technology in business applications [35]. In a P2P network, all clients provide
resources including computing power, storage space, and I/O bandwidth. The distributed nature of P2P networks also increases robustness, because limited peer failures do not form a single point of failure. By replicating data across multiple peers, one avoids losing data when individual nodes fail. On the other hand, disadvantages of P2P networks do exist. Because the system is not centralized, managing it is difficult. In addition, the system lacks security. Anyone can log on to the system and cause damage or abuse. Further, all client computers connected to a P2P network cannot be considered reliable or virus-free. In summary, P2P networks are reliable for a small number of peer nodes. They are only useful for applications that require a low level of security and have no concern for data sensitivity. We will discuss P2P networks in Chapter 8, and extending P2P technology to social networking in Chapter 9.
1.3 Cloud Computing over the Internet
Cloud computing has been defined differently by many users and designers. For example, IBM,
a major player in cloud computing, has defined it as follows: “A cloud is a pool of virtualized
computer resources. A cloud can host a variety of different workloads, including batch-style
backend jobs and interactive and user-facing applications.” Based on this definition, a cloud allows
workloads to be deployed and scaled out quickly through rapid provisioning of virtual or physical
machines. The cloud supports redundant, self-recovering, highly scalable programming models
that allow workloads to recover from many unavoidable hardware/software failures. Finally, the
cloud system should be able to monitor resource use in real time to enable rebalancing of
allocations when needed.
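The monitor-and-rebalance capability mentioned above can be sketched as a simple control loop. This is a minimal illustration only: the imbalance threshold, the halve-the-gap policy, and the load figures are all invented for the example.

```python
# Sketch of a monitor-and-rebalance loop: watch per-server utilization and
# shift work from the busiest to the idlest server while the imbalance
# exceeds a threshold.

def rebalance(load, threshold=0.25):
    """load: server name -> utilization in [0, 1]. Returns a migration plan."""
    plan = []
    while True:
        hot = max(load, key=load.get)
        cold = min(load, key=load.get)
        gap = load[hot] - load[cold]
        if gap <= threshold:
            break  # allocations are balanced enough; stop migrating
        shift = gap / 2  # move half the difference from hot to cold
        load[hot] -= shift
        load[cold] += shift
        plan.append((hot, cold, round(shift, 3)))
    return plan

load = {"s1": 0.90, "s2": 0.20, "s3": 0.55}
plan = rebalance(load)
print(plan, load)
```

A real cloud scheduler would migrate VMs or reroute requests rather than adjust numbers, but the loop structure (measure, compare against a policy, act) is the same.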
1.3.1 Internet Clouds
Cloud computing applies a virtualized platform with elastic resources on demand by provisioning
hardware, software, and data sets dynamically (see Figure 1.18). The idea is to move desktop
computing to a service-oriented platform using server clusters and huge databases at data centers.
Cloud computing leverages its low cost and simplicity to benefit both users and providers. Machine
virtualization has enabled such cost-effectiveness. Cloud computing intends to satisfy many user applications simultaneously. The cloud ecosystem must be designed to be secure, trustworthy, and
dependable. Some computer users think of the cloud as a centralized resource pool. Others
consider the cloud to be a server cluster which practices distributed computing over all the servers
used.
1.3.2 The Cloud Landscape
Traditionally, a distributed computing system tends to be owned and operated by an autonomous
administrative domain (e.g., a research laboratory or company) for on-premises computing needs.
However, these traditional systems have encountered several performance bottlenecks: constant
system maintenance, poor utilization, and increasing costs associated with hardware/software
upgrades. Cloud computing as an on-demand computing paradigm resolves or relieves us from
these problems.
• Infrastructure as a Service (IaaS) This model puts together infrastructures demanded by users—
namely servers, storage, networks, and the data center fabric. The user can deploy and run applications on multiple VMs running guest OSes. The user does not manage or control
the underlying cloud infrastructure, but can specify when to request and release the needed
resources.

• Platform as a Service (PaaS) This model enables the user to deploy user-built applications onto
a virtualized cloud platform. PaaS includes middleware, databases, development tools, and
some runtime support such as Web 2.0 and Java. The platform includes both hardware and
software integrated with specific programming interfaces. The provider supplies the API and
software tools (e.g., Java, Python, Web 2.0, .NET). The user is freed from managing the cloud
infrastructure.

• Software as a Service (SaaS) This refers to browser-initiated application software offered to thousands of paid cloud customers. The SaaS model applies to business processes, industry applications, customer relationship management (CRM), enterprise resource planning (ERP), human resources (HR), and collaborative applications. On the customer side, there is no upfront investment in servers or software licensing. On the provider side, costs are rather low, compared with conventional hosting of user applications.

Internet clouds offer four deployment modes: private, public, managed, and hybrid [11]. These
modes demand different levels of security implications. The different SLAs imply that the security
responsibility is shared among all the cloud providers, the cloud resource consumers, and the third-
party cloud-enabled software providers. Advantages of cloud computing have been advocated by
many IT experts, industry leaders, and computer science researchers.
The following list highlights eight reasons to adopt the cloud for upgraded Internet applications and web services:
1. Desired location in areas with protected space and higher energy efficiency
2. Sharing of peak-load capacity among a large pool of users, improving overall utilization
3. Separation of infrastructure maintenance duties from domain-specific application development
4. Significant reduction in cloud computing cost, compared with traditional computing paradigms
5. Cloud computing programming and application development
6. Service and data discovery and content/service distribution
7. Privacy, security, copyright, and reliability issues
8. Service agreements, business models, and pricing policies

1.4 NIST CLOUD COMPUTING REFERENCE ARCHITECTURE

● NIST stands for National Institute of Standards and Technology

● The goal is to achieve effective and secure cloud computing to reduce cost and improve
services
● NIST formed several major working groups specific to cloud computing:

○ Cloud Computing Target Business Use Cases working group
○ Cloud Computing Reference Architecture and Taxonomy working group
○ Cloud Computing Standards Roadmap working group
○ Cloud Computing SAJACC (Standards Acceleration to Jumpstart Adoption of Cloud Computing) working group
○ Cloud Computing Security working group

● Objectives of the NIST Cloud Computing reference architecture:

○ Illustrate and understand the various levels of services
○ Provide a technical reference
○ Categorize and compare cloud computing services
○ Analyze security, interoperability, and portability

● In general, NIST generates reports for future reference, which include surveys and analyses of existing cloud computing reference models, vendors, and federal agencies.
● The conceptual reference architecture shown in figure 3.2 involves five actors. Each actor is an entity that participates in cloud computing.

Actors in Cloud Computing

Cloud Consumer: A person or organization that maintains a business relationship with, and uses services from, Cloud Providers.
Cloud Provider: A person, organization, or entity responsible for making a service available to interested parties.
Cloud Auditor: A party that can conduct independent assessments of cloud services, information system operations, performance, and security of the cloud implementation.
Cloud Broker: An entity that manages the use, performance, and delivery of cloud services, and negotiates relationships between Cloud Providers and Cloud Consumers.
Cloud Carrier: An intermediary that provides connectivity and transport of cloud services from Cloud Providers to Cloud Consumers.

Figure 3.3: Interaction between the actors

Figure illustrates the interactions among the actors. A cloud consumer may request cloud
services from a cloud provider directly or via a cloud broker.
A cloud auditor conducts independent audits and may contact the others to collect necessary
information.

Figure: Interactions between the Actors in Cloud Computing. The figure shows three kinds of communication paths: between a cloud provider and a cloud consumer; those a cloud auditor uses to collect auditing information; and those a cloud broker uses to provide service to a cloud consumer.

Example Usage Scenario 1:

A cloud consumer may request service from a cloud broker instead of contacting a cloud
provider directly. The cloud broker may create a new service by combining multiple services or
by enhancing an existing service. In this example, the actual cloud providers are invisible to the
cloud consumer and the cloud consumer interacts directly with the cloud broker.

Figure: Usage Scenario for Cloud Brokers

Example Usage Scenario 2:

Cloud carriers provide the connectivity and transport of cloud services from cloud
providers to cloud consumers. As illustrated in Figure 3.6, a cloud provider participates in and
arranges for two unique service level agreements (SLAs), one with a cloud carrier (e.g. SLA2)
and one with a cloud consumer (e.g. SLA1). A cloud provider arranges service level agreements
(SLAs) with a cloud carrier and may request dedicated and encrypted connections to ensure the
cloud services are consumed at a consistent level according to the contractual obligations with
the cloud consumers. In this case, the provider may specify its requirements on capability,
flexibility and functionality in SLA2 in order to provide essential requirements in SLA1.

Figure : Usage Scenario for Cloud Carriers

Example Usage Scenario 3:

For a cloud service, a cloud auditor conducts independent assessments of the operation and
security of the cloud service implementation. The audit may involve interactions with both the
Cloud Consumer and the Cloud Provider.

Figure: Usage Scenario for Cloud Auditors


Cloud Consumer

The cloud consumer is the principal stakeholder for the cloud computing service. A cloud
consumer represents a person or organization that maintains a business relationship with, and uses
the service from a cloud provider.
A cloud consumer browses the service catalog from a cloud provider, requests the appropriate
service, sets up service contracts with the cloud provider, and uses the service. The cloud
consumer may be billed for the service provisioned, and needs to arrange payments accordingly.
Cloud consumers need SLAs to specify the technical performance requirements to be fulfilled by a cloud provider. SLAs can cover terms regarding the quality of service, security, and remedies for performance failures. A cloud provider may also list in the SLAs a set of promises explicitly not made to consumers, i.e., limitations, and obligations that cloud consumers must accept.
A cloud consumer can freely choose a cloud provider with better pricing and more favorable terms. Typically, a cloud provider's pricing policy and SLAs are non-negotiable, unless the customer expects heavy usage and might be able to negotiate for better contracts.
Depending on the services requested, the activities and usage scenarios can be different among
cloud consumers.
The figure presents some example cloud services available to a cloud consumer: SaaS applications are installed in the cloud and made accessible via a network to the SaaS consumers.

The consumers of SaaS can be organizations that provide their members with access to software
applications, end users who directly use software applications, or software application
administrators who configure applications for end users. SaaS consumers can be billed based on
the number of end users, the time of use, the network bandwidth consumed, the amount of data
stored or duration of stored data.
Figure: Example Services Available to a Cloud Consumer

Cloud consumers of PaaS can employ the tools and execution resources provided by cloud
providers to develop, test, deploy and manage the applications hosted in a cloud environment.
PaaS consumers can be application developers who design and implement application software,
application testers who run and test applications in cloud-based environments, application
deployers who publish applications into the cloud, and application administrators who configure
and monitor application performance on a platform.
PaaS consumers can be billed according to the processing, database storage, and network resources consumed by the PaaS application, and the duration of the platform usage.
Consumers of IaaS have access to virtual computers, network-accessible storage, network
infrastructure components, and other fundamental computing resources on which they can
deploy and run arbitrary software.
The consumers of IaaS can be system developers, system administrators and IT managers who
are interested in creating, installing, managing and monitoring services for IT infrastructure
operations. IaaS consumers are provisioned with the capabilities to access these computing
resources, and are billed according to the amount or duration of the resources consumed, such
as CPU hours used by virtual computers, volume and duration of data stored, network
bandwidth consumed, number of IP addresses used for certain intervals.
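The metered IaaS billing just described can be sketched in a few lines. The rate card below is entirely made up for illustration; real providers publish their own price sheets and metering granularity.

```python
# Sketch of usage-based IaaS billing along the dimensions listed above
# (CPU hours, data stored, bandwidth, reserved IPs). Rates are invented.

RATES = {
    "cpu_hours": 0.05,        # $ per VM CPU-hour
    "storage_gb_month": 0.02, # $ per GB stored per month
    "bandwidth_gb": 0.09,     # $ per GB transferred
    "ip_hours": 0.005,        # $ per reserved IP per hour
}

def monthly_bill(usage):
    """usage: metric name -> consumed amount; returns (total, line items)."""
    items = {k: round(usage.get(k, 0) * rate, 2) for k, rate in RATES.items()}
    return round(sum(items.values()), 2), items

total, items = monthly_bill({
    "cpu_hours": 720,        # one single-CPU VM running all month
    "storage_gb_month": 100,
    "bandwidth_gb": 50,
    "ip_hours": 720,         # one public IP held all month
})
print(total, items)  # total comes to 46.1 for this usage
```

The key property, as the text notes, is that the consumer pays only for the amount or duration of resources actually consumed, with no upfront hardware investment.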

Cloud Provider
A cloud provider is a person, organization, or entity responsible for making a service available to interested parties.
A Cloud Provider acquires and manages the computing infrastructure required for providing
the services, runs the cloud software that provides the services, and makes arrangement to deliver
the cloud services to the Cloud Consumers through network access.
For Software as a Service, the cloud provider deploys, configures, maintains and updates the
operation of the software applications on a cloud infrastructure so that the services are
provisioned at the expected service levels to cloud consumers.
The provider of SaaS assumes most of the responsibilities in managing and controlling the
applications and the infrastructure, while the cloud consumers have limited administrative control
of the applications.
For PaaS, the Cloud Provider manages the computing infrastructure for the platform and runs
the cloud software that provides the components of the platform, such as runtime software
execution stack, databases, and other middleware components.
The PaaS Cloud Provider typically also supports the development, deployment and
management process of the PaaS Cloud Consumer by providing tools such as integrated
development environments (IDEs), development version of cloud software, software
development kits (SDKs), deployment and management tools.
The PaaS Cloud Consumer has control over the applications and possibly some of the hosting-environment settings, but has no or limited access to the infrastructure underlying the platform, such as the network, servers, operating systems (OS), or storage.
For IaaS, the Cloud Provider acquires the physical computing resources underlying the service,
including the servers, networks, storage and hosting infrastructure.
The Cloud Provider runs the cloud software necessary to make computing resources available
to the IaaS Cloud Consumer through a set of service interfaces and computing resource
abstractions, such as virtual machines and virtual network interfaces.
The IaaS Cloud Consumer in turn uses these computing resources, such as a virtual computer,
for their fundamental computing needs. Compared to SaaS and PaaS Cloud Consumers, an IaaS
Cloud Consumer has access to more fundamental forms of computing resources and thus has
more control over more of the software components in an application stack, including the OS and
network.
The IaaS Cloud Provider, on the other hand, has control over the physical hardware and cloud
software that makes the provisioning of these infrastructure services possible, for example,
the physical servers, network equipment, storage devices, host OS and hypervisors for
virtualization.

A Cloud Provider's activities can be described in five major areas. As shown in Figure 3.9, a cloud
provider conducts its activities in the areas of service deployment, service orchestration, cloud
service management, security, and privacy.


Figure: Cloud Provider - Major Activities


Cloud Auditor

A cloud auditor is a party that can perform an independent examination of cloud service controls
with the intent to express an opinion thereon.
Audits are performed to verify conformance to standards through review of objective evidence.
A cloud auditor can evaluate the services provided by a cloud provider in terms of security
controls, privacy impact, performance, etc.
Auditing is especially important for federal agencies as “agencies should include a contractual
clause enabling third parties to assess security controls of cloud providers” (by Vivek Kundra,
Federal Cloud Computing Strategy, Feb. 2011.).
Security controls are the management, operational, and technical safeguards or countermeasures
employed within an organizational information system to protect the confidentiality, integrity,
and availability of the system and its information.
For security auditing, a cloud auditor can make an assessment of the security controls in the
information system to determine the extent to which the controls are implemented correctly,
operating as intended, and producing the desired outcome with respect to the security
requirements for the system.
The security auditing should also include the verification of the compliance with regulation and
security policy. For example, an auditor can be tasked with ensuring that the correct policies are
applied to data retention according to relevant rules for the jurisdiction. The auditor may ensure
that fixed content has not been modified and that the legal and business data archival
requirements have been satisfied.
A privacy impact audit can help Federal agencies comply with applicable privacy laws and
regulations governing an individual's privacy, and ensure the confidentiality, integrity, and
availability of an individual's personal information at every stage of development and
operation.
Cloud Broker

A cloud consumer may request cloud services from a cloud broker, instead of contacting a cloud
provider directly. A cloud broker is an entity that manages the use, performance and delivery of
cloud services and negotiates relationships between cloud providers and cloud consumers.
In general, a cloud broker can provide services in three categories:

Service Intermediation: A cloud broker enhances a given service by improving some specific
capability and providing value-added services to cloud consumers. The improvement can be
managing access to cloud services, identity management, performance reporting, enhanced
security, etc.
Service Aggregation: A cloud broker combines and integrates multiple services into one or more
new services. The broker provides data integration and ensures the secure data movement
between the cloud consumer and multiple cloud providers.
Service Arbitrage: Service arbitrage is similar to service aggregation except that the services
being aggregated are not fixed. Service arbitrage means a broker has the flexibility to choose
services from multiple agencies. The cloud broker, for example, can use a credit-scoring service
to measure and select an agency with the best score.
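The arbitrage behavior described above can be sketched in a few lines. The provider names, prices, and scoring formula here are all hypothetical:

```python
# Sketch of a cloud broker's service arbitrage: score each candidate
# provider and forward the request to whichever currently scores best.
# The choice is not fixed and may change as prices or availability change.

providers = [
    {"name": "ProviderA", "price_per_hour": 0.12, "availability": 0.999},
    {"name": "ProviderB", "price_per_hour": 0.10, "availability": 0.995},
    {"name": "ProviderC", "price_per_hour": 0.15, "availability": 0.9999},
]

def score(p: dict) -> float:
    """Toy credit-scoring function: reward availability, penalize price."""
    return p["availability"] * 100 - p["price_per_hour"] * 50

def arbitrate(candidates: list) -> str:
    """Pick the candidate provider with the best score."""
    return max(candidates, key=score)["name"]

print(arbitrate(providers))
```

With these numbers the cheaper ProviderB wins despite slightly lower availability; a different weighting in `score` would change the outcome, which is exactly the flexibility arbitrage provides.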

Cloud Carrier

A cloud carrier acts as an intermediary that provides connectivity and transport of cloud
services between cloud consumers and cloud providers. Cloud carriers provide access to
consumers through network, telecommunication and other access devices.
For example, cloud consumers can obtain cloud services through network access devices, such
as computers, laptops, mobile phones, mobile Internet devices (MIDs), etc.
The distribution of cloud services is normally provided by network and telecommunication
carriers or a transport agent, where a transport agent refers to a business organization that
provides physical transport of storage media such as high-capacity hard drives.
Note that a cloud provider will set up SLAs with a cloud carrier to provide services consistent
with the level of SLAs offered to cloud consumers, and may require the cloud carrier to provide
dedicated and secure connections between cloud consumers and cloud providers.

Scope of Control between Provider and Consumer

The Cloud Provider and Cloud Consumer share the control of resources in a cloud system. As
illustrated in the figure, different service models affect an organization's control over the
computational resources and thus what can be done in a cloud system.
The figure shows these differences using a classic software stack notation comprised of the
application, middleware, and OS layers.
This analysis of delineation of controls over the application stack helps understand the
responsibilities of parties involved in managing the cloud application.

Figure: Scope of Controls between Provider and Consumer


The application layer includes software applications targeted at end users or programs. The
applications are used by SaaS consumers, or installed/ managed/ maintained by PaaS
consumers, IaaS consumers, and SaaS providers.
The middleware layer provides software building blocks (e.g., libraries, database, and Java
virtual machine) for developing application software in the cloud. The middleware is used by
PaaS consumers, installed/managed/ maintained by IaaS consumers or PaaS providers, and
hidden from SaaS consumers.
The OS layer includes operating system and drivers, and is hidden from SaaS consumers and
PaaS consumers.
An IaaS cloud allows one or multiple guest OSs to run virtualized on a single physical host.
Generally, consumers have broad freedom to choose which OS to host among all the OSs
that the cloud provider supports.
The IaaS consumers should assume full responsibility for the guest OSs, while the IaaS provider
controls the host OS.
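This delineation of controls can be tabulated in code. The mapping below is a simplified sketch of the NIST scope-of-control figure, with the stack reduced to five layers:

```python
# For each layer of the application stack, who manages it under each
# service model (simplified sketch of the NIST scope-of-control figure).

CONTROL = {
    # layer:       (SaaS,       PaaS,       IaaS)
    "application": ("provider", "consumer", "consumer"),
    "middleware":  ("provider", "provider", "consumer"),
    "guest OS":    ("provider", "provider", "consumer"),
    "hypervisor":  ("provider", "provider", "provider"),
    "hardware":    ("provider", "provider", "provider"),
}

def consumer_controls(model: str) -> list:
    """Return the layers the cloud consumer manages under a service model."""
    idx = {"SaaS": 0, "PaaS": 1, "IaaS": 2}[model]
    return [layer for layer, who in CONTROL.items() if who[idx] == "consumer"]

print(consumer_controls("IaaS"))  # ['application', 'middleware', 'guest OS']
print(consumer_controls("SaaS"))  # []
```

Reading down the columns recovers the text above: SaaS consumers control nothing below the application's configuration, PaaS consumers control only their applications, and IaaS consumers control everything above the hypervisor.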

1.4 Cloud Deployment Model

• A cloud infrastructure may be operated in one of the following deployment models: public cloud,
private cloud, community cloud, or hybrid cloud. The differences are based on how exclusive the
computing resources are made to a Cloud Consumer.
Public Cloud

• A public cloud is built over the Internet and can be accessed by any user who has paid for the
service. Public clouds are owned by service providers and are accessible through a subscription.
• The callout box at the top of the figure shows the architecture of a typical public cloud.

FIGURE Public, private, and hybrid clouds illustrated by functional architecture and
connectivity of representative clouds available by 2011.

• Many public clouds are available, including Google App Engine (GAE), Amazon Web Services
(AWS), Microsoft Azure, IBM Blue Cloud, and Salesforce.com’s Force.com. The providers of
the aforementioned clouds are commercial providers that offer a publicly accessible remote
interface for creating and managing VM instances within their proprietary infrastructure.
• A public cloud delivers a selected set of business processes.

• The application and infrastructure services are offered on a flexible price-per-use basis.
• A public cloud is one in which the cloud infrastructure and computing resources are made
available to the general public over a public network.

The NIST definition of "Cloud computing"

"It is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of
configurable computing resources (e.g., networks, servers, storage, applications, and services) that
can be rapidly provisioned and released with minimal management effort or service provider
interaction."

• The NIST definition also identifies


o 5 essential characteristics
o 3 service models
o 4 deployment models
4 deployment models / Types of cloud
• Public: Accessible, via the Internet, to anyone who pays
Owned by service providers; e.g., Google App Engine, Amazon Web Services, Force.com.
A public cloud is a publicly accessible cloud environment owned by a third-party cloud provider.
The IT resources on public clouds are usually provisioned via the previously described cloud
delivery models and are generally offered to cloud consumers at a cost or are commercialized via
other avenues (such as advertisement).
The cloud provider is responsible for the creation and on-going maintenance of the public cloud
and its IT resources. Many of the scenarios and architectures explored in upcoming chapters
involve public clouds and the relationship between the providers and consumers of IT resources
via public clouds.

Figure 1 shows a partial view of the public cloud landscape, highlighting some of the primary
vendors in the marketplace.

Benefits of choosing a Public Cloud

● One of the main benefits that come with using public cloud services is near unlimited
scalability.
● The resources are pretty much offered based on demand. So any changes in activity level
can be handled very easily.
● This in turn brings with it cost effectiveness.

● Because the public cloud pools a large number of resources, users benefit from the
savings of large-scale operations.
● There are many services like Google Drive which are offered for free.

● Finally, the vast network of servers involved in public cloud services means that it can benefit
from greater reliability.

Even if one data center were to fail entirely, the network simply redistributes the load among the
remaining centers, making it highly unlikely that the public cloud would ever fail.

● In summary, the benefits of the public cloud are:

○ Easy scalability
○ Cost effectiveness
○ Increased reliability
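The reliability claim above can be illustrated with a toy load-redistribution sketch (the data-center names and load figures are hypothetical):

```python
# Sketch of failover in a multi-data-center public cloud: when one center
# fails, its load is spread evenly over the surviving centers.

def redistribute(loads: dict, failed: str) -> dict:
    """Return new per-center loads after `failed` goes offline."""
    survivors = {dc: load for dc, load in loads.items() if dc != failed}
    share = loads[failed] / len(survivors)  # failed center's load, split evenly
    return {dc: load + share for dc, load in survivors.items()}

loads = {"dc-east": 300, "dc-west": 300, "dc-central": 300}
print(redistribute(loads, "dc-central"))  # {'dc-east': 450.0, 'dc-west': 450.0}
```

Real providers use weighted, health-checked load balancing rather than an even split, but the principle, that no single data center is a single point of failure, is the same.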

Disadvantages of choosing a Public Cloud

● There are of course downsides to using public cloud services.

● At the top of the list is the fact that the security of data held within a public cloud is a cause
for concern.
● It is often seen as an advantage that the public cloud has no geographical restrictions making
access easy from everywhere, but on the flip side this could mean that the server is in a
different country which is governed by an entirely different set of security and/or privacy
regulations.
● This could mean that your data is not all that secure making it unwise to use public cloud
services for sensitive data.

Community: Shared by two or more organizations with joint interests, such as colleges within a
university

A community cloud is similar to a public cloud except that its access is limited to a specific
community of cloud consumers. The community cloud may be jointly owned by the community
members or by a third-party cloud provider that provisions a public cloud with limited access. The
member cloud consumers of the community typically share the responsibility for defining and
evolving the community cloud (Figure 1).
Membership in the community does not necessarily guarantee access to or control of all the cloud's
IT resources. Parties outside the community are generally not granted access unless allowed by the
community.

An example of a "community" of organizations accessing IT resources from a community cloud.


Private: Accessible via an intranet to the members of the owning organization
– Can be built using open source software such as CloudStack or OpenStack
– Example of private cloud: NASA’s cloud for climate modeling
A private cloud is owned by a single organization. Private clouds enable an organization to use
cloud computing technology as a means of centralizing access to IT resources by different parts,
locations, or departments of the organization. When a private cloud exists as a controlled
environment, the problems described in the Risks and Challenges section do not tend to apply.

The use of a private cloud can change how organizational and trust boundaries are defined and
applied. The actual administration of a private cloud environment may be carried out by internal
or outsourced staff.

A cloud service consumer in the organization's on-premise environment accesses a cloud service
hosted on the same organization's private cloud via a virtual private network.
With a private cloud, the same organization is technically both the cloud consumer and the cloud
provider. In order to differentiate these roles:
▪ a separate organizational department typically assumes the responsibility for provisioning the
cloud (and therefore assumes the cloud provider role)
▪ departments requiring access to the private cloud assume the cloud consumer role

Benefits of choosing a Private Cloud


● The main benefit of choosing a private cloud is the greater level of security offered making it
ideal for business users who need to store and/or process sensitive data.
● A good example is a company dealing with financial information such as bank or lender who is
required by law to use secure internal storage to store consumer information.
● With a private cloud this can be achieved while still allowing the organization to benefit from
cloud computing.
● Private cloud services also offer some other benefits for business users including more control
over the server allowing it to be tailored to your own preferences and in house styles.

● While this can remove some of the scalability options, private cloud providers often offer what is
known as cloud bursting, in which non-sensitive data is switched to a public cloud to free up
private-cloud space during a significant spike in demand, until such time as the private
cloud can be expanded.

● In summary, the main benefits of the private cloud are:

○ Improved security
○ Greater control over the server
○ Flexibility in the form of Cloud Bursting
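Cloud bursting, as described above, amounts to a placement decision. The following is a minimal sketch, assuming a fixed private-cloud capacity; the workload names, loads, and threshold are hypothetical:

```python
# Sketch of cloud bursting: sensitive workloads always stay private;
# non-sensitive workloads burst to the public cloud once private capacity
# is exhausted.

PRIVATE_CAPACITY = 100  # units of load the private cloud can absorb

def place_workloads(workloads: list) -> dict:
    """Assign each workload to the private or public cloud."""
    placement, used = {"private": [], "public": []}, 0
    # Place sensitive workloads first; they never leave the private cloud.
    for w in sorted(workloads, key=lambda w: not w["sensitive"]):
        if w["sensitive"] or used + w["load"] <= PRIVATE_CAPACITY:
            placement["private"].append(w["name"])
            used += w["load"]
        else:
            placement["public"].append(w["name"])  # burst to public cloud
    return placement

jobs = [{"name": "payroll", "sensitive": True, "load": 60},
        {"name": "web", "sensitive": False, "load": 30},
        {"name": "analytics", "sensitive": False, "load": 40}]
print(place_workloads(jobs))  # {'private': ['payroll', 'web'], 'public': ['analytics']}
```

Here the analytics job bursts to the public cloud because the private cloud's remaining capacity cannot hold it, while the sensitive payroll job is pinned to the private side regardless of load.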

Disadvantages of choosing a Private Cloud

● The downsides of private cloud services include a higher initial outlay, although in the long term
many business owners find that this balances out and actually becomes more cost effective than
public cloud use.
● It is also more difficult to access the data held in a private cloud from remote locations due to the
increased security measures.
Hybrid
A hybrid cloud is a cloud environment comprised of two or more different cloud deployment
models. For example, a cloud consumer may choose to deploy cloud services processing sensitive
data to a private cloud and other, less sensitive cloud services to a public cloud. The result of this
combination is a hybrid deployment model.

An organization using a hybrid cloud architecture that utilizes both a private and public cloud.
Hybrid deployment architectures can be complex and challenging to create and maintain due to
the potential disparity in cloud environments and the fact that management responsibilities are
typically split between the private cloud provider organization and the public cloud provider.

1.5 Cloud service models

Three service models / Categories of cloud computing.


• Software as a Service (SaaS)
– Use provider’s applications over a network
• Platform as a Service (PaaS)
– Deploy customer-created applications to a cloud
• Infrastructure as a Service (IaaS)
• Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s
applications running on a cloud infrastructure. The applications are accessible from various
client devices through a thin client interface such as a web browser (e.g., web-based email).
The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, storage, or even individual application capabilities, with
the possible exception of provider-defined user-specific application configuration settings.
• Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the
cloud infrastructure consumer-created or acquired applications created using programming
languages and tools supported by the provider. The consumer does not manage or control the
underlying cloud infrastructure including network, servers, operating systems, or storage, but
has control over the deployed applications and possibly application hosting environment
configurations.
• Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision
processing, storage, networks, and other fundamental computing resources where the consumer
is able to deploy and run arbitrary software, which can include operating systems and
applications. The consumer does not manage or control the underlying cloud physical
infrastructure but has control over operating systems, storage, deployed applications, and
possibly limited control of select networking components.
Comparison of cloud service models

Service Orchestration
• Service Orchestration refers to the composition of system components to support the Cloud
Provider's activities in the arrangement, coordination, and management of computing resources in
order to provide cloud services to Cloud Consumers.
1.5.1 INFRASTRUCTURE-AS-A-SERVICE (IAAS)

• Cloud computing delivers infrastructure, platform, and software (application) as services, which
are made available as subscription-based services in a pay-as-you-go model to consumers.
• The services provided over the cloud can be generally categorized into three different models:
namely IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS). All three models
allow users to access services over the Internet, relying entirely on the infrastructures of cloud
service providers.
• These models are offered based on various SLAs between providers and users. In a broad sense,
the SLA for cloud computing is addressed in terms of service availability, performance, and data
protection and security.
• Figure illustrates three cloud models at different service levels of the cloud. SaaS is applied at
the application end using special interfaces by users or clients.

FIGURE: The IaaS, PaaS, and SaaS cloud service models at different service levels.

• At the PaaS layer, the cloud platform must perform billing services and handle job queuing,
launching, and monitoring services. At the bottom layer of the IaaS services, databases, compute
instances, the file system, and storage must be provisioned to satisfy user demands.
Infrastructure as a Service
• This model allows users to use virtualized IT resources for computing, storage, and networking. In
short, the service is performed by rented cloud infrastructure. The user can deploy and run his
applications over his chosen OS environment.
• The user does not manage or control the underlying cloud infrastructure, but has control over
the OS, storage, deployed applications, and possibly select networking components. This IaaS
model encompasses storage as a service, compute instances as a service, and communication
as a service.
• The Virtual Private Cloud (VPC) in Example shows how to provide Amazon EC2 clusters and S3
storage to multiple users. Many startup cloud providers have appeared in recent years. GoGrid,
FlexiScale, and Aneka are good examples. Table summarizes the IaaS offerings by five public cloud
providers. Interested readers can visit the companies’ web sites for updated information.
Public Cloud Offerings of IaaS

Amazon EC2 – Capacity: each instance has 1–20 processors, 1.7–15 GB of memory, and
160 GB–1.69 TB of storage. API and access tools: CLI or Web Service (WS) portal.
Hypervisor and guest OS: Xen; Linux, Windows.

GoGrid – Capacity: each instance has 1–6 CPUs, 0.5–8 GB of memory, and 30–480 GB of
storage. API and access tools: REST, Java, PHP, Python, Ruby. Hypervisor and guest OS:
Xen; Linux, Windows.

Rackspace Cloud – Capacity: each instance has a four-core CPU, 0.25–16 GB of memory,
and 10–620 GB of storage. API and access tools: REST, Python, PHP, Java, C#, .NET.
Hypervisor and guest OS: Xen; Linux.

FlexiScale (in the UK) – Capacity: each instance has 1–4 CPUs, 0.5–16 GB of memory, and
20–270 GB of storage. API and access tools: web console. Hypervisor and guest OS: Xen;
Linux, Windows.

Joyent Cloud – Capacity: each instance has up to eight CPUs, 0.25–32 GB of memory, and
30–480 GB of storage. API and access tools: no specific API; SSH, Virtual/Min. Hypervisor
and guest OS: OS-level virtualization; OpenSolaris.

Example

Amazon VPC for Multiple Tenants

• A user can use a private facility for basic computations. When he must meet a specific workload
requirement, he can use the Amazon VPC to provide additional EC2 instances or more storage
(S3) to handle urgent applications.
• Figure shows VPC which is essentially a private cloud designed to address the privacy concerns
of public clouds that hamper their application when sensitive data and software are involved.

FIGURE Amazon VPC (virtual private cloud) Courtesy of VMWare,


http://aws.amazon.com/vpc/

• Amazon EC2 provides the following services: resources from multiple globally distributed
data centers; a CLI; web services (SOAP and Query); web-based console user interfaces; access to
VM instances via SSH and Windows; 99.5 percent availability agreements; per-hour pricing;
Linux and Windows OSes; and automatic scaling and load balancing.
• VPC allows the user to isolate provisioned AWS processors, memory, and storage from
interference by other users. Both autoscaling and elastic load balancing services can support
related demands. Autoscaling enables users to automatically scale their VM instance capacity up
or down. With autoscaling, one can ensure that a sufficient number of Amazon EC2 instances
are provisioned to meet desired performance, or scale down the VM instance capacity to
reduce costs when the workload is reduced.
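The scale-up/scale-down decision just described can be sketched as a simple threshold policy. The thresholds and instance bounds below are hypothetical, not Amazon's actual defaults:

```python
# Sketch of an autoscaling policy: add an instance when average utilization
# is high, remove one when it is low, and stay within fixed bounds.

MIN_INSTANCES, MAX_INSTANCES = 2, 20
SCALE_UP_AT, SCALE_DOWN_AT = 0.80, 0.30  # average CPU utilization thresholds

def autoscale(current: int, avg_utilization: float) -> int:
    """Return the new instance count after one scaling decision."""
    if avg_utilization > SCALE_UP_AT:
        return min(current + 1, MAX_INSTANCES)   # add capacity for performance
    if avg_utilization < SCALE_DOWN_AT:
        return max(current - 1, MIN_INSTANCES)   # shed capacity to cut cost
    return current                               # within band: hold steady

print(autoscale(4, 0.90))  # 5
print(autoscale(4, 0.10))  # 3
print(autoscale(2, 0.10))  # 2  (never below the floor)
```

The floor guarantees a minimum service level and the ceiling caps spending, which is how autoscaling balances the performance and cost goals mentioned above.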
1.5.2 PLATFORM-AS-A-SERVICE (PAAS) AND SOFTWARE-AS- A-SERVICE (SAAS)

SaaS is often built on top of the PaaS, which is in turn built on top of the IaaS.

1.5.2.1 Platform as a Service (PaaS)

• To be able to develop, deploy, and manage the execution of applications using provisioned
resources demands a cloud platform with the proper software environment. Such a platform
includes operating system and runtime library support.
• This has triggered the creation of the PaaS model to enable users to develop and deploy their
user applications. Table highlights cloud platform services offered by five PaaS services.

Five Public Cloud Offerings of PaaS

Google App Engine – Languages and developer tools: Python, Java, and an Eclipse-based
IDE. Programming models supported by provider: MapReduce, web programming on
demand. Target applications and storage option: web applications and BigTable storage.

Salesforce.com's Force.com – Languages and developer tools: Apex, Eclipse-based IDE,
web-based Wizard. Programming models: Workflow, Excel-like formula, web programming
on demand. Target applications: business applications such as CRM.

Microsoft Azure – Languages and developer tools: .NET, Azure tools for MS Visual Studio.
Programming models: unrestricted model. Target applications: enterprise and web
applications.

Amazon Elastic MapReduce – Languages and developer tools: Hive, Pig, Cascading, Java,
Ruby, Perl, Python, PHP, R, C++. Programming models: MapReduce. Target applications:
data processing and e-commerce.

Aneka – Languages and developer tools: .NET, stand-alone SDK. Programming models:
threads, task, MapReduce. Target applications: .NET enterprise applications, HPC.

• The platform cloud is an integrated computer system consisting of both hardware and software
infrastructure. The user application can be developed on this virtualized cloud platform using
programming languages and software tools supported by the provider (e.g., Java, Python,
.NET).
• The user does not manage the underlying cloud infrastructure. The cloud provider supports user
application development and testing on a well-defined service platform. This PaaS model
enables a collaborated software development platform for users from different parts of the
world. This model also encourages third parties to provide software management, integration,
and service monitoring solutions.
Example

Google App Engine for PaaS Applications

As web applications are running on Google’s server clusters, they share the same capability with
many other users. The applications have features such as automatic scaling and load balancing
which are very convenient while building web applications. The distributed scheduler mechanism
can also schedule tasks for triggering events at specified times and regular intervals.
Figure shows the operational model for GAE. To develop applications using GAE, a development
environment must be provided.
Google provides a fully featured local development environment that simulates GAE on the developer's
computer. All the functions and application logic can be implemented locally, which is quite similar to
traditional software development. The coding and debugging stages can be performed locally as well.
After these steps are finished, the provided SDK includes a tool for uploading the user's application to
Google's infrastructure, where the applications are actually deployed. Many additional third-party
capabilities, including software management, integration, and service monitoring solutions, are also
provided.
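The interval-based task triggering mentioned above can be sketched as a tiny cron-like loop. This is illustrative only: GAE's actual scheduler is a managed service that developers configure declaratively rather than code by hand.

```python
# Minimal sketch of interval-based task scheduling: invoke a task at a
# fixed interval a given number of times, collecting its results.

import time

def run_scheduled(task, interval_seconds: float, repetitions: int) -> list:
    """Call `task` every `interval_seconds`, `repetitions` times."""
    results = []
    for _ in range(repetitions):
        results.append(task())          # trigger the event
        time.sleep(interval_seconds)    # wait until the next trigger time
    return results

if __name__ == "__main__":
    ticks = run_scheduled(lambda: "tick", interval_seconds=0.01, repetitions=3)
    print(ticks)  # ['tick', 'tick', 'tick']
```

A production scheduler would additionally persist the schedule, distribute triggers across machines, and tolerate failures, which is exactly what a platform service hides from the developer.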

(In the GAE operational model, users send HTTP requests through a user interface; a Google
load balancer distributes the requests across Google's data servers, and HTTP responses are
returned to the users.)
FIGURE Google App Engine platform for PaaS operations


1.5.2.2 Software as a Service (SaaS)

• This refers to browser-initiated application software delivered to thousands of cloud customers.
Services and tools offered by PaaS are utilized in the construction of applications and the
management of their deployment on resources offered by IaaS providers. The SaaS model
provides software applications as a service.
• As a result, on the customer side, there is no upfront investment in servers or software licensing.
On the provider side, costs are kept rather low, compared with conventional hosting of user
applications.
• Customer data is stored in the cloud, which is either vendor proprietary or publicly hosted to support
PaaS and IaaS. The best examples of SaaS services include Google Gmail and Docs, Microsoft
SharePoint, and the CRM software from Salesforce.com. They are all very successful in
promoting their own business or are used by thousands of small businesses in their day-to-day
operations.
• Providers such as Google and Microsoft offer integrated IaaS and PaaS services, whereas others
such as Amazon and GoGrid offer pure IaaS services and expect third-party PaaS providers such
as Manjrasoft to offer application development and deployment services on top of their
infrastructure services. To identify important cloud applications in enterprises, the success
stories of three real-life cloud applications are presented in Example 3.6 for HTC, news media,
and business transactions. The benefits of using cloud services are evident in these SaaS
applications.

Example

Three Success Stories on SaaS Applications

1. To discover new drugs through DNA sequence analysis, Eli Lilly and Company has used Amazon's
AWS platform with provisioned server and storage clusters to conduct high-performance
biological sequence analysis without using an expensive supercomputer. The benefit of this
IaaS application is reduced drug deployment time at much lower cost.
2. The New York Times has applied Amazon’s EC2 and S3 services to retrieve useful pictorial
information quickly from millions of archival articles and newspapers. The New York Times has
significantly reduced the time and cost in getting the job done.
3. Pitney Bowes, an e-commerce company, offers clients the opportunity to perform B2B
transactions using the Microsoft Azure platform, along with .NET and SQL services. These
offerings have significantly increased the company’s client base.
1.5.3 Mashup of Cloud Services

• At the time of this writing, public clouds are in use by a growing number of users. Due to the
lack of trust in leaking sensitive data in the business world, more and more enterprises,
organizations, and communities are developing private clouds that demand deep customization.
• An enterprise cloud is used by multiple users within an organization. Each user may build some
strategic applications on the cloud, and demands customized partitioning of the data, logic, and
database in the metadata representation. More private clouds may appear in the future.
• Based on a 2010 Google search survey, interest in grid computing is declining rapidly. Cloud
mashups have resulted from the need to use multiple clouds simultaneously or in sequence.
• For example, an industrial supply chain may involve the use of different cloud resources or
services at different stages of the chain. Some public repository provides thousands of service
APIs and mashups for web commerce services. Popular APIs are provided by Google Maps,
Twitter, YouTube, Amazon eCommerce, Salesforce.com, etc.

1.6 ARCHITECTURAL DESIGN OF COMPUTE AND STORAGE CLOUDS

The cloud architecture is divided into two parts:


1. Frontend
2. Backend

1. Frontend:
The frontend of the cloud architecture refers to the client side of the cloud computing system; it
contains all the user interfaces and applications that the client uses to access the cloud
computing services/resources, for example a web browser used to access the cloud platform.
• Client Infrastructure – Client Infrastructure is part of the frontend component. It contains
the applications and user interfaces required to access the cloud platform.
• In other words, it provides a GUI (Graphical User Interface) to interact with the cloud.
2. Backend:
The backend refers to the cloud itself, which is used by the service provider. It contains and
manages the resources and provides security mechanisms. It also includes huge storage, virtual
applications, virtual machines, traffic control mechanisms, deployment models, etc.

1. Application –
The application in the backend refers to the software or platform that the client accesses; it
provides the service in the backend as per the client's requirement.
2. Service –
Service in the backend refers to the three major types of cloud-based services: SaaS, PaaS, and
IaaS. It also manages which type of service the user accesses.
3. Runtime Cloud –
The runtime cloud in the backend provides the execution and runtime platform/environment
to the virtual machines.
4. Storage –
Storage in backend provides flexible and scalable storage service and management of stored
data.
5. Infrastructure –
Cloud infrastructure in the backend refers to the hardware and software components of the
cloud, including servers, storage, network devices, virtualization software, etc.

6. Management –
Management in backend refers to management of backend components like application,
service, runtime cloud, storage, infrastructure, and other security mechanisms etc.
7. Security –
Security in the backend refers to the implementation of different security mechanisms to
secure cloud resources, systems, files, and infrastructure for end-users.
8. Internet –
Internet connection acts as the medium or a bridge between frontend and backend and
establishes the interaction and communication between frontend and backend.
9. Database – The backend provides database services for storing structured data, such as SQL and NoSQL databases. Examples of database services include Amazon RDS, Microsoft Azure SQL Database, and Google Cloud SQL.
10. Networking – Backend services that provide networking infrastructure for applications in the cloud, such as load balancing, DNS, and virtual private networks.
11. Analytics – Backend services that provide analytics capabilities for data in the cloud, such as data warehousing, business intelligence, and machine learning.

Benefits of Cloud Computing Architecture :


• Makes the overall cloud computing system simpler.
• Improves data processing.
• Helps in providing high security.
• Makes the system more modular.
• Results in better disaster recovery.
• Gives good user accessibility.
• Reduces IT operating costs.
• Provides high reliability.
• Scalability.

1.6.1 CLOUD PLATFORM DESIGN GOALS:

An Internet cloud is envisioned as a public cluster of servers provisioned on demand to perform collective web services or distributed applications using data-center resources.

Four major design goals of a cloud computing platform:


1. Scalability
2. Virtualization
3. Efficiency
4. Reliability
• System scalability can benefit from cluster architecture. If one service takes a lot of processing power, storage capacity, or network traffic, it is simple to add more servers and bandwidth.
• The scale of the cloud architecture can be easily expanded by adding more servers and enlarging the network connectivity accordingly.
• System reliability can benefit from this architecture. Data can be replicated in multiple locations.
• The goal of virtualization is to centralize administrative tasks while improving scalability and workloads.
• Clouds support Web 2.0 applications.
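The scalability goal can be made concrete with a small sketch. The following illustrative Python example (class and server names are hypothetical, not from any real cloud API) shows how a round-robin dispatcher lets a cluster scale horizontally: registering one more server immediately grows the pool that requests are spread across.

```python
# Illustrative sketch: horizontal scaling behind a round-robin
# dispatcher -- adding servers raises aggregate serving capacity.
from itertools import cycle

class Cluster:
    def __init__(self, servers):
        self.servers = list(servers)
        self._rr = cycle(self.servers)

    def add_server(self, name):
        # Scaling out: one more server joins the pool.
        self.servers.append(name)
        self._rr = cycle(self.servers)

    def route(self, request):
        # Each request is dispatched to the next server in turn.
        return (next(self._rr), request)

cluster = Cluster(["web-1", "web-2"])
cluster.add_server("web-3")
targets = [cluster.route(f"req-{i}")[0] for i in range(6)]
# With 3 servers, 6 requests spread evenly: each server sees 2.
```

A real balancer would also weigh server load and health, but the principle is the same: capacity grows linearly with the server pool.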

ENABLING TECHNOLOGIES OF CLOUD

TECHNOLOGY – REQUIREMENTS AND BENEFITS

Fast platform deployment – Fast, efficient, and flexible deployment of cloud resources to provide dynamic computing environments.

Virtual clusters on demand – Virtual clusters of VMs provisioned to satisfy user demand; virtual clusters reconfigured as workloads change.

Multitenant techniques – SaaS distributes software to a large number of users for simultaneous use and resource sharing, if so desired.

Massive data processing – Internet search and web services often require massive data processing, especially to support personalized services.

Distributed storage – Large-scale storage of public records and public archive information demands distributed storage over the cloud.

Licensing and billing services – License management and billing services greatly benefit all kinds of cloud services in utility computing.

1.6.2 LAYERED CLOUD ARCHITECTURE DESIGN

The architecture of a cloud is developed at three layers:

• Infrastructure,

• Platform

• Application
• These three development layers are implemented with virtualization and standardization of
hardware and software resources provisioned in the cloud. The services to public, private, and
hybrid clouds are conveyed to users through networking support over the Internet and intranets
involved.
• It is clear that the infrastructure layer is deployed first to support IaaS services. This
infrastructure layer serves as the foundation for building the platform layer of the cloud for
supporting PaaS services. In turn, the platform layer is a foundation for implementing the
application layer for SaaS applications.
• Different types of cloud services demand application of these resources separately. The
infrastructure layer is built with virtualized compute, storage, and network resources.
• The platform layer is for general-purpose and repeated usage of the collection of software
resources. This layer provides users with an environment to develop their applications, to test
operation flows, and to monitor execution results and performance. The platform should be able
to assure users that they have scalability, dependability, and security protection.
• In a way, the virtualized cloud platform serves as a “system middleware” between the
infrastructure and application layers of the cloud.

FIGURE: Layered architectural development of the cloud platform for IaaS, PaaS, and
SaaS applications over the Internet.

• The application layer is formed with a collection of all needed software modules for SaaS
applications. Service applications in this layer include daily office management work, such as
information retrieval, document processing, and calendar and authentication services.
• The application layer is also heavily used by enterprises in business marketing and sales, consumer
relationship management (CRM), financial transactions, and supply chain management.
• In general, SaaS demands the most work from the provider, PaaS is in the middle, and IaaS
demands the least.
• For example, Amazon EC2 provides not only virtualized CPU resources to users, but also
management of these provisioned resources. Services at the application layer demand more work
from providers. The best example of this is the Salesforce.com CRM service, in which the
provider supplies not only the hardware at the bottom layer and the software at the top layer, but
also the platform and software tools for user application development and monitoring.
1.6.3 Market-Oriented Cloud Architecture
• Cloud providers consider and meet the different QoS parameters of each individual consumer as
negotiated in specific SLAs. To achieve this, the providers cannot deploy traditional system-
centric resource management architecture. Instead, market-oriented resource management is
necessary to regulate the supply and demand of cloud resources to achieve market equilibrium
between supply and demand.
• The designer needs to provide feedback on economic incentives for both consumers and
providers. The purpose is to promote QoS-based resource allocation mechanisms. In addition,
clients can benefit from the potential cost reduction of providers, which could lead to a more
competitive market, and thus lower prices.
• Figure shows the high-level architecture for supporting market-oriented resource allocation in a
cloud computing environment.
• This cloud is basically built with the following entities: Users or brokers acting on user’s behalf
submit service requests from anywhere in the world to the data center and cloud to be processed.
The SLA resource allocator acts as the interface between the data center/cloud service provider
and external users/brokers. It requires the interaction of the following mechanisms to
support SLA-oriented resource management. When a service request is first submitted, the service request examiner interprets the submitted request for QoS requirements before determining whether to accept or reject the request.
• The request examiner ensures that there is no overloading of resources whereby many service
requests cannot be fulfilled successfully due to limited resources. It also needs the latest status
information regarding resource availability (from the VM Monitor mechanism) and workload
processing (from the Service Request Monitor mechanism) in order to make resource allocation
decisions effectively.
• Then it assigns requests to VMs and determines resource entitlements for allocated VMs. The
Pricing mechanism decides how service requests are charged.
• For instance, requests can be charged based on submission time (peak/off- peak), pricing rates
(fixed/changing), or availability of resources (supply/ demand). Pricing serves as a basis for
managing the supply and demand of computing resources within the data center and facilitates
in prioritizing resource allocations effectively.
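As a rough sketch of the pricing mechanism just described, the function below charges by submission time (peak vs. off-peak) and by resource availability (supply/demand). All rates, peak hours, and thresholds are assumed values for illustration, not real provider prices.

```python
# Hedged sketch of QoS-aware pricing: the charge depends on
# submission time and on how scarce capacity currently is.
PEAK_HOURS = range(9, 18)          # assumed business hours

def price_request(cpu_hours, hour_of_day, free_capacity_ratio):
    rate = 0.10                    # assumed base rate, $/CPU-hour
    if hour_of_day in PEAK_HOURS:  # peak-time surcharge
        rate *= 1.5
    if free_capacity_ratio < 0.2:  # scarce supply -> demand pricing
        rate *= 2.0
    return round(cpu_hours * rate, 4)

# Off-peak with plentiful capacity: base rate applies.
print(price_request(10, hour_of_day=3, free_capacity_ratio=0.8))   # 1.0
# Peak hour with scarce capacity: both multipliers apply.
print(price_request(10, hour_of_day=11, free_capacity_ratio=0.1))  # 3.0
```

Charging more when capacity is scarce is what lets pricing regulate supply and demand, as the text describes.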

FIGURE: Market-oriented cloud architecture to expand/shrink leasing of resources with variation in QoS/demand from users.

• The Accounting mechanism maintains the actual usage of resources by requests so that the
final cost can be computed and charged to users. In
addition, the maintained historical usage information can be utilized by the Service Request
Examiner and Admission Control mechanism to improve resource allocation decisions.
• The VM Monitor mechanism keeps track of the availability of VMs and their resource
entitlements. The Dispatcher mechanism starts the execution of accepted service requests on
allocated VMs. The Service Request Monitor mechanism keeps track of the execution progress
of service requests.
• Multiple VMs can be started and stopped on demand on a single physical machine to meet
accepted service requests, hence providing maximum flexibility to configure various partitions
of resources on the same physical machine to different specific requirements of service requests.
• In addition, multiple VMs can concurrently run applications based on different operating
system environments on a single physical machine since the VMs are isolated from one another
on the same physical machine.
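A minimal sketch of the Accounting mechanism described above: it records each request's resource usage so the final cost can be computed and charged. The class name, fields, and flat rate are illustrative assumptions, not part of any real allocator.

```python
# Sketch of an Accounting mechanism: accumulate per-user resource
# usage, then compute the final cost at a flat assumed rate.
from collections import defaultdict

class Accounting:
    def __init__(self, rate_per_cpu_hour=0.10):
        self.rate = rate_per_cpu_hour
        self.usage = defaultdict(float)   # user -> accumulated CPU-hours

    def record(self, user, cpu_hours):
        # Called by the allocator each time a VM finishes a request.
        self.usage[user] += cpu_hours

    def final_cost(self, user):
        return self.usage[user] * self.rate

acct = Accounting()
acct.record("alice", 4.0)
acct.record("alice", 6.0)
print(acct.final_cost("alice"))   # 1.0  (10 CPU-hours at $0.10)
```

The retained `usage` history is exactly the kind of record the Service Request Examiner could consult to improve future allocation decisions.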
1.6.4 Quality of Service Factors

• The data center comprises multiple computing servers that provide resources to meet service
demands. In the case of a cloud as a commercial offering to enable crucial business operations
of companies, there are critical QoS parameters to consider in a service request, such as time,
cost, reliability, and trust/security.
• In short, greater emphasis should be placed on customers, since they pay to access services in clouds. In addition, the state of the art in cloud computing has no or limited support for dynamic
negotiation of SLAs between participants and mechanisms for automatic allocation of resources
to multiple competing requests. Negotiation mechanisms are needed to respond to alternate
offers protocol for establishing SLAs.
• Commercial cloud offerings must be able to support customer-driven service management based
on customer profiles and requested service requirements. Commercial clouds define
computational risk management tactics to identify, assess, and manage risks involved in the
execution of applications with regard to service requirements and customer needs.
• The cloud also derives appropriate market-based resource management strategies that
encompass both customer-driven service management and computational risk management to
sustain SLA-oriented resource allocation.
• The system incorporates autonomic resource management models that effectively self-manage
changes in service requirements to satisfy both new service demands and existing service
obligations, and leverage VM technology to dynamically assign resource shares according to service
requirements.

1.7 ARCHITECTURAL DESIGN CHALLENGES

Six open challenges in cloud architecture development

1 Service Availability and Data Lock-in Problem

2 Data Privacy and Security Concerns

3 Unpredictable Performance and Bottlenecks

4 Distributed Storage and Widespread Software Bugs

5 Cloud Scalability, Interoperability, and Standardization

6 Software Licensing and Reputation Sharing


Challenge 1—Service Availability and Data Lock-in Problem

• The management of a cloud service by a single company is often the source of single points of
failure. To achieve HA, one can consider using multiple cloud providers. Even if a company has
multiple data centers located in different geographic regions, it may have common software
infrastructure and accounting systems. Therefore, using multiple cloud providers may provide
more protection from failures.
• Another availability obstacle is distributed denial of service (DDoS) attacks. Criminals threaten
to cut off the incomes of SaaS providers by making their services unavailable. Some utility
computing services offer SaaS providers the opportunity to defend against DDoS attacks by
using quick scale-ups.
• Software stacks have improved interoperability among different cloud platforms, but the APIs themselves are still proprietary. Thus, customers cannot easily extract their data and programs from one site to run on another.
• The obvious solution is to standardize the APIs so that a SaaS developer can deploy services and data across multiple cloud providers. This would also guard against the loss of all data due to the failure of a single company.
• In addition to mitigating data lock-in concerns, standardization of APIs enables a new usage
model in which the same software infrastructure can be used in both public and private clouds.
Such an option could enable “surge computing,” in which the public cloud is used to capture the
extra tasks that cannot be easily run in the data center of a private cloud.
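The "surge computing" idea can be sketched as a simple placement policy (the capacities and task sizes below are assumed numbers): fill the private cloud first and spill overflow tasks to the public cloud.

```python
# Surge-computing sketch: run on the private cloud until its
# capacity is exhausted, then overflow extra tasks to a public cloud.
def place_tasks(tasks, private_capacity):
    private, public = [], []
    used = 0
    for task, demand in tasks:
        if used + demand <= private_capacity:
            used += demand
            private.append(task)     # fits in the private data center
        else:
            public.append(task)      # surge: overflow to the public cloud
    return private, public

tasks = [("batch-A", 40), ("batch-B", 50), ("batch-C", 30)]
private, public = place_tasks(tasks, private_capacity=100)
# batch-A and batch-B fit (90 units); batch-C surges to the public cloud.
```

Standardized APIs are what make such a policy feasible: the same software stack must run unchanged in both environments.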

Challenge 2—Data Privacy and Security Concerns

• Current cloud offerings are essentially public (rather than private) networks, exposing the system
to more attacks. Many obstacles can be overcome immediately with well-understood
technologies such as encrypted storage, virtual LANs, and network middleboxes (e.g., firewalls, packet filters).
• For example, you could encrypt your data before placing it in a cloud. Many nations have laws
requiring SaaS providers to keep customer data and copyrighted material within national
boundaries.
• Traditional network attacks include buffer overflows, DoS attacks, spyware, malware, rootkits,
Trojan horses, and worms. In a cloud environment, newer attacks may result from hypervisor
malware, guest hopping and hijacking, or VM rootkits.
• Another type of attack is the man-in-the-middle attack for VM migrations. In general, passive
attacks steal sensitive data or passwords. Active attacks may manipulate kernel data structures
which will cause major damage to cloud servers.
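Encrypting data before uploading it, as suggested above, requires a vetted cryptography library. As a related, standard-library-only illustration of client-side protection, the sketch below tags data with an HMAC before upload so that tampering at the provider can be detected on retrieval; the key and data values are made up for the example.

```python
# Client-side integrity protection sketch: keep a secret key on
# premises, tag data before upload, verify the tag after retrieval.
import hmac
import hashlib

SECRET = b"client-held-key"        # assumed key, never sent to the cloud

def tag(data: bytes) -> str:
    return hmac.new(SECRET, data, hashlib.sha256).hexdigest()

original = b"customer-records-v1"
stored_tag = tag(original)                 # retained by the client

retrieved = b"customer-records-v1"         # what the cloud returned
ok = hmac.compare_digest(stored_tag, tag(retrieved))
tampered = hmac.compare_digest(stored_tag, tag(b"customer-records-v2"))
# ok is True; tampered is False -- any modification is detected.
```

Integrity tags complement, but do not replace, encryption: confidentiality still requires encrypting the payload itself before it leaves the client.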
Challenge 3—Unpredictable Performance and Bottlenecks

• Multiple VMs can share CPUs and main memory in cloud computing, but I/O sharing is
problematic. For example, to run 75 EC2 instances with the STREAM benchmark requires a
mean bandwidth of 1,355 MB/second. However, for each of the 75 EC2 instances to write 1
GB files to the local disk requires a mean disk write bandwidth of only 55 MB/second. This
demonstrates the problem of I/O interference between VMs. One solution is to improve I/O
architectures and operating systems to efficiently virtualize interrupts and I/O channels.
• Internet applications continue to become more data-intensive. If we assume applications to be
“pulled apart” across the boundaries of clouds, this may complicate data placement and
transport. Cloud users and providers have to think about the implications of placement and traffic
at every level of the system, if they want to minimize costs. This kind of reasoning can be seen
in Amazon’s development of its new CloudFront service. Therefore, data transfer bottlenecks
must be removed, bottleneck links must be widened, and weak servers should be removed.
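The STREAM figures quoted above can be put into perspective with a back-of-envelope calculation (assuming, purely for illustration, that bandwidth divides evenly among co-located VMs):

```python
# Rough arithmetic on the numbers from the text: memory bandwidth
# virtualizes well, but shared disk I/O becomes the bottleneck.
instances = 75
mem_bw_per_instance = 1355      # MB/s under the STREAM (memory) benchmark
disk_bw_per_instance = 55       # MB/s when writing 1 GB files to local disk

aggregate_disk_demand = instances * disk_bw_per_instance   # 4125 MB/s
ratio = mem_bw_per_instance / disk_bw_per_instance
# Memory bandwidth exceeds disk write bandwidth by roughly 24x,
# so disk I/O, not memory, is where inter-VM interference bites.
```

The gap between the two figures is the quantitative case for the improved I/O virtualization the text calls for.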
Challenge 4—Distributed Storage and Widespread Software Bugs

• The database is always growing in cloud applications. The opportunity is to create a storage
system that will not only meet this growth, but also combine it with the cloud advantage of scaling
arbitrarily up and down on demand. This demands the design of efficient distributed SANs.
• Data centers must meet programmers’ expectations in terms of scalability, data durability, and
HA. Data consistence checking in SAN-connected data centers is a major challenge in cloud
computing.
• Large-scale distributed bugs cannot be reproduced, so the debugging must occur at a scale in the
production data centers. No data center will provide such a convenience. One solution may be a
reliance on using VMs in cloud computing. The level of virtualization may make it possible to
capture valuable information in ways that are impossible without using VMs. Debugging over
simulators is another approach to attacking the problem, if the simulator is well designed.

Challenge 5—Cloud Scalability, Interoperability, and Standardization

• The pay-as-you-go model applies to storage and network bandwidth; both are counted in terms
of the number of bytes used. Computation is different depending on virtualization level. GAE
automatically scales in response to load increases and decreases; users are charged by the cycles
used.
• AWS charges by the hour for the number of VM instances used, even if the machine is idle. The
opportunity here is to scale quickly up and down in response to load variation, in order to save
money, but without violating SLAs.
• Open Virtualization Format (OVF) describes an open, secure, portable, efficient, and extensible
format for the packaging and distribution of VMs. It also defines a format for distributing
software to be deployed in VMs. This VM format does not rely on the use of a specific host
platform, virtualization platform, or guest operating system. The approach is to address virtual
platform-agnostic packaging with certification and integrity of packaged software. The package
supports virtual appliances to span more than one VM.
• OVF also defines a transport mechanism for VM templates, and can apply to different
virtualization platforms with different levels of virtualization. In terms of cloud standardization,
we suggest the ability for virtual appliances to run on any virtual platform. We also need to enable
VMs to run on heterogeneous hardware platform hypervisors. This requires hypervisor-
agnostic VMs. We also need to realize cross-platform live migration between x86 Intel and AMD
technologies and support legacy hardware for load balancing. All these issues are wide open for
further research.
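Returning to the two charging models at the start of this challenge, the sketch below contrasts per-VM-hour billing (which charges for idle time) with usage-based billing; the rate is an assumed value in cents, not a real provider price.

```python
# Contrast of two hypothetical billing models: per-provisioned-hour
# (AWS-style) versus per-busy-hour (GAE-style cycle-based billing).
def hourly_bill(vm_hours, rate_cents=5):
    # Every hour the VM is provisioned is billed, busy or idle.
    return vm_hours * rate_cents

def usage_bill(busy_hours, rate_cents=5):
    # Only hours of actual work are billed.
    return busy_hours * rate_cents

provisioned, busy = 24, 6       # one VM up all day, busy only 6 hours
idle_cost = hourly_bill(provisioned) - usage_bill(busy)
# idle_cost -> 90 cents: the saving available by scaling down when
# idle, provided SLAs are not violated in the process.
```

This gap is the economic motivation for the fast scale-up/scale-down capability the text identifies as the opportunity.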
Challenge 6—Software Licensing and Reputation Sharing

• Many cloud computing providers originally relied on open source software because the licensing
model for commercial software is not ideal for utility computing.
• The primary opportunity is either for open source to remain popular or simply for commercial
software companies to change their licensing structure to better fit cloud computing. One can
consider using both pay-for-use and bulk-use licensing schemes to widen the business coverage.
• One customer’s bad behavior can affect the reputation of the entire cloud. For instance,
blacklisting of EC2 IP addresses by spam-prevention services may limit smooth VM installation.
• An opportunity would be to create reputation-guarding services similar to the “trusted e-mail”
services currently offered (for a fee) to services hosted on smaller ISPs. Another legal issue
concerns the transfer of legal liability. Cloud providers want legal liability to remain with the
customer, and vice versa. This problem must be solved at the SLA level.
