CCS335-CLOUD COMPUTING
UNIT-1
Cloud infrastructure is a term used to describe the components needed for cloud computing, which
includes hardware, abstracted resources, storage, and network resources.
An abstraction technology or process—like virtualization—is used to separate resources from
physical hardware and pool them into clouds; automation software and management tools allocate
these resources and provision new environments so users can access what they need—when they
need it.
Components of Cloud Infrastructure:
1. Hypervisor:
A hypervisor is firmware or a low-level program that is the key enabler of virtualization. It is
used to divide and allocate cloud resources among several customers. Because it monitors and
manages cloud services/resources, the hypervisor is also called a VMM (Virtual Machine
Monitor or Virtual Machine Manager).
2. Management Software:
Management software helps in maintaining and configuring the infrastructure. Cloud
management software monitors and optimizes resources, data, applications and services.
3. Deployment Software:
Deployment software helps in deploying and integrating applications on the cloud; typically it
helps in building a virtual computing environment.
4. Network:
The network is one of the key components of cloud infrastructure; it is responsible for connecting
cloud services over the Internet. A network is required for the transmission of data and resources,
both externally and internally.
5. Server:
The server represents the computing portion of the cloud infrastructure. It is responsible for
managing and delivering cloud services to various users and partners, maintaining security,
and so on.
6. Storage:
Storage represents the storage facility provided to different organizations for storing and
managing data. Because it keeps multiple copies of the data, another copy can be retrieved if
one resource fails.
Along with these, virtualization is also considered an important component of cloud
infrastructure, because it abstracts the available data storage and computing power away from
the actual hardware; users interact with their cloud infrastructure through a GUI (Graphical
User Interface).
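To make the hypervisor's monitoring role concrete, here is a minimal sketch that queries a local hypervisor for the VMs it manages. It assumes the libvirt Python bindings (the libvirt-python package) and a locally running QEMU/KVM hypervisor; the connection URI and output format are illustrative.

```python
# Minimal sketch: querying a hypervisor (the VMM) through the libvirt
# Python bindings. Assumes the libvirt-python package and a locally
# running QEMU/KVM hypervisor; the URI is illustrative.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")   # read-only connection to the VMM
if conn is None:
    raise RuntimeError("failed to connect to the hypervisor")

# Ask the hypervisor which virtual machines (domains) it manages.
for dom in conn.listAllDomains():
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f"VM {dom.name()}: state={state}, vCPUs={vcpus}, memory={mem // 1024} MB")

conn.close()
```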
Distributed and cloud computing systems are built over a large number of autonomous computer
nodes. These node machines are interconnected by SANs, LANs, or WANs in a hierarchical
manner. With today's networking technology, a few LAN switches can easily connect hundreds of
machines as a working cluster. A WAN can connect many local clusters to form a very large cluster
of clusters. In this sense, one can build a massive system with millions of computers connected to
edge networks.
Massive systems are considered highly scalable, and can reach web-scale connectivity, either
physically or logically. In Table 1.2, massive systems are classified into four groups: clusters,
P2P networks, computing grids, and Internet clouds over huge data centers. In terms of node
number, these four system classes may involve hundreds, thousands, or even millions of computers
as participating nodes. These machines work collectively, cooperatively, or collaboratively at
various levels. The table entries characterize these four system classes in various technical and
application aspects.
From the application perspective, clusters are most popular in supercomputing applications. In
2009, 417 of the Top 500 supercomputers were built with cluster architecture. It is fair to say that
clusters have laid the necessary foundation for building large-scale grids and clouds. P2P networks
appeal most to business applications. However, the content industry was reluctant to accept P2P
technology for lack of copyright protection in ad hoc networks. Many national grids built in the
past decade were underutilized for lack of reliable middleware or well-coded applications.
Potential advantages of cloud computing include its low cost and simplicity for both providers and
users.
1. CLUSTERS
2. GRID COMPUTING
3. PEER TO PEER NETWORK
4. INTERNET CLOUDS
The software environments and applications in a cluster must rely on the middleware to achieve
high performance. The cluster benefits come from scalable performance, efficient message
passing, high system availability, seamless fault tolerance, and cluster-wide job management.
1.2.2 Grid Computing Infrastructures
Grid computing was introduced in the 1990s. In this computing structure, the different computers or
nodes are placed in different geographical locations but are connected to the same network over the
Internet.
Unlike the computing methods seen so far, which use homogeneous nodes located in the same
place, grid computing places its nodes in different organizations. It reduced the problems of
cluster computing, but the distance between the nodes raised new problems.
Grids are built over WANs already used by enterprises or organizations over the Internet. The grid is
presented to users as an integrated resource pool, as shown in the upper half of the figure.
1.2.2.2 Grid Families
Grid technology demands new distributed computing models, software/middleware support,
network protocols, and hardware infrastructures. National grid projects were followed by industrial
grid platform development by IBM, Microsoft, Sun, HP, Dell, Cisco, EMC, Platform Computing,
and others. New grid service providers (GSPs) and new grid applications have emerged rapidly,
similar to the growth of Internet and web services in the past two decades. In Table 1.4, grid
systems are classified into essentially two categories: computational or data grids and P2P grids.
Computational and data grids are built primarily at the national level.
A P2P overlay network is formed by mapping each physical machine to its ID, logically, through a
virtual mapping as shown in Figure 1.17. When a new peer joins the system, its peer ID is added
as a node in the overlay network. When an existing peer leaves the system, its peer ID is removed
from the overlay network automatically. Therefore, it is the P2P overlay network that characterizes
the logical connectivity among the peers.
There are two types of overlay networks: unstructured and structured. An unstructured
overlay network is characterized by a random graph. There is no fixed route to send messages or
files among the nodes. Often, flooding is applied to send a query to all nodes in an unstructured
overlay, thus resulting in heavy network traffic and nondeterministic search results. Structured
overlay networks follow certain connectivity topology and rules for inserting and removing nodes
(peer IDs) from the overlay graph. Routing mechanisms are developed to take advantage of the
structured overlays.
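The routing idea behind structured overlays can be illustrated with a small consistent-hashing sketch (a simplified, Chord-like ring, not a complete protocol): each peer ID is hashed to a position on a ring, keys are routed to the first peer at or after their hash, and join/leave operations update the ring automatically.

```python
# Simplified structured-overlay sketch: a consistent-hashing ring.
import hashlib
import bisect

def ring_hash(value: str) -> int:
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** 32)

class OverlayRing:
    def __init__(self):
        self._ring = []    # sorted peer positions on the hash ring
        self._peers = {}   # position -> peer name

    def join(self, peer: str):
        pos = ring_hash(peer)
        bisect.insort(self._ring, pos)
        self._peers[pos] = peer

    def leave(self, peer: str):
        pos = ring_hash(peer)
        self._ring.remove(pos)
        del self._peers[pos]

    def lookup(self, key: str) -> str:
        # Route the key to the first peer at or after its hash (wrapping around).
        pos = ring_hash(key)
        idx = bisect.bisect_left(self._ring, pos) % len(self._ring)
        return self._peers[self._ring[idx]]

ring = OverlayRing()
for p in ("peerA", "peerB", "peerC"):
    ring.join(p)
print("file1.mp3 is stored on", ring.lookup("file1.mp3"))
ring.leave("peerB")   # the overlay re-maps keys when a peer departs
print("file1.mp3 is now on", ring.lookup("file1.mp3"))
```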
1.2.3.5 P2P Application Families
Based on application, P2P networks are classified into four groups, as shown in Table 1.5. The
first family is for distributed file sharing of digital content (music, videos, etc.) on the P2P
network. This includes many popular P2P networks such as Gnutella, Napster, and BitTorrent,
among others. Collaboration P2P networks include MSN or Skype chatting, instant messaging,
and collaborative design, among others. The third family is for distributed P2P computing in
specific applications. For example, SETI@home provides 25 Tflops of distributed computing
power, collectively, over 3 million Internet host machines. Other P2P platforms, such as JXTA,
.NET, and FightingAID@home, support naming, discovery, communication, security, and
resource aggregation in some P2P applications.
1.2.3.6 P2P Computing Challenges
P2P computing faces three types of heterogeneity problems, in hardware, software, and network
requirements. There are too many hardware models and architectures to select from;
incompatibility exists between software and the OS; and different network connections and
protocols make it too complex to apply in real applications. We need system scalability as the
workload increases. System scaling is directly related to performance and bandwidth; P2P
networks do have these properties. Data location also affects collective performance. Data locality,
network proximity, and interoperability are three design objectives in distributed P2P applications.
Cloud computing is intended to satisfy many user applications simultaneously. The cloud
ecosystem must be designed to be secure, trustworthy, and dependable. Some computer users
think of the cloud as a centralized resource pool. Others consider the cloud to be a server cluster
which practices distributed computing over all the servers used.
1.3.2 The Cloud Landscape
Traditionally, a distributed computing system tends to be owned and operated by an autonomous
administrative domain (e.g., a research laboratory or company) for on-premises computing needs.
However, these traditional systems have encountered several performance bottlenecks: constant
system maintenance, poor utilization, and increasing costs associated with hardware/software
upgrades. Cloud computing as an on-demand computing paradigm resolves or relieves us from
these problems.
• Infrastructure as a Service (IaaS) This model puts together the infrastructure demanded by users,
namely servers, storage, networks, and the data center fabric. The user can deploy and run
multiple VMs running guest OSes for specific applications. The user does not manage or control
the underlying cloud infrastructure, but can specify when to request and release the needed
resources.
• Platform as a Service (PaaS) This model enables the user to deploy user-built applications onto
a virtualized cloud platform. PaaS includes middleware, databases, development tools, and
some runtime support such as Web 2.0 and Java. The platform includes both hardware and
software integrated with specific programming interfaces. The provider supplies the API and
software tools (e.g., Java, Python, Web 2.0, .NET). The user is freed from managing the cloud
infrastructure.
• Software as a Service (SaaS) This refers to browser-initiated application software delivered over
the Internet to thousands of cloud customers. On the customer side, there is no upfront investment
in servers or software licensing. On the provider side, costs are rather low, compared with
conventional hosting of user applications.
Internet clouds offer four deployment modes: private, public, managed, and hybrid [11]. These
modes carry different security implications. The different SLAs imply that the security
responsibility is shared among all the cloud providers, the cloud resource consumers, and the third-
party cloud-enabled software providers. Advantages of cloud computing have been advocated by
many IT experts, industry leaders, and computer science researchers.
The following list highlights eight reasons to adopt the cloud for upgraded Internet applications
and web services:
1. Desired location in areas with protected space and higher energy efficiency
2. Sharing of peak-load capacity among a large pool of users, improving overall utilization
3. Separation of infrastructure maintenance duties from domain-specific application development
4. Significant reduction in cloud computing cost, compared with traditional computing paradigms
5. Cloud computing programming and application development
6. Service and data discovery and content/service distribution
7. Privacy, security, copyright, and reliability issues
8. Service agreements, business models, and pricing policies
● The goal is to achieve effective and secure cloud computing to reduce cost and improve
services.
● NIST formed six major workgroups specific to cloud computing.
● In general, NIST generates reports for future reference, which include surveys and analyses of
existing cloud computing reference models, vendors, and federal agencies.
● The conceptual reference architecture shown in Figure 3.2 involves five actors. Each actor is an
entity that participates in cloud computing.
Actor definitions:
• Cloud Consumer – A person or organization that maintains a business relationship with, and
uses services from, Cloud Providers.
• Cloud Provider – A person, organization, or entity responsible for making a service available to
interested parties.
• Cloud Auditor – A party that can conduct independent assessment of cloud services, information
system operations, performance, and security of the cloud implementation.
• Cloud Broker – An entity that manages the use, performance, and delivery of cloud services,
and negotiates relationships between Cloud Providers and Cloud Consumers.
• Cloud Carrier – An intermediary that provides connectivity and transport of cloud services from
Cloud Providers to Cloud Consumers.
Figure illustrates the interactions among the actors. A cloud consumer may request cloud
services from a cloud provider directly or via a cloud broker.
A cloud auditor conducts independent audits and may contact the others to collect necessary
information.
The communication paths for a cloud broker to provide service to a cloud consumer:
A cloud consumer may request service from a cloud broker instead of contacting a cloud
provider directly. The cloud broker may create a new service by combining multiple services or
by enhancing an existing service. In this example, the actual cloud providers are invisible to the
cloud consumer and the cloud consumer interacts directly with the cloud broker.
Cloud carriers provide the connectivity and transport of cloud services from cloud
providers to cloud consumers. As illustrated in Figure 3.6, a cloud provider participates in and
arranges for two unique service level agreements (SLAs), one with a cloud carrier (e.g. SLA2)
and one with a cloud consumer (e.g. SLA1). A cloud provider arranges service level agreements
(SLAs) with a cloud carrier and may request dedicated and encrypted connections to ensure the
cloud services are consumed at a consistent level according to the contractual obligations with
the cloud consumers. In this case, the provider may specify its requirements on capability,
flexibility and functionality in SLA2 in order to provide essential requirements in SLA1.
For a cloud service, a cloud auditor conducts independent assessments of the operation and
security of the cloud service implementation. The audit may involve interactions with both the
Cloud Consumer and the Cloud Provider.
The cloud consumer is the principal stakeholder for the cloud computing service. A cloud
consumer represents a person or organization that maintains a business relationship with, and uses
the service from a cloud provider.
A cloud consumer browses the service catalog from a cloud provider, requests the appropriate
service, sets up service contracts with the cloud provider, and uses the service. The cloud
consumer may be billed for the service provisioned, and needs to arrange payments accordingly.
Cloud consumers need SLAs to specify the technical performance requirements to be fulfilled
by a cloud provider. SLAs can cover terms regarding the quality of service, security, and remedies
for performance failures. A cloud provider may also list in the SLAs a set of promises explicitly
not made to consumers, i.e., limitations, and obligations that cloud consumers must accept.
A cloud consumer can freely choose a cloud provider with better pricing and more favorable
terms. Typically a cloud provider's pricing policy and SLAs are non-negotiable, unless the
customer expects heavy usage and might be able to negotiate for better contracts.
Depending on the services requested, the activities and usage scenarios can be different among
cloud consumers.
The figure presents some example cloud services available to a cloud consumer. SaaS applications
run in the cloud and are made accessible via a network to the SaaS consumers.
The consumers of SaaS can be organizations that provide their members with access to software
applications, end users who directly use software applications, or software application
administrators who configure applications for end users. SaaS consumers can be billed based on
the number of end users, the time of use, the network bandwidth consumed, the amount of data
stored or duration of stored data.
Figure: Example Services Available to a Cloud Consumer
Cloud consumers of PaaS can employ the tools and execution resources provided by cloud
providers to develop, test, deploy and manage the applications hosted in a cloud environment.
PaaS consumers can be application developers who design and implement application software,
application testers who run and test applications in cloud-based environments, application
deployers who publish applications into the cloud, and application administrators who configure
and monitor application performance on a platform.
PaaS consumers can be billed according to the processing, database storage, and network resources
consumed by the PaaS application, and the duration of platform usage.
Consumers of IaaS have access to virtual computers, network-accessible storage, network
infrastructure components, and other fundamental computing resources on which they can
deploy and run arbitrary software.
The consumers of IaaS can be system developers, system administrators and IT managers who
are interested in creating, installing, managing and monitoring services for IT infrastructure
operations. IaaS consumers are provisioned with the capabilities to access these computing
resources, and are billed according to the amount or duration of the resources consumed, such
as CPU hours used by virtual computers, volume and duration of data stored, network
bandwidth consumed, number of IP addresses used for certain intervals.
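As a rough illustration of this usage-based billing, the sketch below combines CPU hours, storage, bandwidth, and reserved IP addresses into a monthly bill; all rates and usage figures are hypothetical and do not reflect any provider's real pricing.

```python
# Hypothetical IaaS billing sketch: rates and usage figures are made-up
# examples, not any provider's real pricing.
RATES = {
    "cpu_hour": 0.05,     # $ per VM CPU-hour
    "gb_month": 0.02,     # $ per GB stored per month
    "gb_transfer": 0.09,  # $ per GB of network bandwidth consumed
    "ip_hour": 0.005,     # $ per reserved IP address per hour
}

def monthly_bill(cpu_hours, gb_stored, gb_transferred, ip_hours):
    """Combine the metered quantities into one usage-based charge."""
    return (cpu_hours * RATES["cpu_hour"]
            + gb_stored * RATES["gb_month"]
            + gb_transferred * RATES["gb_transfer"]
            + ip_hours * RATES["ip_hour"])

# Example: two small VMs running all month, 200 GB stored, 50 GB out, one IP.
print(f"Monthly bill: ${monthly_bill(2 * 730, 200, 50, 730):.2f}")
```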
Cloud Provider
A cloud provider is a person, organization, or entity responsible for making a service available
to interested parties.
A Cloud Provider acquires and manages the computing infrastructure required for providing
the services, runs the cloud software that provides the services, and makes arrangements to deliver
the cloud services to the Cloud Consumers through network access.
For Software as a Service, the cloud provider deploys, configures, maintains and updates the
operation of the software applications on a cloud infrastructure so that the services are
provisioned at the expected service levels to cloud consumers.
The provider of SaaS assumes most of the responsibilities in managing and controlling the
applications and the infrastructure, while the cloud consumers have limited administrative control
of the applications.
For PaaS, the Cloud Provider manages the computing infrastructure for the platform and runs
the cloud software that provides the components of the platform, such as runtime software
execution stack, databases, and other middleware components.
The PaaS Cloud Provider typically also supports the development, deployment and
management process of the PaaS Cloud Consumer by providing tools such as integrated
development environments (IDEs), development version of cloud software, software
development kits (SDKs), deployment and management tools.
The PaaS Cloud Consumer has control over the applications and possibly some of the hosting
environment settings, but has no or limited access to the infrastructure underlying the platform,
such as the network, servers, operating systems (OS), or storage.
For IaaS, the Cloud Provider acquires the physical computing resources underlying the service,
including the servers, networks, storage and hosting infrastructure.
The Cloud Provider runs the cloud software necessary to make computing resources available
to the IaaS Cloud Consumer through a set of service interfaces and computing resource
abstractions, such as virtual machines and virtual network interfaces.
The IaaS Cloud Consumer in turn uses these computing resources, such as a virtual computer,
for their fundamental computing needs. Compared to SaaS and PaaS Cloud Consumers, an IaaS
Cloud Consumer has access to more fundamental forms of computing resources and thus has
more control over more of the software components in an application stack, including the OS and
network.
The IaaS Cloud Provider, on the other hand, has control over the physical hardware and cloud
software that makes the provisioning of these infrastructure services possible, for example,
the physical servers, network equipment, storage devices, host OS and hypervisors for
virtualization.
A Cloud Provider's activities can be described in five major areas. As shown in Figure 3.9, a cloud
provider conducts its activities in the areas of service deployment, service orchestration, cloud
service management, security, and privacy.
Cloud Auditor
A cloud auditor is a party that can perform an independent examination of cloud service controls
with the intent to express an opinion thereon.
Audits are performed to verify conformance to standards through review of objective evidence.
A cloud auditor can evaluate the services provided by a cloud provider in terms of security
controls, privacy impact, performance, etc.
Auditing is especially important for federal agencies as “agencies should include a contractual
clause enabling third parties to assess security controls of cloud providers” (by Vivek Kundra,
Federal Cloud Computing Strategy, Feb. 2011.).
Security controls are the management, operational, and technical safeguards or countermeasures
employed within an organizational information system to protect the confidentiality, integrity,
and availability of the system and its information.
For security auditing, a cloud auditor can make an assessment of the security controls in the
information system to determine the extent to which the controls are implemented correctly,
operating as intended, and producing the desired outcome with respect to the security
requirements for the system.
The security auditing should also include the verification of the compliance with regulation and
security policy. For example, an auditor can be tasked with ensuring that the correct policies are
applied to data retention according to relevant rules for the jurisdiction. The auditor may ensure
that fixed content has not been modified and that the legal and business data archival
requirements have been satisfied.
A privacy impact audit can help Federal agencies comply with applicable privacy laws and
regulations.
Cloud Broker
A cloud consumer may request cloud services from a cloud broker, instead of contacting a cloud
provider directly. A cloud broker is an entity that manages the use, performance and delivery of
cloud services and negotiates relationships between cloud providers and cloud consumers.
In general, a cloud broker can provide services in three categories:
Service Intermediation: A cloud broker enhances a given service by improving some specific
capability and providing value-added services to cloud consumers. The improvement can be
managing access to cloud services, identity management, performance reporting, enhanced
security, etc.
Service Aggregation: A cloud broker combines and integrates multiple services into one or more
new services. The broker provides data integration and ensures the secure data movement
between the cloud consumer and multiple cloud providers.
Service Arbitrage: Service arbitrage is similar to service aggregation except that the services
being aggregated are not fixed. Service arbitrage means a broker has the flexibility to choose
services from multiple agencies. The cloud broker, for example, can use a credit-scoring service
to measure and select an agency with the best score.
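Service arbitrage can be sketched as a broker that scores candidate providers and routes the consumer's request to the best one; the provider list and the scoring formula below are hypothetical stand-ins for a credit-scoring or QoS-rating service.

```python
# Hypothetical service-arbitrage sketch: the broker scores candidate
# providers (e.g. via a credit-scoring or QoS-rating service) and picks
# the best one for the consumer's request.
providers = [
    {"name": "ProviderA", "price_per_hour": 0.12, "rating": 4.2},
    {"name": "ProviderB", "price_per_hour": 0.10, "rating": 3.8},
    {"name": "ProviderC", "price_per_hour": 0.15, "rating": 4.7},
]

def score(provider):
    # Illustrative formula: higher rating and lower price give a better score.
    return provider["rating"] / provider["price_per_hour"]

def broker_select(candidates):
    return max(candidates, key=score)

best = broker_select(providers)
print("Broker routes the request to", best["name"])
```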
Cloud Carrier
A cloud carrier acts as an intermediary that provides connectivity and transport of cloud
services between cloud consumers and cloud providers. Cloud carriers provide access to
consumers through network, telecommunication and other access devices.
For example, cloud consumers can obtain cloud services through network access devices, such
as computers, laptops, mobile phones, mobile Internet devices (MIDs), etc.
The distribution of cloud services is normally provided by network and telecommunication
carriers or a transport agent, where a transport agent refers to a business organization that
provides physical transport of storage media, such as high-capacity hard drives.
Note that a cloud provider will set up SLAs with a cloud carrier to provide services consistent
with the level of SLAs offered to cloud consumers, and may require the cloud carrier to provide
dedicated and secure connections between cloud consumers and cloud providers.
The Cloud Provider and Cloud Consumer share the control of resources in a cloud system. As
illustrated in the figure, different service models affect an organization's control over the
computational resources and thus what can be done in a cloud system.
The figure shows these differences using a classic software stack notation comprised of the
application, middleware, and OS layers.
This analysis of the delineation of controls over the application stack helps in understanding the
responsibilities of the parties involved in managing the cloud application.
Figure: Scope of control between provider and consumer for the SaaS, PaaS, and IaaS service
models.
• A cloud infrastructure may be operated in one of the following deployment models: public cloud,
private cloud, community cloud, or hybrid cloud. The differences are based on how exclusive the
computing resources are made to a Cloud Consumer.
Public Cloud
• A public cloud is built over the Internet and can be accessed by any user who has paid for the
service. Public clouds are owned by service providers and are accessible through a subscription.
• The callout box at the top of the figure shows the architecture of a typical public cloud.
FIGURE Public, private, and hybrid clouds illustrated by functional architecture and
connectivity of representative clouds available by 2011.
• Many public clouds are available, including Google App Engine (GAE), Amazon Web Services
(AWS), Microsoft Azure, IBM Blue Cloud, and Salesforce.com’s Force.com. The providers of
the aforementioned clouds are commercial providers that offer a publicly accessible remote
interface for creating and managing VM instances within their proprietary infrastructure.
• A public cloud delivers a selected set of business processes.
• The application and infrastructure services are offered on a flexible price-per-use basis.
• A public cloud is one in which the cloud infrastructure and computing resources are made
available to the general public over a public network.
Figure 1 shows a partial view of the public cloud landscape, highlighting some of the primary
vendors in the marketplace.
● One of the main benefits that come with using public cloud services is near unlimited
scalability.
● The resources are pretty much offered based on demand. So any changes in activity level
can be handled very easily.
● This in turn brings with it cost effectiveness.
● Because the public cloud pools a large number of resources, users benefit from the savings of
large-scale operations.
● There are many services like Google Drive which are offered for free.
● Finally, the vast network of servers involved in public cloud services means that it can benefit
from greater reliability. Even if one data center were to fail entirely, the network simply
redistributes the load among the remaining centers, making it highly unlikely that the public
cloud would ever fail.
○ Easy scalability
○ Cost effectiveness
○ Increased reliability
● At the top of the list of drawbacks is the fact that the security of data held within a public cloud
is a cause for concern.
● It is often seen as an advantage that the public cloud has no geographical restrictions making
access easy from everywhere, but on the flip side this could mean that the server is in a
different country which is governed by an entirely different set of security and/or privacy
regulations.
● This could mean that your data is not all that secure making it unwise to use public cloud
services for sensitive data.
Community: Shared by two or more organizations with joint interests, such as colleges within a
university
A community cloud is similar to a public cloud except that its access is limited to a specific
community of cloud consumers. The community cloud may be jointly owned by the community
members or by a third-party cloud provider that provisions a public cloud with limited access. The
member cloud consumers of the community typically share the responsibility for defining and
evolving the community cloud (Figure 1).
Membership in the community does not necessarily guarantee access to or control of all the cloud's
IT resources. Parties outside the community are generally not granted access unless allowed by the
community.
Private Cloud
The use of a private cloud can change how organizational and trust boundaries are defined and
applied. The actual administration of a private cloud environment may be carried out by internal
or outsourced staff.
A cloud service consumer in the organization's on-premise environment accesses a cloud service
hosted on the same organization's private cloud via a virtual private network.
With a private cloud, the same organization is technically both the cloud consumer and the cloud
provider. In order to differentiate these roles:
▪ a separate organizational department typically assumes the responsibility for provisioning the
cloud (and therefore assumes the cloud provider role)
▪ departments requiring access to the private cloud assume the cloud consumer role
● While this can remove some of the scalability options, private cloud providers often offer what is
known as cloud bursting, in which non-sensitive data is switched to a public cloud to free up
private cloud space in the event of a significant spike in demand, until such time as the private
cloud can be expanded (a simple sketch of cloud bursting appears after this list).
○ Improved security
○ Greater control over the server
○ Flexibility in the form of Cloud Bursting
● The downsides of private cloud services include a higher initial outlay, although in the long term
many business owners find that this balances out and it actually becomes more cost-effective than
public cloud use.
● It is also more difficult to access the data held in a private cloud from remote locations due to the
increased security measures.
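The cloud-bursting idea referenced above can be sketched as a simple dispatcher: requests run in the private cloud while capacity remains, sensitive work always stays private, and non-sensitive overflow is redirected to a public cloud. The capacity figure and dispatch rule are assumptions for illustration only.

```python
# Cloud-bursting sketch: keep work in the private cloud while capacity
# remains; burst non-sensitive overflow to a public cloud (figures made up).
PRIVATE_CAPACITY = 100   # VM slots available in the private cloud

def dispatch(requests, private_load=0):
    placement = []
    for req in requests:
        if req["sensitive"] or private_load < PRIVATE_CAPACITY:
            # Sensitive work always stays private (it may have to queue);
            # non-sensitive work stays private only while slots remain.
            placement.append((req["id"], "private"))
            private_load += 1
        else:
            placement.append((req["id"], "public"))   # burst to the public cloud
    return placement

demand = [{"id": i, "sensitive": (i % 10 == 0)} for i in range(120)]
for req_id, target in dispatch(demand)[95:110]:
    print(req_id, "->", target)
```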
Hybrid
A hybrid cloud is a cloud environment comprised of two or more different cloud deployment
models. For example, a cloud consumer may choose to deploy cloud services processing sensitive
data to a private cloud and other, less sensitive cloud services to a public cloud. The result of this
combination is a hybrid deployment model.
An organization may use a hybrid cloud architecture that utilizes both a private and a public cloud.
Hybrid deployment architectures can be complex and challenging to create and maintain due to
the potential disparity in cloud environments and the fact that management responsibilities are
typically split between the private cloud provider organization and the public cloud provider.
Service Orchestration
• Service Orchestration refers to the composition of system components to support the Cloud
Provider's activities in the arrangement, coordination, and management of computing resources in
order to provide cloud services to Cloud Consumers.
1.5.1 INFRASTRUCTURE-AS-A-SERVICE (IAAS)
• Cloud computing delivers infrastructure, platform, and software (application) as services, which
are made available as subscription-based services in a pay-as-you-go model to consumers.
• The services provided over the cloud can be generally categorized into three different models:
namely IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS). All three models
allow users to access services over the Internet, relying entirely on the infrastructures of cloud
service providers.
• These models are offered based on various SLAs between providers and users. In a broad sense,
the SLA for cloud computing is addressed in terms of service availability, performance, and data
protection and security.
• Figure illustrates three cloud models at different service levels of the cloud. SaaS is applied at
the application end using special interfaces by users or clients.
FIGURE: The IaaS, PaaS, and SaaS cloud service models at different service levels.
• At the PaaS layer, the cloud platform must perform billing services and handle job queuing,
launching, and monitoring services. At the bottom layer of the IaaS services, databases, compute
instances, the file system, and storage must be provisioned to satisfy user demands.
Infrastructure as a Service
• This model allows users to use virtualized IT resources for computing, storage, and networking. In
short, the service is performed by rented cloud infrastructure. The user can deploy and run his
applications over his chosen OS environment.
• The user does not manage or control the underlying cloud infrastructure, but has control over
the OS, storage, deployed applications, and possibly select networking components. This IaaS
model encompasses storage as a service, compute instances as a service, and communication
as a service.
• The Virtual Private Cloud (VPC) in Example shows how to provide Amazon EC2 clusters and S3
storage to multiple users. Many startup cloud providers have appeared in recent years. GoGrid,
FlexiScale, and Aneka are good examples. Table summarizes the IaaS offerings by five public cloud
providers. Interested readers can visit the companies’ web sites for updated information.
Public Cloud Offerings of IaaS
• GoGrid: Each instance has 1–6 CPUs, 0.5–8 GB of memory, and 30–480 GB of storage.
APIs/access: REST, Java, PHP, Python, Ruby. Hypervisor and OS: Xen; Linux, Windows.
• Rackspace Cloud: Each instance has a four-core CPU, 0.25–16 GB of memory, and 10–620 GB
of storage. APIs/access: REST, Python, PHP, Java, C#, .NET. Hypervisor and OS: Xen; Linux.
• FlexiScale (in the UK): Each instance has 1–4 CPUs, 0.5–16 GB of memory, and 20–270 GB of
storage. Access: web console. Hypervisor and OS: Xen; Linux, Windows.
• Joyent Cloud: Each instance has up to eight CPUs, 0.25–32 GB of memory, and 30–480 GB of
storage. Access: no specific API; SSH, VirtualMin. OS-level virtualization with OpenSolaris.
Example
• A user can use a private facility for basic computations. When he must meet a specific workload
requirement, he can use the Amazon VPC to provide additional EC2 instances or more storage
(S3) to handle urgent applications.
• Figure shows VPC which is essentially a private cloud designed to address the privacy concerns
of public clouds that hamper their application when sensitive data and software are involved.
• Amazon EC2 provides the following services: resources from multiple data centers globally
distributed, CLI and web services (SOAP and Query), web-based console user interfaces, access to
VM instances via SSH and Windows, 99.5 percent availability agreements, per-hour pricing,
Linux and Windows OSes, and automatic scaling and load balancing.
• VPC allows the user to isolate provisioned AWS processors, memory, and storage from
interference by other users. Both autoscaling and elastic load balancing services can support
related demands. Autoscaling enables users to automatically scale their VM instance capacity up
or down. With autoscaling, one can ensure that a sufficient number of Amazon EC2 instances
are provisioned to meet desired performance, or one can scale down the VM instance capacity to
reduce costs when the workload is reduced.
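A hedged sketch of how an IaaS consumer might provision and release an EC2 instance programmatically is shown below, using the boto3 library (the AWS SDK for Python). The AMI ID, region, and instance type are placeholders, and real use requires account credentials.

```python
# Sketch of on-demand IaaS provisioning with boto3 (AWS SDK for Python).
# The region, AMI ID, and instance type below are placeholders; real use
# requires valid credentials and an existing machine image.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Request one small VM instance from a chosen machine image.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance = instances[0]
instance.wait_until_running()
print("Launched instance", instance.id)

# Release the resource when the workload is done (pay-as-you-go).
instance.terminate()
```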
1.5.2 PLATFORM-AS-A-SERVICE (PAAS) AND SOFTWARE-AS- A-SERVICE (SAAS)
SaaS is often built on top of the PaaS, which is in turn built on top of the IaaS.
• To be able to develop, deploy, and manage the execution of applications using provisioned
resources demands a cloud platform with the proper software environment. Such a platform
includes operating system and runtime library support.
• This has triggered the creation of the PaaS model to enable users to develop and deploy their
user applications. Table highlights cloud platform services offered by five PaaS services.
• The platform cloud is an integrated computer system consisting of both hardware and software
infrastructure. The user application can be developed on this virtualized cloud platform using
some programming languages and software tools supported by the provider (e.g., Java, Python,
.NET).
• The user does not manage the underlying cloud infrastructure. The cloud provider supports user
application development and testing on a well-defined service platform. This PaaS model
enables a collaborative software development platform for users from different parts of the
world. This model also encourages third parties to provide software management, integration,
and service monitoring solutions.
Example
As web applications are running on Google’s server clusters, they share the same capability with
many other users. The applications have features such as automatic scaling and load balancing
which are very convenient while building web applications. The distributed scheduler mechanism
can also schedule tasks for triggering events at specified times and regular intervals.
Figure shows the operational model for GAE. To develop applications using GAE, a development
environment must be provided.
Google provides a fully featured local development environment that simulates GAE on the developer's
computer. All the functions and application logic can be implemented locally, which is quite similar to
traditional software development. The coding and debugging stages can be performed locally as well.
After these steps are finished, the provided SDK offers a tool for uploading the user's application to
Google's infrastructure, where the applications are actually deployed. Many additional third-party
capabilities, including software management, integration, and service monitoring solutions, are also
provided.
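A minimal sketch of what a GAE web application might look like is shown below, assuming the Python standard environment with the Flask framework; the file names and deployment command follow common GAE conventions, but details vary by runtime version.

```python
# main.py -- minimal App Engine web application sketch
# (assumes the Python standard environment with the Flask framework;
#  an accompanying app.yaml would contain e.g. "runtime: python39").
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # GAE routes incoming HTTP requests to this handler and scales
    # application instances automatically as the load changes.
    return "Hello from App Engine!"

if __name__ == "__main__":
    # Local testing only; on App Engine the runtime serves `app` directly.
    app.run(host="127.0.0.1", port=8080, debug=True)
```

Locally, the application is run and debugged like any Flask program; in current SDKs, `gcloud app deploy` then uploads it to Google's infrastructure.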
Figure: GAE operational model (users send HTTP requests through the user interface; responses
are returned via the Google load balancer).
• This refers to browser-initiated application software over thousands of cloud customers. Services
and tools offered by PaaS are utilized in construction of applications and management of their
deployment on resources offered by IaaS providers. The SaaS model provides software
applications as a service.
• As a result, on the customer side, there is no upfront investment in servers or software licensing.
On the provider side, costs are kept rather low, compared with conventional hosting of user
applications.
• Customer data is stored in the cloud that is either vendor proprietary or publicly hosted to support
PaaS and IaaS. The best examples of SaaS services include Google Gmail and Docs, Microsoft
SharePoint, and the CRM software from Salesforce.com. They are all very successful in
promoting their own business or are used by thousands of small businesses in their day-to-day
operations.
• Providers such as Google and Microsoft offer integrated IaaS and PaaS services, whereas others
such as Amazon and GoGrid offer pure IaaS services and expect third-party PaaS providers such
as Manjrasoft to offer application development and deployment services on top of their
infrastructure services. To identify important cloud applications in enterprises, the success
stories of three real-life cloud applications are presented in Example 3.6 for HTC, news media,
and business transactions. The benefits of using cloud services are evident in these SaaS
applications.
Example
1. To discover new drugs through DNA sequence analysis, Eli Lilly and Company has used Amazon's
AWS platform with provisioned server and storage clusters to conduct high-performance
biological sequence analysis without using an expensive supercomputer. The benefit of this
IaaS application is reduced drug deployment time with much lower costs.
2. The New York Times has applied Amazon’s EC2 and S3 services to retrieve useful pictorial
information quickly from millions of archival articles and newspapers. The New York Times has
significantly reduced the time and cost in getting the job done.
3. Pitney Bowes, an e-commerce company, offers clients the opportunity to perform B2B
transactions using the Microsoft Azure platform, along with .NET and SQL services. These
offerings have significantly increased the company’s client base.
1.5.3 Mashup of Cloud Services
• At the time of this writing, public clouds are in use by a growing number of users. Due to the
lack of trust in leaking sensitive data in the business world, more and more enterprises,
organizations, and communities are developing private clouds that demand deep customization.
• An enterprise cloud is used by multiple users within an organization. Each user may build some
strategic applications on the cloud, and demands customized partitioning of the data, logic, and
database in the metadata representation. More private clouds may appear in the future.
• Based on a 2010 Google search survey, interest in grid computing is declining rapidly. Cloud
mashups have resulted from the need to use multiple clouds simultaneously or in sequence.
• For example, an industrial supply chain may involve the use of different cloud resources or
services at different stages of the chain. Some public repository provides thousands of service
APIs and mashups for web commerce services. Popular APIs are provided by Google Maps,
Twitter, YouTube, Amazon eCommerce, Salesforce.com, etc.
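A cloud mashup can be sketched as a small program that calls two service APIs and joins their results; both endpoints below are hypothetical placeholders rather than real API URLs.

```python
# Mashup sketch: combine the results of two web-service APIs.
# Both URLs are hypothetical placeholders, not real API endpoints.
import requests

def fetch_json(url, params=None):
    resp = requests.get(url, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

def supply_chain_view(order_id):
    # One cloud service tracks inventory, another tracks shipping;
    # the mashup joins the two results on the order ID.
    inventory = fetch_json("https://inventory.example.com/api/orders",
                           params={"id": order_id})
    shipping = fetch_json("https://shipping.example.com/api/track",
                          params={"order": order_id})
    return {"order": order_id,
            "in_stock": inventory.get("in_stock"),
            "eta": shipping.get("estimated_delivery")}

print(supply_chain_view("A-1001"))
```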
1. Frontend:
The frontend of the cloud architecture refers to the client side of the cloud computing system. It
contains all the user interfaces and applications that the client uses to access cloud computing
services/resources, for example, a web browser used to access the cloud platform.
• Client Infrastructure – Client infrastructure is a part of the frontend component. It contains
the applications and user interfaces that are required to access the cloud platform.
• In other words, it provides a GUI (Graphical User Interface) to interact with the cloud.
2. Backend:
The backend refers to the cloud itself, which is used by the service provider. It contains the
resources, manages those resources, and provides security mechanisms. Along with this, it includes
huge storage, virtual applications, virtual machines, traffic control mechanisms, deployment
models, etc.
1. Application –
The application in the backend refers to the software or platform that the client accesses; it
provides the service in the backend as per the client's requirements.
2. Service –
Service in the backend refers to the three major types of cloud-based services: SaaS, PaaS, and
IaaS. It also manages which type of service the user accesses.
3. Runtime Cloud –
The runtime cloud in the backend provides the execution and runtime platform/environment to
the virtual machines.
4. Storage –
Storage in backend provides flexible and scalable storage service and management of stored
data.
5. Infrastructure –
Cloud infrastructure in the backend refers to the hardware and software components of the cloud,
including servers, storage, network devices, virtualization software, etc.
6. Management –
Management in the backend refers to the management of backend components such as the
application, service, runtime cloud, storage, infrastructure, and other security mechanisms.
7. Security –
Security in the backend refers to the implementation of different security mechanisms to secure
cloud resources, systems, files, and infrastructure for end users.
8. Internet –
The Internet connection acts as the medium or bridge between the frontend and backend,
establishing the interaction and communication between them.
9. Database –
The database in the backend provides databases for storing structured data, such as SQL and
NoSQL databases. Examples of database services include Amazon RDS, Microsoft Azure SQL
Database, and Google Cloud SQL.
10. Networking –
Networking in the backend refers to services that provide networking infrastructure for
applications in the cloud, such as load balancing, DNS, and virtual private networks.
11. Analytics –
Analytics in the backend refers to services that provide analytics capabilities for data in the cloud,
such as data warehousing, business intelligence, and machine learning.
An Internet cloud is envisioned as a public cluster of servers provisioned on demand to perform
collective web services or distributed applications using data center resources and bandwidth.
• The scale of the cloud architecture can be easily expanded by adding more servers and
enlarging the network connectivity accordingly.
• System reliability can benefit from this architecture. Data can be put into multiple locations.
• The goal of virtualization is to centralize administrative tasks while improving scalability and
workloads.
• Clouds support Web 2.0 applications.
• Massive data processing: Internet search and web services often require massive data processing,
especially to support personalized services.
• Distributed storage: Large-scale storage of public records and public archive information
demands distributed storage over the cloud.
• Licensing and billing services: License management and billing services greatly benefit all kinds
of cloud services in utility computing.
The cloud platform is developed in three layers:
• Infrastructure
• Platform
• Application
• These three development layers are implemented with virtualization and standardization of
hardware and software resources provisioned in the cloud. The services to public, private, and
hybrid clouds are conveyed to users through networking support over the Internet and intranets
involved.
• It is clear that the infrastructure layer is deployed first to support IaaS services. This
infrastructure layer serves as the foundation for building the platform layer of the cloud for
supporting PaaS services. In turn, the platform layer is a foundation for implementing the
application layer for SaaS applications.
• Different types of cloud services demand application of these resources separately. The
infrastructure layer is built with virtualized compute, storage, and network resources.
• The platform layer is for general-purpose and repeated usage of the collection of software
resources. This layer provides users with an environment to develop their applications, to test
operation flows, and to monitor execution results and performance. The platform should be able
to assure users that they have scalability, dependability, and security protection.
• In a way, the virtualized cloud platform serves as a “system middleware” between the
infrastructure and application layers of the cloud.
FIGURE: Layered architectural development of the cloud platform for IaaS, PaaS, and
SaaS applications over the Internet.
• The application layer is formed with a collection of all needed software modules for SaaS
applications. Service applications in this layer include daily office management work, such as
information retrieval, document processing, and calendar and authentication services.
• The application layer is also heavily used by enterprises in business marketing and sales, consumer
relationship management (CRM), financial transactions, and supply chain management.
• In general, SaaS demands the most work from the provider, PaaS is in the middle, and IaaS
demands the least.
• For example, Amazon EC2 provides not only virtualized CPU resources to users, but also
management of these provisioned resources. Services at the application layer demand more work
from providers. The best example of this is the Salesforce.com CRM service, in which the
provider supplies not only the hardware at the bottom layer and the software at the top layer, but
also the platform and software tools for user application development and monitoring.
1.6.3 Market-Oriented Cloud Architecture
• Cloud providers consider and meet the different QoS parameters of each individual consumer as
negotiated in specific SLAs. To achieve this, the providers cannot deploy a traditional system-
centric resource management architecture. Instead, market-oriented resource management is
necessary to regulate the supply and demand of cloud resources and achieve market equilibrium
between supply and demand.
• The designer needs to provide feedback on economic incentives for both consumers and
providers. The purpose is to promote QoS-based resource allocation mechanisms. In addition,
clients can benefit from the potential cost reduction of providers, which could lead to a more
competitive market, and thus lower prices.
• Figure shows the high-level architecture for supporting market-oriented resource allocation in a
cloud computing environment.
• This cloud is basically built with the following entities: users or brokers acting on a user's behalf
submit service requests from anywhere in the world to the data center and cloud to be processed.
The SLA resource allocator acts as the interface between the data center/cloud service provider
and external users/brokers. It requires the interaction of the following mechanisms to
support SLA-oriented resource management. When a service request is first submitted, the
Service Request Examiner interprets the submitted request for QoS requirements before
determining whether to accept or reject the request.
• The request examiner ensures that there is no overloading of resources whereby many service
requests cannot be fulfilled successfully due to limited resources. It also needs the latest status
information regarding resource availability (from the VM Monitor mechanism) and workload
processing (from the Service Request Monitor mechanism) in order to make resource allocation
decisions effectively.
• Then it assigns requests to VMs and determines resource entitlements for allocated VMs. The
Pricing mechanism decides how service requests are charged.
• For instance, requests can be charged based on submission time (peak/off-peak), pricing rates
(fixed/changing), or availability of resources (supply/demand). Pricing serves as a basis for
managing the supply and demand of computing resources within the data center and facilitates
prioritizing resource allocations effectively (a simple sketch of these mechanisms follows this list).
• The Accounting mechanism maintains the actual usage of resources by requests so that the
final cost can be computed and charged to users. In
addition, the maintained historical usage information can be utilized by the Service Request
Examiner and Admission Control mechanism to improve resource allocation decisions.
• The VM Monitor mechanism keeps track of the availability of VMs and their resource
entitlements. The Dispatcher mechanism starts the execution of accepted service requests on
allocated VMs. The Service Request Monitor mechanism keeps track of the execution progress
of service requests.
• Multiple VMs can be started and stopped on demand on a single physical machine to meet
accepted service requests, hence providing maximum flexibility to configure various partitions
of resources on the same physical machine to different specific requirements of service requests.
• In addition, multiple VMs can concurrently run applications based on different operating
system environments on a single physical machine since the VMs are isolated from one another
on the same physical machine.
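A highly simplified sketch of the admission-control and pricing mechanisms described in this list is given below; the capacity threshold, peak hours, rates, and accept/reject rule are all illustrative assumptions.

```python
# Simplified SLA resource-allocator sketch: admission control followed by
# time-of-day pricing (capacity, peak hours, and rates are all made up).
AVAILABLE_VMS = 10
PEAK_HOURS = range(9, 18)            # 09:00-17:59 counts as peak load

def examine_request(vms_needed, already_allocated):
    """Service Request Examiner: reject if resources would be overloaded."""
    return already_allocated + vms_needed <= AVAILABLE_VMS

def price(vms_needed, hours, submit_hour):
    """Pricing mechanism: peak-time submissions are charged a higher rate."""
    rate = 0.15 if submit_hour in PEAK_HOURS else 0.08   # $ per VM-hour
    return vms_needed * hours * rate

allocated = 7
request = {"vms": 2, "hours": 4, "submit_hour": 10}
if examine_request(request["vms"], allocated):
    cost = price(request["vms"], request["hours"], request["submit_hour"])
    print(f"Accepted: charge ${cost:.2f}")
else:
    print("Rejected: insufficient resources")
```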
1.6.4 Quality of Service Factors
• The data center comprises multiple computing servers that provide resources to meet service
demands. In the case of a cloud as a commercial offering to enable crucial business operations
of companies, there are critical QoS parameters to consider in a service request, such as time,
cost, reliability, and trust/security.
• In short, greater importance should be placed on customers, since they pay to access services in
clouds. In addition, the state of the art in cloud computing has no or limited support for dynamic
negotiation of SLAs between participants and mechanisms for automatic allocation of resources
to multiple competing requests. Negotiation mechanisms, such as an alternate offers protocol,
are needed for establishing SLAs.
• Commercial cloud offerings must be able to support customer-driven service management based
on customer profiles and requested service requirements. Commercial clouds define
computational risk management tactics to identify, assess, and manage risks involved in the
execution of applications with regard to service requirements and customer needs.
• The cloud also derives appropriate market-based resource management strategies that
encompass both customer-driven service management and computational risk management to
sustain SLA-oriented resource allocation.
• The system incorporates autonomic resource management models that effectively self-manage
changes in service requirements to satisfy both new service demands and existing service
obligations, and leverage VM technology to dynamically assign resource shares according to service
requirements.
Challenge 1—Service Availability and Data Lock-in Problem
• The management of a cloud service by a single company is often the source of single points of
failure. To achieve HA, one can consider using multiple cloud providers. Even if a company has
multiple data centers located in different geographic regions, it may have common software
infrastructure and accounting systems. Therefore, using multiple cloud providers may provide
more protection from failures.
• Another availability obstacle is distributed denial of service (DDoS) attacks. Criminals threaten
to cut off the incomes of SaaS providers by making their services unavailable. Some utility
computing services offer SaaS providers the opportunity to defend against DDoS attacks by
using quick scale-ups.
• Software stacks have improved interoperability among different cloud platforms, but the APIs
themselves are still proprietary. Thus, customers cannot easily extract their data and programs from
one site to run on another.
• The obvious solution is to standardize the APIs so that a SaaS developer can deploy services and
data across multiple cloud providers. This would guard against the loss of all data due to the
failure of a single company.
• In addition to mitigating data lock-in concerns, standardization of APIs enables a new usage
model in which the same software infrastructure can be used in both public and private clouds.
Such an option could enable “surge computing,” in which the public cloud is used to capture the
extra tasks that cannot be easily run in the data center of a private cloud.
Challenge 2—Data Privacy and Security Concerns
• Current cloud offerings are essentially public (rather than private) networks, exposing the system
to more attacks. Many obstacles can be overcome immediately with well-understood
technologies such as encrypted storage, virtual LANs, and network middleboxes (e.g., firewalls,
packet filters).
• For example, you could encrypt your data before placing it in a cloud. Many nations have laws
requiring SaaS providers to keep customer data and copyrighted material within national
boundaries.
• Traditional network attacks include buffer overflows, DoS attacks, spyware, malware, rootkits,
Trojan horses, and worms. In a cloud environment, newer attacks may result from hypervisor
malware, guest hopping and hijacking, or VM rootkits.
• Another type of attack is the man-in-the-middle attack for VM migrations. In general, passive
attacks steal sensitive data or passwords. Active attacks may manipulate kernel data structures
which will cause major damage to cloud servers.
Challenge 3—Unpredictable Performance and Bottlenecks
• Multiple VMs can share CPUs and main memory in cloud computing, but I/O sharing is
problematic. For example, to run 75 EC2 instances with the STREAM benchmark requires a
mean bandwidth of 1,355 MB/second. However, for each of the 75 EC2 instances to write 1
GB files to the local disk requires a mean disk write bandwidth of only 55 MB/second. This
demonstrates the problem of I/O interference between VMs (a short calculation follows this list).
One solution is to improve I/O architectures and operating systems to efficiently virtualize
interrupts and I/O channels.
• Internet applications continue to become more data-intensive. If we assume applications to be
“pulled apart” across the boundaries of clouds, this may complicate data placement and
transport. Cloud users and providers have to think about the implications of placement and traffic
at every level of the system, if they want to minimize costs. This kind of reasoning can be seen
in Amazon’s development of its new CloudFront service. Therefore, data transfer bottlenecks
must be removed, bottleneck links must be widened, and weak servers should be removed.
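The I/O figures quoted above can be checked with a short calculation; reading the quoted numbers as per-instance means, the derived quantities are simple arithmetic.

```python
# Quick arithmetic on the EC2 I/O figures quoted above (per-instance means).
mem_bw = 1355      # MB/s mean STREAM memory bandwidth per instance
disk_bw = 55       # MB/s mean local-disk write bandwidth per instance
file_size = 1024   # MB (a 1 GB file)

print("Memory bandwidth is about", round(mem_bw / disk_bw), "times the disk-write bandwidth")
print("Writing 1 GB at 55 MB/s takes about", round(file_size / disk_bw, 1), "seconds")
```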
Challenge 4—Distributed Storage and Widespread Software Bugs
• The database is always growing in cloud applications. The opportunity is to create a storage
system that will not only meet this growth, but also combine it with the cloud advantage of scaling
arbitrarily up and down on demand. This demands the design of efficient distributed SANs.
• Data centers must meet programmers’ expectations in terms of scalability, data durability, and
HA. Data consistence checking in SAN-connected data centers is a major challenge in cloud
computing.
• Large-scale distributed bugs cannot be reproduced, so the debugging must occur at a scale in the
production data centers. No data center will provide such a convenience. One solution may be a
reliance on using VMs in cloud computing. The level of virtualization may make it possible to
capture valuable information in ways that are impossible without using VMs. Debugging over
simulators is another approach to attacking the problem, if the simulator is well designed.
Challenge 5—Cloud Scalability, Interoperability, and Standardization
• The pay-as-you-go model applies to storage and network bandwidth; both are counted in terms
of the number of bytes used. Computation is different depending on virtualization level. GAE
automatically scales in response to load increases and decreases; users are charged by the cycles
used.
• AWS charges by the hour for the number of VM instances used, even if the machine is idle. The
opportunity here is to scale quickly up and down in response to load variation, in order to save
money, but without violating SLAs.
• Open Virtualization Format (OVF) describes an open, secure, portable, efficient, and extensible
format for the packaging and distribution of VMs. It also defines a format for distributing
software to be deployed in VMs. This VM format does not rely on the use of a specific host
platform, virtualization platform, or guest operating system. The approach is to address virtual
platform-agnostic packaging with certification and integrity of packaged software. The package
supports virtual appliances that span more than one VM.
• OVF also defines a transport mechanism for VM templates, and can apply to different
virtualization platforms with different levels of virtualization. In terms of cloud standardization,
we suggest the ability for virtual appliances to run on any virtual platform. We also need to enable
VMs to run on heterogeneous hardware platform hypervisors. This requires hypervisor-agnostic
VMs. We also need to realize cross-platform live migration between x86 Intel and AMD
technologies and to support legacy hardware for load balancing. All these issues are wide open for
further research.
Challenge 6—Software Licensing and Reputation Sharing
• Many cloud computing providers originally relied on open source software because the licensing
model for commercial software is not ideal for utility computing.
• The primary opportunity is either for open source to remain popular or simply for commercial
software companies to change their licensing structure to better fit cloud computing. One can
consider using both pay-for-use and bulk-use licensing schemes to widen the business coverage.
• One customer’s bad behavior can affect the reputation of the entire cloud. For instance,
blacklisting of EC2 IP addresses by spam-prevention services may limit smooth VM installation.
• An opportunity would be to create reputation-guarding services similar to the “trusted e-mail”
services currently offered (for a fee) to services hosted on smaller ISPs. Another legal issue
concerns the transfer of legal liability. Cloud providers want legal liability to remain with the
customer, and vice versa. This problem must be solved at the SLA level.