
Cloud Computing Model

• STANDARD CLOUD MODEL:


• Although the concept of cloud computing surfaced several years back, the
implementation of the concept has become possible through different
phases of remodeling and renovation over the years.
• Computing technology has relied on a few standards institutions such as:
• the National Institute of Standards and Technology (NIST) of the United States,
• the International Organization for Standardization (ISO),
• the European Telecommunications Standards Institute (ETSI) and others.
• A number of initiatives have been taken up by these institutions and
others, like the Cloud Security Alliance (CSA), to publish guidelines
aimed at establishing a standard model for cloud computing.
The NIST Model
• The cloud computing definition stated by the NIST:
• “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool
of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be
rapidly provisioned and released with minimal management effort or service provider interaction. This cloud
model is composed of five essential characteristics, three service models, and four deployment models.”

• After analyzing this definition, we have the following salient points:


• Cloud computing is a model and not a technology.
• Cloud computing enables users to access pools of computing resources via a network.
• The resources are shared among users and made available on-demand.
• The prime benefit is ease of use, with very little management burden on the users.
• The third point says that no user holds any resource exclusively unless it is required for computational or
associated tasks. Computing resources are delivered to a user as and when required, and a user need not
own resources exclusively to use them.

• The last point states that the whole setup is basically managed by a third party, referred to as the provider,
and users simply use it without the responsibility of managing it.
Deployment and Service Models
• The NIST model of cloud computing separates cloud computing into two categories.
• One category is based on the operational or deployment pattern of the cloud: it focuses on the access
boundary and location of the cloud establishment. The access boundary defines, to some extent, the purpose
of using the cloud. There are four categories of cloud deployment:
• public cloud,
• private cloud,
• community cloud and
• hybrid cloud.
• Second category is based on service delivery: This model describes the type of computing service that is
offered to users by the service provider. There are three prime categories of service delivery models,
• Infrastructure as a Service (IaaS),
• Platform as a Service (PaaS) and
• Software as a Service (SaaS).
• Apart from cloud deployment and service models, the NIST model
mentions five essential characteristics of cloud computing which are
broad network access, rapid elasticity, measured service, on-demand
self-service and resource pooling. The figure below represents the NIST
cloud model.
• The model is a generic one and not tied to any specific reference
implementation or vendor’s product. In addition to this, the model
defines the actors, standard activities and functions associated with
cloud computing.
The Reference Architecture

• The NIST cloud reference architecture is a logical extension of the
NIST cloud computing definition.
• The reference architecture of NIST does not model the system architecture
of any particular cloud.
• Rather, it intends to simplify the conception of the operational details
of cloud computing.
• The architecture focuses on ‘what’ cloud services need to provide, not
‘how to’ do that.

• The diagram depicts a generic high-level architecture and represents
an actor or role-based model.
• The five major actors of the model are
• cloud consumer,
• cloud provider,
• cloud broker,
• cloud auditor and
• cloud carrier.
• Along with the actors, the model also identifies their activities and
functions. This helps in understanding the responsibilities of the
actors.
Actors and their roles
• The NIST cloud computing model describes five major actors, as shown in the figure above. These actors play key
roles in the cloud computing business. Each actor in the reference model is actually an entity; that is, either a
person or an organization. The entities perform some tasks by participating in transactions or processes.

• Cloud Consumer: According to the definition of NIST, ‘The cloud consumer is the principal stakeholder for
the cloud computing service. A cloud consumer represents a person or an organization that maintains a
business relationship with, and uses the service from a cloud provider.’ The cloud consumer uses cloud service
and may be billed for the service by the provider.

• Cloud Provider: According to NIST, ‘A cloud provider is a person or an organization; it is the entity
responsible for making a service available to interested parties. A Cloud Provider acquires and manages the
computing infrastructure required for providing the services,...’. Here the interested parties who want service
from cloud provider are the consumers.

• Cloud Auditor: The cloud services provided by the cloud provider to the cloud consumer must comply with some
pre-agreed policies and regulations in terms of performance, security etc. The verification of these agreed
conditions can be performed by employing a third-party auditor. The cloud auditor is a party who can conduct
independent assessment of cloud services and report it accordingly.
• Cloud Broker: According to NIST, ‘A cloud broker is an entity that manages the use, performance, and
delivery of cloud services and negotiates the relationships between cloud providers and cloud consumers.’
Consumers can avoid the responsibilities of those complex tasks by requesting services from brokers instead
of consuming services from providers directly.

• Cloud Carrier: Cloud computing services are delivered from cloud provider to cloud consumer either
directly or via some cloud broker. The cloud carrier acts as an agent in this delivery process. Carriers are the
organizations that provide the connectivity and transport of services through their networks.
• The role of each actor can be played by a single person, a group of people, or an organization. The actors work in close
association with each other.
• The figure below exhibits the relations among the different actors of the NIST cloud computing model. The four actors cloud
consumer, cloud provider, cloud auditor and cloud broker interact via the fifth actor, the cloud carrier.
• Cloud carriers are depicted through pipeline symbols in the figure below.
• A cloud consumer may request services directly from a cloud provider. The provider then delivers the requested services to
the consumer. This communication between consumer and provider occurs through the carriers as shown through the
pipeline numbered as 1.
• Instead of contacting a cloud provider directly, a cloud consumer also has the option of requesting services from some
cloud broker.
• The cloud broker usually integrates the required services from providers and delivers them to the consumer.
• The carriers involved in this communication and delivery are shown through the pipelines numbered as 2 and 3.
• In this case, the actual cloud providers remain invisible to the cloud consumers.
• The role of the cloud broker has been elaborated in the figure below. Here, the broker has been linked with two providers. In such a
scenario, the cloud broker may create a new service by combining the services of those two providers.
• For independent assessment of operations and other measures, the auditor needs to interact with the cloud provider, cloud
consumer, and cloud broker too.
• The carriers for these interactions are shown through pipeline paths numbered as 6, 4 and 5 respectively.
Exploring the Cloud Provider
• According to the NIST model, a cloud provider takes care of five types of activities:
• Service deployment: Service deployment decides the deployment model.
• Service orchestration: According to the NIST document, service orchestration refers to the ‘composition
of system components to support the cloud providers’ activities in arrangement, coordination and
management of computing resources in order to provide cloud services to cloud consumers.’
• Service management: takes care of the functions needed for the management and operation of cloud
services. There are three modules of cloud service management: business support,
provisioning/configuration and portability/interoperability.
• Management of security: Security management in the NIST reference architecture refers to
developing a secure and reliable system. It means protecting the system and its information from
unauthorized access.
• Privacy: Privacy management aims to keep personal or sensitive information secret and prevents it from
being revealed.
Service Orchestration
• According to the NIST document, service orchestration refers to the ‘composition of system components to
support the cloud providers’ activities in arrangement, coordination and management of computing resources
in order to provide cloud services to cloud consumers.’
• Service orchestration has three layers, and each layer represents a group of system components that the cloud
provider needs in order to deliver the services.
• At the top, there is the service layer. Here, the cloud provider puts interfaces that enable service consumers to access
various computing services. Thus the access interfaces for the different types of cloud services (SaaS, PaaS and IaaS) are
represented in this layer.

• The middle layer is the resource abstraction and control layer. At this layer, the abstraction of physical resources is
implemented (through software). Access to any hardware resource goes through this layer, and the layer secures
the system by controlling resource allocation and access. It also integrates underlying physical resources and monitors
the resource usage by the consumers. This layer is also responsible for resource pooling and dynamic allocation of
resources.

• The physical resource layer is the lowest layer in the stack that houses all of the physical computing resources.
Hardware resources include computers (with processor and memory components), storage components (hard disks),
network entities (routers, firewalls, switches, network links and interfaces) and other physical computing devices.
Apart from hardware resources, this layer also includes the facilities for the computing infrastructure which includes
power supply, ventilation, cooling, communications and other aspects of a physical plant.
CLOUD DEPLOYMENT MODELS
• Cloud services can be arranged or deployed in a number of ways. The
deployment choice depends on the requirements of the consumer
organization. The deployment model describes the utility of a cloud and
also specifies its access boundary. The model also indicates the relative
location of the cloud with respect to the location of the consumer organization.
The NIST definition mentions four common deployment models:
• Public,
• Private,
• Community and
• Hybrid deployments.
• All clouds fall under one of these four categories.
Public Cloud
• The public cloud deployment model provides the widest range of access to consumers among all cloud
deployments.
• Anyone who subscribes to it gets open access to this cloud facility. The consumer can be either an individual
user or a group of people representing some organization or enterprise.
• Public cloud is also referred to as external cloud since, location-wise, it remains external or off-premises,
and the consumers remotely access the service.
• A public cloud is hosted and managed by some computing vendor who establishes data centers to provide the
service to consumers.
• The consumers under this cloud deployment model are entirely free from the burden of infrastructure
administration and system management related issues.
• But, at the same time, consumers have a low degree of control over the cloud.
• Amazon Web Services, Google Cloud, Microsoft Azure and Salesforce.com are some of the popular public
clouds.
• Public cloud deployment promotes multi-tenancy at its highest degree.

• The same physical computing resource can be shared among multiple unrelated consumers. This provides major
advantages, as it becomes possible for a single cloud vendor to serve a large number of consumers. When a
large number of consumers dispersed around the world share resources from the data centers of a single vendor,
that automatically increases resource utilization rates and decreases the vendor’s cost of service delivery.

• Thus for the consumers, the key benefit of using public cloud is its financial advantage.

• Being large in volume and business, public cloud vendors can afford state-of-the-art technology and skilled people.

• This ensures better quality of service.

• Through this model, consumers can access potentially superior service at a lower cost. Since different
consumers (from different parts of the world) have variable workload demands during the course of a day,
week, month or year, a cloud provider can always support loads efficiently during high demand.
Private Cloud
• The private cloud deployment does not provide open access to all.
• It is mainly for organizational use, and access to a private cloud deployment is restricted for the general public.
• Private cloud is also referred to as internal cloud since it is built to serve the internal purposes of an organization.
• While public clouds are equally useful for both individual users and organizations, private cloud generally
serves the purposes of organizations only.
• For high-security and critical systems, like systems of defense organizations, private cloud is the suggested
approach.
• While a public cloud cannot physically reside at any consumer’s location (physical boundary), private clouds
may reside
• inside the consumer organization’s premises (on-premises): they physically reside within the consumer organization’s own
physical as well as network boundary.
• outside (off-premises) at any neutral location: they reside outside the organization’s own network boundary but remain under
the control or supervision of the consumer organization. Off-premises private clouds may be managed by the consumer
organization itself, or the consumer may outsource the responsibility to some other computing vendor.
Community Cloud
• The community cloud deployment model allows access to a number of organizations or consumers belonging to a
community and the model is built to serve some common and specific purpose. It is for the use of some community of
people or organizations who share common concerns in business functionalities, security requirements etc. This model
allows sharing of infrastructure and resources among multiple consumers belonging to a single community and thus
becomes cheaper compared to a private cloud.

• Community cloud deployment can be on-premises or off-premises. Physically it may reside on any community member’s
premises or it may be located in some external location. Like private cloud, this cloud can also be governed by some
participating organization(s) (of the community) or can be outsourced to some external computing vendor.

• This cloud deployment may be identified as a generalized form of private cloud. While a private cloud is accessible only to
one consumer, one community cloud is used by multiple consumers of a community. Thus, this deployment model supports
multi-tenancy, although not to the same degree as public cloud, which allows multiple tenants unrelated to each other.
Thus, the tenancy model of community cloud falls in between those of private cloud and public cloud.

• The goal of community cloud deployment is to provide the benefits of public cloud, like multi-tenancy, pay-per-use billing
etc. to its consumers, along with an added level of privacy and security like the private cloud. One familiar example of
community cloud is a service launched by the government of a country with the purpose of providing cloud services to
national agencies. The agencies, belonging to a single community (the government), are the consumers in this case.
Hybrid Cloud
• A hybrid cloud is generally created by combining private or community deployment with public cloud
deployment together. This deployment model helps businesses take advantage of private or community
cloud by storing critical applications and data there. At the same time, it provides the cost benefit of keeping
shared data and applications on the public cloud. The figure below demonstrates a hybrid cloud model combining
public cloud with on-premises private cloud.
• In practice, the hybrid cloud can be formed by combining two elements from a set of five different cloud
deployments as on-premises private cloud, off-premises private cloud, on-premises community cloud, off-
premises community cloud and public cloud, where one among the first four deployments is combined with
the last one (public cloud). Physical locations of all of the different cloud deployments have been shown in the figure below.
CHOOSING THE APPROPRIATE DEPLOYMENT MODEL
• The choice of appropriate cloud deployment depends on several factors. It largely
depends on the business needs and also on the size and IT maturity of consumer
organization.
• For general users, any reputed public cloud service is a good option.
• The issue regarding the appropriate choice of cloud deployment mainly arises for
organizations (and communities).
• Any reputed public cloud can be a choice for them, but private (or community)
deployment becomes the likely option when the concern is the privacy of sensitive data or
the importance of some vital business-related data.
• Even in the case of setting up an in-house cloud, the organization (or community) must consider
the capability of its in-house technical team; otherwise it has the choice of
outsourcing the (private or community cloud) service.
• While outsourcing, the expertise or reputation of the service provider has to be verified.
Budget is another important issue.
• The cost of migration into cloud and the total cost of ownership have to be
considered before selecting a deployment.
• Generally, for a critical application that has security issues, a private or
hybrid cloud model may suit well.
• On the other hand, for a general application, the public cloud may serve the
purpose better.
• It is also critical to understand the business goals of the consumer organization
based on many functional as well as non-functional requirements.
• For instance, the cost of computing and the consumer’s control over the
computing environment change directly with the choice of deployment
model. The following sections discuss two such aspects.
Economies of Scale
• In the study of Economics, ‘economies of scale’ means the cost
advantages that enterprises obtain due to the size or volume of their
businesses.
• Cloud economy mainly depends on the number of consumers of a
cloud deployment along with the level of permissible multitenancy of
resources.
• Among the different cloud deployments, since public cloud fully supports
multi-tenancy and is generally consumed by a large number of
consumers, its vendors can offer services at cheaper rates.
• At the other end lies the private cloud, which does not
support multi-tenancy and is used by a single enterprise or organization.
• Thus, it does not provide the cost benefit of economies of scale
like public clouds do. The community cloud and hybrid cloud
deployments stay in between these two in terms of economies of scale.
Consumer’s Authority
• Consumer’s authority or control over a cloud computing environment varies with choice of cloud deployment.
• Consumers can have maximum control over a private cloud deployment.
• In case of private cloud, a single consumer or enterprise remains the owner of the whole thing.
• In off-premises private cloud, although the management of the cloud is outsourced to some third-party
vendor, consumer holds the ultimate control over the cloud environment.
• Consumers’ control over cloud deployment is minimum in public cloud environment.
• There, the service provider is an independent body who holds authority over its cloud and hence, consumers
hold very little control over the environment.
• Consumers can only use the service and control their own part, which has limited functionality.
SERVICE DELIVERY MODELS
• Three categories of computing services that people have consumed since the days of traditional computing are:
• Infrastructure Service
• Platform Service
• Software Application Service

• Cloud computing talks about delivering these facilities to consumers as computing services through a network/internetwork.
The benefit for the consumers is that they can avail these facilities over the Internet anytime, as much as required, from
their own locations in a cost-effective manner. They only need a simple and suitable access device (like a PC, laptop,
tablet or mobile) to access these services. Using these simple devices, anyone can access any kind of computing
infrastructure, platform or software application on a payment-as-per-actual-usage basis.

• Cloud computing offers computing infrastructure, platform and application delivered ‘as-a service’. Those services are
considered as primary cloud computing services and are referred to as:
• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS)
• These services are generally pronounced as ‘i-a-a-s’, ‘pass’, and ‘saas’ respectively. They are
the driving forces behind the growth of cloud computing. Clubbed together, these three service
models are commonly referred to as the SPI (Software-Platform-Infrastructure) model.

• The service layer is the topmost layer of cloud service orchestration, above the resource abstraction and
control layer. The service layer includes three major cloud services: SaaS, PaaS and IaaS. The
PaaS layer resides over IaaS, and the SaaS layer resides over PaaS. In this layered architecture, the
service of a higher layer is built upon the capabilities offered by the underlying layers.
Infrastructure-as-a-Service
• Cloud computing allows access to computing resources in a virtualized environment popularly referred to as ‘the
Cloud’.
• Infrastructure-as-a-Service delivers virtualized hardware (not physical, but simulated in software) resources to
consumers, known as virtual resources or virtual components.
• It provides the facility of remotely using virtual processor, memory, storage and network resources to the
consumers.
• These virtual resources can be used just like physical (hardware) resources to build any computing setup (like a
virtual machine or virtual network). For this reason, IaaS is also referred to as Hardware-as-a-Service (HaaS).
• IaaS is the bottommost layer of cloud computing service model.
• It is a computing solution where the complexities and expenses for managing the underlying hardware are
outsourced to some cloud service providers.
• Consumers can access these virtual hardware resources on-demand and any time from any location over the
network.
• They can build computers (virtual computers) using those virtual (or virtualized) hardware components and
can even install operating systems and other software over that system.
• Major computing vendors like Amazon, Google, GoGrid and RackSpace provide IaaS facilities.
• All of these vendors offer virtualized hardware resources of different types.
• Apart from offering resource components separately for building any computing setup, IaaS
vendors generally offer custom-made virtual machines (made of those virtual components) to
consumers.
• For example, Amazon EC2 and Google Compute Engine are popular server environments.
• Consumers can install OS and start working over these servers.
• Other than virtual machine, the storage is a very common IaaS offering.
• Amazon S3 is a popular storage service available as IaaS.
Platform-as-a-Service
• In computing, platform means the underlying system on which software applications can be installed (and
also developed).
• A computing platform comprises hardware resources, operating system, middleware (if required) and runtime
libraries.
• Application programs are also installed over this platform.
• Application development and deployment in traditional computing require the users’ participation in
managing hardware, operating system, middleware, web servers and other components.
• For instance, users must install appropriate framework (like J2EE, .NET) before working in any application
platform.
• The PaaS facility, on the other hand, relieves users from all these burdens and delivers a ready-made platform to
consumers via an internetwork/the Internet.
• The PaaS component stack provides an application (development and deployment) platform over the IaaS
component stack.
• A PaaS provider not only delivers fully-managed application development and deployment environment but
also takes care of the lower level (infrastructure level) resource management and provisioning.
• PaaS comes with IaaS capability integrated into it.
• Thus, PaaS is created by adding additional layers of software over IaaS.

• With the use of PaaS, collaborative application development becomes easier where multiple users can work
from different geographical locations.

• PaaS also reduces the total cost of ownership (TCO) as computing platform becomes available on rent basis.

• There are many PaaS offerings available in market. Google App Engine, Microsoft Azure Platform, GoGrid
Cloud Center, Force.com are very popular among them.

• Open-source PaaS offerings are also available in the market. Cloud Foundry, developed
by VMware, is one such offering.

• One problem with the PaaS model is that it ties the developed applications to the platform. This causes a
portability problem. For instance, an application developed on Google App Engine using any programming
language (supported by Google PaaS) uses Google’s APIs, and hence, it cannot run over the PaaS facility of
other vendors.
PaaS–IaaS Integration
• The PaaS layer must integrate with the underlying IaaS for seamless access to hardware resources.
• Such integration is carried out using the application programming interfaces (APIs) that an IaaS layer provides to
the PaaS developers.
• APIs are sets of functions and protocols which can be used to build applications.
• IaaS developers build and offer these APIs along with their respective services so that a PaaS facility can be
developed above it.
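For illustration, the sketch below shows how a PaaS-layer component might consume an IaaS API to provision a virtual machine on which its runtime stack would later be installed. It is a minimal sketch assuming the AWS SDK for Python (boto3) and EC2 as the IaaS layer; the AMI ID, region and instance type are placeholders, not values from the text.

```python
# A minimal sketch: a PaaS-layer component provisioning a VM through
# an IaaS API (AWS EC2 via boto3). The AMI ID and instance type are
# placeholders; credentials are assumed to be configured already.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The PaaS layer asks the IaaS layer for a fresh virtual machine;
# it would later install its runtime and middleware on this VM.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```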
Software-as-a-Service
• Software-as-a-Service (SaaS) is a way of delivering an application as a service over the network/Internet that users can
directly consume without the burden of installing or configuring the application.
• In traditional computing, consumers had to pay not only the software licensing fee but also spend a large portion of their
budget in setting up the infrastructure and platform over which the application would run.
• SaaS eliminates this problem and promises easier as well as a cheaper way of using application.
• SaaS is hosted by SaaS vendors and delivered to the consumers over network/Internet.
• Unlike traditional packaged applications that users install on their own computing setup, SaaS applications are run by
the vendors in their data centers.
• Customers do not need to buy software licenses or any additional computing resources to support the application and can
access applications against some rental fee on usage basis.
• SaaS applications are sometimes referred as web-based software, or hosted software.
• SaaS is built by adding layers over PaaS component stack
• It is the facility of using applications administered and delivered by service provider over a cloud infrastructure.
• In the SaaS model, everything is managed by the vendor, including application upgrades and updates; even the data that the
application acts upon is managed (stored in databases or files etc.) by the SaaS vendor.
• Users can access the applications through a thin client interface (usually a browser) from any location.
• There are also many popular SaaS offerings for general users in the market today, such as Google Apps and Microsoft
Office 365.
SaaS–PaaS Integration
• The integration between PaaS and SaaS is supported by runtimes, the topmost component
of the PaaS stack.
• Runtimes are offered by PaaS to support different execution environments.
• These environments may include Java, .NET, Python, Ruby etc. PaaS consumers can
directly run applications over these runtime environments without any system
management stress.
Architectural Design Challenges
Challenge 1:Service Availability
• Service Availability in Cloud might be affected because of
• Single point of failure
• Depending on a single service provider might result in failure.
• In the case of a single service provider, even if the company has multiple data centers located in different
geographic regions, it may have a common software infrastructure and accounting system.
• Solution:
• Multiple cloud providers may provide more protection from failures and they provide High Availability
(HA)
• Multiple cloud Providers will rescue the loss of all data.
• Distributed Denial of Service (DDoS) attacks
• Cyber criminals attack target websites and online services and make those services unavailable to users.
• A DDoS attack tries to overwhelm the service with more traffic than the
server or network can accommodate.
• Solution:
• Some SaaS providers provide the opportunity to defend against DDoS attacks by using quick scale-ups.
• Customers cannot easily extract their data and programs from one site to run on another.
• Solution:
• Have standardization among service providers so that customers can deploy (install) services and data
across multiple cloud providers.

• Data Lock-in
• Data lock-in is a situation in which a customer using the service of a provider cannot move to another
service provider because the technologies used by the provider are incompatible with those of other providers.
• This makes a customer dependent on a vendor for services and unable to use the services of
another vendor.
• Solution:
• Have standardization (in technologies) among service providers so that customers can easily move from
one service provider to another.
Challenge 2: Data Privacy and Security Concerns

• Cloud services are prone to attacks because they are accessed through the Internet.
• Security is provided by
• storing encrypted data in the cloud,
• firewalls and filters.
• Cloud environment attacks include
• Guest hopping
• Hijacking
• VM rootkits.
• Guest Hopping: Virtual machine hyper jumping (VM jumping) is an attack method that exploits a
hypervisor’s weakness, allowing a virtual machine (VM) to be accessed from another.
• Hijacking: Hijacking is a type of network security attack in which the attacker takes control of a
communication channel.
• VM Rootkit: is a collection of malicious (harmful) computer software, designed to enable access to a
computer that is not otherwise allowed.
• A man-in-the-middle (MITM) attack is a form of eavesdropping where communication between two
users is monitored and modified by an unauthorized party.
• Man-in-the-middle attack may take place during VM migrations [virtual machine (VM) migration - VM is
moved from one physical host to another host].
• Passive attacks steal sensitive data or passwords.
• Active attacks may manipulate (control) kernel data structures which will cause major damage to cloud
servers.
Challenge 3: Unpredictable Performance and
Bottlenecks
• Multiple VMs can share CPUs and main memory in cloud computing, but I/O sharing is
problematic.
• Internet applications continue to become more data-intensive (handling huge amounts of
data).
• Handling huge amounts of data (data-intensive workloads) is a bottleneck in the cloud environment.
• Weak servers that do not handle data transfers properly must be removed from the cloud.
Challenge 4: Distributed Storage and Widespread Software
Bugs
• The database is always growing in cloud applications.
• There is a need to create a storage system that meets this growth.
• This demands the design of efficient distributed SANs (Storage Area Networks).
• Data centers must meet
• Scalability
• Data durability
• HA (High Availability)
• Data consistency
• A bug refers to an error in software. Widespread software bugs must be debugged in the data centers.
Challenge 5: Cloud Scalability, Interoperability and Standardization
• Scalability

• Cloud resources are scalable, but cost increases when storage and network bandwidth are scaled up (increased).

• Interoperability

• In the context of cloud computing, interoperability should be viewed as the capability of public cloud
services, private cloud services, and other diverse systems within the enterprise to understand each
other’s application and service interfaces, configuration, forms of authentication and authorization, data
formats, etc. in order to work with each other.
• For interoperability, there are many challenges associated with cloud computing. In general, the
interfaces and APIs of cloud services are not standardized and different providers use different APIs for
what are otherwise comparable cloud services.
• Standardization
• Cloud standardization should give a virtual machine the ability to run on any virtualization
platform.
• Open Virtualization Format (OVF). A packaging standard developed by the
Distributed Management Task Force (DMTF) that is designed to address the
portability and deployment of virtual machines.
• Open Virtualization Format (OVF) describes an open, secure, portable, efficient, and
extensible format for the packaging and distribution of VMs.
• OVF defines a transport mechanism for VMs that can be applied to different
virtualization platforms.
Challenge 6: Software Licensing and Reputation Sharing

• Cloud providers can use both pay-for-use and bulk-use licensing schemes to
widen the business coverage.
• Cloud providers must create reputation-guarding services similar to the “trusted e-
mail” services
• Cloud providers want legal liability to remain with the customer, and vice versa.
Cloud storage
• Cloud Storage is a mode of computer data storage in which digital data is stored on servers in off-
site locations. The servers are maintained by a third-party provider who is responsible for hosting,
managing, and securing data stored on its infrastructure. The provider ensures that data on its
servers is always accessible via public or private internet connections.

• Cloud Storage enables organizations to store, access, and maintain data so that they do not need to
own and operate their own data centers, moving expenses from a capital expenditure model to an
operational one. Cloud Storage is scalable, allowing organizations to expand or reduce their data
footprint depending on need.
How does cloud storage work?
• Like on-premise storage networks, cloud storage uses servers to save data; however, the data is sent to servers
at an off-site location. Most of the servers you use are virtual machines hosted on a physical server. As your
storage needs increase, the provider creates new virtual servers to meet demand.

• Typically, you connect to the storage cloud either through the internet or a dedicated private connection, using
a web portal, website, or a mobile app. The server with which you connect forwards your data to a pool of
servers located in one or more data centers, depending on the size of the cloud provider’s operation.

• As part of the service, providers typically store the same data on multiple machines for redundancy. This way,
if a server is taken down for maintenance or suffers an outage, you can still access your data.
Cloud storage is available in private, public and hybrid
clouds.
• Public storage clouds: In this model, you connect over the internet to a storage cloud that’s maintained by a
cloud provider and used by other companies. Providers typically make services accessible from just about any
device, including smartphones and desktops and let you scale up and down as needed.

• Private cloud storage: Private cloud storage setups typically replicate the cloud model, but they reside within
your network, leveraging a physical server to create instances of virtual servers to increase capacity. You can
choose to take full control of an on-premise private cloud or engage a cloud storage provider to build a
dedicated private cloud that you can access with a private connection. Organizations that might prefer private
cloud storage include banks or retail companies due to the private nature of the data they process and store.

• Hybrid cloud storage: This model combines elements of private and public clouds, giving organizations a
choice of which data to store in which cloud. For instance, highly regulated data subject to strict archiving and
replication requirements is usually more suited to a private cloud environment, whereas less sensitive data
(such as email that doesn’t contain business secrets) can be stored in the public cloud. Some organizations use
hybrid clouds to supplement their internal storage networks with public cloud storage.
What are the types of cloud storage?
Object storage
• Organizations have to store a massive and growing amount of unstructured data, such as
photos, videos, machine learning (ML), sensor data, audio files, and other types of web
content, and finding scalable, efficient, and affordable ways to store them can be a
challenge. Object storage is a data storage architecture for large stores of unstructured
data. Objects store data in the format it arrives in and make it possible to customize
metadata in ways that make the data easier to access and analyze. Instead of being
organized in files or folder hierarchies, objects are kept in secure buckets that deliver
virtually unlimited scalability. It is also less costly to store large data volumes.

• Applications developed in the cloud often take advantage of the vast scalability and
metadata characteristics of object storage. Object storage solutions are ideal for building
modern applications from scratch that require scale and flexibility, and can also be used to
import existing data stores for analytics, backup, or archive.
• File storage
• File-based storage or file storage is widely used among applications and stores data in a
hierarchical folder and file format. This type of storage is often served by a network-
attached storage (NAS) server, with common file-level protocols being Server Message Block
(SMB), used in Windows instances, and Network File System (NFS), found in Linux.
• Block storage
• Enterprise applications like databases or enterprise resource planning (ERP) systems often
require dedicated, low-latency storage for each host. This is analogous to direct-attached
storage (DAS) or a storage area network (SAN). In this case, you can use a cloud storage
service that stores data in the form of blocks. Each block has its own unique identifier for
quick storage and retrieval.
Pros of cloud storage
• Off-site management: Your cloud provider assumes responsibility for maintaining and protecting the stored
data. This frees your staff from tasks associated with storage, such as procurement, installation,
administration, and maintenance. As such, your staff can focus on other priorities.

• Quick implementation: Using a cloud service accelerates the process of setting up and adding to your
storage capabilities. With cloud storage, you can provision the service and start using it within hours or days,
depending on how much capacity is involved.

• Cost-effective: As mentioned, you pay for the capacity you use. This allows your organization to treat cloud
storage costs as an ongoing operating expense instead of a capital expense with the associated upfront
investments and tax implications.

• Scalability: Growth constraints are one of the most severe limitations of on-premise storage. With cloud
storage, you can scale up as much as you need. Capacity is virtually unlimited.

• Business continuity: Storing data offsite supports business continuity in the event that a natural disaster or
terrorist attack cuts access to your premises.
Cons of cloud storage
• Security: Security concerns are common with cloud-based services. Cloud storage providers try to secure
their infrastructure with up-to-date technologies and practices, but occasional breaches have occurred,
creating discomfort among users.

• Administrative control: Being able to view your data, access it, and move it at will is another common
concern with cloud resources. Offloading maintenance and management to a third party offers advantages but
also can limit your control over your data.

• Latency: Delays in data transmission to and from the cloud can occur as a result of traffic congestion,
especially when you use shared public internet connections. However, companies can minimize latency by
increasing connection bandwidth.

• Regulatory compliance: Certain industries, such as healthcare and finance, have to comply with strict data
privacy and archival regulations, which may prevent companies from using cloud storage for certain types of
files, such as medical and investment records. If you can, choose a cloud storage provider that supports
compliance with any industry regulations impacting your business.
Why is cloud storage important?

• Cloud storage delivers cost-effective, scalable storage.


• You no longer need to worry about running out of capacity, maintaining storage area
networks (SANs), replacing failed devices, adding infrastructure to scale up with demand,
or operating underutilized hardware when demand decreases.
• Cloud storage is elastic, meaning you scale up and down with demand and pay only for
what you use.
• It is a way for organizations to save data securely online so that it can be accessed
anytime from any location by those with permission.
• Cost effectiveness: With cloud storage, there is no hardware to purchase, no storage to provision,
and no extra capital being used for business spikes. You can add or remove storage capacity on
demand, quickly change performance and retention characteristics, and only pay for storage that
you actually use.

• Increased agility: With cloud storage, resources are only a click away. You reduce the time to
make those resources available to your organization from weeks to just minutes. This results in a
dramatic increase in agility for your organization

• Faster deployment: When development teams are ready to begin, infrastructure should never slow
them down. Cloud storage services allow IT to quickly deliver the exact amount of storage needed,
whenever and wherever it's needed.
• Efficient data management: By using cloud storage lifecycle management policies, you
can perform powerful information management tasks including automated tiering or
locking down data in support of compliance requirements.

• Virtually unlimited scalability: Cloud storage delivers virtually unlimited storage
capacity, allowing you to scale up as much and as quickly as you need. This removes the
constraints of on-premises storage capacity.

• Business continuity: Cloud storage providers store your data in highly secure data
centers, protecting your data and ensuring business continuity. Cloud storage services are
designed to handle concurrent device failure by quickly detecting and repairing any lost
redundancy.
Storage as a Service
• Storage as a Service or STaaS is cloud storage that you rent from a Cloud Service Provider (CSP) and that
provides basic ways to access that storage.
• Enterprises, small and medium businesses, home offices, and individuals can use the cloud for multimedia
storage, data repositories, data backup and recovery, and disaster recovery.
• There are also higher-tier managed services that build on top of STaaS, such as Database as a Service, in
which you can write data into tables that are hosted through CSP resources.
• The key benefit to STaaS is that you are offloading the cost and effort to manage data storage infrastructure
and technology to a third-party CSP.
• This makes it much more effective to scale up storage resources without investing in new hardware or taking
on configuration costs.
• You can also respond to changing market conditions faster.
• With just a few clicks you can rent terabytes or more of storage, and you don’t have to spin up new storage
appliances on your own.
How Does Storage as a Service Work?
• Some STaaS offerings can be rented based on quantity, others are rented based on a service level agreement
(SLA).
• SLAs help establish and reinforce conditions for using data storage, such as uptime and read/write access
speed.
• The storage you choose will typically depend on how often you intend to access the data.
• Cold data storage is data that you leave alone or access infrequently, whereas warm or hot data is accessed
regularly and repeatedly.
• Pricing by quantity tends to be more cost efficient but isn’t intended to support fast and frequent access for
day-to-day business productivity.
• For hot or warm data, an SLA will be crucial to leveraging data storage in support of current projects or
ongoing processes.
• Many CSPs make it easy to onboard and upload data into their STaaS infrastructure for little to no cost at all.
• However, there may be hidden fees and it can be extremely costly to migrate or transfer your data to a
different cloud platform.
Examples
• Google Docs allows users to upload documents, spreadsheets, and presentations to Google’s data servers.
Those files can then be edited using a Google application.
• Web email providers like Gmail, Hotmail, and Yahoo! Mail store email messages on their own servers. Users
can access their email from computers and other devices connected to the Internet.
• Flickr and Picasa host millions of digital photographs. Users can create their own online photo albums.
• YouTube hosts millions of user uploaded video files.
• Hostmonster and GoDaddy store files and data for many client web sites.
• Facebook and MySpace are social networking sites and allow members to post pictures and other content.
That content is stored on the company’s servers.
• MediaMax and Strongspace offer storage space for any kind of digital data.
Cloud storage provider
• There are hundreds of cloud storage providers in the market today.
• This is simply a listing of what some of the big players have to offer, and anyone can use it as a
starting guide to determine whether their services match the user’s needs.
• Amazon and Nirvanix are the current industry top dogs, but many others are in the field, including
some well known names.
• Google offers a cloud storage solution called GDrive.
• EMC is readying a storage solution and
• IBM already has a number of cloud storage options called Blue Cloud.
S3

• The best-known cloud storage service is Amazon’s Simple Storage Service (S3), which was
launched in 2006.
• Amazon S3 is designed to make web scale computing easier for developers.
• Amazon S3 provides a simple web services interface that can be used to store and
retrieve any amount of data, at any time, from anywhere on the Web.
• It gives any developer access to the same highly scalable data storage infrastructure that
Amazon uses to run its own global network of web sites.
• The service aims to maximize benefits of scale and to pass those benefits on to
developers.

• Amazon S3 is intentionally built with a minimal feature set that includes the following
functionality:
• Write, read, and delete objects containing from 1 byte to 5 gigabytes of data each.
• The number of objects that can be stored is unlimited.
• Each object is stored and retrieved via a unique developer-assigned key.
• Objects can be made private or public and rights can be assigned to specific users.
• Uses standards-based REST and SOAP interfaces designed to work with any Internet
development toolkit.
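As an illustration of this minimal feature set, the sketch below exercises the write, read and delete operations through the AWS SDK for Python (boto3), which wraps the REST interface. The bucket name and key are placeholders, and credentials are assumed to be configured already.

```python
# A minimal sketch of S3's write/read/delete object operations
# using boto3. Bucket name and key are placeholders.
import boto3

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "notes/hello.txt"

# Write: store an object under a developer-assigned key.
s3.put_object(Bucket=bucket, Key=key, Body=b"Hello, S3!")

# Read: retrieve the object by the same key.
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
print(body)

# Delete: remove the object.
s3.delete_object(Bucket=bucket, Key=key)
```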
Design Requirements :
Amazon built S3 to fulfill the following design requirements:
• Scalable: Amazon S3 can scale in terms of storage, request rate and users to support an
unlimited number of web-scale applications.
• Reliable: Store data durably with 99.99 percent availability. Amazon says it does not
allow any downtime.
• Fast: Amazon S3 was designed to be fast enough to support high-performance
applications. Server-side latency must be insignificant relative to Internet latency.
• Inexpensive: Amazon S3 is built from inexpensive commodity hardware components.
• Simple: Building highly scalable, reliable, fast and inexpensive storage is difficult.
Design Principles: Amazon used the following principles of distributed
system design to meet Amazon S3 requirements:
• Decentralization: It uses fully decentralized techniques to remove scaling bottlenecks and single points of
failure.
• Autonomy: The system is designed such that individual components can make decisions based on local
information.
• Local responsibility: Each individual component is responsible for achieving its consistency. This is never the
burden of its peers.
• Controlled concurrency: Operations are designed such that no or limited concurrency control is required.
• Failure toleration: The system considers the failure of components to be a normal mode of operation and
continues operation with no or minimal interruption.
• Controlled parallelism: Abstractions used in the system are of such granularity that parallelism can be used to
improve performance and robustness of recovery or the introduction of new nodes.
• Symmetry: Nodes in the system are identical in terms of functionality, and require no or minimal node
specific configuration to function.
• Simplicity: The system should be made as simple as possible, but no simpler.
How Amazon S3 works?
• Bucket
• Data in S3 is stored in containers called buckets. Each bucket has its own set of policies and
configurations. This enables users to have more control over their data. Bucket names must be globally unique. A bucket can
be thought of as a parent folder for data. There is a limit of 100 buckets per AWS account, but it can be
increased on request to AWS Support.
• Objects
• The object is the fundamental entity type stored in AWS S3. You can store as many objects as you want. The maximum
size of an object in AWS S3 is 5 TB. An object consists of the following:
• Key.
• Version ID.
• Value.
• Metadata.
• Sub resources.
• Access control information.
• Tags.
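To make buckets and objects concrete, here is a hedged boto3 sketch that creates a bucket and uploads an object carrying user-defined metadata; the bucket name is a placeholder and must be globally unique, so this exact call may fail if the name is already taken.

```python
# A hedged sketch: creating a bucket and reading back an object's
# metadata with boto3. All names are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "my-unique-example-bucket-0001"  # bucket names are globally unique

# Create the bucket. Outside us-east-1, a CreateBucketConfiguration
# with a LocationConstraint must also be supplied.
s3.create_bucket(Bucket=bucket)

# Upload an object with user-defined metadata attached.
s3.put_object(
    Bucket=bucket,
    Key="reports/q1.csv",
    Body=b"a,b,c\n1,2,3\n",
    Metadata={"department": "finance"},
)

# HEAD the object: read its metadata without fetching the body.
head = s3.head_object(Bucket=bucket, Key="reports/q1.csv")
print(head["Metadata"], head["ContentLength"])
```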
• S3 Versioning
• Versioning means always keeping a record of previously uploaded files in S3. Points to note: versioning is not enabled by default. Once
enabled, it is enabled for all objects in a bucket. Versioning keeps all the copies of your file, so it adds cost for storing multiple
copies of your data. For example, 10 copies of a 1 GB file will have you charged for using 10 GB of S3 space. Versioning is
helpful to prevent unintended overwrites and deletions. Objects with the same key can be stored in a bucket if versioning is enabled
(since they have unique version IDs).
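A minimal boto3 sketch of versioning, assuming a placeholder bucket: enable versioning, upload the same key twice, then list the stored versions (each of which is billed).

```python
# Enable versioning and observe multiple versions of one key.
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder

# Versioning is off by default; enabling it applies to the whole bucket.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Two PUTs to the same key now create two versions, not an overwrite.
s3.put_object(Bucket=bucket, Key="doc.txt", Body=b"version 1")
s3.put_object(Bucket=bucket, Key="doc.txt", Body=b"version 2")

# List every stored version of the key.
versions = s3.list_object_versions(Bucket=bucket, Prefix="doc.txt")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["IsLatest"])
```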
• Bucket policy
• A bucket policy is a document for controlling access to S3 buckets from within your AWS account; it controls which services and users have what
kind of access to your S3 bucket. Each bucket has its own bucket policy.
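A hedged sketch of attaching a bucket policy with boto3: the policy below grants read-only access to a hypothetical IAM user; the account ID, user name and bucket name are placeholders.

```python
# Attach a read-only bucket policy (all identifiers are placeholders).
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Hypothetical IAM user within the account.
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/example-user"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}

s3 = boto3.client("s3")
# Each bucket carries its own policy document.
s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))
```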
• Access control lists (ACLs)
• A document for verifying access to S3 buckets from outside your AWS account. An ACL is specific to each bucket. You can utilize
S3 Object Ownership, an Amazon S3 bucket-level feature, to manage who owns the objects you upload to your bucket and to enable
or disable ACLs.
• Lifecycle Rules
• This is a cost-saving practice that can move your files to AWS Glacier (The AWS Data Archive Service) or to some other S3 storage
class for cheaper storage of old data or completely delete the data after the specified time.
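A hedged boto3 sketch of such a rule: objects under a placeholder prefix move to Glacier after 90 days and are deleted after a year (the bucket name, prefix and periods are illustrative, not prescriptive).

```python
# A lifecycle rule: transition old objects to Glacier, then expire them.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                # Move objects to cheaper archival storage after 90 days...
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                # ...and delete them entirely after a year.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```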
• Key
• The key, in S3, is a unique identifier for an object in a bucket. For example, if in a bucket ‘ABC’ your GFG.java file is stored at
javaPrograms/GFG.java, then ‘javaPrograms/GFG.java’ is the object key for GFG.java.
• Null Object
• Version ID for objects in a bucket where versioning is suspended is null. Such objects may be referred to as null objects.
Features of Amazon S3

• Durability: AWS claims Amazon S3 to have 99.999999999% durability (11 9’s). This means the
probability of losing an object stored on S3 in a given year is about one in 100 billion.
• Availability: AWS ensures that the up-time of AWS S3 is 99.99% for standard access.
• Note that availability is related to being able to access data and durability is related to losing data
altogether.
• Server-Side-Encryption (SSE): AWS S3 supports three types of SSE models:
• SSE-S3: AWS S3 manages encryption keys.
• SSE-C: The customer manages encryption keys.
• SSE-KMS: The AWS Key Management Service (KMS) manages the encryption keys.
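For illustration, a minimal boto3 sketch requesting SSE-S3 and SSE-KMS on upload (bucket and keys are placeholders); SSE-C would instead require supplying your own key through the SSECustomerAlgorithm and SSECustomerKey parameters.

```python
# Request server-side encryption per object at upload time.
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder

# SSE-S3: S3 manages the encryption keys (AES-256).
s3.put_object(Bucket=bucket, Key="a.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: keys are managed through the AWS Key Management Service.
s3.put_object(Bucket=bucket, Key="b.txt", Body=b"data",
              ServerSideEncryption="aws:kms")
```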
• File Size support: AWS S3 can hold files of size ranging from 0 bytes to 5 terabytes. A 5TB limit on file size
should not be a blocker for most of the applications in the world.
• Infinite storage space: Theoretically AWS S3 is supposed to have infinite storage space. This makes S3
infinitely scalable for all kinds of use cases.
• Pay as you use: The users are charged according to the S3 storage they hold.
What are the S3 Storage Classes?

• AWS S3 provides multiple storage types that offer different performance and features and different cost
structures.
• Standard: Suitable for frequently accessed data that needs to be highly available and durable.
• Standard Infrequent Access (Standard IA): This is a cheaper data-storage class and as the name suggests,
this class is best suited for storing infrequently accessed data like log files or data archives. Note that there
may be a per GB data retrieval fee associated with the Standard IA class.
• Intelligent Tiering: This service class classifies your files automatically into frequently accessed and
infrequently accessed and stores the infrequently accessed data in infrequent access storage to save costs. This
is useful for unpredictable data access to an S3 bucket.
• One Zone Infrequent Access (One Zone IA): All the files on your S3 have their copies stored in a minimum
of 3 Availability Zones. One Zone IA stores this data in a single availability zone. It is only recommended to
use this storage class for infrequently accessed, non-essential data. There may be a per GB cost for data
retrieval.
• Reduced Redundancy Storage (RRS): All the other S3 classes ensure a durability of 99.999999999%.
RRS only ensures 99.99% durability. AWS no longer recommends RRS due to its lower durability. However, it
can be used to store non-essential data.
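A minimal boto3 sketch of selecting a storage class per object at upload time (bucket and keys are placeholders):

```python
# Choose a storage class per object when uploading.
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder

# Frequently accessed data: default Standard class.
s3.put_object(Bucket=bucket, Key="hot/data.bin", Body=b"...")

# Infrequently accessed data: cheaper storage, per-GB retrieval fee.
s3.put_object(Bucket=bucket, Key="cold/archive.bin", Body=b"...",
              StorageClass="STANDARD_IA")

# Unpredictable access patterns: let S3 tier the object automatically.
s3.put_object(Bucket=bucket, Key="misc/blob.bin", Body=b"...",
              StorageClass="INTELLIGENT_TIERING")
```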
Advantages of S3 Bucket

• Scalability: Amazon S3 can be scaled horizontally, which lets it handle large amounts of data. It can be
scaled automatically without human intervention.

• High availability: S3 is famous for its high availability; you can access your data whenever you
require it, from any region. It offers a Service Level Agreement (SLA) guaranteeing 99.9% uptime.

• Data Lifecycle Management: You can manage the data which is stored in the S3 bucket by automating the
transition and expiration of objects based on predefined rules. You can automatically move the data to the
Standard-IA or Glacier, after a specified period.

• Integration with Other AWS Services: You can integrate the S3 bucket service with different AWS
services; for example, with AWS Lambda, a function can be triggered when
files or objects are added to the S3 bucket.
Accessing Amazon S3

• You can work with an AWS S3 bucket by using any one of the following methods
• AWS Management Console
• You can access the AWS S3 bucket using the AWS management console which is a
web-based user interface. You need to create an AWS account and login to the console
and from there you can choose the S3 bucket option.
• AWS Command Line Interface
• You can configure the S3 bucket using the AWS CLI, with scripts that
perform AWS S3 tasks.
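• AWS SDKs
• Beyond the console and CLI, the same tasks can be scripted through AWS SDKs. Below is a minimal sketch in Python (boto3) of one such script; it simply lists the buckets in an account (credentials are assumed to be configured):

```python
# List all buckets in the account via the AWS SDK for Python (boto3).
import boto3

s3 = boto3.client("s3")
for b in s3.list_buckets()["Buckets"]:
    print(b["Name"], b["CreationDate"])
```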
