Unit III (Cloud Computing)

How to select the right cloud service provider

When it comes to selecting a cloud provider, the requirements you have and evaluation criteria
you use will be unique to your organization. However, there are some common areas of focus
during any service provider assessment.

We have grouped these into 8 sections to help you effectively compare suppliers and select a
provider that delivers the value and benefits your organization expects from the cloud.

1. Certifications & Standards
2. Technologies & Service Roadmap
3. Data Security, Data Governance and Business Policies
4. Service Dependencies & Partnerships
5. Contracts, Commercials & SLAs
6. Reliability & Performance
7. Migration Support, Vendor Lock-in & Exit Planning
8. Business Health & Company Profile

1. Certifications & Standards: Providers that comply with recognised standards and quality
frameworks demonstrate adherence to industry best practice. While standards may not
determine which service provider you choose, they can be very helpful in shortlisting potential
suppliers. For instance, if security is a priority, look for suppliers accredited with certifications
like ISO 27001 or the UK government’s Cyber Essentials Scheme.

[Image: Extract from CIF E-learning module 8 – Cloud Service Provider Selection]

There are multiple standards and certifications available; the extract above illustrates some of the
more common organisations that provide standards, certifications and good-practice guidance.

More generally, look out for structured processes, effective data management, good knowledge
management and service status visibility. Also understand how the provider plans to resource
and support continuous adherence to these standards.

2. Technologies & Service Roadmap: Technologies: Make sure the provider’s platform and
preferred technologies align with your current environment and/or support your cloud
objectives.

Do the provider’s cloud architectures, standards and services suit your workloads and
management preferences? Assess how much re-coding or customisation you may have to do to
make your workloads suitable for their platforms.

Many service providers offer comprehensive migration services, and some even offer assistance
in the assessment and planning phases. Ensure you have a good understanding of the support on
offer, map it against project tasks, and decide who will do what. Service providers often have
technical staff who can fill skills gaps in your migration teams.

However, some large-scale public cloud providers offer limited support, and you may need
additional third-party support to fill the skills gaps: ask the platform provider for recommended
third-party partners that have experience and extensive knowledge of the target platform.

Service roadmap: Ask about the provider’s roadmap of service development – How do they
plan to continue to innovate and grow over time? Does their roadmap fit your needs in the long
term?

Important factors to consider are commitments to specific technologies or vendors, and how
interoperability is supported. Can they also demonstrate deployments similar to the ones you are
planning? For SaaS providers in particular, a features, service and integration roadmap is highly
desirable.

Depending on your particular cloud strategy, you may also want to evaluate the overall portfolio
of services that providers can offer. If you plan to use separate best-of-breed services from a
broad mix of providers, this is less relevant; but if your preference is to use only a few key
cloud service providers, it is important for those providers to offer a good range of compatible
services.
3. Data Security, Data Governance and Business Policies: Data management
You may already have a data classification scheme in place that defines types of data according
to sensitivity and/or policies on data residency. At the very least you should be aware of
regulatory or data privacy rules governing personal data.

With that in mind, the location your data resides in, and the subsequent local laws it is subject to,
may be a key part of the selection process. If you have specific requirements and obligations, you
should look for providers that give you choice and control regarding the jurisdiction in which
your data is stored, processed and managed. Cloud service providers should be transparent about
their data centre locations but you should also take responsibility for finding this information out.

If relevant, assess the ability to protect data in transit through encryption of data moving to or
within the cloud. Also, sensitive volumes should be encrypted at rest, to limit exposure to
unapproved administrator access. Sensitive data in object storage should be encrypted, usually
with file/folder or client/agent encryption.

Look to understand the provider’s data loss and breach notification processes and ensure they are
aligned with your organisation’s risk appetite and legal or regulatory obligations.

The CIF Code of Practice framework has some useful guidance to help identify relevant security
and data governance policies and processes as part of a provider assessment.

Information security: Ensure you assess the cloud provider’s levels of data and system security,
the maturity of security operations and security governance processes. The provider’s
information security controls should be demonstrably risk-based and clearly support your own
security policies and processes.

Ensure user access and activity are auditable via all routes, and get clarity on the security roles
and responsibilities laid out in the contracts or business-policy documentation.

If they are compliant with standards like the ISO 27000 series, or have recognised certifications,
check that they are valid and get assurances of resource allocation, such as budget and headcount
to maintain compliance to these frameworks.

Ask for internal security audit reports, incident reports and evidence of remedial actions for any
issues raised.

4. Service Dependencies & Partnerships: Vendor relationships


Service providers may have multiple vendor relationships that are important to understand.
Assessing the provider’s relationship with key vendors, their accreditation levels, technical
capabilities and staff certifications is a worthwhile exercise. Do they support multivendor
environments, and can they give good examples?

Think about whether the services offered fit into a larger ecosystem of other services that might
complement or support them. If you are choosing a SaaS CRM, for instance: are there existing
integrations with finance and marketing services? For PaaS: is there a cloud marketplace from
which to buy complementary services that are preconfigured to integrate effectively on the same
platform?

Subcontractors and service dependencies: It’s also important to uncover any service
dependencies and partnerships involved in the provision of the cloud services. For example, SaaS
providers will often build their service on existing IaaS platforms, so it must be clear how and
where the service is being delivered.

In some cases there may be a complex network of connected components and subcontractors that
all play a part in delivering a cloud service. It’s vital to ensure the provider discloses these
relationships and can guarantee the primary SLAs stated across all parts of the service, including
those not directly under its control. You should also look to understand the limitations of liability
and service-disruption policies related to these subcomponents.

In general, think twice before considering providers with a long chain of subcontractors,
especially for mission-critical business processes or data governed by data-privacy regulations.

The Code of Practice requires explicit clarification of service dependencies and the implications
on SLAs, accountability and responsibility.

5. Contracts, Commercials & SLAs: Contracts & SLAs


Cloud agreements can appear complex, and this isn’t helped by a lack of industry standards for
how they are constructed and defined. For SLAs in particular, many jargon-happy cloud
providers are still using unnecessarily complicated, or worse, deliberately misleading language.

This is being addressed to some degree by the latest revision of the ISO standard for service
level agreements, ISO/IEC 19086-1:2016, which is a useful framework to use when assessing
providers’ agreements.

In general, agreements range from out of the box “terms and conditions”, agreed online, through
to individually negotiated contracts and SLAs.
The relative size of the CSP and the customer is a factor here. Smaller CSPs are more likely to
enter into negotiations, but may also be more likely to agree custom terms that they cannot
actually support. Always challenge providers that offer flexible terms to explain how they plan to
support the variation, who is responsible for it, and what processes are used to govern it.

We cover contracts, SLAs and cloud law in module 10 of our online training programme. Key
factors to consider with contracts are:

Service delivery
Look for a clear definition of the service and deliverables. Get clarity on the roles and
responsibilities relating to the service (delivery, provisioning, service management, monitoring,
support, escalations, etc.) and how these are distributed between customer and provider. How are
service accessibility and availability managed and assured (maintenance, incident remediation,
disaster recovery, etc.)? How do these policies fit with your requirements?

Data policies and protection


Assess a provider’s security policies and data-management policies, particularly relating to data
privacy regulations. Ensure there are sufficient guarantees around data access, data location and
jurisdiction, confidentiality, and usage/ownership rights. Scrutinise backup and resilience
provisions. Review data-conversion policies to understand how transferable your data may be if
you decide to leave.
Business terms
There are a myriad of terms covered in the training module and your circumstances will dictate
which are important, but key considerations include:

● Contractual and service governance, including the extent to which the provider can
unilaterally change the terms of service or contract.
● Policies on contract renewals and exit or modification notice periods.
● What insurance policies, guarantees and penalties are included, and what caveats
accompany them.
● The extent to which the provider is willing to expose its operations and policy
compliance to auditing.

Legal Protections
Specific terms relating to indemnification, intellectual property rights, limitation of liability and
warranties should be standard in providers’ contracts. However, the parameters relating to each
should be scrutinised. These protections are often the most hotly contested, as customers look to
limit their exposure to potential data-privacy claims following a breach while providers look to
limit their liability in cases of claims.

Service level agreements


SLAs should contain three major components:

● Service level objectives
● Remediation policies and penalties/incentives related to these objectives
● Exclusions and caveats
Service level objectives (SLOs) typically cover: accessibility, service availability (usually uptime
as a percentage), service capacity (what is the upper limit in terms of users, connections,
resources, etc.), response time and elasticity (or how quickly changes can be accommodated).
There are often others depending on how terms are distributed between contract and SLA.

Look for SLOs that are relevant, explicit, measurable and unambiguous. They should also be
auditable if possible and clearly articulated in the service level agreement.
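Those objectives are easier to scrutinise once translated into absolute numbers. As a quick illustration (plain arithmetic, not tied to any particular provider), an uptime percentage maps to a concrete downtime budget like this:

```python
# Convert an availability target (e.g. 99.9%) into the maximum downtime
# it permits over a billing period (default: a 30-day month).

def downtime_budget(availability_pct: float, period_hours: float = 30 * 24) -> float:
    """Return the allowed downtime in minutes for the period."""
    return (1 - availability_pct / 100) * period_hours * 60

for target in (99.0, 99.9, 99.99):
    print(f"{target}% uptime allows {downtime_budget(target):.1f} min/month of downtime")
```

The difference between 99.9% and 99.99% is roughly 43 minutes versus 4 minutes per month, which is why each extra "nine" in an SLO deserves scrutiny.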

SLAs should also specify how issues are to be identified and resolved, by whom, and in what time
period. They will also specify what compensation is available and the processes for logging and
claiming it, as well as listing terms that limit the scope of the SLA, along with exclusions and
caveats.

Close scrutiny of these terms is important, as service-credit calculations are often complex. Ask
for worked examples or, better still, give all shortlisted providers the same imaginary downtime
scenario and compare the difference in compensation.
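To see why worked examples matter, here is a toy comparison of two invented credit schedules applied to the same imaginary outage; the tier boundaries and credit percentages below are hypothetical, not taken from any real provider:

```python
# Toy comparison of two hypothetical service-credit schedules applied to the
# same outage. Tier boundaries and credit percentages are invented for
# illustration; real schedules vary widely between providers.

def monthly_availability(downtime_min: float, period_min: float = 30 * 24 * 60) -> float:
    """Availability (%) over a 30-day month, given total downtime in minutes."""
    return 100 * (1 - downtime_min / period_min)

def credit_provider_a(availability: float) -> int:
    # Hypothetical tiers: 10% credit below 99.9%, 25% below 99.0%
    if availability < 99.0:
        return 25
    if availability < 99.9:
        return 10
    return 0

def credit_provider_b(availability: float) -> int:
    # Hypothetical tiers: stricter thresholds but smaller credits
    if availability < 99.5:
        return 15
    if availability < 99.95:
        return 5
    return 0

outage_min = 90  # the same imaginary 90-minute outage, given to both providers
avail = monthly_availability(outage_min)
print(f"availability: {avail:.3f}%")
print(f"provider A credit: {credit_provider_a(avail)}% of the monthly fee")
print(f"provider B credit: {credit_provider_b(avail)}% of the monthly fee")
```

The same 90-minute outage yields a 10% credit under one schedule and 5% under the other, which is exactly the kind of difference a shared downtime scenario exposes.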
Cloud commercials
Each cloud service provider has a unique bundle of services and pricing models, and different
providers have price advantages for different products. Typically, pricing variables are based on
the period of usage, with some providers allowing by-the-minute billing as well as discounts for
longer commitments.

The most common model for SaaS based products is on a per user, per month basis though there
may be different levels based on storage requirements, contractual commitments or access to
advanced features.

PaaS and IaaS pricing models are more granular, with costs for consumption of specific resources
or ‘resource sets’. Aside from financial competitiveness, look for flexibility in terms of resource
variables, but also in terms of speed to provision and de-provision.

An application architecture that allows you to scale different workload elements independently
means you can use cloud resources more efficiently. You may find that your ability to fine-tune
scalability is affected by the way your cloud service provider packages its services, so look for a
provider that matches your requirements in this regard.
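A rough sketch of why independent scaling matters for cost; the hourly prices and instance counts below are invented purely for illustration:

```python
# Sketch: cost of scaling a whole coupled stack vs scaling only the tier
# that needs more capacity. Prices and counts are hypothetical.

WEB_PRICE, WORKER_PRICE = 0.10, 0.20  # invented $/hour per instance

def monolith_cost(instances: int) -> float:
    # Coupled architecture: web and worker capacity scale together
    return instances * (WEB_PRICE + WORKER_PRICE)

def decoupled_cost(web: int, workers: int) -> float:
    # Each tier scales on its own demand
    return web * WEB_PRICE + workers * WORKER_PRICE

# A traffic spike that only needs more web capacity:
print(f"monolith (4x everything):   ${monolith_cost(4):.2f}/h")
print(f"decoupled (4 web, 1 worker): ${decoupled_cost(4, 1):.2f}/h")
```

In this toy model the decoupled layout handles the same web spike at half the hourly cost, because the worker tier is not scaled needlessly.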

6. Reliability & Performance


There are several methods you can use to measure the reliability of a service provider.

First, check the performance of the service provider against their SLAs for the last 6-12 months.
Some service providers publish this information, but others should supply it if asked.

Don’t expect perfection: downtime is inevitable and every cloud provider will experience it at
some point. It’s how the provider deals with that downtime that counts. Ensure the monitoring
and reporting tools on offer are sufficient and can integrate into your overall management and
reporting systems.

Ensure your chosen provider has established, documented and proven processes for dealing with
planned and unplanned downtime. They should have plans and processes in place documenting
how they plan to communicate with customers during times of disruption including timeliness,
prioritisation and severity level assessment of issues. Be aware of remedies and liability
limitations offered by the cloud provider when service issues arise.

Disaster recovery: Look to understand the provider’s disaster-recovery provisions and processes,
and their ability to support your data-preservation expectations (including recovery time
objectives). This should cover criticality of data, data sources, scheduling, backup, restore,
integrity checks, etc.
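The integrity checks mentioned above can be as simple as comparing a checksum recorded at backup time against the restored data. A minimal sketch of the principle (real DR tooling is provider-specific, and the data here is invented):

```python
import hashlib

# Sketch of a backup integrity check: record a SHA-256 digest when the
# backup is taken, then verify the digest after a restore.

def sha256_digest(data: bytes) -> str:
    """Hex digest used as the integrity fingerprint of a backup."""
    return hashlib.sha256(data).hexdigest()

backup = b"customer-db-dump-2024"      # stand-in for real backup bytes
recorded = sha256_digest(backup)        # stored alongside the backup

restored = backup                       # pretend this came back from the provider
ok = sha256_digest(restored) == recorded
print("integrity check passed" if ok else "restore is corrupt")
```

In practice the recorded digest is kept separately from the backup itself, so a corrupted or tampered restore cannot carry its own matching checksum.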

Roles and responsibilities, escalation processes and who has the burden of proof, all must be
clearly documented in the service agreement. This is vital, as in many cases, your team may be
responsible for implementing some of these processes.

Consider purchasing additional risk insurance if the costs associated with recovery are not
covered by the provider’s umbrella terms and conditions.

7. Migration Support, Vendor Lock-in & Exit Planning


Vendor lock-in is a situation in which a customer using a product or service cannot easily
transition to a competitor. It is usually the result of proprietary technologies that are
incompatible with those of competitors, but it can also be caused by inefficient processes or
contract constraints, among other things.

Cloud services that rely heavily on bespoke or unique proprietary components may impact your
portability to other providers or in-house operations. This is especially true if applications have
to be re-architected in order to run on a service provider platform.

Avoid the risk of vendor lock-in by ensuring your chosen provider makes minimal use of
proprietary technology, or by minimising your own use of services that limit your ability to
migrate or transition away.

[Image: Extract from CIF E-learning module 8 – Cloud Service Provider Selection]


Ideally select value added services that have competitive and comparable alternatives in the
market and put policies in place to periodically review the options to minimise lock-in risk.

Also be wary of “enhancement creep”, where service providers modify configurations, policies,
technologies, etc., and in doing so introduce lock-in factors as part of your service.

Finally, while there are some compelling benefits in working with one or a few key providers
you should balance these benefits with the risks of becoming too entangled with any one
supplier.

Exit provisions
Similarly, ensure you have a clear exit strategy in place at the start of your relationship. Moving
away from a CSP’s service isn’t always an easy or smooth transition, so it’s worth finding out
about their processes before signing a contract.

Furthermore, consider how you’ll access your data, what state it will be in and for how long the
provider will keep it.

8. Business health & Company profile


Assessing the technical and operational capabilities of a potential supplier is obviously
important, but take time to consider the financial health and profile of your shortlisted providers.

The most compatible or most competitive cloud service is immaterial if the provider doesn’t
have a sound business. Make sure your main providers are a good fit for the long term.

As Microsoft says in its short guide on provider selection: “The provider should have a track
record of stability and be in a healthy financial position with sufficient capital to operate
successfully over the long term.” If a service provider gets into trouble, it may not have the
financial resources to refund your losses, regardless of good intentions and contract assurances.

Try to establish whether the organisation has had any past legal issues, has been or is being sued,
and how it responds to legal challenges: ask directly or do your own research.

Ask about any planned corporate changes, mergers and acquisitions, or business aspirations.

Get a good handle on the competitive position and aspirations of the provider, use analyst
profiles, online reviews and market research to get a sense of their market status.

Sometimes looking at the history of the management team via networks like LinkedIn can be
very revealing: do previous roles show consistent performance and good corporate governance?
What type of customers do they have, and which markets do they count as important? A vertical
emphasis may prompt investment in valuable niche offerings.

Amazon Web Services (AWS)


AWS (Amazon Web Services) is a comprehensive, evolving cloud computing platform provided
by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service
(PaaS) and packaged software as a service (SaaS) offerings. AWS services can offer an
organization tools such as compute power, database storage and content delivery services.

AWS launched in 2006 from the internal infrastructure that Amazon.com built to handle its online
retail operations. AWS was one of the first companies to introduce a pay-as-you-go cloud
computing model that scales to provide users with compute, storage or throughput as needed.

AWS offers many different tools and solutions for enterprises and software developers, with
services available to customers in up to 190 countries. Groups such as government agencies,
educational institutions, nonprofits and private organizations can use AWS services.

How AWS works


AWS is separated into different services; each can be configured in different ways based on the
user's needs. Users should be able to see configuration options and individual server maps for an
AWS service.

More than 100 services comprise the Amazon Web Services portfolio, including those for
compute, databases, infrastructure management, application development and security. These
services, by category, include:

● Compute
● Storage
● Databases
● Data management
● Migration
● Hybrid cloud
● Networking
● Development tools
● Management
● Monitoring
● Security
● Governance
● Big data management
● Analytics
● Artificial intelligence (AI)
● Mobile development

AWS pricing models and competition


AWS offers a pay-as-you-go model for its cloud services, either on a per-hour or per-second basis.
There is also an option to reserve a set amount of compute capacity at a discounted price for
customers who prepay in whole, or who sign up for one- or three-year usage commitments.
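The trade-off between pay-as-you-go and reserved capacity comes down to expected utilisation. A rough break-even sketch, using invented rates rather than real AWS prices:

```python
# Rough break-even sketch: pay-as-you-go vs a reserved commitment. The rates
# below are invented for illustration and are not real AWS prices.

ON_DEMAND_RATE = 0.10     # hypothetical $/hour, pay-as-you-go
RESERVED_UPFRONT = 500.0  # hypothetical one-off payment for a 1-year term
RESERVED_RATE = 0.04      # hypothetical discounted $/hour after prepaying

def on_demand_cost(hours: float) -> float:
    return hours * ON_DEMAND_RATE

def reserved_cost(hours: float) -> float:
    return RESERVED_UPFRONT + hours * RESERVED_RATE

# The reservation pays off once the hourly savings cover the upfront fee.
breakeven_hours = RESERVED_UPFRONT / (ON_DEMAND_RATE - RESERVED_RATE)
print(f"reserved capacity wins beyond ~{breakeven_hours:.0f} hours of usage")
print(f"8760 h/year: on-demand ${on_demand_cost(8760):.2f} "
      f"vs reserved ${reserved_cost(8760):.2f}")
```

With these made-up numbers the break-even sits around 8,300 hours, so a reservation only pays off for workloads running most of the year; bursty workloads stay cheaper on demand.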

If potential customers can’t afford the costs, then AWS Free Tier is another possible avenue for
using AWS services. AWS Free Tier allows users to gain first-hand experience with AWS services
for free; they can access up to 60 products and start building on the AWS platform. Free Tier is
offered in three different options: always free, 12 months free and trials.

Applications of AWS services


Amazon Web Services is widely used for various computing purposes, such as:

● Web site hosting
● Application hosting/SaaS hosting
● Media sharing (image/video)
● Mobile and social applications
● Content delivery and media distribution
● Storage, backup, and disaster recovery
● Development and test environments
● Academic computing
● Search engines
● Social networking

Companies using AWS include:

● Instagram
● Netflix
● Twitch
● LinkedIn
● Facebook
● Turner Broadcasting: $10 million
● Zoopla
● Smugmug
● Pinterest
● Dropbox

Advantages of AWS
Following are the pros of using AWS services:
● AWS allows organizations to use the already familiar programming models, operating
systems, databases, and architectures.
● It is a cost-effective service that allows you to pay only for what you use, without any
up-front or long-term commitments.
● You will not need to spend money on running and maintaining data centers.
● Offers fast deployments
● You can easily add or remove capacity.
● It gives you quick cloud access with almost limitless capacity.
● Total Cost of Ownership is very low compared to any private/dedicated servers.
● Offers Centralized Billing and management
● Offers Hybrid Capabilities
● Allows you to deploy your application in multiple regions around the world with just a
few clicks
Disadvantages of AWS
● If you need more immediate or intensive assistance, you’ll have to opt for paid support
packages.
● Like any cloud platform, AWS is subject to common cloud-computing issues such as
downtime, limited control, and limits on backup protection.
● AWS sets default limits on resources which differ from region to region. These resources
consist of images, volumes, and snapshots.
● Hardware-level changes happen outside your control and may not offer the best
performance and usage for your applications.

Best practices of AWS

● Design for failure, so that nothing actually fails.
● It’s important to decouple all your components before using AWS services.
● Keep dynamic data closer to compute and static data closer to the user.
● It’s important to know the security and performance tradeoffs.
● Pay for computing capacity by the hour.
● Make a habit of a one-time payment for each instance you want to reserve, to receive a
significant discount on the hourly charge.
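The “design for failure” practice above can be illustrated with a simple retry-with-exponential-backoff wrapper; this is a generic sketch, not AWS-specific code (AWS SDKs ship their own retry logic):

```python
import time

# "Design for failure" sketch: retry a flaky call with exponential backoff
# instead of assuming the dependency is always up.

def retry(fn, attempts: int = 4, base_delay: float = 0.01):
    """Call fn, retrying on any exception, doubling the delay each time."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise                       # out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...

calls = {"n": 0}
def flaky():
    # Simulated dependency that fails twice, then recovers
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient outage")
    return "ok"

print(retry(flaky))  # succeeds on the third attempt
```

Wrapping calls to remote services this way, and decoupling components behind queues, lets an application ride out the transient failures that every cloud platform experiences.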

Introduction to Microsoft Azure

Azure is Microsoft’s cloud platform, just like Google has its Google Cloud and Amazon has its
Amazon Web Services (AWS). Generally, it is a platform through which we can use Microsoft’s
resources. For example, setting up a huge server would require huge investment, effort, physical
space and so on. In such situations, Microsoft Azure comes to our rescue. It provides us with
virtual machines, fast processing of data, analytical and monitoring tools and so on to make our
work simpler. The pricing of Azure is also simple and cost-effective, popularly termed “Pay As
You Go”: you pay only for what you use.

Azure History: Microsoft unveiled Windows Azure in early October 2008, but it went live in
February 2010. In 2014, Microsoft changed its name from Windows Azure to Microsoft Azure.
Azure provided a service platform for .NET services, SQL services, and many live services.
Many people were still very skeptical about “the cloud”; as an industry, we were entering a
brave new world with many possibilities. Microsoft Azure keeps getting bigger and better, with
more tools and more functionality being added. It has had two releases so far: Microsoft Azure
v1 and, later, Microsoft Azure v2. Azure v1 was more JSON-script driven, whereas the newer v2
has an interactive UI for simplification and easier learning. At the time of writing, Microsoft
Azure v2 was still in preview.
How can Azure help your business? Azure can help in the following ways:

● Less capital: You don’t have to worry about capital expenditure, as Azure cuts out the
high cost of hardware. You simply pay as you go and enjoy a subscription-based model
that’s kind to your cash flow. Setting up an Azure account is also very easy: simply
register in the Azure Portal, select your required subscription, and get going.
● Less operational cost: Azure has a low operational cost because it runs on its own
servers, whose only job is to make the cloud functional and bug-free, so it’s usually a
whole lot more reliable than your own on-location server.
● Cost-effective: If we set up a server on our own, we need to hire a tech-support team to
monitor it and make sure things are working fine. There might also be situations where
the tech-support team takes too much time to solve an issue on the server. In this regard,
Azure is far more pocket-friendly.
● Easy Backup and Recovery options: Azure keeps backups of all your valuable data. In
disaster situations, you can recover all your data in a single click without your business
getting affected. Cloud-based backup and recovery solutions save time, avoid large up-
front investment and roll up third-party expertise as part of the deal.
● Easy to implement: It is very easy to implement your business models in Azure. With a
couple of clicks, you are good to go, and there are several tutorials to help you learn and
deploy faster.
● Better security: Azure provides more security than local servers, so you can be carefree
about your critical data and business applications: they stay safe in the Azure cloud.
Even in natural disasters, where local resources can be harmed, Azure is a rescue; the
cloud is always on.
● Work from anywhere: Azure gives you the freedom to work from anywhere and
everywhere. All it requires is a network connection and credentials. And with most
major Azure cloud services offering mobile apps, you’re not restricted by the device you
have to hand.
● Increased collaboration: With Azure, teams can access, edit and share documents
anytime, from anywhere. They can work and achieve future goals hand in hand. Another
advantage of Azure is that it preserves records of activity and data. Timestamps are one
example of Azure's record keeping. Timestamps improve team collaboration by
establishing transparency and increasing accountability.

Microsoft Azure Services: The following are some of the services Microsoft Azure offers:

● Compute: Includes Virtual Machines, Virtual Machine Scale Sets, Functions for
serverless computing, Batch for containerized batch workloads, Service Fabric for
microservices and container orchestration, and Cloud Services for building cloud-based
apps and APIs.
● Networking: With Azure you can use a variety of networking tools, like Virtual
Network, which can connect to on-premises data centers; Load Balancer; Application
Gateway; VPN Gateway; Azure DNS for domain hosting; Content Delivery Network;
Traffic Manager; ExpressRoute dedicated private-network fiber connections; and
Network Watcher for monitoring and diagnostics.
● Storage: Includes Blob, Queue, File and Disk Storage, as well as a Data Lake Store,
Backup and Site Recovery, among others.
● Web + Mobile: Creating Web + Mobile applications is very easy as it includes several
services for building and deploying applications.
● Containers: Includes Azure Container Service, which supports Kubernetes, DC/OS or
Docker Swarm, and Container Registry, as well as tools for microservices.
● Databases: Includes several SQL-based databases and related tools.
● Data + Analytics: Includes big data tools such as HDInsight for Hadoop, Spark, R
Server, HBase and Storm clusters.
● AI + Cognitive Services: Services for developing applications with artificial-intelligence
capabilities, such as the Computer Vision API, Face API, Bing Web Search, Video
Indexer, and Language Understanding Intelligent Service (LUIS).
● Internet of Things: Includes IoT Hub and IoT Edge services that can be combined with
a variety of machine learning, analytics, and communications services.
● Security + Identity: Includes Security Center, Azure Active Directory, Key Vault and
Multi-Factor Authentication Services.
● Developer Tools: Includes cloud development services like Visual Studio Team
Services, Azure DevTest Labs, HockeyApp mobile app deployment and monitoring,
Xamarin cross-platform mobile development and more.

Google Cloud Platform


Google Cloud Platform (GCP) is a suite of cloud computing services provided by Google. It is a
public cloud computing platform consisting of a variety of services like compute, storage,
networking, application development, Big Data, and more, which run on the same cloud
infrastructure that Google uses internally for its end-user products, such as Google Search, Photos,
Gmail and YouTube, etc. The services of GCP can be accessed by software developers, cloud
administrators and IT professionals over the Internet or through a dedicated network connection.

Why Google Cloud Platform? : Google Cloud Platform is known as one of the leading cloud
providers in the IT field. The services and features can be easily accessed and used by the
software developers and users with little technical knowledge. Google has been on top amongst
its competitors, offering the highly scalable and most reliable platform for building, testing and
deploying the applications in the real-time environment.
Apart from this, GCP was named a leader in Gartner's IaaS Magic Quadrant in 2018. Gartner is
one of the leading research and advisory companies; in its comparison of cloud providers, GCP
was ranked among the top three in the market. Most companies use their own data centers
because of cost forecasting, hardware certainty, and advanced control; however, they often lack
the necessary features to run and maintain resources in the data center. GCP, on the other hand,
is a fully featured cloud platform that includes:
● Capacity: Sufficient resources for easy scaling whenever required. Also, effective
management of those resources for optimum performance.
● Security: Multi-level security options to protect resources, such as assets, network and
OS -components.
● Network Infrastructure: Number of physical, logistical, and human-resource-related
components, such as wiring, routers, switches, firewalls, load balancers, etc.
● Support: Skilled professionals for installation, maintenance, and support.
● Bandwidth: Suitable amount of bandwidth for peak load.
● Facilities: Other infrastructure components, including physical equipment and power
resources.
● Therefore, Google Cloud Platform is a viable option for businesses, especially when the
businesses require an extensive catalog of services with global recognition.

Benefits of Google Cloud Platform


● Best Pricing: Google offers Cloud hosting at highly competitive rates. The hosting plans
are not only cheaper than many other hosting platforms but also offer better features.
GCP provides a pay-as-you-go option, where users pay separately only for the services
and resources they actually use.
● Work from Anywhere: Once an account is configured on GCP, it can be accessed from
anywhere. Users can work with GCP across different devices from different places,
because Google provides web-based applications that give complete access to the
platform.
● Private Network: Google runs its own network, which gives users more control over
GCP functions. As a result, users get smooth performance and increased efficiency
over the network.
● Scalable: Users get a highly scalable platform over this private network. Because
Google uses fiber-optic cables to extend its network range, the platform scales well, and
Google continually expands its network since traffic levels can spike at any time.
● Security: Google employs a large number of security professionals who work
continuously to secure the network and protect the data stored on its servers.
Additionally, Google encrypts all data on the cloud platform, assuring users that their
data is safe from unauthorized access.
● Redundant Backup: Google keeps backups of users' data with built-in redundant
backup integration. If a user loses stored data, it is not a big problem: Google retains a
copy of the data unless it is deleted deliberately. This adds data integrity, reliability, and
durability to GCP.
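The pay-as-you-go model described above can be sketched in a few lines. The service names and per-unit rates below are invented for illustration and are not real GCP prices:

```python
# Hypothetical pay-as-you-go billing sketch. Services and rates are
# made-up examples, not actual Google Cloud pricing.
RATES = {
    "compute_vcpu_hours": 0.03,   # $ per vCPU-hour (assumed)
    "storage_gb_months": 0.02,    # $ per GB-month (assumed)
    "egress_gb": 0.12,            # $ per GB of network egress (assumed)
}

def monthly_bill(usage):
    """Charge only for the resources that were actually consumed."""
    return round(sum(RATES[service] * amount
                     for service, amount in usage.items()), 2)

usage = {"compute_vcpu_hours": 720, "storage_gb_months": 100, "egress_gb": 10}
print(monthly_bill(usage))  # 21.6 + 2.0 + 1.2 = 24.8
```

Note that nothing is billed for services that appear in the catalog but not in the usage record, which is the essence of the pay-as-you-go model.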

Key Features of Google Cloud Platform


● On-demand services: An automated environment with web-based tools, so no human
intervention is required to access resources.
● Broad network access: Resources and information can be accessed from anywhere.
● Resource pooling: On-demand availability of a shared pool of computing resources to
users.
● Rapid elasticity: The availability of more resources whenever required.
● Measured service: Users pay only for the services they consume.

Working of Google Cloud Platform


When a file is uploaded to the Google cloud, unique metadata is inserted into the file. This
metadata identifies different files and tracks the changes made across all copies of a particular
file. All changes made by individuals are synchronized automatically into the main file, also
called the master file. GCP then updates all downloaded copies using the metadata to keep the
records consistent.

Let's understand the working of GCP with a general example. Suppose MS Office is
implemented on the cloud so that several people can work together, the primary aim of using
cloud technology being to work on the same project at the same time. Once a plugin for the
MS Office suite is installed, we can create and save a file on the cloud, which allows several
people to edit a document at the same time. The owner grants access to specific people, who
can then download the document and start editing it in MS Office.

Once users are assigned as editors, they can edit the cloud copy of the document as desired.
The combined, edited copy that is generated is known as the master document. GCP assigns a
unique URL to each user's copy of the existing document. However, changes made by any
authorized user are visible on all copies of the document shared over the cloud. If multiple
conflicting changes are made to the same document, GCP allows the owner to select which
changes to keep.
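The sync flow described above can be sketched as a toy model. The class, the URL scheme, and the conflict rule below are illustrative assumptions, not Google's actual implementation:

```python
import uuid

class CloudDoc:
    """Toy model of the sync flow: every shared copy carries metadata
    (a unique URL and the version it was synced at), edits are folded
    into a master copy, and stale edits are flagged as conflicts for
    the owner to resolve. Purely illustrative."""

    def __init__(self, text):
        self.master = text
        self.version = 0
        self.copies = {}          # url -> last-synced master version

    def share(self, user):
        # Hypothetical URL scheme, for illustration only.
        url = f"https://docs.example.com/{uuid.uuid4().hex[:8]}?user={user}"
        self.copies[url] = self.version
        return url

    def edit(self, url, new_text):
        if self.copies[url] == self.version:
            # Copy is up to date: fold the change into the master.
            self.master = new_text
            self.version += 1
            self.copies[url] = self.version
            return "merged"
        # Copy was stale: leave the conflict for the owner to pick.
        return "conflict"

doc = CloudDoc("draft v1")
alice, bob = doc.share("alice"), doc.share("bob")
print(doc.edit(alice, "alice's edit"))   # merged
print(doc.edit(bob, "bob's edit"))       # conflict: bob's copy is stale
```

Each `share` call models the unique URL GCP assigns per copy, and the version check models how metadata lets the platform detect which copies are out of date.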

Google Cloud Platform Services


Google provides a considerable number of services with several unique features. That is the
reason why Google Cloud Platform is continually expanding across the globe. Some of the
significant services of GCP are:
● Compute Services
● Networking
● Storage Services
● Big Data
● Security and Identity Management
● Management Tools
● Cloud AI
● IoT (Internet of Things)

Difference between AWS (Amazon Web Services), Google Cloud and Azure

● Maturity: AWS launched in 2006 and remains the largest of the three; Microsoft Azure
became generally available in 2010; Google Cloud Platform's first service, App Engine,
appeared in 2008.
● Compute: Amazon EC2 (AWS), Virtual Machines (Azure), Compute Engine (GCP).
● Object storage: Amazon S3 (AWS), Blob Storage (Azure), Cloud Storage (GCP).
● Serverless functions: AWS Lambda, Azure Functions, Cloud Functions (GCP).
● Strengths: AWS offers the broadest service catalog; Azure integrates tightly with
Microsoft enterprise software and hybrid setups (e.g., Azure Stack); GCP is known for
big data, analytics, and machine learning services built on Google's internal
infrastructure.

Salesforce
Salesforce, Inc. is a well-known American cloud-based software company that provides CRM
services. Salesforce is a popular CRM tool for support, sales, and marketing teams worldwide.
Salesforce services allow businesses to use cloud technology to better connect with partners,
customers, and potential customers. Using the Salesforce CRM, companies can track customer
activity, market to customers, and much more.

A CRM platform helps you go deeper with all your metrics and data; you could also set up a
dashboard that showcases your data visually. In addition to this, you can also have personalized
outreach with automation. Another significant benefit is that a CRM platform can also improve
customer service's ability to help customers or a sales team's outreach efforts.

Salesforce Architecture
This tutorial will now briefly walk you through the Salesforce architecture. Here, you will be
acquainted with the different layers of the Salesforce architecture individually.
● Multi-tenant: Salesforce stores data in a single database schema, so a single instance of
the software server can serve multiple tenants. In a multi-tenant architecture, a single
shared application instance serves several clients, which makes it cost-effective. In a
single-tenant architecture, by contrast, the development and maintenance costs must be
borne entirely by one client. Hence the multi-tenant architecture is a boon.
● Metadata: Salesforce uses a metadata-driven development model. This allows developers
to only focus on building the application. This metadata-driven platform makes
customization and scaling up easy.
● API: Salesforce provides a powerful source of APIs. This helps in developing and
customizing the Salesforce1 Mobile App. Every feature of the Salesforce design has been
planned and implemented precisely.
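The multi-tenant idea above can be sketched as a single shared table in which every row is tagged with a tenant id, so each client sees only its own data. The table and column names below are invented for illustration:

```python
# One shared "table" serving multiple tenants: rows are tagged with a
# tenant id, and every query is scoped to the calling tenant, so
# clients never see each other's data. Names are illustrative only.
RECORDS = [
    {"tenant": "acme",   "contact": "Ann"},
    {"tenant": "acme",   "contact": "Ali"},
    {"tenant": "globex", "contact": "Bea"},
]

def query_contacts(tenant_id):
    """Return only the rows belonging to this tenant."""
    return [r["contact"] for r in RECORDS if r["tenant"] == tenant_id]

print(query_contacts("acme"))    # ['Ann', 'Ali']
print(query_contacts("globex"))  # ['Bea']
```

A real multi-tenant platform enforces this scoping in the data layer rather than in application code, but the cost advantage is the same: one schema and one application instance shared by all clients.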

Salesforce Services: Moving on, you will explore the Services offered by Salesforce:

● SaaS (Software as a Service): you can directly obtain the built-in software and make
use of it.
● PaaS (Platform as a Service): offers you the framework and platform to build your
websites and apps.
● IaaS (Infrastructure as a Service): plays a vital role in Salesforce development,
although it is not very widely used.

Salesforce Cloud Services: The next topic is Salesforce Cloud Services. Here’s a list of the
Salesforce cloud services that are going to be highlighted in this tutorial on what is Salesforce.
Fig: Different Salesforce Cloud Services

● Sales Cloud: It is one of the most essential and popular products of Salesforce. It is a
CRM platform that allows you to manage your company's sales, marketing, and customer
support aspects. Sales Cloud gives you the status of the lead that will be helpful for sales
executives.
● Marketing Cloud: Marketing is crucial when it comes to running a business. Marketing
cloud lets you run campaigns, manage emails, messages, social media, content
management, data analytics, etc., with the help of a tracking system.
● Analytics Cloud: This enables users to create a highly visually appealing dashboard of
the available data. By doing so, you can get an in-depth understanding and analyze the
trends, business, and more.
● IoT Cloud: Salesforce IoT Cloud is used when your company needs to handle Internet
of Things (IoT) data. The platform can ingest vast volumes of data generated by various
IoT devices and deliver real-time responses.
● Salesforce App Cloud: You can use this service to develop custom apps that will run on
the Salesforce platform.
● Salesforce Service Cloud: Salesforce also helps you serve your customers. This is a
service platform for your organization’s support team. It provides features like case
tracking and social networking plug-in.

These were a few of the top cloud services offered by Salesforce. Due to its diverse options,
companies use Salesforce to assist with sales, marketing, and analysis.

Salesforce Applications
The next topic in this tutorial on what is Salesforce is about Salesforce applications. Here, you
will have a look at a few applications that make Salesforce popular.
● Customer Service: Salesforce provides excellent customer service from anywhere in the
world. It helps in resolving customer issues faster and improves support agent response
time. Salesforce allows you to unify email, social, phone, and chat support and helps
manage every channel from one view.
● Customize Data: Salesforce allows you to handle and customize different types of data. It
helps you track real-time analytics and enhance the customer experience.
● Flexible Data Reporting and Analysis: Salesforce allows flexible data reporting and
analysis. Here, sales representatives can create their reports to check the accounts they
haven’t worked on for a while.
● Understand Customer Data: The Salesforce tool makes you understand customer data,
identify their interests and perception. You can locate and re-engage inactive customers
and increase sales by tracking customer interaction.

The Top 7 Benefits of Salesforce


1. Better Time Management : Time management is a huge benefit of Salesforce and one of the
best ways to allow a business to grow and thrive. Thanks to comprehensive customer
information and useful planning resources, you have everything you need in one place. No more
time wasted searching through logs and files for important info.

With so much pertinent customer data, you can easily prioritize work for (and with) your clients
by streamlining the sales funnel so that leads are quickly transformed into customers.

Salesforce also has a calendar feature that makes it easy to plan projects, meetings, phone calls,
and more in one place. You’ll know what’s coming up and when.

2. Ultimate Accessibility : Since Salesforce is cloud software, it’s accessible anywhere and
everywhere you have access to the Internet. Whether you use your desktop, laptop, or
smartphone, Salesforce can be reached thanks to its app. This is important because many
business owners and team members travel frequently, be it nationally, internationally, or even
between cities.

Being able to reach your CRM tool through the protected cloud no matter where you are makes it
easier to access important files and stay updated on clients. Sensitive information is more secure
than it would be in a file cabinet or on a local server.

3. Increased Revenue: Without Salesforce, running a business in today’s world can cost you
money. On any given day, your team might produce a ton of data that has to be stored. Without
Salesforce, you’re most likely sorting through this data manually, and this is more time spent on
administrative work as opposed to building customer relationships.
When your time is tied up, it means you have less time to improve business, make connections,
and grow profits. Since the tool takes over these administrative duties and more, you’ll have
more time to devote to the business, which means more money in the long run.

4. Greater Customer Satisfaction: Similar to increased revenue, it’s safe to assume customers
are more satisfied when they interact with a business that knows their needs and the state of their
relationship with you (thanks to your CRM tool). Spend less time on administrative duties and
you’ll have more time to spend catering to your customers through a common platform.

Thanks to a highly efficient management system, you can serve your customers better by having
quicker access to their information, accounts, purchase history, and preferences.

This Salesforce benefit not only improves your relationship with your customers – it sets you up
for new customers too. When your current customers are happy with you, they’re more likely to
be an ambassador for you and tell their friends.

5. Simple Account Planning : Salesforce makes it simple to create plans for accounts. With all
the customer information you need readily accessible, you’ll have an easier time placing that info
into the correct accounts, and then making plans for those accounts for optimal results for the
customer.

Customers get products or services perfectly tailored to their needs, you stay organized, and you
adjust your time effectively for each client. As these accounts are created, stronger connections
are made with your clients by best meeting their needs, solving their problems, and keeping track
of trends.

6. Trusted Reporting: With so much data pouring into your business, it’s easy to become lost.
Salesforce keeps pertinent data organized and it helps you make sense of new data thanks to
trustworthy reporting.

Keep track of all the data your business collects from social media, website analytics, app
information, business software, and more. Reporting takes this mountain of information and sorts
it, analyzes it, and makes it actionable. With the accuracy of Salesforce tech, you know the
numbers are right and the readings can be trusted.

7. Improved Team Collaboration : Lastly, team collaboration is a major benefit of Salesforce.


The software allows you to connect and communicate with team members from anywhere thanks
to the “Chatter” feature. This lets you connect with individual team members or full groups and
chat about everything from your clients and their information to other work-related topics such
as territory and product/service details.
When the team is on the same page, your business is more cohesive and operates more
efficiently so that deadlines are met and sales are finalized.

Microsoft Azure stack

Microsoft Azure Stack is a true hybrid cloud computing solution: an extension of Azure that
allows organizations to provide Azure services from their own on-premises data centers. In
effect, the on-premises data center runs the same cloud platform that powers Microsoft's
public Azure cloud.

The basic principle of Azure Stack is to enable organizations to hold sensitive data and
information in their own data centers while retaining the ability to reach Azure's public cloud.
Like Azure, Azure Stack services run on top of Microsoft Hyper-V on Windows and use
Microsoft's networking and storage solutions to function seamlessly.

Microsoft Azure Stack is an appliance built only for specific server vendors' hardware and
is distributed by those vendors to bring the Azure cloud to an organization's on-premises data
center. Today, most of the major hardware vendors, such as Dell, Lenovo, HP, Cisco, and
Huawei, support Azure Stack, with more vendors being approved regularly.

Purpose of Azure Stack


Most modern organizations require a cloud that delivers IT power, services, and resources
that are flexible, highly scalable, and at the same time extremely cost-effective. Implementing
such a cloud-adaptive environment in-house requires a high start-up cost and brings many
challenges. On the other hand, organizations that adopt a public cloud such as Azure or AWS
to overcome these problems face difficulties migrating workloads seamlessly between the
on-premises environment and the cloud.

In the past, organizations handled such scenarios by creating a private cloud connected to
the public cloud, but these private clouds require local development, configuration, and
maintenance of a complicated collection of software stacks. This makes the local data center
far more complex, with no guarantee that its software stacks remain compatible with the
public and private clouds, or that data can be accessed and managed consistently.

Microsoft Azure Stack can be implemented to overcome these challenges. The Azure Stack
platform seamlessly integrates with the Azure environment extending to the local data center. It
provides the consistency required by developers to build and deploy a single application for both
the public and private cloud without building separate applications for each platform.

Microsoft Azure Stack comprises a wide variety of Azure services that can be hosted in the
on-premises data center, such as Azure App Service, Azure Virtual Machines, and Azure
Functions, and it also provides services like Azure Active Directory to manage Azure Stack
identities.

Azure Stack use cases


Azure Stack fulfills a variety of roles. For example, businesses planning next-generation
application development need a service with the availability of a public cloud that is also
flexible enough to test against.

For Azure Stack use cases, Microsoft cites three primary scenarios:
1. Edge and disconnected solutions: Azure Stack allows the use of Azure cloud methods
without an internet connection in remote and mobile locations with unreliable network
connectivity, for example, on ships and airplanes. Azure Stack customers can make use
of this hybrid cloud technology to process data locally for analytical decision-making.

2. Cloud applications that meet different regulatory requirements: This is one of the
leading selling points of Azure Stack for organizations that understand the value and
potential of cloud technology but must also meet regulatory requirements and other
technical and non-technical concerns. Based on business requirements, they may want
to host different instances of the same application on the public and private cloud using
Azure Stack. Azure Stack thus offers cloud benefits while hosting computing assets
within on-premises data centers.

3. Provide the cloud application model on-premises: Applications developed for the Azure
Stack environment can easily be deployed to Azure when they need to scale beyond the
on-premises Azure Stack capabilities. Developers can design applications and workloads
to run specific tasks locally, gathering details on application performance and analytics
before moving them to the public cloud. The DevOps process can then update and
extend legacy business applications with Azure Stack.

Benefits of Azure Stack


Azure Stack along with Azure provides a variety of benefits such as:
1. Consistent Application Development: The application developers can maximize
productivity as there is no need to develop separate applications for the public and private
clouds since the same DevOps approach is followed for the hybrid cloud environment.
This allows Azure Stack customers to
● Use powerful automation tools such as Azure PowerShell extensions
● Embrace modern open-source tools and Visual Studio to build advanced, intelligent
business applications
● Rapidly build, deploy, and operate cloud-designed applications that are portable
and consistent across hybrid cloud environments
● Program in any language, such as Java, Python, Node.js, or PHP, and even use
open-source application platforms
2. Azure Services for On-Premises: With Azure Services availability for on-premises,
businesses can adopt hybrid cloud computing and meet the businesses’ technical
requirements with the flexibility to choose the correct deployment that suits the business.
These Azure services provide:
● Azure IaaS services beyond traditional virtualization, such as VM scale sets that
enable rapid deployments with flexible scaling to run modern, complex workloads.
● Azure PaaS services to run highly productive Azure App Service and Azure
Functions workloads in on-premises data centers.
● Common operational practices between Azure and Azure Stack, so no additional
skills are needed: Azure IaaS and PaaS services are deployed and operated in the
Azure Stack environment just as they function on Azure.
● Future-proof applications, as Microsoft delivers innovative services to the Azure
Stack, such as Azure Marketplace applications within Azure Stack.

3. Continuous Innovation: Azure Stack is designed from the ground up to be consistent
with Azure. The platform receives frequent updates, meaning that Microsoft prioritizes
new features based on customer and business needs and delivers them as soon as
possible.

Updates
There are two types of updates for Azure Stack:
● Azure capabilities for Azure Stack – released as soon as they are ready and typically
not on a regular schedule. These include marketplace content and updates to existing
Azure services deployed on Azure Stack.
● Azure Stack infrastructure – released at regular intervals, since these include firmware,
drivers, etc. Infrastructure updates usually improve the operational excellence of
Azure Stack.

Introduction to OpenStack

OpenStack is a free, open-standard cloud computing platform that first came into existence on
July 21, 2010, as a joint project of Rackspace Hosting and NASA to make cloud computing
more ubiquitous. It is deployed as Infrastructure-as-a-Service (IaaS) in both public and private
clouds, where virtual resources are made available to users. The software platform consists of
interrelated components that control multi-vendor hardware pools of processing, storage, and
networking resources throughout a data center.
In OpenStack, the tools used to build the platform are referred to as "projects". These projects
handle a large number of services, including compute, networking, and storage. Whereas a
hypervisor abstracts resources such as RAM and CPU from the hardware, OpenStack exposes
those resources through a set of APIs, so that users and administrators can interact with the
cloud services directly.

OpenStack components
Apart from the various projects which constitute the OpenStack platform, there are nine major
services, namely Nova, Neutron, Swift, Cinder, Keystone, Glance, Horizon, Ceilometer, and
Heat. Here is a basic definition of each component:

● Nova (compute service): It manages the compute resources like creating, deleting, and
handling the scheduling. It can be seen as a program dedicated to the automation of
resources that are responsible for the virtualization of services and high-performance
computing.
● Neutron (networking service): It is responsible for connecting all the networks across
OpenStack. It is an API driven service that manages all networks and IP addresses.
● Swift (object storage): An object storage service with high fault tolerance, used to store
and retrieve unstructured data objects via a RESTful API. Being a distributed platform,
it also provides redundant storage within clustered servers and can successfully manage
petabytes of data.
● Cinder (block storage): It is responsible for providing persistent block storage that is
made accessible using an API (self- service). Consequently, it allows users to define and
manage the amount of cloud storage required.
● Keystone (identity service provider): It is responsible for all types of authentications and
authorizations in the OpenStack services. It is a directory-based service that uses a central
repository to map the correct services with the correct user.
● Glance (image service provider): It is responsible for registering, storing, and retrieving
virtual disk images from the complete network. These images are stored in a wide range
of back-end systems.
● Horizon (dashboard): It is responsible for providing a web-based interface for OpenStack
services. It is used to manage, provision, and monitor cloud resources.
● Ceilometer (telemetry): It is responsible for metering and billing of services used. Also, it
is used to generate alarms when a certain threshold is exceeded.
● Heat (orchestration): Used for on-demand service provisioning with auto-scaling of
cloud resources. It works in coordination with Ceilometer.
These are the services around which the platform revolves. They individually handle storage,
compute, networking, identity, etc., and they form the base on which the remaining projects
rely to orchestrate services, allow bare-metal provisioning, handle dashboards, and so on.
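As a rough illustration of how these services divide the work, a request can be routed to a service according to the kind of resource it manages. The mapping below is a simplification for illustration, not part of OpenStack itself:

```python
# Illustrative mapping of core OpenStack services to the resource type
# each one manages; a request is routed by its resource type.
SERVICES = {
    "server":  "Nova",      # compute instances
    "network": "Neutron",   # networks and IP addresses
    "object":  "Swift",     # unstructured object storage
    "volume":  "Cinder",    # persistent block storage
    "image":   "Glance",    # virtual disk images
    "token":   "Keystone",  # identity and authentication
}

def route(resource_type):
    """Name the service responsible for a given resource type."""
    return SERVICES.get(resource_type, "unknown")

print(route("volume"))  # Cinder
print(route("image"))   # Glance
```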

Advantages of using OpenStack


● It enables rapid provisioning of resources, which makes orchestration and scaling
resources up and down easy.
● Deploying applications with OpenStack does not consume a large amount of time.
● Because resources are scalable, they are used more wisely and efficiently.
● The regulatory compliances associated with its usage are manageable.

Disadvantages of using OpenStack


● OpenStack is not very robust where orchestration is concerned.
● Even today, the APIs provided and supported by OpenStack are not compatible with
many hybrid cloud providers, so integrating solutions can be difficult.
● Like all cloud platforms, OpenStack services also carry the risk of security breaches.

How does OpenStack Work?


Basically, OpenStack is a series of commands, known as scripts. These scripts are bundled
into packages, called projects, which carry out the tasks that create cloud environments. To
construct those environments, OpenStack relies on two other types of software:

● Virtualization, which provides a layer of virtual resources abstracted from the hardware.
● A base operating system (OS) that executes the commands provided by the OpenStack
scripts.
So all three technologies (virtualization, the base operating system, and OpenStack) must
work together.

As we know, Horizon is the interface to the OpenStack environment: anything the user wants
to do goes through the Horizon dashboard. The dashboard is a simple graphical user interface
with multiple modules, where each module performs a specific task.

Every action in OpenStack works through a service API call, and each API call is first
validated by Keystone. So you must log in as a registered user with your username and
password before you enter the OpenStack dashboard.
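The Keystone-validated API flow above can be simulated in a few lines. The function names and the credential check below are illustrative stand-ins, not the real OpenStack APIs:

```python
# Minimal simulation of the flow: every service API call must carry a
# token that "Keystone" issued; calls without a valid token are refused.
# All names and the credential check are illustrative, not OpenStack's
# actual interfaces.
import secrets

VALID_TOKENS = set()

def keystone_login(user, password):
    """Issue a token after a (toy) credential check."""
    if (user, password) == ("admin", "secret"):
        token = secrets.token_hex(8)
        VALID_TOKENS.add(token)
        return token
    raise PermissionError("bad credentials")

def nova_create_instance(token, name):
    """A 'compute' API call: validated against Keystone before running."""
    if token not in VALID_TOKENS:
        raise PermissionError("invalid token")
    return {"instance": name, "status": "ACTIVE"}

token = keystone_login("admin", "secret")
print(nova_create_instance(token, "vm-1"))  # succeeds with a valid token
```

In a real deployment the dashboard performs this login on your behalf, and every subsequent click translates into a token-bearing API call like the one sketched here.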

Once you successfully log in to the OpenStack dashboard, you will see options to create new
instances and volumes (Cinder) and to configure the network.

Instances are simply virtual machines or environments. To create a new VM, use the
'Instances' option in the OpenStack dashboard. Inside these instances you can configure your
cloud; instances can run RedHat, openSUSE, Ubuntu, and so on.

The creation of an instance is also an API call. You can configure network information in the
instances and connect them to a Cinder volume to add more services.

After an instance is created successfully, you can configure it, access it through the CLI, and
add whatever data you want. You can even set up an instance to manage and store snapshots
for future reference or backup purposes.

AWS Greengrass

AWS IoT Greengrass is software that extends cloud capabilities to local devices. It lets
devices collect and analyze data closer to the source (the IoT devices themselves), respond
more quickly, and communicate securely over local networks even when they are not
connected to the cloud. These devices are collectively known as a Greengrass group.
Greengrass groups are configured and defined from the cloud but do not "need" the cloud to
connect and communicate with each other.

In AWS IoT Greengrass, devices securely communicate over a local network and exchange
data without a connection to the cloud. AWS IoT Greengrass achieves this through a local
pub/sub message manager that can buffer messages while there is no connectivity, thus
preserving messages to and from the cloud.
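The local pub/sub message manager described above can be sketched as a queue that buffers while the cloud link is down and flushes on reconnect. This is a toy model, not the actual Greengrass implementation:

```python
from collections import deque

class LocalMessageManager:
    """Toy version of a local pub/sub manager: while the cloud link is
    down, messages queue locally; on reconnect the backlog is flushed
    in order. Illustrative only, not the Greengrass internals."""

    def __init__(self):
        self.connected = False
        self.buffer = deque()     # messages held while offline
        self.delivered = []       # messages that reached the cloud

    def publish(self, topic, payload):
        msg = (topic, payload)
        if self.connected:
            self.delivered.append(msg)
        else:
            self.buffer.append(msg)   # preserve until connectivity returns

    def reconnect(self):
        self.connected = True
        while self.buffer:            # flush the backlog in arrival order
            self.delivered.append(self.buffer.popleft())

mgr = LocalMessageManager()
mgr.publish("sensors/temp", 21.5)     # buffered: no connectivity yet
mgr.reconnect()
mgr.publish("sensors/temp", 22.0)     # delivered immediately
print(mgr.delivered)  # [('sensors/temp', 21.5), ('sensors/temp', 22.0)]
```

The key property is that no message is lost during an outage: offline publishes are preserved locally and arrive at the cloud in order once the link is restored.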

Building an AWS Greengrass group:

Step 1: Establish a Greengrass core: Every group needs a Greengrass core to function. At the
center of the group is a physical device on which the Greengrass core software is installed;
this software securely connects the device to AWS. There can be only one core per group.

Step 2: Build the group: Once the core is established, we can add devices to the group:
devices registered in the cloud, other AWS IoT-provisioned devices, or AWS Lambda
functions, which are essentially simple programs that process or respond to data. Presently, a
Greengrass group can contain up to 200 devices, and a device can be a member of up to 10
groups.
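The group limits quoted above (one core per group, up to 200 devices per group, a device in up to 10 groups) can be enforced with a simple check. This sketch is illustrative only; the real service enforces these limits server-side:

```python
# Sketch enforcing the documented Greengrass group limits. The data
# structures here are invented for illustration.
MAX_DEVICES_PER_GROUP = 200
MAX_GROUPS_PER_DEVICE = 10

def add_device(group, device, memberships):
    """Try to add a device to a group.

    memberships maps device -> number of groups it already belongs to.
    """
    if len(group["devices"]) >= MAX_DEVICES_PER_GROUP:
        return "group full"
    if memberships.get(device, 0) >= MAX_GROUPS_PER_DEVICE:
        return "device in too many groups"
    group["devices"].append(device)
    memberships[device] = memberships.get(device, 0) + 1
    return "added"

group = {"core": "core-1", "devices": []}   # exactly one core per group
memberships = {}
print(add_device(group, "sensor-1", memberships))  # added
```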

Step 3: Code the group: Once deployed, the core and devices can communicate even without
a cloud connection.

IoT vs. AWS IoT Greengrass:

The IoT (Internet of Things) is a network of connected devices called "things" that are
connected to a cloud server (a Fitbit, a fridge, or a thermal sensor could each be a "thing").
Data aggregated from these things onto the cloud server can be monitored in real time so that
systems can react or respond immediately. The IoT differs from AWS IoT Greengrass in two
core competency areas:

● Connectivity: "Things" on the cloud server must always be connected in order to
communicate, relay data, and analyze metrics, which makes connectivity vital for
maintaining control over things on the server.
● Security: An essential aspect of any project, irrespective of the platform, method, or
technology used. Though relatively secure because it forms a closed group, the IoT is
still susceptible to data breaches.

How AWS IoT Greengrass addresses this:

AWS IoT Greengrass, by its basic architecture, is built to communicate both with and without
access to the cloud.
Further, AWS IoT Greengrass protects user data and maximizes security through secure
authentication and authorization of devices, secure connections on the local network, and
secured communication lines between local devices and between devices and the cloud.
Device security credentials function in a group until they are revoked, and even if
connectivity to the cloud is disrupted, the devices can continue to communicate locally and
securely.

Benefits of AWS Greengrass:

• Real-time response: AWS IoT Greengrass devices can act locally on the data they generate.
While they use cloud capabilities for management, analytics, and storage, they can respond and
react immediately – almost in real-time, if required. The local resource access feature allows
AWS Lambda functions deployed on AWS IoT Greengrass Core devices to use local device
resources to access and process data locally.

• Offline operation: AWS IoT Greengrass devices can collect, process, and export data
streams, irrespective of whether the group is offline or online. Even with intermittent
connectivity, when a device reconnects, AWS IoT Greengrass synchronizes its data with
cloud services, providing smooth functioning independent of connectivity.

• Secure communication: AWS IoT Greengrass secures, authenticates, and encrypts device data
sourced from both local and cloud communication. Without a proven identity, data cannot be
exchanged between devices or the device and the cloud.

• Simple device programming: Developing code in the cloud and deploying it onto "things"
through AWS IoT, AWS Lambda, or containers is simple, since AWS IoT Greengrass enables
locally executable Lambda functions instead of complex embedded software.

• Low Cost: Better data analytics can be achieved at a lower cost with AWS IoT Greengrass,
because it assimilates and aggregates data locally and transmits only "filtered" data to the
cloud. This reduces the amount of raw data sent to the cloud, minimizing cost and increasing
the quality of the data you send.

• Broader platform support: AWS IoT Greengrass is flexible software: any device that meets the
minimum hardware and software requirements can be added to a Greengrass group. AWS IoT
Greengrass continues to evolve, and more features will be added as the technology advances.
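The offline operation described above amounts to a store-and-forward pattern. The sketch below is plain Python, not Greengrass code; it only illustrates the idea of buffering data locally during an outage and synchronizing with the cloud on reconnect:

```python
import collections

class StoreAndForward:
    """Buffers telemetry locally while offline; flushes to the cloud on reconnect."""

    def __init__(self):
        self.buffer = collections.deque()
        self.cloud = []          # stand-in for the real cloud endpoint
        self.online = False

    def publish(self, reading):
        if self.online:
            self.cloud.append(reading)
        else:
            self.buffer.append(reading)   # keep data during the outage

    def reconnect(self):
        self.online = True
        while self.buffer:                # synchronize buffered data in order
            self.cloud.append(self.buffer.popleft())

gw = StoreAndForward()
gw.publish({"temp": 21.5})    # offline: buffered locally
gw.publish({"temp": 21.7})
gw.reconnect()                # back online: buffer drains to the cloud
gw.publish({"temp": 21.9})    # online: sent directly
```

A real Greengrass core adds authentication, retries, and local compute on top of this pattern, but the ordering guarantee on reconnect is the essence of smooth operation independent of connectivity.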

Greengrass component
A software module that is deployed to and runs on a Greengrass core device. All software that is
developed and deployed with AWS IoT Greengrass is modeled as a component. AWS IoT
Greengrass provides pre-built public components that provide features and functionality that you
can use in your applications. You can also develop your own custom components, on your local
device or in the cloud. After you develop a custom component, you can use the AWS IoT
Greengrass cloud service to deploy it to single or multiple core devices. You can create a custom
component and deploy that component to a core device. When you do, the core device
downloads the following resources to run the component:

● Recipe: A JSON or YAML file that describes the software module by defining
component details, configuration, and parameters.
● Artifact: The source code, binaries, or scripts that define the software that will run on
your device. You can create artifacts from scratch, or you can create a component using a
Lambda function, a Docker container, or a custom runtime.
● Dependency: The relationship between components that enables you to enforce automatic
updates or restarts of dependent components. For example, you can have a secure
message processing component dependent on an encryption component. This ensures that
any updates to the encryption component automatically update and restart the message
processing component.
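For illustration, a minimal component recipe in the Greengrass v2 YAML style might look like the following; the component name, publisher, and script path are placeholders, not a real deployment:

```yaml
---
RecipeFormatVersion: "2020-01-25"
ComponentName: com.example.HelloWorld
ComponentVersion: "1.0.0"
ComponentDescription: Minimal custom component that prints a greeting.
ComponentPublisher: Example Corp
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      # The artifact (hello_world.py) is downloaded alongside this recipe.
      Run: python3 -u {artifacts:path}/hello_world.py
```

The recipe defines the component details, while the artifact it references carries the actual code; dependencies would be listed under a ComponentDependencies section.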

Business Process Management


Every enterprise is made up of multiple divisions or cost centers, each with defined processes for
how it does business. Business Process Management (BPM) is a continuous cycle of evaluating
and improving organizational processes. For example, consider the vacation approval process in
an HR department. Without BPM software (BPMS), many email exchanges would have to occur.
With BPMS, however, vacation requests can be automated, improving document management,
speeding up requests, and increasing system efficiency. A BPMS provides a
complete suite for modeling, managing and monitoring business processes.
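As a sketch of how a BPMS models such a process, the vacation approval example can be captured as an explicit state machine. The states and transitions below are illustrative, not any product's API:

```python
from enum import Enum

class State(Enum):
    SUBMITTED = "submitted"
    MANAGER_REVIEW = "manager_review"
    APPROVED = "approved"
    REJECTED = "rejected"

# Allowed transitions model the process explicitly instead of ad-hoc emails.
TRANSITIONS = {
    State.SUBMITTED: {State.MANAGER_REVIEW},
    State.MANAGER_REVIEW: {State.APPROVED, State.REJECTED},
}

class VacationRequest:
    def __init__(self, employee, days):
        self.employee = employee
        self.days = days
        self.state = State.SUBMITTED
        self.history = [State.SUBMITTED]   # audit trail for monitoring

    def advance(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

req = VacationRequest("alice", days=5)
req.advance(State.MANAGER_REVIEW)
req.advance(State.APPROVED)
```

Because every transition is validated and recorded, the same model supports the monitoring and evaluation that the BPM life cycle calls for.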

The BPM Life Cycle


The BPM life cycle consists of four phases. These are listed below and outlined in Figure:
1. The design phase consists of identifying existing procedures and capturing these business
processes into process models.
2. The implementation phase deploys the results of the design phase. A BPMS package can
be used to house these processes.
3. The enactment phase is the runtime phase where the business processes are deployed into
production and monitored by a BPMS.
4. The evaluation phase monitors the information gathered through the enactment phase and
uses it to review the business process in action. Findings of the evaluation phase are input
for the next iteration of the life cycle.

Business Process as a Service (BPaaS)


Business process as a service, or BPaaS, is a type of business process outsourcing (BPO) delivered
based on a cloud services model. BPaaS is connected to other services, including SaaS, PaaS and
IaaS, and is fully configurable. BPaaS provides companies with the people, processes and
technology they need to operate as a pay-per-use service by making use of the availability and
efficiency of a cloud-based system. This approach to operations greatly reduces total cost of
ownership by providing an on-demand solution based on services needed as opposed to purchasing
a package deal tied into a single application.

Business benefits of Business Process as a Service (BPaaS)


BPaaS offers many business benefits, including:

● Product/service deliverability: From managing inventory to organizing email and
customer records, BPaaS helps companies facilitate the delivery of products and services
in an automated, streamlined way with the help of cloud technologies. BPaaS is standardized
for use across industries and organizations, so it’s flexible and repeatable, resulting in
higher efficiency and, ultimately, better service and experience for customers.
● Cutting edge at reduced cost: BPaaS provides a business with the latest digital tools,
technologies, processes and talent to improve its efficiency, service and the customer
experience, without the large capital investment traditionally required. By implementing
BPaaS, companies can shift to a pay-per-use consumption model and reduce total cost of
ownership.
● Accommodates fluctuating business needs: BPaaS can scale on-demand when a company
experiences a peak workload. Due to its innate configurability applicable across multiple
business areas, and its interaction with other foundational cloud services like SaaS, the
service can make use of its cloud foundation to scale to accommodate large fluctuations
in business process needs.
Broad Approaches to Migrating into the Cloud

A cloud migration strategy is the plan an organization makes to move its data and applications
from an on-premises architecture to the cloud. Not all workloads benefit from running on cloud-
based infrastructure, so it is important to validate the most efficient way to prioritize and migrate
applications before going live. A systematic, documented strategy is crucial.

A cloud migration is when a company moves some or all of its data center capabilities into the
cloud, usually to run on the cloud-based infrastructure provided by a cloud service provider such
as AWS, Google Cloud, or Azure. As more and more companies have already transitioned to the
cloud, cloud migrations are increasingly taking place within the cloud, as companies migrate
between different cloud providers (known as cloud-to-cloud migration). But for those making the
initial foray to the cloud, there are a few critical considerations to be aware of, which we’ll take a
look at below.

What are the Main Benefits of Migrating to the Cloud?


Here are some of the benefits that compel organizations to migrate resources to the public cloud:
● Scalability - cloud computing can scale to support larger workloads and more users,
much more easily than on-premises infrastructure. In traditional IT environments,
companies had to purchase and set up physical servers, software licenses, storage and
network equipment to scale up business services.
● Cost - because cloud providers take over maintenance and upgrades, companies migrating to the
cloud can spend significantly less on IT operations. They can devote more resources to
innovation - developing new products or improving existing products.
● Performance - migrating to the cloud can improve performance and end-user experience.
Applications and websites hosted in the cloud can easily scale to serve more users or higher
throughput, and can run in geographical locations near to end-users, to reduce network
latency.
● Digital experience - users can access cloud services and data from anywhere, whether they
are employees or customers. This contributes to digital transformation, enables an
improved experience for customers, and provides employees with modern, flexible tools.
● Productivity - since the cloud provider manages the complexity of the infrastructure, teams
can focus on productivity and on continuously growing the business.
● Flexibility - services can be used flexibly and accessed on demand from anywhere, at any time.
● Profitability - the pay-per-use model delivers greater profitability to customers.
● Agility - the cloud keeps pace with rapid changes in technology and can provide newer,
more advanced setups quickly as requirements change.
● Security - security is a major concern that is largely taken care of by cloud service providers.
● Recovery - the cloud provides backup and recovery solutions to businesses with less time and
upfront investment.

What are Common Cloud Migration Challenges?


Cloud migrations can be complex and risky. Here are some of the major challenges facing many
organizations as they transition resources to the cloud.
● Lack of Strategy: Many organizations start migrating to the cloud without devoting
sufficient time and attention to their strategy. Successful cloud adoption and
implementation requires rigorous end-to-end cloud migration planning. Each application
and dataset may have different requirements and considerations, and may require a
different approach to cloud migration. The organization must have a clear business case
for each workload it migrates to the cloud.
● Cost Management: When migrating to the cloud, many organizations have not set clear
KPIs to understand what they plan to spend or save after migration. This makes it
difficult to understand if migration was successful, from an economic point of view. In
addition, cloud environments are dynamic and costs can change rapidly as new services
are adopted and application usage grows.
● Vendor Lock-In: Vendor lock-in is a common problem for adopters of cloud
technology. Cloud providers offer a large variety of services, but many of them cannot be
extended to other cloud platforms. Migrating workloads from one cloud to another is a
lengthy and costly process. Many organizations start using cloud services, and later find
it difficult to switch providers if the current provider doesn't suit their requirements.
● Data Security and Compliance: One of the major obstacles to cloud migration is data
security and compliance. Cloud services use a shared responsibility model, where they
take responsibility for securing the infrastructure, and the customer is responsible for
securing data and workloads. So while the cloud provider may provide robust security
measures, it is your organization’s responsibility to configure them correctly and ensure
that all services and applications have the appropriate security controls. The migration
process itself presents security risks. Transferring large volumes of data, which may be
sensitive, and configuring access controls for applications across different environments,
creates significant exposure.
Once the IT department has fully addressed these risk factors, they can move on to plan the best
cloud migration approach to meet the company’s business objectives and requirements. While
there are a number of approaches used in the industry, below are the most broad:

● Lift and shift: This approach involves mapping the on-premises hardware and/or VMs to
similarly sized cloud instances. For example, if a company’s front-end application
server has 4 CPUs, 64GB of RAM, and 512GB of local storage, it would use a cloud
instance that matches that configuration as closely as possible. The challenge with this
approach is that on-premises deployments are typically over-provisioned with respect to
resources in order to meet peak loads, as they lack the elastic, auto-scaling features of the cloud.
This results in increased cloud costs, which may be acceptable if this is a short-term approach.
● Refactor and re-architect: To make the most of cloud features such as auto-scaling,
migration can be the forcing function to take some time and re-architect the
application to be more performant while keeping costs under control. It is also a good
time to re-evaluate technology choices, as a company may be able to switch some solutions
from more expensive commercial ones, to open-source or cloud-native offerings.
● Shelve and spend: This third approach involves retiring a monolithic on-premises
application and moving to a SaaS solution. An example of this would be an HCM (Human
Capital Management) application, which is oftentimes a disparate set of code bases tied
together with a relational database, migrating to an offering such as Workday HCM. This
allows the modernisation of business logic and offloads the operational burden of the
service and infrastructure to the SaaS provider.
While there are a number of hurdles and challenges to overcome when it comes to cloud
migration, these approaches can ensure that CIOs and CSOs take the best route to
capitalize on the benefits of moving to the cloud while minimising risk.

The Seven-Step Model of Migration into a Cloud

1. ASSESSMENT
Migration starts with an assessment of the issues relating to migration, at the application, code,
design, and architecture levels. Moreover, assessments are also required for tools being used,
functionality, test cases, and configuration of the application. The proof of concepts for migration
and the corresponding pricing details will help to assess these issues properly.

2. ISOLATE
The second step is the isolation of all the environmental and systemic dependencies of the
enterprise application within the captive data center. These include library, application, and
architectural dependencies. This step results in a better understanding of the complexity of the
migration.

3. MAP
A mapping construct is generated to separate the components that should reside in the captive data
center from the ones that will go into the cloud.

4. RE-ARCHITECT
It is likely that a substantial part of the application has to be re-architected and implemented in the
cloud. This can affect the functionalities of the application and some of these might be lost. It is
possible to approximate lost functionality using cloud runtime support API.

5. AUGMENT
The features of cloud computing service are used to augment the application.

6. TEST
Once the augmentation is done, the application needs to be validated and tested. This is to be done
using a test suite for the applications on the cloud. New test cases due to augmentation and proof-
of-concepts are also tested at this stage.

7. OPTIMISE
The test results from the last step can be mixed and so require iteration and optimization. It may
take several optimizing iterations for the migration to be successful. It is best to iterate through
this seven step model as this will ensure the migration to be robust and comprehensive.

Efficient Steps for migrating to cloud

Migrating to a cloud environment can help improve operational performance and agility, workload
scalability, and security. From virtually any source, businesses can migrate workloads and quickly
begin capitalizing on the following hybrid cloud benefits:

● Greater agility with IT resources on demand, which enables companies to scale during
unexpected surges or seasonal usage patterns.
● Reduced capital expenditure, made possible by shifting from an upfront capital investment
model to a pay-as-you-go operating expense approach.
● Enhanced security with various options throughout the stack, from physical hardware and
networking to software and people.
Before embarking on the cloud migration process, use the steps below to gain a clear understanding
of what’s involved.

1. Develop a strategy
Before embarking on your journey to the cloud, clearly establish what you want to accomplish.
This starts with capturing baseline metrics of your IT infrastructure to map workloads to your
assets and applications. Having a baseline understanding of where you stand will help you establish
cloud migration key performance indicators (KPIs), such as page load times, response times,
availability, CPU usage, memory usage, and conversion rates.

Strategy development should be done early and in a way that prioritizes business objectives over
technology, and these metrics will enable measurement across a number of categories.
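Capturing baseline metrics and comparing them post-migration can be as simple as the following sketch; the KPI names come from the list above, but the numbers are illustrative:

```python
# Illustrative baseline vs. post-migration KPI measurements (numbers are made up).
baseline = {"page_load_ms": 1800, "availability_pct": 99.0, "cpu_util_pct": 85}
post_migration = {"page_load_ms": 900, "availability_pct": 99.9, "cpu_util_pct": 60}

def kpi_deltas(before, after):
    """Percentage change per KPI; negative means the metric went down."""
    return {k: round(100 * (after[k] - before[k]) / before[k], 1) for k in before}

deltas = kpi_deltas(baseline, post_migration)
```

Whether a negative delta is good depends on the KPI (lower page load times are a win; lower availability is not), which is why targets should be set per metric before migrating.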

2. Identify the right applications


Not all apps are cloud friendly. Some perform better on private or hybrid clouds than on a public
cloud. Some may need minor tweaking while others need in-depth code changes. A full analysis
of architecture, complexity and implementation is easier to do before the migration rather than
after.

As you evaluate which applications to move to the cloud, keep these questions in mind:
● Which applications can be moved as-is, and which will require a redesign?
● If a redesign is necessary, what is the level of complexity required?
● Does the cloud provider have any services that allow migration without reconfiguring
workloads?
● What is the return on investment for each application you will be moving, and how long
will it take to achieve it?
● For applications where moving to the cloud is deemed cost-effective and secure, which
type of cloud environment is best — public, private, or multicloud?
An analysis of your architecture and a careful look at your applications can help determine
what makes sense to migrate.

3. Secure the right cloud provider


A key aspect of optimization will involve selecting a cloud provider that can help guide the cloud
migration process during the transition and beyond. Some key questions to ask include:

● What tools, including third-party, do you have available to help make the process easier?
● What is its level of experience?
● Can it support public, private, and multi cloud environments at any scale?
● How can it help you deal with complex interdependencies, inflexible architectures, or
redundant and out-of-date technology?
● What level of support can it provide throughout the migration process?
Moving to the cloud is not simple. Consequently, the service provider you select should
have proven experience managing the complex tasks required for a cloud migration at a
global scale. This includes providing service-level agreements that include
milestone-based progress and results.

4. Maintain data integrity and operational continuity


Managing risk is critical, and sensitive data can be exposed during a cloud migration. Post-
migration validation of business processes is crucial to ensure that automated controls are
producing the same outcomes without disrupting normal operations.

5. Adopt an end-to-end approach


Service providers should have a robust and proven methodology to address every aspect of the
migration process. This should include the framework to manage complex transactions on a
consistent basis and on a global scale. Make sure to spell all of this out in the service-level
agreement (SLA) with agreed-upon milestones for progress and results.

6. Execute your cloud migration


If you’ve followed the previous steps carefully, this last step should be relatively easy. However,
how you migrate to the cloud will partially depend on the complexity and architecture of your
application(s) and the architecture of your data. You can move your entire application over, run a
test to see that it works, and then switch over your on-premises traffic. Alternatively, you can take
a more piecemeal approach, slowly moving customers over, validating, and then continuing this
process until all customers are moved to the cloud.
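The piecemeal approach above can be sketched as migrating customers in validated waves; `validate` here is a stand-in for whatever KPI checks you run after each cutover:

```python
def migration_waves(customers, wave_size):
    """Split customers into small waves so each can be validated before the next."""
    return [customers[i:i + wave_size] for i in range(0, len(customers), wave_size)]

def migrate(customers, wave_size, validate):
    """Cut customers over wave by wave; roll back the last wave if validation fails."""
    migrated = []
    for wave in migration_waves(customers, wave_size):
        migrated.extend(wave)                  # cut this wave over to the cloud
        if not validate(migrated):             # e.g. compare KPIs to the baseline
            migrated = migrated[:-len(wave)]   # roll the failing wave back
            break
    return migrated

done = migrate(list(range(10)), wave_size=3, validate=lambda m: True)
```

The all-at-once alternative is the degenerate case where `wave_size` covers everyone; the trade-off is a simpler cutover against a much larger blast radius if validation fails.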

Risks in Cloud Migration and How You Can Avoid Them

1. Incompatibility of the Current Architecture: Many organizations find that the complex
nature of their present IT architecture is one of the prime risks they face during cloud migration.
It slows their migration because they need to find individuals with the relevant IT skills who
can make the whole architecture cloud-ready at the required speed.

How to Avoid: To make the architecture ready for cloud migration, you need to hire a team of IT
experts that will fix tech debt, review the legacy architecture, make comprehensive documentation,
and measure interdependent parts.
In case you want to mix private and public clouds with on-premise assets for making a hybrid
environment, you should re-design your in-house IT infrastructure for reducing interoperability
issues and inconsistencies among several systems.

2. Extra Latency: Extra latency is one of the underrated risks in cloud migration. It can occur
when you access databases, apps, and services in the cloud. If you have apps that need instant
responses, delays of even a few seconds can do major damage to your business: they cause
disappointment and frustration, and also affect your brand reputation.

How to Avoid: To get rid of latency problems, you should first understand its causes:
misconfigured QoS (Quality of Service) and the geographical distance between servers and client
devices. Many ways are there to solve latency problems:
● Divide traffic flows
● Optimize and localize the network
● Offload internet traffic at the edge
● Build multi-cloud connectivity
● Connect with ecosystems and business partners for online business or data exchanges.
In case the aforesaid cloud migration strategies are costly for you or do not help, ponder keeping
such apps on-premise.

3. Complexity Around Security: According to one study, complexity around security is the major
cloud migration risk that most companies (57%) encounter, followed by pricing and legacy
infrastructure.

Transferring data to the cloud brings many security risks, such as insider threats, accidental errors,
external attacks, malware, misconfigured servers, problems on the side of the cloud provider,
insecure APIs, contractual violations, compliance breaches, etc.

Challenges with cloud adoption: Some companies already know these risks and take
precautions to avoid them. Many others, however, fail to do so and, as a result,
struggle to fix security problems because they lack the necessary skills.

According to the same report, 92% of respondents say they need to improve their cloud security
skills, whereas 84% confirmed they need to add employees to bridge the gap. Just 27% of
respondents were confident in their ability to identify every cloud security alert.

How to Avoid: Leading cloud service providers like AWS and Azure offer strong security. They
protect your physical assets from unauthorized access. Most cloud vendors have a
large portfolio of compliance services, including FIPS, CJIS, HIPAA, DISA, ITAR, etc. They
spend heavily on security to safeguard client data from cyber threats.

Moreover, they provide exclusive solutions to keep your client data secure while migrating to the
cloud. Nevertheless, you should hire an experienced security team and some trained DevOps
engineers who can make the required configurations and give assurance about the long-term data
security in the cloud:
● Enable multi-factor authentication
● Encrypt data assets in transit and at rest
● Establish user access policies
● Configure a firewall
● Train others on ways of maintaining security in the cloud
● Implement the required controls
● Isolate individual workloads to limit the damage an attacker could cause

4. Inadequacy of Visibility and Control: Limited visibility into the public cloud is one of the major
risks in migrating to the cloud. It affects network and app functionality. When you depend on your
on-premises data centers, you have complete control over your resources, including data centers,
networks, and physical hosts.

However, when moving to external cloud services, the responsibility for some policies shifts to
cloud service providers, depending on the service type. As a result, the organization needs visibility
into public cloud workloads.

According to a recent survey by Dimensional Research, 95% of responding companies say visibility
issues have caused network or app performance problems. According to 38% of respondents,
inadequate visibility is the main factor in app outages, while 31% cite it in network outages.

How to Avoid: Several tools can now help you monitor app and network performance.
Third-party security vendors and cloud service providers offer many solutions. Here are
a few requirements for effective monitoring tools:
● Automatic response to certain kinds of threats and alerts
● A gentle learning curve
● Options for configuring various types of alerts
● Strong analytics
● Simple integration with other solutions
● Basic monitoring capabilities that work without manual configuration
5. Wasted Cloud Costs: Cloud providers’ pricing models are flexible but sometimes hard to
understand, particularly if you are new to the field. According to the estimation of
Gartner analysts Craig Lowery and Brandon Medford, as much as 70% of cloud spend is wasted.

In cloud computing you pay for compute, data transfer, and storage, and every cloud
vendor offers several storage services, instance types, and transfer options based on your price
needs, use case, and performance expectations.

Finding the right one can be difficult. Organizations that fail to work out what they require
generally waste money because they don’t use the options available to them.

How to Avoid: You should optimize your cloud costs. In case you are unaware of doing it, hire
experts to help you. Some common cloud cost optimization practices are as follows:
● Use discounts
● Terminate underused instances
● Use spot instances for serverless workloads and anything that does not need high availability
● Manage your workloads
● Invest in reserved instances
● Take advantage of autoscaling
● Check whether hosting in another region could lower costs
● Set alerts for crossing pre-defined spend thresholds
● Shift infrequently accessed storage to cheaper tiers
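For example, spend-threshold alerts reduce to a cumulative check against the budget. The sketch below is illustrative; real providers offer managed budget alerts, and the thresholds and amounts here are made up:

```python
def spend_alerts(daily_spend, budget, thresholds=(0.5, 0.8, 1.0)):
    """Return the budget thresholds crossed by cumulative spend, in order."""
    total, crossed = 0.0, []
    for amount in daily_spend:
        total += amount
        for t in thresholds:
            if t not in crossed and total >= t * budget:
                crossed.append(t)   # fire an alert at 50%, 80%, and 100%
    return crossed

# Five days of spend against a $1000 monthly budget.
alerts = spend_alerts([100, 250, 200, 300, 200], budget=1000)
```

Alerting at fractions of the budget rather than only at 100% gives teams time to terminate underused instances or shift storage tiers before the budget is actually exhausted.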

6. No Proper Cloud Migration Strategy: You should decide whether to use a single cloud
provider or work with many cloud platforms. Each strategy has advantages and
disadvantages. If you choose a single cloud provider, you risk vendor lock-in. Alternatively,
you can work with multiple cloud providers and balance workloads across several cloud platforms.

However, this is costlier and more complex, as every provider offers different tools and services for
cloud management, but it also provides freedom. As per McAfee’s Cloud Adoption and Risk
Report, 78% of companies currently use both Azure and AWS to avoid basic risks in
cloud migration.

Moreover, you need to determine what you will migrate to the cloud and what you will leave in
on-premises data centers. If you choose a hybrid technique, plan accordingly. Not all
parts of your infrastructure are suitable for migration. Don’t keep clients’ data or financial records
in the public cloud.

Enterprise cloud strategy


If you have sensitive data, store it in on-premises data centers, and use a public
cloud platform for flexibility, compute strength, scalability, and connectivity.

How to Avoid: Don’t get caught up in the hype and rush to shift your workloads to a cloud platform,
even if somebody recommends it. What is beneficial for one organization can be destructive
for your business. Without proper planning and a rock-solid cloud migration strategy, you may
end up with system failures and huge expenses.

7. Data Loss: Before migrating to the cloud, ensure to back up all your data, particularly the files
that you will move. During the cloud migration procedure, you may experience some problems
like missing, incomplete, or corrupted files. And in case you have backed up all your data, you can
easily rectify the errors by restoring the data to its previous condition.

How to Avoid: Anything from a security violation to a power outage at a data center may lead to
data loss. So, if you store the backups of databases in the cloud or on a server, you can restore all
the data fast.

Moreover, if you use multiple cloud providers, you don’t have to worry about the
sudden failure of a specific provider’s service. You can always deploy an independent
replica of your app on another provider’s infrastructure.

Make sure to configure backups of the migrated data to save time and money. Don’t forget
to back up your old system so you can recover any missing file if required.

Risk Mitigation Planning for the Cloud

1. Cloud Service Failure: Cloud services are not immune to failure and going dark for any length
of time can damage an organization’s reputation. Therefore, it’s important that organizations
mitigate the risk of cloud service failure through disaster recovery and business continuity
planning.

2. Interception of Data: Data in-flight can be intercepted as it moves to or from the cloud. Risk
mitigation involves ensuring that all data transmissions are strongly encrypted and that data
transmission endpoints are authenticated to ensure that they are legitimate.
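In Python, for instance, the standard-library `ssl` module already enforces both requirements: the default context verifies certificates and checks the endpoint's hostname, and a minimum protocol version can be pinned:

```python
import ssl

# create_default_context() enables certificate verification and hostname
# checking, so the transmission endpoint must prove its identity.
ctx = ssl.create_default_context()

# Refuse legacy protocol versions; TLS 1.2 is a common minimum today.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# ctx can now be passed to socket or HTTP client code to wrap connections
# carrying migration traffic to or from the cloud.
```

The same idea applies regardless of language: never disable certificate verification during a migration, even temporarily, since that is exactly when large volumes of sensitive data are in flight.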

3. Unauthorized Use (Shadow IT): Some cloud providers make it easy for users to acquire new
services on demand without the consent of an organization’s internal IT team. Using software not
supported by an organization is known as “shadow IT,” and it can cause an increase in malware
infections.
4. Data Loss: It doesn’t always take an attack for data to be lost. Accidental deletion by the cloud
provider or a natural disaster, such as a fire or hurricane, can lead to the permanent loss of customer
data. The burden of avoiding data loss does not always fall solely on the provider's shoulders, so
it’s important to understand a provider’s availability and uptime percentages.

5. Reduced Visibility: Organizations lose some visibility and control in the cloud as a portion of
responsibility for the infrastructure moves to the provider. One of the risks of reduced visibility in
a public cloud environment is the ability to verify the secure deletion of data. This is because data
is spread over a number of different storage devices within the provider’s multi-tenancy
environment. The threat increases if an organization adopts a multi-cloud approach.

6. Legal Risks: It can be very difficult to achieve compliance using cloud architecture because
often an organization doesn’t know the location of data. The legal risks of maintaining compliance
with regulations such as HIPAA for healthcare, CJIS for government, or PCI for financial
transactions must be considered to mitigate the risk of regulatory fines or lawsuits.

7. Geographic Location: Different geographic locations have their own set of laws (including
international, federal, state and local). Some organizations may need to limit the locations in which
cloud workloads are housed to avoid the legal requirements of certain jurisdictions.

8. Resource Exhaustion: When an organization does not manage its cloud resources effectively
and fails to prepare to automatically provision additional resources when needed, service can be
degraded and eventually, may not be available at all. Organizations can mitigate the risk of
resource exhaustion through proper capacity planning or by using a cloud provider offering instant
scalability.
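Capacity planning of this kind often reduces to a target-utilization rule. The sketch below is a simplified version of what autoscalers typically compute; the parameters are illustrative:

```python
import math

def desired_capacity(current_instances, cpu_util_pct,
                     target_pct=60, max_instances=20):
    """Scale the fleet so average CPU utilization moves toward the target."""
    needed = math.ceil(current_instances * cpu_util_pct / target_pct)
    # Cap to avoid runaway provisioning, floor to avoid running dry.
    return max(1, min(needed, max_instances))

# Four instances at 90% CPU should scale out toward ~60% average utilization.
plan = desired_capacity(4, 90)
```

The `max_instances` cap is itself part of capacity planning: without it, a traffic spike (or an attack) can exhaust the budget instead of the service.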

9. Multi-tenancy Failure: Exploitation of vulnerabilities within public multi-tenancy
infrastructures that are shared with thousands or even millions of users can lead to a failure to
maintain separation among tenants. An attacker can use this failure to gain access from one
organization's resources to another organization's assets or data.

10. IT Strain: Managing and operating in the cloud may require an organization’s internal IT team
to learn a new model and gain new skills in addition to maintaining their current responsibilities
for on-premise IT. This added complexity could also lead to security gaps in an agency's cloud and
on-premises implementations and compromise security.
Case Study for Cloud Migration

Spotify
Spotify (est. 2006) is a media services provider primarily focused on its audio-streaming platform,
which lets users search for, listen to, and share music and podcasts.

Migration Objective
Spotify’s leadership and engineering team agreed: The company’s massive in-house data centers
were difficult to provision and maintain, and they didn’t directly serve the company’s goal of being
the “best music service in the world.” They wanted to free up Spotify’s engineers to focus on
innovation. They started planning for migration to Google Cloud Platform (GCP) in 2015, hoping
to minimize disruption to product development, and minimize the cost and complexity of hybrid
operation.

Migration Strategy and Results


Spotify invested two years pre-migration in preparing, assigning a dedicated Spotify/Google cloud
migration team to oversee the effort. Ultimately, they split the effort into two parts, services and
data, which took a year apiece. For services migration, engineering teams moved services to the
cloud in focused two-week sprints, pausing on product development. For data migration, teams
were allowed to choose between “forklifting” or rewriting options to best fit their needs.
Ultimately, Spotify’s on-premises-to-cloud migration succeeded in increasing scalability while
freeing up developers to innovate.

Key Takeaways
● Gaining stakeholder buy-in is crucial. Spotify was careful to consult its engineers about
the vision. Once they could see what their jobs looked like in the future, they were all-in
advocates.
● Migration preparation shouldn’t be rushed. Spotify’s dedicated migration team took the
time to investigate various cloud strategies and build out the use case demonstrating the
benefits of cloud computing to the business. They carefully mapped all dependencies. They
also worked with Google to identify and orchestrate the right cloud strategies and solutions.
● Focus and dedication pay huge dividends. Spotify’s dedicated migration team kept
everything on track and in focus, making sure everyone involved was aware of past
experience and lessons already learned. In addition, since engineering teams were fully
focused on the migration effort, they were able to complete it more quickly, reducing the
disruption to product development.
