Cloud Computing Unit-1

Server: A server is a hardware device or software program that processes requests sent over a network and replies to them. A client is the device that submits a request and waits for a response from the server. In the context of the Internet, the computer system that accepts requests for online files and transmits those files to clients is referred to as a “server”.

Cloud: physically, a cloud is just a big building (a data center) filled with computers or servers. These buildings house huge banks of interconnected servers that run applications, store and process data for users, host websites, and so on. They are accessed through the Internet.
Computing: the computing services that let cloud applications and users access those cloud resources through the Internet.
What is cloud computing?
“Cloud computing is the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.”
In other words, it refers to storing data and applications in the cloud and running them there, rather than on our local computer or server.

cloud computing gives you the ability to remotely work on and transform data (for example,
coding an application remotely).

The companies that run and maintain these clouds are cloud providers. They are third-party providers that take on companies' workloads by providing cloud-based services.
There are five different cloud roles:
i. Cloud providers
ii. Cloud consumers
iii. Cloud auditors
iv. Cloud brokers
v. Cloud carriers

Cloud storage vs cloud computing:


Cloud storage is simply a data storage and sharing medium that lets us store our data and retrieve it whenever required, from anywhere (e.g., Google Drive).
Cloud computing is a computing service built on a large group of interconnected computers (personal machines or network servers) that can be accessed through public or private networks. It is a technology that uses the Internet and central remote servers to maintain data and the applications connected to them.

Nowadays, the vast majority of applications run in the cloud because of its scalability and security, with low cost and little maintenance.

Why cloud computing?


Many people ask: why cloud computing? Why not maintain our own servers?
Compared to owning servers, a cloud server costs less, and we also save on building power, maintenance costs, and server deployment space.
Reliability: the provider is responsible for data backup and keeps a backup copy in another data center.
Scalability: the cloud uses a pay-as-you-use model, which helps businesses control their spending on servers and storage, buying capacity only for the particular duration they need it.

Scalable computing over the internet:

 Scalability is the ability to grow and withstand the pressure of increasing demand without being held back by factors such as structure or resources. A system that scales well can cope with a larger workload while maintaining or even improving its effectiveness.

 Scalability is also the capacity of a computing application or product (hardware or software) to continue to function well when its size or volume changes to meet user requirements.
 Over the past 60 years, computing technology has undergone a series of platform and
environment changes. In this section, we assess evolutionary changes in machine
architecture, operating system platform, network connectivity, and application
workload. Instead of using a centralized computer to solve computational problems, a
parallel and distributed computing system uses multiple computers to solve large-
scale problems over the Internet. Thus, distributed computing becomes data-intensive
and network-centric.
 In other words, instead of one centralized computer, parallel and distributed computing uses multiple computers to solve large-scale problems over the Internet. A centralized server is a single high-capacity computer that can handle the workload only up to some limit; what if the workload suddenly increases? Then distributed and parallel server computers play an important role: they multitask and can easily absorb the extra workload (e.g., the cloud).
 There are different types of scalability in the cloud:
Vertical scaling- increasing the capacity or power of an existing machine (e.g., a faster processor or more memory).
Horizontal scaling- increasing the number of machines or nodes (bulk servers) to decrease the load on any particular server.

Hybrid (diagonal) scaling- a combination of vertical and horizontal scaling.

Elasticity:
o Auto-Scaling: Automatically adjusting the number of computing resources
based on current demand. Auto-scaling policies can be set up to add or remove
instances based on metrics like CPU usage, memory usage, or request rate.
o On-Demand Provisioning: Resources are provisioned as needed and de-
provisioned when no longer required, ensuring cost-efficiency and resource
optimization.
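To make the auto-scaling idea concrete, here is a minimal Python sketch of a scaling decision loop. The thresholds, the get_average_cpu() probe, and the instance counter are hypothetical placeholders, not any provider's actual API:

import random

SCALE_UP_THRESHOLD = 0.75    # add an instance above 75% average CPU
SCALE_DOWN_THRESHOLD = 0.25  # remove one below 25% average CPU
MIN_INSTANCES, MAX_INSTANCES = 1, 10
instances = 2

def get_average_cpu() -> float:
    # Placeholder metric probe; a real system reads a monitoring service.
    return random.random()

def autoscale_step() -> int:
    global instances
    cpu = get_average_cpu()
    if cpu > SCALE_UP_THRESHOLD and instances < MAX_INSTANCES:
        instances += 1    # scale out under heavy load
    elif cpu < SCALE_DOWN_THRESHOLD and instances > MIN_INSTANCES:
        instances -= 1    # scale in when demand drops
    return instances

for step in range(5):
    print("instances:", autoscale_step())

A real deployment would express the same policy declaratively to the provider's auto-scaling service rather than running its own loop.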
Load Balancing:
o Traffic Distribution: Distributing incoming network traffic across multiple
instances or servers to ensure no single instance becomes overwhelmed. This
improves application performance and availability.
o Health Checks: Monitoring the health of instances and redirecting traffic
away from unhealthy or failing instances to maintain reliability.
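As an illustration of traffic distribution combined with health checks, here is a minimal round-robin balancer sketch in Python. The server addresses and the is_healthy() test are invented for the example:

from itertools import cycle

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
unhealthy = {"10.0.0.2"}   # pretend this node failed its health check

def is_healthy(server: str) -> bool:
    # Placeholder: real balancers probe an HTTP endpoint or TCP port.
    return server not in unhealthy

rotation = cycle(servers)

def next_server() -> str:
    # Return the next healthy server, skipping failed ones.
    for _ in range(len(servers)):
        server = next(rotation)
        if is_healthy(server):
            return server
    raise RuntimeError("no healthy servers available")

for _ in range(4):
    print(next_server())   # traffic never reaches 10.0.0.2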
Benefits of Scalable Computing
1. Performance Optimization:
o Responsive Performance: Applications can handle varying loads effectively,
ensuring consistent performance during peak times and minimizing latency.
2. Cost Efficiency:
o Pay-as-You-Go: Users only pay for the resources they use, scaling up during
high demand and scaling down during low demand to reduce costs.
3. Flexibility:
o Adaptability: Applications can adapt to changing workloads and user
demands without manual intervention or disruption.
4. High Availability:
o Fault Tolerance: Scaling across multiple instances and regions enhances
redundancy and fault tolerance, improving the availability of applications and
services.
Use Cases
1. E-Commerce: Online retailers experience varying traffic patterns, such as during
sales or promotions. Scalable computing ensures that the website can handle spikes in
traffic without performance degradation.
2. Media and Entertainment: Streaming services require significant computational
power to handle large numbers of simultaneous streams. Scalability ensures a smooth
user experience even during peak usage.
3. Data Analytics: Big data applications often need to process large volumes of data
quickly. Scalable computing can dynamically allocate resources to handle large-scale
data processing tasks.

Scalable computing over the internet in cloud computing enables organizations to efficiently
manage and optimize their resources, ensuring that applications can meet user demands and
maintain performance levels without unnecessary costs.

Age of internet computing-


 Billions of people use the Internet every day. As a result, supercomputer sites and large data centers must provide high-performance computing services to huge numbers of Internet users concurrently. Because of this high demand, the Linpack Benchmark for high-performance computing (HPC) applications is no longer optimal for measuring system performance. The emergence of computing clouds instead demands high-throughput computing (HTC) systems built with parallel and distributed computing technologies.
The age of Internet computing rests mainly on two computing techniques:
 High-performance computing (HPC)
 High-throughput computing (HTC)

High-performance computing: when a system must serve huge traffic, we attach bulk servers or bulk storage capacity. This increases computing performance, and it also shares the workload between the servers, preventing any of them from going down. Example: a government results website stays continuously available to users because bulk servers are connected behind it. Compare the performance difference between 8 GB and 16 GB systems, or between single-core and multicore processors.

High-throughput computing (number of jobs done per second)-

HTC is used in parallel, cloud, and distributed computing technology. HTC systems are built with parallel and distributed computing techniques and upgrade data centers with fast servers, storage systems, and high-bandwidth networks.

Example: if you have to process 100 video clips and each one takes one hour, you need 100 hours to process the videos on one machine. With 100 machines working in parallel, the job completes in 1 hour; this is what the cloud does.
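A rough sketch of this idea in Python using the standard concurrent.futures module; process_clip() is a stand-in for the real one-hour encoding job:

from concurrent.futures import ProcessPoolExecutor

def process_clip(clip_id: int) -> str:
    # Placeholder for the actual per-clip work (e.g., video encoding).
    return f"clip-{clip_id} done"

if __name__ == "__main__":
    # Each worker handles jobs independently; with enough machines or
    # cores, 100 independent jobs finish in roughly the time of one job.
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(process_clip, range(100)))
    print(len(results), "clips processed")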

On HPC Side:
 For many years, HPC systems emphasize the raw speed performance. The speed of
HPC systems has increased from Gflops in the early 1990s to now Pflops in 2010.
This improvement was driven mainly by the demands from scientific, engineering,
and manufacturing communities. For example, the Top 500 most powerful computer
systems in the world are measured by floating-point speed in Linpack benchmark
results. However, the number of supercomputer users is limited to less than 10% of all
computer users. Today, the majority of computer users are using desktop computers or
large servers when they conduct Internet searches and market-driven computing tasks.
 Supercomputers are gradually being replaced by clusters of cooperative computers that share computing resources.
 A cluster is a collection of homogeneous computing nodes that are physically connected together.
 HPC works on the principle of parallel computing.

HTC side:
 HTC systems are connected in peer-to-peer (P2P) networks.
 A P2P network works on the principle of distributed systems. P2P networks are formed for distributed file sharing and content delivery applications.
 A P2P network is built over many client machines; the peer machines are globally distributed.
 P2P, web services, and cloud computing are more focused on HTC applications than on HPC applications.
 Cluster and P2P technologies lead to the development of computational and data grids.

Evolution of Internet Computing in Cloud Computing


1. Origins and Early Developments:
o Initial Concepts: The idea of cloud computing can be traced back to the
1960s with early concepts of time-sharing and utility computing. However, the
modern age of cloud computing began in the early 2000s.
o Birth of Public Cloud Services: Amazon Web Services (AWS) launched its
Elastic Compute Cloud (EC2) in 2006, providing scalable virtual servers over
the internet. This marked the beginning of widely accessible cloud computing.
2. Growth and Expansion:
o Expansion of Cloud Providers: Following AWS, other major cloud providers
like Microsoft Azure (2010), Google Cloud Platform (2011), and IBM Cloud
entered the market, expanding the range of cloud services available.
o Rise of Cloud Service Models: Cloud computing introduced three primary
service models:
▪ Infrastructure as a Service (IaaS): Provides virtualized computing
resources (e.g., AWS EC2, Google Compute Engine).
▪ Platform as a Service (PaaS): Offers a platform for developing and
deploying applications without managing infrastructure (e.g., Heroku,
Google App Engine).
▪ Software as a Service (SaaS): Delivers software applications over the
internet, often on a subscription basis (e.g., Salesforce, Microsoft
Office 365).

Impact on Various Sectors


1. Healthcare: Cloud computing supports the storage and analysis of large volumes of
medical data, telemedicine, and health informatics.
2. Finance: Financial institutions use cloud services for data analysis, fraud detection,
and secure transactions.
3. Retail: E-commerce platforms leverage cloud computing for scalable infrastructure,
data analytics, and customer experience enhancement.
4. Education: Educational institutions use cloud-based tools for online learning,
collaboration, and administration.
The age of internet computing in cloud computing has revolutionized how technology is used
and consumed, providing scalable, flexible, and cost-effective solutions that have
transformed industries and enabled new business models. As technology continues to
advance, the capabilities and applications of cloud computing are likely to expand, driving
further innovation and digital transformation.

Scalable trends and new paradigms-

 New trends: RFID (radio-frequency identification)
 GPS (global positioning systems)
 IoT (Internet of Things)

Computing paradigms-
The high-technology community has argued for many years about the precise definitions of
centralized computing, parallel computing, distributed computing, and cloud computing. In
general, distributed computing is the opposite of centralized computing. The field of parallel
computing overlaps with distributed computing to a great extent, and cloud computing
overlaps with distributed, centralized, and parallel computing.
There are different types of computing paradigms-
 Centralized computing-This is a computing paradigm by which all computer
resources are centralized in one physical system. All resources (processors, memory,
and storage) are fully shared and tightly coupled within one integrated OS. Many data
centres and supercomputers are centralized systems, but they are used in parallel,
distributed, and cloud computing applications.

Parallel computing
 In parallel computing, all processors are either tightly coupled with centralized shared memory or loosely coupled with distributed memory.
 Interprocess communication is accomplished through shared memory.

Distributed computing
 A group of computers works together so as to appear as a single system or computer to the end user.
 A distributed system consists of multiple autonomous computers, each with its own private memory; all these computers communicate through a computer network.

 Distributed computing is a model in which components of a software system are shared among multiple computers or nodes. Even though the software components may be spread out across multiple computers in multiple locations, they're run as one system. This is done to improve efficiency and performance. The systems on different networked computers communicate and coordinate by sending messages back and forth to achieve a defined task.

 Distributed computing can increase performance, resilience, and scalability, making it a common computing model in database and application design.

 In centralized computing we can do a single task at a time, but in distributed computing we can do multiple tasks at a time; this is the major difference between centralized and distributed computing.

 Examples: ATM machines, wireless ad hoc networks, the Internet, intranets.

Properties of distributed computing:

Fault tolerance: when one or more nodes fail, the whole system still works, apart from some loss of performance.

Load sharing: all nodes share the overall workload, and the failure of some nodes increases the pressure on the rest of the systems.

Resource sharing: a computer resource on one host is made available to other hosts on the computer network.

Cluster computing
 A cluster is a group of nodes of the same type, at the same location, formed into a single unit.

 A node is a single-processor or multiprocessor system with memory, input/output functions, and an operating system.
 Two or more nodes may be connected on a single line, or every node may be connected individually through a LAN connection.
 Cluster computing is a collection of tightly or loosely connected computers that work together so that they act as a single entity. The connected computers execute operations all together, creating the impression of a single system. The clusters are generally connected through fast local area networks (LANs).

Types of clusters:
 High-performance clusters
 Load-balancing clusters
 High-availability clusters

Advantages of clusters:
 High performance
 Easy to manage
 Scalable
 Expandability
 Availability
 Flexibility

Disadvantages:
 High cost
 Fault finding is difficult
 More space is required
Applications of Cluster Computing:
 Various complex computational problems can be solved.
 It can be used in the applications of aerodynamics, astrophysics and in data
mining.
 Weather forecasting.
 Image Rendering.
 Various e-commerce applications.
 Earthquake Simulation.
 Petroleum reservoir simulation.

Grid computing

 Grid computing is a distributed architecture that combines computer resources from different locations to achieve a common goal. It breaks down tasks into smaller subtasks, allowing concurrent processing.
 Grid computing is a group of networked computers that work together as a virtual
supercomputer to perform large tasks, such as analysing huge sets of data or weather
modelling. Through the cloud, you can assemble and use vast computer grids for
specific time periods and purposes, paying, if necessary, only for what you use to save
both the time and expense of purchasing and deploying the necessary resources
yourself. Also by splitting tasks over multiple machines, processing time is
significantly reduced to increase efficiency and minimize wasted resources.
 Unlike with parallel computing, grid computing projects typically have no time
dependency associated with them. They use computers that are part of the grid only
when idle, and operators can perform tasks unrelated to the grid at any time. Security
must be considered when using computer grids as controls on member nodes are
usually very loose. Redundancy should also be built in as many computers may
disconnect or fail during processing.

Utility computing
 In utility computing, service providers make computing resources and infrastructure available to users or customers as needed and charge them for specific usage rather than a flat rate. Utility computing laid the billing foundation on which cloud computing builds.
 Example: an IoT setup that turns fans and lights on only when there is a use for them, and automatically turns them off the rest of the time.

Cloud computing

 Cloud computing is the utilization of remote servers hosted on the Internet to store, manage, and process data, rather than relying on local servers or personal computers.
 An Internet cloud can be centralized or distributed.
 Cloud computing is the on-demand delivery of computation, storage, applications, and other IT resources through a cloud services platform over the Internet with a pay-as-you-go business model. Today's cloud computing systems are built using fundamental principles and models of distributed systems.
 Cloud computing applies both parallel and distributed computing.
 Clouds can be built with physical and virtualized resources over large data centers that are centralized or distributed.

Homogenous and heterogenous cloud:


 A heterogeneous cloud combines public and private cloud components from multiple vendors at different levels, for example a management tool from one vendor driving a hypervisor from another, or a single management tool operating various hypervisors.
 For example, you could start with a public cloud provider and then pair it with
a private cloud offering. Thus, the heterogeneous cloud model necessitates
businesses partnering with multiple vendors for their public and private
clouds, then integrating the various solutions to create a multi-cloud
environment.
How does the heterogeneous cloud benefit organizations?
 Using both private and public cloud architecture will give you more freedom
to change your architecture in the future (hybrid or multi-cloud capabilities).
 Using many different components from different vendors integrated to meet
business needs reduces the risk of vendor lock-in.
 A homogeneous cloud is one in which the same vendor provides all services.
In addition, that single vendor provides your public cloud access and any
private cloud you may have, whether on-premises or off-premises.
 Simply put, a homogeneous cloud is one in which a single vendor provides the
entire software stack, from the hypervisor (remote cloud provider) to various
intermediate management layers to the end-user portal.

How does the homogeneous cloud benefit organizations?

 Turnkey solution with “off-the-shelf” functionality.
 Easier to install and configure.
 Easier in terms of operations and management: because the same company provides the public and private portions, they are designed to work together.
 Disaster recovery, security, governance, and monitoring services are built in and available across both environments.
 Because the on-premises portion is delivered as drop-in hardware or a prebuilt rack, it is less expensive.
 Staff need only the skills that are specific to that one provider.
 Every benefit, however, comes with drawbacks.

Drawbacks of a homogeneous cloud
 When something is simple to use, it is often more difficult to leave. By giving a single vendor so much power, users put themselves at the mercy of that vendor's commercial and technical strategy. Leaving that vendor, for whatever reason, becomes risky, costly, and challenging, particularly for security and governance strategies.

Degree of parallelism (DoP)-
 Scaled over the Internet, it refers to the ability to perform multiple tasks or operations concurrently to achieve higher performance, improved efficiency, and scalability.
 Parallelism is used to divide a complex task into smaller, more manageable subtasks that can be executed simultaneously across multiple processing units.
There are multiple types of parallelism:
Bit-level parallelism-
 This is the oldest form of parallelism, from the era of systems that were monolithic, expensive, and processed data bit by bit.
 In such systems, bit-level parallelism gradually transforms bit-serial processing into word-level processing.
Instruction-level parallelism-
 The processor executes multiple instructions simultaneously rather than one instruction at a time.
 As processors evolved from 4-bit to 64-bit words, instruction-level parallelism came into existence, with more than one instruction processed simultaneously or concurrently.
 Examples: pipelining, superscalar execution, multithreading, etc.
Data-level parallelism
 It depends on hardware and compiler support in order to carry out its work efficiently.
 Data-level parallelism became popular through single-instruction, multiple-data (SIMD) architectures and vector machines.
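A small illustration of data-level parallelism in Python, assuming NumPy is installed: the same addition is written element-at-a-time and then as one vectorized operation that the hardware can map onto SIMD units:

import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# Scalar style: one addition per loop iteration.
slow = [x + y for x, y in zip(a, b)]

# Data-parallel style: one expression applied across the whole array;
# NumPy's compiled loop lets the CPU use SIMD instructions underneath.
fast = a + b

print(slow[:3], fast[:3])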
Task-level parallelism
 It came into existence with the development of multicore processors and chip multiprocessors.
 It is harder to exploit than the other types of DoP because of the complexity of coding and compiling for it.
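A minimal task-level parallelism sketch in Python: three independent, hypothetical tasks submitted to a thread pool so they can run concurrently:

from concurrent.futures import ThreadPoolExecutor

# Three unrelated tasks; in task-level parallelism each can run on
# its own core/thread because they do not depend on each other.
def fetch_metrics():  return "metrics fetched"
def compress_logs():  return "logs compressed"
def index_records():  return "records indexed"

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(task) for task in
               (fetch_metrics, compress_logs, index_records)]
    for future in futures:
        print(future.result())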
Job-level parallelism-
 With the development of distributed computing, the granularity of parallelism increased to the level of whole jobs.

The Internet of Things


The concept of the IoT was introduced in 1999 at MIT [40]. The IoT refers to the networked
interconnection of everyday objects, tools, devices, or computers. One can view the IoT as a
wireless network of sensors that interconnect all things in our daily life. These things can be
large or small and they vary with respect to time and place. The idea is to tag every object
using RFID or a related sensor or electronic technology such as GPS.

Technologies for network-based systems-

 Multicore technology refers to a computer or processor that has more than one logical CPU core and can physically execute multiple instructions at the same time.
 Multithreading refers to executing more than one thread at the same time.
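A minimal multithreading sketch using Python's standard threading module; the worker function is a trivial placeholder:

import threading

def worker(name: str):
    # Stand-in for real per-thread work.
    print(f"thread {name} running")

# Two threads execute within one process; on a multicore CPU the
# operating system can schedule them on separate cores.
threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()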

Advances in CPU Processors


Advanced CPUs or microprocessor chips assume a multicore architecture with dual, quad, six, or more processing cores. These processors exploit parallelism at the ILP and TLP levels. Processor speed has grown steadily across generations of microprocessors and CMPs.

Multicore and multithreading technologies:


 The growth of component and network technologies has led to the development of HPC and HTC systems.
 Processor speed is measured in MIPS (million instructions per second); network bandwidth is measured in Mbps (megabits per second) or Gbps (gigabits per second).

GPU computing to exascale and beyond:

 The GPU (graphics processing unit) was first marketed by NVIDIA in 1999.
 A GPU is a graphics coprocessor or accelerator mounted on a computer's graphics or video card.
 The GPU offloads the CPU for graphics-intensive tasks such as video editing applications.
 GPU chips can process a minimum of 10 million polygons per second and are used in nearly all modern computers.
 Cloud GPUs are compute instances that provide hardware acceleration for an application without deploying GPUs on the user's local device, using GPUs on a cloud service instead. They provide more flexibility and bandwidth, resulting in lower hardware costs and total cost of ownership.
 Some GPU features have also been integrated into certain CPUs.
 A modern GPU can be built with hundreds of processing cores.

How does a GPU work?
 Early GPUs worked as coprocessors attached to the CPU.
 NVIDIA GPUs have grown to 128 cores on a single chip.
 Each core in a GPU can handle up to 8 threads of instructions, so up to 1,024 threads can execute concurrently on a single GPU; this is called massive parallelism.
 Modern GPUs, originally accelerators for graphics and video coding, are used in HPC systems to power supercomputers with massive parallelism via multicore and multithreading techniques.
 GPUs are designed to handle large numbers of floating-point operations in parallel and are used in mobile phones, gaming consoles, PCs, servers, and embedded systems.

GPU programming model-

 The GPU is used along with the CPU for massively parallel execution across hundreds or thousands of processing cores.
 The CPU is a conventional multicore processor with limited parallelism to exploit.
 The GPU has a many-core architecture with hundreds of simple processing cores organized as multiprocessors.
 The CPU instructs the GPU to perform massive data processing.
 For the operation to perform well, the bandwidth between the on-board main memory and the on-chip GPU memory must be matched.
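As a hedged illustration of this model, the sketch below uses CuPy, a NumPy-like Python library that executes on NVIDIA GPUs (it assumes a CUDA-capable GPU and the cupy package are installed). The CPU-side script instructs the GPU to do the heavy data processing, and the final copy back crosses the CPU-GPU memory boundary discussed above:

import cupy as cp

# Arrays are allocated in the GPU's memory, not main memory.
x = cp.random.random((4096, 4096))
y = cp.random.random((4096, 4096))

# The matrix multiply runs on the GPU's many simple cores in parallel.
z = x @ y

# Copy the result back across the CPU-GPU memory boundary.
result = cp.asnumpy(z)
print(result.shape)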
 Disks and Storage Technology
 Beyond 2011, disks and disk arrays exceeded 3 TB in capacity; disk storage has grown by 7 orders of magnitude in 33 years. The rapid growth of flash memory and solid-state drives (SSDs) also impacts the future of HPC and HTC systems. The mortality rate of SSDs is not bad at all: a typical SSD can handle 300,000 to 1 million write cycles per block, so an SSD can last for several years even under conditions of heavy write usage. Flash and SSDs will demonstrate impressive speedups in many applications.

Memory, storage, wide-area networks:

Memory technology- memory chip capacity grew from 16 KB in 1976 to 64 GB in 2011, roughly quadrupling every three years.
 Memory access time did not improve at the same rate, resulting in the memory wall problem.
 As processor speed grows faster than memory speed, memory access increasingly limits the performance of the CPU.

Storage technology-
 Hard disk capacity grew from 260 MB in 1981 to 250 GB in 2004, and reached 3 TB by the end of 2011 (the Seagate Barracuda XT).
 HPC and HTC systems can make use of SSDs (solid-state disks) along with flash memory; a typical SSD can handle roughly 300,000 to 1 million write cycles per block.
 They can improve the performance of the overall system by speeding up the applications associated with them.
 One important issue in developing large systems is the increase in power requirements due to the inclusion of many chips.
 In the future, flash can be treated as disk and memory as cache.

Wide-area networking-

 WANs are very large networks that connect a country, continents, or the world.
 One report noted that Ethernet speed increased from 10 Mbps in 1979 to 1 Gbps in 1999.
 Another report projected growth to 1 Tbps and beyond in the years ahead.
 Network performance keeps improving as link speed increases every year.
 Gbps links are becoming common, and data centers adopt them to interconnect their clusters.

System-Area Interconnects
The nodes in small clusters are mostly interconnected by an Ethernet switch or a local area
network (LAN).
Typically, a LAN is used to connect client hosts to big servers. A storage area network (SAN) connects servers to network storage such as disk arrays. Network-attached storage (NAS) connects client hosts directly to the disk arrays. All three types of networks often appear in a large cluster built with commercial network components.

Virtual Machines and Virtualization Middleware

A virtual machine (VM) is a software representation of one or more computers on an existing physical system. It occupies some space on the hard disk of the physical system and allows independent use of OSes of various versions on a single PC.
A VM can perform all the tasks of an OS, such as dealing with every component of the physical machine (mouse, keyboard, etc.).
Based on the specifications of the physical machine, such as hard disk size, available RAM, and processor capacity, a virtual network can be created on a single physical machine.

a) Physical machine: equipped with physical hardware, e.g., an x86 architecture machine running its installed Windows OS.
b) Virtual machine:
Can be provisioned for any hardware system.
Built with virtual resources managed by a guest OS to run a specific application.
A middleware layer called the virtual machine monitor (VMM) is deployed between the VMs and the host platform.
A conventional computer has a single OS image. This offers a rigid architecture that tightly
couples application software to a specific hardware platform. Some software running well on
one machine may not be executable on another platform with a different instruction set under
a fixed OS. Virtual machines (VMs) offer novel solutions to underutilized resources,
application inflexibility, software manageability, and security concerns in existing physical
machines.
Native VM: installed with the use of a VMM called a hypervisor. For example, the guest OS could be a Linux system and the hypervisor the XEN system.
A native VM is also called a bare-metal VM because the hypervisor handles the bare hardware (CPU, memory, and I/O) directly.
Hosted VMs:
The VMM runs in non-privileged mode.
The host OS need not be modified.
Virtual Machines

The host machine is equipped with the physical hardware; an example is an x86 architecture desktop running its installed Windows OS. The VM can be provisioned for any hardware system. The VM is built with virtual resources managed by a guest OS to run a specific application. Between the VMs and the host platform, one needs to deploy a middleware layer called a virtual machine monitor (VMM). A native VM is installed with the use of a VMM called a hypervisor in privileged mode; for example, the hardware has x86 architecture running the Windows system.
The guest OS could be a Linux system and the hypervisor is the XEN system developed at Cambridge University. This hypervisor approach is also called bare-metal VM, because the hypervisor handles the bare hardware (CPU, memory, and I/O) directly. Another architecture is the hosted VM: here the VMM runs in non-privileged mode, and the host OS need not be modified. The VM can also be implemented with a dual mode.
Part of the VMM runs at the user level and another part runs at the supervisor level. In this
case, the host OS may have to be modified to some extent. Multiple VMs can be ported to a
given hardware system to support the virtualization process. The VM approach offers
hardware independence of the OS and applications. The user application running on its
dedicated OS could be bundled together as a virtual appliance that can be ported to any
hardware platform. The VM could run on an OS different from that of the host computer.

VM Primitive Operations
The VMM provides the VM abstraction to the guest OS. With full virtualization, the VMM
exports a VM abstraction identical to the physical machine so that a standard OS such as
Windows 2000 or Linux can run just as it would on the physical hardware. Low-level VMM operations were described by Mendel Rosenblum [41]:

First, the VMs can be multiplexed between hardware machines.
Second, a VM can be suspended and stored in stable storage.
Third, a suspended VM can be resumed or provisioned to a new hardware platform.
Finally, a VM can be migrated from one hardware platform to another.

Some of the virtual machine primitive operations are:

Multiplexing: the technique of efficiently sharing and managing physical resources among multiple virtual machines running on the same host system. Multiplexing allows the host to allocate resources such as CPU time, memory, and I/O devices to different VMs in a way that maximizes utilization and minimizes conflicts.
Suspension (storage):
Suspension refers to the state in which a virtual machine is paused or frozen and its entire state is saved to disk storage.
This allows the VM to be quickly resumed from where it left off, without needing to go through the entire boot process.
When a VM is suspended, its memory contents, CPU registers, and other relevant state information are written to a storage location.
This storage can be a physical disk, a network storage device, or another form of persistent storage.

Provisioning (resume):
Provisioning, or resuming, refers to the process of starting a VM that was previously suspended, paused, or powered off.
When you resume a VM, you instruct the virtualization software to load the saved state of the VM from disk storage and bring it back to an active running state.
Provisioning or resuming a VM is useful when you want to quickly continue working with a VM without going through the entire boot process.
It saves time and preserves the state of your applications and processes.
Live migration:
Live migration refers to the process of moving a running VM from one physical host to another without disrupting its operation.
This can be done to achieve various goals such as load balancing, hardware maintenance, energy conservation, and optimized resource utilization.
Live migration is a crucial feature in modern virtualization platforms and data centers.
It is a complex process that requires coordination between the source and destination hosts, as well as sophisticated virtualization management software.
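The toy Python class below models these four primitive operations as a simple state machine. It is purely conceptual; the names and behavior are illustrative, not a real hypervisor interface:

class VirtualMachine:
    def __init__(self, name: str, host: str):
        self.name, self.host, self.state = name, host, "running"
        self.saved_image = None

    def suspend(self):
        # Write memory contents and CPU state to stable storage.
        self.saved_image = f"{self.name}.state"
        self.state = "suspended"

    def resume(self, new_host: str = ""):
        # Provision the saved state on the same or a new hardware platform.
        if new_host:
            self.host = new_host
        self.state = "running"

    def live_migrate(self, new_host: str):
        # Move while running; real systems copy memory pages iteratively
        # so the VM keeps serving requests during the move.
        self.host = new_host

vm = VirtualMachine("web-01", "host-A")
vm.suspend()                   # suspension (storage)
vm.resume(new_host="host-B")   # provisioning (resume) on new hardware
vm.live_migrate("host-C")      # live migration without downtime
print(vm.state, vm.host)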

Virtual Infrastructures
Physical resources for compute, storage, and networking are mapped to the needy applications embedded in various VMs on top. Hardware and software are then separated. Virtual infrastructure is what connects resources to distributed applications. It
is a dynamic mapping of system resources to specific applications. The result is decreased
costs and increased efficiency and responsiveness. Virtualization for server consolidation and
containment is a good example of this.

Data Centre Virtualization for Cloud Computing


Cloud computing is built with commodity hardware and network devices. Almost all cloud platforms choose x86 processors; low-cost terabyte disks and Gigabit Ethernet are used to build data centers. Data center design emphasizes the performance/price ratio over raw speed; storage and energy efficiency are more important than speed performance alone.

Data Centre Growth and Cost Breakdown


A large data centre may be built with thousands of servers. Smaller data centers are typically
built with hundreds of servers. The cost to build and maintain data centre servers has
increased over the years.
According to a 2009 IDC report, typically only 30 percent of data centre costs goes toward
purchasing IT equipment (such as servers and disks), 33 percent is attributed to the chiller, 18
percent to the uninterruptible power supply (UPS), 9 percent to computer room air
conditioning (CRAC), and the remaining 7 percent to power distribution, lighting, and
transformer costs. Thus, about 60 percent of the cost to run a data centre is allocated to
management and maintenance. The server purchase cost did not increase much with time.
The cost of electricity and cooling did increase from 5 percent to 14 percent in 15 years.

Convergence of technologies:
Cloud computing is enabled by the convergence of technologies in four areas:
Hardware virtualization and multicore chips enable dynamic configuration in the cloud.
Utility and grid computing lay the necessary foundation for cloud computing.
SOA (service-oriented architecture) and Web 2.0 push the cloud another step forward.
Autonomic computing and data center automation contribute to the rise of cloud computing.

SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING


Distributed and cloud computing systems are built over a large number of autonomous
computer nodes.
These node machines are interconnected by SANs, LANs, or WANs in a hierarchical manner.
With today’s networking technology, a few LAN switches can easily connect hundreds of
machines as a working cluster.
A WAN can connect many local clusters to form a very large cluster of clusters. In this sense,
one can build a massive system with millions of computers connected to edge networks.
Massive systems are considered highly scalable, and can reach web-scale connectivity, either
physically or logically.
Many national grids built in the past decade were underutilized for lack of reliable middleware and well-coded applications.
Distributed computing systems are classified into four groups:

Cluster computing: A computing cluster consists of interconnected stand-alone computers which work cooperatively as a single integrated computing resource.

Cluster Architecture

A typical server cluster is built around a low-latency, high-bandwidth interconnection network. This network can be as simple as a SAN (e.g., Myrinet) or a LAN (e.g., Ethernet).
To build a larger cluster with more nodes, the interconnection network can be built with
multiple levels of Gigabit Ethernet, Myrinet, or InfiniBand switches. Through hierarchical
construction using a SAN, LAN, or WAN, one can build scalable clusters with an increasing
number of nodes. The cluster is connected to the Internet via a virtual private network (VPN)
gateway. The gateway IP address locates the cluster. The system image of a computer is
decided by the way the OS manages the shared cluster resources. Most clusters have loosely
coupled node computers. All resources of a server node are managed by their own OS. Thus,
most clusters have multiple system images as a result of having many autonomous nodes
under different OS control.
Clusters are typically homogeneous, with distributed control, running UNIX/Linux. They are suited to HPC.
Peer-to-peer (P2P) networks: the P2P architecture offers a distributed model of networked systems.
P2P systems:
In a P2P system, every node acts as both a client and a server, providing part of the system resources.
Peer machines are simple client systems connected to the Internet.
All client machines act autonomously to join or leave the system freely.
No central coordination or central database is needed, and no peer machine has a global view of the entire P2P system.
The system is self-organizing, with distributed control.
Unlike a cluster or a grid, a P2P network does not use a dedicated interconnection network.

Computational and data grids

Grids are heterogeneous clusters interconnected by high-speed networks.
They have centralized control and are server-oriented with authenticated security.
They are suited to distributed supercomputing.
A grid is constructed across LANs, WANs, and Internet backbones at regional, national, or global scale.
The computers used in a grid include servers, clusters, and supercomputers.
PCs, laptops, and mobile devices can be used to access a grid system.

Cloud computing- a cloud is a pool of virtualized computer resources.

Workloads can be deployed and scaled out quickly through rapid provisioning of virtual machines.
Virtualization of server resources has enabled cost effectiveness and allowed cloud systems to leverage low costs to benefit both users and providers.
Cloud computing applies a virtualization platform with resources on demand by provisioning hardware, software, and data sets dynamically.

PERFORMANCE, SECURITY, AND ENERGY EFFICIENCY


The performance of a distributed computing system is measured by considering various aspects such as processor speed, network bandwidth, and response time.
These metrics can be classified into two classes:
Metrics associated with throughput
Metrics associated with system availability
Throughput is the total number of tasks that can be completed within a specific amount of time.
The metric associated with it is MIPS (million instructions per second).
Other metrics are tera floating-point operations per second (Tflops) and transactions per second (TPS).
Related measures of system throughput include job response time and network latency.

Network latency is the time taken by data to travel from source to destination, usually measured in milliseconds.
Metrics associated with system availability include disk utilization (the percentage of the disk used by the system) and workstation utilization (the number of users actively participating in the system), among others.
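A quick worked example of these metrics in Python, with illustrative numbers:

# Throughput: tasks completed per unit time.
jobs_completed = 4_500
window_seconds = 60
throughput = jobs_completed / window_seconds
print(f"throughput: {throughput:.1f} jobs/s")      # 75.0 jobs/s

# Network latency: time for data to travel source -> destination.
response_times_ms = [120, 95, 180, 110]
avg_latency = sum(response_times_ms) / len(response_times_ms)
print(f"average latency: {avg_latency:.2f} ms")    # 126.25 ms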

Dimensions of scalability:
Scalability dimensions can be categorized as follows:
Size scalability:
It refers to improving system performance by updating or upgrading the system's hardware.
Hardware can include processors, disk drives, input/output devices, etc.
It is typically measured in terms of the total number of processors.

Security and energy efficiency in cloud computing


In cloud computing, security, energy efficiency, and operational efficiency are crucial
considerations that impact the overall effectiveness and sustainability of cloud services.
Here’s a detailed exploration of each aspect:
1. Security in Cloud Computing
A. Key Security Concerns:
1. Data Protection:
o Encryption: Protects data both in transit and at rest. Common practices
include using TLS/SSL for data in transit and AES for data at rest (a
minimal sketch appears after this list).
o Data Masking and Tokenization: Obscures sensitive data to prevent
unauthorized access.
2. Access Control:
o Identity and Access Management (IAM): Manages user identities and their
access permissions. Techniques include multi-factor authentication (MFA),
role-based access control (RBAC), and least privilege access.
o Single Sign-On (SSO): Allows users to access multiple services with a single
set of credentials.
3. Threat Detection and Response:
o Intrusion Detection Systems (IDS) and Intrusion Prevention Systems
(IPS): Monitor and protect against unauthorized access and attacks.
o Security Information and Event Management (SIEM): Provides real-time
analysis of security alerts and logs.
4. Compliance and Governance:
o Regulatory Compliance: Ensures adherence to laws and regulations such as
GDPR, HIPAA, and CCPA.
o Auditing and Reporting: Regularly reviews and reports on security practices
and incidents.
5. Network Security:
o Firewalls: Protect networks by filtering incoming and outgoing traffic based
on security rules.
o Virtual Private Networks (VPNs): Provide secure remote access to cloud
resources.
6. Disaster Recovery and Backup:
o Regular Backups: Ensures data is regularly backed up and can be restored in
case of data loss or corruption.
o Disaster Recovery Plans: Establishes procedures for recovering data and services in the event of a major failure.
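Below is the minimal sketch of encryption at rest promised in the Data Protection item above. It uses the Fernet recipe from the Python cryptography package (AES-based authenticated encryption); the sample plaintext is made up, and a real deployment would fetch keys from a key management service (KMS) rather than generating them in process:

from cryptography.fernet import Fernet

key = Fernet.generate_key()    # in production: retrieved from a KMS
cipher = Fernet(key)

token = cipher.encrypt(b"patient record #1234")   # data at rest
plaintext = cipher.decrypt(token)                 # authorized read
print(plaintext)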
B. Security Best Practices:
1. Adopt a Zero Trust Model:
o Continuous Verification: Validate every access request, regardless of origin.
o Micro-Segmentation: Segment networks and enforce security policies within
each segment.
2. Regular Security Audits:
o Vulnerability Assessments: Identify and address security weaknesses.
o Penetration Testing: Simulate attacks to test the effectiveness of security
measures.
3. Update and Patch Management:
o Regular Updates: Apply security patches and updates to mitigate
vulnerabilities.
2. Energy Efficiency in Cloud Computing
A. Importance of Energy Efficiency:
1. Environmental Impact:
o Carbon Footprint: Reducing energy consumption lowers greenhouse gas
emissions.
o Sustainable Practices: Enhances corporate social responsibility and
environmental stewardship.
2. Cost Reduction:
o Operational Costs: Lower energy consumption reduces electricity and
cooling costs.
o Resource Optimization: Efficient use of resources can lower overall
operational expenses.
B. Strategies for Improving Energy Efficiency:
1. Data Center Optimization:
o Cooling Efficiency: Implement advanced cooling techniques, such as hot and
cold aisle containment and free cooling.
o Energy-Efficient Hardware: Use energy-efficient servers, storage devices,
and networking equipment.
2. Virtualization and Consolidation:
o Server Virtualization: Consolidate workloads onto fewer physical servers to reduce power consumption.
o Storage Virtualization: Optimize storage utilization and reduce the number
of physical storage devices.
3. Renewable Energy Sources:
o Green Energy: Utilize renewable energy sources such as solar, wind, or
hydroelectric power for data centers.
o Energy Credits: Purchase renewable energy credits to offset non-renewable
energy usage.
4. Energy-Efficient Algorithms:
o Optimized Software: Use algorithms and code optimized for lower
computational and energy requirements.
o Load Balancing: Efficiently distribute workloads to prevent overloading and
energy wastage.
5. Power Management:
o Dynamic Voltage and Frequency Scaling (DVFS): Adjust power
consumption based on workload requirements.
o Power Usage Effectiveness (PUE): Measure and optimize the ratio of total
facility energy usage to the energy used by IT equipment.
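As a worked illustration of these two power-management concepts, with made-up numbers (the relation P = C * V^2 * f is the standard approximation for dynamic CMOS power):

# PUE = total facility energy / IT equipment energy (1.0 is ideal).
total_facility_kwh = 1_500
it_equipment_kwh = 1_000
print(f"PUE = {total_facility_kwh / it_equipment_kwh:.2f}")   # 1.50

# DVFS: dynamic power scales roughly as P = C * V**2 * f, so lowering
# voltage and frequency together cuts power super-linearly.
def dynamic_power(c: float, v: float, f: float) -> float:
    return c * v**2 * f

full = dynamic_power(1.0, 1.2, 3.0e9)       # nominal voltage/frequency
scaled = dynamic_power(1.0, 1.0, 2.0e9)     # scaled down under light load
print(f"power reduced to {scaled / full:.0%} of full")        # ~46%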
3. Efficiency in Cloud Computing
A. Operational Efficiency:
1. Resource Management:
o Auto-Scaling: Automatically adjust the number of resources based on demand
to optimize performance and cost.
o Resource Scheduling: Schedule resources to run only when needed to avoid
unnecessary costs.
2. Cost Management:
o Cost Monitoring and Optimization: Use tools and practices to track and
manage cloud spending.
o Spot and Reserved Instances: Leverage cost-effective pricing models such as
spot instances and reserved capacity.
3. Performance Optimization:
o Content Delivery Networks (CDNs): Use CDNs to cache content and reduce
latency.
o Load Balancing: Distribute traffic across multiple instances to enhance performance and reliability.
B. Best Practices for Efficiency:
1. Automate Processes:
o Infrastructure as Code (IaC): Automate the provisioning and management
of cloud resources.
o Continuous Integration and Continuous Deployment (CI/CD): Streamline
development and deployment processes.
2. Regular Monitoring and Analysis:
o Performance Metrics: Monitor key performance indicators (KPIs) and
resource utilization to identify and address inefficiencies.
o Optimization Reports: Generate and review reports to optimize resource
allocation and usage.
3. Adopt a Cloud-Native Approach:
o Microservices Architecture: Build applications as loosely coupled
microservices to enhance scalability and efficiency.
o Serverless Computing: Use serverless functions to automatically scale and
pay only for actual usage.
Conclusion
Security, energy efficiency, and operational efficiency are vital components of effective cloud
computing. Security measures protect data and maintain compliance, energy efficiency
practices reduce environmental impact and costs, and operational efficiency ensures optimal
use of resources and performance. By focusing on these areas, organizations can achieve a
secure, sustainable, and high-performance cloud environment that meets their needs and
supports their strategic objectives.
