
UNIT-1

What is Cloud Computing?


• Cloud computing is the delivery of computing services – like servers,
storage, databases, networking, software, analytics, and intelligence –
over the internet ("the cloud"). It allows users to access and use these
resources on demand, often with a pay-as-you-go model, instead of
owning and maintaining their own physical infrastructure.
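As a concrete illustration of the on-demand, pay-as-you-go idea, here is a minimal sketch using the AWS boto3 SDK. The AMI ID is a hypothetical placeholder, and configured AWS credentials are assumed; this is only a sketch, not a recommended deployment.

```python
# A minimal sketch of on-demand provisioning with boto3 (assumes AWS
# credentials are configured; the AMI ID below is a placeholder).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a single small instance only when it is needed ...
response = ec2.run_instances(
    ImageId="ami-12345678",   # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched on-demand instance:", instance_id)

# ... and release it when done, so billing stops (pay-as-you-go).
ec2.terminate_instances(InstanceIds=[instance_id])
```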
Era/Evolution of Cloud Computing
• The idea behind cloud computing dates back to the 1950s era of shared
mainframe computing, and the technology evolved from distributed
computing into the modern technology known as cloud computing.
• 1. Mainframe Computing (1950-1970)
• Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines. They are
responsible for handling large volumes of data and massive input-output operations, and even today they are used for
bulk-processing tasks such as online transaction processing. These systems have almost no downtime and very high fault
tolerance, and they greatly increased the processing capabilities available to a single system. But they were very
expensive. To reduce this cost, cluster computing emerged as an alternative to mainframe technology.
• 2. Distributed Systems (1970-1980)
• A distributed system is a composition of multiple independent systems that appear to users as a single entity. The
purpose of distributed systems is to share resources and to use them effectively and efficiently. Distributed systems
possess characteristics such as scalability, concurrency, continuous availability, heterogeneity, and independence of
failures. The main problem with these systems was that all the machines had to be present at the same geographical
location. To solve this problem, distributed computing led to three further models of computing: mainframe computing,
cluster computing, and grid computing.
• 3. Cluster Computing (1980-1990)
• In the 1980s, cluster computing emerged as an alternative to mainframe computing. The machines in a cluster were
connected to one another by a high-bandwidth network. Clusters were far cheaper than mainframe systems yet equally
capable of heavy computation, and new nodes could easily be added when required. This solved the cost problem to
some extent, but the geographical restriction remained. To address it, the concept of grid computing was introduced.
• 4. Grid Computing (1990-2000)
• In the 1990s, the concept of grid computing was introduced: systems placed at entirely different geographical
locations were connected via the internet. These systems belonged to different organizations, so the grid consisted of
heterogeneous nodes. Although this solved some problems, new ones emerged as the distance between nodes
increased, chiefly the low availability of high-bandwidth connectivity and the network issues that come with it. Thus,
cloud computing is often referred to as the "successor of grid computing".
• 5. Utility Computing (Late 1990s-2000)
• Utility computing is a computing model that defines service-provisioning techniques for
compute services, along with other major services such as storage and infrastructure,
which are provisioned on a pay-per-use basis.
• 6. Virtualization (1980-Present)
• Virtualization was introduced nearly 40 years ago. It refers to the process of creating a virtual
layer over the hardware that allows a user to run multiple instances simultaneously on the same
hardware. It is a key technology in cloud computing and the foundation on which major cloud
computing services such as Amazon EC2 and VMware vCloud are built. Hardware virtualization is
still one of the most common types of virtualization.
• 7. Web 2.0
• Web 2.0 is the interface through which cloud computing services interact with clients. It is
because of Web 2.0 that we have interactive and dynamic web pages, and it makes web pages
far more flexible. Popular examples of Web 2.0 include Google Maps, Facebook, and Twitter.
Social media, in particular, is possible only because of this technology. It gained major
popularity in 2004.
• 8. Service Orientation
• Service orientation acts as a reference model for cloud computing. It supports low-cost, flexible,
and evolvable applications. Two important concepts were introduced in this computing model:
Quality of Service (QoS), which also includes the SLA (Service Level Agreement),
and Software as a Service (SaaS).
Parallel computing
• Parallel computing involves dividing a task into smaller subtasks that can be
executed simultaneously by multiple processors. This approach aims to
reduce processing time and increase efficiency. Key elements
include parallel programming, algorithms, memory management,
synchronization, and debugging techniques.
• Elements of Parallel computing
• 1. Parallel Programming: This is the foundation, involving writing code that
can be executed on multiple processors concurrently.
• 2. Parallel Algorithms: These are algorithms specifically designed to be
executed in parallel, taking advantage of multiple processors to solve
problems faster.
• 3. Memory Management: In parallel systems, managing memory access
and ensuring data consistency between different processors is crucial.
• 4. Synchronization: Mechanisms to ensure that different processors coordinate
their actions, preventing conflicts and ensuring data integrity.
• 5. Debugging: Identifying and fixing errors in parallel programs is a complex task,
as it involves debugging the interaction between multiple processes.
• 6. OpenMP: A widely used API for parallel programming, particularly for shared-
memory systems, making it easier to leverage parallelism in applications.
• 7. Concurrency: The ability of a program to handle multiple tasks simultaneously,
which is often a prerequisite for parallelism.
• 8. Multithreading: A common approach to parallelism where a program is divided
into multiple threads, which can run concurrently on different processor cores.
• 9. Asynchronous Programming: Allows processes to operate independently of the
main program, improving responsiveness and efficiency.
10. Types of Parallelism:
•Bit-level parallelism: Breaking down operations into smaller bits and processing them concurrently.
•Instruction-level parallelism: Executing multiple instructions within a single clock cycle.
•Data parallelism: Applying the same operation to multiple data points simultaneously (see the sketch after this list).
•Task parallelism: Distributing different tasks to different processors.
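
A minimal Python sketch of data parallelism, applying the same operation to many data points at once across a pool of worker processes:

```python
# Data parallelism: one function, many inputs, several worker processes.
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    data = list(range(10))
    with Pool(processes=4) as pool:        # 4 worker processes
        results = pool.map(square, data)   # same operation on every item
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```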

11. Architectural Elements:


•Processors: The core of a parallel system, responsible for executing instructions.
•Memory: The storage for data and instructions, often organized into banks for concurrent access.
•Interconnects: The network that allows processors to communicate and exchange data.
•Software Stack: The software layers that manage the parallel system, including operating systems and libraries.

12. Architectures:
•Shared Memory: Processors share a common memory space, making it easier to access data but potentially
leading to bottlenecks (see the sketch after this list).
•Distributed Memory: Processors have their own private memory, requiring explicit communication for data
sharing.
•Hybrid Memory: Combines shared and distributed memory architectures.
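
A minimal Python sketch of the shared-memory model: several threads share one counter, and a lock provides the synchronization needed to avoid lost updates:

```python
# Shared memory + synchronization: threads share one address space,
# so a lock keeps concurrent updates to the counter consistent.
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # synchronization prevents lost updates
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 on every run, thanks to the lock
```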
Distributed computing
• Distributed computing involves multiple computers working together
as a single system to solve a problem. Key elements include nodes
(individual computers), a network (communication infrastructure),
and potentially a distributed file system for shared data
storage. These components enable parallel processing, increased
performance, resilience, and scalability.
• Nodes:
• These are the individual computers or servers that make up the distributed system. Each node
runs its own instances of the applications and services, contributing to the overall system.
• Network:
• The network acts as the communication backbone, connecting all the nodes. It allows for the
exchange of data and coordination between the nodes. This can range from a local area network
(LAN) for geographically close nodes, to a wide area network (WAN) for geographically dispersed
systems.
• Middleware:
• In some distributed systems, middleware provides a programming model and abstracts away the
complexities of the underlying hardware, operating systems, and network.
• Shared Data/Database:
• Distributed systems often require a shared storage mechanism, which can be a distributed file
system (DFS) or a distributed database.
• Distributed Algorithms:
• These are the rules and procedures that govern how nodes communicate,
coordinate, and execute tasks within the system.
• Client-Server Architecture:
• This is a fundamental model where clients request services from servers.
• Peer-to-Peer Architecture:
• In this model, all nodes have equal capabilities and can act as both clients
and servers.
• Microservices Architecture:
• This is a design approach where the application is broken down into small,
independent services that communicate with each other.
• Transparency:
• A key characteristic of distributed systems is transparency, which aims to hide the complexities of the
distributed nature from the user or application.
• Scalability:
• Distributed systems are designed to be scalable, meaning they can handle increasing workloads and data
volumes by adding more nodes.
• Fault Tolerance:
• Distributed systems are often designed to be fault-tolerant, meaning they can continue operating even if
some nodes fail.
• Consistency:
• Distributed systems must manage data consistency across multiple nodes, ensuring that all nodes have an
accurate and up-to-date view of the data.

By combining these elements, distributed computing enables the creation of powerful and scalable systems
that can tackle complex problems.
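
A minimal sketch of the client-server model using Python sockets; both the server node and the client node run on one machine here purely for illustration, with a placeholder address and port. Real nodes would use each other's network addresses.

```python
# Client-server over a network: one node serves requests, another sends them.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5000  # placeholder address and port

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen()
        conn, _ = s.accept()
        with conn:
            data = conn.recv(1024)       # receive a client's request
            conn.sendall(data.upper())   # send back a response

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)  # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect((HOST, PORT))
    c.sendall(b"hello from a client node")
    print(c.recv(1024))  # b'HELLO FROM A CLIENT NODE'
```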
Difference between Parallel Computing and Distributed Computing:

S.No | Parallel Computing | Distributed Computing
1. | Many operations are performed simultaneously. | System components are located at different locations.
2. | A single computer is required. | Uses multiple computers.
3. | Multiple processors perform multiple operations. | Multiple computers perform multiple operations.
4. | It may have shared or distributed memory. | It has only distributed memory.
5. | Processors communicate with each other through a bus. | Computers communicate with each other through message passing.
6. | Improves system performance. | Improves system scalability, fault tolerance, and resource-sharing capabilities.
Technologies for Distributed Computing

• Distributed computing technologies enable multiple computers to work
together as a single system to solve complex problems. These technologies
include distributed file systems, peer-to-peer networks, cloud computing,
and various frameworks like Hadoop and Spark. They facilitate parallel
processing, data sharing, and resource management across networked devices.
1. Distributed File Systems:
•Concept: Allow multiple computers to access and manage a single, shared file system.
•Examples: Hadoop Distributed File System (HDFS), Google File System (GFS).
•Benefits: Fault tolerance, high availability, and improved performance for large datasets.
2. Peer-to-Peer Systems:
•Concept: Enable direct resource sharing (files, processing power) between individual computers (peers)
without relying on a central server.
•Examples: BitTorrent, some blockchain networks.
•Benefits: Scalability, resilience, and reduced reliance on centralized infrastructure.
3. Cloud Computing:
•Concept: Provides on-demand access to computing resources (servers, storage, databases) over the
internet.
•Examples: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform.
•Benefits: Elastic scalability, cost-effectiveness, and access to a wide range of services.
4. Frameworks and Platforms:

•Apache Spark:
A fast, in-memory data processing engine for large-scale data analytics and machine learning (see the sketch after this list).
•Apache Hadoop:
A framework for distributed storage and processing of large datasets using MapReduce.
• Message Queue Systems:
• Enable asynchronous communication between different parts of a distributed system (e.g.,
RabbitMQ, Kafka).
• Microservices Architectures:
• Applications are broken down into small, independent services that communicate with each other.
• Service Meshes:
• Manage communication and security between microservices.
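
A minimal word-count sketch using Apache Spark's Python API. It assumes the pyspark package is installed; it runs locally here, but the same code distributes across a cluster when pointed at a cluster master URL instead of "local[*]".

```python
# Distributed data processing with Spark: the same transformations run
# in parallel across partitions, whether local or on a cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("wordcount").getOrCreate()

lines = spark.sparkContext.parallelize([
    "distributed computing with spark",
    "spark processes data in parallel",
])

counts = (lines.flatMap(lambda line: line.split())   # split lines into words
               .map(lambda word: (word, 1))          # pair each word with 1
               .reduceByKey(lambda a, b: a + b))     # sum counts per word

print(counts.collect())
spark.stop()
```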
5. Other Technologies:
•Containerization (Docker, Kubernetes):
Package applications and their dependencies into containers for consistent deployment across
different environments.
•Load Balancers:
Distribute network traffic across multiple servers to improve performance and availability (a round-robin sketch follows this list).
•Real-time Systems:
Designed for processing data with strict timing constraints (e.g., control systems, financial trading
platforms).
•High-Speed Networking:
Technologies like optical networks and 5G enable faster communication between distributed
components.
•Software-Defined Networking (SDN):
Allows for flexible and programmable network configurations, optimizing resource allocation.
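
To illustrate the load-balancer item above, here is a minimal round-robin sketch in Python; the server names are placeholders, and a real balancer would also track server health.

```python
# Round-robin load balancing: requests are handed to back-end servers in turn.
import itertools

servers = ["server-a", "server-b", "server-c"]
next_server = itertools.cycle(servers)   # endless round-robin iterator

def route(request):
    target = next(next_server)
    return f"{request} -> {target}"

for i in range(6):
    print(route(f"request-{i}"))
# request-0 -> server-a, request-1 -> server-b, ... wrapping around
```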
