1,2. List the Various Computing Paradigm Distinctions
Computing Paradigm Distinctions: (a) Centralized Computing: All computer resources, such as
processors, memory, and storage, are centralized in one physical system. They are fully shared,
tightly interconnected, and managed by a single OS.
(b) Parallel Computing: All processors are either tightly coupled with centralized shared memory or
loosely coupled with distributed memory (parallel processing). Interprocessor communication is
accomplished through shared memory or via message passing. This methodology is known as parallel computing.
(c) Distributed Computing: A distributed system consists of multiple autonomous computers, each
with its own private memory, interconnected through a computer network. Here too, information
exchange is accomplished by message passing.
(d) Cloud Computing: An Internet cloud of resources can be either a centralized or a distributed
computing system. The cloud applies parallel computing, distributed computing, or both. Clouds can
be built with physical or virtualized resources over large data centers. Cloud computing is also
known as utility computing or service computing.
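The message-passing idea behind parallel and distributed computing can be sketched in a few lines. This is a toy illustration only: in a real distributed system each node is an autonomous machine and the channel would be a network connection (sockets, MPI, etc.); here Python threads and an in-process queue stand in for the nodes and the network.

```python
# Toy message-passing sketch: a "node" with private memory that communicates
# only via messages. Threads + a queue stand in for machines + a network.
import queue
import threading

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """A node with its own private state; it communicates only by messages."""
    local_memory = []              # private memory, never shared directly
    while True:
        msg = inbox.get()
        if msg == "STOP":
            break
        local_memory.append(msg)
        outbox.put(f"ack:{msg}")   # reply by message passing, not shared state

to_node = queue.Queue()
from_node = queue.Queue()
t = threading.Thread(target=worker, args=(to_node, from_node))
t.start()

to_node.put("task-1")
to_node.put("STOP")
reply = from_node.get()
t.join()
print(reply)  # ack:task-1
```

The key point the sketch captures is that the receiving node's `local_memory` is never touched directly by the sender; all coordination flows through explicit messages.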
3,4. Elaborate on the System Models for Distributed and Cloud Computing
Distributed and cloud computing systems are classified into four main types. Here's a simple
breakdown of each:
1. Computer Clusters
o A group of computers connected through high-speed networks (like LAN, WAN).
o They work together as a single system to process tasks efficiently.
o Example: Search engines like Google use clusters for quick searches.
2. Peer-to-Peer (P2P) Networks
o Computers (peers) connect directly to each other without a central server.
o Good for file sharing, messaging apps, and social networking.
o Example: File-sharing apps like BitTorrent, Skype.
3. Computational Grids
o Large networks where computers work together on big problems.
o Often used in scientific research, global computing tasks.
o Example: Grid computing projects like TeraGrid, ChinaGrid.
4. Cloud Platforms
o A network of virtualized servers offering computing power over the internet.
o Users pay for what they use, and resources are scaled dynamically.
o Example: Google Cloud, AWS, Microsoft Azure.
Each system has its advantages, from high-performance computing in clusters to easy access in the
cloud.
5,6. Comment on the Various Implementation Levels of Virtualization
Levels of Virtualization Implementation:
1. Instruction Set Architecture (ISA) Level:
At this level, virtualization is performed by emulating a given instruction set on top of the
host machine's ISA. For example, MIPS binary code can run on an x86 machine through ISA
emulation, creating a virtual ISA on the physical hardware.
2. Hardware Abstraction Level:
This type of virtualization is performed directly on top of the bare hardware. It creates a
virtual hardware environment and manages the underlying resources through virtualization. For
example, the Xen hypervisor runs guest OSes such as Linux on virtualized hardware.
3. OS Level:
OS-level virtualization creates isolated containers on a single physical server, allowing multiple
instances to share the hardware and the OS kernel. These containers behave like separate servers,
making this approach useful in data centers for allocating resources to different users. It also
enables server consolidation by moving resources between different containers or VMs.
4. Library Support Level:
This virtualization involves controlling the communication between applications and the
system via user-level libraries. Applications use APIs from these libraries instead of system
calls, allowing virtualization by intercepting these API calls.
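Library-level virtualization can be illustrated by intercepting an API call in user space and redirecting it, without touching the OS. The "virtualized clock" below is a hypothetical example, not a real virtualization tool; it only shows the interception pattern.

```python
# Minimal sketch of library-level virtualization: an application's call to a
# library API (time.time) is intercepted and answered by a virtualization
# layer instead of the real implementation. Hypothetical example only.
import time

real_time = time.time              # keep a reference to the real API

def virtual_time():
    """Intercepted API: report a shifted clock to the application."""
    return real_time() + 3600      # pretend we are one hour in the future

time.time = virtual_time           # applications calling time.time() now
offset = time.time() - real_time() # go through the virtualization layer
time.time = real_time              # restore the real API
print(round(offset))  # 3600
```

Real systems such as WINE apply the same idea at much larger scale, intercepting Windows API calls and mapping them onto a different host system.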
5. User-App Level:
App-level virtualization, also known as process-level virtualization, creates virtual
machines at the application level. High-Level Language (HLL) virtual machines are often
used, allowing programs to run on an abstract machine defined by the virtualization layer
above the OS.
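Python itself is a convenient example of an HLL virtual machine: source code is compiled to bytecode for an abstract machine, and the interpreter executes that bytecode on any host platform.

```python
# Python as an HLL VM: compile source to portable bytecode for the abstract
# machine, then let the VM execute it; dis shows the VM's instructions.
import dis

code = compile("result = 6 * 7", "<demo>", "exec")
namespace = {}
exec(code, namespace)          # the Python VM runs the bytecode
print(namespace["result"])     # 42
dis.dis(code)                  # lists the abstract-machine instructions
```

The same program runs unchanged wherever the VM exists, which is exactly the portability HLL virtualization provides (the JVM works the same way for Java bytecode).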
7,8. Illustrate the Xen Architecture with a Neat Sketch
Xen is a lightweight hypervisor that creates a virtual environment between hardware and the OS.
It does not include device drivers but allows guest OSes to access physical devices directly.
Companies like Citrix, Huawei, and Oracle offer commercial Xen hypervisors.
Xen has three core components: the hypervisor, kernel, and applications. Multiple guest OSes
run on the hypervisor, but Domain 0 (Dom0) controls the others (Domain U). Dom0 loads first at
boot, manages hardware, and allocates resources to guest domains.
Since Xen is Linux-based with C2 security level, its management VM (Dom0) has full control over
all VMs. If an attacker gains access to Dom0, they can control all VMs and the entire host system,
making security crucial.
VMs function like branches of a tree, allowing multiple instances at different points. They can
also roll back to previous states and rerun from the same point.
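The snapshot-and-rollback behavior described above can be sketched as follows. This is a toy model, not how Xen stores state: VM state is reduced to a small dictionary, and "snapshots" are saved copies that the VM can return to and re-run from.

```python
# Toy sketch of VM snapshot/rollback: saved states form restore points the
# VM can roll back to, like branches of a tree.
snapshots = {}
vm_state = {"disk": ["os"], "step": 0}

def snapshot(name: str) -> None:
    """Save a copy of the current VM state under a name."""
    snapshots[name] = {"disk": list(vm_state["disk"]), "step": vm_state["step"]}

def rollback(name: str) -> None:
    """Restore the VM to a previously saved state."""
    saved = snapshots[name]
    vm_state["disk"] = list(saved["disk"])
    vm_state["step"] = saved["step"]

snapshot("clean")                 # restore point before any changes
vm_state["disk"].append("app")    # the VM runs on and mutates its state
vm_state["step"] = 1
rollback("clean")                 # return to the earlier point in the tree
print(vm_state)  # {'disk': ['os'], 'step': 0}
```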
8. Summarize the Process of Virtualizing CPU, Memory, and I/O Devices
1. CPU Virtualization
A Virtual Machine (VM) duplicates a real system, where most instructions run directly on the
host processor. However, certain instructions need special handling and are classified as:
Privileged: Execute only in a privileged mode and are trapped if attempted elsewhere.
Control-Sensitive: Attempt to change the configuration of system resources.
Behavior-Sensitive: Behave differently depending on the configuration of resources (e.g.,
load and store operations over virtual memory).
For CPU virtualization, the VM runs in user mode, while the Virtual Machine Monitor (VMM)
runs in supervisor mode. When privileged instructions are executed, the VMM traps them to
maintain system stability. However, not all CPU architectures are cleanly virtualizable this
way (x86, for example, has sensitive instructions that are not privileged).
Process in Xen:
1. A system call triggers an 80h interrupt, passing control to the OS kernel.
2. The kernel processes the call using an interrupt handler.
3. In Xen, the 80h interrupt in the guest OS also triggers an 82h interrupt in the hypervisor.
4. After the task completes, control returns to the guest OS.
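The trap-and-emulate hand-off behind these steps can be sketched as a toy simulation (not real Xen code): unprivileged guest instructions run directly, while privileged ones trap into the VMM, mirroring the 80h-to-82h interrupt path above.

```python
# Toy trap-and-emulate sketch: the guest runs in "user mode"; privileged
# instructions are trapped and emulated by the VMM for safety.
trap_log = []

def vmm_trap(instruction: str) -> str:
    """The VMM intercepts a privileged instruction and emulates it safely."""
    trap_log.append(instruction)
    return f"emulated:{instruction}"

def guest_execute(instruction: str, privileged: bool) -> str:
    if privileged:                       # analogous to the interrupt hand-off
        return vmm_trap(instruction)
    return f"direct:{instruction}"       # unprivileged code runs natively

r1 = guest_execute("add r1, r2", privileged=False)
r2 = guest_execute("write cr3", privileged=True)
print(r1, r2)  # direct:add r1, r2 emulated:write cr3
```

The division of labor is the essential point: most instructions never involve the VMM, which is why virtualized CPUs can run close to native speed.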
2. Memory Virtualization
In traditional systems, the OS maps virtual memory to physical memory (PM) using page tables.
In a virtualized system:
Virtual memory allows the guest OS to move data between RAM and disk storage when physical
memory is low.
Physical memory here refers to the memory the guest OS believes it owns, while machine
memory is the actual hardware memory of the host.
Modern processors use a Memory Management Unit (MMU) and Translation Lookaside Buffer
(TLB) to optimize virtual memory performance. In a virtualized environment, memory
virtualization therefore involves a two-stage mapping:
1. Virtual memory to physical memory (handled by the guest OS).
2. Physical memory to machine memory (handled by the VMM).
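The two-stage mapping can be sketched with page tables as dictionaries (page numbers only; page offsets and real MMU details are omitted, so this is purely illustrative).

```python
# Sketch of two-stage address translation: guest virtual page -> guest
# "physical" page (guest OS), then guest physical page -> host machine page
# (VMM). The page numbers are made up for illustration.
guest_page_table = {0: 5, 1: 9}      # stage 1: maintained by the guest OS
vmm_page_table = {5: 100, 9: 204}    # stage 2: maintained by the VMM

def translate(virtual_page: int) -> int:
    physical = guest_page_table[virtual_page]   # stage 1
    machine = vmm_page_table[physical]          # stage 2
    return machine

print(translate(0), translate(1))  # 100 204
```

In practice the VMM often collapses both stages into shadow page tables (or uses hardware-assisted nested paging) so the MMU can translate in one step.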
3. I/O Virtualization
I/O virtualization manages the routing of I/O requests between virtual devices and shared
physical hardware. There are three methods to implement I/O virtualization:
1. Full Device Emulation:
Emulates real-world devices within the VMM, replicating all functions like device
identification and interrupts. I/O requests are trapped and processed by the VMM.
2. Para-Virtualization:
To improve performance (since full emulation is slower), the guest OS frontend driver runs in
Domain-U, while the backend driver runs in Domain-0, managing real devices. This method
boosts performance but increases CPU overhead.
3. Direct I/O Virtualization:
Allows VMs to access devices directly, achieving close-to-native performance with lower CPU
overhead. Currently, it is mostly used in mainframes.
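The para-virtualized split-driver model from method 2 can be sketched as follows. This is a toy model: in real Xen the frontend and backend communicate over shared-memory rings and event channels, while here a plain function call stands in for that channel.

```python
# Toy sketch of the split-driver model: the frontend driver (Domain U)
# forwards I/O requests to the backend driver (Domain 0), which is the only
# component that touches the real device.
device_log = []

def backend_driver(request: str) -> str:
    """Runs in Domain 0: the only code that drives real hardware."""
    device_log.append(request)          # stand-in for a real device write
    return f"done:{request}"

def frontend_driver(request: str) -> str:
    """Runs in Domain U: forwards I/O to Domain 0."""
    return backend_driver(request)      # stand-in for the shared-memory ring

reply = frontend_driver("write block 7")
print(reply)  # done:write block 7
```

The structure shows why this design boosts safety (only Domain 0 holds device drivers) but adds CPU overhead (every I/O crosses a domain boundary).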
9,10. Discuss in Detail About AWS
Amazon Web Services (AWS)
AWS is a leading cloud computing platform offering a wide range of services, such as computing
power, storage, databases, machine learning, and analytics.
Core AWS Services:
1. EC2 (Elastic Compute Cloud):
Provides scalable virtual servers for computing needs.
2. S3 (Simple Storage Service):
Offers scalable cloud storage for data and files.
3. ELB (Elastic Load Balancing):
Distributes incoming traffic across multiple servers for load balancing.
4. Lambda:
A serverless computing service that runs code in response to events without needing to
manage servers.
5. RDS (Relational Database Service):
Managed relational database service for databases like MySQL and PostgreSQL.
6. VPC (Virtual Private Cloud):
Enables users to create isolated network environments within AWS for enhanced security.
Key Features:
Scalability:
AWS can automatically scale resources up or down based on demand.
Security:
Includes encryption, IAM (Identity and Access Management), and multi-factor authentication to
protect data.
Global Reach:
AWS has data centers worldwide, ensuring low-latency access and fault tolerance.
Cost-Effectiveness:
AWS uses a pay-as-you-go model, charging customers only for the resources they use.
Reserved instances are available for long-term savings.
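The pay-as-you-go model above can be made concrete with a small cost calculation. The hourly rates below are invented for illustration and are not actual AWS prices.

```python
# Hypothetical pay-as-you-go cost sketch: charge only for hours used, with a
# cheaper rate for a long-term reserved commitment. Rates are made up.
ON_DEMAND_RATE = 0.10    # $/hour, hypothetical on-demand rate
RESERVED_RATE = 0.06     # $/hour, hypothetical reserved-instance rate

def monthly_cost(hours_used: float, reserved: bool = False) -> float:
    """Cost for one instance: hours actually used times the applicable rate."""
    rate = RESERVED_RATE if reserved else ON_DEMAND_RATE
    return round(hours_used * rate, 2)

print(monthly_cost(720))                 # 72.0  (full month, on-demand)
print(monthly_cost(720, reserved=True))  # 43.2  (full month, reserved)
print(monthly_cost(100))                 # 10.0  (pay only for what you use)
```

The comparison shows the trade-off the section describes: on-demand pricing maximizes flexibility, while reserved instances trade commitment for long-term savings.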