Comp 113 B

An Operating System (OS) is system software that manages hardware resources and provides a user interface for executing programs. It serves as an intermediary between users/applications and computer hardware, ensuring efficient resource management and process coordination. Key concepts discussed include types of operating systems, process states, memory hierarchy, and issues related to concurrent processing.


a) Define the term Operating System (2 marks)

An Operating System (OS) is system software that acts as an intermediary between the user
and computer hardware. It manages hardware resources, provides a user interface, and
enables the execution of programs.

✅ Key Roles:

 Manages hardware (CPU, memory, devices)
 Provides system services and utilities
 Controls and coordinates processes

b) Interfacing Role of the Operating System (with Diagram) (10 marks)

Explanation:

The OS serves as a bridge/interface between:

 User/Applications and
 Computer Hardware

It ensures that software can run smoothly without needing to directly interact with hardware
devices.

Diagram:

+---------------------+
|  Application/User   |
+---------------------+
          ↓
+---------------------+
|  Operating System   |
|  (Interface Layer)  |
+---------------------+
   ↓       ↓       ↓
  CPU   Memory  I/O Devices

Functions of the OS in Interfacing:

1. Resource Management – Allocates CPU, memory, and devices efficiently.
2. Process Management – Schedules and executes processes.
3. File Management – Manages data storage and file access.
4. Device Management – Controls hardware components via device drivers.
5. User Interface – Provides CLI or GUI for user interaction.

c) Describe any two (2) types of Operating Systems (4 marks)

1. Batch Operating System:
o Executes a batch of jobs without user interaction.
o Jobs are grouped and processed sequentially.
o Example: IBM mainframe systems.
2. Real-Time Operating System (RTOS):
o Responds to inputs or events within a guaranteed time.
o Used in time-sensitive systems like industrial controls, medical devices.
o Example: VxWorks, QNX.

d) Four Characteristics of an Operating System Influencing Selection (KTDA Factory) (7 marks)

1. Stability and Reliability:
o OS must be stable to handle continuous operations in a factory environment
without frequent crashes.
2. Security:
o Protect data, files, and systems from unauthorized access, which is crucial for
confidential production data.
3. Compatibility and Hardware Support:
o Must support KTDA’s current hardware and software applications to avoid
additional costs.
4. Ease of Use and User Interface:
o Should be user-friendly, especially for staff with varying levels of technical
knowledge.
5. Scalability and Performance (an optional fifth):
o The OS should handle increased workload as factory operations grow.

e) Three Process States in an Operating System (6 marks)

1. Ready State:
o The process is loaded into memory and waiting to be assigned to the CPU.
2. Running State:
o The process is currently being executed by the CPU.
3. Waiting (Blocked) State:
o The process is waiting for an event (e.g., I/O completion) before it can resume
execution.
a) Define the Term Microprogramming (2 marks)

Microprogramming is a method of implementing the control unit of a computer’s CPU using
a sequence of microinstructions, stored in a special memory called control memory.

 Each microinstruction performs basic CPU operations (e.g., fetching, decoding,
executing).
 It defines how instructions at the machine level are executed by lower-level operations.

✅ Purpose: Simplifies the design of the control unit and allows easier modification of instruction
behavior.

b) Four Operations on a Process (8 marks)

An Operating System supports several operations on processes during their lifecycle:

1. Process Creation:
o The OS creates a new process using system calls like fork() (in Unix).
o New processes are typically spawned by existing processes (parent-child
relationship).
2. Process Scheduling:
o The OS selects which process will run next based on a scheduling algorithm.
o Ensures efficient use of CPU resources and fairness.
3. Process Termination:
o A process is terminated either normally (completion) or abnormally (error or
killed).
o The OS reclaims resources (memory, I/O, CPU time) used by the terminated
process.
4. Process Synchronization:
o Used to coordinate processes that share resources (e.g., memory, files).
o Prevents race conditions using semaphores, mutexes, or monitors.

c) Memory Hierarchy – Diagram and Explanation (10 marks)

Diagram of Memory Hierarchy:

+------------------+
| Registers | ← Fastest, Most Expensive, Smallest
+------------------+
| Cache |
+------------------+
| Main RAM |
+------------------+
| Magnetic Disk |
+------------------+
| Optical Storage |
+------------------+
| Tape Storage | ← Slowest, Cheapest, Largest
+------------------+

Explanation:

Memory hierarchy organizes storage based on speed, cost, and size. The closer the memory is to
the CPU, the faster and more expensive it is.

1. Registers:
o Inside the CPU.
o Fastest access, used for immediate computations.
2. Cache:
o Small, high-speed memory.
o Stores frequently used data from RAM.
3. Main Memory (RAM):
o Temporary storage for currently running programs.
o Larger and slower than cache, but faster than secondary storage.
4. Secondary Storage (e.g., HDD, SSD):
o Stores data and programs long-term.
o Much slower than RAM but has higher capacity.
5. Tertiary/Backup Storage (e.g., Tapes, Cloud):
o Used for archiving and backups.
o Very high capacity but very slow access.

✅ Purpose of Hierarchy: To balance speed, cost, and capacity, so that frequently used data sits in fast memory while bulk data sits in cheaper, larger storage.

a) Define the Term Concurrent Processing (2 marks)

Concurrent processing refers to the execution of multiple processes or threads
simultaneously, allowing them to make progress independently or in parallel, depending on
the system’s resources (e.g., multi-core CPUs).

✅ Key Point:
It improves CPU utilization and system responsiveness, even on single-CPU systems via
time-sharing.
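
The time-sharing idea above can be sketched in Python (a minimal illustration, not part of the original notes): several threads make progress independently while the scheduler interleaves them on one CPU or runs them in parallel on several.

```python
import threading

results = []
lock = threading.Lock()

def worker(name, steps):
    # each thread makes independent progress; the OS/interpreter
    # interleaves threads (time-sharing) or runs them in parallel
    total = sum(range(steps))
    with lock:
        results.append((name, total))

threads = [threading.Thread(target=worker, args=(f"T{i}", 1000)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()          # wait for all threads to complete
print(sorted(results))
```

All three workers finish regardless of how the scheduler interleaves them, which is the essence of concurrency.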

b) Difference Between Multiprogramming and Multiprocessing Environments (4 marks)

Feature          | Multiprogramming Environment                                           | Multiprocessing Environment
Definition       | Multiple programs are kept in memory and the CPU switches between them | Two or more CPUs working together to execute processes
Objective        | Maximize CPU utilization                                               | Increase system performance and reliability
Execution        | One CPU, multiple processes in memory                                  | Multiple CPUs executing processes simultaneously
Concurrency Type | Pseudo-concurrent (uses time-sharing)                                  | True parallel execution

c) Operating System Organization (With Diagram) (8 marks)

Operating Systems can be organized into layers or modules, each responsible for different
services.

Diagram:

+------------------------+
| User Interface | ← Shells/GUI
+------------------------+
| System Utilities | ← File mgmt, editors
+------------------------+
| System Calls/API | ← Interface between apps & OS
+------------------------+
| Kernel |
| - Process Mgmt |
| - Memory Mgmt |
| - I/O Control |
| - File System |
+------------------------+
| Hardware Layer | ← CPU, RAM, Devices
+------------------------+

Explanation of Layers:

1. User Interface:
o Allows user interaction (e.g., command line, GUI).
2. System Utilities:
o Tools and programs for managing the system.
3. System Call Interface (API):
o Allows applications to request OS services.
4. Kernel:
o Core of the OS; manages:
 Processes and scheduling
 Memory allocation
 I/O and file systems
 Device drivers
5. Hardware:
o The physical components of the system.

✅ This layered organization promotes modularity, maintainability, and security.

d) Concepts of Signal, Fork, and Pipe in Process Control (6 marks)

1. Signal:

 A signal is a software interrupt sent to a process to notify it of events (e.g., termination,
illegal memory access).
 Common signals:
o SIGINT: Interrupt from keyboard (Ctrl + C)
o SIGKILL: Forcefully kill a process

✅ Used for inter-process communication (IPC) and handling exceptions.
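
A minimal Python sketch of signal handling (Unix-only; the self-delivered SIGINT is an illustrative assumption, not code from the notes): the handler replaces the default action, so the signal is recorded instead of terminating the process.

```python
import os
import signal

caught = []

def handler(signum, frame):
    # record the delivered signal instead of terminating
    caught.append(signum)

signal.signal(signal.SIGINT, handler)   # install handler for SIGINT (Ctrl + C)
os.kill(os.getpid(), signal.SIGINT)     # deliver SIGINT to this very process
print("caught:", caught)
```

Without the installed handler, SIGINT would interrupt the process; with it, execution continues normally after the handler returns.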

2. Fork:

 fork() is a system call used to create a new process by duplicating the calling process.
 The child process is an exact copy of the parent, with a unique PID.

✅ Used in process creation in UNIX/Linux systems.
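
A Unix-only Python sketch of the parent-child pattern that fork() creates (the exit-status handling shown is the standard idiom, not taken from the notes):

```python
import os

pid = os.fork()               # duplicate the calling process (Unix-only)
if pid == 0:
    # child: an exact copy of the parent, but with its own PID
    os._exit(0)               # terminate the child immediately
else:
    # parent: fork() returned the child's PID
    _, status = os.waitpid(pid, 0)   # reap the child, collecting its status
    print("child", pid, "exited with status", status)
```

fork() returns twice: 0 in the child and the child's PID in the parent, which is how the two copies tell themselves apart.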

3. Pipe:

 A pipe is a method of communication between processes, where the output of one
process becomes the input of another.

✅ Example:

ls | grep "txt"

 Here, ls sends output to grep via a pipe.

✅ Pipes support unidirectional data flow and are used in producer-consumer models.
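
The ls | grep pipeline can be imitated with an anonymous pipe in Python (a simplified sketch: the "grep" stage is replaced by a plain substring filter, and the filenames are arbitrary examples):

```python
import os

r, w = os.pipe()                     # unidirectional channel: write end -> read end
os.write(w, b"notes.txt\nreport.pdf\nlog.txt\n")   # "ls" stage writes its output
os.close(w)                          # close the write end so the reader sees EOF
data = os.read(r, 1024).decode()
os.close(r)
# the "grep txt" stage: keep only lines containing "txt"
matches = [line for line in data.splitlines() if "txt" in line]
print(matches)
```

Closing the write end is essential: the reader only sees end-of-file once every write descriptor is closed.
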
a) Define the Term Distributed Processing Environment (2 marks)

A Distributed Processing Environment refers to a computing setup in which processing
power and data are distributed across multiple interconnected computers or nodes, which
coordinate to perform tasks.

✅ Key Features:

 Nodes communicate over a network.
 Each node may have its own OS and memory.
 Appears to users as a single coherent system.

Examples: Cloud computing, grid computing, client-server models.

b) Concept of Process Hierarchies (4 marks)

In operating systems, process hierarchies refer to the parent-child relationships between
processes.

How it works:

 A parent process creates a child process using system calls like fork().
 The child inherits resources (memory, files) from the parent.
 This forms a tree structure where processes can create sub-processes.

Example (Unix-like systems):

init (PID 1)
├── Process A
│ ├── Process B
│ └── Process C
└── Process D

Benefits:

 Organizes process management.
 Enables control and termination of related processes.

c) Four Problems in Concurrent Processes (KRA Implementation) (8 marks)

When KRA implements concurrent systems, they may face the following issues:
1. Race Conditions:

 Occur when two or more processes access shared data simultaneously.
 Results depend on the sequence of execution.
 Example: Two tax records being updated at once causing data corruption.

2. Deadlocks:

 Processes wait for resources held by each other, resulting in a complete standstill.
 Example: Process A waits for a database lock held by Process B, while B waits for A’s
resource.

3. Starvation:

 A low-priority process waits indefinitely as higher-priority processes are continuously
given resources.
 Can cause delays in critical operations.

4. Critical Section Problems:

 Issues arise when multiple processes enter their critical section simultaneously, leading
to inconsistent data.
 Solution: Use of synchronization mechanisms (e.g., mutexes, semaphores).
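
The mutex-based fix for a race condition can be sketched in Python (illustrative; the shared counter and thread counts are arbitrary): each increment is a critical section, so the lock guarantees only one thread updates the counter at a time.

```python
import threading

counter = 0
lock = threading.Lock()          # mutex guarding the shared counter

def deposit(times):
    global counter
    for _ in range(times):
        with lock:               # critical section: one thread at a time
            counter += 1         # read-modify-write on shared data

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # always 40000 with the lock held
```

Without the lock, the read-modify-write can interleave and updates can be lost, which is exactly the race condition described above.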

d) Management of Deadlocks by an Operating System (6 marks)

The Operating System can handle deadlocks in four main ways:

1. Deadlock Prevention:

 Design the system so that one of the four Coffman conditions for deadlock (mutual
exclusion, hold-and-wait, no preemption, circular wait) is eliminated.
 Example: Require processes to request all resources at once.

2. Deadlock Avoidance:

 Use algorithms like Banker's Algorithm to check in advance if granting a resource will
lead to a deadlock.
 Only grant resources if the system remains in a safe state.

3. Deadlock Detection and Recovery:

 Allow deadlocks to occur, then detect them using a Resource Allocation Graph or wait-
for graph.
 Once detected, recover by:
o Killing one or more processes
o Preempting resources
o Rolling back processes

4. Deadlock Ignorance (Ostrich Algorithm):

 The OS assumes that deadlocks are rare and does nothing.
 Used in systems where the cost of handling deadlocks is higher than their impact (e.g.,
desktop OS).

Definitions (2 marks each)

i) Deadlock (2 marks)

A deadlock is a situation in an operating system where two or more processes are unable to
proceed because each is waiting for a resource held by the other.

✅ Key Characteristics:

 Circular waiting
 Mutual exclusion
 No preemption
 Hold and wait

ii) Critical Section (2 marks)

A critical section is a segment of code in a process where shared resources (like variables,
files, or hardware) are accessed. Only one process at a time should execute in its critical
section to avoid data inconsistency.

✅ Ensures mutual exclusion in concurrent systems.

iii) File Descriptor (2 marks)

A file descriptor is a non-negative integer used by the operating system to uniquely identify
an open file (or I/O resource) in a process.

✅ Examples in UNIX:
 0: Standard Input (stdin)
 1: Standard Output (stdout)
 2: Standard Error (stderr)
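
A small Unix-only Python sketch of descriptors in action (the filename demo.txt is an arbitrary example): the three standard descriptors are pre-opened, and a newly opened file receives the lowest free descriptor, typically 3.

```python
import os

# fds 0, 1 and 2 are pre-opened for every process
os.write(1, b"to stdout via fd 1\n")
os.write(2, b"to stderr via fd 2\n")

# opening a file yields the next free descriptor (typically 3)
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)
print("new descriptor:", fd)
os.close(fd)
os.remove("demo.txt")            # clean up the illustrative file
```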

iv) Banker's Algorithm (2 marks)

The Banker's Algorithm is a deadlock avoidance algorithm that checks whether a system is in
a safe state before allocating requested resources to a process.

✅ Named after the banking system: ensures that the system has enough resources (like cash) to
fulfill future needs without running into deadlock.
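
The safe-state check at the heart of the algorithm can be sketched in Python; the resource matrices below are the widely cited textbook instance (5 processes, 3 resource types), not data from these notes.

```python
def is_safe(available, allocation, maximum):
    """Return True if the system is in a safe state (Banker's safety check)."""
    n = len(allocation)                  # number of processes
    # need[i] = maximum demand of process i minus what it already holds
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)               # resources currently free
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # pretend process i runs to completion and releases its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)         # safe only if every process could finish

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe(available, allocation, maximum))
```

A resource request is granted only if the state that would result still passes this check; otherwise the requesting process waits.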

v) Four Principles of the Critical Section Problem (8 marks)

1. Mutual Exclusion:
o Only one process can enter its critical section at a time.
o Prevents data corruption or inconsistent updates.
2. Progress:
o If no process is in the critical section, one of the waiting processes must be
allowed to enter, and the decision should not be postponed indefinitely.
3. Bounded Waiting:
o A limit must exist on the number of times other processes can enter their critical
sections before a process waiting to enter gets a chance.
4. No Assumptions about Speed or Number of CPUs:
o The solution must work regardless of CPU speeds or the number of processors.
o Ensures that the algorithm is valid in both uniprocessor and multiprocessor
systems.

b) Interrupt Processing (4 marks)

An interrupt is a signal sent to the CPU by hardware or software indicating an event that needs
immediate attention.

Interrupt Processing Steps:

1. Interrupt Signal:
o Generated by I/O devices, timers, or programs.
2. CPU Suspends Execution:
o Current execution is paused and context (registers, PC) is saved.
3. Control Transfers to Interrupt Handler:
o The OS executes a predefined interrupt service routine (ISR).
4. Return to Previous Task:
o After handling, the CPU restores the previous context and resumes execution.

Define the term Memory (1 mark)

Memory in the context of computer systems refers to the hardware component or storage
space where the data and instructions required by programs and the operating system are
temporarily stored during execution.

 Types: RAM (Random Access Memory), Cache Memory.

b) Four (4) Page Replacement Strategies (4 marks)

Page replacement strategies determine how the operating system manages pages when memory
is full. When a new page needs to be loaded and memory is full, one of the following strategies
is used to decide which page to remove:

1. FIFO (First-In, First-Out):
o The oldest page in memory is replaced first.
o Simple but can suffer from "Belady's Anomaly" (page faults can increase
when more frames are added).
2. LRU (Least Recently Used):
o Replaces the page that has not been used for the longest time.
o This approach assumes that recently used pages are more likely to be used again.
3. Optimal (OPT):
o Replaces the page that will not be used for the longest period of time in the future.
o The most efficient strategy, but impractical to implement since future
references are unknown; mainly used as a benchmark.
4. Clock Algorithm:
o A practical approximation of LRU.
o Pages are organized in a circular buffer with a hand pointing to the oldest page. If
the page's reference bit is 1, the bit is cleared, and the hand moves to the next
page; if the bit is 0, the page is replaced.
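
FIFO and LRU can be compared with a short Python simulation counting page faults (the reference string is an arbitrary example, not from the notes):

```python
from collections import OrderedDict

def fifo_faults(pages, frames):
    """Count page faults under FIFO replacement."""
    memory, faults = [], 0
    for p in pages:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)            # evict the oldest loaded page
            memory.append(p)
    return faults

def lru_faults(pages, frames):
    """Count page faults under LRU replacement."""
    memory, faults = OrderedDict(), 0
    for p in pages:
        if p in memory:
            memory.move_to_end(p)        # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)   # evict least recently used
            memory[p] = True
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print("FIFO:", fifo_faults(ref, 3), "LRU:", lru_faults(ref, 3))
```

With the classic string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, fifo_faults gives 9 faults with 3 frames but 10 with 4 frames, which is Belady's Anomaly in action.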

c) Differentiate Between Memory Management and File Management (2 marks)

Aspect     | Memory Management                                                                    | File Management
Definition | Management of primary memory (RAM), including allocating and deallocating memory for processes | Organization, storage, retrieval, and management of data in files on storage devices (e.g., hard drives)
Focus      | Ensures efficient use of memory and avoids memory conflicts                          | File organization (e.g., directories), access control, and file storage
Example    | Paging, segmentation, heap management                                                | File allocation, access permissions, directory structures

d) Objectives of File Management for Airtel KE Co. Ltd (4 marks)

When Airtel KE upgrades their operating system to improve file management, they may focus
on the following objectives:

1. Efficient Data Storage:
o Ensure data is stored efficiently in files, minimizing disk space usage and
speeding up access times.
2. Data Security:
o Implement file access control mechanisms (e.g., encryption, user authentication)
to protect sensitive customer data from unauthorized access.
3. Data Integrity:
o Maintain data consistency and prevent file corruption. This includes techniques
such as backup and error-checking.
4. Easy File Access and Retrieval:
o Provide efficient ways for employees or systems to retrieve files quickly, such as
indexing or directory organization.

e) Three (3) Disk Scheduling Techniques/Algorithms (3 marks)

Disk scheduling improves the performance of input/output operations by optimizing the
order in which pending requests are serviced. Three common disk scheduling algorithms:

1. FCFS (First-Come, First-Served):
o Processes requests in the order they arrive. Simple but inefficient, especially if
requests are spread out across the disk.
2. SSTF (Shortest Seek Time First):
o Selects the disk request that is closest to the current disk head position. Reduces
seek time but may lead to starvation of some requests.
3. SCAN (Elevator Algorithm):
o The disk arm moves in one direction (scans the disk), servicing requests until it
reaches the end, then reverses direction. This algorithm minimizes seek time and
is efficient in reducing the overall distance traveled by the disk arm.
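
FCFS and SSTF can be compared with a short Python simulation of total head movement (the request queue and starting head position are the commonly cited textbook values, not data from these notes):

```python
def fcfs_movement(requests, head):
    """Total head movement when servicing requests in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)       # seek distance to the next request
        head = r
    return total

def sstf_movement(requests, head):
    """Total head movement when always choosing the nearest pending request."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print("FCFS:", fcfs_movement(queue, 53), "SSTF:", sstf_movement(queue, 53))
```

For this queue and a head at cylinder 53, FCFS moves the head 640 cylinders while SSTF needs only 236, illustrating the reduced seek time described above.
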
f) Three Challenges of Distributed Systems for Telecom Kenya Co. Ltd (6 marks)

When implementing a distributed system, Telecom Kenya is likely to face the following
challenges:

1. Network Issues:
o Distributed systems rely on network communication. Network latency,
congestion, and packet loss can impact performance and reliability.
2. Consistency and Synchronization:
o Maintaining data consistency across multiple locations can be difficult. Ensuring
that all nodes reflect the same data (e.g., databases) while minimizing delay is a
key challenge.
3. Security:
o A distributed system has multiple points of entry, making it more vulnerable to
security breaches. Protecting data during transmission and implementing secure
access controls is crucial.
4. Fault Tolerance and Recovery:
o Systems must be resilient to node or network failures. Designing a system that
ensures service availability despite hardware failures is complex and requires
backup systems and redundancy.
5. Scalability:
o As the system grows (e.g., adding new nodes), maintaining performance and
managing the distribution of resources can become challenging.
6. Complexity in Management:
o Managing a distributed system with many interconnected nodes requires careful
coordination and monitoring of each part of the system to ensure smooth
operation.
