System Software and Operating System
Chapter-1
System Software: Machine, Assembly and High-Level Languages; Compilers and Interpreters; Loading,
Linking and Relocation; Macros, Debuggers.
Machine Language
"The data can also be specified and represented using only 0s and 1s. Such a program is called Machine
Language program"
Assembly Language ("second-generation language")
Assembly language is a low-level language that helps to communicate directly with computer hardware.
It is essentially a human-readable representation of machine language, using mnemonic codes to
represent operations and operands.
Example: x86 assembly language (used in Intel and AMD processors).
High-Level Languages
A high-level language is a computer programming language designed for human readability, abstracting away
the details of the underlying hardware.
Examples of high-level languages:
PROLOG, FORTRAN, Pascal, Python, Java, C++, JavaScript, Ruby, PHP, and Swift.
Compilers
A compiler is a software program that translates human-readable source code written in a high-level
programming language (like C++ or Java) into a low-level language (such as machine code).
Compilation is typically partitioned into five phases:
o Phase 1: Lexical analysis (Scanning): This is the first phase, which reads the source code from
left to right, character by character, and groups them into meaningful sequences called tokens.
o Phase 2: Syntax analysis (Parsing): This phase takes the stream of tokens from the lexical
analyzer and checks for grammatical correctness based on the programming language's rules.
o Phase 3: Intermediate code generation: This phase generates an intermediate representation
(IR) of the source code. The IR is a lower-level, machine-independent version of the code that is
simpler to optimize and translate into different target machine languages.
o Phase 4: Code optimization: This phase improves the intermediate representation so that the
resulting program runs faster or uses less space, without changing its meaning.
o Phase 5: Code generation: This final phase generates the target machine code or assembly
code from the intermediate representation. The process includes allocating memory locations for
variables and selecting the appropriate machine instructions.
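As a rough illustration of the scanning phase, the sketch below (a hypothetical, minimal lexer in C) breaks the statement "sum = a + 10;" into identifier, number and operator tokens. Real compilers use far richer token classes and tools such as lex/flex; this only shows the idea of grouping characters into tokens.

/* Minimal lexical-analysis sketch (illustrative only). */
#include <stdio.h>
#include <ctype.h>

int main(void) {
    const char *src = "sum = a + 10;";              /* hypothetical input statement */
    for (const char *p = src; *p; ) {
        if (isspace((unsigned char)*p)) { p++; continue; }
        if (isalpha((unsigned char)*p)) {           /* identifier token */
            const char *start = p;
            while (isalnum((unsigned char)*p)) p++;
            printf("IDENTIFIER: %.*s\n", (int)(p - start), start);
        } else if (isdigit((unsigned char)*p)) {    /* number token */
            const char *start = p;
            while (isdigit((unsigned char)*p)) p++;
            printf("NUMBER:     %.*s\n", (int)(p - start), start);
        } else {                                    /* single-character operator/punctuation */
            printf("OPERATOR:   %c\n", *p);
            p++;
        }
    }
    return 0;
}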
Interpreters
An interpreter is a program that translates and executes high-level programming language code line by
line at runtime, rather than converting the entire program into machine code before execution, as a
compiler does.
Interpretation is generally slower than running compiled code, because translation happens again every
time the program runs.
Popular languages like Python, JavaScript, and Ruby use interpreters.
Loading
A loader is an operating system component that loads an executable program from disk into memory, prepares
it for execution, and allocates the memory space needed to run it.
A loader must perform the following functions:
Allocation: reserving space in memory for the program.
Linking: resolving symbolic references between object decks (modules).
Relocation: adjusting all address-dependent locations so they match the allocated memory.
Loading: physically placing the machine code and data into memory.
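As a small, hedged illustration of loading and linking at run time on a POSIX system, the sketch below uses dlopen/dlsym to load the math library into the process and resolve the cos symbol. The library name "libm.so.6" is an assumption that matches common Linux installs; build with: cc loader_demo.c -ldl

/* Run-time loading/linking sketch using the POSIX dynamic loader. */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    /* Load the shared library into this process's address space. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* library name is an assumption */
    if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    /* Resolve (link) the symbol "cos" to an address inside the loaded library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "dlsym: %s\n", dlerror()); dlclose(handle); return 1; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);                                  /* unload when done */
    return 0;
}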
Chapter-2
Basics of Operating Systems: Operating System Structure, Operations and Services; System Calls,
Operating-System Design and Implementation; System Boot.
System Boot
A system boot, or booting, is the process of starting a computer by loading its operating system into
main memory (RAM). When you power on a computer, its firmware (BIOS or UEFI) initializes
hardware and locates the boot device.
The BIOS, operating system and hardware components of a computer
system should all be working correctly for it to boot.
Types of Booting
o Cold Booting:
Starting a computer from a completely powered-off state by pressing the
power button.
o Warm Booting (Restart):
Restarting a computer that is already powered on, without cutting power, typically through the
operating system's restart option.
Chapter-3
Process Management: Process Scheduling and Operations; Interprocess Communication, Communication
in Client-Server Systems, Process Synchronization, Critical-Section Problem, Peterson’s Solution,
Semaphores, Synchronization.
Process Management
Process is the execution of a program that performs the actions specified in that program.
It can be defined as an execution unit where a program runs.
Process management involves various tasks such as process creation, scheduling, termination, and deadlock
handling.
The OS is responsible for the following activities in connection with Process Management:
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process communication and synchronization (see the process-creation sketch below).
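A minimal sketch of process creation and termination on a POSIX system, assuming fork()/execlp()/wait() are available: the child process runs the ls command while the parent waits for it to terminate.

/* Process creation sketch: parent forks a child, child runs "ls",
   parent waits until the child terminates. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* create a new process */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                     /* child process */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        _exit(1);
    } else {                            /* parent process */
        int status;
        waitpid(pid, &status, 0);       /* wait for child termination */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}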
Process States
1. New: The process is being created.
2. Ready: The operating system moves the process to the "ready" state, placing it in a queue of processes
waiting for the CPU.
3. Waiting: From the "running" state, a process may move to the "waiting" state if it needs to perform an
I/O operation or wait for some event.
4. Running (Executing): The CPU is currently executing the process's instructions.
5. Terminated (Exit): The process has finished its execution.
6. Blocked: The process is waiting for an event (such as the completion of an I/O operation) and cannot
run until that event occurs.
7. Suspended: The process has been swapped out of main memory to secondary storage; it cannot continue
until the operating system brings it back into memory.
Process Scheduling
It is an activity of the process manager that handles the removal of the running process from the CPU and
the selection of another process on the basis of a particular strategy.
This is an essential part of a Multiprogramming Operating Systems.
Process Scheduling Queues
Job queue: holds all processes that exist in the system.
Ready queue: holds all processes residing in main memory that are ready and waiting to execute.
Device (waiting) queues: hold processes that are not ready to run because they are waiting for an I/O
device or some other event.
Types of schedulers
Long-term (job) schedulers, which control the degree of multiprogramming by selecting processes
from secondary memory and admitting them to the ready queue;
Short-term (CPU) schedulers (or dispatcher), which are responsible for selecting a ready process
to be executed by the CPU.
Medium-term schedulers, which manage the swapping of processes between main and secondary
memory (used mainly in time-sharing systems).
Interprocess Communication
Semaphore(imp)
A semaphore is a variable that controls access to a common resource shared by multiple processes. The two
types of semaphores are binary semaphores and counting semaphores. E.g. the Producer-Consumer problem.
Mutual Exclusion(imp)
Mutual exclusion requires that only one process or thread can be inside the critical section at a time. This is
essential for synchronization and prevents race conditions. E.g. Printer Spooling: when several users want to
print documents, the printer is a shared resource that must be used by one job at a time.
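A minimal sketch of mutual exclusion using a POSIX mutex: two threads increment a shared counter, and only one at a time may be inside the locked region. The counter and loop count are illustrative; compile with -pthread.

/* Mutual exclusion sketch: a pthread mutex protects a shared counter. */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* enter critical section */
        counter++;                      /* only one thread here at a time */
        pthread_mutex_unlock(&lock);    /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 200000 with the mutex */
    return 0;
}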
Barrier
A barrier does not allow individual processes to proceed until all the processes reach it.
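A minimal barrier sketch, assuming the POSIX pthread_barrier_* interface is available (an optional POSIX feature, present on Linux): three threads each print a "before" line, and none prints its "after" line until all three have reached the barrier. Compile with -pthread.

/* Barrier sketch: no thread proceeds past the barrier until all arrive. */
#include <stdio.h>
#include <pthread.h>

#define NTHREADS 3
static pthread_barrier_t barrier;

static void *phase(void *arg) {
    long id = (long)arg;
    printf("thread %ld: before barrier\n", id);
    pthread_barrier_wait(&barrier);     /* block until all NTHREADS arrive */
    printf("thread %ld: after barrier\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, phase, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}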
Spinlock
This is a type of lock. A process trying to acquire the lock waits in a loop ("spins"), repeatedly checking
whether the lock has become available. E.g. multiple CPU cores are running, and two cores (Core A and
Core B) both need to increment a shared counter.
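A minimal spinlock sketch using the C11 atomic_flag type: a thread that fails to acquire the lock simply loops until the flag is cleared. The two threads stand in for the Core A / Core B scenario above; compile with -pthread.

/* Spinlock sketch built on C11 atomic_flag (test-and-set in a loop). */
#include <stdio.h>
#include <stdatomic.h>
#include <pthread.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter = 0;

static void spin_lock(void)   { while (atomic_flag_test_and_set(&lock)) { /* spin */ } }
static void spin_unlock(void) { atomic_flag_clear(&lock); }

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock();                    /* busy-wait until the lock is free */
        counter++;
        spin_unlock();
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;                     /* stand-ins for Core A and Core B */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);
    return 0;
}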
Approaches to Interprocess Communication
1. Pipe
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect Communication
6. Message Passing
7. FIFO
A pipe is a unidirectional communication channel that allows two related processes to exchange data by
treating one process's standard output as the other's standard input.
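A minimal sketch of a pipe between related processes, assuming POSIX pipe()/fork(): the parent writes a short message into the pipe and the child reads it.

/* Pipe sketch: parent writes, child reads, through a unidirectional pipe. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                  /* child: reader */
        char buf[64];
        close(fd[1]);                   /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        _exit(0);
    } else {                            /* parent: writer */
        const char *msg = "hello through the pipe";
        close(fd[0]);                   /* close unused read end */
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}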
Shared memory(imp) is the fastest inter-process communication (IPC) method in an operating system,
allowing multiple processes to access and modify a common region of memory. E.g. Data Exchange.
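A minimal shared-memory sketch using POSIX shm_open and mmap; the object name "/demo_shm" is an illustrative assumption. A real producer and consumer would run as separate processes and would also need synchronization (e.g. a semaphore). Link with -lrt on older Linux systems.

/* Shared memory sketch: create a POSIX shared-memory object, map it,
   and write a string into it that another process could read. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    const char *name = "/demo_shm";     /* hypothetical object name */
    const size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    ftruncate(fd, size);                /* set the region's size */

    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(region, "data placed in shared memory"); /* visible to other mappers */
    printf("wrote: %s\n", region);

    munmap(region, size);
    close(fd);
    shm_unlink(name);                   /* remove the object when done */
    return 0;
}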
Message queue is an Inter-Process Communication (IPC) mechanism that acts as an asynchronous buffer
for processes to send and receive data in discrete, structured messages. E.g. Order Placement (Producer).
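A minimal message-queue sketch using the POSIX mqueue API: it sends one "order" message to a queue and reads it back (in the same process, for brevity). The queue name "/orders" and the attributes are assumptions; link with -lrt on Linux.

/* Message queue sketch: open a POSIX queue, send a message, receive it. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <mqueue.h>

int main(void) {
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = 128, .mq_curmsgs = 0 };
    mqd_t mq = mq_open("/orders", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *order = "order #42: 1 coffee";       /* producer side */
    mq_send(mq, order, strlen(order) + 1, 0);

    char buf[128];                                    /* consumer side */
    if (mq_receive(mq, buf, sizeof(buf), NULL) >= 0)
        printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/orders");
    return 0;
}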
Direct communication in Inter-Process Communication (IPC) occurs when processes explicitly name
each other to establish a link for sending and receiving messages.
Indirect communication is an Inter-Process Communication (IPC) method where processes exchange
information through a shared intermediary, such as a mailbox or port.
Message passing(imp) is an operating system mechanism where processes communicate by sending and
receiving discrete blocks of data called messages, typically through a kernel or dedicated communication
channels.
First-in, first-out (FIFO) named pipes (imp) (full-duplex in the sense that one process can communicate with
another process and vice versa) ensure that data written to the pipe by one process is read by another
process in exactly the order in which it was written. E.g. if you ask the computer to print a document and then
copy a file, the print request is handled first because it was issued first.
Process Synchronization
It is used to handle problems that arise while multiple processes execute together.
It is the coordination of process execution so that no two processes access the same shared data and
resources at the same time.
It is a procedure used to preserve the appropriate order of execution of cooperating processes.
Key concepts in synchronization:
1. Race condition
2. Critical section
Race Condition: A race condition is a situation that may occur inside a critical section: the result of
executing multiple processes or threads in the critical section differs depending on the order in which they
execute. When more than one process executes the same code or accesses the same memory concurrently, the
final value of a shared variable can be wrong; each process effectively "races" the others, and the outcome
depends on which one finishes its update last.
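A hedged sketch that makes a race condition visible: two threads increment a shared counter without any locking, so their read-modify-write updates can interleave and the final value is usually less than the expected 200000. Compile with -pthread.

/* Race condition sketch: unsynchronized increments of a shared variable. */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                /* shared data, no lock */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;                      /* read-modify-write, not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 200000, but interleaved updates typically lose increments. */
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}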
Critical section(imp): The critical section problem in operating systems ensures that when multiple
processes compete for access to shared resources (like variables or files), only one process can execute its
"critical section" (the code that accesses these resources) at any given time to prevent data corruption.
Solutions to the critical-section problem must ensure Mutual Exclusion (only one process in the critical
section at a time), Progress (if the critical section is free, a waiting process must be allowed to enter
without indefinite postponement), and Bounded Waiting (there is a limit on how long any process can be kept
waiting).
Peterson's Solution
Peterson's Solution is a classic software-based algorithm designed to solve the critical section problem for two
processes in an operating system. It ensures mutual exclusion, bounded waiting, and progress, allowing only one
process to access a shared resource (critical section) at a time, preventing race conditions.
Peterson's Solution is designed for only two processes. It cannot be directly applied to scenarios
with more than two processes.
The algorithm uses two shared variables:
flag[i]: shows whether process i wants to enter the critical section.
turn: indicates whose turn it is to enter if both processes want to access the critical section at the
same time.
Step-by-Step Explanation
Intent to Enter: A process sets its flag to true when it wants to enter the critical section.
Turn Assignment: It sets the turn variable to the other process, giving the other process the chance
to enter first if it also wants to.
Waiting Condition: A process waits if the other process also wants to enter and it is the other’s turn.
Critical Section: Once the condition is false, the process enters the critical section safely.
Exit: On leaving, the process resets its flag to false, allowing the other process to proceed.
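A minimal sketch of Peterson's algorithm for two threads, written with C11 atomics using the default sequentially consistent ordering (the algorithm breaks under weaker memory orderings). Thread ids 0 and 1 stand in for the two processes; compile with -pthread.

/* Peterson's solution sketch for two threads protecting a shared counter. */
#include <stdio.h>
#include <stdbool.h>
#include <stdatomic.h>
#include <pthread.h>

static atomic_bool flag[2];             /* flag[i]: thread i wants to enter */
static atomic_int turn;                 /* whose turn it is to wait */
static long counter = 0;

static void enter_region(int i) {
    int other = 1 - i;
    atomic_store(&flag[i], true);       /* intent to enter */
    atomic_store(&turn, other);         /* give the other thread priority */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                               /* busy-wait while the other has the turn */
}

static void leave_region(int i) {
    atomic_store(&flag[i], false);      /* exit: allow the other thread in */
}

static void *worker(void *arg) {
    int id = (int)(long)arg;            /* 0 or 1 */
    for (int k = 0; k < 100000; k++) {
        enter_region(id);
        counter++;                      /* critical section */
        leave_region(id);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld\n", counter); /* 200000 if mutual exclusion holds */
    return 0;
}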
Process Operations
1. Process Creation: An OS creates new processes as instances of programs that can execute independently.
2. Process State Transitions: A process moves through various states (e.g., new, ready, running, waiting) by
changing its position in different queues, such as the ready queue, waiting queue, or job queue.
3. Scheduling: The scheduler selects a process from the ready queue to be assigned the CPU.
4. Switching: When the OS switches from one process to another, it saves the state of the currently
running process (its PCB) and loads the saved state of the next process, allowing for multitasking.
5. Process Termination: A process finishes its execution and is removed from the system.
In client-server systems, communication relies on a request-response cycle where clients send service requests
to servers, and servers process them and return responses.
Request-Response Model: The core pattern involves the client initiating a request for a service or data,
and the server providing a corresponding response.
Protocols: A common language and set of rules, known as a communication protocol, are used by
clients and servers to understand each other.
HTTP (Hypertext Transfer Protocol): Used for transmitting web pages and other hypermedia
documents.
Mechanisms:
Specific methods are used for data exchange between clients and servers:
Sockets Mechanism: Sockets are the end points of communication between two machines. They provide a way
for processes to communicate with each other, either on the same machine or across the Internet
(bidirectional). E.g. a web browser (client) connecting to a web server to load a webpage.
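A minimal socket sketch: a TCP client that resolves a host and connects to its web server port, mirroring the browser-to-web-server example. The host name example.com is illustrative and error handling is kept short.

/* Socket sketch: TCP client endpoint connecting to a web server (port 80). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;        /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;    /* TCP */

    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {  /* host is illustrative */
        fprintf(stderr, "getaddrinfo failed\n");
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd == -1 || connect(fd, res->ai_addr, res->ai_addrlen) == -1) {
        perror("socket/connect");
        freeaddrinfo(res);
        return 1;
    }

    const char *req = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    send(fd, req, strlen(req), 0);      /* client request */

    char buf[512];
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);  /* server response */
    if (n > 0) { buf[n] = '\0'; printf("%s\n", buf); }

    close(fd);
    freeaddrinfo(res);
    return 0;
}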
Remote Procedure Calls (RPCs): A technique that allows a client to execute a procedure or function
on a remote server as if it were a local call. Example: Financial Transaction System.
Pipes: A mechanism for interprocess communication that allows data to flow in one or two directions
between processes. E.g. allowing the standard output of one command to become the standard input of
another. A classic example is ls -l | grep .txt
Message Passing: Message passing is a communication method in which machines (or processes) communicate
by sending and receiving messages. This approach is commonly used in parallel and distributed systems.
E.g. a Payment Processing Service (Receiver).
Inter-process Communication: Inter-Process Communication (IPC) allows communication between processes
within the same machine. IPC enables data sharing and synchronization between processes running
concurrently on an operating system, and it includes shared memory, message queues, semaphores and pipes,
among others. E.g. copying text from a web browser and pasting it into a text editor.
Distributed File Systems: Distributed file systems provide access to files from multiple machines on a
network. Clients can access and manipulate files stored on a remote server through a standard interface;
examples include Network File System (NFS) and Server Message Block (SMB). E.g. Amazon S3 (Simple Storage
Service).
How it Works
1. Client Request: A client application sends a request for a resource or service to the server over the
network.
2. Server Processing: The server receives the request, processes it, and executes the necessary functions
or retrieves the requested data.
3. Server Response: The server sends the result or acknowledgment back to the client.
4. Client Receives Response: The client receives and uses the response from the server.
Semaphore(imp)
As defined earlier, a semaphore controls access to a common resource shared by multiple processes; the two
types are binary and counting semaphores. The classic application is the Producer-Consumer problem.
The producer-consumer problem in an OS involves two processes, a producer that creates data and a consumer
that consumes it, both sharing a fixed-size buffer. Only one process can access the buffer at a time.
Solutions often use semaphores to manage the buffer's state (full/empty) and a mutex (or binary semaphore)
to ensure mutually exclusive access to the buffer.
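A minimal producer-consumer sketch using POSIX semaphores and a mutex: "empty_slots" counts free buffer slots, "full_slots" counts filled slots, and the mutex gives mutually exclusive access to the buffer. Buffer size and item counts are illustrative; compile with -pthread (POSIX unnamed semaphores as used here work on Linux).

/* Producer-consumer sketch: counting semaphores track empty/full slots,
   a mutex protects the shared bounded buffer. */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 5
#define N_ITEMS  10

static int buffer[BUF_SIZE];
static int in = 0, out = 0;             /* next write / next read positions */
static sem_t empty_slots, full_slots;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= N_ITEMS; item++) {
        sem_wait(&empty_slots);         /* wait for a free slot */
        pthread_mutex_lock(&mutex);
        buffer[in] = item;
        in = (in + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&full_slots);          /* signal: one more filled slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&full_slots);          /* wait for a filled slot */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);         /* signal: one more free slot */
        printf("consumed item %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, BUF_SIZE);    /* all slots start empty */
    sem_init(&full_slots, 0, 0);            /* no slots filled yet */

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);

    sem_destroy(&empty_slots);
    sem_destroy(&full_slots);
    return 0;
}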