System Software and Operating System

UGC NET UNIT-5

UNIT-5 (System Software And Operating System)

Chapter-1
System Software: Machine, Assembly and High-Level Languages; Compilers and Interpreters; Loading,
Linking and Relocation; Macros, Debuggers.
Machine Language
 A machine language program expresses both its instructions and its data using only 0s and 1s; it is the
only form a processor can execute directly.
Assembly Language ("second-generation language")
 Assembly language is a low-level language that helps to communicate directly with computer hardware.
 It is essentially a human-readable representation of machine language, using mnemonic codes to
represent operations and operands.
 Example: x86 assembly language (used in Intel and AMD processors).
High-Level Languages
 A high-level language is a computer programming language designed for human readability.
 Examples: PROLOG, FORTRAN, Pascal, Python, Java, C++, JavaScript, Ruby, PHP, and Swift.
Compilers
 A compiler is a software program that translates human-readable source code written in a high-level
programming language (like C++ or Java) into a low-level language (such as machine code).
 A compiler works in five phases:
o Phase 1: Lexical analysis (Scanning): This first phase reads the source code from left to
right, character by character, and groups the characters into meaningful sequences called tokens.
o Phase 2: Syntax analysis (Parsing): This phase takes the stream of tokens from the lexical
analyzer and checks for grammatical correctness based on the programming language's rules.
o Phase 3: Intermediate code generation: This phase generates an intermediate representation
(IR) of the source code. The IR is a lower-level, machine-independent version of the code that is
simpler to optimize and translate into different target machine languages.
o Phase 4: Code optimization: This phase improves the intermediate representation so that the
final program runs faster or uses less memory, without changing its meaning.
o Phase 5: Code generation: This final phase generates the target machine code or assembly
code from the intermediate representation. The process includes allocating memory locations for
variables and selecting the appropriate machine instructions.
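The scanning phase above can be sketched in a few lines. This is a minimal illustration for a toy expression language, not the lexer of any real compiler; the token categories (NUMBER, IDENT, OP) are assumptions chosen for the example.

```python
import re

# Sketch of lexical analysis: read left to right and group characters
# into (kind, text) tokens. Token categories here are illustrative.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
PATTERN = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))

def tokenize(source):
    """Group characters into meaningful sequences called tokens."""
    tokens = []
    for match in PATTERN.finditer(source):
        if match.lastgroup != "SKIP":        # whitespace is discarded
            tokens.append((match.lastgroup, match.group()))
    return tokens

print(tokenize("area = width * 2"))
# [('IDENT', 'area'), ('OP', '='), ('IDENT', 'width'), ('OP', '*'), ('NUMBER', '2')]
```

The token stream this produces is exactly what the syntax-analysis phase would consume next.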

Interpreters
 An interpreter is a program that translates and executes high-level programming language code line by
line at runtime, rather than converting the entire program into machine code before execution, as a
compiler does.
 Interpretation is generally slower than running compiled code, because translation happens every
time the program runs.
 Popular languages like Python, JavaScript, and Ruby use interpreters.
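The line-by-line behaviour described above can be sketched with a tiny driver that executes one statement at a time. This is an illustration only (it leans on Python's own `exec` rather than doing real translation), but it shows the key property: errors surface at the line where execution reaches them, not before.

```python
# Sketch of an interpreter loop: translate and execute one line at a time,
# instead of compiling the whole program up front.
def run(source):
    env = {}
    for lineno, line in enumerate(source.splitlines(), 1):
        line = line.strip()
        if not line:
            continue
        try:
            exec(line, {}, env)   # execute just this line
        except Exception as e:
            # An interpreter reports the error at the line it reached.
            raise RuntimeError(f"line {lineno}: {e}")
    return env

env = run("x = 2\ny = x * 3")
print(env["y"])  # 6
```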
Loading
 A loader is an operating system component that loads an executable program from disk into memory,
prepares it for execution, and allocates the memory space it needs to run.
 A loader must perform four functions:
Allocation: allocate space in memory for the program.
Linking: resolve symbolic references between object modules.
Relocation: adjust all address-dependent locations.
Loading: physically place the machine code and data into memory.

Linking and Relocation


Linking
 A linker combines separate object files (created by the compiler and assembler) into a single
executable file, resolving references and assigning addresses.
 Purpose: to combine multiple object files and libraries into a single executable file that the
computer can run.
 Static and dynamic linking are the two linking approaches an operating system can use: static
linking copies library code into the executable at build time, while dynamic linking resolves library
references at load time or run time.
Relocation
 Relocation allows a program to be located in different parts of memory at different times.
 Relocation is the process of connecting symbolic references with symbolic definitions by adjusting
the address-dependent locations in the code. For example, when a program calls a function, the
associated call instruction must transfer control to the proper destination address at execution time.
Macros
 A macro lets a single instruction be defined to represent a whole block of code.
 A macro is a single instruction or a series of commands and keystrokes that can be recorded and
played back to automate a repetitive task, such as formatting text or entering data.
 Macros range from simple keystroke shortcuts in a word processor to more complex sets of
instructions in programming languages, where a macro processor expands each macro call into its
definition before translation.
Debuggers
 A debugger is a computer program that helps software developers find and fix errors ("bugs") in
other programs by allowing them to control the program's execution.
 Some widely used debuggers are GDB (the GNU Debugger), LLDB, and Arm DDT (formerly known
as Allinea DDT).

Chapter-2
Basics of Operating Systems: Operating System Structure, Operations and Services; System Calls,
Operating-System Design and Implementation; System Boot.

Operating System Structure


A program that acts as an intermediary between a user of a computer and the computer hardware
Execute user programs and make solving user problems easier
A more common definition is that the operating system is the one program running at all times on the computer
(usually called the kernel), with all else being application programs
An operating system has three main objectives:
 Convenience
 Efficiency
 Ability to Evolve
Operations and Services
 It provides programs an environment to execute.
 It provides users the services to execute the programs in a convenient manner.
A few common services provided by an operating system:
 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection
System Calls
 A system call is the primary interface through which a user program requests services from the
operating system's kernel, such as accessing hardware or managing processes; it is the standard way
for a program to interact with the operating system.
 Examples include open() to open a file and fork() to create a new process.
 Services Provided by System Calls: Process creation and management, Main memory management,
File Access, Directory and File system management, Device handling (I/O), Protection,
Networking, etc.
 There are 5 different categories of system calls
o Process control
o File management
o Device management
o Information maintenance
o Communication
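Several of the categories above can be seen from Python, whose os module exposes thin wrappers around the underlying system calls. A small sketch (the file path is a temporary one created for the example):

```python
import os, tempfile

# Each os.* call below is a request to the kernel via a system call.
pid = os.getpid()                      # information maintenance: process id
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_WRONLY | os.O_CREAT)        # file management: open/create
os.write(fd, b"hello from the kernel interface\n")  # device handling: write
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                # read back through the same interface
os.close(fd)
print(pid > 0, data)
```

Library calls like Python's built-in `open()` ultimately reach the kernel through the same open/read/write/close system calls.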

Operating-System Design and Implementation


OS design and implementation involves defining user and system goals, choosing an OS type (like batch or
time-share), and structuring the operating system into distinct "policies" (what is done) and "mechanisms" (how
it's done) to promote modularity.
1. Design Phase (OS type, structure); 2. Implementation Phase (language selection); 3. Key Considerations
(kernel, data reliability, file systems).
There are basically two types of goals while designing an operating system
User Goals
System Goals

System Boot
 A system boot, or booting, is the process of starting a computer by loading its operating system into
main memory (RAM). When you power on a computer, its firmware (BIOS or UEFI) initializes
hardware and locates the boot device.
 The BIOS, operating system and hardware components of a computer
system should all be working correctly for it to boot.
 Types of Booting
o Cold Booting: starting a computer from a completely powered-off state by pressing the
power button.
o Warm Booting (Restart): restarting the computer through a software command or a reset
button, without a complete power cycle.

Chapter-3
Process Management: Process Scheduling and Operations; Interprocess Communication, Communication
in Client-Server Systems, Process Synchronization, Critical-Section Problem, Peterson’s Solution,
Semaphores, Synchronization.
Process Management
 A process is the execution of a program that performs the actions specified in that program.
 It can be defined as an execution unit where a program runs.
 Process management involves tasks like the creation, scheduling, and termination of processes, and
deadlock handling.
The OS is responsible for the following activities in connection with Process Management:
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanism for process communication

Process States
1. New: The process is being created.
2. Ready: The operating system moves the process to the "ready" state, placing it in a queue of processes
waiting for the CPU.
3. Running (Executing): The CPU is currently executing the process's instructions.
4. Waiting (Blocked): From the "running" state, a process moves here if it must wait for an event, such
as the completion of an I/O operation.
5. Terminated (Exit): The process has finished its execution.
6. Suspended: The process has been swapped out of main memory and cannot continue executing until it
is brought back in; it may also still be waiting for an event, such as the completion of an
input/output operation (for example, reading data from a disk).

Process Control Block


Every process is represented in the OS by a PCB, which is also called task control block.
Components of PCB:
1. Process State
2. Program Counter
3. CPU Registers
4. CPU Scheduling Information
5. Accounting Information
6. Memory-management Information
7. I/O status Information

 Process state is the current activity of a process.
 The process ID is a unique identifier for the process.
 Program Counter (PC) is a CPU register that holds the memory address of the next instruction to be
executed.
 A CPU register is a small, extremely fast internal memory unit within the processor that holds data, memory
addresses, and instructions for immediate use by the CPU.
 CPU scheduling information is the data the CPU scheduler uses to decide the way and order in which
processes should be executed, such as the process's priority and queue pointers.


 Accounting Information: information such as CPU time used and memory usage, which helps the OS
monitor the performance of the process.
 Memory management information within a Process Control Block (PCB) includes data like page tables or
segment tables, as well as base and limit registers.
 I/O Status Information: includes the list of I/O devices allocated to the process, the list of open
files, etc.

Process Scheduling
 It is an activity of the process manager that handles the removal of the running process from the CPU and
the selection of another process on the basis of a particular strategy.
 This is an essential part of a Multiprogramming Operating Systems.
Process Scheduling Queues
 Job queue: holds all processes that exist in the system.
 Ready queue: keeps the set of all processes residing in main memory, ready and waiting to execute.
 Device (waiting) queues: hold the processes waiting for a particular I/O device; a process in such a
queue is not ready to run until its request completes.
Types of schedulers
 Long-term (job) schedulers, which control the multiprogramming degree by selecting processes
from secondary memory;
 Short-term (CPU) schedulers (or dispatcher), which are responsible for selecting a ready process
to be executed by the CPU.
 Medium-term schedulers, which manage the swapping of processes between main and secondary
memory (they are also a part of time-sharing systems).

Interprocess Communication
 This mechanism allows processes to communicate with each other.
 The communication may involve a process letting another process know that some event has occurred,
or the transfer of data from one process to another.
 It is used for exchanging useful information between numerous threads in one or more processes.

Synchronization methods in Interprocess Communication:
(Synchronization is a necessary part of interprocess communication. It is either provided by the interprocess
control mechanism or handled by the communicating processes.)
1. Semaphore
2. Mutual Exclusion
3. Barrier
4. Spinlock

Semaphore(imp)
A semaphore is a variable that controls the access to a common resource by multiple processes. The
two types of semaphores are binary semaphores and counting semaphores. E.g. Producer-Consumer
problem.
Mutual Exclusion(imp)
Mutual exclusion requires that only one process or thread can enter the critical section at a time. This is
useful for synchronization and also prevents race conditions. E.g. printer spooling: when several users
want to print a document, the printer is a shared resource.
Barrier
A barrier does not allow individual processes to proceed until all the processes reach it.
Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while checking if the lock
is available or not. E.g. Multiple CPU cores are running, and two cores (Core A and Core B) both need to
increment the counter.
Approaches to Interprocess Communication
1. Pipe
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect Communication
6. Message Passing
7. FIFO
 A pipe is a unidirectional communication channel that allows two related processes to exchange data by
treating one process's standard output as the other's standard input.
 Shared memory(imp) is the fastest inter-process communication (IPC) method in an operating system,
allowing multiple processes to access and modify a common region of memory. E.g. Data Exchange.
 Message queue is an Inter-Process Communication (IPC) mechanism that acts as an asynchronous buffer
for processes to send and receive data in discrete, structured messages. E.g. Order Placement (Producer).
 Direct communication in Inter-Process Communication (IPC) occurs when processes explicitly name
each other to establish a link for sending and receiving messages.
 Indirect communication is an Inter-Process Communication (IPC) method where processes exchange
information through a shared intermediary, such as a mailbox or port.
 Message passing(imp) is an operating system mechanism where processes communicate by sending and
receiving discrete blocks of data called messages, typically through a kernel or dedicated communication
channels.
 First-in, first-out (FIFO)(imp): a named pipe which, unlike an ordinary pipe, can connect unrelated
processes (and on some systems supports full-duplex use, so each process can both send and receive).
A FIFO ensures that data written to the pipe by one process is read by another process in exactly the
order it was written. E.g. if you ask the computer to print a document and then copy a file, the print
request is handled first because it was issued first.
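The pipe mechanism from the list above can be demonstrated with `os.pipe()`. For simplicity this sketch keeps both ends in one process, whereas real IPC pipes usually connect a parent and a child process:

```python
import os

# A pipe is a unidirectional channel: bytes written to write_fd
# come out of read_fd in the same order (FIFO).
read_fd, write_fd = os.pipe()

os.write(write_fd, b"first")
os.write(write_fd, b" second")
os.close(write_fd)                 # closing the write end signals EOF

chunks = []
while True:
    chunk = os.read(read_fd, 1024)
    if not chunk:                  # empty read means the writer is done
        break
    chunks.append(chunk)
os.close(read_fd)

data = b"".join(chunks)
print(data)  # b'first second'
```

In a shell, `ls -l | grep .txt` builds exactly this kind of channel between two processes.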

Process Synchronization
 It is used to handle problems that arise while multiple processes execute together.
 It is the task of coordinating the execution of processes so that no two processes can access the
same shared data and resources at the same time.
 It is a procedure employed to preserve the appropriate order of execution of cooperating processes.
Two key concepts:
1. Race Condition
2. Critical Section
 Race Condition: a race condition may occur inside a critical section, when the result of multiple
process/thread executions differs according to the order in which the threads execute. When more than
one process executes the same code or accesses the same memory, the outcome left in the shared
variable can be wrong; each process is effectively racing to have its result be the final one.
 Critical section(imp): The critical section problem in operating systems ensures that when multiple
processes compete for access to shared resources (like variables or files), only one process can execute its
"critical section" (the code that accesses these resources) at any given time to prevent data corruption.
 Solutions to the critical section problem ensure Mutual Exclusion (only one process in the critical
section), Progress (any process can enter a critical section if it is free), and Bounded Waiting (each
process must have a limited waiting time).
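A small sketch of the critical-section idea with threads: the read-modify-write of a shared variable is the critical section, and a lock enforces mutual exclusion around it. The variable names are illustrative.

```python
import threading

# Without the lock, two threads could read the same value of `balance`
# and one update would be lost (a race condition). The lock guarantees
# mutual exclusion: only one thread is in the critical section at a time.
balance = 0
lock = threading.Lock()

def deposit(times):
    global balance
    for _ in range(times):
        with lock:                 # enter critical section
            current = balance      # read shared data
            balance = current + 1  # modify and write it back
        # lock released: another thread may now enter

threads = [threading.Thread(target=deposit, args=(10000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 40000
```

This also satisfies progress and bounded waiting in practice, since the lock is released after every update.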

Peterson's Solution
Peterson's Solution is a classic software-based algorithm designed to solve the critical section problem for two
processes in an operating system. It ensures mutual exclusion, bounded waiting, and progress, allowing only one
process to access a shared resource (critical section) at a time, preventing race conditions.
 Peterson's Solution is designed for only two processes. It cannot be directly applied to scenarios
with more than two processes.
 The algorithm uses two shared variables:
 flag[i]: shows whether process i wants to enter the critical section.
 turn: indicates whose turn it is to enter if both processes want to access the critical section at the
same time.
 Step-by-Step Explanation
 Intent to Enter: A process sets its flag to true when it wants to enter the critical section.
 Turn Assignment: It sets the turn variable to the other process, giving the other process the chance
to enter first if it also wants to.
 Waiting Condition: A process waits if the other process also wants to enter and it is the other’s turn.
 Critical Section: Once the condition is false, the process enters the critical section safely.
 Exit: On leaving, the process resets its flag to false, allowing the other process to proceed.
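The steps above can be sketched with two Python threads. This is an illustration, not production code: on real multiprocessors, compilers and CPUs may reorder the flag/turn writes, so real systems use hardware atomic instructions instead.

```python
import threading, time

flag = [False, False]   # flag[i]: does thread i want to enter?
turn = 0                # whose turn is it if both want in?
counter = 0
N = 1000

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        flag[i] = True          # intent to enter
        turn = other            # give the other thread the first chance
        while flag[other] and turn == other:
            time.sleep(0)       # waiting condition: busy-wait, yielding CPU
        counter += 1            # critical section: safe shared update
        flag[i] = False         # exit: allow the other thread to proceed

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2000: no increments are lost
```

Because mutual exclusion holds, all 2N increments survive; without the entry/exit protocol, some could be lost.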

Process Scheduling and Operations


Process scheduling in an Operating System (OS) is the process of managing and selecting which program will
run on the CPU at any given time, moving between different states and queues (job, ready, waiting) to keep the
CPU busy, minimize response time, and ensure efficient resource use.

Process Operations
1. Process Creation: An OS creates new processes as instances of programs that can execute independently.
2. Process State Transitions: A process moves through various states (e.g., new, ready, running, waiting) by
changing its position in different queues, such as the ready queue, waiting queue, or job queue.
3. Scheduling: The scheduler selects a process from the ready queue to be assigned the CPU.
4. Switching: When the OS switches from one process to another, it saves the state of the currently
running process (its PCB) and loads the saved state of the next process, allowing for multitasking.
5. Process Termination: A process finishes its execution and is removed from the system.

Communication In Client-Server Systems

In client-server systems, communication relies on a request-response cycle where clients send service requests
to servers, and servers process them and return responses.

Key Aspects of Client-Server Communication

 Request-Response Model: The core pattern involves the client initiating a request for a service or data,
and the server providing a corresponding response.

 Protocols: A common language and set of rules, known as a communication protocol, are used by
clients and servers to understand each other.

 TCP/IP: A fundamental suite of protocols governing data transmission over networks.

 HTTP (Hypertext Transfer Protocol): Used for transmitting web pages and other hypermedia
documents.

Mechanisms:

Specific methods are used for data exchange between clients and servers:

 Sockets Mechanism: Sockets are the endpoints of communication between two machines. They
provide a way for processes to communicate with each other, either on the same machine or over
the Internet, and the channel is bidirectional. E.g. a web browser (client) connecting to a web server
to load a webpage.

 Remote Procedure Calls (RPCs): A technique that allows a client to execute a procedure or function
on a remote server as if it were a local call. Example: Financial Transaction System.

 Pipes: A mechanism for interprocess communication that lets data flow between processes, typically
in one direction. E.g. allowing the standard output of one command to become the standard input of
another; a classic example is ls -l | grep .txt

 Message Passing: a communication method in which machines communicate with one another by
sending and receiving messages. This approach is commonly used in parallel and distributed
systems. E.g. a payment processing service acting as the receiver.
 Interprocess Communication (IPC): allows communication between processes within the same
machine. IPC enables data sharing and synchronization between different processes running
concurrently on an operating system, and includes shared memory, message queues, semaphores,
and pipes, among others. E.g. copying text from a web browser and pasting it into a text editor.

 Distributed File Systems: provide access to files from multiple machines in a network. Clients can
access and manipulate files stored on a remote server through a standard interface; examples include
Network File System (NFS) and Server Message Block (SMB). E.g. Amazon S3 (Simple Storage
Service).

How it Works

1. Client Request: A client application sends a request for a resource or service to the server over the
network.

2. Server Processing: The server receives the request, processes it, and executes the necessary functions
or retrieves the requested data.

3. Server Response: The server sends the result or acknowledgment back to the client.

4. Client Receives Response: The client receives and uses the response from the server.
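The four steps above can be sketched with a tiny echo server and client over sockets, both in one program. The port is chosen by the OS (port 0) for the example; real servers listen on a well-known port and speak a protocol such as HTTP on top of this.

```python
import socket, threading

def serve(server_sock):
    conn, _ = server_sock.accept()         # wait for a client connection
    with conn:
        request = conn.recv(1024)          # 2. server receives and processes
        conn.sendall(b"echo: " + request)  # 3. server sends the response

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))              # OS picks a free port
server.listen(1)
threading.Thread(target=serve, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())       # 1. client initiates the request
client.sendall(b"hello")
reply = client.recv(1024)                  # 4. client receives the response
client.close(); server.close()
print(reply)  # b'echo: hello'
```

A web browser fetching a page follows the same cycle, with an HTTP request and response in place of the raw bytes.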

Semaphore(imp)
As noted earlier, a semaphore is a variable that controls access to a common resource shared by multiple
processes; binary and counting semaphores are the two types. E.g. Producer-Consumer problem.
 The producer-consumer problem in an OS involves two processes, a producer that creates data and
a consumer that consumes it, both sharing a fixed-size buffer; only one process may access the
buffer at a time. Solutions often use semaphores to track the buffer's state (full/empty slots) and
to ensure mutual exclusion on the buffer.
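The producer-consumer solution described above can be sketched with Python's threading semaphores. The buffer size and item count are illustrative choices:

```python
import threading, collections

BUFFER_SIZE = 3
buffer = collections.deque()
empty = threading.Semaphore(BUFFER_SIZE)  # counting semaphore: free slots
full = threading.Semaphore(0)             # counting semaphore: filled slots
mutex = threading.Lock()                  # binary semaphore: buffer access
consumed = []

def producer():
    for item in range(5):
        empty.acquire()            # wait for a free slot
        with mutex:
            buffer.append(item)    # critical section: add to buffer
        full.release()             # signal: one more filled slot

def consumer():
    for _ in range(5):
        full.acquire()             # wait for a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()            # signal: one more free slot

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

The `empty` semaphore blocks the producer when the buffer is full, `full` blocks the consumer when it is empty, and the mutex keeps them from touching the buffer simultaneously.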
