Chapter 1: Introduction to Operating Systems
Lesson 1: Definition and Functions of an Operating System
An operating system (OS) is the fundamental software that manages computer hardware and
provides services to computer programs. Without an operating system, a computer is essentially
just a collection of hardware devices with no way of interacting with each other or with users in a
meaningful way. The operating system acts as a bridge between the user and the computer
hardware, ensuring that resources such as the processor, memory, storage, and input/output
devices are used efficiently and securely.
At its simplest level, the OS enables you to start your computer, interact with applications, and
perform tasks such as saving files, browsing the internet, or printing a document. For example,
when you open Microsoft Word on Windows and type a sentence, the operating system coordinates
the keyboard input, stores the characters in memory, updates the screen display, and manages
saving the file onto the hard disk when you choose “Save.”
Definition of an Operating System
Formally, an operating system can be defined as:
A system software that manages hardware resources, provides a platform for application
programs, and acts as an intermediary between the user and the computer hardware.
This means the OS is not a single program but rather a collection of programs working together
to ensure the smooth functioning of the entire system.
Core Functions of an Operating System
To understand what makes the operating system so central, let us look at the major functions it
performs.
1. Process Management
The operating system manages all processes (running programs) in the system. It is responsible
for creating, scheduling, and terminating processes. Since modern systems often run many
programs at once (for example, listening to music while browsing the web and downloading a file),
the OS ensures that the CPU is shared efficiently among them.
A scheduling algorithm decides which process gets the CPU next. For instance, in a time-sharing
system, the OS may give each process a small time slice before moving to the next one, making it
appear as if they are running simultaneously.
Example: When you open Google Chrome, the OS creates a process for it. If you then open Spotify,
another process is created. The OS ensures both get a fair share of CPU time without one freezing
the other.
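This time-slicing can be observed from the shell. The sketch below is a minimal illustration for a Unix-like system, with `sleep` standing in for real applications such as a browser and a music player: because the OS schedules both background processes concurrently, the total wall time is roughly that of one process, not the sum of both.

```shell
#!/bin/sh
# Two "applications" started as background processes; the scheduler
# interleaves them on the CPU.
start=$(date +%s)

sleep 2 &    # stand-in for, say, a browser process
first=$!     # $! holds the PID of the last background process

sleep 2 &    # stand-in for a music player process
second=$!

wait "$first" "$second"          # block until both have terminated
elapsed=$(( $(date +%s) - start ))
echo "both finished in ${elapsed}s"   # ~2s, not ~4s: they ran concurrently
```

If the two processes were run one after another, the script would take about four seconds; concurrency cuts that roughly in half.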
2. Memory Management
Programs require memory to run. The OS allocates memory to different processes and ensures that
they do not interfere with one another. It keeps track of which parts of memory are free and which
are already in use. If the physical memory (RAM) is not enough, the OS may use part of the hard
disk as virtual memory, giving the illusion of more memory.
Example: When you have multiple browser tabs open, the OS manages the memory assigned to
each tab so that one tab crashing does not affect the entire browser.
3. File System Management
Files are the primary way data is stored permanently on a computer. The OS organizes data into
files and directories, and it controls how these files are created, read, written, and deleted. The file
system also ensures security by assigning permissions (e.g., who can read, write, or execute a file).
Example: On Windows, you may have a file called report.docx stored in the Documents folder.
On Linux, you might have /home/user/report.docx. In both cases, the OS handles how the file
is stored on disk and retrieved when needed.
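On a Unix-like system, these file-system operations can be tried directly from the shell. A minimal sketch (the directory and file names are illustrative, and a temporary directory is used so nothing existing is touched):

```shell
#!/bin/sh
# The OS records every file's location, owner, group, and permissions.
demo_dir=$(mktemp -d)                     # scratch directory for the demo
echo "quarterly report" > "$demo_dir/report.txt"   # create and write a file

ls -l "$demo_dir/report.txt"              # shows mode, owner, group, size

chmod 600 "$demo_dir/report.txt"          # owner may read/write; nobody else
ls -l "$demo_dir/report.txt"              # mode column now reads -rw-------
```

The `ls -l` mode string is the OS's record of who may read, write, or execute the file, which is exactly the permission mechanism described above.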
4. Device Management
Every computer has hardware devices such as printers, keyboards, monitors, hard drives, and USB
drives. The OS uses device drivers—special programs that translate OS commands into device-
specific instructions—to manage them. This way, application developers do not need to write
custom code for every hardware variation.
Example: When you plug in a USB flash drive, the OS automatically recognizes it and allows you
to copy files without needing to understand how the drive works internally.
5. User Interface
The OS provides the interface through which users interact with the computer. This can be a
command-line interface (CLI), such as the Linux terminal, or a graphical user interface (GUI),
such as Windows desktop with icons and menus.
Example: Typing ls in a Linux terminal to view files is a command-line interaction, whereas
double-clicking a folder in Windows Explorer is a graphical interaction. Both actions rely on the
OS.
6. Security and Access Control
The OS enforces security policies that protect the system from unauthorized access and ensure that
one user or process cannot interfere with another. It manages authentication (e.g., requiring a
username and password), access rights to files, and even antivirus integration in some systems.
Example: When you try to install new software on Windows, you may see a prompt saying “Do
you want to allow this app to make changes to your device?” This is the OS preventing
unauthorized modifications.
7. Networking
Modern operating systems include built-in support for networking. This allows computers to
connect to local networks or the internet, manage data transmission, and ensure secure
communication between devices.
Example: When you open a webpage, the OS uses networking protocols (like TCP/IP) to connect
to the server and retrieve the page content.
Example
Let’s take a simple action: printing a document. Here is how the OS manages it step by step:
1. You click “Print” in Microsoft Word.
2. The OS checks if the printer is available and loads the appropriate driver.
3. The OS schedules the printing task while ensuring the CPU can continue running other
tasks.
4. Memory is allocated to temporarily store the document before sending it to the printer.
5. The file is transmitted to the printer via the device driver.
6. The printer produces the output while the OS updates you with the status.
This simple task involves process management, memory management, device management, and
user interaction—all coordinated by the operating system.
Lesson 2: Types of Operating Systems
Operating systems come in different forms, each designed to meet specific needs depending on the
environment in which they are used. The way an OS manages resources, schedules tasks, and
interacts with users often depends on its type. To understand this better, we’ll look at the main
categories: Batch, Time-sharing, Real-time, Distributed, and Embedded operating systems.
1. Batch Operating Systems
Batch operating systems were among the earliest OS designs. In these systems, users did not
interact directly with the computer. Instead, they prepared jobs—programs with input data—on
punch cards or magnetic tapes. These jobs were collected and processed in batches by the OS, one
after the other, without user intervention.
The main advantage of batch systems is that they make good use of the CPU by processing jobs
without idle time. However, the lack of direct interaction meant that users often waited a long time
to see results.
Example: IBM’s early mainframes in the 1960s used batch systems where jobs like payroll
processing or scientific calculations were run overnight, and results were available the next day.
2. Time-Sharing Operating Systems
Time-sharing systems improved on batch processing by allowing multiple users to interact with a
computer at the same time. The OS allocates short slices of CPU time to each user in turn, creating
the illusion that everyone has their own machine.
This system made computing much more interactive and efficient, especially for universities,
research labs, and companies. Users could log in through terminals, type commands, and get
immediate responses.
Example: UNIX is a classic time-sharing operating system. In modern times, personal computers
running Windows, Linux, or macOS also follow time-sharing principles because they allow
multiple applications (and users) to run at once.
3. Real-Time Operating Systems (RTOS)
Real-time operating systems are designed to handle tasks with strict timing constraints. They are
used where it is critical to process data and provide responses within a guaranteed time frame.
There are two types of RTOS:
• Hard real-time systems: Missing a deadline is unacceptable because it may cause system
failure, for example in aircraft navigation systems or medical equipment.
• Soft real-time systems: Occasional deadline misses are tolerable, though they may
degrade performance, for example in video streaming systems or online gaming servers.
Examples: VxWorks, QNX, and FreeRTOS are widely used in industries like aerospace, robotics,
and telecommunications.
4. Distributed Operating Systems
A distributed OS manages a group of independent computers and makes them appear to users as
if they are a single system. The OS coordinates resource sharing, computation, and communication
across multiple machines connected by a network.
The benefit of distributed systems is that they provide high performance, reliability, and scalability
by pooling resources. If one machine fails, the workload can be shifted to others.
Example: Amoeba is a classic research distributed OS, and Google’s cluster computing
environments apply the same ideas. In a more familiar sense, cloud platforms like Amazon Web
Services or Google Cloud rely on distributed operating system principles.
5. Embedded Operating Systems
Embedded operating systems are designed to run on small, specialized devices rather than general-
purpose computers. These systems are lightweight, efficient, and often run with limited resources
such as small memory or low processing power.
They are tailored for specific tasks, such as controlling household appliances, industrial machines,
or consumer electronics. Unlike a general-purpose OS, an embedded OS usually doesn’t require
user interaction beyond buttons or touch screens.
Examples:
• Android OS on smartphones (though advanced, it is often classified as an embedded OS).
• VxWorks used in spacecraft and automotive systems.
• Embedded Linux powering devices like smart TVs, routers, and medical devices.
Why Different Types Exist
The diversity of operating systems reflects the wide range of computing needs. A researcher
analyzing weather data may rely on a time-sharing UNIX system, while an airplane must depend
on a hard real-time OS to ensure passenger safety. A company running global applications may
prefer distributed systems for reliability, while your washing machine uses an embedded OS to
ensure it runs cycles correctly.
In other words, the type of OS chosen depends on the application environment, performance
requirements, and available hardware.
Lesson 3: Operating System Structures
An operating system is not just a single program. It is made up of many components that must
work together efficiently and reliably. How these components are organized and interact is what
we call the structure of the operating system. Different structures have been developed over time
to address challenges such as performance, reliability, flexibility, and ease of maintenance.
The main structures are monolithic systems, layered systems, microkernels, and hybrid
systems.
1. Monolithic Structure
In a monolithic operating system, the entire OS is a large, single program that runs in one address
space (the kernel space). All the essential services such as file management, memory management,
device drivers, and process scheduling are integrated into one big block of code.
This design makes the system fast because there is no overhead of switching between components.
However, it also makes the system complex and difficult to maintain. A bug in one part can
crash the entire system since everything is tightly connected.
Example:
• MS-DOS was a classic monolithic operating system. It had a simple structure but limited
security and multitasking support.
• Early versions of UNIX were also monolithic.
Analogy: Think of it as a huge single room where all workers (services) sit together.
Communication is easy and fast, but if one worker misbehaves, the whole room is in chaos.
2. Layered Structure
To improve organization, the layered structure was developed. Here, the OS is divided into
layers, with each layer building on top of the one below it. Lower layers handle hardware-level
tasks like memory and CPU management, while higher layers provide user interfaces and
applications.
The idea is that each layer only communicates with the layer directly below or above it, making
the system easier to design and debug. For example, if there is a bug in the file system layer, it
doesn’t affect the process scheduling layer.
Example:
• The THE multiprogramming system (developed in the late 1960s by Dijkstra’s team at
Eindhoven University of Technology) was the first to implement layering.
• Windows NT uses layering in parts of its design.
Analogy: Imagine a company with different departments stacked like floors of a building. Each
floor handles its own responsibilities and communicates only with the floor directly above or
below. This keeps things organized but may slow down communication compared to everyone
being in one big hall.
3. Microkernel Structure
A microkernel aims to keep the kernel as small as possible. Only the most essential functions,
such as CPU scheduling, memory management, and inter-process communication, are inside the
kernel. Other services like device drivers, file systems, and network protocols are moved to user
space as separate processes.
This design increases security and reliability because if a driver or service fails, it won’t crash
the whole system. The downside is performance overhead, since communication between kernel
and services requires more context switching.
Examples:
• Mach (developed at Carnegie Mellon University) was a famous microkernel.
• QNX is a real-world commercial microkernel used in cars, medical devices, and industrial
machines.
• Modern macOS is built partly on Mach.
Analogy: Think of the kernel as a minimal government that only handles law and order, while all
other services like education, healthcare, and transportation are run by private organizations. The
government is small and stable, but coordinating between many private services takes time.
4. Hybrid Structure
The hybrid structure combines the strengths of monolithic and microkernel designs. It keeps
some services in the kernel (for performance) while pushing others to user space (for security and
flexibility). This balances speed with modularity.
Hybrid systems often look monolithic but are internally modular. They can dynamically load
and unload components, making them adaptable.
Examples:
• Windows (modern versions like Windows 10/11) is a hybrid OS. It integrates a
layered architecture with microkernel principles.
• Linux is often called monolithic, but since it supports loadable kernel modules (like
drivers), it is considered modular and close to hybrid.
• macOS uses a hybrid structure combining Mach (microkernel) and BSD (monolithic
UNIX).
Analogy: Imagine a government that handles key sectors like defense and justice directly, but
allows private organizations to manage education and healthcare. This way, you get both stability
and efficiency.
Why Structures Matter
The structure of an operating system determines:
• Performance: Monolithic kernels are fast but harder to secure.
• Security and reliability: Microkernels and hybrids isolate services, reducing crashes.
• Flexibility: Layered and hybrid designs allow easier updates and maintenance.
• Application: Embedded systems may use microkernels for stability, while personal
computers prefer hybrids for balance.
In practice, most modern OSes (Windows, Linux, macOS, Android) are hybrids because they need
to be fast, secure, and flexible.
Lesson 4: Evolution and History of Operating Systems
Operating systems have not always looked like the ones we use today. Their development has been
shaped by advances in hardware, software, and user needs. From the early days of single-task
systems to the modern era of cloud and mobile platforms, OS history shows how computing
evolved from simple machines to powerful, interconnected devices.
We can trace the evolution of operating systems through several generations.
1. First Generation (1940s – Early 1950s): No Operating Systems
In the earliest electronic computers, there were no operating systems at all. Machines such as
ENIAC and UNIVAC were programmed directly using machine language (binary code) or
assembly language. Programs were loaded manually using switches, punch cards, or paper tape.
At this stage, computers could only execute one program at a time, and users had to manage
everything, including memory and input/output.
Example: The ENIAC (1945), often called the first general-purpose computer, had no OS. Every
new task required rewiring the machine.
2. Second Generation (1950s – Early 1960s): Batch Processing Systems
As computers became faster and more reliable, operating systems began to emerge. Batch systems
were introduced to automate job execution. Users submitted jobs (program + data) on punch cards
to an operator, who grouped them into batches for execution.
The OS handled automatic job sequencing, loading programs one after another without human
intervention. However, there was no interaction during execution, and debugging was very slow.
Example: IBM’s early mainframes (IBM 701, IBM 1401) used batch systems to process scientific
and business applications.
3. Third Generation (1960s – 1970s): Multiprogramming and Time-Sharing
This era marked a breakthrough in OS design. Two important concepts were introduced:
• Multiprogramming: The OS could keep multiple jobs in memory at once and switch
between them to maximize CPU usage. For example, while one program was waiting for
input/output, another could use the CPU.
• Time-sharing: Multiple users could interact with the computer simultaneously. The OS
gave each user a small time slice, creating the illusion of personal access.
This was the birth of interactive computing.
Examples:
• UNIX (developed in 1969 at Bell Labs) became a foundation for many future systems.
• MULTICS (Multiplexed Information and Computing Service) was a pioneering time-
sharing OS.
4. Fourth Generation (1980s – 1990s): Personal Computers and GUIs
The invention of the microprocessor brought computing to individuals. Operating systems adapted
to make computers user-friendly. Instead of command lines alone, graphical user interfaces (GUIs)
were introduced.
This generation also saw networking features becoming integrated into OSes, making it possible
for PCs to connect and share resources.
Examples:
• MS-DOS (1981) was a command-line OS for IBM PCs.
• Microsoft Windows (1985 onward) introduced graphical desktops.
• Apple Macintosh System Software (later macOS) popularized GUI-based computing.
During this period, Linux was also born (1991, by Linus Torvalds) as an open-source UNIX-like
system.
5. Fifth Generation (2000s – Present): Modern, Mobile, and Cloud OS
Modern operating systems are far more advanced, combining multitasking, multimedia,
networking, and security features. They also support distributed and mobile computing, enabling
people to use devices anywhere and anytime.
Key features include:
• Networking and Internet Integration: OSes manage internet connectivity, file sharing,
and cloud storage.
• Mobile Operating Systems: Smartphones introduced platforms like Android (2008) and
iOS (2007), optimized for touchscreens and portable devices.
• Cloud and Virtualization: OSes now support virtualization (e.g., VMware, Hyper-V) and
cloud services (e.g., Google Cloud, Microsoft Azure).
• Security and Multi-core Processing: Modern OSes emphasize protecting users from
cyber threats and optimizing for multi-core CPUs.
Examples: Windows 10/11, macOS, Linux distributions, Android, iOS, and ChromeOS.
Timeline of OS Development
• 1940s: No OS – manual programming.
• 1950s: Batch systems.
• 1960s–70s: Multiprogramming, time-sharing, UNIX.
• 1980s–90s: PCs, GUIs, Windows, macOS, early Linux.
• 2000s–Present: Internet, mobile OS, cloud computing, open-source OS growth.
Why the Evolution Matters
The history of operating systems shows a shift from hardware-focused computing to user-
focused computing. In the beginning, only scientists and engineers could use computers, but
today, thanks to OS advancements, billions of people interact with computers daily — often
without even realizing it. Your smartphone, car, ATM, and even microwave oven are
powered by an OS.
The evolution also highlights how OS design responds to needs: efficiency in early days,
interactivity in the 1960s, user-friendliness in the 1980s, and connectivity/security in modern
times.
Lesson 5: Lab – Exploring OS Environments (Windows vs Linux Basics)
Theory gives us the foundation of operating systems, but to fully understand how they work,
students need hands-on experience. This lab introduces two of the most widely used OS
environments today: Windows and Linux. By comparing them, learners will appreciate
similarities, differences, and the unique strengths of each.
Objectives of the Lab
By the end of this lab, students should be able to:
1. Recognize the basic interface of Windows and Linux.
2. Perform common file and directory operations in both systems.
3. Compare graphical user interface (GUI) and command-line interface (CLI) interactions.
4. Understand how user accounts and permissions are handled.
5. Gain confidence in switching between different OS environments.
Part A: Exploring Windows Environment
Windows is a graphical, user-friendly operating system widely used in personal computers.
Step 1: Familiarization with the Desktop
• Boot into Windows.
• Observe the desktop elements: Start Menu, Taskbar, Icons, File Explorer, Recycle Bin.
• Ask students to launch File Explorer (Windows + E).
Explanation: The desktop is the main GUI interface. Windows emphasizes ease of use, so most
tasks can be done by clicking icons rather than typing commands.
Step 2: File and Folder Operations
• Create a folder named Lab1 on the desktop.
• Inside it, create a text file called notes.txt.
• Rename the file to lesson1.txt.
• Copy and paste it to another folder.
• Delete it and restore it from the Recycle Bin.
Observation: Windows makes file management intuitive through right-click menus and drag-and-
drop actions.
Step 3: Using Command Prompt
• Open Command Prompt (type cmd in Start).
• Run the following commands:
o dir → lists files in the current directory.
o cd Desktop\Lab1 → navigates to the Lab1 folder.
o echo Hello World > hello.txt → creates a new file.
o del hello.txt → deletes the file.
Observation: Although Windows is GUI-based, its CLI allows direct commands for flexibility.
Step 4: User Accounts and Permissions
• Go to Control Panel → User Accounts.
• Note the different account types (Administrator vs Standard User).
• Right-click a file → Properties → Security Tab → explore how permissions (Read,
Write, Execute) are set.
Part B: Exploring Linux Environment
Linux is a multi-user, open-source operating system known for stability, security, and CLI
power.
Step 1: Familiarization with the Desktop (GUI)
• Boot into a Linux distribution (e.g., Ubuntu).
• Observe the desktop: Launcher/Activities Bar, File Manager, Terminal.
• Open the File Manager and compare with Windows File Explorer.
Observation: Both provide GUI access to files, but Linux layouts vary depending on the
distribution and desktop environment (e.g., GNOME, KDE).
Step 2: File and Folder Operations (GUI)
• Create a folder called Lab1 in the Home directory.
• Inside it, create a text file notes.txt.
• Rename, copy, move, and delete it using the GUI.
Observation: Linux GUI feels similar to Windows but may use different terms like “Move to
Trash” instead of “Delete.”
Step 3: Using the Linux Terminal (CLI)
Open Terminal and run these commands:
• ls → lists files.
• pwd → prints the current directory.
• cd Lab1 → navigates to the Lab1 folder.
• touch hello.txt → creates a file.
• nano hello.txt → opens a text editor inside the terminal. Type “Hello World” and save.
• rm hello.txt → deletes the file.
Observation: Unlike Windows, Linux relies heavily on the command line for administration. This
makes it very powerful for automation and scripting.
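The steps above can be collected into one script, which is itself a small example of the automation the terminal makes possible. This sketch works in a scratch directory and uses `echo` redirection in place of the interactive `nano` step so it runs unattended:

```shell
#!/bin/sh
cd "$(mktemp -d)"                # work in a scratch directory (for safety)
mkdir Lab1 && cd Lab1            # create the lab folder and enter it
pwd                              # print the current directory
touch hello.txt                  # create an empty file
echo "Hello World" > hello.txt   # nano would let you type this interactively
ls                               # list files: shows hello.txt
rm hello.txt                     # delete the file
```

Saving a sequence of commands like this as a `.sh` file and running it is the basic pattern behind Linux administration scripts.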
Step 4: User Accounts and Permissions
• Run whoami → shows current user.
• Run id → shows groups and privileges.
• Create a new user (admin only): sudo adduser testuser.
• Check file permissions: ls -l.
• Change permissions of a file: chmod 755 notes.txt.
Observation: Linux enforces strict multi-user security. Every file has an owner, a group, and a
permission set. Windows also offers fine-grained access control lists (ACLs), but they are usually
managed through the GUI rather than the command line.
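The commands above can also be combined into a short, non-privileged script. This sketch skips the `sudo adduser` step (which requires administrator rights) and creates its own `notes.txt` in a scratch directory for illustration:

```shell
#!/bin/sh
whoami                       # print the current user name
id                           # print user id, group id, and group memberships

cd "$(mktemp -d)"            # scratch directory so nothing existing is changed
touch notes.txt
chmod 755 notes.txt          # rwxr-xr-x: owner full access, others read/execute
ls -l notes.txt              # mode column shows -rwxr-xr-x

chmod 640 notes.txt          # rw-r-----: tighten access again
ls -l notes.txt              # mode column shows -rw-r-----
```

Each octal digit of the `chmod` mode encodes one class of user (owner, group, others) as a sum of read (4), write (2), and execute (1); for example, 640 means the owner may read and write, the group may read, and others have no access.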
Part C: Comparison and Reflection
• Interface: Windows is GUI-focused; Linux provides both GUI and strong CLI.
• File System: Windows uses drives (C:, D:), while Linux uses a single root (/) hierarchy.
• Permissions: Linux enforces stricter access control by default.
• Customization: Linux is highly customizable (open-source); Windows is more
standardized.
• Usage: Windows dominates personal PCs; Linux powers servers, embedded systems, and
advanced computing.
Lab Questions for Students
1. What are two differences between Windows File Explorer and Linux File Manager?
2. Using commands, create a folder in Windows (cmd) and Linux (terminal). Write the
commands.
3. Compare how user permissions are handled in Windows vs Linux.
4. Which OS would you prefer for personal use, and which for servers? Why?