Operating System Supplement

1.1 Compilation of a C/C++ program

● Preprocessor: The C preprocessor, or cpp, is the macro preprocessor for the C and C++ programming languages. The preprocessor provides the ability for the inclusion of header files, macro expansion, conditional compilation, and line control. It is a separate program invoked by the compiler as the first part of translation.
● Compiler: The compiler is a computer program that translates code written in a high-level programming language into a lower-level language (e.g., assembly language, object code, or machine code) to create an executable program.
● Assembler: The assembler is a program that turns assembly language into machine code. It takes basic computer instructions and converts them into a pattern of bits that the computer's processor can use to perform its basic operations. It packages them in a form called a relocatable object program.
● Linker: A linker or link editor is a computer utility program that takes one or more object files generated by a compiler or an assembler and combines them into a single executable file, library file, or another object file. A simpler version that writes its output directly to memory is called a loader, though loading is typically considered a separate process.
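
As a concrete sketch, the four stages can be run one at a time with a typical GCC toolchain (the file name hello.c and the intermediate file names are illustrative; the gcc flags shown are standard):

    /* hello.c -- used to walk through the four translation stages:
     *   gcc -E hello.c -o hello.i   (preprocessor: headers included, macros expanded)
     *   gcc -S hello.i -o hello.s   (compiler proper: C translated to assembly)
     *   gcc -c hello.s -o hello.o   (assembler: assembly to a relocatable object file)
     *   gcc hello.o -o hello        (linker: object files combined into an executable)
     */
    #include <stdio.h>

    #define GREETING "Hello, world"   /* expanded by the preprocessor */

    int main(void) {
        printf("%s\n", GREETING);
        return 0;
    }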

1.2 Memory Hierarchy

The memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. The memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower-level programming constructs involving locality of reference.
Designing for high performance requires considering the restrictions of the memory hierarchy, i.e., the size and capabilities of each component. Each of the various components can be viewed as part of a hierarchy of memories (m1, m2, ..., mn) in which each member mi is typically smaller and faster than the next-highest member mi+1 of the hierarchy. To limit waiting by higher levels, a lower level responds by filling a buffer and then signaling to activate the transfer.

There are four major storage levels:
● Internal – processor registers and cache.
● Main – system RAM and controller cards.
● On-line mass storage – secondary storage.
● Off-line bulk storage – tertiary and off-line storage.

1.3 Virtual Memory

Virtual memory (also virtual storage) is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory". Virtual memory combines active RAM and inactive memory on a direct-access storage device (DASD) to mimic a large range of contiguous addresses.
The operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. Address translation hardware in the CPU, often referred to as a memory management unit (MMU), automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.
The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging (a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory).

1.3.1 Swap Space
A swap file (or swap space or, in Windows NT, a pagefile) is a space on a hard disk used as the virtual memory extension of a computer's real memory (RAM). Having a swap file allows a computer's operating system to pretend that it has more RAM than it actually does.

1.4 Memory Layout of a C/C++ program

When a process is started, the operating system creates a virtual image of it. A typical memory representation of a C program consists of the following sections (a short example placing variables in these segments follows the list):
● Text segment: a section in memory, also known as the code segment or simply as text, which contains executable instructions. Usually, the text segment is shareable so that only a single copy needs to be in memory for frequently executed programs, such as text editors, the C compiler, the shells, and so on. Also, the text segment is often read-only, to prevent a program from accidentally modifying its instructions.
● Initialized data segment: usually called simply the data segment. It is a portion of the virtual address space of a program which contains the global variables and static variables that are initialized by the programmer. The data segment is not read-only, since the values of the variables can be altered at run time. This segment can be further classified into an initialized read-only area and an initialized read-write area.
● Uninitialized data segment: often called the "BSS (Block Started by Symbol)" segment. Data in this segment is initialized by the kernel to arithmetic 0 before the program starts executing. Uninitialized data starts at the end of the data segment and contains all global variables and static variables that are initialized to zero or do not have explicit initialization in source code.

● Stack segment: the stack area contains the program stack, a LIFO structure, typically located in the higher parts of memory. On the standard x86 computer architecture it grows toward address zero; on some other architectures it grows in the opposite direction. A stack pointer register tracks the top of the stack; it is adjusted each time a value is "pushed" onto the stack. The set of values pushed for one function call is termed a stack frame; a stack frame consists at minimum of a return address.
The stack is where automatic variables are stored, along with information that is saved each time a function is called. Each time a function is called, the address of where to return to and certain information about the caller's environment, such as some of the machine registers, are saved on the stack. The newly called function then allocates room on the stack for its automatic and temporary variables. This is how recursive functions in C can work. Each time a recursive function calls itself, a new stack frame is used, so one set of variables doesn't interfere with the variables from another instance of the function.
● Heap segment: the heap is the segment where dynamic memory allocation usually takes place. The heap area begins at the end of the BSS segment and grows to larger addresses from there. The heap area is managed by malloc, realloc, and free, which may use the brk and sbrk system calls to adjust its size. The heap area is shared by all shared libraries and dynamically loaded modules in a process.
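
A minimal sketch (variable names are illustrative) showing where typical C objects end up in this layout:

    #include <stdio.h>
    #include <stdlib.h>

    int init_global = 42;            /* initialized data segment */
    int uninit_global;               /* uninitialized data segment (BSS) */
    const char *msg = "hello";       /* string literal: read-only area; the pointer itself: data segment */

    int main(void) {                 /* machine code for main: text segment */
        int local = 7;               /* automatic variable: stack */
        static int counter = 0;      /* static, zero-initialized: BSS */
        int *dynamic = malloc(sizeof *dynamic);   /* allocated block: heap */

        if (dynamic != NULL) {
            *dynamic = local + init_global + counter + uninit_global;
            printf("%s: %d\n", msg, *dynamic);
            free(dynamic);
        }
        return 0;
    }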

1.5 Management by Operating System
Resources
● Allocation: finite resources and competing demands
● Protection: ensures some degree of safety and security
● Reclamation: voluntary at run time, implied at termination; involuntary and cooperative
● Virtualization: illusion of infinite, private resources

Services
Implements abstraction, simplification, convenience, and standardization for:
● Program execution
● I/O operations
● File system manipulation
● Communication
● Error detection
● Resource allocation
● Protection

1.6 Polling
Polling, or polled operation, refers to actively sampling the status of an external device by a client program as a synchronous activity. Polling is most often used in terms of I/O, and is also referred to as polled I/O or software-driven I/O. It is the process where the computer or controlling device waits for an external device to check for its readiness or state, often with low-level hardware. This is sometimes used synonymously with busy-wait polling. In this situation, when an I/O operation is required, the computer does nothing other than check the status of the I/O device until it is ready, at which point the device is accessed. In other words, the computer waits until the device is ready.
Polling also refers to the situation where a device is repeatedly checked for readiness, and if it is not ready, the computer returns to a different task. Although not as wasteful of CPU cycles as busy waiting, this is generally not as efficient as the alternative to polling, interrupt-driven I/O. Polling has the disadvantage that if there are too many devices to check, the time required to poll them can exceed the time available to service the I/O device.
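
A minimal sketch of busy-wait polling, assuming a hypothetical memory-mapped device: the register addresses and the READY bit below are invented for illustration (an embedded-style example; these addresses are not mapped on a hosted OS):

    #include <stdint.h>

    /* Hypothetical memory-mapped device registers -- addresses and bit
       layout are invented for illustration, not a real device. */
    #define DEVICE_STATUS_ADDR  ((volatile uint32_t *)0x40001000u)
    #define DEVICE_DATA_ADDR    ((volatile uint32_t *)0x40001004u)
    #define STATUS_READY_BIT    0x1u

    /* Busy-wait polling: spin on the status register until the device
       reports readiness, then read the data. The CPU does no other work
       while it waits. 'volatile' keeps the compiler from optimizing the
       repeated reads away. */
    uint32_t poll_read(void) {
        while ((*DEVICE_STATUS_ADDR & STATUS_READY_BIT) == 0) {
            /* spin: do nothing but re-check the device status */
        }
        return *DEVICE_DATA_ADDR;
    }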

1.7 Interrupts
An interrupt is an input signal to the processor indicating an event that needs immediate attention. An interrupt signal alerts the processor and serves as a request for the processor to interrupt the currently executing code, so that the event can be processed in a timely manner. If the request is accepted, the processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, unless the interrupt indicates a fatal error, the processor resumes normal activities after the interrupt handler finishes.

Interrupts are commonly used by hardware devices to indicate electronic or physical state changes that require attention. Interrupts are also commonly used to implement computer multitasking, especially in real-time computing. Systems that use interrupts in these ways are said to be interrupt-driven.

1.8 Trap
A trap, also known as an exception or a fault, is typically a type of synchronous interrupt caused by an exceptional condition (e.g., breakpoint, division by zero, invalid memory access). A trap usually results in a switch to kernel mode, wherein the operating system performs some action before returning control to the originating process. A trap in a kernel process is more serious than a trap in a user process, and in some systems is fatal. In some usages, the term trap refers specifically to an interrupt intended to initiate a context switch to a monitor program or debugger. Deriving from this original usage, trap is sometimes used for the mechanism of intercepting normal control flow in some domains.

1.9 Exception Handling

Exception handling is the process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional conditions requiring special processing – often disrupting the normal flow of program execution. It is provided by specialized programming language constructs, computer hardware mechanisms like interrupts, or operating system IPC facilities like signals.
In general, an exception breaks the normal flow of execution and executes a pre-registered exception handler. Some exceptions, especially hardware ones, may be handled so gracefully that execution can resume where it was interrupted.
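
As one concrete flavor of this, a minimal POSIX sketch registering a pre-registered handler for SIGFPE, the signal the OS delivers for an arithmetic trap such as integer division by zero (the handler name is illustrative; since resuming after SIGFPE is undefined, the sketch simply exits):

    #include <signal.h>
    #include <unistd.h>

    /* Pre-registered exception handler: called by the OS when the
       process receives SIGFPE (e.g., integer division by zero). */
    static void fpe_handler(int signum) {
        (void)signum;
        /* Only async-signal-safe calls are allowed here; write and _exit are. */
        const char msg[] = "caught SIGFPE, exiting\n";
        write(2, msg, sizeof msg - 1);
        _exit(1);
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = fpe_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGFPE, &sa, NULL);   /* register the handler */

        volatile int zero = 0;
        return 1 / zero;                /* synchronous trap -> SIGFPE */
    }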

2.1 Daemon
In multitasking computer operating systems, a daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Traditionally, the process name of a daemon ends with the letter d, for clarification that the process is in fact a daemon, and for differentiation between a daemon and a normal computer program; e.g., syslogd is the daemon that implements the system logging facility, and sshd is a daemon that serves incoming SSH connections.
In a Unix environment, the parent process of a daemon is often, but not always, the init process. A daemon is usually created either by a process forking a child process and then immediately exiting, thus causing init to adopt the child process, or by the init process directly launching the daemon.
Systems often start daemons at boot time which will respond to network requests, hardware activity, or other programs by performing some task. Daemons such as cron may also perform defined tasks at scheduled times.
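
A minimal sketch of the classic fork-and-exit daemonization sequence described above (simplified: a production daemon would also check for errors, handle signals, and log):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void daemonize(void) {
        /* First fork: the parent exits, so the child is adopted by init. */
        if (fork() > 0)
            exit(0);

        setsid();                 /* start a new session, detach from the tty */

        /* Second fork: the daemon is no longer a session leader, so it
           can never reacquire a controlling terminal. */
        if (fork() > 0)
            exit(0);

        umask(0);                 /* reset the file mode creation mask */
        chdir("/");               /* don't hold any directory in use */

        /* Redirect the standard descriptors away from the terminal. */
        int fd = open("/dev/null", O_RDWR);
        dup2(fd, 0);
        dup2(fd, 1);
        dup2(fd, 2);
        if (fd > 2)
            close(fd);
    }

    int main(void) {
        daemonize();
        for (;;) {                /* background work loop */
            sleep(60);            /* e.g., perform a periodic task */
        }
    }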

3.1 POSIX and Pthreads
POSIX (Portable Operating System Interface) is a set of standards for operating system interfaces that are largely based on UNIX System V. The POSIX specification defines a standard interface between threads and their threading library. Threads that use the POSIX threading API are called Pthreads (sometimes referred to as POSIX threads). The POSIX specification is not concerned with the details of the implementation of the threading interface — Pthreads can be implemented in the kernel or by user-level libraries.
POSIX states that the processor registers, the stack, and the signal mask are maintained individually for each thread, and any other resource information must be globally accessible to all threads in the process. POSIX also defines a signal model to address many of the concerns. According to POSIX, when a thread generates a synchronous signal due to an exception such as an illegal memory operation, the signal is delivered only to that thread. If the signal is not specific to a thread, such as a signal to kill a process, then the threading library delivers that signal to a thread that does not mask it.
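
A minimal Pthreads sketch (function and variable names are illustrative): each thread gets its own stack and registers, while the global counter below is visible to every thread in the process, as the POSIX model above requires. Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                      /* globally accessible to all threads */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {     /* runs on the new thread's own stack */
        (void)arg;
        pthread_mutex_lock(&lock);
        shared++;                        /* shared data needs synchronization */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t tids[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&tids[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(tids[i], NULL);
        printf("shared = %d\n", shared); /* prints 4 */
        return 0;
    }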

3.2 Linux Threads

Support for threads in the Linux operating system was introduced as user-level threads in version 1.0.9 and as kernel-level threads in version 1.3.56. Although Linux supports threads, it is important to note that many Linux kernel subsystems do not distinguish between threads and processes. In fact, Linux allocates the same type of process descriptor to processes and threads, both of which are called tasks.
Linux uses the UNIX-based system call fork to spawn child tasks. Linux responds to the fork system call by creating a new task that contains a copy of all of its parent's resources (e.g., address space, register contents, stack). To enable threading, Linux provides a modified version of the fork system call named clone. Similar to fork, clone creates a copy of the calling task — in the process hierarchy, the copy becomes the child of the task that issued the clone system call. Unlike fork, clone accepts arguments that specify which resources to share with the child process. As of version 2.6 of the kernel, Linux provides a one-to-one thread mapping that supports an arbitrary number of threads in the system. All tasks are managed by the same scheduler, meaning that processes and threads with equal priority receive the same level of service.
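
A hedged, Linux-specific sketch of the glibc clone() wrapper (function and variable names are illustrative). The flag set mirrors what a thread shares with its process — address space, filesystem information, file descriptors, and signal handlers:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    static int shared_counter = 0;       /* visible to the child via CLONE_VM */

    static int child_fn(void *arg) {
        (void)arg;
        shared_counter++;                /* modifies the parent's data segment */
        return 0;
    }

    int main(void) {
        const size_t stack_size = 1024 * 1024;
        char *stack = malloc(stack_size);        /* the child needs its own stack */
        if (stack == NULL) { perror("malloc"); return 1; }

        /* CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND shares the
           address space, filesystem info, file descriptors, and signal
           handlers -- the resources a thread shares with its process.
           SIGCHLD lets the parent wait for the child; the stack grows
           down on x86, so the top of the buffer is passed. */
        int pid = clone(child_fn, stack + stack_size,
                        CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                        NULL);
        if (pid == -1) { perror("clone"); return 1; }

        waitpid(pid, NULL, 0);
        printf("shared_counter = %d\n", shared_counter);  /* prints 1 */
        free(stack);
        return 0;
    }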

3.3 Windows XP Threads

In Windows XP, a process consists of program code, an execution context, resources (e.g., open files), and one or more associated threads. The execution context includes such items as the process's virtual address space and various attributes (e.g., security attributes). Threads are the actual unit of execution; threads execute a piece of a process's code in the process's context, using the process's resources. In addition to its process's context, a thread contains its own execution context, which includes its runtime stack, the state of the machine's registers, and several attributes (e.g., scheduling priority).
When the system initializes a process, the process creates a primary thread. It acts as any other thread, except that if the primary thread returns, the process terminates, unless the primary thread explicitly directs the process not to terminate. A thread can create other threads belonging to its process. All threads belonging to the same process share that process's virtual address space. Threads can maintain their own private data in thread local storage (TLS).
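
A minimal Win32 sketch (the thread routine name is illustrative) creating a second thread in the process and waiting for it to finish:

    #include <windows.h>
    #include <stdio.h>

    /* Thread routine: executes a piece of the process's code in the
       process's context, on its own runtime stack. */
    static DWORD WINAPI Worker(LPVOID param) {
        int *value = (int *)param;       /* same virtual address space */
        printf("worker sees %d\n", *value);
        return 0;
    }

    int main(void) {
        int shared = 42;
        DWORD tid;
        HANDLE h = CreateThread(NULL,    /* default security attributes */
                                0,       /* default stack size */
                                Worker,  /* code for the new thread to run */
                                &shared, /* argument passed to the routine */
                                0,       /* run immediately */
                                &tid);
        if (h == NULL)
            return 1;
        WaitForSingleObject(h, INFINITE);  /* wait for the thread to finish */
        CloseHandle(h);
        return 0;
    }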

Fibers
Threads can create fibers, which are similar to threads, except that a fiber is scheduled for execution by the thread that creates it, rather than by the scheduler. Fibers make it easier for developers to port applications that employ user-level threads. A fiber executes in the context of the thread that creates it. Just as threads possess their own thread local storage (TLS), fibers possess their own fiber local storage (FLS), which functions for fibers exactly as TLS functions for a thread. A fiber can also access its thread's TLS. If a fiber deletes itself (i.e., terminates), its thread terminates.
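
A minimal sketch of this cooperative scheduling (names are illustrative): the main thread converts itself to a fiber, creates a second fiber, and switches to it explicitly; the new fiber switches back rather than being scheduled by the system.

    #include <windows.h>
    #include <stdio.h>

    static LPVOID main_fiber;            /* the fiber the main thread becomes */

    static VOID WINAPI FiberProc(LPVOID param) {
        (void)param;
        printf("running in the fiber\n");
        SwitchToFiber(main_fiber);       /* yield control back explicitly */
        /* a fiber routine must never return; control does not reach here */
    }

    int main(void) {
        /* The creating thread schedules its fibers itself. */
        main_fiber = ConvertThreadToFiber(NULL);
        LPVOID worker = CreateFiber(0, FiberProc, NULL);

        SwitchToFiber(worker);           /* run the fiber on this thread */
        printf("back in the main fiber\n");

        DeleteFiber(worker);
        return 0;
    }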

Thread Pools
Windows XP also provides each process with a thread pool that consists of a number of worker threads, which are kernel-mode threads that execute functions specified by user threads. Because the functions specified by user threads may outnumber worker threads, Windows XP maintains requests to execute functions in a queue. The thread pool consists of worker threads that sleep until a request is queued to the pool. The thread that queues the request must specify the function to execute and must provide context information. The thread pool is created the first time a thread submits a function to the pool.

4.1 Solaris Scheduling
Solaris uses priority-based thread scheduling where each thread belongs to one of six classes:
● Time sharing (TS)
● Interactive (IA)
● Real time (RT)
● System (SYS)
● Fair share (FSS)
● Fixed priority (FP)
Within each class there are different priorities and different scheduling algorithms. The default scheduling class for a process is time sharing. The scheduling policy for the time-sharing class dynamically alters priorities and assigns time slices of different lengths using a multilevel feedback queue. By default, there is an inverse relationship between priorities and time slices: the higher the priority, the smaller the time slice; and the lower the priority, the larger the time slice. Interactive processes typically have a higher priority; CPU-bound processes, a lower priority. This scheduling policy gives good response time for interactive processes and good throughput for CPU-bound processes.
The interactive class uses the same scheduling policy as the time-sharing class, but it gives windowing applications, such as those created by the KDE or GNOME window managers, a higher priority for better performance.
Threads in the real-time class are given the highest priority. This assignment allows a real-time process to have a guaranteed response from the system within a bounded period of time. In general, however, few processes belong to the real-time class.
Solaris uses the system class to run kernel threads, such as the scheduler and paging daemon. Once established, the priority of a system thread does not change. The system class is reserved for kernel use.
The fixed priority and fair share classes were introduced with Solaris 9. Threads in the fixed priority class have the same priority range as those in the time-sharing class; however, their priorities are not dynamically adjusted. The fair share scheduling class uses CPU shares instead of priorities to make scheduling decisions. CPU shares indicate entitlement to available CPU resources and are allocated to a set of processes (known as a project).
Each scheduling class includes a set of priorities. However, the scheduler converts the class-specific priorities into global priorities and selects the thread with the highest global priority to run. The selected thread runs on the CPU until it (1) blocks, (2) uses its time slice, or (3) is preempted by a higher-priority thread. If there are multiple threads with the same priority, the scheduler uses a round-robin queue. The kernel maintains 10 threads for servicing interrupts. These threads do not belong to any scheduling class and execute at the highest priority. Solaris has traditionally used the many-to-many model but switched to the one-to-one model beginning with Solaris 9.
The global priority ranges, from highest to lowest, are:
● 160–169: interrupt threads
● 100–159: real-time threads
● 60–99: system threads
● 0–59: the remaining classes, i.e., in order, fair share, fixed priority, time sharing, and interactive threads

4.2 Windows XP Scheduling
Windows XP schedules threads using a priority-based, preemptive scheduling algorithm. The Windows XP scheduler ensures that the highest-priority thread will always run. The portion of the Windows XP kernel that handles scheduling is called the dispatcher. A thread selected to run by the dispatcher will run until it is preempted by a higher-priority thread, terminates, its time quantum ends, or it calls a blocking system call, such as for I/O. If a higher-priority real-time thread becomes ready while a lower-priority thread is running, the lower-priority thread will be preempted. This preemption gives a real-time thread preferential access to the CPU when the thread needs such access.
The dispatcher uses a 32-level priority scheme to determine the order of thread execution. Priorities are divided into two classes. The variable class contains threads having priorities from 1 to 15, and the real-time class contains threads with priorities ranging from 16 to 31. (There is also a thread running at priority 0 that is used for memory management.) The dispatcher uses a queue for each scheduling priority and traverses the set of queues from highest to lowest until it finds a thread that is ready to run. If no thread is found, the dispatcher will execute a special thread called the idle thread.
There is a relationship between the numeric priorities of the Windows XP kernel and the Win32 API. The Win32 API identifies several priority classes to which a process can belong:
● REALTIME_PRIORITY_CLASS
● HIGH_PRIORITY_CLASS
● ABOVE_NORMAL_PRIORITY_CLASS
● NORMAL_PRIORITY_CLASS
● BELOW_NORMAL_PRIORITY_CLASS
● IDLE_PRIORITY_CLASS
Priorities in all classes except the REALTIME_PRIORITY_CLASS are variable, meaning that the priority of a thread belonging to one of these classes can change.
A thread within a given priority class also has a relative priority. The values for relative priorities include:
● TIME_CRITICAL
● HIGHEST
● ABOVE_NORMAL
● NORMAL
● BELOW_NORMAL
● LOWEST
● IDLE
The priority of each thread is based on both the priority class it belongs to and its relative priority within that class. Furthermore, each thread has a base priority representing a value in the priority range for the class the thread belongs to. By default, the base priority is the value of the NORMAL relative priority for that class. The base priorities for each priority class are:
● REALTIME_PRIORITY_CLASS – 24
● HIGH_PRIORITY_CLASS – 13
● ABOVE_NORMAL_PRIORITY_CLASS – 10
● NORMAL_PRIORITY_CLASS – 8
● BELOW_NORMAL_PRIORITY_CLASS – 6
● IDLE_PRIORITY_CLASS – 4
Processes are typically members of the NORMAL_PRIORITY_CLASS. A process belongs to this class unless the parent of the process was of the IDLE_PRIORITY_CLASS or unless another class was specified when the process was created. The initial priority of a thread is typically the base priority of the process the thread belongs to.
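
A minimal Win32 sketch showing how a program selects a priority class for its process and a relative priority for a thread; the combination determines the kernel priority described above:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Choose the process's priority class (NORMAL is the default). */
        SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS);

        /* Choose the calling thread's relative priority within that class. */
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);

        printf("class=0x%lx relative=%d\n",
               GetPriorityClass(GetCurrentProcess()),
               GetThreadPriority(GetCurrentThread()));
        return 0;
    }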

When a thread's time quantum runs out, that thread is interrupted; if the thread is in the
variable-priority class, its priority is lowered. The priority is never lowered below the base
priority, however. Lowering the priority tends to limit the CPU consumption of
compute-bound threads. When a variable-priority thread is released from a wait operation,
the dispatcher boosts the priority. The amount of boost depends on what the thread was
waiting for; for example, a thread that was waiting for keyboard I/O would get a large
increase, whereas a thread waiting for a disk operation would get a moderate one. This
strategy tends to give good response time to interactive threads that are using the mouse
and windows. It also enables I/O-bound threads to keep the I/O devices busy while
permitting compute-bound threads to use spare CPU cycles in the background. This strategy
is used by several time-sharing operating systems, including UNIX. In addition, the window
with which the user is currently interacting receives a priority boost to enhance its response
time.
When a user is running an interactive program, the system needs to provide especially good
performance. For this reason, Windows XP has a special scheduling rule for processes in
the NORMAL_PRIORITY_CLASS. Windows XP distinguishes between the foreground
process that is currently selected on the screen and the background processes that are not
currently selected. When a process moves into the foreground, Windows XP increases the
scheduling quantum by some factor, typically by 3. This increase gives the foreground
process three times longer to run before a time-sharing preemption occurs.
