PROGRAMMING ASSIGNMENT, UNIT 4
TOPICS IN MEMORY AND OPERATING SYSTEMS
UNIVERSITY OF THE PEOPLE
CS 2301
Explain cache coherency.
Cache coherency refers to the integrity of data stored in the local caches of a shared resource.
Cache coherency is a special case of memory consistency (Lister et al., 2016). When the clients of
a system, in particular the CPUs in a multiprocessor, maintain caches of a shared memory,
conflicts can arise. For example, if client A holds a copy of a memory block from a previous read
and client B then changes that block, client A could be working with stale data without being
aware of it (Stallings, 2014). Cache coherency protocols manage these conflicts and maintain
consistency between the caches and main memory.
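The stale-read scenario above, and one common remedy (write-invalidate), can be sketched in a few lines of Python. The class and method names here are illustrative, not from any real hardware API; a real protocol such as MESI tracks per-line states in hardware.

```python
# Toy model: two caches over one shared memory, with write-invalidate.

class Memory:
    def __init__(self):
        self.blocks = {}              # block address -> value

class Cache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}               # block address -> locally cached value

    def read(self, addr):
        if addr not in self.lines:    # miss: fetch the block from memory
            self.lines[addr] = self.memory.blocks.get(addr)
        return self.lines[addr]

    def write(self, addr, value, peers):
        self.memory.blocks[addr] = value   # write through to shared memory
        self.lines[addr] = value
        for peer in peers:                 # invalidate now-stale peer copies
            peer.lines.pop(addr, None)

mem = Memory()
mem.blocks[0x10] = "old"
a, b = Cache(mem), Cache(mem)

a.read(0x10)                          # client A caches the block ("old")
b.write(0x10, "new", peers=[a])       # client B updates it, invalidating A's copy
print(a.read(0x10))                   # A misses, refetches, and sees "new"
```

Without the invalidation step in `write`, client A would keep returning "old" from its cache, which is exactly the conflict coherency protocols exist to prevent.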
What is the benefit of using sparse addresses in virtual memory?
Sparse addresses are virtual address spaces that include gaps. From this point of view, a program
is a set of logical components of variable size, or segments; that is, the logical address space is
treated as a set of segments, each defined by an identifier and consisting of a starting address
and an assigned size (Tanenbaum & Bos, 2015). Proper use of sparse addresses helps reduce the
internal memory fragmentation caused by paging, since each program is allocated only the amount
of memory it requires. For example, loading a program into memory requires searching for
appropriate gaps for its segments, and since these are variable in size, they can be fitted as
closely as possible to the program's needs, leaving only small gaps (Tsichritzis & Bernstein, 2014).
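A sparse address space can be pictured as a segment table with large unmapped gaps between the segments; only mapped regions consume backing storage, and touching a gap is a fault. The segment names, addresses, and permission strings below are illustrative assumptions, not from the cited texts.

```python
# Illustrative sketch (not a real MMU API): a sparse virtual address space
# as a segment table. The gaps between segments cost nothing until mapped.

segments = {
    # name: (base virtual address, size in bytes, permissions)
    "code":  (0x0000_0000, 0x4000, "r-x"),
    "heap":  (0x1000_0000, 0x8000, "rw-"),
    "stack": (0xFFFF_0000, 0x1000, "rw-"),
}

def translate(vaddr):
    """Return (segment name, offset) for a virtual address, or fault on a gap."""
    for name, (base, size, perms) in segments.items():
        if base <= vaddr < base + size:
            return name, vaddr - base
    raise MemoryError(f"segmentation fault at {vaddr:#x}")

print(translate(0x1000_0042))   # an address inside the heap segment
```

Note that the per-segment permission string is what lets the system forbid, say, writes to the code segment, which is the protection benefit discussed next.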
In addition, the memory management unit can be asked to give each segment a different set of
permissions for the process. In this way we can prevent a programming error from letting data
supplied by the user or the environment modify the code being executed (Lister et al., 2016).
Name and describe the different states that a process can exist in at any given time.
The states of a process reflect its participation and availability within the operating system and
arise from the need to control the execution of each process. A processor can only run a single
process at a time, so processes take turns using it. Non-preemptive (cooperative) processes keep
the processor until they decide to yield it. Under preemptive scheduling, a process occupies the
processor for a period until an interruption or signal causes a process switch; this is known as a
context switch (Stallings, 2014).
The possible states that a process can have are running, blocked, and ready:
1. Running: the process is currently using the processor.
2. Blocked: the process cannot execute until some external event occurs.
3. Ready: the process is runnable but has made the processor available for another process to occupy.
There are four possible transitions between these states. Transition 1 (running to blocked) occurs
when the operating system determines that the process cannot continue at that moment; in some
systems a "pause" system call can be made to enter the blocked state, and in Unix, when a process
reads from a pipe or a special file (a terminal) and no input is available, it is blocked
automatically (Tanenbaum & Bos, 2015). Transitions 2 and 3 are carried out by the process scheduler
without the process being aware of them. Transition 2 (running to ready) occurs when the scheduler
decides that the process has run long enough and must give way to the execution of other processes.
Transition 3 (ready to running) occurs when the other processes have had their share of processor
time and the first process must be resumed (Lister et al., 2016). Transition 4 (blocked to ready)
occurs when the external event a process was waiting for takes place, for example, input arriving
from the terminal. If no other process is running at that moment, transition 3 fires and the process
begins to run; otherwise it waits in the ready state until it is scheduled (Stallings, 2014).
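The three states and four legal transitions described above form a small state machine, which can be sketched directly. The `Process` class and its method names are illustrative, not an operating-system API.

```python
# The three-state process model: running, ready, blocked, with the
# four legal transitions numbered as in the discussion above.

VALID = {
    ("running", "blocked"),   # 1: the process waits for an external event
    ("running", "ready"),     # 2: the scheduler preempts it
    ("ready",   "running"),   # 3: the scheduler dispatches it
    ("blocked", "ready"),     # 4: the awaited event has occurred
}

class Process:
    def __init__(self, name):
        self.name, self.state = name, "ready"

    def move(self, new_state):
        if (self.state, new_state) not in VALID:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process("editor")
p.move("running")    # transition 3: dispatched
p.move("blocked")    # transition 1: e.g., waiting for terminal input
p.move("ready")      # transition 4: the input arrived
print(p.state)       # ready
```

Notice that a direct blocked-to-running jump is rejected: a blocked process must first become ready and then be chosen by the scheduler.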
Describe two general approaches to load balancing.
Load balancing is a technique used to share the work to be done among various elements that have
the necessary resources and are capable of doing it, such as processors, equipment, storage systems
and other devices (Tanenbaum & Bos, 2015).
It is carried out by balancing algorithms that divide the work as precisely and equitably as
possible, finding a mapping of tasks that is proportional and ensuring that each component receives
an amount of work demanding approximately the same time, in order to avoid so-called bottlenecks
among the elements of the system (Stallings, 2014).
A mapping that balances the workload of the processors increases the overall efficiency and reduces
the execution time. Balancing management is particularly complex if the processors (and the
communications between them) are heterogeneous, since protocols, speeds, operating systems
and communications must be taken into account (Tanenbaum & Bos, 2015).
There are three classes of balancing algorithms:
1. Centralized balancing: a node executes the algorithm and maintains the global state of the system
(Tanenbaum & Bos, 2015).
2. Semi-distributed balancing: the processors are grouped into regions; each region has a local
centralized algorithm, while another algorithm balances the load between the regions.
Balancing can be sender-initiated or receiver-initiated. In sender-initiated balancing, a heavily
loaded processor sends work to others; in receiver-initiated balancing, a lightly loaded processor
requests work from others. If the load per processor is low or medium, sender-initiated balancing
works best; if the load is high, receiver-initiated balancing should be used. Choosing the wrong
policy can cause heavy, unnecessary migration of tasks (Tsichritzis & Bernstein, 2014).
3. Fully distributed balancing: each processor maintains its own vision of the system by exchanging
information with its neighbors and thus being able to make local changes (Stallings, 2014).
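The sender-initiated and receiver-initiated policies can be sketched as operations on per-processor task queues. The threshold values and function names below are illustrative assumptions for the sketch, not parameters from the cited texts.

```python
# Toy sender- vs receiver-initiated balancing over per-processor task queues.

def sender_initiated(queues, threshold=4):
    """Each overloaded node pushes one task to the least-loaded node."""
    for i, q in enumerate(queues):
        if len(q) > threshold:
            target = min(range(len(queues)), key=lambda j: len(queues[j]))
            if target != i:
                queues[target].append(q.pop())

def receiver_initiated(queues, threshold=1):
    """Each underloaded node pulls one task from the most-loaded node."""
    for i, q in enumerate(queues):
        if len(q) < threshold:
            source = max(range(len(queues)), key=lambda j: len(queues[j]))
            if source != i and queues[source]:
                q.append(queues[source].pop())

queues = [list(range(6)), [], [0]]    # one overloaded node, one idle node
sender_initiated(queues)
print([len(q) for q in queues])       # work has moved toward the idle node
```

A real implementation would repeat these steps and damp the migration rate, since, as noted above, an aggressive policy under the wrong load level just shuffles tasks around.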
Describe the differences between physical, virtual, and logical memory.
Logical memory is the address space that a process perceives as its primary storage. There are
three related concepts here:
Physical - an actual device
Physical memory is what the CPU addresses on its address bus; it is the lowest level you can go to.
Physical memory is organized as a sequence of 8-bit bytes, each with a physical address
(Tsichritzis & Bernstein, 2014).
Logical - A translation to a physical device
Logical memory is what user-level programs address in their code. It looks like a contiguous address
space, but behind the scenes each linear address maps to a physical address. This allows user-level
programs to address memory in a common way and leaves the management of physical memory to
the kernel (Tsichritzis & Bernstein, 2014).
Virtual - A simulation of a physical device
Virtual memory is a memory management technique that gives the operating system, and the user
software it runs, the illusion of a greater amount of memory than is physically available
(Tanenbaum & Bos, 2015).
This illusion is supported by the memory translation mechanism (logical memory), together with a
large amount of fast hard disk storage. Thus, at any time, the virtual address space is mapped in
such a way that a small part of it resides in physical memory and the rest is stored on disk, from
where it can be brought in when referenced (Stallings, 2014).
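The translation mechanism that sustains this illusion can be sketched as a page-table lookup: a logical address splits into a page number and an offset, the page number is mapped to a physical frame, and a missing mapping stands in for the part of the address space currently kept on disk. The 4 KiB page size and the table contents here are illustrative assumptions.

```python
# Minimal page-table translation sketch (4 KiB pages assumed).

PAGE_SIZE = 4096

# page number -> physical frame number; absent pages are "on disk"
page_table = {0: 5, 1: 9, 7: 2}

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        # In a real system this page fault triggers a fetch from disk.
        raise LookupError(f"page fault: page {page} must be loaded from disk")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234, mapped to frame 9
```

The offset passes through untouched; only the page number is remapped, which is why the program sees one contiguous logical space while its pages can live anywhere in physical memory or on disk.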
Conclusion
The many historical contributions to the development of operating systems, combined with advances
in the electronics behind chips, processors, and controllers, have given us several alternatives
for the myriad of problems that arise with the implementation of new technologies. I look forward
to the changes quantum computing will bring to our lives.
References
Lister, A., Eager, B., & Eager, R. D. (2016). Fundamentals of operating systems. Macmillan
International Higher Education.
Stallings, W. (2014). Operating Systems. Pearson Education UK.
Tanenbaum, A. S., & Bos, H. (2015). Modern operating systems. Pearson.
Tsichritzis, D. C., & Bernstein, P. A. (2014). Operating systems. Academic Press.