CLG Notes
SPCC
30 March 2024 23:15
2. Explain with a flowchart the design of a two-pass assembler (ls-2)
In Pass 1 of the assembly process, the assembler goes through the source program to assign locations to instructions
and data, as well as define values for symbols (labels). Here's a breakdown of the steps involved:
1. Initialize Location Counter (LC): Set LC to the first location in the program (usually 0).
2. Read Source Statement: Read a source statement from the input.
3. Check Operation Code: Examine the operation code field to determine if it's a machine instruction or a pseudo-
operation (pseudo-op).
4. Process Machine Instructions: If it's a machine instruction, search the Machine Op-Codes Table (MOT) to find a match
for the op-code field. Determine the length of the instruction and update the LC accordingly. If a literal is found in the
operand field, add it to the Literal Table (LT). If there's a symbol in the label field, save it in the Symbol Table (ST) with
the current value of LC.
5. Process Pseudo-Operations: For pseudo-ops like USING and DROP, simply save the cards for pass 2. For EQU, evaluate
the expression in the operand field to define the symbol.
6. Handle DS and DC Pseudo-Ops: These pseudo-ops can affect both the location counter and symbol definition. Examine
the operand field to determine storage requirements. Adjust the LC if necessary due to alignment conditions.
7. Terminate Pass 1: When encountering the END pseudo-op, terminate Pass 1. Perform housekeeping operations like
assigning locations to collected literals and reinitialize conditions for Pass 2.
Pass 1 primarily focuses on assigning locations, defining symbols, and collecting information needed for Pass 2, such as
literals and pseudo-ops.
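The Pass 1 loop above can be sketched in a few lines of Python. This is a simplified model, not a real assembler: instructions are hypothetically one word long, and statements are assumed to be pre-parsed into (label, opcode, operand) tuples.

```python
# Simplified Pass 1 of a two-pass assembler: assign locations and build
# the Symbol Table (ST). The MOT below is a hypothetical mnemonic->length map.
MOT = {"READ": 1, "PRINT": 1, "MOVER": 1, "ADD": 1, "MOVEM": 1}

def pass1(statements):
    lc = 0
    st = {}                       # Symbol Table: label -> address
    for label, opcode, operand in statements:
        if opcode == "START":     # pseudo-op: set the initial LC
            lc = int(operand)
            continue
        if opcode == "END":       # pseudo-op: terminate Pass 1
            break
        if label:                 # define the label at the current LC
            st[label] = lc
        if opcode == "DS":        # pseudo-op: reserve storage
            lc += int(operand)
        else:                     # machine instruction: advance LC by its length
            lc += MOT[opcode]
    return st

program = [
    (None, "START", "100"),
    ("A", "DS", "1"),
    (None, "READ", "A"),
    (None, "END", None),
]
print(pass1(program))  # {'A': 100}
```

The Symbol Table returned here is exactly the database Pass 2 later consults to resolve operand addresses.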
Pass 2 of the assembly process finalizes the assembly by generating code based on the symbols defined in Pass 1. Here's
how it works:
Initialize Location Counter (LC): Same as Pass 1, set LC to the starting location.
Read Source Card: Read a card from the source file left by Pass 1.
Process Machine Instructions: Similar to Pass 1, examine the operation code field to determine if it's a machine instruction.
Search the Machine Op-Codes Table (MOT) to find a match for the op-code field. Determine the length, binary op-code, and
format type of the instruction. Process the operand field based on the instruction format type.
Handle Pseudo-Ops: Handle pseudo-ops like END, EQU, USING, DROP, DS, and DC. For END, terminate the assembly. For
EQU, print the EQU card. For USING and DROP, evaluate the operand fields and mark entries in the Base Table accordingly.
Process DS and DC as in Pass 1, but generate actual code for the DC pseudo-op in Pass 2.
Generate Code: For RR-format instructions, evaluate and insert register specification fields into the instruction. For RX-
format instructions, evaluate register and index fields and generate an Effective Address (EA). Format the instructions for
later processing by the loader, typically placing several instructions on a single card. Print a listing line containing the source
card, its storage location, and hexadecimal representation.
Housekeeping Tasks: Perform various tasks such as generating code for remaining literals in the Literal Table (LT).
Terminate Pass 2: When encountering the END pseudo-op, terminate Pass 2. Perform any remaining housekeeping tasks.
Pass 2 completes the assembly process by generating machine code based on the symbols defined in Pass 1 and preparing it
for further processing by the loader.
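A matching Pass 2 sketch, again simplified: symbolic operands are replaced with addresses from the Symbol Table built in Pass 1. The numeric op-codes here are made up purely for illustration.

```python
# Simplified Pass 2: emit (op-code, operand-address) pairs using the
# Symbol Table produced by Pass 1. Op-code numbers are hypothetical.
MOT = {"READ": "09", "PRINT": "10", "MOVER": "04", "ADD": "01", "MOVEM": "05"}

def pass2(statements, st):
    code = []
    for label, opcode, operand in statements:
        if opcode in ("START", "END", "DS"):     # no code generated here
            continue
        # Resolve the last (symbolic) operand, e.g. "A" in "AREG,A".
        address = st.get(operand.split(",")[-1], 0)
        code.append((MOT[opcode], address))
    return code

st = {"A": 101}
print(pass2([(None, "MOVER", "AREG,A")], st))  # [('04', 101)]
```

In a real assembler this step would also format the output for the loader and print the listing line; the sketch keeps only the symbol-resolution core.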
__________________________________________________________________________________________________
3. Numerical on a two-pass assembler: consider the following assembly program
START 501
A DS 1
B DS 1
C DS 1
READ A
READ B
MOVER AREG,A
ADD AREG,B
MOVEM AREG,C
PRINT C
END
Generate the pass-1 and pass-2 output and also show the contents of the database tables involved. (ls-2)
ANS:-
Pass 1:
START 501: Initialize the Location Counter (LC) to 501.
A DS 1, B DS 1, C DS 1: Enter A, B, and C in the Symbol Table at the current LC (501, 505, 509, taking one 4-byte word per DS 1) and advance the LC past the reserved storage.
READ A, READ B: Check MOT for READ, increment LC by the size of the READ instruction.
MOVER AREG, A: Check MOT for MOVER, increment LC by the size of the MOVER instruction.
ADD AREG, B: Check MOT for ADD, increment LC by the size of the ADD instruction.
MOVEM AREG, C: Check MOT for MOVEM, increment LC by the size of the MOVEM instruction.
PRINT C: Check MOT for PRINT, increment LC by the size of the PRINT instruction.
END: Terminate Pass 1.
Pass 2:
READ A, READ B, PRINT C: Generate machine code for the I/O statements, filling in the addresses of A, B, and C from the Symbol Table.
MOVER AREG, A: Generate machine code for the MOVER instruction, store it in the object code.
ADD AREG, B: Generate machine code for the ADD instruction, store it in the object code.
MOVEM AREG, C: Generate machine code for the MOVEM instruction, store it in the object code.
Database Tables:
Symbol Table (ST):
A: 501
B: 505
C: 509
Machine Op-Codes Table (MOT):
MOVER: op-code, instruction length
ADD: op-code, instruction length
MOVEM: op-code, instruction length
Literal Table (LT):
(empty — the program uses no literals)
_____________________________________________________________________________
4. With reference to an assembler, explain the following tables with suitable examples: POT, MOT, ST, LT, BT (ls-2)
ANS:-
POT (Pseudo-Op Table):
The POT contains a list of all the pseudo-operations supported by the assembler along with their definitions.
Example:
START 101
END
DS 4
USING *, 15
MOT (Machine Op-Code Table):
The MOT contains a list of all the machine instructions recognized by the assembler along with their op-codes and formats.
Example:
ADD 01 3
SUB 02 3
MOVER 04 2
ST (Symbol Table):
The ST contains a list of all symbols (labels, variables, etc.) defined in the program along with their corresponding addresses.
Example:
A 101
B 105
C 109
LT (Literal Table):
The LT contains a list of all the literals (constants) used in the program along with their addresses.
Example:
'HELLO' 201
1234 205
BT (Base Table):
The BT contains information about the base registers used in the program.
Example:
B 15
These tables are essential components of an assembler, helping in the translation of assembly language code into machine
code. They aid in symbol resolution, instruction lookup, and other processing tasks required during assembly.
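As a sketch, these tables can be pictured as plain lookup structures; here they are modeled as Python dictionaries. All op-codes, lengths, and addresses below are illustrative, not taken from any real machine.

```python
# Illustrative assembler databases modeled as Python dictionaries.
POT = {"START": "set LC", "END": "terminate assembly", "DS": "reserve storage",
       "EQU": "define symbol", "USING": "declare base register"}
MOT = {"ADD": ("01", 3), "SUB": ("02", 3), "MOVER": ("04", 2)}  # (op-code, length)
ST  = {"A": 101, "B": 105, "C": 109}        # symbol  -> address
LT  = {"='HELLO'": 201, "=1234": 205}       # literal -> address
BT  = {15: "B"}                             # base register -> symbol it covers

# A typical lookup during assembly: is "ADD" a machine op, and how long is it?
opcode, length = MOT["ADD"]
print(opcode, length)  # 01 3
```

Pass 1 mainly writes into ST and LT while reading MOT/POT; Pass 2 mainly reads all five while generating code.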
_____________________________________________________________________________
5. Explain forward reference problem and how it is handled in assembler design (LS-2)
ANS:-
The forward reference problem in assembler design occurs when the assembler encounters a symbol (such as a label or
variable) that is referenced before it is defined in the source code. This poses a challenge for the assembler because it needs
to assign addresses to symbols in a linear pass through the source code, but it cannot assign an address to a symbol before it
is defined. Here's a detailed explanation of how this problem is handled in assembler design:
1. First Pass:
During the first pass of the assembler, the assembler performs the following tasks:
• Symbol Table Initialization: The assembler initializes an empty symbol table to store information about symbols
encountered in the source code.
• Scan Source Code: The assembler scans through the entire source code line by line, analyzing each line to identify
symbols, instructions, and directives.
• Symbol Table Entry: When the assembler encounters a symbol (label or variable), it creates an entry in the symbol table
for that symbol. The entry includes the symbol's name, its associated address (initially set to zero or an arbitrary value),
and any other relevant information.
• Forward References: If the assembler encounters a symbol that is referenced before it is defined, it marks it as a
forward reference. The assembler continues processing, temporarily leaving the address unresolved.
2. Second Pass:
After completing the first pass, the assembler revisits the source code in a second pass to resolve forward references. During
the second pass:
• Revisit Source Code: The assembler reprocesses each line of the source code, including those with forward references.
• Resolve Forward References: When the assembler encounters a forward reference, it looks up the symbol in the symbol
table to find its address, which was assigned when the symbol's definition was reached later in the first pass.
• Update Symbol Address: The assembler updates the address of the symbol with its correct value obtained from the first
pass.
• Generate Object Code: As the assembler processes each instruction, it generates the corresponding machine code and
stores it in memory or generates an object file.
Error Handling:
• If a symbol is referenced but not defined anywhere in the source code, it leads to an unresolved external symbol error.
• Assemblers typically detect such errors and report them to the user, indicating the line number and context where the
error occurred.
• Resolving unresolved external symbols might require modifications to the source code to define the missing symbols or
correct the references.
Example:
Consider the following assembly code:
START 100
LOOP ADD A, B
SUB C, A
JMP LOOP
A DS 1
B DS 1
C DS 1
END
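Here LOOP is a backward reference, while A, B, and C are forward references: they are used at the top of the program but defined only after JMP LOOP. The two-pass resolution can be sketched in Python (hypothetical one-word instructions, statements pre-parsed into tuples):

```python
# Forward-reference handling: note unresolved operands during the first
# pass, then backpatch them once the Symbol Table is complete.
def assemble(statements):
    lc, st, code, fixups = 0, {}, [], []
    for label, opcode, operand in statements:     # ---- first pass ----
        if opcode == "START":
            lc = int(operand); continue
        if opcode == "END":
            break
        if label:
            st[label] = lc                        # define symbol at current LC
        if opcode == "DS":
            lc += int(operand)
        else:
            code.append([opcode, operand, None])  # address not known yet
            fixups.append(len(code) - 1)          # remember the slot to patch
            lc += 1
    for i in fixups:                              # ---- second pass ----
        sym = code[i][1].split(",")[-1]           # e.g. "A" in "AREG,A"
        code[i][2] = st[sym]                      # backpatch the real address
    return code

prog = [(None, "START", "100"), (None, "JMP", "A"), ("A", "DS", "1"),
        (None, "END", None)]
print(assemble(prog))  # [['JMP', 'A', 101]]
```

The JMP at location 100 references A before A is defined; the fixup list lets the second pass fill in A's address (101) after the whole Symbol Table exists.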
_____________________________________________________________________________
• ORG (Origin): Specifies the origin or starting address of the program or a section of code.
ORG 2000
; Program code goes here
TITLE 'Sample Assembly Program'
_____________________________________________________________________________
7. Enlist the different types of errors that are handled by pass 1 and pass 2 assemblers (LS-2)
ANS:-
In a two-pass assembler, both pass 1 and pass 2 handle different types of errors during the assembly process. Here are the
different types of errors that are typically handled by each pass:
Errors Handled by Pass 1:
• Syntax Errors: Pass 1 checks for syntax errors in the assembly code, such as missing commas, incorrect operand
formats, or invalid mnemonics.
• Undefined Symbols: Pass 1 identifies symbols that are never defined anywhere in the program: any symbol that is used
but still absent from the symbol table at the end of the pass is flagged as an error.
• Forward Referencing: Pass 1 handles forward references by allocating temporary addresses or placeholders for symbols
that are referenced before being defined. These addresses are later resolved in pass 2.
• Invalid Instructions: Pass 1 verifies the validity of machine instructions by checking against the opcode table. If an
instruction is not recognized or supported, it is reported as an error.
• Literal Pool Management: Pass 1 manages literal pools by identifying literals in the code and assigning them temporary
addresses. These literals are later processed in pass 2.
• Base Register Management: If base-relative addressing is used, pass 1 manages the base register assignments and
ensures that base-relative instructions are properly handled.
Errors Handled by Pass 2:
• Address Calculation Errors: Pass 2 calculates the final addresses for instructions and data, resolving any forward
references encountered in pass 1. If there are errors in address calculation, such as overflow or underflow, they are
reported in pass 2.
• Symbol Resolution: Pass 2 resolves symbols to their final addresses based on the symbol table generated in pass 1. If a
symbol cannot be resolved or conflicts with existing symbols, it is flagged as an error.
• Literal Pool Processing: Pass 2 processes literal pools generated in pass 1 by assigning them final addresses and
generating machine code instructions or data for them.
• Base Register Resolution: Pass 2 resolves base-relative addressing by substituting base register values and computing
displacement values for instructions referencing memory locations relative to a base register.
• Output Code Generation: Pass 2 generates the final machine code instructions or object code based on the resolved
addresses and assembled instructions. If there are errors in generating the output code, they are reported in pass 2.
_____________________________________________________________________________
8. Draw a neat flowchart of a two-pass macro processor, explain with the help of an example (ls-3)
ANS:-
PASS 1- MACRO DEFINITION
In pass 1 of the macro processor, the focus is on processing macro definitions. Here's how it works:
• Identification of Macro Definitions: Pass 1 scans each input line to identify if it contains a macro definition. This is
typically done by checking for the presence of a specific pseudo-opcode, such as "MACRO."
• Saving Macro Definitions: When a macro definition is encountered, the entire definition is saved in the Macro Definition
Table (MDT). This table stores all the lines of code that constitute the macro, including any parameters and their
replacements.
• Macro Name Table (MNT): The name of the macro is extracted from the definition and stored in the Macro Name Table
(MNT). This table maintains a list of all defined macros along with pointers to their corresponding entries in the MDT.
• Processing Macro Definitions: Pass 1 continues processing subsequent lines of code until it reaches the END pseudo-op,
indicating the end of the source program or assembly file. During this process, all macro definitions encountered are
saved in the MDT and their names are recorded in the MNT.
• Transfer to Pass 2: Once all macro definitions have been processed, control is transferred to pass 2 of the macro
processor. Pass 2 will handle the processing of macro calls, where macros are invoked in the source code.
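The MNT/MDT construction described above can be sketched in Python. Parameter bookkeeping is kept minimal, and the input is assumed to be a list of source lines with one statement each:

```python
# Macro Pass 1: collect each definition into the MDT and index it in the MNT.
def macro_pass1(lines):
    mnt, mdt, output = {}, [], []
    it = iter(lines)
    for line in it:
        if line.split()[0] == "MACRO":      # start of a macro definition
            header = next(it)               # macro name line, e.g. "INCR &ARG"
            name = header.split()[0]
            mnt[name] = len(mdt)            # MNT: name -> index into the MDT
            mdt.append(header)
            for body in it:                 # copy the body up to and incl. MEND
                mdt.append(body)
                if body.strip() == "MEND":
                    break
        else:
            output.append(line)             # ordinary line: pass through
    return mnt, mdt, output

src = ["MACRO", "INCR &ARG", "ADD AREG, &ARG", "MEND", "INCR X", "END"]
mnt, mdt, out = macro_pass1(src)
print(mnt)  # {'INCR': 0}
print(out)  # ['INCR X', 'END']
```

Note that the macro call `INCR X` passes through untouched — expanding it is pass 2's job, using the MNT entry to find the saved definition.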
PASS 2 - MACRO CALLS AND EXPANSION:
In pass 2 of the macro processor, the focus is on processing macro calls and expanding them. Here's how it works:
1. Identification of Macro Calls: Pass 2 examines the operation mnemonic of each input line to determine if it matches a macro
name stored in the Macro Name Table (MNT). If a macro call is found, further processing is initiated.
2. Setting the Macro Definition Table Pointer (MDTP): When a macro call is detected, the call processor sets a pointer, known
as the Macro Definition Table Pointer (MDTP), to the corresponding macro definition stored in the Macro Definition Table
(MDT). The initial value of the MDTP is obtained from the "MDT index" field of the MNT entry.
3. Preparing the Argument List Array (ALA): The macro expander prepares the Argument List Array (ALA), which consists of a
table of dummy argument indices and corresponding arguments to the macro call. Each argument is matched to its
corresponding dummy argument in the macro definition.
4. Substituting Arguments in Macro Definition: As the macro definition is read from the MDT, the values from the argument list
are substituted for the dummy argument indices in the macro definition. This process continues until the MEND line in the
MDT is encountered, indicating the end of the macro definition.
5. Handling Argument References: Arguments can be referenced either by position or by name. For positional references, the
substitution process is straightforward. For references by name, the macro processor locates the dummy argument on the
macro name line to determine the proper index.
6. Expansion of Macro Calls: The expansion of the macro call involves substituting the arguments into the macro definition and
generating the corresponding lines of code. This expanded code becomes part of the source deck and is further processed
by the assembler.
7. Completion of Expansion: Once all macro calls have been expanded and the END pseudo-op is encountered, the expanded
source deck is transferred to the assembler for further processing. This includes handling any remaining instructions or data
in the source program.
_____________________________________________________________________________
• Named Block of Code: A macro is defined with a name, which serves as its identifier throughout the program.
Example:
MACRO ADDITION
ADD A, B
MEND
• Parameterized: Macros can accept parameters, allowing for flexibility in the generated code.
Example:
MACRO ADDITION A, B
ADD A, B
MEND
• Reusability: Once defined, macros can be invoked multiple times at different points in the program.
Example:
ADDITION X, Y
ADDITION A, B
• Expansion: When a macro is invoked, it expands into the corresponding block of code defined in its definition.
Example: If ADDITION X, Y is invoked, it expands to ADD X, Y.
• Parameter Substitution: Parameters passed to the macro are substituted into the macro definition.
Example: If ADDITION A, B is invoked, it expands to ADD A, B.
• Conditional Assembly: Macros can include conditional assembly directives to generate different code based on
conditions.
Example:
MACRO MAX A, B
IF A > B
MOVE A, MAXIMUM
ELSE
MOVE B, MAXIMUM
ENDIF
MEND
• Nesting: Macros can be defined within other macros, allowing for hierarchical organization and code abstraction.
Example:
MACRO OUTER_MACRO
MACRO INNER_MACRO
; Inner macro definition
MEND
MEND
• Scope: Macros have their own scope, and their definitions are valid within the scope where they are defined.
Example:
MACRO EXAMPLE_MACRO
; Macro definition
MEND
; Code outside the macro definition
__________________________________________________________________________________
Here's how we can define the COMPARE_AND_SWAP macro with conditional expansion (a sketch reconstructed from the description below; the mnemonics are illustrative):
MACRO COMPARE_AND_SWAP VALUE1, VALUE2, SIGNED
IF SIGNED = 1
CMP VALUE1, VALUE2
ELSE
CMPU VALUE1, VALUE2
ENDIF
; if VALUE1 > VALUE2, swap VALUE1 and VALUE2
MEND
In this example, the COMPARE_AND_SWAP macro takes three parameters: VALUE1, VALUE2, and SIGNED. The SIGNED
parameter is used to determine whether the comparison should be signed or unsigned.
If SIGNED is 1, the macro compares VALUE1 and VALUE2 using the CMP instruction for signed comparison.
If SIGNED is 0, the macro compares VALUE1 and VALUE2 using the CMPU instruction for unsigned comparison.
After the comparison, if VALUE1 is greater than VALUE2, the values are swapped. Otherwise, no swapping occurs.
Here's how you can use the COMPARE_AND_SWAP macro with conditional expansion:
COMPARE_AND_SWAP R1, R2, 1 ; Compare signed values
This invocation compares the values in registers R1 and R2 using signed comparison.
COMPARE_AND_SWAP R3, R4, 0 ; Compare unsigned values
This invocation compares the values in registers R3 and R4 using unsigned comparison.
By using conditional macro expansion, we can create versatile macros that adapt their behavior based on specific
requirements, enhancing code flexibility and reusability.
Macro expansion is the process of substituting a macro call with the appropriate macro definition. This process involves
three main activities: macro definition, macro invocation, and macro expansion. The macro definition defines the macro and
its behavior, the macro invocation calls the macro, and the macro expansion substitutes the macro call with the macro
definition.
In Lisp, for example, macro expansion is achieved through the use of the macroexpand function, which takes a form and an
environment as arguments and returns the expansion of the form. The macroexpand-hook variable is a special variable that
can be set to a function that is called during macro expansion.
(defmacro hello-world ()
`(format t "Hello, World!"))
(macroexpand '(hello-world))
; Output: (format t "Hello, World!")
In this example, the hello-world macro is defined to expand to the form (format t "Hello, World!"). When
the macroexpand function is called with the form (hello-world), it returns the expansion of the form, which is (format t
"Hello, World!").
In summary, macros let you reuse frequently-needed blocks of code, and macro expansion is the process of substituting a
macro call with the corresponding macro definition.
12. What is loader and explain the function of loader with an example (ls-4)
ANS:-
A loader is a program that loads an executable program from disk into memory for execution. Its primary function is to place
the program into memory in such a way that it can be executed by the CPU. The loader performs several tasks to accomplish
this, including allocating memory, relocating the program, resolving external references, and setting up the program's
execution environment.
_____________________________________________________________________________
13. Explain design and flowchart of absolute loader (ls-4)
ANS:-
The absolute loader is a type of loader used in computer systems to load a program into memory for execution. Unlike other
loaders, such as the relocatable loader or the linkage editor, the absolute loader does not perform any relocation or linking.
Instead, it assumes that the program is to be loaded into a specific memory location. Here's an explanation of the design and
a flowchart for an absolute loader:
Design of Absolute Loader:
Input:
The input to the absolute loader is an object program in absolute format. This object program contains machine
instructions and data in absolute memory addresses.
Memory Allocation:
The absolute loader assumes that the program is to be loaded into a specific memory location. Therefore, it does not
perform any relocation or linking. Instead, it loads the program directly into memory at the specified address.
Loading Process:
The loading process involves reading the object program and copying its contents into memory starting from the
specified address.
It reads the object program one block at a time and copies it to the specified memory location.
Error Handling:
The absolute loader performs error checking to ensure that the object program is loaded correctly into memory.
It checks for errors such as memory overflow, invalid format, or any other inconsistencies in the object program.
Explanation of Flowchart:
• Start:
The process begins.
• Read object program into memory:
The object program is read into memory.
• Initialize memory pointer:
A memory pointer is initialized to point to the specified memory address where the program is to be loaded.
• Repeat until end of object program:
This loop continues until the end of the object program is reached.
• Read one block of the object program:
Each block of the object program is read.
• Copy block into memory at memory pointer:
The block of the object program is copied into memory at the memory pointer.
• Increment memory pointer:
The memory pointer is incremented to point to the next memory location.
• End loop:
The loop continues until the entire object program is loaded into memory.
• Perform error checking:
After loading the object program, error checking is performed.
• If errors found, display error message and abort:
If errors are found during error checking, an error message is displayed, and the loading process is aborted.
• Else, display success message:
If no errors are found, a success message is displayed.
• End:
The process ends.
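The flowchart reduces to a very small loop. Here is a sketch in Python where "memory" is a list of words and the object program is a sequence of blocks; no real object-file format is implied:

```python
# Absolute loader sketch: copy object-program blocks into memory at the
# fixed load address. No relocation or linking is performed.
def absolute_load(memory, object_program, load_address):
    ptr = load_address                        # initialize the memory pointer
    for block in object_program:              # read one block at a time
        if ptr + len(block) > len(memory):    # error check: memory overflow
            raise MemoryError("object program does not fit in memory")
        memory[ptr:ptr + len(block)] = block  # copy the block into memory
        ptr += len(block)                     # advance the pointer
    return ptr                                # first free location after load

memory = [0] * 16
end = absolute_load(memory, [[0x01, 0x02], [0x03]], 5)
print(memory[5:8], end)  # [1, 2, 3] 8
```

Because the load address is fixed inside the object program, the same loop cannot place the program anywhere else — exactly the inflexibility the notes describe.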
_____________________________________________________________________________
14. Explain the working of the Direct Linking Loader with an example and also show the entries in the different databases built by the DLL (ls-4)
ANS:-
• A Direct Linking Loader is a type of loader used in computer systems to load a program into memory for execution. Unlike
other loaders, such as the relocatable loader or the linkage editor, the direct linking loader performs both linking and
loading in a single step. Here's how a Direct Linking Loader works, along with an example and an illustration of the entries
in the databases it builds:
• Working of Direct Linking Loader:
• Input:
The input to the Direct Linking Loader is an object program in relocatable format. This object program contains machine
instructions and data with relative addresses.
• Memory Allocation:
The Direct Linking Loader loads the object program into memory, starting at a specified base address.
• Linking and Loading Process:
During the loading process, the loader adjusts the addresses in the object program to reflect the actual memory locations
where the program will be loaded.
It performs this adjustment by adding the base address to the relative addresses in the program code.
• Error Handling:
The Direct Linking Loader performs error checking to ensure that the object program is loaded correctly into memory.
It checks for errors such as memory overflow, invalid format, or any other inconsistencies in the object program.
Example:
Suppose we have an object program with the following instructions:
Address Instruction
----------------------
100 ADD 200
101 SUB 300
102 JMP 400
And suppose the program is loaded with a relocation constant of 500. The Direct Linking Loader would perform
the following steps:
1. Adjust Addresses:
Since the program is relocatable, the loader adjusts the addresses in the program code to reflect the actual memory
locations where the program will be loaded.
The addresses are adjusted by adding the base address (500) to the relative addresses in the program code.
Adjusted Address Instruction
--------------------------------
600 ADD 700
601 SUB 800
602 JMP 900
2. Loading Process:
The loader copies the object program into memory at the adjusted addresses.
Memory Address Instruction
----------------------------
600 ADD 700
601 SUB 800
602 JMP 900
3. Error Handling:
The loader performs error checking to ensure that the object program is loaded correctly into memory.
It checks for errors such as memory overflow, invalid format, or any other inconsistencies in the object program.
The loader also records an entry describing the loaded program, something like
this:
Name Base Address Size
----------------------------------
Program 600 3
This entry indicates that the program "Program" starts at memory address 600 and has a size of 3 memory locations.
This information allows the operating system to locate and execute the program correctly.
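The address adjustment in the example can be expressed directly; a sketch using the illustrative instruction format from the example:

```python
# Relocation sketch: add the relocation constant to every address-sensitive
# field of the relocatable object program.
def relocate(program, base):
    # Each entry: (relative address, mnemonic, relative operand address).
    return [(addr + base, op, target + base) for addr, op, target in program]

program = [(100, "ADD", 200), (101, "SUB", 300), (102, "JMP", 400)]
print(relocate(program, 500))
# [(600, 'ADD', 700), (601, 'SUB', 800), (602, 'JMP', 900)]
```

A real loader would use relocation records to know which fields are addresses and which are constants; the sketch simply treats every operand as relocatable.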
_____________________________________________________________________________
Example:
Consider a program that uses functions from a shared library libmath.so. Instead of including the entire libmath.so library in
the executable file, the program only contains references to the functions it needs. At runtime, the dynamic linker loads
libmath.so into memory, resolves the references, and updates the program's symbol table, allowing the program to call the
functions from libmath.so as needed.
_____________________________________________________________________________
16. Difference between Dynamic loading and Dynamic linking with example (ls-4)
ANS:-
Dynamic Loading:
1. Loading Process:
Dynamic loading loads the libraries into memory only when they are required during program execution.
2. Trigger:
Loading process is initiated explicitly by the program when it requires access to a particular library or module.
3. Memory Usage:
Reduces memory usage as libraries are loaded into memory only when they are required.
4. Program Startup:
Programs start more quickly because they do not need to load all libraries at startup.
5. Example:
A multimedia application may dynamically load image processing libraries only when the user requests to edit an
image, and load video processing libraries only when the user requests to edit a video.
6. Complexity:
Requires additional code to handle the loading and unloading of libraries, increasing complexity.
7. Runtime Overhead:
There is a slight overhead associated with dynamically loading libraries at runtime.
8. Dependencies:
The program needs to manage dependencies and explicitly handle loading and unloading of libraries.
9. Flexibility:
Provides flexibility in terms of memory usage and program startup time.
10. Example Scenario: - Consider a web browser that supports various media formats. When a user tries to play a video file,
the browser may dynamically load the necessary codecs only when required. This allows the browser to start quickly and
reduces memory usage.
Dynamic Linking:
1. Linking Process:
Dynamic linking is a technique in which the linking of libraries to an executable program is deferred until runtime.
2. Deferred Linking:
Libraries are linked to the program at runtime rather than at compile time.
3. Memory Usage:
Reduces memory usage as libraries are shared among multiple programs.
4. Program Startup:
Slightly slower program startup compared to dynamic loading because of the linking process at runtime.
5. Example:
A text editor application may use a shared library for spell checking. Instead of statically linking the spell checking
library at compile time, the text editor is dynamically linked to the spell checking library at runtime.
6. Runtime Overhead:
There is a slight overhead associated with dynamic linking because the linking process occurs at runtime.
7. Dependency Management:
Introduces dependencies between the executable program and the shared libraries it uses.
8. Simplified Updates:
Updates to shared libraries automatically apply to all programs that use them.
9. Flexibility:
Provides flexibility in terms of memory usage and library updates.
10. Example Scenario: - Consider a Linux system where multiple applications use the same C standard library (libc). Instead
of each application having its own copy of libc, the system dynamically links each application to the shared libc library,
reducing memory usage and simplifying updates.
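The loading-on-demand idea can be demonstrated in Python, whose importlib brings a module into memory only when it is first requested (the interpreter itself, in turn, is typically dynamically linked against shared libraries such as libc):

```python
# Dynamic loading sketch: bring a module into memory only on first use,
# instead of importing everything at program startup.
import importlib
import sys

def get_codec(name):
    # Load the named module lazily, the first time it is requested.
    if name not in sys.modules:
        importlib.import_module(name)   # loaded now, not at startup
    return sys.modules[name]

json_codec = get_codec("json")          # loaded on this first request
print(json_codec.dumps({"ok": True}))   # {"ok": true}
```

This mirrors the browser/codec scenario above: startup stays fast because `json` (standing in for a codec) is only loaded when the feature that needs it is actually used.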
_____________________________________________________________________________
Relocation:
Definition: Relocation is the process of adjusting the address-sensitive parts of a program so that it executes correctly from the memory location where it is actually loaded.
Purpose: When a program is compiled, it contains symbolic references to memory locations. During relocation, these
references are adjusted to reflect the actual memory addresses where the program will be loaded.
Process: During relocation, the loader scans the object file, identifies the memory addresses that need to be adjusted, and
calculates the correct addresses based on the program's load location.
Example: Suppose a program contains a reference to a global variable x at address 1000. If the program is loaded into
memory at address 5000, relocation is performed to adjust the reference to x to address 6000.
Linking:
Definition: Linking is the process of combining multiple object files and libraries into a single executable file.
Purpose: The purpose of linking is to resolve symbolic references between different object files and libraries and create a
single executable file that can be loaded into memory and executed.
Process: During linking, the linker combines the object files and libraries, resolves symbolic references, performs relocation,
and generates an executable file.
Example: Suppose a program consists of multiple source files main.c, util.c, and math.c. After compiling these source files
into object files (main.o, util.o, math.o), the linker combines them, resolves the references between them, performs
relocation, and generates a single executable file (program.exe).
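A toy model of the link step: each "object file" exports one symbol and may reference others; the linker lays the files out, builds a global symbol table, and patches the references. The object-file format here is invented for illustration:

```python
# Toy linker: combine object files, resolve cross-file symbol references,
# and relocate each file to its position in the final image.
def link(object_files):
    # Object file: {"name": ..., "size": ..., "refs": {offset: symbol}}.
    symtab, base = {}, 0
    for obj in object_files:                  # layout pass: assign base addresses
        symtab[obj["name"]] = base
        base += obj["size"]
    image = [0] * base
    for obj in object_files:                  # patch pass: resolve references
        start = symtab[obj["name"]]
        for offset, symbol in obj["refs"].items():
            if symbol not in symtab:          # unresolved external symbol
                raise NameError(f"unresolved external symbol: {symbol}")
            image[start + offset] = symtab[symbol]
    return symtab, image

objs = [{"name": "main", "size": 3, "refs": {1: "util"}},
        {"name": "util", "size": 2, "refs": {}}]
symtab, image = link(objs)
print(symtab, image)  # {'main': 0, 'util': 3} [0, 3, 0, 0, 0]
```

Here "main" references "util"; the linker places util at address 3 and patches that address into main's image, which is exactly the resolve-and-relocate step described above.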
_____________________________________________________________________________
18. Explain different types of loaders in detail: absolute, relocating, compile and go, direct linking, dynamic linking, dynamic
loading (ls-4)
ANS:-
1. Absolute Loader:
• Functionality:
Absolute loaders load an executable file into memory at a specific location and execute it.
• Process:
Reads the absolute machine code and loads it into memory at a predetermined memory location specified in the
program.
• Relocation:
No relocation is performed as the program is loaded at a fixed memory location.
• Advantages:
Simple and fast loading process.
• Disadvantages:
Lack of flexibility as the program is always loaded at the same memory location.
• Example:
Older systems that used absolute loading methods.
2. Relocating Loader:
• Functionality:
Relocating loaders load an executable file into memory and perform relocation, adjusting the addresses of the
program's symbols based on the load location.
• Process:
Scans the object file, identifies the memory addresses that need to be adjusted, and calculates the correct addresses
based on the program's load location.
• Types of Relocation:
Static Relocation: addresses are adjusted once, when the program is loaded into memory, and stay fixed for the rest of execution.
Dynamic Relocation: addresses are adjusted during execution, typically with hardware support such as a base/relocation register.
• Advantages:
Provides flexibility as the program can be loaded at different memory locations.
• Disadvantages:
Requires additional processing to perform relocation.
• Example:
Modern operating systems use relocating loaders to load and execute programs.
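The adjustment step can be sketched with a relocation bitmap (the format is invented for illustration): words flagged as address fields get the actual load address added to them, while opcodes and constants pass through unchanged.

```python
# Toy relocating loader: a relocation bitmap marks which words are
# address fields that must be adjusted by the actual load address.

def relocating_load(code, reloc_bits, load_addr):
    # Words flagged 1 are addresses: add the load address to them.
    return [w + load_addr if bit else w
            for w, bit in zip(code, reloc_bits)]

# Word 1 references offset 2 within the program; words 0 and 2 are opcodes.
code       = [7, 2, 9]
reloc_bits = [0, 1, 0]
print(relocating_load(code, reloc_bits, 5000))  # [7, 5002, 9]
```

The same object code can thus be loaded at any address: only the bitmap-flagged words change.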
3. Compile and Go Loader:
• Functionality:
Compile-and-go loaders translate the source code into machine code placed directly in memory and execute it immediately, without producing an object file.
• Process:
Reads the source code, compiles it into machine code, loads the machine code into memory, and executes it.
• Advantages:
Simplifies the loading process by combining compilation and loading into a single step.
• Disadvantages:
No object file is saved, so the program must be retranslated on every run, and the translator occupies memory alongside the program.
• Example:
Early systems where programs were compiled and executed immediately.
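The compile-and-go idea maps neatly onto Python's own compile/exec pair, used here only as an analogy: the source is translated into an in-memory code object and run immediately, with nothing written to disk.

```python
source = "x = 5\ny = x * 2"

# "Compile": translate the whole source into an in-memory code object.
code_obj = compile(source, "<in-memory>", "exec")

# "Go": transfer control immediately; no object file ever touches disk.
ns = {}
exec(code_obj, ns)
print(ns["y"])  # 10
```

As with a real compile-and-go system, running the program again means translating it again, since no translated form is ever saved.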
4. Direct Linking Loader:
• Functionality:
Direct linking loaders link the object program with library routines by placing library code directly into the object code.
• Process:
Combines the object program with library routines before loading it into memory.
• Advantages:
Faster execution as library routines are directly integrated into the object code.
• Disadvantages:
Increased memory usage as library routines are duplicated in each object program.
• Example:
Early systems where library routines were linked directly into the object code.
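The duplication trade-off can be sketched as follows (library contents and formats are invented for illustration): each linked program receives its own private copy of the routine's code.

```python
# Toy "direct linking": the body of each referenced library routine
# is copied straight into the program image at link time.

library = {"sqrt_fn": ["SQRT_CODE"]}

def direct_link(program, calls):
    image = list(program)
    for routine in calls:
        image.extend(library[routine])   # duplicate library code per program
    return image

prog_a = direct_link(["MAIN_A"], ["sqrt_fn"])
prog_b = direct_link(["MAIN_B"], ["sqrt_fn"])
print(prog_a)  # ['MAIN_A', 'SQRT_CODE']
print(prog_b)  # ['MAIN_B', 'SQRT_CODE'] -- same routine duplicated in both
```

Calls into the routine are fast because the code is local, but every program image carries its own copy, which is the memory cost noted above.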
5. Dynamic Linking:
• Functionality:
Dynamic linking loaders load and link shared libraries or modules into memory during runtime.
• Process:
Dynamically loads shared libraries or modules into memory when they are needed during program execution.
• Advantages:
Saves memory by loading only the necessary modules.
Allows for the use of shared libraries, reducing disk space and memory usage.
• Disadvantages:
Slight overhead associated with dynamic linking.
• Example:
Dynamic loading of libraries in modern operating systems and applications.
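A common dynamic-linking mechanism is lazy binding: a call initially goes through a stub, and the stub resolves the real routine from the shared library only on the first call, caching the binding for later calls. A toy sketch (all names invented):

```python
# Lazy-binding sketch: the stub defers symbol resolution until the
# routine is actually called, mimicking runtime dynamic linking.

shared_library = {"greet": lambda: "hello from shared lib"}

def make_stub(symbol):
    resolved = []                       # empty until first call
    def stub():
        if not resolved:                # first call: look up the symbol now
            resolved.append(shared_library[symbol])
        return resolved[0]()            # later calls reuse the binding
    return stub

greet = make_stub("greet")
print(greet())  # hello from shared lib
```

The one-time lookup is the "slight overhead" mentioned above; after it, calls go straight to the resolved routine.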
6. Dynamic Loading:
• Functionality:
With dynamic loading, an executable module is not brought into memory until it is first needed during execution.
• Process:
The program (or a loader stub) explicitly requests a module when it is first called; modules that are never used are never loaded.
• Advantages:
Saves memory by loading only the necessary modules.
• Disadvantages:
Requires additional code to handle the loading and unloading of modules.
• Example:
Dynamic loading of modules in modern operating systems and applications.
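Python's importlib gives a convenient analogy for dynamic loading: a module is brought into the process only when requested, not at program start.

```python
import importlib

def load_on_demand(module_name):
    # The module is loaded and initialized only when this is called.
    return importlib.import_module(module_name)

math_mod = load_on_demand("math")   # loaded now, not at program start
print(math_mod.sqrt(16.0))          # 4.0
```

Modules the program never asks for are never loaded, which is the memory saving described above; the extra code is the explicit load call itself.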
_____________________________________________________________________________
• Syntax Analysis (Parsing): This phase analyzes the tokens produced in the previous phase and checks whether they
conform to the rules of the programming language. For example, the parser would check that the
tokens INT, x, =, 5, ; form a valid declaration statement.
• Semantic Analysis: This phase checks the meaning of the source code, including the types of variables, the
scope of identifiers, and the relationships between variables. For example, the analyzer would check that the
variable x is declared as an integer and that the assignment x = 5 is valid.
• Intermediate Code Generation: This phase generates an intermediate representation of the source code, such as three-
address code, which is a low-level, platform-independent representation of the code. For example, the intermediate
code for the source code int x = 5; might be x := 5;.
• Optimization: This phase analyzes the intermediate code and applies various optimizations to improve the performance
of the generated machine code. For example, the optimizer might eliminate unnecessary assignments or reorder
operations to reduce the number of instructions.
• Code Generation: This phase converts the optimized intermediate code into machine code that can be executed
directly by the computer’s processor. For example, the code generator might produce machine code that loads the
value 5 into a register and stores it in the memory location corresponding to the variable x.
• Code Emission: This phase writes the generated machine code to a file or loads it into memory for execution. For
example, the code emitter might write the machine code to a file called x.out or load it into memory for execution.
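The phases above can be traced on the running example int x = 5; with a deliberately tiny sketch. This is not a real compiler: the token pattern and checks are simplified to the bare minimum needed to show each phase.

```python
import re

source = "int x = 5;"

# Lexical analysis: split the source into tokens.
tokens = re.findall(r"[A-Za-z_]\w*|\d+|[=;]", source)
print(tokens)                      # ['int', 'x', '=', '5', ';']

# Syntax analysis: check the shape  TYPE ID = NUM ;
assert tokens[0] == "int" and tokens[2] == "=" and tokens[4] == ";"

# Semantic analysis: the declared type must accept the literal.
assert tokens[3].isdigit()         # 5 is a valid integer literal

# Intermediate code generation: three-address-style code.
print(f"{tokens[1]} := {tokens[3]}")   # x := 5
```

A real compiler would continue from the intermediate code through optimization and machine-code generation, as the remaining phases describe.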
_____________________________________________________________________________
Example:
E -> E + T   { E.val = E1.val + T.val }
E -> T       { E.val = T.val }
T -> T * F   { T.val = T1.val * F.val }
T -> F       { T.val = F.val }
F -> INTLIT  { F.val = INTLIT.lexval }
This is a grammar to syntactically validate an expression having additions and multiplications in it. Now, to carry out
semantic analysis we will augment SDT rules to this grammar, in order to pass some information up the parse tree and check
for semantic errors, if any. In this example, we will focus on the evaluation of the given expression, as we don’t have any
semantic assertions to check in this very basic example.
To understand the translation rules, take the first production, E -> E + T, augmented with the rule E.val = E1.val + T.val. Here
val is an attribute of both non-terminals, E and T. The right-hand side of the translation rule computes the attribute value of
the node on the left side of the production from the attribute values of the nodes on its right side. Generalizing, an SDT
augments a CFG by associating 1) a set of attributes with every grammar symbol and 2) a translation rule with every
production, written in terms of attributes, constants, and lexical values.
Let’s take a string to see how semantic analysis happens, say S = 2+3*4. In the parse tree for S (figure omitted here), the
production T -> T * F groups 3*4 beneath the + node, so the val attributes propagate up the tree to give E.val = 2 + (3*4) = 14
at the root.
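The val-attribute rules can be executed with a small recursive-descent evaluator, one function per non-terminal. This is only a sketch: left recursion is unrolled into loops, and literals are single digits for brevity.

```python
def parse_E(toks):                 # E -> E + T | T
    val, toks = parse_T(toks)
    while toks and toks[0] == "+":
        rhs, toks = parse_T(toks[1:])
        val += rhs                 # E.val = E1.val + T.val
    return val, toks

def parse_T(toks):                 # T -> T * F | F
    val, toks = parse_F(toks)
    while toks and toks[0] == "*":
        rhs, toks = parse_F(toks[1:])
        val *= rhs                 # T.val = T1.val * F.val
    return val, toks

def parse_F(toks):                 # F -> INTLIT (single digit here)
    return int(toks[0]), toks[1:]  # F.val = INTLIT.lexval

val, _ = parse_E(list("2+3*4"))
print(val)  # 14: * sits lower in the tree, so 3*4 is computed first
```

Each function returns its node's val attribute, so values flow bottom-up exactly as in the parse tree for S.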
Advantages of Syntax Directed Translation:
• Ease of implementation: SDT is a simple and easy-to-implement method for translating a programming language. It provides
a clear and structured way to specify translation rules using grammar rules.
• Separation of concerns: SDT separates the translation process from the parsing process, making it easier to modify and
maintain the compiler. It also separates the translation concerns from the parsing concerns, allowing for more modular and
extensible compiler designs.
• Efficient code generation: SDT enables the generation of efficient code by optimizing the translation process. It allows for
the use of techniques such as intermediate code generation and code optimization.
_____________________________________________________________________________
Compiler:
A compiler is a program that translates source code written in a high-level programming language into machine code.
The compiled machine code is then stored in an object file, which can be linked with other object files to create an
executable file. The executable file can be run directly on the computer.
Interpreter:
An interpreter, on the other hand, is a program that translates source code written in a high-level programming language
into machine code line by line, as it is executed. No machine code is generated ahead of time and no separate
compilation step is required; each statement is translated just before it runs, so the source code is executed directly.
Key differences:
• Compilation vs. Interpretation: Compilers translate the entire source code into machine code ahead of time, while
interpreters translate the source code line by line as it is executed.
• Code generation: Compilers produce a persistent machine-code file; interpreters produce no such file, translating each
statement on the fly as it is executed.
• Code optimization: Compilers can optimize the generated machine code to improve performance, while interpreters do not
have the opportunity to optimize the code.
• Error handling: Compilers can detect and report errors at compile-time, while interpreters detect and report errors at
runtime.
• Execution speed: Compiled code is generally faster than interpreted code, as the machine code has already been generated
and optimized. Interpreted code, on the other hand, is slower, as the interpreter translates the source code into machine
code line by line.
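The ahead-of-time vs line-by-line difference can be illustrated with Python's own compile and exec, used here purely as an analogy for the two execution models:

```python
lines = ["x = 2", "y = x + 3"]
program = "\n".join(lines)

# Compiler-style: translate everything first (syntax errors surface
# here, before anything runs), then execute the finished code object.
code_obj = compile(program, "<src>", "exec")
ns_compiled = {}
exec(code_obj, ns_compiled)

# Interpreter-style: translate and execute one line at a time
# (an error in a later line would only surface mid-run).
ns_interp = {}
for line in lines:
    exec(line, ns_interp)

print(ns_compiled["y"], ns_interp["y"])  # 5 5
```

Both paths compute the same result, but a syntax error in the second line would be reported before execution in the compiled path, and only after the first line had already run in the interpreted path.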
_____________________________________________________________________________
27.