UNIT-I

Introduction to Computers:

Definition: A computer is an electronic device that can receive, store, process, and output data. It is a
machine that can perform a variety of tasks and operations, ranging from simple calculations to complex
simulations and artificial intelligence.
Computers consist of hardware components such as the central processing unit (CPU), memory, storage
devices, input/output devices, and peripherals, as well as software components such as the operating
system and applications.
The history of computers can be traced back to the 19th century when mechanical devices such as the
Analytical Engine and tabulating machines were developed. However, modern computers as we know
them today were developed in the mid-20th century with the invention of the transistor and the
development of integrated circuits.
History of Computers
Before computers were developed, people used sticks, stones, and bones as counting tools. As technology
advanced and human understanding improved with time, more computing devices were developed, such as the
Abacus and Napier’s Bones. These devices were used as computers for performing mathematical computations,
but not very complex ones.
Some of the popular computing devices are described below, from the oldest to the most advanced:
Abacus
Around 4000 years ago, the Chinese invented the Abacus, and it is believed to be the first computer. The
history of computers begins with the birth of the abacus.
Structure: The abacus is basically a wooden rack that has metal rods with beads mounted on them.
Working of the abacus: The beads are moved by the abacus operator according to certain rules to perform
arithmetic calculations. In some countries, such as China, Russia, and Japan, the abacus is still in use.
Napier’s Bones
Napier’s Bones was a manually operated calculating device and, as the name indicates, it was invented by
John Napier. In this device, he used 9 different ivory strips (bones) marked with numbers to multiply and
divide. It was also the first machine to use the decimal point system for calculation.
Pascaline
It is also called an Arithmetic Machine or Adding Machine. The French mathematician-philosopher Blaise
Pascal invented it between 1642 and 1644. It was the first mechanical and automatic calculator. Pascal
invented it to help his father, a tax accountant, with his calculations. It could perform addition and
subtraction quickly. It was basically a wooden box with a series of gears and wheels: when one wheel
completed a revolution, it rotated the neighbouring wheel, and a series of windows on top of the wheels
displayed the totals.
Stepped Reckoner or Leibniz wheel
In 1673, the German mathematician-philosopher Gottfried Wilhelm Leibniz improved on Pascal’s invention
to develop this machine. It was basically a digital mechanical calculator, called the stepped reckoner
because it used fluted drums instead of the gears used in the earlier Pascaline.

Difference Engine
Charles Babbage, who is also known as the “Father of the Modern Computer”, designed the Difference
Engine in the early 1820s. The Difference Engine was a mechanical computer capable of performing
simple calculations. It was a steam-driven calculating machine designed to compute tables of numbers,
such as logarithm tables.
Analytical Engine
In 1830, Charles Babbage developed another calculating machine, the Analytical Engine. The
Analytical Engine was a mechanical computer that used punch cards as input. It was capable of
solving any mathematical problem and storing information as permanent memory (storage).
Tabulating Machine
Herman Hollerith, an American statistician, invented this machine in 1890. The Tabulating Machine
was a mechanical tabulator based on punch cards, capable of tabulating statistics and recording and
sorting data. It was used in the 1890 U.S. Census. Hollerith started the Tabulating Machine Company,
which later became International Business Machines (IBM) in 1924.
Differential Analyzer
The Differential Analyzer, introduced in 1930 in the United States, was an early analog computer
invented by Vannevar Bush. The machine used vacuum tubes to switch electrical signals to perform
calculations, and it was capable of doing 25 calculations in a few minutes.
Mark I
In 1937, major changes began in the history of computers when Howard Aiken planned to develop a
machine that could perform calculations involving large numbers. In 1944, the Mark I computer was
built through a partnership between IBM and Harvard. It was also the first programmable digital
computer, marking a new era in the computer world.

Generations of Computers

First Generation Computers


The period 1940-1956 is referred to as the first generation of computers. These machines were slow,
huge, and expensive. In this generation, vacuum tubes were used as the basic components of the CPU
and memory. These computers were mainly dependent on batch operating systems and punch cards.
Magnetic tape and paper tape were used as output and input devices.
Examples: ENIAC, UNIVAC-1, EDVAC, etc.
Second Generation Computers
The period 1957-1963 is referred to as the second generation of computers. It was the time of the
transistor computers. In the second generation, transistors, which were cheap, compact, and consumed
less power, were used, making transistor computers faster than first-generation computers. Magnetic
cores were used for primary memory, and magnetic discs and tapes were used for secondary storage.
In second-generation computers, assembly languages and programming languages such as COBOL and
FORTRAN were used, along with batch processing and multiprogramming operating systems.
Examples: IBM 1620, IBM 7094, CDC 1604, CDC 3600, etc.
Third Generation Computers
In the third generation of computers, integrated circuits (ICs) were used instead of the transistors
of the second generation. A single IC contains many transistors, which increased the power of a computer
and also reduced the cost. Third generation computers were more reliable, efficient, and smaller in size.
They used remote processing, time-sharing, and multiprogramming operating systems. High-level
programming languages such as FORTRAN II to IV, COBOL, PASCAL, and PL/1 were used.
Examples: IBM-360 series, Honeywell-6000 series, IBM-370/168, etc.
Fourth Generation Computers
The period 1971-1980 was mainly the time of fourth generation computers, which used VLSI (Very Large
Scale Integration) circuits. A VLSI chip contains millions of transistors and other circuit elements,
and because of these chips, the computers of this generation are more compact, powerful, fast, and
affordable. Real-time, time-sharing, and distributed operating systems were used by these computers,
and C and C++ were used as the programming languages of this generation.
Examples: STAR 1000, PDP 11, CRAY-1, CRAY X-MP, etc.
Fifth Generation Computers
Fifth generation computers have been in use from 1980 to the present. ULSI (Ultra Large Scale
Integration) technology is used in fifth generation computers instead of the VLSI technology of the
fourth generation. Microprocessor chips with ten million electronic components are used in these
computers. Parallel processing hardware and AI (Artificial Intelligence) software are also used in
fifth generation computers. Programming languages such as C, C++, Java, and .NET are used.
Examples: Desktop, Laptop, Notebook, Ultrabook, etc.

Types of Computer

There are two bases on which we can define the types of computers: size and data handling capability.
Each type of computer is discussed in detail below. First, the types of computers are:
 Super Computer
 Mainframe computer
 Mini Computer
 Workstation Computer
 Personal Computer (PC)
 Server Computer
 Analog Computer
 Digital Computer
 Hybrid Computer
 Tablets and Smartphone
Supercomputer
When we talk about speed, the first name that comes to mind is the supercomputer. Supercomputers
are the biggest and fastest computers (in terms of data processing speed). They are designed to
process huge amounts of data: a supercomputer can process trillions of instructions in a second,
thanks to the thousands of interconnected processors it contains. Supercomputers are used in
scientific and engineering applications such as weather forecasting, scientific simulations, and
nuclear energy research. The first supercomputer, the Cray-1, was developed by Seymour Cray in 1976.

Characteristics of Supercomputers
 Supercomputers are the fastest computers, and they are also very expensive.
 They can perform up to ten trillion individual calculations per second, which is what makes
them so fast.
 They are used in the stock market and by big organizations for managing online currencies
such as Bitcoin.
 They are used in scientific research areas for analyzing data obtained from exploring the solar
system, satellites, etc.
Mainframe computer
Mainframe computers are designed in such a way that they can support hundreds or thousands of users at
the same time. It also supports multiple programs simultaneously. So, they can execute different
processes simultaneously. All these features make the mainframe computer ideal for big organizations
like banking, telecom sectors, etc., which process a high volume of data in general.
Characteristics of Mainframe Computers
 It is also an expensive or costly computer.
 It has high storage capacity and great performance.
 It can process a huge amount of data (like data involved in the banking sector) very quickly.
 It runs smoothly for a long time and has a long life.
Minicomputer
Minicomputer is a medium size multiprocessing computer. In this type of computer, there are two or
more processors, and it supports 4 to 200 users at one time. Minicomputer is similar to Microcontroller.
Minicomputers are used in places like institutes or departments for different work like billing,
accounting, inventory management, etc. It is smaller than a mainframe computer but larger in
comparison to the microcomputer.
Characteristics of Minicomputer
 Its weight is low.
 Because of its low weight, it is easy to carry anywhere.
 It is less expensive than a mainframe computer.
 It is fast.
Workstation Computer
A workstation computer is designed for technical or scientific applications. It consists of a fast
microprocessor, with a large amount of RAM and a high-speed graphic adapter. It is a single-user
computer. It is generally used to perform a specific task with great accuracy.
Characteristics of Workstation Computer
 It is expensive or high in cost.
 They are exclusively made for complex work purposes.
 It provides large storage capacity, better graphics, and a more powerful CPU when compared
to a PC.
 It is also used to handle animation, data analysis, CAD, audio and video creation, and editing.
Personal Computer (PC)
Personal Computers is also known as a microcomputer. It is basically a general-purpose computer
designed for individual use. It consists of a microprocessor as a central processing unit(CPU), memory,
input unit, and output unit. This kind of computer is suitable for personal work such as making an
assignment, watching a movie, or at the office for office work, etc. For example, Laptops and desktop
computers.
Characteristics of Personal Computer (PC)
 Only a limited number of programs can be used on it.
 It is the smallest in size.
 It is designed for personal use.
 It is easy to use.

Server Computer

Server computers store and share electronic data, applications, and programs. Rather than solving one
big problem like a supercomputer, a server solves many smaller, similar ones. Wikipedia’s servers are
an example: when a user requests a page, they find what the user is looking for and send it to the
user.
Analog Computer
Analog computers are designed particularly to process analog data. Data that changes continuously
and cannot take discrete values is called analog data. An analog computer is used where exact values
are not needed, only approximate ones, such as speed, temperature, and pressure. It can accept data
directly from a measuring device without first converting it into numbers and codes. It measures
continuous changes in a physical quantity and gives output as a reading on a dial or scale.
Examples: speedometer, mercury thermometer, etc.
Digital Computer
Digital computers are designed in such a way that they can easily perform calculations and logical
operations at high speed. It takes raw data as input and processes it with programs stored in its memory
to produce the final output. It only understands the binary input 0 and 1, so the raw input data is
converted to 0 and 1 by the computer and then it is processed by the computer to produce the result or
final output. All modern computers, like laptops, desktops including smartphones are digital computers.
Hybrid Computer
As the name suggests, a hybrid is made by combining two different things. A hybrid computer is a
combination of both analog and digital computers. Hybrid computers are fast like analog computers
and have memory and accuracy like digital computers, so they can process both continuous and
discrete data. When a hybrid computer accepts analog signals as input, it converts them into digital
form before processing. Hybrid computers are widely used in specialized applications where both
analog and digital data must be processed. A processor used in petrol pumps that converts the
measurements of fuel flow into quantity and price is an example of a hybrid computer.

Tablet and Smartphones

Tablets and smartphones are the types of computers that are pocket friendly and easy to carry, as
they are handy. They are one of the best uses of modern technology. These devices have better hardware
capabilities, extensive operating systems, and better multimedia functionality. Smartphones and
tablets contain a number of sensors and also support wireless communication protocols.


1. Supercomputers

The most efficient computers in terms of processing data and performance are supercomputers. These
computers are used for research and exploratory purposes. Supercomputers are exceedingly large and
highly expensive, and can only fit in large, air-conditioned spaces. Supercomputers are used for a
range of tasks, such as space exploration, research, and the testing of nuclear weapons.
Supercomputer Features:
 They make use of AI (Artificial Intelligence)
 They are fastest and strongest
 They are very costly
 They are large in size
 They process information at a rapid rate
2. Minicomputers

Minicomputers are used by small businesses and industries. They also go by the term “Midrange
Computers”. These minicomputers frequently have several users, just as mainframe computers do. They
are a bit slower than mainframe computers.
Features of minicomputers:
 It is smaller than mainframe or supercomputers in terms of size
 In comparison to a mainframe or supercomputer, it is less costly
 It is able to perform many jobs at once
 It may be utilized by several users simultaneously
 It is utilized by small businesses.
3. Hybrid Computers

Hybrid computers are a combination of both analog and digital computers. They accept both analog and
digital data for processing. Hybrid computers incorporate the measuring feature of an analog computer
and the counting feature of a digital computer. For computational purposes, these computers use analog
components, and for storage, digital memories are used.
Nowadays, analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) are used to
transform data into a suitable form.
In these computers, some calculations take place in an analog manner and the rest take place in a
digital manner. Hybrid computers are best used in hospitals, where the analog part is responsible for
measuring a patient’s heartbeat, blood pressure, temperature, and other vital signs, and the operation
is then carried out digitally to monitor the patient’s vital signs. Hybrid computers are also used in
weather forecasting.
Classification of Computers — Based on Purpose:

Computers are broadly classified into two types based on their purpose:

 General-purpose computer

 Specific-purpose computer
General Purpose Computer:
A general-purpose computer is built to do a variety of common tasks. Computers of this type can store
multiple programs. They can be used in the workplace, in science, in education, and even at home. Such
computers are adaptable, but they are also less efficient and slower.
Specific Purpose Computer: A specific-purpose computer is designed to execute a single specific task.
They are not made to manage several programs and are therefore not adaptable. Since they are made to
handle a specific task, they are more efficient and faster than general-purpose computers. These
computers are used for things like airline reservations, air traffic control, and satellite tracking.

Characteristics of Computer System


Let’s go over the characteristics of computers.

1. Speed

When executing mathematical calculations, a computer works faster and more accurately than a human.
Computers can process many millions of instructions per second; computer operations are performed in
microseconds and nanoseconds. A computer is a time-saving device: it performs in a few seconds
calculations and tasks that would take us hours to solve. The speed of a computer is measured in
megahertz (MHz) and gigahertz (GHz).

2. Diligence

A human cannot work for several hours without resting, yet a computer never tires. A computer can
conduct millions of calculations per second with complete precision without stopping. A computer can
consistently and accurately do millions of jobs or calculations. There is no weariness or lack of
concentration. Its memory ability also places it ahead of humans.

3. Reliability

A computer is reliable. The output never differs unless the input varies; the output depends entirely
on the input. When the input is the same, the output will also be the same. A computer produces
consistent results for the same set of data: if we provide the same input at any time, we will get
the same result.

4. Automation
The world is quickly moving toward AI (Artificial Intelligence)-based technology. A computer may
conduct tasks automatically after instructions are programmed. By executing jobs automatically, this
computer feature replaces thousands of workers. Automation in computing is often achieved by the use
of a program, a script, or batch processing.

5. Versatility

Versatility refers to the capacity of a computer to perform different types of tasks with the same
accuracy and efficiency. A computer can also perform multiple tasks at the same time. For example,
while listening to music, we may develop a project using PowerPoint and WordPad, or design a website.

6. Memory

A computer can store millions of records, and these records can be accessed with complete precision.
Computer storage capacity is measured in bytes, kilobytes (KB), megabytes (MB), gigabytes (GB), and
terabytes (TB). A computer has built-in memory known as primary memory.

7. Accuracy

When a computer performs a computation or operation, the chances of errors occurring are low. Errors
in a computer are generally caused by humans submitting incorrect data. A computer can do a variety
of operations and calculations quickly and accurately.

System logical organization

Block Diagram of a Computer

Input

All the data received by the computer goes through the input unit. The input unit comprises different
devices like a mouse, keyboard, scanner, etc. In other words, each of these devices acts as a mediator
between the users and the computer.
The data that is to be processed is put through the input unit. The computer accepts the raw data in binary
form. It then processes the data and produces the desired output.

The 3 major functions of the input unit are-

 Accept the data to be processed from the user.


 Convert the given data into machine-readable form.
 And then, transmit the converted data into the main memory of the computer. The sole purpose is to
connect the user and the computer. In addition, this creates easy communication between them.

CPU – Central Processing Unit

Central Processing Unit or the CPU, is the brain of the computer. It works the same way a human brain
works. As the brain controls all human activities, similarly the CPU controls all the tasks.

Moreover, the CPU conducts all the arithmetical and logical operations in the computer.

The CPU comprises two units, namely the ALU (Arithmetic Logic Unit) and the CU (Control Unit). Both
of these units work in sync, and the CPU processes the data as a whole.

Let us see what particular tasks are assigned to both units.

ALU – Arithmetic Logic Unit

The Arithmetic Logic Unit is made of two terms, arithmetic and logic. There are two primary functions that
this unit performs.

1. Data inserted through the input unit into primary memory is passed to the ALU, which performs the
basic arithmetical operations on it: addition, subtraction, multiplication, and division. It performs
all the calculations required on the data, then sends the data back to storage.
2. The unit is also responsible for performing logical operations like AND, OR, Equal to, Less than,
etc. In addition to this it conducts merging, sorting, and selection of the given data.

CU – Control Unit

The control unit, as the name suggests, is the controller of all the activities, tasks, and
operations performed inside the computer.

The memory unit sends a set of instructions to the control unit, which in turn converts those
instructions into control signals.

These control signals help in prioritizing and scheduling activities. Thus, the control unit coordinates the
tasks inside the computer in sync with the input and output units.
Memory Unit

All the data that has to be processed or has been processed is stored in the memory unit. The memory unit
acts as a hub of all the data. It transmits it to the required part of the computer whenever necessary.

The memory unit works in sync with the CPU. This helps in faster accessing and processing of the data.
Thus, making tasks easier and quicker.

There are two types of computer memory-

1. Primary memory – This type of memory cannot store a vast amount of data. Therefore, it is only
used to store recent data. The data stored in it is temporary; it can get erased once the power is
switched off. Therefore, it is also called temporary memory or main memory.

RAM stands for Random Access Memory. It is an example of primary memory. This memory is
directly accessible by the CPU. It is used for reading and writing purposes. For data to be
processed, it has to be first transferred to the RAM and then to the CPU.

2. Secondary memory – As explained above, the primary memory stores temporary data. Thus it
cannot be accessed in the future. For permanent storage purposes, secondary memory is used. It is
also called permanent memory or auxiliary memory. The hard disk is an example of secondary
memory. Even in a power failure data does not get erased easily.

Output

There is nothing to be amazed by what the output unit is used for. All the information sent to the computer
once processed is received by the user through the output unit. Devices like printers, monitors, projectors,
etc. all come under the output unit.

The output unit displays the data either in the form of a soft copy or a hard copy. The printer is for the hard
copy. The monitor is for the display. The output unit accepts the data in binary form from the computer. It
then converts it into a readable form for the user.

SOFTWARE

 Introduction
 A computer system has three components viz.
o Hardware
o Software
o User
 Hardware: It consists of the physical components of a computer.
 Software: A set of instructions that tells the computer to perform an intended task.
 Types of Software
 Software is broadly classified into two categories namely,
o System Software
o Application Software

 System Software
 System software is a computer program that controls the system hardware and interacts
with application software.
 System software is hardware dependent and not portable.
 System software provides a convenient environment for program development and execution.
 Programming languages like assembly language/C/C++/Visual C++/Pascal are used to develop the
system software.
 System software is of three types:
o Language Translators
o Operating System
o Utilities Software

 Application Software
 Application software is written to perform a specific job.
 Application software is generally written in high-level languages.
 Its focus is on the application, not the computing system.
 Application software is classified into two types:
o Application Specific
o General Purpose
 Application specific software is created to execute an exact task.
 It has a limited task. For example accounting software for maintaining accounts.
 General purpose software is not limited to only one function.
 For example: Microsoft Office (MS-Word, MS-Excel), Tally, Oracle, etc.

Utility Software
 Utilities are helpful programs that assist the computer by performing functions such as backing
up the disk and scanning/cleaning viruses.
 Utility software is generally called application-oriented, ready-made system programs.
 Some important utility software: backup utilities, Disk Defragmenter, and antivirus
software.
What is utility software?
Utility software analyzes and maintains a computer. It is focused on how the OS works, and on that
basis it performs tasks that enable the smooth functioning of the computer.

Some of the popular utility software are described below


Antivirus, backup software, file managers, and disk compression tools are all utility software.

Antivirus: It is used to protect a computer from viruses. It detects a virus, notifies the user, and
takes action to secure the computer.
Examples: Windows Defender, AVG, Avast, McAfee, etc.

File Management Tool:

This software is used to manage files stored in a file system. It can be used to create and group files.

Compression Tool:
These tools are used to reduce the size of a file based on the selected algorithm.
Examples: WinZip, WinRAR.

Disk Management Tool: It enables us to view and manage the disk drives installed in a computer and
the partitions associated with those drives.

Disk Cleanup Tool:

It is a computer maintenance utility included in Microsoft Windows. It allows the user to remove
files that are no longer needed or that can be safely deleted. Removing unnecessary files, including
temporary files, can help improve performance and increase the free space of the computer.

Disk Defragmenter:
It is a utility in Microsoft Windows designed to increase access speed by rearranging files stored on
a disk to occupy contiguous storage locations, a technique called defragmentation.

Introduction to Computer Languages


o A programming language is a set of rules, called syntax,
which the user has to follow to instruct the computer what operations are to be performed.
 Computer languages are classified into two categories:
o Low-Level Languages
 Machine level languages
 Assembly languages
o High-Level Languages
 General purpose languages (Ex: BASIC, PASCAL, C)
 Specific purpose languages (Ex: COBOL, FORTRAN, C++)
 Machine Level Language
 Machine level language is the fundamental language of a computer.
 It is written using binary numbers i.e. 0’s and 1’s.
 A program written in the machine level language is called Machine code.
 The instructions provided in machine language are directly understood by the computer and
converted into electrical signals to run the computer.
 For example a typical program in machine language to add two numbers:

STATEMENTS ACTION
0001 00110010 Load the data
0100 10100101 Add the contents
1000 00101001 Store the results
0000 00000000 Stop

 An instruction given in machine language has two parts:

o OPCODE (Operation Code)
o Operand (Address/Location)
 The first 4 bits represent the opcode, denoting an operation such as load, move, or store.
 The last 8 bits represent the operand, denoting the address.
 Advantages: It can be directly typed and executed, and no compilation or translation is required.
 Disadvantages: The instructions are machine dependent, and it is difficult to program, modify, and
debug errors.

 Assembly Level Language:


 Assembly level language is a low-level programming language that allows a user to write programs
using letters, words and symbols called mnemonics, instead of the binary digits used in machine level
languages.
 A program written in the assembly level language is called Assembly code.
 For example a typical program in assembly language to add two numbers:
STATEMENTS ACTION
LDA A Load the data into the accumulator
ADD B Add the contents of B to the accumulator
STR C Store the result in location C
PRT C Print the result
HLT Stop

 However, a program in assembly language has to be converted to its equivalent machine language to
be executed on a computer.
 The translator program that converts assembly code into machine code is called an assembler.
 Advantages: Mnemonic codes are easy to remember, understand, modify, and
debug.
 Disadvantages: The mnemonics are machine dependent, and assembly language
programming takes longer to code.

 High-level Languages

 A language designed to make programming easier through the use of familiar English words and
symbols.
 High-level languages use English-like statements, which are easier to learn and use.
 High-level languages are machine independent. Therefore, a program written for one computer can be
executed on different computers with no or only slight modifications.
 Some of the high-level languages are C, C++, JAVA, FORTRAN, QBASIC, and PASCAL.
 For example, a typical fragment in a high-level language to add two numbers (a complete C version
is sketched after this list):
cin >> a >> b;
c = a + b;
cout << "Answer = " << c;
 However, a program in a high-level language has to be converted to its equivalent machine language
to be executed on a computer.
 The translator program that converts high-level code into machine code is called a compiler.
 Advantage:
o HLL’s are machine independent.
o Easy to learn and understand.
o Easy to modify and debug the program.
 Disadvantage:
o HLL is slower in execution.
o HLL requires a compiler to convert source code to object code.
o HLL take more time to execute and require more memory.
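As a point of comparison, here is a minimal sketch of the same add-two-numbers example as a complete C program (the fragment above uses C++-style I/O; this version uses C's scanf and printf):

#include <stdio.h>

int main(void)
{
    int a, b, c;
    scanf("%d %d", &a, &b);        /* read the two numbers */
    c = a + b;                     /* add them */
    printf("Answer = %d\n", c);    /* print the result */
    return 0;
}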

Language Translators
 The translator translates the high-level language to low level language.
 There are three types of translators: Compilers, Interpreters, and Assemblers.


 Assembler:
 Assembler is system software which translates an assembly language program into its machine
language.

 It recognizes the mnemonics used in the assembly level languages and substitutes the
required machine code for each instruction.
 Example: TASM (Turbo Assembler), MASM (Microsoft Macro Assembler) etc.

 Compilers:
 Compiler is system software that translates high-level language (source code) into the machine
level language (machine/object code).
 It reads the whole program and translates the entire program at once into a series of machine level
language instructions.
 Once compiled, the program normally gets saved automatically and can be executed directly.
 Examples: C, C++.
 Interpreters:
 An Interpreter reads once a statement of a high-level language program at a time and translates it into
machine level language and executes it immediately.
 It continues to read, translate and execute the statements one by one until it reaches the end of the
program.
 Therefore, it is slower than a compiler.
 The machine code produced by the interpreter is not saved and hence, to execute a statement again, it
has to be interpreted again.
 Example: BASIC, PROLOG, Python.

 Linker and Loader:


 A source program written in a high-level language may contain a number of modules or segments. To
execute properly, the modules have to be linked so that execution of the program is sequential.
 This operation is performed by software called the linker.

 A linker is system software that links (combines) smaller programs to form a single program.

 Once an executable program is generated, it has to be loaded into the main
memory of the computer so that it can be executed.
 This operation is performed by software called the loader.
 A loader is system software that loads machine code of a program into the system memory and
prepares these programs for execution.

NUMBER SYSTEM
 Decimal Number System
 It is the most widely used number system.
 The decimal number system consists of 10 digits from 0 to 9.
 Its base or radix is 10.
 These digits can be used to represent any numeric value.
 Examples: 123(10), 456(10), 7890(10).

Consider the decimal number 542.76(10), which can be represented as:

5 × 10² + 4 × 10¹ + 2 × 10⁰ + 7 × 10⁻¹ + 6 × 10⁻²

Place Value Hundreds Tens Units Tenths (1/10) Hundredths (1/100)
Weights 10² 10¹ 10⁰ 10⁻¹ 10⁻²
Digits 5 4 2 7 6
Values 500 40 2 0.7 0.06

So, 542.76(10) = 500 + 40 + 2 + 0.7 + 0.06

 Binary Number System


 Digital computers represent all kinds of data and information in the binary system.

 The binary number system consists of two digits: 0 (low voltage) and 1 (high voltage).

 Its base or radix is 2.

 Each digit (called a bit) in the binary number system can be either 0 or 1.

 The positional values are expressed in powers of 2.

 Examples: 1011(2), 111(2), 100001(2).

Consider the binary number 11011.10(2), which can be represented as:

1 × 2⁴ + 1 × 2³ + 0 × 2² + 1 × 2¹ + 1 × 2⁰ + 1 × 2⁻¹ + 0 × 2⁻²

Place Value 2⁴ 2³ 2² 2¹ 2⁰ 2⁻¹ 2⁻²
Digits 1 1 0 1 1 1 0
Values 16 8 0 2 1 0.5 0

So, 11011.10(2) = 16 + 8 + 0 + 2 + 1 + 0.5 = 27.5

Important Notes

In the binary number 11010(2):

 The leftmost bit (1) is the Most Significant Bit (MSB).

 The rightmost bit (0) is the Least Significant Bit (LSB).

 Octal Number System

 The octal number system has digits starting from 0 to 7.

 The base or radix of this system is 8.
 The positional values are expressed in powers of 8.
 Any digit in this system is always less than 8.
 Examples: 123(8), 236(8), 564(8)
 The number 6418 is not a valid octal number because 8 is not a valid digit.
 Consider the octal number 234.56(8), which can be represented in equivalent value as:
2 × 8² + 3 × 8¹ + 4 × 8⁰ + 5 × 8⁻¹ + 6 × 8⁻²

Weights 8² 8¹ 8⁰ 8⁻¹ 8⁻²
Digits 2 3 4 5 6
Values 128 24 4 0.625 0.09375

234.56(8) = 128 + 24 + 4 + 0.625 + 0.09375 = 156.71875

 Hexadecimal Number System

 The hexadecimal number system consists of 16 digits from 0 to 9 and A to F.


 The letters A to F represent decimal numbers from 10 to 15.
 That is, ‘A’ represents 10, ‘B’ represents 11, ‘C’ represents 12, ‘D’ represents 13, ‘E’ represents
14 and ‘F’ represents 15.
 The base or radix of this number system is 16.
 Example: A4 (16), 1AB (16), 934(16), C (16)
 Consider the hexadecimal number 5AF.D(16), which can be represented in equivalent value as:
5 × 16² + A × 16¹ + F × 16⁰ + D × 16⁻¹
Weights 16² 16¹ 16⁰ 16⁻¹
Digits 5 A(10) F(15) D(13)
Values 1280 160 15 0.8125

5AF.D(16) = 1280 + 160 + 15 + 0.8125 = 1455.8125

Number System Base Symbols used
Binary 2 0, 1
Octal 8 0, 1, 2, 3, 4, 5, 6, 7
Decimal 10 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Hexadecimal 16 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
(where A=10; B=11; C=12; D=13; E=14; F=15)
 Number System Conversions

Conversion from Decimal to Binary:

1. Steps to convert decimal number to binary number:

 Step 1: Divide the given decimal number by 2.

 Step 2: Take the remainder and record it on the right side.
 Step 3: Repeat Step 1 and Step 2 until the quotient becomes 0.
 Step 4: The first remainder will be the LSB and the last remainder
is the MSB. The equivalent binary number is then written from left
to right, i.e. from MSB to LSB.

Example 1: To convert the decimal number 87(10) to binary.

 So 87 decimal is written as 1010111 in binary.


 It can be written as 87(10)= 1010111(2)
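The division method above translates directly into C. Below is a minimal sketch (the variable names are illustrative): it collects the remainders of repeated division by 2, then prints them in reverse so the last remainder comes out first as the MSB.

#include <stdio.h>

int main(void)
{
    int n = 87;              /* decimal number to convert */
    int bits[32], i = 0;

    while (n > 0) {
        bits[i++] = n % 2;   /* record the remainder (first remainder = LSB) */
        n = n / 2;
    }
    while (i > 0)
        printf("%d", bits[--i]);   /* print remainders in reverse: MSB first */
    printf("\n");                  /* prints 1010111 */
    return 0;
}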

2. Steps to convert decimal fraction number to binary number:

 Step 1: Multiply the given decimal fraction by 2.

 Step 2: Note the carry and the product.

 Step 3: Repeat Step 1 and Step 2 until the fractional part becomes 0 (or the desired precision is reached).
 Step 4: The first carry will be the MSB and the last carry is the LSB. The equivalent

binary fraction is written from MSB to LSB.

Example 1: To convert the decimal number 0.3125(10) to binary.


Multiply by 2 Carry Product
0.3125 × 2 0 (MSB) 0.625
0.625 × 2 1 0.25
0.25 × 2 0 0.50
0.50 × 2 1 (LSB) 0.00

 Therefore, 0.3125(10) = 0.0101(2)
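The multiplication method can be sketched in C as follows (a hedged illustration, capped at 8 fraction bits to avoid looping forever on fractions with no finite binary form):

#include <stdio.h>

int main(void)
{
    double f = 0.3125;        /* decimal fraction to convert */
    printf("0.");
    for (int i = 0; i < 8 && f > 0.0; i++) {
        f *= 2.0;             /* Step 1: multiply by 2 */
        int carry = (int)f;   /* Step 2: the carry digit (0 or 1) */
        printf("%d", carry);
        f -= carry;           /* keep only the fractional part */
    }
    printf("\n");             /* prints 0.0101 */
    return 0;
}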

Conversion from Binary to Decimal:


1. Steps to convert binary number to decimal number

 Step 1: Start at the rightmost bit.

 Step 2: Take that bit and multiply it by 2ⁿ, where n is the current position, beginning at
0 and increasing by 1 each time. This represents a power of two.
 Step 3: Then, add all the products.
 Step 4: After addition, the result is the decimal value of the binary number.

Example 1: To convert the binary number 1010111(2) to decimal.

 Therefore, 1010111(2) = 87(10)

Example 2: To convert the binary number 11011.101(2) to decimal.

= 1 × 2⁴ + 1 × 2³ + 0 × 2² + 1 × 2¹ + 1 × 2⁰ + 1 × 2⁻¹ + 0 × 2⁻² + 1 × 2⁻³

= 1 × 16 + 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1 + 1 × 0.5 + 0 × 0.25 + 1 × 0.125
= 16 + 8 + 2 + 1 + 0.5 + 0.125
= 27.625(10)
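The same positional method is easy to express in C. A minimal sketch that walks a binary string from the MSB, doubling the accumulated value and adding each bit:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *bin = "1010111";              /* binary digits, MSB first */
    int value = 0;
    for (size_t i = 0; i < strlen(bin); i++)
        value = value * 2 + (bin[i] - '0');   /* shift left, add next bit */
    printf("%d\n", value);                    /* prints 87 */
    return 0;
}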

Conversion from Decimal to Octal

1. Steps to convert decimal number to octal number

 Step 1: Divide the given decimal number by 8.

 Step 2: Take the remainder and record it on the side.

 Step 3: Repeat Step 1 and Step 2 until the quotient becomes 0.

 Step 4: The first remainder will be the LSB and the last remainder is the MSB. The equivalent
octal number is then written from left to right, i.e. from MSB to LSB.

Example 1: To convert the decimal number 3034(10) to octal number.

 So 3034 decimal is written as 5732 in octal.


 It can be written as 3034(10) = 5732(8)

 Note: If the number is less than 8, the octal number is the same as the decimal number.

Example 2: To convert the decimal number 0.3125(10) to octal number.


 Therefore, 0.3125(10) = 0.24(8)

Conversion from Octal to Decimal

1. Steps to convert octal number to decimal number

 Step 1: Start at the rightmost digit.

 Step 2: Take that digit and multiply it by 8ⁿ, where n is the current position, beginning at
0 and increasing by 1 each time. This represents a power of 8.
 Step 3: Then, add all the products.
 Step 4: After addition, the result is the decimal value of the octal number.

Example 1: To convert the octal or base-8 number 5732(8) to decimal

 Therefore, 5732(8) = 3034(10)

Example 2: To convert the octal number 234.56(8) to decimal number.

= 2 × 8² + 3 × 8¹ + 4 × 8⁰ + 5 × 8⁻¹ + 6 × 8⁻²

= 2 × 64 + 3 × 8 + 4 × 1 + 5 × 0.125 + 6 × 0.015625
= 128 + 24 + 4 + 0.625 + 0.09375
= 156.71875(10)

Conversion from Decimal to Hexadecimal

1. Steps to convert decimal number to hexadecimal number

 Step 1: Divide the decimal number by 16.

 Step 2: Take the remainder and record it on the side.
 Step 3: Repeat Step 1 and Step 2 until the quotient becomes 0.
 Step 4: The first remainder will be the LSB and the last remainder is the MSB. The equivalent
hexadecimal number is then written from left to right, i.e. from MSB to LSB.

Example: To convert the decimal number 16242(10) to hexadecimal.

 So 16242 decimal is written as 3F72 in hexadecimal.


 It can be written as 16242(10) = 3F72 (16)
 Note: If the number is less than 16, the hexadecimal number is the same as the decimal number.

Conversion from Hexadecimal to Decimal


1. Steps to convert a hexadecimal number to a decimal number
 Step 1: Start at the rightmost digit.
 Step 2: Take that digit and multiply it by 16ⁿ, where n is the current position, beginning at
0 and increasing by 1 each time. This represents a power of 16.
 Step 3: Then, add all the products.
 Step 4: After addition, the result is the decimal value of the hexadecimal number.
Example 1: To convert the Hexadecimal or base-16 number 3F72 to a decimal number.

Therefore, 3F72(16)= 16242(10)

Example 2: To convert the hexadecimal number 5AF.D(16) to decimal number.

= 5 × 16² + 10 × 16¹ + 15 × 16⁰ + 13 × 16⁻¹

= 5 × 256 + 10 × 16 + 15 × 1 + 13 × 0.0625
= 1280 + 160 + 15 + 0.8125
= 1455.8125(10)
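In C, the standard library can perform this conversion directly: strtol() parses a string in any base from 2 to 36. A minimal sketch verifying the example above:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long v = strtol("3F72", NULL, 16);   /* parse the digits as base 16 */
    printf("%ld\n", v);                  /* prints 16242 */
    return 0;
}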

Conversion from Binary to Octal

Steps to convert Binary to octal

 Take the binary number in groups of 3 bits and use the appropriate octal digit in its place.

 Begin at the rightmost 3 bits. If we are not able to form a group of three, insert 0s to the
left until we get groups of 3 bits each.

 Write the octal equivalent of each group. Repeat the steps until all groups have been converted.
Example 1: Consider the binary number 1010111(2)

001 010 111

1 2 7
Therefore, 1010111(2) = 127 (8)

Example 2: Consider the binary number 0.110111(2)

000 110 111

0 6 7
Therefore, 0.110111 (2) = 0.67 (8)

Example 3: Consider the binary number 1101.10111(2)

001 101 101 110

1 5 5 6
Therefore, 1101.10111(2) = 15.56 (8)

Note: To make groups of 3 bits, for whole numbers it may be necessary to add 0s to the left of the
MSB, and when representing fractions, it may be necessary to add 0s to the right of the LSB.
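Since C's printf supports octal and hexadecimal output directly, the grouping rule can be checked quickly. A small sketch using the number from Example 1:

#include <stdio.h>

int main(void)
{
    int n = 87;            /* 1010111 in binary */
    printf("%o\n", n);     /* prints 127, the octal form from Example 1 */
    printf("%x\n", n);     /* prints 57, the hexadecimal form */
    return 0;
}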

Conversion from Octal to Binary

Steps to convert octal to binary

 Step 1: Take each digit of the octal number.

 Step 2: Convert each digit to a 3-bit binary number. (Each octal digit is represented by a
three-bit binary number, as shown in the table below.)

Octal digit 0 1 2 3 4 5 6 7
Binary Equivalent 000 001 010 011 100 101 110 111

Example 1: Consider the octal number 456(8) into binary


4  100
5  101
6  110
Therefore, 456(8) = 100101110 (2)

Example 2: Consider the octal number 73.16(8) into binary

7  111
3  011
1  001
6  110
Therefore, 73.16(8) = 111011.001110 (2)
Conversion from Binary to Hexadecimal

Steps to convert Binary to Hexadecimal

 Take the binary number in groups of 4 bits and use the appropriate hexadecimal digit in its place.

 Begin at the rightmost 4 bits. If we are not able to form a group of four, insert 0s to the
left until we get groups of 4 bits each.

 Write the hexadecimal equivalent of each group. Repeat the steps until all groups have
been converted.

Example 1: Consider the binary number 1011001(2)


0101 1001

5 9
Therefore, 1011001 (2) = 59 (16)

Example 2: Consider the binary number 0.11010111(2)

0 1101 0111

0 D 7
Therefore, 0.11010111 (2) = 0.D7 (16)

Conversion from Hexadecimal to Binary


Steps to convert hexadecimal to binary
 Step 1: Take each digit of the hexadecimal number.
 Step 2: Convert each digit to a 4-bit binary number. (Each hexadecimal digit is represented by
a four-bit binary number, as shown in the Numbering System Table.)

Example: Consider the hexadecimal number CEBA (16)

Therefore, CEBA (16) = 1100 1110 1011 1010 (2)

Conversion from Octal to Hexadecimal


Steps to convert Octal to Hexadecimal
Using the binary system, we can easily convert octal numbers to hexadecimal numbers and vice versa.

 Step 1: Write the binary equivalent of each octal digit.

 Step 2: Regroup them into 4 bits from the right side, with zeros added if necessary.
 Step 3: Convert each group into its equivalent hexadecimal digit.

Example: Consider the octal number 274 (8)

2  010
7  111
4  100
Therefore, 274 (8) = 010 111 100 (2)
Group the bits into groups of 4 bits from the right: 0000 1011 1100

0000 1011 1100

0 B C

Therefore, 274 (8) = 0BC(16) = BC (16)

Conversion from Hexadecimal to Octal


Steps to convert Hexadecimal to Octal

 Step 1: Write the binary equivalent of each hexadecimal digit.

 Step 2: Regroup them into 3 bits from the right side, with zeros added if necessary.
 Step 3: Convert each group into its equivalent octal digit.

Example: Consider the hexadecimal number FADE (16)

F  1111
A  1010
D  1101
E  1110
Therefore, FADE (16) = 1111 1010 1101 1110 (2)
Group the bits into groups of 3 bits from the LSB: 001 111 101 011 011 110

001 111 101 011 011 110

1 7 5 3 3 6

Therefore, FADE (16)= 175336 (8)

1’s and 2’s complement of a Binary Number


The 1’s complement of a binary number is another binary number obtained by toggling all
bits in it, i.e., transforming each 0 bit to 1 and each 1 bit to 0. In the 1’s complement format,
positive numbers remain unchanged. Negative numbers are obtained by taking the 1’s complement of
their positive counterparts.
For example, +9 is represented as 00001001 in eight-bit notation, and -9 is represented as
11110110, which is the 1’s complement of 00001001.
Examples:
1's complement of "0111" is "1000"
1's complement of "1100" is "0011"
The 2’s complement of a binary number is obtained by adding 1 to the 1’s complement of the
number. In the 2’s complement representation of binary numbers, the MSB represents the sign,
with a ‘0’ used for a plus sign and a ‘1’ used for a minus sign; the remaining bits represent
the magnitude. Positive magnitudes are represented in the same way as in the sign-bit or 1’s
complement representation. Negative magnitudes are represented by the 2’s complement of their
positive counterparts.
Examples:
2's complement of "0111" is "1001"
2's complement of "1100" is "0100"
Another trick for finding the two’s complement:
Step 1: Start from the least significant bit and traverse left until you find a 1. Until
you find that 1, the bits stay the same.
Step 2: Leave that first 1 as it is.
Step 3: Flip all the bits to the left of that 1.
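In C, the bitwise NOT operator (~) produces the 1's complement, and adding 1 gives the 2's complement. A minimal sketch using the eight-bit +9/-9 example from above:

#include <stdio.h>

int main(void)
{
    unsigned char x = 0x09;              /* +9 = 00001001 */
    unsigned char ones = ~x;             /* 1's complement: 11110110 (0xF6) */
    unsigned char twos = ~x + 1;         /* 2's complement: 11110111 (0xF7), i.e. -9 */
    printf("%02X %02X\n", ones, twos);   /* prints F6 F7 */
    return 0;
}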
Computer Codes

Binary-coded decimal

A binary-coded decimal (BCD) is a type of binary representation for decimal values where each digit is
represented by a fixed number of binary bits.

Representing larger digits and numbers with binary-coded decimal

In the decimal system, all numbers larger than 9 have two or more digits. In the binary-coded decimal system,
these numbers are expressed digit by digit.

Example 1

Decimal number = 1764

The binary-coded decimal rendition is represented as the following:

1 7 6 4

0001 0111 0110 0100

Example 2

Decimal number = 238

The binary-coded decimal rendition is represented as the following:

2 3 8
0010 0011 1000

The binary-coded decimal representation of a number is not the same as its simple binary representation. For
example, in binary form, the decimal quantity 1895 appears as 11101100111. In binary-coded decimal, it
appears as 0001100010010101.
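Packing decimal digits into 4-bit groups is straightforward in C. A hedged sketch: each digit of 1764 is placed in its own nibble, so printing the packed value in hexadecimal shows the original decimal digits, which is exactly the BCD property illustrated in Example 1.

#include <stdio.h>

int main(void)
{
    int n = 1764;
    unsigned bcd = 0;
    int shift = 0;
    while (n > 0) {
        bcd |= (unsigned)(n % 10) << shift;   /* pack each decimal digit into 4 bits */
        n /= 10;
        shift += 4;
    }
    printf("%X\n", bcd);   /* prints 1764: the nibbles read as the decimal digits */
    return 0;
}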

Gray Code

A binary numbering system in which two successive values differ by only one bit is called Gray code,
often referred to as reflected binary code or unit distance code. Frank Gray patented it in 1953, and
today it is a common tool for error detection and correction in digital communication and data storage
systems. The Gray code is a sequencing of the binary numeral system in which two successive values
differ in only one binary digit.

What is Gray Code?


The binary numeral system is reordered in the reflected binary code, also known as the Gray code, so
that two subsequent values differ in only one bit (binary digit). Gray codes are highly helpful where
the typical sequence of binary numbers produced by the hardware could produce an error or ambiguity
during the change from one number to the next.
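The usual binary-to-Gray conversion is a single expression, g = b ^ (b >> 1). A minimal sketch printing the 3-bit Gray sequence, where each row differs from the previous one in exactly one bit:

#include <stdio.h>

int main(void)
{
    for (unsigned b = 0; b < 8; b++) {
        unsigned g = b ^ (b >> 1);   /* binary-to-Gray conversion */
        printf("%u -> %u%u%u\n", b, (g >> 2) & 1, (g >> 1) & 1, g & 1);
    }
    return 0;   /* Gray column reads: 000 001 011 010 110 111 101 100 */
}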

ASCII Code:
American Standard Code for Information Interchange (ASCII) is a character encoding standard that assigns a
unique numerical value to all characters including special symbols. In C programming, the ASCII value of the
character is stored instead of the character itself. For example, the ASCII value of 'A' is 65.
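Because C stores a character as its ASCII value, printing the same variable with %c and %d shows both forms. A minimal sketch:

#include <stdio.h>

int main(void)
{
    char ch = 'A';
    printf("%c has ASCII value %d\n", ch, ch);   /* prints: A has ASCII value 65 */
    return 0;
}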
Unicode
The Unicode Standard provides a unique number for every character, no matter what platform, device,
application or language. It has been adopted by all modern software providers and now allows data to be
transported through many different platforms, devices and applications without corruption. Support of Unicode
forms the foundation for the representation of languages and symbols in all major operating systems, search
engines, browsers, laptops, and smart phones—plus the Internet and World Wide Web (URLs, HTML, XML,
CSS, JSON, etc.).

ASCII is a character encoding system that only includes up to 256 characters, primarily composed of English
letters, numbers, and symbols. It uses up to eight bits to represent each character. In contrast, Unicode is a much
larger encoding standard that includes over 149,000 characters. It can represent nearly every modern script and
language in the world.
Additionally, Unicode supports special characters such as mathematical symbols, musical notation, and emojis,
which are not included in ASCII. Most importantly, the number of supported characters can be expanded in the
future, as required.
Another difference between Unicode and ASCII is how they are encoded. ASCII uses a fixed-length encoding
mechanism, with each character represented using seven or eight bits. In contrast, Unicode uses variable-length
encoding, meaning each character can be represented using one or more bytes. This enables Unicode to support
a much larger character set while maintaining backward compatibility with ASCII.

History of C

The root of all modern languages is ALGOL, introduced in the early 1960s. ALGOL was the first computer
language to use a block structure. Martin Richards developed a high-level computer language called BCPL
in 1967. The intention was to develop a language for writing an operating system (OS); as you know, an
OS is software which controls the various processes in a computer system. This language was later
improved by Ken Thompson, and he gave it a new name, B. The basic ideas about some topics, such as
arrays, which were later inherited by C, were developed in BCPL and B. C is a general-purpose computer
programming language developed in 1972 by Dennis Ritchie at Bell Laboratories for use with the UNIX
operating system. It was named ‘C’ because many of its features were derived from the earlier language
‘B’.
Many of the ideas of the structure of the C language were taken from BCPL and B. Ritchie has given an
excellent exposition of the problems experienced during the development of C in his lecture entitled
“The Development of the C Language”.
Although C was designed for implementing system software, it is also widely used for developing
portable application software.
C became popular because of the success of the UNIX system, which was largely written in C and which
could be used on different types of computers. C is one of the most popular programming languages of
all time, and there are very few computer architectures for which a C compiler does not exist. C has
greatly influenced many other popular programming languages, most notably C++, which began as an
extension to C.
In 1978, Brian Kernighan and Ritchie authored a book entitled “The C Programming Language”, which gave
the basic framework of C and remained a reference book for many years. However, in the few years
following the publication of the book, the language in actual use developed much beyond the book, and
it was felt that it needed formal standardization. In 1983, ANSI established a committee called X3J11
for the purpose; the resulting standard, ratified in 1989, is known as ANSI C. A revised standard was
accepted by ISO in 1999, with new keywords and header files included in the language. The language
according to this standard is known as C99.
Features of C Language

C is a widely used language. It provides many features, which are given below.

1. Procedural Language
2. Simple
3. Machine Independent or Portable
4. Mid-level programming language
5. structured programming language
6. Rich Library
7. Memory Management
8. Fast Speed
9. Pointers
10. Recursion
11. Extensible

1) Procedural Language
In a procedural language like C, predefined instructions are carried out step by step.
2) Simple
C is a simple language in the sense that it provides a structured approach (to break the problem into
parts), a rich set of library functions, data types, etc.

3) Machine Independent or Portable

Unlike assembly language, C programs can be executed on different machines with some machine-specific
changes. Therefore, C is a machine independent language.

4) Mid-level programming language


Although C is intended for low-level programming (it is used to develop system applications such as
kernels and drivers), it also supports the features of a high-level language. That is why it is known
as a mid-level language.

5) Structured programming language

C is a structured programming language in the sense that we can break the program into parts using
functions. So, it is easy to understand and modify. Functions also provide code reusability.

6) Rich Library

C provides a lot of inbuilt functions that make the development fast.

7) Memory Management

It supports dynamic memory allocation. In C, we can free allocated memory at any time by calling the
free() function.
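A minimal sketch of dynamic allocation with malloc() and free() (the array size here is arbitrary):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *p = malloc(5 * sizeof *p);   /* allocate space for 5 ints at run time */
    if (p == NULL)
        return 1;                     /* allocation failed */
    for (int i = 0; i < 5; i++)
        p[i] = i * i;
    printf("%d\n", p[4]);             /* prints 16 */
    free(p);                          /* release the memory when done */
    return 0;
}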

8) Speed

The compilation and execution time of C is fast, since there are fewer inbuilt functions and hence
less overhead.

9) Pointer

C provides the feature of pointers. We can interact directly with memory by using pointers. We can use
pointers for memory, structures, functions, arrays, etc.

10) Recursion

In C, we can call a function within itself. This provides code reusability. Recursion also enables us
to use the approach of backtracking.
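A minimal sketch of recursion in C, using the classic factorial definition:

#include <stdio.h>

long factorial(int n)
{
    if (n <= 1)
        return 1;                    /* base case stops the recursion */
    return n * factorial(n - 1);     /* the function calls itself */
}

int main(void)
{
    printf("%ld\n", factorial(5));   /* prints 120 */
    return 0;
}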

11) Extensible

C language is extensible because it can easily adopt new features.

Problem Solving Technique:


Sometimes it is not sufficient just to cope with problems; we have to solve them. Most people are
involved in solving problems, which arise while performing small tasks or making small decisions.
Here are the basic steps to solve a problem:

Step 1: Identify and Define Problem

Explain your problem as clearly as you can.

Step 2: Generate Possible Solutions

 List all the solutions you can find. Don’t focus on the quality of the solutions.

 Generate the maximum number of solutions you can, without considering their quality.

Step 3: Evaluate Alternatives

After generating the maximum number of solutions, remove the undesired ones.

Step 4: Decide a Solution

After filtering all the solutions, you are left with only the best ones. Then choose one of them and
make a decision to treat it as the final solution.

Step 5: Implement a Solution:

After choosing the best solution, implement it to solve the problem.

Step 6: Evaluate the result

After implementing the best solution, evaluate how well your solution solved the problem. If your
solution does not solve the problem, you can start again from Step 2.

ALGORITHM

Definition:

An algorithm is a well-defined sequential computational technique that accepts a value or a


collection of values as input and produces the output(s) needed to solve a problem.

Characteristics of an Algorithm
There are some characteristics which every algorithm should follow. There are six different
characteristics which deal with various aspects of the algorithm. They are as follows:
1. Input specified
2. Output specified
3. Definiteness
4. Effectiveness
5. Finiteness
6. Independent

Let us see these characteristics one by one.

1) Input specified
The input is the data to be transformed during the computation to produce the output. An
algorithm should have 0 or more well-defined inputs. Input precision requires that you
know what kind of data is needed, how much, and in what form.
2) Output specified
The output is the data resulting from the computation (your intended result). An algorithm
should have 1 or more well-defined outputs that match the desired result. Output
precision likewise requires that you know what kind of data is produced, how much, and in
what form.
3) Definiteness
Algorithms must specify every step and the order the steps must be taken in the process.
Definiteness means specifying the sequence of operations for turning input into output.
Algorithm should be clear and unambiguous.
4) Effectiveness

For an algorithm to be effective, all the steps required to reach the output must be feasible
with the available resources. It should not contain unnecessary or redundant steps, which
would make the algorithm ineffective.
For example, suppose you are following a recipe and you chop vegetables that are not to be used
in it; that is a waste of time.

5) Finiteness
The algorithm must stop eventually. Stopping may mean that you get the expected output,
or you get a response that no solution is possible. Algorithms must terminate after a
finite, definite number of steps; an algorithm should not run forever.
6) Independent
An algorithm should have step-by-step directions that are independent of any
programming code, so that it can be implemented in any programming language.

Example

 Start.
 Read 3 numbers a,b,c.
 Compute sum = a+b+c.
 Compute average = sum/3.
 Print average value.
 Stop.
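
A minimal C sketch of this algorithm (variable names are illustrative):

#include <stdio.h>

int main(void)
{
    int a, b, c;
    float average;
    printf("Enter three numbers: ");
    scanf("%d %d %d", &a, &b, &c);
    average = (a + b + c) / 3.0f;     /* divide by 3.0 to keep the fractional part */
    printf("Average: %f\n", average);
    return 0;
}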

FLOW CHART

A diagrammatic representation of an algorithm is called a flowchart. Flowcharts include inputs, outputs, sequences
of actions, decision points, and process measurements. The symbols used in flowcharts are listed below:
Name (Symbol Shape): Purpose

Terminal (Oval): start/stop/begin/end
Input/Output (Parallelogram): input/output of data
Process (Rectangle): to represent a process
Decision box (Diamond): decision operation that determines which of the alternative paths is to be followed
Connector (Circle): used to connect different parts of the flowchart
Flow (Arrows): join two symbols and also represent the flow of execution
Predefined process (Double-sided rectangle): modules or subroutines specified elsewhere
Page connector (Pentagon): used to connect flowcharts on two different pages
For loop symbol (Hexagon): shows the initialization, condition, and incrementation of the loop variable
Document (Printout shape): shows the data that is ready for printout
Example

Given below is the flowchart for finding the average of three numbers.

Structure of C Program

Sections of the C Program

1. Documentation
2. Preprocessor Section
3. Definition
4. Global Declaration
5. Main() Function
6. Sub Programs

1. Documentation

This section consists of the description of the program, the name of the program, and the creation date and
time of the program. It is specified at the start of the program in the form of comments. Documentation can be
represented as:
// description, name of the program, programmer name, date, time etc.
or
/*
description, name of the program, programmer name, date, time etc.
*/
Anything written as a comment is treated as documentation of the program and does not interfere with
the code. Basically, it gives the reader an overview of the program.

2. Preprocessor Section
All the header files used by the program are declared in the preprocessor section. Header files let us
reuse code written by others in our own program. A copy of each included file is inserted into our
program before compilation.
Example:
#include<stdio.h>
#include<math.h>

3. Definition

Preprocessors are programs that process our source code before compilation. There are multiple steps
involved in writing and executing a program. Preprocessor directives start with the '#' symbol. The
#define directive is used to create a constant (or a short name for a piece of code) used throughout the
program. Whenever this name is encountered by the compiler, it is replaced by the actual piece of defined code.
Example:
#define ll long long
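
For instance, a small illustrative sketch (the constant name PI is just an example) showing how the preprocessor substitutes a defined name before compilation:

#include <stdio.h>
#define PI 3.14159                       /* every PI below becomes 3.14159 */

int main(void)
{
    float r = 2.0f;
    printf("Area: %f\n", PI * r * r);    /* compiled as 3.14159 * r * r */
    return 0;
}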

4. Global Declaration

The global declaration section contains global variables, function declarations, and static variables.
Variables and functions declared in this section can be used anywhere in the program.
Example:
int num = 18;

5. Main() Function

Every C program must have a main() function, which is written in this section. Operations like
declarations and executable statements are performed inside the curly braces of the main function. The
return type of the main() function can be either int or void. void main() tells the compiler that the
program will not return any value, while int main() tells the compiler that the program will return an
integer value.
Example:
void main()
or
int main()

6. Sub Programs

User-defined functions are defined in this section of the program. Control shifts to the called function
whenever it is called from main() or from another function. These functions are specified as per the
requirements of the programmer.
Example:
int sum(int x, int y)
{
    return x + y;
}

Example: Below is a C program to find the sum of two numbers.

// Documentation
/**
 * file: sum.c
 * author: you
 * description: program to find sum.
 */

// Link
#include <stdio.h>

// Definition
#define X 20

// Global Declaration
int sum(int y);

// Main() Function
int main(void)
{
    int y = 55;
    printf("Sum: %d", sum(y));
    return 0;
}

// Subprogram
int sum(int y)
{
    return y + X;
}

Output
Sum: 75

Creating, Compilation and Execution of C program


Step 1: Creating a Source Code
Source code is a file with C programming instructions in a high-level language. To create source code, we use
any text editor to write the program instructions. The instructions written in the source code must follow the C
programming language rules. The following steps are used to create a source code file in Windows OS…

 Click on File -> New in C Editor window


 Type the program
 Save it as FileName.c (Use shortcut key F2 to save)
Step 2: Compile Source Code (Alt + F9)

Compilation is the process of converting high-level language instructions into low-level language
instructions. In this process, the source file is submitted to the compiler. On receiving a source file,
the compiler first checks it for errors. If there are any errors, the compiler returns a list of them; if there
are no errors, the source code is converted into object code and stored as a file with the .obj extension. The
object code is then given to the linker. The linker combines the object code with the code of the specified
header files and generates an executable file with the .exe extension.

Step 3: Executing / Running Executable File (Ctrl + F9)


After completing compilation successfully, an executable file is created with a .exe extension. The processor
can understand the content of this .exe file, so it can perform the task specified in the source file. The .exe
file is submitted to the CPU, which performs the task according to the instructions written in the file. The
result generated from the execution is placed in a window called the User Screen.

Step 4: Check Result (Alt + F5)


After running the program, the result is placed on the User Screen. We just need to open the User Screen
to check the result of the program execution.

Execution Process of a C Program


When we execute a C program, it undergoes the following process:

Compilation process in C

What is compilation?

Compilation is the process of converting source code into object code with the help of the compiler. The
compiler checks the source code for syntactical or structural errors, and if the source code is
error-free, it generates the object code.

The C compilation process converts the source code taken as input into object code or machine code. The
compilation process can be divided into four steps, i.e., preprocessing, compiling, assembling, and linking.
The preprocessor takes the source code as input and removes all the comments from it. It also interprets
preprocessor directives: for example, if the #include <stdio.h> directive is present in the program, the
preprocessor replaces it with the content of the 'stdio.h' file.

The following are the phases through which our program passes before being transformed into an executable
form:

o Preprocessor
o Compiler
o Assembler
o Linker

Preprocessor
The source code is the code which is written in a text editor and the source code file is given an extension ".c".
This source code is first passed to the preprocessor, and then the preprocessor expands this code. After
expanding the code, the expanded code is passed to the compiler.
Compiler
The code which is expanded by the preprocessor is passed to the compiler. The compiler converts this code into
assembly code. Or we can say that the C compiler converts the pre-processed code into assembly code.
Assembler
The assembly code is converted into object code by using an assembler. The name of the object file generated by
the assembler is the same as that of the source file. The extension of the object file in DOS is '.obj', and in
UNIX it is '.o'. If the name of the source file is 'hello.c', then the name of the object file would be 'hello.obj'.
Linker
Mainly, all programs written in C use library functions. These library functions are pre-compiled, and their
object code is stored in files with the '.lib' (or '.a') extension. The main job of the linker is to combine the
object code of these library files with the object code of our program. Sometimes our program refers to
functions defined in other files; the linker then plays a very important role and links the object code of those
files to our program as well. Therefore, the job of the linker is to link the object code of our program with the
object code of the library files and other files. The output of the linker is the executable file. The name of
the executable file is the same as that of the source file, differing only in extension. In DOS, the extension of
the executable file is '.exe', and in UNIX the executable file is typically named 'a.out'. For example, if we use
the printf() function in a program, the linker adds its associated code to the output file.

Let's understand through an example.


hello.c

#include <stdio.h>
int main()
{
    printf("Hello ");
    return 0;
}

Now, we will create a flow diagram of the above program:

In the above flow diagram, the following steps are taken to execute a program:

o Firstly, the input file, i.e., hello.c, is passed to the preprocessor, and the preprocessor converts the source
code into expanded source code. The extension of the expanded source code would be hello.i.
o The expanded source code is passed to the compiler, and the compiler converts this expanded source code
into assembly code. The extension of the assembly code would be hello.s.
o This assembly code is then sent to the assembler, which converts the assembly code into object code.
o After the creation of the object code, the linker creates the executable file. The loader then loads the
executable file for execution; these phases are illustrated with commands below.
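
As a hedged illustration (assuming the GCC toolchain on UNIX; other compilers carry out the same phases internally), each phase can be run one at a time, producing the intermediate files named above:

gcc -E hello.c -o hello.i    (preprocessing: expands #include and #define)
gcc -S hello.i -o hello.s    (compiling: translates the expanded code to assembly)
gcc -c hello.s -o hello.o    (assembling: produces the object code)
gcc hello.o -o hello         (linking: combines library code into an executable)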

Greatest of Three Numbers

Algorithm:

1. Start
2. Declare variables a, b, c, and max
3. Input three numbers a, b, c
4. Compare numbers:
o If a > b and a > c, max = a
o Else if b > c, max = b
o Else, max = c
5. Display max
6. End

Flowchart

C Program

#include <stdio.h>
void main()
{
    int a, b, c, max;
    printf("Enter three numbers: ");
    scanf("%d %d %d", &a, &b, &c);
    if (a > b && a > c)
    {
        max = a;
    }
    else if (b > c)
    {
        max = b;
    }
    else
    {
        max = c;
    }
    printf("Greatest: %d\n", max);
}

Reversing the digits of an integer:

Algorithm:
Input: num
(1) Initialize rev_num = 0
(2) Loop while num > 0
    (a) Multiply rev_num by 10 and add the remainder of num divided by 10 to it:
        rev_num = rev_num*10 + num%10;
    (b) Divide num by 10: num = num/10;
(3) Return rev_num
Example:
num = 4562
rev_num = 0
rev_num = rev_num *10 + num%10 = 2
num = num/10 = 456
rev_num = rev_num *10 + num%10 = 20 + 6 = 26
num = num/10 = 45
rev_num = rev_num *10 + num%10 = 260 + 5 = 265
num = num/10 = 4
rev_num = rev_num *10 + num%10 = 2650 + 4 = 2654
num = num/10 = 0, so the loop ends and rev_num = 2654 is returned.
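
A minimal C sketch of this algorithm (the function name reverse_digits is illustrative):

#include <stdio.h>

/* Reverses the digits of a non-negative integer. */
int reverse_digits(int num)
{
    int rev_num = 0;
    while (num > 0)
    {
        rev_num = rev_num * 10 + num % 10;  /* append the last digit of num */
        num = num / 10;                     /* drop the last digit */
    }
    return rev_num;
}

int main(void)
{
    printf("%d\n", reverse_digits(4562));   /* prints 2654 */
    return 0;
}
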
GCD of two integers:
GCD stands for Greatest Common Divisor. So GCD of 2 numbers is nothing but the largest
number that divides both of them.
Example: Let's say the two numbers are 36 and 60. Then
GCD = 12.

GCD is also known as HCF (Highest Common Factor)

Algorithm for Finding GCD of 2 numbers:

Step 1: Start

Step 2: Declare variable n1, n2, gcd=1, i=1

Step 3: Input n1 and n2


Step 4: Repeat while i<=n1 and i<=n2

Step 4.1: If n1%i==0 && n2%i==0, then gcd = i

Step 4.2: Increment i by 1

Step 5: Print gcd

Step 6: Stop
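
A minimal C sketch of this trial-division algorithm (not the faster Euclidean method):

#include <stdio.h>

int main(void)
{
    int n1, n2, gcd = 1, i;
    printf("Enter two numbers: ");
    scanf("%d %d", &n1, &n2);
    for (i = 1; i <= n1 && i <= n2; i++)
    {
        if (n1 % i == 0 && n2 % i == 0)  /* i divides both numbers */
            gcd = i;                     /* remember the largest such i */
    }
    printf("GCD: %d\n", gcd);
    return 0;
}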

Flowchart for Finding GCD of 2 numbers:


Algorithm to Check if a Number is Prime
Algorithm:
1. Start
2. Read Number n
3. Set the value of i=2 (Initialize variables)
4. If i<n then go to step 5 otherwise go to step 6
5. If n%i ==0 then go to step 6
 Else, increment the value of i by 1 and go to step 4.
6. If i==n then
 Print “n is prime number”
 Else
 Print “n is not prime number”
7. Stop
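
A minimal C sketch of this algorithm:

#include <stdio.h>

int main(void)
{
    int n, i = 2;
    printf("Enter a number: ");
    scanf("%d", &n);
    while (i < n && n % i != 0)   /* stop at the first divisor, if any */
        i++;
    if (i == n)                   /* no divisor found between 2 and n-1 */
        printf("%d is a prime number\n", n);
    else
        printf("%d is not a prime number\n", n);
    return 0;
}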
Flowchart:
Computing nth Fibonacci numbers:
Fibonacci Series Algorithm:
o Start
o Declare variables i, a, b, and show
o Initialize the variables, a=0, b=1, and show =0
o Enter the number of terms of Fibonacci series to be printed
o Print First two terms of series
o Use loop for the following steps
-> show=a+b
-> a=b
-> b=show
-> increase value of i each time by 1
-> print the value of show
o End
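
A minimal C sketch of this algorithm (one way to realize the loop above, assuming at least two terms are requested):

#include <stdio.h>

int main(void)
{
    int i, n, a = 0, b = 1, show;
    printf("Enter the number of terms: ");
    scanf("%d", &n);
    printf("%d %d ", a, b);        /* print the first two terms */
    for (i = 3; i <= n; i++)       /* compute the remaining terms */
    {
        show = a + b;
        a = b;
        b = show;
        printf("%d ", show);
    }
    return 0;
}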

Fibonacci Series Flowchart:
