
VIJAYAM SCIENCE AND ARTS DEGREE COLLEGE

(Affiliated to S.V. University, Tirupathi)

III BCA – V SEMESTER

PC HARDWARE AND NETWORK TROUBLESHOOTING

STUDY MATERIAL

DEPARTMENT OF COMPUTER SCIENCE


Composed by
D. CHANDRAMOULI, MCA
Lecturer in Computer Science
SRI VENKATESWARA UNIVERSITY
BACHELOR OF COMPUTER APPLICATIONS
SEMESTER SYSTEM WITH CBCS
SEMESTER VI
W.E.F. 2022-2023
Course-PC HARDWARE AND NETWORKING
(Skill Enhancement Course (Elective), 5 credits)
Total Hrs: 60 Max Marks: 100

Course Objectives and Learning Outcomes:

Upon successful completion of the course, a student will be able to:
CO 1. Identify the computer peripherals, software and hardware devices.
CO 2. Describe the basics of networks and networking tools
CO 3. Describe the Network Addressing and sub-netting
CO 4. Explains the Networks protocols and management
CO 5. Identifies Basic Network administrator roles

UNIT-1 Introduction to computer hardware

1.1 Introduction & Definition of Computer


1.1.1 Block Diagram of computer
1.1.2 Classification of computer
1.1.3 Characteristics of Computers
1.1.4 Types of Languages and language translators.
1.1.5 History and Generation of computers, Memory - Bits, Bytes,
KB, MB, GB, TB, PB, EB, ZB, YB, Brontobyte, Geopbyte, etc.
IEC Units: kibi, mebi, gibi, tebi, pebi, exbi, zebi, yobi
1.1.6 Computer Software, Types of Software with Ex. (System/Application/Utility S/W)
1.1.7 Computer Hardware- Intro. to Hardware components of computer
1.2. Components and its parts
1.2.1 Identifying the Important Hardware Components of a PC - CPU, Motherboard, RAM, HDD,
ODD, SMPS, K/B, Mouse, Monitor (CRT, LCD, LED), etc.
1.3. SMPS
1.3.1 About SMPS
1.3.2 Types of SMPS
1.3.3 Power stored in UPS
1.3.4 Components and Circuits inside the SMPS Unit
1.4 UPS (Uninterruptible Power Supply)
1.4.1 Types of UPS (Offline/Line Interactive & Online)
1.4.2 Working Principle of each type of UPS.
1.4.3 Connecting, Maintenance and Troubleshooting.

UNIT-2 Computer management and servicing

2.1 Assembling and dissembling PCs


2.2 Introduction to BIOS/CMOS Setup, POST (Power On Self-Test)
2.2.1 Introduction to BIOS/CMOS Setup, POST (Power On Self-Test)
2.2.2 Demonstration of BIOS/CMOS Configuration (Date, Time, Enable/Disable Devices).
2.2.3 Dual BIOS Feature
2.2.4 BIOS/CMOS Setup, Booting Sequence/Boot Order
2.3 Introduction to Operating System
2.3.1 Definition and types of Operating Systems - MS-DOS, Windows 9x/XP/Vista/7/8, Linux,
Mac OS, Android etc.
2.3.2 Process of Booting the Operating System.
2.3.3 Win XP/Win 7 Activation and Automatic Updating procedures.
2.4 Computer Management
2.4.1 Computer Management, Disk Management, Defragmentation,
2.4.2 Services and Applications,
2.4.3 Local Users and Groups
2.4.4 Advanced System Settings
2.4.5 Device Manager, Task Manager, Windows Registry
2.5 Partitioning
2.5.1 Partitioning of Hard Drive - Primary, Extended, Logical partitions using Partition Tools.

UNIT-3 Overview of Networking

3.1 Overview of Networking


3.2 Classification of Networks - LAN, MAN, WAN
3.3 Hardware and Software Components, Wi-Fi, Bluetooth
3.5 Network Communication Standards.
3.6 NETWORKING MODEL -OSI Reference Model, TCP/IP Reference Model
3.7 LAN Cables, Connectors
3.8 Wireless network adapter
3.9 Functions of LAN Tools
3.9.1 Anti-Magnetic mat
3.9.2 Anti-Magnetic Gloves
3.9.3 Crimping Tool
3.9.4 Cable Tester
3.9.5 Cutter
3.9.6 Loop back plug
3.9.7 Toner probe
3.9.8 Punch down tool
3.9.9 Protocol analyzer
3.9.10 Multimeter
3.10 Network Topologies
3.10.1 Bus
3.10.2 Ring
3.10.3 Star
3.10.4 Mesh
3.10.5 Hybrid Topologies

UNIT- 4 Network Addressing and sub-netting

4.1 Network Addressing.


4.2 TCP/IP Addressing Scheme
4.3 Components of IP Address and classes
4.4 Sub-netting
4.5 Internet Protocol Addressing - IPv4, IPv6
4.6 Classful addressing and classless addressing

UNIT-5 Networks protocols and management

5.1 Protocols in computer networks


5.2. Hyper Text Transfer Protocol (HTTP)
5.2.1 File Transfer Protocol (FTP)
5.2.2 Simple Mail Transfer Protocol (SMTP)
5.2.3 Address Resolution Protocol (ARP)
5.2.4 Reverse Address Resolution Protocol (RARP)
5.3. Telnet, ICMP
5.4. Simple Network Management Protocol (SNMP)
5.5. DHCP, DNS
5.6 Network Management.
5.7 Network Monitoring and Troubleshooting.
5.8 Remote Monitoring (RMON)

Text Books:

1. "Introduction to Data Communications and Networking", B. Forouzan, Tata McGraw-Hill
2. "Computer Networks", Tanenbaum, PHI
3. "PC and Clones: Hardware, Troubleshooting and Maintenance", B. Govindarajalu, Tata McGraw-Hill

Reference Books:

1. "PC Troubleshooting and Repair", Stephen J. Bigelow, Dreamtech Press, New Delhi
2. "Data and Computer Communications", Stallings, PHI
3. "Data Communication", William Schweber, McGraw-Hill, 1987
4. "IT Essentials v7 Companion Guide", Cisco Networking Academy, 2020
5. "Upgrading and Repairing PCs" (22nd edition), Scott Mueller, Que, 2015

Unit-I
Introduction to Computer hardware
Computer: A computer is an electronic device which accepts data as input, processes it, and
produces useful information as output.
Data: Data is a collection of facts, figures, digits and some special symbols.
Block diagram of computer: A computer block diagram provides a high-level overview of the
various components that make up a computer system and how they interact with each other. While
there can be different variations and complexities based on the type of computer (e.g., desktop,
laptop, server), I'll present a general block diagram of a modern computer. Keep in mind that this
is a simplified representation and not an exhaustive technical breakdown.
Let's explore the major components:
1. Input
2. CPU
3. Output
Input Unit: A computer needs to receive data and instructions in order to solve any problem.
Therefore, we need to input the data and instructions into the computer. The input unit consists
of one or more input devices. The keyboard is one of the most commonly used input devices.
Other commonly used input devices are the mouse, scanner, microphone, etc. All the input
devices perform the following functions.
• Accept the data and instructions from the outside world.
• Convert it to a form that the computer can understand.
• Supply the converted data to the computer system for further processing.
Central Processing Unit:
The Control Unit (CU), Memory Unit (MU) and Arithmetic Logic Unit (ALU) of the computer
are together known as the Central Processing Unit (CPU). The CPU is like the brain of the
computer and performs the following functions:
• It performs all calculations.
• It takes all decisions.
• It controls all units of the computer.
Control Unit:
It controls all other units in the computer. The control unit instructs the input unit where to store
the data after receiving it from the user. It controls the flow of data and instructions from the
storage unit to the ALU. It also controls the flow of results from the ALU to the storage unit. The
control unit is generally referred to as the central nervous system of the computer that controls
and synchronizes its working.
Arithmetic Logic Unit:
All calculations are performed in the Arithmetic Logic Unit (ALU) of the computer. It also
performs comparisons and takes decisions. The ALU can perform basic operations such as
addition, subtraction, multiplication, division, etc., and logic operations such as >, <, and =.
Whenever calculations are required, the control unit transfers the data from the storage unit to
the ALU. Once the computations are done, the results are transferred back to the storage unit by
the control unit and then sent to the output unit for display.
Storage Unit:
The storage unit of the computer holds data and instructions that are entered through the
input Unit, before they are processed. It preserves the intermediate and final results before these
are sent to the output devices. It also saves the data for later use. The various storage devices
of a computer system are divided into two categories.
a) Primary Storage: Primary storage stores and provides data very fast. This memory is generally used to hold the
program being currently executed in the computer, the data being received from the input unit, the
intermediate and final results of the program. The primary memory is temporary in nature. The
data is lost, when the computer is switched off. In order to store the data permanently, the data has
to be transferred to the secondary memory. The cost of the primary storage is more compared to
the secondary storage. Therefore, most computers have limited primary storage capacity.
b) Secondary Storage:
Secondary storage is used like an archive. It stores several programs, documents, databases, etc.
The programs that you run on the computer are first transferred to the primary memory before
they are actually run. Whenever results are saved, they are again stored in the secondary memory.
The secondary memory is slower and cheaper than the primary memory. Some of the commonly
used secondary memory devices are the hard disk, CD, etc.

Characteristics of the computer:


Speed
A computer works with much higher speed and accuracy compared to humans while performing
mathematical calculations. Computers can process millions of instructions per second. The time
taken by computers for their operations is measured in microseconds and nanoseconds.
Accuracy
Computers perform calculations with 100% accuracy. Errors may occur, but only due to
inconsistent or inaccurate input data.
Diligence
A computer can perform millions of tasks or calculations with the same consistency and accuracy.
It does not feel any fatigue or lack of concentration. Its memory also makes it superior to
human beings.
Versatility
Versatility refers to the capability of a computer to perform different kinds of work with the same
accuracy and efficiency.
Reliability
A computer is reliable as it gives consistent results for a similar set of data, i.e., if we give the same
set of input any number of times, we will get the same result.
Automation
A computer performs all tasks automatically, i.e., it performs tasks without manual intervention.
Mass storage:
A computer has built-in memory called primary memory where it stores data. Secondary storage
devices are removable, such as CDs, pen drives, etc., and are also used to store data.
Network Capability:
Computers can be connected in a network to share resources and communicate with one another.
Easy to use:-
A computer can be used by anyone without requiring much knowledge and programming skills
to work on it.
No IQ:-
The trend today is to make computers intelligent by introducing artificial intelligence (AI) and
data science. Still, computers do not have any decision-making capability of their own, so their
IQ level is zero.
Generations of computer
The modern computer took its shape over a long period of time. The evolution of the computer
started around the 16th century. Early computers went through many changes, obviously for the
better: they continuously improved in terms of speed, accuracy, size, and price to reach the form
of the modern-day computer.
First Generation of Computers (1942-55)
The technology behind the first-generation computers was a fragile glass device called the
vacuum tube. These computers were very heavy and very large. They were not very reliable, and
programming on them was a tedious task as they used low-level programming languages and no
operating system. First-generation computers were used for calculation, storage, and control
purposes. They were so bulky and large that they needed a full room, and they consumed a lot
of electricity.
Examples of some main first-generation computers are mentioned below.
 ENIAC: Electronic Numerical Integrator and Computer, built by J. Presper Eckert and
John W. Mauchly, was a general-purpose computer. It was cumbersome and large,
and contained about 18,000 vacuum tubes.
 EDVAC: Electronic Discrete Variable Automatic Computer was designed by von
Neumann. It could store data as well as instructions, and thus its speed was enhanced.
 UNIVAC: Universal Automatic Computer was developed in 1951 by Eckert and
Mauchly.
Second Generation (1955-1965)
The invention of the semiconductor transistor revolutionized the development of computer
technology. The transistor was smaller in size, generated less heat, and consumed less electricity
than its predecessor, the vacuum tube of the first generation. Transistors were also faster than
vacuum tubes. These computers used magnetic cores as primary memory, and magnetic tape and
magnetic disks as secondary memory. Instructions were written in machine language as well as
assembly language. Batch processing and multiprogramming operating systems were used as
system software. The stored-program concept, in which instructions are stored in memory,
evolved from the second generation.
Examples of second generation are: UNIVAC
1108, CDC3600, CDC1604, IBM7094, IBM1620 etc.
Third Generation (1965-1971)
Integrated chips (ICs) were introduced in the third generation. An integrated chip can hold
many transistors in a single chip. The size of the computer drastically reduced due to the use of
IC chips. Also, the speed and efficiency of third-generation computers increased compared to
those of the second generation. High-level languages were used to give instructions to the
computer. This generation of computers was more reliable, compact, and required less
maintenance. Examples of the third generation are the IBM-360 series, Honeywell-6000, PDP
(Programmed Data Processor), IBM-370/168, TDC-316, etc.
Fourth Generation (1971-1980)
The size of integrated chips became smaller after the large scale integration (LSI) and very large
scale integration (VLSI) techniques evolved. The VLSI technique packed almost 5,000 transistors
into a single chip. Microprocessors were developed using the VLSI technique. The size of the
computer dramatically reduced, and it became more compact and reliable, generated less heat,
and consumed much less electricity. High-level languages like C, C++, and DBASE were used
for programming. Personal computers came into existence due to the smaller sizes. Semiconductor
memory was used in place of magnetic core memory. The size of the secondary memory
reduced while its storage capacity increased. There was a major development in the field of
networking, and the concept of the internet evolved during the fourth generation.
Examples of fourth-generation computers are the DEC10, PDP11, STAR1000, CRAY-1
supercomputer, etc.
Fifth Generation (1980-till date and beyond)
There has been huge development in the field of computer technology over the last several
decades. In the 1950s, computers were large and bulky; as technology changed, their size
drastically reduced and they became more reliable and compact. Until then, a computer could
perform only the specific tasks the user told it to do. But after the development of artificial
intelligence, computers can attempt to think like human beings: software was developed to
simulate human thinking and reasoning. Parallel processing emerged as a major development,
where CPUs are allowed to execute several instructions in parallel, which increases the processing
speed and handles multiple tasks simultaneously. High-level languages and technologies like C,
C++, .NET, object-oriented programming, Java, and natural language processing are used in this
generation.

Software and its types:


In a computer system, software is basically a set of instructions or commands that tell a
computer what to do. In other words, software is a computer program that provides a set of
instructions to execute a user's commands and tell the computer what to do. Examples include
MS-Word, MS-Excel, PowerPoint, etc.
Types of Software
Software is a collection of programs that is given to the computer to complete a particular task.
Software is broadly divided into the following types:

System Software
System software is software that directly operates the computer hardware and provides the basic
functionality to the users as well as to other software so that they operate smoothly. In other
words, system software basically controls a computer's internal functioning and also controls
hardware devices such as monitors, printers, storage devices, etc. It acts as an interface between
the hardware and user applications and helps them communicate with each other, because
hardware understands only machine language (i.e., 1s and 0s) whereas user applications work in
human-readable languages like English, Hindi, German, etc. System software converts the
human-readable language into machine language and vice versa.
Types of System Software
Its main subtypes are:
1. Operating System: It is the main program of a computer system. When the computer
system is switched ON, it is the first software that loads into the computer's memory.
Basically, it manages all the resources such as computer memory, CPU, printer, hard
disk, etc., and provides an interface to the user, which helps the user to interact with the
computer system. It also provides various services to other computer software. Examples
of operating systems are Linux, Apple macOS, Microsoft Windows, etc.
2. Language Processor: As we know, system software converts human-readable language
into machine language and vice versa. This conversion is done by the language
processor. It converts programs written in high-level programming languages like Java,
C, C++, Python, etc. (known as source code) into sets of instructions that are easily
readable by machines (known as object code or machine code).
3. Device Driver: A device driver is a program or software that controls a device and helps
that device to perform its functions. Every device, like a printer, mouse, modem, etc.,
needs a driver to connect with the computer system. So, when you connect a new device
to your computer system, you first need to install the driver of that device so that your
operating system knows how to control or manage it.
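The source-to-object-code step that a language processor performs can be observed in Python itself: the built-in compile() function translates human-readable source text into a code object of low-level bytecode instructions, which the Python virtual machine then executes. This is only an illustrative sketch; the variable names and the example statement are arbitrary.

```python
import dis

source = "result = 2 + 3"                       # human-readable source code
code_obj = compile(source, "<example>", "exec")  # translated into bytecode ("object code")
dis.dis(code_obj)                                # show the low-level instructions produced

namespace = {}
exec(code_obj, namespace)                        # the virtual machine executes the bytecode
print(namespace["result"])                       # prints 5
```

Here compile() plays the role of the language processor, and exec() plays the role of the machine that runs the translated instructions.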
Features of System Software
Let us discuss some of the features of System Software:
 System Software is closer to the computer system.
 System Software is written in a low-level language in general.
 System software is difficult to design and understand.
 System software is fast in working speed.
 System software is less interactive for the users in comparison to application software.
Application Software
Software that performs special functions or provides functions that are much more than the basic
operation of the computer is known as application software. Or in other words, application
software is designed to perform a specific task for end-users. It is a product or a program that is
designed only to fulfill end-users’ requirements. It includes word processors, spreadsheets,
database management, inventory, payroll programs, etc.
Types of Application Software
There are different types of application software and those are:
1. General Purpose Software: This type of application software is used for a variety of
tasks and it is not limited to performing a specific task only. For example, MS-Word,
MS-Excel, PowerPoint, etc.
2. Customized Software: This type of application software is used or designed to perform
specific tasks or functions or designed for specific organizations. For example, railway
reservation system, airline reservation system, invoice management system, etc.
3. Utility Software: This type of application software is used to support the computer
infrastructure. It is designed to analyze, configure, optimize, and maintain the system,
and to take care of its requirements as well. For example, antivirus, disk defragmenter,
memory tester, disk repair, disk cleaners, registry cleaners, disk space analyzer, etc.
Features of Application Software
Let us discuss some of the features of Application Software:
 An important feature of application software is it performs more specialized tasks like
word processing, spreadsheets, email, etc.
 Mostly, the size of application software is big, so it requires more storage space.
 Application software is more interactive for the users, so it is easy to use, design, and
understand.
 Application software is written in a high-level language in general.
Difference between System Software and Application Software
Now, let us discuss some differences between system software and application software:

System Software vs Application Software:
• System software is designed to manage the resources of the computer system, like memory
and process management; application software is designed to fulfill the requirements of the
user for performing specific tasks.
• System software is written in a low-level language; application software is written in a
high-level language.
• System software is less interactive for the users; application software is more interactive
for the users.
• System software plays a vital role in the effective functioning of a system; application
software is not so important for the functioning of the system, as it is task specific.
• System software is independent of the application software to run; application software
needs system software to run.

Computer Languages: A language is the main medium of communication between computer
systems, and the most common are the programming languages.
• As we know, a computer only understands binary numbers, that is, 0 and 1, to perform various
operations, but languages have been developed for different types of work on a computer.
• A language consists of all the instructions to make a request to the system for processing a task.
• From the first generation to the current generation of computers, several programming
languages have been used to communicate with the computer.
Language Translators:
Language translators allow computer programmers to write sets of instructions in specific
programming languages. These instructions are converted by the language translator into
machine code. The computer system then reads these machine code instructions and executes
them. Hence, a language translator is a program that translates from one computer language to
another.
There are mainly three types of translators that are used to translate different programming
languages into machine-equivalent code:
Assembler:
An assembler translates assembly language into machine code. Assembly language consists of
mnemonics for machine op-codes, so assemblers perform a 1:1 translation from mnemonic to
direct instruction. For example, LDA #4 converts to 0001001000100100.
Conversely, one instruction in a high-level language will translate to one or more instructions at
the machine level.

The Benefits of Using Assembler

Here is a list of the advantages of using assembler:
 As a 1:1 relationship, assembly language to machine code translation is very fast.
 Assembly code is often very efficient (and therefore fast) because it is a low-level
language.
 Assembly code is fairly easy to understand due to the use of English-like mnemonics.

The Drawbacks of Using Assembler
 Assembly language is written for a certain instruction set and/or processor.
 Assembly tends to be optimized for the hardware it is designed for, meaning it is often
incompatible with different hardware.
 Lots of assembly code is needed to do a relatively simple task, and complex programs
require lots of programming time.
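The 1:1 mnemonic-to-opcode translation described above can be sketched in a few lines of Python. The mnemonics and bit patterns below are hypothetical, invented for this sketch rather than taken from any real processor's instruction set.

```python
# Hypothetical 4-bit opcodes for a made-up processor (not a real instruction set).
OPCODES = {"LDA": "0001", "ADD": "0010", "STA": "0011"}

def assemble(line):
    """Translate one 'MNEMONIC #operand' line into one 16-bit machine word."""
    mnemonic, operand = line.split()
    value = int(operand.lstrip("#"))
    # 1:1 translation: look up the opcode, then append a 12-bit binary operand field.
    return OPCODES[mnemonic] + format(value, "012b")

print(assemble("LDA #4"))  # one mnemonic -> exactly one machine instruction
```

Because the translation is a direct table lookup plus a number conversion, it is very fast, which is exactly the first advantage listed above.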

Compiler
A compiler is a computer program that translates code written in a high-level language into a
low-level language: machine code. The most common reason for translating source code is to
create an executable program (converting from a high-level language into machine language).
Advantages of using a compiler
 Source code is not included; therefore, compiled code is more secure than interpreted
code.
 Tends to produce faster code than interpreting the source code each time it runs.
 Because the program generates an executable file, it can be run without the need for the
source code.
Disadvantages of using a compiler
 Before a final executable file can be created, object code must be generated; this can be a
time-consuming process.
 The source code must be 100% correct for the executable file to be produced.
Interpreter
An interpreter program executes other programs directly, running through the program code and
executing it line-by-line. As it analyses every line, an interpreter is slower than running compiled
code, but it can take less time to interpret program code than to compile and then run it. This is
very useful when prototyping and testing code.
Interpreters are written for multiple platforms; this means code written once can be immediately
run on different systems without having to recompile for each. Examples of this include flash-
based web programs that will run on your PC, Mac, gaming console, and mobile phone.
Advantages of using an interpreter
 Easier to debug (check errors) than a compiler.
 It is easier to create multi-platform code, as each different platform would have an
interpreter to run the same code.
 Useful for prototyping software and testing basic program logic.
Disadvantages of using an interpreter
 Source code is required for the program to be executed, and this source code can be read,
making it insecure.
 Due to the line-by-line translation method, interpreters are generally slower than compiled
programs.
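To illustrate the line-by-line execution described above, here is a toy interpreter for a made-up mini-language; the SET/ADD/PRINT statement syntax is invented purely for this sketch.

```python
def run(source):
    """Execute a tiny made-up language one line at a time, like an interpreter."""
    variables = {}
    for line in source.splitlines():
        parts = line.split()
        if not parts:
            continue                              # skip blank lines
        if parts[0] == "SET":                     # SET x 5  -> x = 5
            variables[parts[1]] = int(parts[2])
        elif parts[0] == "ADD":                   # ADD x 3  -> x = x + 3
            variables[parts[1]] += int(parts[2])
        elif parts[0] == "PRINT":                 # PRINT x  -> display x
            print(variables[parts[1]])
    return variables

run("SET x 5\nADD x 3\nPRINT x")  # prints 8
```

Note that each statement is analyzed and executed as it is read, with no separate translation step producing an executable file; this is the defining difference from a compiler.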
Input and output devices of a computer:
Input and output devices are essential components of a computer system that enable
communication between the user and the computer. Here are some common examples of input
and output devices:
Input Devices:
1. Keyboard: A standard keyboard allows users to input text and commands into the
computer by pressing keys.
2. Mouse: A mouse is a pointing device that allows users to interact with graphical user
interfaces by moving a cursor on the screen and clicking on icons or buttons.
3. Touchscreen: Touchscreens allow users to input data and interact with the computer by
directly touching the display screen.
4. Scanner: Scanners are used to convert physical documents or images into digital format,
allowing them to be stored or manipulated on a computer.
5. Digital Camera: Digital cameras capture still images and videos, which can be
transferred to a computer for editing or storage.
6. Microphone: Microphones capture audio input, allowing users to record sound, make
voice calls, or use voice recognition software.
7. Joystick: Joysticks are often used for gaming and simulation applications, allowing users
to control movement and actions in digital environments.
8. Webcam: Webcams capture video in real-time, allowing users to conduct video calls,
record videos, or stream content online.
9. Barcode Reader: Barcode readers scan barcodes on products or documents, providing a
quick and accurate way to input data into the computer.
Output Devices:
1. Monitor: A monitor, also known as a display or screen, visually presents information to
the user. It displays text, images, videos, and other graphical content generated by the
computer.
2. Printer: Printers produce hard copies of digital documents or images on paper. There are
various types of printers, including inkjet, laser, and dot matrix printers.
3. Speaker: Speakers output audio, allowing users to hear sound effects, music, and other
audio elements from the computer.
4. Headphones: Headphones provide audio output in a private manner, allowing users to
listen to audio without disturbing others.
5. Projector: Projectors display computer-generated images and presentations on large
screens or surfaces, making them useful for classrooms, boardrooms, and auditoriums.
6. Plotter: Plotters are specialized printers used for producing high-quality, large-scale
graphical output, often used in engineering and design applications.
These devices work together to enable users to interact with and receive information from the
computer system.
Computer Hardware:
The physical components of a computer are called computer hardware. The following are the
various components of hardware:
1. Central Processing Unit (CPU):
The CPU is the brain of the computer. It performs most of the processing inside the computer
and carries out instructions of a computer program by performing basic arithmetic, logical,
control, and input/output (I/O) operations.
2. Motherboard:
The motherboard is the main circuit board of the computer. It houses the CPU, memory, and
other essential components. It provides connections between the CPU, memory, storage devices,
and other peripherals.
3. Memory (RAM):
Random Access Memory (RAM) is the computer's short-term memory. It stores data and
instructions that the CPU needs while it is actively working on tasks. RAM is volatile memory,
meaning it loses its content when the power is turned off.
4. Storage Devices:
 Hard Disk Drive (HDD): HDDs are traditional storage devices that use magnetic storage
to store data. They provide large storage capacities but are relatively slower compared to
newer technologies.
 Solid State Drive (SSD): SSDs use flash memory to store data. They are faster, more
durable, and energy-efficient compared to HDDs. SSDs are commonly used in modern
computers for faster performance.
 Optical Drives: Devices like DVD drives and Blu-ray drives are used for reading and
writing optical discs.
5. Graphics Processing Unit (GPU):
The GPU is responsible for rendering images and videos. It's crucial for gaming, video editing,
and other graphics-intensive tasks. Some CPUs have integrated graphics, but for high-
performance tasks, dedicated GPUs are used.
6. Power Supply Unit (PSU):
The PSU converts electrical power from an outlet into usable power for the computer. It supplies
power to all components of the computer.
7. Input Devices:
 Keyboard: Used for entering text and commands.
 Mouse: Used for pointing, clicking, and selecting items on the screen.
 Other Input Devices: Includes touchpads, stylus pens, and game controllers.
8. Output Devices:
 Monitor: Displays visual output from the computer.
 Printer: Produces hard copies of documents.
 Speakers/Headphones: Output audio for listening.
9. Networking Components:
 Network Interface Card (NIC): Enables the computer to connect to a network.
 Modem: Converts digital data from a computer into analog signals for transmission over
telephone lines (mostly used for dial-up internet connections).
10. Cooling Systems:
 Fans and Heat Sinks: Keep the CPU, GPU, and other components cool by dissipating
heat generated during operation.
These components work together to enable a computer to perform various tasks, from simple
word processing to complex 3D rendering and scientific calculations. Modern computers often
consist of a combination of these components to meet specific performance and functionality
requirements.
Switched-Mode Power Supply
In a personal computer, SMPS stands for Switched-Mode Power Supply. It is a type of power
supply unit (PSU) that converts the incoming AC (alternating current) voltage from the electrical
outlet into the DC (direct current) voltage required by the computer's internal components.
Here's how an SMPS works in a personal computer:
1. AC-to-DC Conversion: The SMPS takes the
AC voltage from the power outlet and converts
it into high-frequency AC. This AC voltage is
then rectified and filtered to produce a stable
DC voltage.
2. Switching Circuit: The SMPS utilizes a
switching circuit that rapidly switches the DC
voltage on and off at a high frequency. This
high-frequency switching allows for more
efficient power conversion compared to
traditional linear power supplies.
3. Voltage Regulation: The SMPS includes
voltage regulation circuitry to ensure that the output DC voltage remains stable even with
fluctuations in the input voltage or varying loads. This regulation is crucial to provide
consistent and reliable power to the computer's components.
4. Multiple Output Rails: Modern SMPS units typically provide multiple output rails with
different voltage levels to power various components of the computer. Common rails
include +3.3V, +5V, and +12V. Each rail is responsible for supplying power to specific
components like the motherboard, CPU, memory, storage devices, and expansion cards.
5. Cooling: SMPS units often incorporate cooling mechanisms such as fans or heat sinks to
dissipate heat generated during the power conversion process. Efficient cooling helps
maintain the SMPS's temperature within safe limits and ensures reliable operation.
SMPS units are widely used in personal computers and other electronic devices due to their
compact size, high efficiency, and ability to handle a wide range of input voltages. They provide
the necessary power to all the components of a computer, ensuring stable and reliable operation.
An SMPS regulates its output voltage and current by rapidly switching energy through
essentially lossless storage elements such as capacitors and inductors. The switching
transistors are operated outside their active region: an ideal switch has no resistance when
‘on’ and carries no current when ‘off.’ This is why an ideal switching supply would operate at
100 per cent efficiency, with all input energy delivered to the load and no power wasted as
dissipated heat. In practice, such ideal components do not exist, so a switching power supply
cannot be 100 per cent efficient, but it still represents a major improvement in efficiency
over a linear regulator.
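To make that efficiency gap concrete, here is a minimal sketch with illustrative, assumed numbers (not taken from any datasheet) comparing the heat a linear regulator dissipates with that of a realistic, non-ideal switched-mode converter delivering the same load:

```python
# Compare a linear regulator with a switched-mode converter delivering
# 5 V at 2 A from a 12 V input. All figures are illustrative assumptions.

v_in, v_out, i_out = 12.0, 5.0, 2.0    # volts, volts, amps

# Linear regulator: the full load current flows through the pass element,
# which drops (v_in - v_out) and dissipates that power as heat.
p_out = v_out * i_out                   # 10 W delivered to the load
p_loss_linear = (v_in - v_out) * i_out  # 14 W wasted as heat
eff_linear = p_out / (p_out + p_loss_linear)

# Switched-mode converter: assume a realistic (not ideal) 85% efficiency.
eff_smps = 0.85
p_loss_smps = p_out / eff_smps - p_out

print(f"Linear efficiency: {eff_linear:.0%}, loss {p_loss_linear:.1f} W")
print(f"SMPS efficiency:   {eff_smps:.0%}, loss {p_loss_smps:.1f} W")
```

With these assumed values the linear regulator wastes more power as heat than it delivers to the load, while the switcher loses under 2 W for the same output.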
Benefits of SMPS
 The switch-mode power supply is small in size.
 The SMPS is very lightweight.
 SMPS efficiency is typically 60 to 70 per cent, which makes it practical for general use.
 The SMPS has strong immunity to interference.
 The SMPS supports a wide output range.
Limitations of SMPS
 The circuitry of an SMPS is considerably more complex.
 The output ripple is high and its regulation is comparatively poor.
 The simplest SMPS topologies act only as step-down regulators.
 A basic SMPS provides only a single output voltage.
Here are some common types of SMPS:
Buck Converter: A buck converter, also known as a step-down converter, is one of the simplest
forms of SMPS. It steps down the input voltage to a lower output voltage. The switching element
(usually a transistor) is connected in series with the load, and energy is stored in an inductor
during the ON state of the transistor and released to the load during the OFF state.
Boost Converter: A boost converter, or step-up converter, is used to increase the input voltage
to a higher output voltage. It works opposite to a buck converter. The switching element is
connected in series with the input voltage source, and an inductor stores energy during the ON
state and releases it to the output during the OFF state.
Buck-Boost Converter: A buck-boost converter can step up or step down the input voltage,
depending on the desired output. It can provide a lower or higher voltage than the input voltage.
It combines the functionality of both buck and boost converters by using a combination of
inductors, capacitors, and switches.
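Assuming ideal, lossless components, the steady-state output of these three topologies depends only on the input voltage and the switching duty cycle D (the fraction of each cycle the switch is on). The standard textbook relations can be sketched as:

```python
# Ideal steady-state conversion ratios for the three basic SMPS topologies,
# as a function of input voltage v_in and duty cycle d (0 < d < 1).
# Real converters deviate from these values because of losses.

def buck(v_in, d):
    """Step-down converter: output is always below the input."""
    return v_in * d

def boost(v_in, d):
    """Step-up converter: output is always above the input."""
    return v_in / (1 - d)

def buck_boost(v_in, d):
    """Step-up or step-down: |v_out| = v_in * d / (1 - d)."""
    return v_in * d / (1 - d)

v_in = 12.0
print(buck(v_in, 0.5))         # 6.0  (stepped down)
print(boost(v_in, 0.5))        # 24.0 (stepped up)
print(buck_boost(v_in, 0.25))  # 4.0  (below the input)
print(buck_boost(v_in, 0.75))  # 36.0 (above the input)
```

Note how the buck-boost ratio crosses from below to above the input voltage as D passes 0.5, which is exactly the "step up or step down" behaviour described above.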
These are some of the common types of SMPS used in various applications. Each type has its
advantages, disadvantages, and suitable applications depending on the voltage requirements,
power levels, efficiency, and other factors.
UPS
An Uninterruptible Power Supply (UPS) is an electrical device that provides backup power to
connected devices or systems during a power outage or when the main power source fails. It is
designed to ensure that critical equipment or sensitive electronic devices continue to operate
without interruption, allowing users to save their work, safely shut down systems, or continue
operations until the power is restored or an alternate power source is activated.
The primary function of a UPS is to provide temporary power when the main power supply
becomes unavailable or unstable. It acts as a bridge between the main power source (such as the
electrical grid or a generator) and the connected devices, offering protection against power
disturbances, voltage fluctuations, surges, sags, and complete power failures.
A typical UPS consists of several key components:
1. Battery: The UPS contains one or more rechargeable batteries that store electrical energy
when the main power supply is available. These batteries supply power to connected
devices when the main power source fails.
2. Inverter: The inverter converts the direct current (DC) power stored in the batteries into
alternating current (AC) power, which is the standard form of electrical power used by
most devices and systems.
3. Charger: The charger replenishes the battery's energy when the main power supply is
restored. It ensures that the batteries remain charged and ready to supply power during an
outage.
4. Automatic Voltage Regulator (AVR): The AVR feature of a UPS helps regulate and
stabilize the voltage provided to connected devices. It protects against voltage spikes,
drops, or fluctuations that can damage sensitive equipment.
5. Control and Monitoring: A UPS often includes a control panel or software interface for
users to configure settings, monitor battery status, view power load, and receive
notifications about power events.
UPS systems come in various sizes and capacities, ranging from small units designed for
personal computers to large-scale systems used to back up entire data centers or critical
infrastructure. The runtime of a UPS, i.e., the duration it can provide power to connected devices
during an outage, depends on factors such as the battery capacity and the power consumption of
the equipment.
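That relationship between battery capacity, load, and runtime can be sketched with a rough estimate (the formula and every figure below are assumptions for illustration, not manufacturer data):

```python
# Rough UPS runtime estimate: energy stored in the battery divided by the
# power drawn by the load, derated by inverter efficiency.
# All figures are illustrative assumptions.

def runtime_minutes(battery_wh, load_w, inverter_eff=0.9):
    """Approximate backup time in minutes for a given load in watts."""
    return battery_wh * inverter_eff / load_w * 60

# A small home UPS: one 12 V, 7 Ah battery stores about 84 Wh,
# backing up a PC that draws roughly 120 W.
battery_wh = 12 * 7   # 84 Wh
print(round(runtime_minutes(battery_wh, load_w=120), 1))
```

Halving the load roughly doubles the runtime, which is why shutting down non-essential equipment during an outage extends the backup window.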
Overall, the purpose of a UPS is to provide reliable and uninterrupted power to protect valuable
equipment, prevent data loss, and ensure the continuity of operations in various settings,
including homes, offices, medical facilities, telecommunications networks, and industrial
environments.
There are several types of UPS systems, each with its own working principle. Here's a brief
overview of the working principles of some common types of UPS:
1. Standby UPS (Offline UPS):
 The standby UPS is the simplest and most economical type.
 It remains in standby mode, allowing the connected devices to run directly from
the mains power.
 When a power outage occurs or the voltage goes out of range, the UPS switches
to battery power.
 The transfer time between mains power and battery power is usually a few
milliseconds, causing a brief interruption in power supply.
2. Line-Interactive UPS:
 The line-interactive UPS incorporates an automatic voltage regulator (AVR) in
addition to the features of a standby UPS.
 The AVR regulates the incoming voltage and compensates for under-voltage or
over-voltage conditions.
 During normal operation, the UPS uses the mains power to supply the connected
devices while charging the battery.
 In case of a power outage or voltage fluctuation, the UPS switches to battery
power or adjusts the voltage through the AVR to maintain a stable output.
3. Online UPS (Double Conversion UPS):
 The online UPS provides the highest level of protection and is commonly used for
critical applications.
 It continuously converts the incoming AC power to DC power, which charges the
battery and powers the connected devices.
 The DC power is then converted back to AC power using an inverter to supply a
stable and clean output.
 Since the load is always powered by the inverter, the online UPS offers a
seamless transition during power disruptions, eliminating any transfer time.
 This provides the highest level of protection against power disturbances,
including voltage sags, surges, and frequency variations.
Unit-II
Computer Management and Servicing
Assembling a PC:
Assembling a PC involves several steps and requires some knowledge about computer components
and their compatibility. The following are the important steps to help in assembling a PC:
1. Gather the necessary components:
Before we start assembling, make sure we have all the required components. The basic
components include a motherboard, processor (CPU), RAM, storage device (hard drive or
SSD), power supply unit (PSU), graphics card (optional), and a computer case.
Additionally, we'll need peripherals like a monitor, keyboard, mouse, and speakers.
2. Prepare the workspace: Choose a clean and well-lit area to work in. Make sure we have
enough space to lay out the components and tools.
3. Install the CPU (Processor): Start by installing the CPU onto the motherboard. Lift the
CPU retention arm on the motherboard's socket, align the CPU correctly (noting the
notches and markings), gently place it in the socket, and then lower the retention arm to
secure it. Be cautious not to apply excessive force.
4. Install the CPU cooler: Depending on our CPU and cooling solution, we may need to
install a CPU cooler. This typically involves attaching a heat sink or fan to the CPU using
thermal paste for heat transfer.
5. Install the RAM: Locate the RAM slots on the motherboard and open the latches on both
ends. Align the notch on the RAM module with the slot, then firmly but gently press it
down until the latches close and lock the RAM in place.
6. Install the storage devices: Connect our storage devices, such as a hard drive or SSD, to
the motherboard using SATA cables. Make sure to attach the cables securely to both the
storage device and the motherboard.
7. Install the graphics card: If we have a dedicated graphics card, locate the appropriate
PCIe slot on the motherboard. Remove the corresponding expansion slot cover on the back
of the case, align the graphics card with the slot, and firmly push it into place. Secure it
with screws if necessary.
8. Connect the power supply: Place the power supply unit into the case and align it properly.
Connect the necessary power cables from the PSU to the motherboard, CPU, storage
devices, and graphics card (if applicable). Ensure that all connections are secure.
9. Connect case components: Connect the front panel connectors from the case to the
motherboard. These include the power switch, reset switch, HDD LED, power LED, and
any USB or audio connectors. Refer to our motherboard's manual for specific instructions
on connecting these cables.
10. Cable management: Neatly organize and secure the cables using cable ties or Velcro
straps. Good cable management helps with airflow and reduces clutter.
11. Close the case: Carefully place the side panels back onto the case and secure them using
the provided screws. Ensure that all edges are aligned correctly to avoid any interference
with components.
12. Connect peripherals: Connect our monitor, keyboard, mouse, speakers, and any other
peripherals to the appropriate ports on the back of the PC.
13. Power on: Double-check all connections and ensure everything is secure. Then, plug in
the power cord, switch on the power supply, and press the power button on the PC to turn
it on.
If everything is assembled correctly, our PC should boot up and display the BIOS screen. From
there, we can install an operating system, update drivers, and start using our newly assembled PC.
Disassembling a PC:
Disassembling a PC can be a delicate process, and it's important to handle the components with
care to avoid any damage. The following is a step-by-step guide on how to disassemble a typical
desktop PC:
1. Preparation:
 Shut down our computer and unplug it from the power source.
 Remove any connected peripherals such as monitors, keyboards, and mice.
2. Opening the Case:
 Place the PC on a stable surface with ample working space.
 Remove the screws or fasteners securing the side panel of the case.
 Slide off or gently lift the side panel to expose the internal components.
3. Disconnecting Power Supply:
 Locate the power supply unit (PSU) at the back of the PC.
 Unplug all power cables connected to the motherboard, hard drives, and other
components.
 Unscrew and remove the PSU from the case if necessary.
4. Removing Storage Drives:
 Identify the storage drives (hard drives or solid-state drives).
 Disconnect the SATA or power cables connected to the drives.
 Remove the screws or brackets securing the drives in place.
 Gently slide out the drives from their slots.
5. Detaching Data Cables:
 Identify the various data cables connected to the motherboard.
 Disconnect the SATA cables connected to the storage drives.
 Unplug any other data cables, such as those connected to optical drives or
expansion cards.
6. Removing Expansion Cards:
 Identify the expansion cards, such as graphics cards or sound cards.
 Remove the screw(s) securing each card to the case.
 Gently press down on the release latch of the PCI-E slot.
 Carefully pull the card straight out of the slot.
7. Disconnecting Memory Modules:
 Locate the memory modules (RAM) on the motherboard.
 Press the latches at the ends of the memory slots to release the modules.
 Firmly grip the sides of a module and pull it out gently and evenly.
8. Detaching CPU Cooler:
 If we plan to remove the CPU cooler, check if it's secured by screws or clips.
 Unscrew the screws or unclip the cooler as per the manufacturer's instructions.
 Gently twist the cooler from side to side to break the thermal paste seal.
 Lift the cooler straight up to separate it from the CPU.
9. Disconnecting Front Panel Connectors:
 Locate the cables that connect the front panel buttons and indicators to the
motherboard.
 Carefully unplug each cable from its corresponding pin header on the motherboard.
10. Removing the Motherboard:
 Identify the screws or standoffs securing the motherboard to the case.
 Unscrew the screws or remove the standoffs using a screwdriver.
 Lift the motherboard gently, supporting it from underneath, and remove it from the
case.
11. Final Checks:
 Inspect all the components and cables to ensure nothing is still connected.
 Clean any dust or debris from the components using compressed air if necessary.
By following these steps, we should be able to disassemble our PC safely. Remember to keep track
of the screws and components we remove, as they will be needed when reassembling the computer.
2.1.1 Introduction to BIOS/CMOS Setup, POST (Power On Self-Test):
The Basic Input/Output System (BIOS) is a firmware embedded in a computer's motherboard. It
is responsible for initializing and configuring various hardware components during the boot
process. The BIOS performs a crucial function known as the Power On Self-Test (POST), which
checks the system's hardware to ensure it is functioning correctly before the operating system takes
control.
When you turn on your computer, the BIOS is the first software that runs. It performs a series of
tests on different hardware components such as the CPU, memory, storage devices, and
input/output systems. The POST helps detect any potential issues, such as faulty hardware or
improper configurations, and provides error codes or beep sounds to indicate problems if they are
detected.
2.1.2 Demonstration of BIOS/CMOS Configuration (Date, Time, Enable/Disable Devices):
To access and configure the BIOS settings, you need to press a specific key during the boot
process. Common keys for entering the BIOS setup include Del, F2, F10, or Esc, but it can vary
depending on the computer manufacturer.
Once inside the BIOS setup, you can navigate through different menus and options using the
keyboard. The exact layout and available options may vary between BIOS versions and
manufacturers, but typically, you will find options to configure various settings, including:
1. Date and Time: You can set the current date and time in the BIOS setup. This information
is used by the operating system and other applications.
2. Device Configuration: The BIOS allows you to enable or disable different hardware
devices connected to your computer. For example, you can enable or disable USB ports,
network adapters, audio devices, or SATA/IDE controllers.
3. Boot Options: You can configure the boot sequence or boot order, which determines the
order in which devices are checked for an operating system to load. This allows you to
prioritize booting from specific devices such as the hard drive, CD/DVD drive, USB drive,
or network.
Remember to save any changes you make before exiting the BIOS setup. Typically, you can save
changes by pressing F10 or choosing the appropriate option within the BIOS interface.
2.1.3 Dual BIOS Feature:
The Dual BIOS feature is found on certain motherboards and serves as a backup mechanism in
case the primary BIOS becomes corrupted or fails. It provides an extra layer of protection and
helps avoid situations where a faulty BIOS prevents the computer from booting.
With Dual BIOS, the motherboard has two separate BIOS chips: a primary (main) BIOS and a
backup BIOS. The primary BIOS is the one that gets utilized during normal system operation.
Meanwhile, the backup BIOS remains inactive but contains a duplicate copy of the primary BIOS.
If the primary BIOS becomes corrupt or encounters an issue, the Dual BIOS feature automatically
detects the problem during the POST and switches to the backup BIOS. This allows the computer
to boot using the backup BIOS and ensures that the system remains operational.
Once the backup BIOS takes control, you may need to restore the primary BIOS or update it to a
working version to prevent future issues. The exact procedure for restoring or updating the BIOS
can vary depending on the motherboard manufacturer and model.
2.1.4 BIOS/CMOS Setup, Booting Sequence/Boot Order:
The BIOS/CMOS setup includes the configuration options that control various aspects of your
computer's hardware and boot process. One important setting is the boot sequence or boot order,
which determines the order in which devices are checked for an operating system to load during
startup.
To configure the boot order, you can access the BIOS setup as described earlier and navigate to
the Boot Options or Boot Order menu. In this menu, you will see a list of available devices such
as the hard drive, CD/DVD drive, USB drive, and network, and you can arrange them in the
preferred boot order.
Operating System:
An operating system (OS) is a software program that manages computer hardware and software
resources and provides services for computer programs. It acts as an intermediary between the user
and the computer hardware, allowing users to interact with the system and run applications
efficiently.
The primary functions of an operating system include:
1. Process Management: The OS manages the execution of processes (or programs) on the
computer. It allocates system resources, such as CPU time, memory, and input/output
devices, to different processes, ensuring that they run smoothly without interfering with
each other.
2. Memory Management: The OS is responsible for managing the computer's memory
resources. It allocates memory space to different processes and keeps track of which parts
of the memory are currently in use. It also handles memory swapping (transferring data
between RAM and storage devices) to optimize memory usage.
3. File System Management: The OS provides a hierarchical structure for organizing and
storing files on a computer's storage devices. It manages file creation, deletion, and access
permissions. It also handles tasks such as file input/output and storage device management.
4. Device Management: The OS controls and coordinates the use of various input/output
devices, such as keyboards, mice, printers, and storage devices. It provides device drivers
that enable communication between the hardware devices and software applications.
5. User Interface: The OS provides a user interface (UI) that allows users to interact with
the computer system. It can be a command-line interface (CLI) where users type
commands, or a graphical user interface (GUI) that uses windows, icons, and menus to
facilitate user interaction.
6. Security: The OS implements security measures to protect the computer system from
unauthorized access, malware, and other threats. It enforces user authentication, access
control policies, and safeguards system resources.
There are different types of operating systems, including:
 Single-User, Single-Tasking: These operating systems allow only one user to run a single
program at a time. Older versions of MS-DOS are examples of such operating systems.
 Single-User, Multi-Tasking: These operating systems enable a single user to run multiple
programs simultaneously. Modern desktop operating systems like Windows, macOS, and
Linux fall into this category.
 Multi-User: These operating systems support multiple users running programs
concurrently. They provide features for user authentication, access control, and resource
sharing. Examples include Unix, Linux server editions, and mainframe operating systems.
 Real-Time: Real-time operating systems are designed for systems that require precise
timing and quick response. They are used in applications such as industrial automation,
robotics, and aerospace.
 Embedded: Embedded operating systems are tailored for specific devices or applications,
such as smartphones, tablets, digital cameras, and home appliances.
2.3.1 Definition and Types of Operating Systems: An operating system (OS) is software that
manages computer hardware and software resources and provides common services for
computer programs. It acts as an intermediary between the hardware and software,
allowing users to interact with the computer system.
There are several types of operating systems:
1. MS-DOS (Microsoft Disk Operating System): MS-DOS is a command-line-based
operating system developed by Microsoft. It was widely used in the early days of personal
computers.
2. Windows 9x/XP/Vista/7/8: These are various versions of the Microsoft Windows operating
system. They are graphical user interface (GUI)-based operating systems designed for
personal computers.
3. Linux: Linux is an open-source operating system based on the Unix operating system. It is
known for its stability, security, and flexibility. Linux is widely used in servers, embedded
systems, and as a desktop operating system.
4. macOS: macOS is the operating system developed by Apple Inc. It is designed specifically
for Apple's Macintosh computers and provides a user-friendly graphical interface.
5. Android: Android is a mobile operating system based on the Linux kernel. It is primarily
used in smartphones, tablets, and other mobile devices.
2.3.2 Process of Booting the Operating System: The booting process is the initial set of operations
performed by a computer when it is powered on. Here is a general overview of the booting process:
1. Power-on: When the computer is turned on, the power supply provides electricity to the
components.
2. Power-on self-test (POST): The computer's firmware (BIOS or UEFI) performs a self-test
to check hardware components such as the CPU, memory, and storage devices. It ensures
that the hardware is functioning correctly.
3. Boot loader: The boot loader is a small program stored in the computer's firmware. It
locates and loads the operating system's kernel into memory. In Windows systems, the boot
loader is typically the Windows Boot Manager.
4. Kernel initialization: The operating system's kernel is loaded into memory, and it starts
initializing the necessary components, drivers, and services.
5. User space initialization: Once the kernel initialization is complete, the operating system
starts initializing the user space environment, including user processes, services, and the
graphical user interface (GUI).
6. Login screen: The user is presented with a login screen or desktop environment where they
can enter their credentials and start using the computer.
2.3.3 Windows XP/Windows 7 Activation and Automatic Updating Procedures:
Windows XP and Windows 7 are older versions of the Windows operating system, and the
activation and automatic updating procedures differ for each version.
For Windows XP:
 Activation: Windows XP requires activation to verify the authenticity of the operating
system installation. Users need to enter a valid product key during installation or after
installation through the activation wizard.
 Automatic Updates: In Windows XP, users can configure the Automatic Updates feature
to download and install important updates from Microsoft. They can choose from different
options, such as automatic installation or manual installation after receiving notification.
For Windows 7:
 Activation: Windows 7 also requires activation to ensure the legitimacy of the operating
system installation. Users need to enter a valid product key during installation or after
installation through the activation wizard.
 Automatic Updates: In Windows 7, users can configure the Windows Update feature to
automatically download and install important updates. They can choose the frequency of
updates and also enable optional updates if desired.
2.4 Computer Management:
Computer Management is a Windows utility that provides a centralized location to manage
various aspects of a computer system. It includes several tools and features, such as Disk
Management and Defragmentation.
 Disk Management: Disk Management is a tool within Computer Management that allows
users to manage disks and partitions on their computer. It enables tasks such as creating,
deleting, formatting, and resizing partitions. Users can also assign drive letters and change
the properties of disks.
 Defragmentation: Defragmentation is the process of reorganizing fragmented files on a
hard drive to optimize disk performance. In Windows, the Disk Defragmenter tool is
available within Computer Management. It analyzes the disk, identifies fragmented files,
and rearranges them to improve access speed and efficiency.
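The idea can be illustrated with a toy model (purely conceptual; a real defragmenter moves blocks on the physical disk, not in a Python list):

```python
# Toy model of defragmentation: blocks belonging to the same file are
# scattered across the "disk"; defragmenting groups each file's blocks
# contiguously and collects the free space at the end.

def defragment(disk):
    file_blocks = [b for b in disk if b is not None]   # occupied blocks
    free_blocks = [None] * (len(disk) - len(file_blocks))
    # Grouping by file name places each file's blocks side by side.
    return sorted(file_blocks) + free_blocks

# File A and file B are fragmented; None marks free space.
disk = ["A", None, "B", "A", None, "B", "A"]
print(defragment(disk))  # ['A', 'A', 'A', 'B', 'B', None, None]
```

After "defragmentation" each file occupies a contiguous run of blocks, which is what lets a mechanical hard drive read it with fewer head movements.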
2.4.2 Services and Applications: Within Computer Management, there is a section dedicated to
managing services and applications on the computer.
 Services: Services are background processes that run independently of user interaction and
provide specific functionalities to the operating system or other applications. The Services
section in Computer Management allows users to view, start, stop, and configure various
services.
 Applications: The Applications section in Computer Management provides information
about applications running on the computer. It includes details about running processes,
their resource usage, and the ability to end or switch between applications.
2.4.3 Local Users and Groups: The Local Users and Groups section in Computer Management
allows administrators to manage user accounts and groups on a local computer.
 Users: The Users section provides tools to create, modify, and delete user accounts on the
local computer. Administrators can assign user privileges, manage passwords, and control
access to resources.
 Groups: The Groups section allows administrators to create and manage groups of users.
Groups provide an efficient way to assign permissions and rights to multiple users
simultaneously. Users can be added or removed from groups, and group policies can be
applied to control access and settings.
2.4.4 Advanced System Settings: Advanced System Settings provides access to system-related
configurations and settings.
 Performance: Within Advanced System Settings, the Performance tab allows users to
adjust visual effects, virtual memory settings, and processor scheduling to optimize system
performance.
 Environment Variables: This section allows users to set environment variables that affect
the behavior of the operating system and applications. Variables like PATH control the
locations where the system looks for executable files.
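For example, the PATH variable can be inspected programmatically. The sketch below splits it into the individual search directories; the actual output depends entirely on the machine it runs on, so none is shown:

```python
# Inspect the PATH environment variable: the operating system searches
# these directories, in order, when resolving an executable name.
import os

path_value = os.environ.get("PATH", "")
directories = path_value.split(os.pathsep)  # ';' on Windows, ':' elsewhere

for d in directories:
    print(d)
```

Note that `os.pathsep` abstracts over the platform difference, so the same code works on both Windows and Unix-like systems.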
2.4.5 Device Manager, Task Manager, Windows Registry:
 Device Manager: Device Manager is a Windows tool accessible through Computer
Management that provides information about hardware devices installed on the computer.
It allows users to view, manage, and update device drivers, troubleshoot hardware issues,
and enable or disable devices.
 Task Manager: Task Manager is a utility that provides information about running
processes, system performance, and resource usage. It allows users to monitor and manage
applications, end processes, and adjust system settings.
 Windows Registry: The Windows Registry is a hierarchical database that stores settings,
configurations, and options for the Windows operating system and installed applications.
It can be accessed and modified through the Registry Editor, which is a tool available
within Computer Management. The Registry Editor allows users to view and edit registry
keys and values.
2.5 Partitioning:
Partitioning of Hard Drive - Primary, Extended, Logical partitions using Partition Tools:
Partitioning is the process of dividing a hard drive into separate sections or partitions. There are
different types of partitions that can be created using partitioning tools:
 Primary Partition: A primary partition is a basic partition that can be used to install an
operating system or store data. A hard drive can have up to four primary partitions, or three
primary partitions and one extended partition.
 Extended Partition: An extended partition is a special type of partition that can be further
divided into logical partitions. It serves as a container for logical partitions when more than
four partitions are required on a hard drive.
 Logical Partition: Logical partitions are created within an extended partition. They are used
to organize data and can be assigned drive letters to make them accessible.
Partitioning tools such as Disk Management in Windows, Disk Utility in macOS, or third-party
tools like EaseUS Partition Master or GParted can be used to create, delete, resize, and manage
these different types of partitions on a hard drive.
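The MBR rules described above (at most four primary partitions, or at most three primary plus one extended partition that contains the logical partitions) can be captured in a small validation sketch. This is an illustrative model only, not part of any real partitioning tool:

```python
# Validate a proposed MBR partition scheme against the classic rules:
# at most 4 primary partitions, or at most 3 primary plus 1 extended;
# logical partitions may exist only inside an extended partition.

def valid_mbr_scheme(primary, extended, logical):
    if extended not in (0, 1):
        return False                       # at most one extended partition
    if extended == 0:
        return 0 < primary <= 4 and logical == 0
    return primary <= 3 and logical >= 1   # logicals live in the extended

print(valid_mbr_scheme(primary=4, extended=0, logical=0))  # True
print(valid_mbr_scheme(primary=3, extended=1, logical=5))  # True
print(valid_mbr_scheme(primary=4, extended=1, logical=2))  # False: 4 + 1 > 4 slots
print(valid_mbr_scheme(primary=2, extended=0, logical=3))  # False: logicals need an extended
```

The extended partition consumes one of the four MBR slots, which is why the maximum drops to three primary partitions as soon as logical partitions are needed.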
Unit-III
Overview of Networking
3.1 Introduction:
Networking refers to the practice of connecting multiple devices and systems together to
facilitate communication and data sharing. It involves the use of hardware, software, and
protocols to establish and maintain connections between devices, enabling them to exchange
information.
At a fundamental level, networking relies on the concept of data packets, which are small units
of data that are transmitted over a network. These packets are routed through different network
devices, such as routers and switches, to reach their destination.
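To illustrate the packet idea, the sketch below splits a message into fixed-size chunks and tags each with a sequence number so the receiver can reassemble them in order. This is a deliberate simplification: real network packets carry full protocol headers with addresses, checksums, and more:

```python
# Toy packetization: break a message into fixed-size "packets", each
# tagged with a sequence number, then reassemble them in order even
# if they arrive shuffled.

def packetize(data: bytes, size: int):
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets):
    # Sorting by sequence number restores the original order.
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"Networking moves data as packets"
packets = packetize(message, size=8)
print(len(packets))                    # 4
print(reassemble(packets) == message)  # True, even if delivered out of order
```

The sequence numbers are what let the network deliver packets over different paths and in any order while the destination still recovers the original message.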
Networking can be implemented using various technologies and protocols, depending on the
scale and requirements of the network. Local Area Networks (LANs), Metropolitan Area
Networks (MANs), and Wide Area Networks (WANs) are common types of networks that differ
in terms of their geographical coverage.
3.2 Classification of Networks
Networks can be categorized into various types based on their scale, purpose, and geographical
coverage. Here are some common categories of networks:
 Local Area Network (LAN): A LAN is a network that spans a relatively small
geographical area, such as an office building, school, or home. LANs are commonly used
to connect computers, printers, servers, and other devices within a limited area. Ethernet
and Wi-Fi are commonly used technologies for implementing LANs.
 Metropolitan Area Network (MAN): A MAN is a network that covers a larger
geographical area, such as a city or a town. It interconnects multiple LANs within a
specific region. MANs are usually operated by service providers and can provide high-
speed connectivity over fiber optic or wireless links.
 Wide Area Network (WAN): A WAN is a network that spans a large geographical area,
often connecting multiple cities or countries. The internet is the most well-known
example of a WAN. WANs utilize various technologies, including leased lines, satellite
links, and internet connections, to enable communication over long distances.
3.3 Hardware and Software Components, Wi-Fi, Bluetooth:
Networking involves both hardware and software components to enable communication between
devices. Some key components include:
 Network Interface Cards (NICs): These are hardware devices that allow devices to
connect to a network. NICs provide a physical interface for transmitting and receiving
data.
 Routers: Routers are network devices that forward data packets between different
networks. They determine the best path for data transmission and handle the routing of
packets across networks.
 Switches: Switches are devices that connect multiple devices within a network. They
forward data packets to their intended destination based on MAC addresses.
 Wi-Fi: Wi-Fi is a wireless networking technology that allows devices to connect to a
network without the need for physical cables. It enables wireless communication between
devices within the range of a Wi-Fi access point.
 Bluetooth: Bluetooth is a short-range wireless communication technology that allows
devices to connect and exchange data over short distances. It is commonly used for
connecting devices like smartphones, laptops, and peripherals (e.g., keyboards, mice) to
each other.
3.5 Network Communication Standards: Network communication standards define the rules
and protocols that govern how devices communicate and exchange data within a network. Some
widely used network communication standards include:
 Ethernet: Ethernet is a commonly used standard for wired LANs. It specifies the physical
and data link layer protocols for wired network communication. Ethernet standards define
parameters such as data transfer rates, cable types, and network topologies.
 TCP/IP: TCP/IP (Transmission Control Protocol/Internet Protocol) is the foundational
protocol suite of the internet. It provides a set of protocols for addressing, routing, and
transmitting data packets across networks. TCP is responsible for reliable data delivery,
while IP handles the addressing and routing of packets.
 Wi-Fi Standards: Wi-Fi standards, such as 802.11a/b/g/n/ac/ax, define wireless
communication protocols for wireless LANs. These standards specify the frequencies,
data rates, and modulation techniques used for wireless transmission.
 Bluetooth Standards: Bluetooth standards define the protocols and specifications for
wireless communication over short distances. Bluetooth versions, such as Bluetooth 4.0,
5.0, and so on, introduce new features, improved data rates, and better power efficiency.
These standards ensure compatibility and interoperability among networking devices, enabling
seamless communication across different networks and technologies.
NETWORK MODELS:
The OSI (Open Systems Interconnection) reference model and the TCP/IP (Transmission
Control Protocol/Internet Protocol) reference model are two different but related networking
models. Let's explore each model separately:
1. OSI Reference Model:
The OSI reference model is a conceptual framework that standardizes the functions of a
communication system into seven layers. Each layer performs specific tasks and interacts
with the layers above and below it. The layers are as follows:
 Physical Layer: Deals with the physical transmission of raw data bits over a
communication channel.
 Data Link Layer: Provides error-free transmission of data frames over a physical link
and handles physical addressing.
 Network Layer: Manages the routing of data packets between different networks,
including addressing and routing protocols.
 Transport Layer: Ensures reliable end-to-end data delivery, including segmentation,
flow control, and error recovery.
 Session Layer: Establishes, manages, and terminates connections between applications,
providing synchronization and checkpointing services.
 Presentation Layer: Translates data to a format that the application layer can
understand, including encryption, compression, and data formatting.
 Application Layer: Provides services directly to the end-user applications, such as
email, web browsing, file transfer, etc.
The OSI model helps in understanding network protocols and provides a framework for the
development of interoperable networking systems. However, it is mainly used as a reference
model for educational and conceptual purposes, as most real-world networks follow the TCP/IP
model.
2. TCP/IP Reference Model: The TCP/IP reference model is a practical implementation of
the protocols used on the Internet and is the foundation of modern networking. It consists
of four layers:
 Network Interface Layer: Equivalent to the combination of the physical and data link
layers in the OSI model, it deals with the physical transmission and link-level addressing.
 Internet Layer: Corresponds to the network layer in the OSI model. It handles the
routing of IP packets across different networks.
 Transport Layer: Provides reliable and connection-oriented data transport services,
similar to the transport layer in the OSI model. It includes protocols like TCP
(Transmission Control Protocol) and UDP (User Datagram Protocol).
 Application Layer: Equivalent to the combination of the session, presentation, and
application layers in the OSI model. It encompasses various protocols for specific
applications, such as HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol),
SMTP (Simple Mail Transfer Protocol), etc. The TCP/IP
model is widely used in the implementation of the Internet and other network-based
applications. It is known for its simplicity and efficiency, and its protocols are the
backbone of global network communication.
 It's important to note that the TCP/IP model does not strictly align with the seven-layer
OSI model. The TCP/IP model combines and condenses some of the functions found in
different layers of the OSI model into its four layers for practicality.
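The division of labor described by the TCP/IP model can be observed with Python's standard socket API: the application supplies the data, while TCP handles reliable, ordered delivery and IP handles addressing underneath. A minimal sketch over the loopback interface (the message text and the use of an OS-chosen port are illustrative choices, not part of any standard):

```python
import socket
import threading

def echo_server(server_sock):
    """Accept one connection and echo back whatever the client sends."""
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)  # TCP delivers the bytes reliably and in order
        conn.sendall(data)

# Transport layer: a TCP (SOCK_STREAM) socket over IPv4 (AF_INET).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 lets the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

# Application layer: the client sends a message; TCP/IP below handles
# segmentation, delivery, and routing transparently.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello, network")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # the echoed application data
```

Note that the code never touches IP routing or packet framing directly; those layers are provided by the operating system, which is exactly the layering the TCP/IP model describes.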
LAN cables and connectors
LAN cables and connectors are essential components for establishing wired network
connections. They are commonly used to connect devices such as computers, routers,
switches, and other networking equipment. Let's explore LAN cables and connectors in
more detail:
LAN Cables:
 Ethernet Cables: The most common type of LAN cable is an Ethernet cable, which is
used to transmit data over a wired network. Ethernet cables come in various categories,
such as Cat5e, Cat6, Cat6a, and Cat7, each with different capabilities and speeds.
 Twisted Pair Cables: Ethernet cables are typically twisted pair cables, consisting of
multiple pairs of wires twisted together. The twisting reduces electromagnetic
interference and enhances signal quality.
 Shielded vs. Unshielded: Ethernet cables can be either shielded (STP) or unshielded
(UTP). Shielded cables have an extra layer of shielding to protect against interference,
while unshielded cables do not.
 Patch Cables: Patch cables are shorter Ethernet cables used for connecting devices within
a local area network (LAN), such as connecting a computer to a switch or a router. They
are available in various lengths, typically ranging from a few inches to a few feet.
Connectors:
 RJ-45 Connector: The RJ-45 (Registered Jack-45) connector is the most common
connector used with Ethernet cables. It resembles a larger version of a telephone
connector (RJ-11). The RJ-45 connector has eight pins and is used to terminate the ends
of Ethernet cables.
 Keystone Jack: Keystone jacks are modular connectors that snap into wall plates, patch
panels, or surface-mount boxes. They provide a standardized interface for Ethernet
connections and are commonly used in structured cabling systems.
 BNC Connector: BNC (Bayonet Neill-Concelman) connectors are often used for coaxial
cables in older networking setups or for specialized applications like video surveillance
systems. They use a bayonet-style coupling mechanism.
 Fiber Optic Connectors: In fiber optic networking, connectors such as LC, SC, and ST
connectors are used to terminate fiber optic cables. These connectors allow the precise
alignment of fiber strands for efficient light transmission.
Wireless Network Adapter:
A wireless network adapter, also known as a Wi-Fi adapter or wireless NIC (Network Interface
Card), is a device that allows a computer or other device to connect to a wireless network. It
enables the device to communicate wirelessly with routers, access points, and other wireless
devices.
Wireless network adapters come in various forms, including USB adapters, PCIe (Peripheral
Component Interconnect Express) cards, and integrated adapters built into laptops or other
devices. They typically use Wi-Fi technology, such as 802.11ac or 802.11ax (Wi-Fi 5 or Wi-Fi
6), to establish a wireless connection.
USB wireless network adapters are convenient and easy to use. They can be plugged into a USB
port on a computer or laptop, providing wireless connectivity without the need for internal
hardware installation. USB adapters are often portable, making them useful for devices without
built-in Wi-Fi capabilities or for upgrading older devices to support newer Wi-Fi standards.
PCIe wireless network adapters are internal cards that are installed in the PCIe slots of a desktop
computer's motherboard. They offer higher performance and stability compared to USB adapters.
PCIe adapters typically have external antennas for better signal reception and can provide faster
data transfer rates, especially when using the latest Wi-Fi standards.
Integrated wireless network adapters are commonly found in laptops, tablets, smartphones, and
other devices. They are built-in components that are designed specifically for the device and
cannot be easily removed or upgraded. Integrated adapters vary in terms of performance and
capabilities depending on the device and its manufacturer.
When choosing a wireless network adapter, there are a few key factors to consider:
1. Compatibility: Ensure that the adapter is compatible with your device's operating system
and has the necessary drivers available.
2. Wireless Standards: Look for adapters that support the latest Wi-Fi standards (e.g.,
802.11ac or 802.11ax) to benefit from faster speeds and improved performance.
3. Data Transfer Speed: Consider the maximum data transfer speed offered by the adapter.
Higher speeds are desirable for activities such as online gaming, HD video streaming,
and large file transfers.
4. Antenna Configuration: Some adapters come with external antennas that can be
adjusted for better signal reception. This is especially useful if you have a weak wireless
signal in your area.
5. Security: Ensure that the adapter supports the latest security protocols, such as WPA3, to
protect your wireless connections.
6. Brand and Reviews: Consider reputable brands and read reviews to get an idea of the
adapter's reliability and performance.
3.9 Functions of LAN Tools: LAN tools are essential for managing and troubleshooting local
area networks (LANs). The following are the functionalities of different LAN tools:
3.9.1 Anti-Magnetic Mat:
 Provides an insulated surface for working with sensitive electronic components.
 Protects against static electricity, which can damage electronic devices.
 Helps prevent the attraction of magnetic materials that could interfere with electronic
signals.
3.9.2 Anti-Magnetic Gloves:
 Shield against static electricity and magnetic fields.
 Protect the hands of technicians when handling sensitive electronic components.
 Prevent the transfer of oils, dirt, or other contaminants to delicate equipment.
3.9.3 Crimping Tool:
 Used to attach connectors to the ends of network cables.
 Enables the crimping (squeezing) of connectors onto the cable wires securely.
 Essential for creating reliable and properly terminated network cables.
3.9.4 Cable Tester:
 Checks the continuity and integrity of network cables.
 Identifies faulty cables, such as breaks, shorts, or miswires.
 Verifies if cables are properly terminated and connected.
3.9.5 Cutter:
 Used to cut network cables to the desired length.
 Provides clean and precise cuts for proper cable installation.
 Allows for easy cable management and organization.
3.9.6 Loopback Plug:
 Used for testing and troubleshooting network connections.
 Simulates the presence of a network device by creating a loopback.
 Helps determine if a network port or connection is functioning correctly.
3.9.7 Toner Probe:
 Used for tracing and identifying network cables.
 Helps locate specific cables in a bundle or in a network infrastructure.
 Consists of a tone generator and a probe to detect the generated tone.
3.9.8 Punch Down Tool:
 Used to terminate network cables into patch panels, keystone jacks, or connectors.
 Allows for secure and reliable connections by pushing the cable wires into the
appropriate slots.
 Ensures proper seating and termination of the cable.
3.9.9 Protocol Analyzer:
 Monitors and analyzes network traffic and protocols.
 Helps diagnose and troubleshoot network issues.
 Provides insights into network performance, packet loss, latency, and security
vulnerabilities.
3.9.10 Multi-meter:
 Measures electrical properties such as voltage, current, and resistance.
 Useful for troubleshooting network equipment and checking power supply levels.
 Can assist in identifying electrical faults or irregularities in network devices.
These LAN tools are commonly used by network technicians and IT professionals to install,
maintain, and troubleshoot local area networks (LANs).
3.10 Network Topologies
Network topologies refer to the arrangement or layout of the various devices and components in
a computer network. Different network topologies determine how nodes or devices are
connected and communicate with each other. Here are some common network topologies:
1. Mesh Topology: A mesh topology provides a direct connection between all devices in
the network. Each device is connected to every other device, creating a redundant and
fault-tolerant network. Mesh topologies are highly reliable but can be expensive and
complex to implement.
2. Star Topology: In a star topology, all devices are connected to a central hub or switch.
Each device has a dedicated point-to-point connection to the hub, enabling easy
management and fault isolation. If the hub fails, however, the connected devices lose
connectivity.
3. Bus Topology: In a bus topology, all devices are connected to a common communication
medium, known as a bus or backbone. Each device receives all the data transmitted on
the bus but only processes data intended for itself. It is a simple and inexpensive topology
but can be prone to congestion and single-point-of-failure issues.
4. Ring Topology: In a ring topology, devices are connected in a closed loop or ring, where
each device is connected to two other devices. Data travels around the ring in one
direction, and each device receives and forwards the data to the next device. Failure of
one device can disrupt the entire network.
Note: All diagrams are given in the running notes.
Each network topology has its own advantages and disadvantages, and the choice of topology
depends on factors such as the network size, cost, scalability, reliability requirements, and the
type of communication needed within the network
Unit-IV
NETWORK ADDRESSING AND SUBNETTING
Introduction:
o Network Addressing is one of the major responsibilities of the network layer.
o Network addresses are always logical, i.e., software-based addresses.
o A host, also known as an end system, has one link to the network. The boundary between
the host and the link is known as an interface. Therefore, a host has only one interface.
o A router is different from the host in that it has two or more links that connect to it. When
a router forwards the datagram, then it forwards the packet to one of the links. The
boundary between the router and link is known as an interface, and the router can have
multiple interfaces, one for each of its links. Each interface is capable of sending and
receiving the IP packets, so IP requires each interface to have an address.
o A network address always points to host / node / server or it can represent a whole network.
Network address is always configured on network interface card and is generally mapped
by system with the MAC address (hardware address or layer-2 address) of the machine for
Layer-2 communication.
o IP addressing provides a mechanism to differentiate between hosts and networks. Because IP
addresses are assigned in hierarchical manner, a host always resides under a specific
network. The host which needs to communicate outside its subnet, needs to know
destination network address, where the packet/data is to be sent.
o Hosts in different subnets need a mechanism to locate each other. This task can be done by
DNS. DNS is a server which provides the Layer-3 address of a remote host mapped to its
domain name or FQDN. When a host acquires the Layer-3 address (IP address) of the
remote host, it forwards all its packets to its gateway. A gateway is a router equipped with
all the information needed to route packets to the destination host.
Network address can be of one of the following:
o Unicast (destined to one host)
o Multicast (destined to group)
o Broadcast (destined to all)
IP Address:
IP address stands for “Internet Protocol address.” The Internet Protocol is a set of rules for
communication over the internet, such as sending mail, streaming video, or connecting to a
website. An IP address identifies a network or device on the internet. The internet protocols
manage the process of assigning each unique device its own IP address. (Internet protocols do
other things as well, such as routing internet traffic.) This way, it’s easy to see which devices on
the internet are sending, requesting, and receiving what information.
IP addresses are like telephone numbers, and they serve the same purpose. When you contact
someone, your phone number identifies who you are, and it assures the person who answers the
phone that you are who you say you are. IP addresses do the exact same thing when you’re
online — that’s why every single device that is connected to the internet has an IP address.
There are two types of IP addresses: IPv4 and IPv6. It's easy to recognize the difference if you
count the numbers. IPv4 addresses contain a series of four numbers, each ranging from 0 to 255
(the first number cannot be 0), separated by periods — such as 5.62.42.77.
IPv6 addresses are represented as eight groups of four hexadecimal digits, with the groups
separated by colons. A typical IPv6 address might look like this:
2620:0aba:0d01:2042:0100:8c4d:d370:72b4.
An IP address has two parts: a network ID and a host ID. On a typical home network, which uses
a 255.255.255.0 subnet mask, the network ID comprises the first three numbers of the address and
the host ID is the fourth number. So on your home network — 192.168.1.1, for example —
192.168.1 is the network ID, and the final number (1) is the host ID.
The Network ID indicates which network the device is on. The Host ID refers to the specific device
on that network.
You may not always want the outside world to know exactly which device and network you're
using. In this case, it’s possible to mask your IP address from the outside world through a Virtual
Private Network (VPN). When you use a VPN, it prevents your network from revealing your
address.
TCP/IP Addressing scheme
IP addresses are only one part of the internet’s architecture. After all, having a postal address for
your house is meaningless unless there’s a post office responsible for delivering the mail. In
internet terms, IP is one part of TCP/IP.
The Transmission Control Protocol/Internet Protocol (TCP/IP) is a set of rules and procedures
for connecting devices across the internet. TCP/IP specifies how data is exchanged: Data is
broken down into packets and passed along a chain of routers from origin to
destination. This is the basis for all internet communication.
TCP defines how applications communicate across the network. It manages how a message is
broken down into a series of smaller packets, which are then transmitted over the internet and
reassembled in the right order at the destination address.
The IP portion of the protocol directs each packet to the right destination. Each gateway
computer on the network checks this IP address to determine where to forward the message.
Introduction of Classful IP Addressing
IP address is an address having information about how to reach a specific host, especially outside
the LAN. An IP address is a 32-bit unique address having an address space of 2^32.
Generally, there are two notations in which an IP address is written: binary notation and dotted
decimal notation.
Binary Notation: the address is written as 32 bits, usually grouped into four 8-bit octets,
e.g. 11000000 10101000 00000001 00000001.
Dotted Decimal Notation: each octet is written as a decimal number (0–255) separated by dots,
e.g. 192.168.1.1.
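The two notations are interconvertible octet by octet. A short sketch using only the Python standard library (the function names here are ours, chosen for illustration):

```python
def to_binary(dotted: str) -> str:
    """Render a dotted-decimal IPv4 address as four 8-bit binary octets."""
    return " ".join(f"{int(octet):08b}" for octet in dotted.split("."))

def to_dotted(binary: str) -> str:
    """Convert space-separated binary octets back to dotted decimal."""
    return ".".join(str(int(octet, 2)) for octet in binary.split())

print(to_binary("192.168.1.1"))
# 11000000 10101000 00000001 00000001
print(to_dotted("11000000 10101000 00000001 00000001"))
# 192.168.1.1
```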
Classful Addressing
The 32 bit IP address is divided into five sub-classes. These are:
 Class A
 Class B
 Class C
 Class D
 Class E
Each of these classes has a valid range of IP addresses. Classes D and E are reserved for
multicast and experimental purposes respectively. The order of bits in the first octet determines
the class of an IP address.
IPv4 address is divided into two parts:
 Network ID
 Host ID
The class of IP address is used to determine the bits used for network ID and host ID and the
number of total networks and hosts possible in that particular class. Each ISP or network
administrator assigns IP address to each device that is connected to its network.
IP addresses are globally managed by the Internet Assigned Numbers Authority (IANA) and
Regional Internet Registries (RIRs).
Class A:
IP addresses belonging to class A are assigned to networks that contain a large number of
hosts.
 The network ID is 8 bits long.
 The host ID is 24 bits long.
The higher order bit of the first octet in class A is always set to 0. The remaining 7 bits in the first
octet are used to determine the network ID. The 24 bits of host ID are used to determine the host in
any network. The default subnet mask for class A is 255.0.0.0. Therefore, class A has a total
of:
 2^7 - 2 = 126 network IDs (two addresses are subtracted because 0.x.x.x and 127.x.x.x are
special addresses)
 2^24 - 2 = 16,777,214 host IDs
IP addresses belonging to class A range from 1.x.x.x – 126.x.x.x.
Class B:
IP addresses belonging to class B are assigned to networks ranging from medium-sized to
large-sized.
 The network ID is 16 bits long.
 The host ID is 16 bits long.
The higher order bits of the first octet of IP addresses of class B are always set to 10. The
remaining 14 bits are used to determine the network ID. The 16 bits of host ID are used to
determine the host in any network. The default subnet mask for class B is 255.255.0.0. Class
B has a total of:
 2^14 = 16,384 network addresses
 2^16 - 2 = 65,534 host addresses
IP addresses belonging to class B range from 128.0.x.x – 191.255.x.x.
Class C:
IP addresses belonging to class C are assigned to small-sized networks.
 The network ID is 24 bits long.
 The host ID is 8 bits long.
The higher order bits of the first octet of IP addresses of class C are always set to 110. The
remaining 21 bits are used to determine the network ID. The 8 bits of host ID are used to determine
the host in any network. The default subnet mask for class C is 255.255.255.0. Class C has a
total of:
 2^21 = 2,097,152 network addresses
 2^8 - 2 = 254 host addresses
IP addresses belonging to class C range from 192.0.0.x – 223.255.255.x.
Class D:
IP addresses belonging to class D are reserved for multicasting. The higher order bits of the first
octet of IP addresses belonging to class D are always set to 1110. The remaining bits are for
the address that interested hosts recognize.
Class D does not possess any subnet mask. IP addresses belonging to class D range from
224.0.0.0 – 239.255.255.255.
Class E:
IP addresses belonging to class E are reserved for experimental and research purposes. IP
addresses of class E range from 240.0.0.0 – 255.255.255.254. This class doesn't have any
subnet mask. The higher order bits of the first octet of class E are always set to 1111.
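The first-octet rules for all five classes can be collected into one small helper function. This is a sketch; the function name is ours, not part of any standard library:

```python
def ip_class(address: str) -> str:
    """Return the classful category (A-E) of a dotted-decimal IPv4 address."""
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"   # leading bit 0 (0 and 127 are special but fall in this range)
    if first <= 191:
        return "B"   # leading bits 10
    if first <= 223:
        return "C"   # leading bits 110
    if first <= 239:
        return "D"   # leading bits 1110 (multicast)
    return "E"       # leading bits 1111 (experimental)

print(ip_class("10.0.0.1"))      # A
print(ip_class("172.16.37.5"))   # B
print(ip_class("192.168.1.1"))   # C
print(ip_class("224.0.0.5"))     # D
```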
Classful addressing: (summary diagram given in the running notes)
Subnetting:
When the IP system was first introduced, finding a network and sending data to it was easier
because the number of users on the internet was limited. As the number of users on the internet is
increasing, sending a data packet to the computer you want in a network is becoming quite difficult
these days. When a network is large enough to support an organization, network performance
becomes a significant concern.
An organization can use IP subnets to split more extensive networks for logical (firewalls, etc.) or
physical reasons (smaller broadcast domains, etc.). In other words, routers make routing decisions
based on subnets.
Subnetting is a method of dividing a single physical network into logical sub-networks
(subnets). Subnetting allows a business to expand its network without requiring a new
network number from its Internet service provider. Subnetting helps to reduce the network traffic
and also conceals network complexity. Subnetting is necessary when a single network number
must be assigned to several portions of a local area network (LAN).
The purpose of subnetting is to establish a computer network that is quick, efficient, and
robust. As networks grow in size and complexity, traffic must find more efficient pathways.
Bottlenecks and congestion would arise if all network traffic travelled across the system at the
same time, utilizing the same path, resulting in slow and wasteful backlogs. By creating a subnet,
you can limit the number of routers that network traffic must pass through. An engineer will
effectively establish smaller mini-routes within a larger network to allow traffic to go the shortest
distance possible.
Working of Subnets in Computer Networks
Subnetting, as we all know, separates the network into small subnets. While each subnet permits
communication between the devices connected to it, subnets are connected together by routers.
The network technology being utilized and the connectivity requirements define the size of a
subnet. Each organization is responsible for selecting the number and size of the subnets it
produces, within the constraints of the address space available for its use.
o To construct subnets, we borrow the most significant bits (MSBs) of the host ID. For example,
to create two subnets, we fix one MSB of the host ID: 0 for one subnet and 1 for the other. We
cannot alter the network bits, since doing so would change the network itself.
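The bit-borrowing described above can be demonstrated with Python's standard `ipaddress` module (the network 192.168.1.0/24 is an arbitrary example):

```python
import ipaddress

# A /24 network has 8 host bits; borrowing one of them (prefixlen_diff=1)
# yields two /25 subnets, one for each value of the borrowed bit.
net = ipaddress.ip_network("192.168.1.0/24")
subnets = list(net.subnets(prefixlen_diff=1))

for s in subnets:
    print(s, "-", s.num_addresses, "addresses")
# 192.168.1.0/25 - 128 addresses
# 192.168.1.128/25 - 128 addresses
```

Borrowing more bits (a larger `prefixlen_diff`) produces proportionally more, smaller subnets.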
We need a subnet mask to identify a subnet. The mask is formed by writing a "1" for each
Network ID bit and for each host ID bit reserved for the Subnet ID, and a "0" for the remaining
host bits. The subnet mask is what allows a data packet arriving from the internet to be forwarded
to the intended subnet.

The subnet mask also specifies which part of an address is to be used as the Subnet ID. To apply
the subnet mask to the whole network address, a binary AND operation is used: the result bit is
"1" only when both input bits are "1"; otherwise it is "0".
The Subnet ID results from this operation. Routers use the Subnet ID to choose the best route
among the sub-networks.
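The AND operation can be checked in Python; here it is applied to the Class B example used in these notes (172.16.37.5 with the default Class B mask 255.255.0.0):

```python
import ipaddress

address = int(ipaddress.IPv4Address("172.16.37.5"))
mask = int(ipaddress.IPv4Address("255.255.0.0"))

# Bitwise AND keeps a bit only where both address and mask have a 1,
# zeroing the host portion and leaving the network/subnet ID.
network_id = ipaddress.IPv4Address(address & mask)
print(network_id)   # 172.16.0.0
```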
o The two components that make up an IP address are the Network Prefix (sometimes called
the Network ID) and the Host ID. Depending on whether the address is Class A, B, or C, the
split between Network Prefix and Host ID falls at a different point. For example, for the
Class B IPv4 address 172.16.37.5, the Network Prefix is 172.16.0.0 and the Host ID is 37.5.
Internet Protocol Addressing:
Internet protocols make communication on the internet possible. Every device connected to the
internet is assigned an IP address. This enables the identification and location of networked
computers on the internet.

Internet protocols also enable the transportation of data items on the internet. The data
items are transmitted in form of independent messages whose contents are not guaranteed.

There are two versions of internet protocol, i.e., internet protocol version 4 and internet protocol
version 6, used at the network layer of the OSI model.
Internet protocol version 4
Internet protocol version 4, just as its name suggests, was the fourth version of the IP suite,
developed by DARPA and released for use in 1982. It is a significant protocol of the standard
networking protocols on the internet and all other packet-switched networks.
IPv4 is a connectionless protocol: it does not guarantee that data is delivered, nor does it
arrange the data packets in order. Some packets may also be duplicated, so ordering and
reliability are handled by the transport layer, i.e., TCP or UDP. IPv4 also reserves special-use
and multicast addresses, including approximately eighteen million addresses for private networks.
IPv4 Header: (diagram given in the running notes)
IPv4 address digits are separated with decimal points, as shown below.
Examples of IPv4 addresses:
172.16.254.1
169.254.255.255
IPv4 addressing

Three different types of addressing modes are supported in IPv4, namely:

 Unicast addressing mode
Unicast addressing mode only allows for data to be sent to a specific host at a time. The address
of the data destination is the 32-bit IP address of the host device.

 Broadcast addressing mode
Broadcast addressing mode involves the transmission of a data packet to a network of hosts, with
the destination address set to a special broadcast address. The packet sent can be processed by
any host on the network.

 Multicast addressing mode
Multicast addressing mode combines characteristics of unicast and broadcast. The packet is not
addressed to any one specific host, nor to all hosts on the network; it is processed by more than
one host device on the network — the members of the multicast group.
Internet protocol version 6
Internet protocol version 6 is the newest version of the internet protocol, developed to replace
the fourth version, IPv4.
IPv6 was brought into existence by the IETF because IPv4 was exhausting its addresses. IPv6
was intended to replace IPv4 entirely; however, this has not been the case, and IPv4 has
continued to live on alongside it.
IPv6 Header: (diagram given in the running notes)
IPv6 uses an address of 128 bits, allowing 2^128 addresses. IPv6 is not backward-compatible
with IPv4, so IPv4-only and IPv6-only hosts cannot communicate directly; transition
mechanisms are required.
In IPv6, addresses are represented as eight groups of four hexadecimal (base-16) digits, with
colons between the groups. The long line of digits can be shortened using a standard technique
without causing confusion: leading zeros within a group may be dropped, and one consecutive
run of all-zero groups may be replaced by "::".

Example

2001:0db8:0001:0000:0000:0ab9:c0a8:0102 is shortened to 2001:db8:1::ab9:c0a8:102. Both
addresses refer to the same machine on the internet; the only difference is that one is condensed
to reduce its length.
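Python's standard `ipaddress` module applies these shortening rules automatically, which makes it a convenient way to verify the example:

```python
import ipaddress

full = "2001:0db8:0001:0000:0000:0ab9:c0a8:0102"
addr = ipaddress.IPv6Address(full)

print(addr.compressed)  # 2001:db8:1::ab9:c0a8:102
print(addr.exploded)    # back to the full eight-group form
```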
There are three methods of transition from IPv4 to IPv6, as stated below:

1. Dual stacking - running both IPv4 and IPv6 on the same device.
2. Tunneling - carrying IPv6 packets inside IPv4 packets so that IPv6 hosts can communicate
across an IPv4 network.
3. Network Address Translation - translating between hosts using different IP versions so they
can communicate.
Classful Addressing and Classless Addressing:
Classful addressing divides the IPv4 address space (0.0.0.0-255.255.255.255) into 5 IP address
classes: A, B, C, D, and E. However, class A networks, along with class B networks and class C
networks, are used for network hosts. Class D networks, which cover the 224.0.0.0-
239.255.255.255 IP address range, are reserved for multicasting, and class E (240.0.0.0-
255.255.255.255) is reserved for “future use.”
The table below details the default network mask (subnet mask), IP address ranges, number of
networks, and number of addresses per network of each address class.
IPv4 address class   Network Mask     IPv4 address range            Number of IPv4 Networks   Number of IPv4 addresses per network
A                    255.0.0.0        0.0.0.0 – 127.255.255.255     128                       16,777,214
B                    255.255.0.0      128.0.0.0 – 191.255.255.255   16,384                    65,534
C                    255.255.255.0    192.0.0.0 – 223.255.255.255   2,097,152                 254
As the table shows, Class A uses the first 8 bits of an address for the network portion and may
suit very large networks. Class B is for networks much smaller than Class A, but still large in
their own right. Class C addresses are suitable for small networks.

What are the limitations of classful IP addressing?

As you can probably guess, the internet is hungry for IP addresses. While classful IP addressing
was much more efficient than the older “first 8-bits” method of chopping up the IPv4 address
space, it still wasn’t enough to keep up with growth.

As internet popularity continued to surge past 1981, it became clear that allocating blocks of
16,777,216, 65,536, or 256 addresses simply wasn’t sustainable. Addresses were being wasted in
too-large blocks, and it was clear there’d be a tipping point where we ran out of IP address space
altogether.

One of the best ways to understand why this was a problem is to consider an organization that
needed a network just slightly bigger than a Class C. For example, suppose our example
organization needs 500 IP addresses. Going up to a Class B network means wasting 65,034
addresses (65,534 usable Class B host addresses minus 500). Similarly, if it needed just 2 public
IP addresses, a Class C would waste 252 (254 usable addresses – 2).

Any way you look at it, IP addresses under the IPv4 protocol were running out, either through
waste or the upper limits of the system.

Did you know? There's a calculated limit of 4,294,967,296 (2^32) IPv4 addresses, and the free
pools of the regional internet registries have now been exhausted.

Classless Addressing:

Classless addressing is an IPv4 addressing architecture that uses variable-length subnet masking.
In 1993, Classless Inter-Domain Routing (CIDR) introduced the concept of classless addressing.
With classful addressing, the size of networks is fixed: each address range has a default subnet
mask. Classless addressing, however, decouples IP address ranges from a default subnet mask,
allowing for variable-length subnet masking (VLSM).

Using classless addressing and VLSM, addresses can be allocated much more efficiently. This is
because network admins get to pick network masks, and in turn, blocks of IP addresses that are
the right size for any purpose.

Working of classless addressing:

At a high level, classless addressing works by allowing IP addresses to be assigned arbitrary
network masks without respect to "class." That means /8 (255.0.0.0), /16 (255.255.0.0), and /24
(255.255.255.0) network masks can be assigned to any address that would traditionally have been
in the Class A, B, or C range. Additionally, we are no longer tied down to /8, /16, and /24 as
our only options, and that is where classless addressing gets very interesting.

Going back to our example organization: if we need 500 IP addresses, a subnet calculator shows
that a /23 block is much more efficient than a Class B allocation. A /23 gives us 510 usable host
addresses, so by switching to classless addressing we have avoided wasting over 65,000 addresses.
Similarly, if we need just two hosts, a /30 (2 usable addresses) saves 252 addresses compared
with a Class C.
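The /23 and /30 arithmetic above can be verified with Python's standard ipaddress module (the 10.0.0.0 prefixes here are just illustrative choices):

```python
import ipaddress

# A /23 network: 512 total addresses, 510 usable hosts
# (the network and broadcast addresses are excluded).
net = ipaddress.ip_network("10.0.0.0/23")
print(net.num_addresses)         # 512
print(len(list(net.hosts())))    # 510

# A /30 network: 4 total addresses, 2 usable hosts.
small = ipaddress.ip_network("10.0.0.0/30")
print(len(list(small.hosts())))  # 2
```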
Unit-V

NETWORK PROTOCOLS AND MANAGEMENT


Protocols in computer Network:
A protocol is a set of rules that governs how networking devices connect to and communicate
with each other. There are several protocols that manage a network and its components.
HTTP
o HTTP stands for Hyper Text Transfer Protocol.
o It is a protocol used to access the data on the World Wide Web (www).
o The HTTP protocol can be used to transfer the data in the form of plain text, hypertext,
audio, video, and so on.
o This protocol is known as HyperText Transfer Protocol because it is efficient enough to be
used in a hypertext environment, where there are rapid jumps from one document to another.
o HTTP is similar to FTP in that it also transfers files from one host to another. But HTTP is
simpler than FTP because it uses only one connection, i.e., no separate control connection
is needed to transfer the files.
o HTTP is used to carry the data in the form of MIME-like format.
o HTTP is similar to SMTP as the data is transferred between client and server. The HTTP
differs from the SMTP in the way the messages are sent from the client to the server and
from server to the client. SMTP messages are stored and forwarded while HTTP
messages are delivered immediately.
Features of HTTP:
o Connectionless protocol: HTTP is a connectionless protocol. HTTP client initiates a
request and waits for a response from the server. When the server receives the request,
the server processes the request and sends back the response to the HTTP client after
which the client disconnects the connection. The connection between client and server
exists only for the duration of the current request and response.
o Media independent: HTTP protocol is a media independent as data can be sent as long
as both the client and server know how to handle the data content. It is required for both
the client and server to specify the content type in MIME-type header.
o Stateless: HTTP is a stateless protocol as both the client and server know each other only
during the current request. Due to this nature of the protocol, both the client and server do
not retain the information between various requests of the web pages.
HTTP Transactions
[Figure: an HTTP transaction - the client sends a request message and the server returns a response message.]

The above figure shows the HTTP transaction between client and server. The client initiates a
transaction by sending a request message to the server. The server replies to the request message
by sending a response message.
Messages
HTTP messages are of two types: request and response. Both the message types follow the same
message format.
Request Message: The request message is sent by the client that consists of a request line,
headers, and sometimes a body.
Response Message: The response message is sent by the server to the client that consists of a
status line, headers, and sometimes a body.
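The request-message format described above can be illustrated by building a minimal HTTP request by hand (the host name is only a placeholder):

```python
# A minimal HTTP request message: a request line, headers, and a blank
# line (a body may follow). Lines end with CRLF per the protocol.
request = (
    "GET /index.html HTTP/1.1\r\n"   # request line: method, path, version
    "Host: www.example.com\r\n"      # headers
    "Connection: close\r\n"
    "\r\n"                           # blank line ends the header section
)

# The request line splits into its three parts.
method, path, version = request.splitlines()[0].split()
print(method, path, version)  # GET /index.html HTTP/1.1
```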

Uniform Resource Locator (URL)


o A client that wants to access a document on the internet needs an address, and to facilitate
access to documents, HTTP uses the concept of the Uniform Resource Locator
(URL).
o The Uniform Resource Locator (URL) is a standard way of specifying any kind of
information on the internet.
o The URL defines four parts: method, host computer, port, and path.

o Method: The method is the protocol used to retrieve the document from a server. For
example, HTTP.
o Host: The host is the computer where the information is stored, and the computer is
given an alias name. Web pages are mainly stored in the computers and the computers are
given an alias name that begins with the characters "www". This field is not mandatory.
o Port: The URL can also contain the port number of the server, but it's an optional field. If
the port number is included, then it must come between the host and path and it should be
separated from the host by a colon.
o Path: Path is the pathname of the file where the information is stored. The path itself can
contain slashes that separate directories from subdirectories and files.
File Transfer Protocol:
File Transfer Protocol (FTP) is a standard network protocol used for transferring files between a
client and a server on a computer network. It is a reliable and widely used protocol for file
transfers over the Internet.
FTP operates on the client-server model, where the client initiates a connection to the server and
requests to transfer files. The server, which is usually a dedicated FTP server software, listens for
incoming connections on TCP port 21. Once the connection is established, the client can
authenticate itself with the server using a username and password.
FTP supports two modes of operation: Active mode and Passive mode.
In active mode, the client specifies a port for the server to establish a data connection back to the
client. In passive mode, the client establishes both the control and data connections, and the
server listens for incoming data connections.
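In active mode, the client announces where the server should connect back using the FTP PORT command, which encodes the IP address and port as six decimal numbers (h1,h2,h3,h4,p1,p2), with the port split as p1*256 + p2. A small sketch of that encoding (the address and port are illustrative):

```python
def encode_port_command(ip: str, port: int) -> str:
    """Build an FTP PORT command announcing the client's data port."""
    h1, h2, h3, h4 = ip.split(".")
    p1, p2 = divmod(port, 256)  # port = p1*256 + p2
    return f"PORT {h1},{h2},{h3},{h4},{p1},{p2}"

# Client listening on 192.168.1.2 port 1234; 1234 = 4*256 + 210.
print(encode_port_command("192.168.1.2", 1234))
# PORT 192,168,1,2,4,210
```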
Once the connection is established, the client can perform various operations on the server's file
system, such as uploading (putting) files from the client to the server, downloading (getting) files
from the server to the client, renaming files, deleting files, creating directories, and listing the
contents of directories.
FTP can be used through command-line interfaces, dedicated FTP client software, or integrated
into web browsers. Many operating systems and software packages include built-in FTP clients
for basic file transfers.
It's worth noting that FTP is an unencrypted protocol, meaning that the data transferred between
the client and server is not protected from eavesdropping. However, there is an extension to FTP
called FTP Secure (FTPS) that adds SSL/TLS encryption to the FTP communication, providing a
secure channel for file transfers. Another secure alternative is SSH File Transfer Protocol
(SFTP), which runs over a secure shell (SSH) connection.
While FTP has been widely used in the past, its usage has decreased in recent years due to
security concerns and the availability of more secure file transfer protocols. Nonetheless, FTP
still remains in use in certain scenarios where security is not a critical requirement or in legacy
systems.
File Operations:
FTP provides various operations for managing files on the server. These operations include
uploading files from the client to the server (put), downloading files from the server to the client
(get), deleting files (delete), renaming files (rename), creating directories (mkdir), removing
directories (rmdir), changing directories (cd), and listing directory contents (ls).
Simple Mail Transfer Protocol (SMTP):
SMTP is a protocol used for sending and receiving email messages. It is responsible for the
transfer of email from a source to its destination across networks. SMTP operates on the
application layer of the TCP/IP protocol stack and uses TCP as its transport protocol. It typically
works in a client-server model, where the client initiates a connection with the server and sends
the email message. The server then relays the message to the intended recipient's email server.
SMTP uses port 25 for communication.
Components of SMTP

First, we will break the SMTP client and SMTP server into two components such as user agent
(UA) and mail transfer agent (MTA). The user agent (UA) prepares the message, creates the
envelope and then puts the message in the envelope. The mail transfer agent (MTA) transfers this
mail across the internet.
SMTP allows a more complex system by adding a relaying system. Instead of just having one
MTA at sending side and one at receiving side, more MTAs can be added, acting either as a
client or server to relay the email.
The relaying system also allows email to reach users on sites that do not use the TCP/IP
protocol suite. This is achieved with a mail gateway, a relay MTA that can be used to receive
an email and pass it on.

Working of SMTP
Composition of Mail: A user sends an e-mail by composing an electronic mail message using a
Mail User Agent (MUA). Mail User Agent is a program which is used to send and receive mail.
The message contains two parts: body and header. The body is the main part of the message
while the header includes information such as the sender and recipient address. The header also
includes descriptive information such as the subject of the message. In this case, the message
body is like a letter and header is like an envelope that contains the recipient's address.
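The header/body structure described above maps directly onto Python's standard email library (the addresses and text here are placeholders):

```python
from email.message import EmailMessage

# Header = the "envelope" information; body = the "letter" itself.
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Semester timetable"                # descriptive header field
msg.set_content("The V semester timetable is on the notice board.")

print(msg["Subject"])     # header
print(msg.get_content())  # body
```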
Submission of Mail: After composing an email, the mail client then submits the completed e-
mail to the SMTP server by using SMTP on TCP port 25.

Delivery of Mail: E-mail addresses contain two parts: the username of the recipient and the
domain name. For example, in vivek@gmail.com, "vivek" is the username of the recipient and
"gmail.com" is the domain name.
If the domain name of the recipient's email address is different from the sender's domain name,
then the Mail Submission Agent (MSA) will send the mail to the Mail Transfer Agent (MTA). To relay the email, the MTA
will find the target domain. It checks the MX record from Domain Name System to obtain the
target domain. The MX record contains the domain name and IP address of the recipient's
domain. Once the record is located, MTA connects to the exchange server to relay the message.

Receipt and Processing of Mail: Once the incoming message is received, the exchange server
delivers it to the incoming server (Mail Delivery Agent) which stores the e-mail where it waits
for the user to retrieve it.
Access and Retrieval of Mail: The stored email in MDA can be retrieved by using MUA (Mail
User Agent). MUA can be accessed by using login and password.
Address Resolution Protocol:
Address Resolution Protocol (ARP) is a protocol or procedure that connects an ever-changing
Internet Protocol (IP) address to a fixed physical machine address, also known as a media access
control (MAC) address, in a local-area network (LAN).
This mapping procedure is important because IP and MAC addresses differ in length, and a
translation is needed so that the systems can recognize one another. The most used IP version
today is IP version 4 (IPv4), in which an IP address is 32 bits long, while MAC addresses are
48 bits long. ARP maps a 32-bit IP address to the corresponding 48-bit MAC address.
There is a networking model known as the Open Systems Interconnection (OSI) model. First
developed in the late 1970s, the OSI model uses layers to give IT teams a visualization of what is
going on with a particular networking system. This can be helpful in determining which layer
affects which application, device, or software installed on the network, and further, which IT or
engineering professional is responsible for managing that layer.
The MAC address belongs to the data link layer, which establishes and terminates a
connection between two physically connected devices so that data transfer can take place. The IP
address belongs to the network layer, the layer responsible for forwarding packets of
data through different routers. ARP works between these two layers.

When a new computer joins a local area network (LAN), it will receive a unique IP address to
use for identification and communication. Packets of data arrive at a gateway, destined for a
particular host machine. The gateway, or the piece of hardware on a network that allows data to
flow from one network to another, asks the ARP program to find a MAC address that matches
the IP address. The ARP cache keeps a list of each IP address and its matching MAC address.
The ARP cache is dynamic, but users on a network can also configure a static ARP
table containing IP addresses and MAC addresses.
ARP caches are kept on all operating systems in an IPv4 Ethernet network. Every time a device
requests a MAC address to send data to another device connected to the LAN, the device verifies
its ARP cache to see if the IP-to-MAC-address connection has already been completed. If it
exists, then a new request is unnecessary. However, if the translation has not yet been carried
out, then the request for network addresses is sent, and ARP is performed.
An ARP cache size is limited by design, and addresses tend to stay in the cache for only a few
minutes. It is purged regularly to free up space. This design is also intended for privacy and
security to prevent IP addresses from being stolen or spoofed by cyber attackers. While MAC
addresses are fixed, IP addresses are constantly updated.
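An ARP request itself is a small, fixed-format packet; a sketch of packing one for IPv4 over Ethernet with Python's struct module (the MAC and IP addresses are illustrative):

```python
import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes,
                      target_ip: bytes) -> bytes:
    """Pack a 28-byte ARP request for IPv4 over Ethernet."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IPv4
        6,            # hardware (MAC) address length
        4,            # protocol (IP) address length
        1,            # opcode: 1 = request, 2 = reply
        sender_mac, sender_ip,
        b"\x00" * 6,  # target MAC unknown - that is what we are asking for
        target_ip,
    )

pkt = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff",
                        bytes([192, 168, 1, 10]),
                        bytes([192, 168, 1, 1]))
print(len(pkt))  # 28
```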

Reverse Address Resolution Protocol:


Reverse Address Resolution Protocol (RARP) is a network protocol used to map a physical or
MAC address to an IP address. Unlike traditional Address Resolution Protocol (ARP), which
resolves IP addresses to MAC addresses, RARP performs the opposite process.
RARP is primarily used in older systems or diskless workstations that need to obtain their IP
address dynamically. In such cases, a device may have a unique MAC address but lacks a pre-
configured IP address. RARP allows these devices to request an IP address from a RARP server
on the network based on their MAC address.
Here's a simplified overview of how RARP works:
1. The device without an IP address sends a RARP request broadcast packet on the network.
2. A RARP server on the network receives the request and checks its database for the
corresponding IP address associated with the MAC address provided in the request.
3. If the server finds a match, it sends a RARP reply packet containing the IP address back
to the requesting device.
4. The device receives the reply and configures its IP address based on the information
provided.
RARP operates at the data link layer of the OSI model and uses Ethernet frames for
communication. It relies on the broadcast nature of the network, which means that the RARP
request is sent to all devices on the local network segment.
While RARP was widely used in the past, it has largely been replaced by other dynamic IP
address assignment methods such as Dynamic Host Configuration Protocol (DHCP). DHCP
provides a more flexible and scalable solution for dynamically assigning IP addresses to devices
on a network.
Dynamic Host Configuration Protocol
Dynamic Host Configuration Protocol (DHCP) is a network management protocol used to
dynamically assign an IP address to any device, or node, on a network so they can communicate
using IP (Internet Protocol). DHCP automates and centrally manages these configurations. There
is no need to manually assign IP addresses to new devices. Therefore, there is no requirement for
any user configuration to connect to a DHCP based network.
DHCP can be implemented on local networks as well as large enterprise networks. DHCP is the
default protocol used by most routers and networking equipment. DHCP is defined in RFC
(Request for Comments) 2131.
Working of DHCP:
DHCP runs at the application layer of the TCP/IP protocol stack to dynamically assign IP addresses
to DHCP clients/nodes and to allocate TCP/IP configuration information to the DHCP clients.
Information includes subnet mask information, default gateway, IP addresses and domain name
system addresses.
DHCP is based on client-server protocol in which servers manage a pool of unique IP addresses,
as well as information about client configuration parameters, and assign addresses out of those
address pools.
DHCP does the following:
o DHCP manages the provision of all the nodes or devices added or dropped from the
network.
o DHCP maintains the unique IP address of the host using a DHCP server.
o It sends a request to the DHCP server whenever a client/node/device, which is configured
to work with DHCP, connects to a network. The server acknowledges by providing an IP
address to the client/node/device.
Components of DHCP
When working with DHCP, it is important to understand all of the components. Following are the
list of components:
o DHCP Server: DHCP server is a networked device running the DHCP service that holds
IP addresses and related configuration information. This is typically a server or a router but
could be anything that acts as a host, such as an SD-WAN appliance.
o DHCP client: DHCP client is the endpoint that receives configuration information from a
DHCP server. This can be any device like computer, laptop, IoT endpoint or anything else
that requires connectivity to the network. Most of the devices are configured to receive
DHCP information by default.
o IP address pool: IP address pool is the range of addresses that are available to DHCP
clients. IP addresses are typically handed out sequentially from lowest to the highest.
o Subnet: Subnet is the partitioned segments of the IP networks. Subnet is used to keep
networks manageable.
o Lease: Lease is the length of time for which a DHCP client holds the IP address
information. When a lease expires, the client has to renew it.
o DHCP relay: A host or router that listens for client messages being broadcast on that
network and then forwards them to a configured server. The server then sends responses
back to the relay agent that passes them along to the client. DHCP relay can be used to
centralize DHCP servers instead of having a server on each subnet.
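The pool-and-lease idea above can be sketched as a toy allocator (a DHCP-style illustration of pools and leases only, not the real wire protocol):

```python
import ipaddress

class LeasePool:
    """Toy DHCP-style pool: hand out addresses from a range, track leases."""
    def __init__(self, network: str):
        # Addresses are handed out sequentially, lowest first.
        self._free = list(ipaddress.ip_network(network).hosts())
        self._leases = {}  # MAC address -> IP address

    def request(self, mac: str) -> str:
        if mac in self._leases:          # renewing an existing lease
            return str(self._leases[mac])
        ip = self._free.pop(0)           # lowest available address
        self._leases[mac] = ip
        return str(ip)

    def release(self, mac: str) -> None:
        # An expired/released lease returns its address to the pool.
        self._free.insert(0, self._leases.pop(mac))

pool = LeasePool("192.168.1.0/29")
print(pool.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.1
print(pool.request("aa:bb:cc:dd:ee:02"))  # 192.168.1.2
```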
DNS-DOMAIN NAME SYSTEM
An application layer protocol defines how the application processes running on different systems,
pass the messages to each other.
o DNS stands for Domain Name System.
o DNS is a directory service that provides a mapping between the name of a host on the
network and its numerical address.
o DNS is required for the functioning of the internet.
o Each node in a tree has a domain name, and a full domain name is a sequence of symbols
specified by dots.
o DNS is a service that translates the domain name into IP addresses. This allows the users
of networks to utilize user-friendly names when looking for other hosts instead of
remembering the IP addresses.
o For example, suppose the FTP site at EduSoft had the IP address 132.147.165.50; most
people would reach this site by specifying ftp.EduSoft.com. The domain name is therefore
more reliable than the IP address, which may change.
DNS is a TCP/IP protocol used on different platforms. The domain name space is divided into
three different sections: generic domains, country domains, and inverse domain.
Generic Domains
o It defines the registered hosts according to their generic behavior.
o Each node in a tree defines the domain name, which is an index
to the DNS database.
o It uses three-character labels, and these labels describe the
organization type.

Country Domain
The format of country domain is same as a generic domain, but it uses two-character country
abbreviations (e.g., us for the United States) in place of three character organizational
abbreviations.
Inverse Domain
The inverse domain is used for mapping an address to a name. For example, when a server has
received a request from a client and the server holds files for authorized clients only, it can
determine whether the client is on the authorized list by sending a query to the DNS server
asking it to map the client's address to a name.
Working of DNS
o DNS is a client/server network communication protocol. DNS clients send requests to the
server while DNS servers send responses to the client.
o Client requests contain a name which is converted into an IP address known as a forward
DNS lookups while requests containing an IP address which is converted into a name
known as reverse DNS lookups.
o DNS implements a distributed database to store the name of all the hosts available on the
internet.
o If a client such as a web browser sends a request containing a hostname, then a piece of
software called a DNS resolver sends a request to the DNS server to obtain the IP address
of that hostname. If the DNS server does not hold the IP address associated with the
hostname, it forwards the request to another DNS server. Once the IP address arrives at
the resolver, the client can complete its request over the Internet Protocol.
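A forward lookup can be performed through the standard library's resolver interface; resolving "localhost" works even offline via the local hosts file, while real domain names are looked up through the DNS servers described above:

```python
import socket

# Forward lookup: name -> IPv4 address. The resolver consults the hosts
# file and/or the configured DNS servers.
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1

# A reverse lookup (address -> name) uses socket.gethostbyaddr(ip).
```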
SNMP
o SNMP stands for Simple Network Management Protocol.
o Simple Network Management Protocol (SNMP) is an application-layer protocol for
monitoring and managing network devices on a local area network (LAN) or wide area
network (WAN).
o The purpose of SNMP is to provide network devices, such as routers, servers and printers,
with a common language for sharing information with a network management system
(NMS).
o SNMP has two components Manager and agent.
o The manager is a host that controls and monitors a set of agents such as routers.
o It is an application layer protocol in which a few manager stations can handle a set of
agents.
o The protocol designed at the application level can monitor the devices made by different
manufacturers and installed on different physical networks.
o It is used in a heterogeneous network made of different LANs and WANs connected by
routers or gateways.
Managers & Agents
o A manager is a host that runs the SNMP client program while the agent is a router that runs
the SNMP server program.
o Management of the internet is achieved through simple interaction between a manager and
agent.
o The agent is used to keep the information in a database while the manager is used to access
the values in the database. For example, a router can store the appropriate variables such
as a number of packets received and forwarded while the manager can compare these
variables to determine whether the router is congested or not.
o Agents can also contribute to the management process. A server program on the agent
checks the environment, if something goes wrong, the agent sends a warning message to
the manager.
Management with SNMP has three basic ideas:
o A manager checks the agent by requesting the information that reflects the behavior of the
agent.
o A manager also forces the agent to perform a certain function by resetting values in the
agent database.
o An agent also contributes to the management process by warning the manager regarding
an unusual condition.
Management Components
o Management is not achieved only through the SNMP protocol but also the use of other
protocols that can cooperate with the SNMP protocol. Management is achieved through
the use of the other two protocols: SMI (Structure of management information) and
MIB(management information base).
o Management is a combination of SMI, MIB, and SNMP. All three rely on two further
standards: Abstract Syntax Notation 1 (ASN.1) and the Basic Encoding Rules (BER).

SMI
The SMI (Structure of management information) is a component used in network management. Its
main function is to define the type of data that can be stored in an object and to show how to
encode the data for the transmission over a network.
MIB
o The MIB (Management information base) is a second component for the network
management.
o Each agent has its own MIB, which is a collection of all the objects that the manager can
manage. MIB is categorized into eight groups: system, interface, address translation, ip,
icmp, tcp, udp, and egp. These groups are under the mib object.
SNMP defines five types of messages: GetRequest, GetNextRequest, SetRequest, GetResponse,
and Trap.
Network Management
Network management is the procedure of administering and operating a data network
using a network management system. Current network management systems use software and
hardware to constantly collect and analyze data and push out configuration changes to increase
performance, reliability, and security.

It involves configuring, monitoring and possibly reconfiguring components in a network with the
goal of providing optimal performance, minimum downtime, proper security, accountability and
flexibility.
Functions of Network Management:
Network Management involves monitoring and controlling a network system so that it can
operate properly without downtime. The functions performed by a network management system
are categorized as follows -
Fault management:
Fault management is the process of identifying, isolating and fixing malfunctions that occur
in a networked system so that downtime is reduced. It can use active and passive components
to discover faults.
Configuration management
It refers to the process of initially configuring a network and then adjusting it in response to
changing network requirements. This function is important because improper configuration may
cause the network to work suboptimally or not work at all. Configuration involves parameters
at the network interface such as the IP address, DHCP, DNS server address, etc.
Network management:
Network management is the procedure of maintaining and organizing the active and passive
network elements. It supports services to maintain network elements and to monitor and manage
network performance. It covers fault recognition, investigation, troubleshooting, configuration
management and OS changes to fulfil user requirements. It allows computers in a network to
communicate with each other, and enables troubleshooting and performance enhancements.
Data logging and report
Data logs record all the interactions that pass through a specific point in a system. If any
system failure occurs, the administrator can go to the log and see what might have caused it.
Accounting management of network resources
Keeping a record of the usage of network resources is known as accounting management, e.g.,
examining the type or level of traffic at a particular port in order to determine how to better
allocate resources. It can also monitor users' activities regarding passwords, user IDs and
authentication for the usage of resources.
Performance Management
It involves monitoring network utilization, end to end response time & performance of resources
at various points in a network. For example, to keep track of switched interfaces in an Ethernet.
Network Security
It refers to the process of providing security on network and network resources. It involves
managing the security services on a resource by using access control, authentication,
confidentiality, integrity and non-repudiation.
Network Monitoring and Troubleshooting.
In today's world, the term network monitoring is widespread throughout the IT industry. Network
monitoring is a critical IT process where all networking components like routers, switches,
firewalls, servers, and VMs are monitored for fault and performance and evaluated continuously
to maintain and optimize their availability. One important aspect of network monitoring is that it
should be proactive. Finding performance issues and bottlenecks proactively helps in identifying
issues at the initial stage. Efficient proactive server monitoring can prevent network downtime or
failures.
How to perform network monitoring effectively
For efficient network monitoring, cut off unnecessary load on the network monitor at every
possible step by:
 Monitoring only the essentials
 Optimizing the monitoring interval
 Choosing the right protocol
 Setting thresholds

Monitoring essential network devices


Faulty network devices impact network performance. This can be eliminated through early
detection and this is why network device monitoring is of utmost importance. In effective network
monitoring, the first step is to identify the devices and the related performance metrics to be
monitored. Devices like desktops and printers are not critical and do not require frequent
monitoring whereas servers, routers and switches perform business critical tasks but at the same
time have specific parameters that can be selectively monitored.
Optimizing the network monitoring interval
Both critical and non-critical devices require monitoring; hence the second step, configuring
the monitoring interval. The monitoring interval determines the frequency at which network
devices and their related metrics are polled to identify performance and availability status.
Setting up monitoring intervals can help take the load off the network monitoring and reporting
tools and, in turn, your resources. The interval depends on the type of network device or
parameter being monitored. The availability status of devices should be monitored at the
shortest interval, preferably every minute. CPU and memory stats can be monitored once every
5 minutes. The interval for other metrics such as disk utilization can be extended; polling once
every 15 minutes is sufficient. Monitoring every device at the shortest interval only adds
unnecessary load to the network and is not necessary.
Choosing the right network protocol
With the devices identified and the monitoring intervals established, selecting the right network
protocol is the next step. When monitoring a network and its devices, a common good practice is
to adopt a secure and non-bandwidth consuming network management protocol to minimize the
impact it has on network performance. Most network devices and Linux servers support
SNMP (Simple Network Management Protocol) and CLI protocols, while Windows devices support the
WMI protocol. SNMP is one of the most widely accepted network protocols for managing and monitoring
network elements. Most of the network elements come bundled with an SNMP agent. They just
need to be enabled and configured to communicate with the network management system (NMS).
Allowing SNMP read-write access gives one complete control over the device. Using SNMP, one
can replace the entire configuration of the device. The best network monitor helps the administrator
take charge of the network by setting SNMP read/write privileges and restricting control for other
users.
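To give a feel for what an SNMP agent and NMS actually exchange, the sketch below encodes a dotted object identifier (OID) the way SNMP's BER encoding places it on the wire. The OID shown, 1.3.6.1.2.1.1.1.0 (sysDescr.0), is a standard MIB-2 object:

```python
def encode_oid(oid: str) -> bytes:
    """BER-encode a dotted OID into SNMP wire sub-identifiers.

    The first two components are packed into one byte (40*x + y);
    values above 127 are split into base-128 digits with the high
    bit set on every byte except the last.
    """
    parts = [int(p) for p in oid.split(".")]
    subids = [40 * parts[0] + parts[1]] + parts[2:]
    out = bytearray()
    for sub in subids:
        chunk = [sub & 0x7F]       # low 7 bits first
        sub >>= 7
        while sub:
            chunk.append((sub & 0x7F) | 0x80)
            sub >>= 7
        out.extend(reversed(chunk))
    return bytes(out)

# sysDescr.0 encodes with the familiar 0x2B prefix (40*1 + 3 = 43):
print(encode_oid("1.3.6.1.2.1.1.1.0").hex())  # 2b06010201010100
```

A full SNMP GET wraps this OID in further BER structures (version, community, PDU), which real agents and management stations handle for you; the sketch only shows the OID layer.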
Setting up monitoring thresholds
Network downtime can cost a lot of money. In many cases, the end user reports a network issue
before the monitoring team notices it, which is a symptom of a weak proactive monitoring setup.
The key challenge in real-time network monitoring is identifying performance bottlenecks
proactively, and this is where thresholds play a major role in a network monitoring
application. Threshold limits vary from device to device based on the business use case.
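As a minimal illustration, a threshold table can classify each polled value into a severity level. The specific warning and critical percentages below are assumptions that would be tuned per device and business use case:

```python
# Illustrative threshold table; real limits vary per device and use case.
THRESHOLDS = {
    # metric: (warning %, critical %)
    "cpu_utilization":  (80, 95),
    "disk_utilization": (75, 90),
}

def evaluate(metric, value):
    """Classify a polled value as 'ok', 'warning' or 'critical'."""
    warn, crit = THRESHOLDS[metric]
    if value >= crit:
        return "critical"
    if value >= warn:
        return "warning"
    return "ok"

print(evaluate("cpu_utilization", 85))   # warning
print(evaluate("disk_utilization", 92))  # critical
```

A proactive monitor would raise an alert as soon as a sample crosses the warning level, rather than waiting for a user complaint.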
Trouble Shooting
Network troubleshooting refers to the combined measures and processes used to identify, locate,
and resolve network problems located anywhere along a network, from WAN to LAN.
It's a logical process that network engineers or IT professionals use to resolve network problems
and improve network performance. Essentially, to fix network problems, you first need to
troubleshoot them. When troubleshooting a network, many IT professionals use network
troubleshooting software or various network troubleshooting tools to help with the process.
Network troubleshooting involves a range of techniques and tools, such as network performance
monitoring, network analyzers, ping and traceroute utilities, and network performance testing
tools. Network administrators and technicians use these tools to identify and diagnose network
issues, which may include slow network speeds, connectivity problems, security breaches, and
other issues.
Some common steps involved in network troubleshooting include identifying the problem, testing
the network, isolating the issue, analyzing network logs, and implementing a solution. This process
may require collaboration among network administrators, IT support staff, and other stakeholders
to identify and resolve the issue.
Effective network troubleshooting is critical for maintaining reliable network performance,
minimizing downtime, and ensuring the security of network resources.
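The "test the network" step often begins with a simple reachability probe. The sketch below uses a plain TCP connection attempt as a hypothetical stand-in for the ping and port checks real tools perform, and demonstrates it against a throwaway local listener:

```python
import socket

def tcp_check(host, port, timeout=2.0):
    """A basic connectivity probe: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a throwaway listener on localhost.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # the OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]
print(tcp_check("127.0.0.1", port))   # True: the listener is up
srv.close()
print(tcp_check("127.0.0.1", port))   # False: nothing listening now
```

A result of False isolates the fault to the path or the service on that port, after which log analysis and device checks narrow it down further.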
There are many different use cases for network troubleshooting. Here are a few examples:
1. Slow network speeds: When users experience slow network speeds, network
troubleshooting can be used to identify the cause of the problem. This could be due to a
congested network, a faulty switch, or a misconfigured router.
2. Dropped connections: When users experience dropped connections, network
troubleshooting can help identify the source of the problem. This could be due to
interference from other wireless devices, a faulty network card, or a weak signal.
3. Network outages: When the network goes down, network troubleshooting can be used to
quickly identify and resolve the issue. This could be due to a power outage, a failed piece
of network hardware, or a configuration issue.
4. Security breaches: Network troubleshooting can also be used to identify and address
security breaches. For example, if a user's computer is infected with malware, network
troubleshooting can be used to isolate the infected device and prevent the malware from
spreading to other devices on the network.
5. Software compatibility issues: When new software is installed on a network,
compatibility issues can arise. Network troubleshooting can help identify and resolve these
issues, ensuring that the new software works as intended and does not cause any network
problems.
Remote Network Monitoring (RMON):
RMON stands for Remote Network Monitoring. It is an extension of the Simple Network
Management Protocol (SNMP) that allows detailed monitoring of network statistics for Ethernet
networks.
RMON was initially developed to address remote site and local area network (LAN) segment
management from a centralized location. The RMON standard determines a group of functions
and statistics exchanged between RMON compatible network probes and console managers.
RMON Versions
There are two RMON versions, as follows −
RMON1 MIB
It defines 10 MIB groups for basic network monitoring and operates on the MAC layer and the
physical layer.
 Statistics MIB Group − It contains statistics measured by the probe for each monitored
interface on this device, including packets dropped, packets sent, bytes sent,
broadcast packets, multicast packets, CRC errors, giants, and packet fragments.
 History − It records periodic statistical samples from a network and stores them for
retrieval. It contains the number of samples, items sampled in different periods.
 Alarm − It periodically takes statistical samples and compares them with the threshold set
for events generation. It includes an alarm table & implementation of event group, Alarm
type, interval, starting threshold, stop threshold.
 HOST − It contains statistics associated with each host discovered on the network.
Statistics contains Host address, packets & bytes that are received and transmitted,
broadcast packets, multicast packets, error packets.
 HOST top N − It prepares tables that describe the top hosts. It contains statistics on hosts,
sample, and start and stop period, rate base duration.
 Matrix − It stores and retrieves statistics for conversations between sets of two addresses.
Its elements are source & destination address pairs, their packets, bytes & errors for each
pair.
 Filters − It enables packets to be matched by a filter equation for capturing packets or
events. Its elements are bit-filter type, filter expression, conditional expression to other
filters.
 Packet Capture − It enables packets to be captured after they flow through a channel. Its
elements are the buffer size for captured packets, full status, and the number of captured
packets.
 Events − It controls the generation and notification of events from a device. Its elements
are event type, description, last time event sent.
 Token ring − It supports token rings.
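The Alarm group's rising/falling threshold pair is worth illustrating: an event fires when a sample crosses the rising threshold, and the alarm re-arms only after a later sample drops below the falling threshold, so a value hovering near the limit does not flood the Events group. A minimal sketch of that hysteresis:

```python
class RmonAlarm:
    """Sketch of the RMON Alarm group's rising/falling hysteresis.

    Fires one event when a sample crosses the rising threshold and
    re-arms only after a sample drops to the falling threshold or
    below, preventing an event storm from a flapping value.
    """
    def __init__(self, rising, falling):
        self.rising, self.falling = rising, falling
        self.armed = True

    def sample(self, value):
        if self.armed and value >= self.rising:
            self.armed = False
            return "risingAlarm"   # notification sent via the Events group
        if not self.armed and value <= self.falling:
            self.armed = True      # re-arm; no event on the way down here
        return None

alarm = RmonAlarm(rising=90, falling=70)
readings = [50, 92, 95, 91, 60, 93]
print([alarm.sample(v) for v in readings])
# [None, 'risingAlarm', None, None, None, 'risingAlarm']
```

Note that the samples at 95 and 91 generate no event even though they exceed the rising threshold; only after the value falls to 60 does the alarm re-arm and fire again at 93.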
RMON 2 MIB Group
It operates on the upper layers of the OSI model: network, transport, session, presentation,
and application.
 Protocol Directory − It provides a simple and interoperable way for an RMON 2 application to
establish which protocols a particular RMON 2 agent implements.
 Protocol Distribution − It maps the data collected by a probe to the correct protocol name
displayed to the network manager.
 Address Mapping − It translates MAC-layer addresses into network-layer addresses, which are
easier to read. It also supports the SNMP management platform and can lead to improved
topology mapping.
 Network Layer host − It contains statistics for network layer traffic to or from each host.
 Network Layer Matrix − It contains network layer traffic statistics for conversations
between pairs of hosts.
 Application Layer Host − It contains statistics for application layer traffic to or from each
host.
 Application Layer Matrix − It stores and retrieves application layer statistics for
conversation between sets of two addresses.
 Probe Configuration − It provides a standard way to remotely configure probe parameters
such as trap destination and out-of-band management.
 User History Collection − It contains periodic samples of user-specified variables.
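The Protocol Distribution idea, mapping captured traffic to protocol names and tallying packets and octets per protocol, can be sketched as follows. The packet list is invented sample data; a real probe would obtain it by inspecting network- and application-layer headers:

```python
from collections import Counter

# Hypothetical decoded packets: (protocol, size in bytes).
packets = [
    ("HTTP", 1400), ("DNS", 80), ("HTTP", 900),
    ("SNMP", 120), ("DNS", 75), ("HTTP", 1500),
]

pkts = Counter()    # packet count per protocol
octets = Counter()  # byte count per protocol
for proto, size in packets:
    pkts[proto] += 1
    octets[proto] += size

for proto in sorted(pkts):
    print(f"{proto}: {pkts[proto]} packets, {octets[proto]} octets")
# DNS: 2 packets, 155 octets
# HTTP: 3 packets, 3800 octets
# SNMP: 1 packets, 120 octets
```

These per-protocol tallies are what the network manager's console ultimately displays, letting an administrator see at a glance which protocols dominate a segment's traffic.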
Composed By: D. CHANDRA MOULI., MCA. WCF,
Department of Computer Science
Vijayam Science and Arts Degree College, Chittoor.