
COMPUTER HARDWARE 1

LECTURE NOTE
MODULE ONE
History/Evolution of Computer
The evolution of computers is sometimes discussed in reference to the different
generations of computing devices. Each generation of computer is characterized by a
major technological development that fundamentally changed the way computers
operate, resulting in increasingly smaller, cheaper, more powerful, more efficient
and more reliable devices.

Below is a summary of each generation and the developments that led to the devices
in use today.

First Generation (1940-1956): Vacuum Tubes


The first computers used vacuum tubes for circuitry and magnetic drums for memory,
and were often enormous, taking up entire rooms. They were very expensive to
operate and, in addition, consumed a great deal of electricity and generated a lot of heat, which
was often the cause of malfunctions. First-generation computers relied on machine
language, the lowest-level programming language understood by computers, to
perform operations, and they could only solve one problem at a time. Input was based
on punched cards and paper tape, and output was displayed on printouts. The
UNIVAC (Universal Automatic Computer) and ENIAC (Electronic Numerical Integrator
and Calculator) computers are examples of first-generation computing devices. The
UNIVAC was the first commercial computer delivered to a business client, the U.S.
Census Bureau in 1951.

The main features of the first generation are:

• Vacuum tube technology
• Unreliable
• Supported machine language only
• Very costly
• Generated a lot of heat
• Slow input and output devices
• Huge size
• Need of A.C.
• Non-portable
• Consumed a lot of electricity

Some computers of this generation were:
• ENIAC
• EDVAC (Electronic Discrete Variable Automatic Computer)
• UNIVAC
• IBM-701 (International Business Machine)
• IBM-650

ENIAC Computer

EDVAC Computer

Second Generation (1956-1963): Transistors


Transistors replaced vacuum tubes and ushered in the second generation of computers.
The transistor was invented in 1947 but did not see widespread use in computers until
the late 50s. The transistor was far superior to the vacuum tube, allowing computers to
become smaller, faster, cheaper, more energy-efficient and more reliable than their
first-generation predecessors. Though the transistor still generated a great deal of heat
that subjected the computer to damage, it was a vast improvement over the vacuum
tube. Second-generation computers still relied on punched cards for input and
printouts for output.
Second-generation computers moved from cryptic binary machine language to
symbolic, or assembly language, which allowed programmers to specify instructions
in words.
High level programming languages were also being developed at this time, such as
early versions of COBOL and FORTRAN. These were also the first computers that
stored their instructions in their memory, which moved from a magnetic drum to
magnetic core technology. The first computers of this generation were developed for
the atomic energy industry.

The main features of the second generation are:

• Use of transistors
• Reliable in comparison to first-generation computers
• Smaller size as compared to first-generation computers
• Generated less heat as compared to first-generation computers
• Consumed less electricity as compared to first-generation computers
• Faster than first-generation computers
• Still very costly
• A.C. needed
• Supported machine and assembly languages
Some computers of this generation were:
• IBM 1620
• IBM 7094
• CDC 1604 (Control Data Corporation)
• CDC 3600
• UNIVAC 1108

IBM 1620

Third Generation (1964-1971): Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of
computers. Transistors were miniaturized and placed on silicon chips, called
semiconductors, which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation
computers through keyboards and monitors. These were interfaced with an operating
system, which allowed the device to run many different applications at one time with
a central program that monitored the memory. Computers for the first time became
accessible to many because they were smaller and cheaper than their predecessors.

The main features of the third generation are:

• ICs used
• More reliable in comparison to the previous two generations
• Smaller size
• Generated less heat
• Faster
• Lesser maintenance
• Still costly
• A.C. needed
• Consumed less electricity
• Supported high-level languages
Some computers of this generation were:
• IBM-360 series
• Honeywell-6000 series
• PDP (Programmed Data Processor)
• IBM-370/168
• TDC-316

Fourth Generation (1971-Present): Microprocessors


The microprocessor brought the fourth generation of computers, as it became possible
to build thousands of integrated circuits onto a single silicon chip. The Intel 4004 chip,
developed in 1971, located all the components of the computer (the central
processing unit, memory, and input/output controls) on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple
introduced the Macintosh. Microprocessors also moved out of the realm of desktop
computers and into many areas of life as more and more everyday products began to
use microprocessors.

As these small computers became more powerful, they could be linked together to
form networks, which eventually led to the development of the Internet. Fourth
generation computers also saw the development of Graphical User Interface (GUI),
the mouse and handheld devices.

The main features of the fourth generation are:
• VLSI technology used
• Very cheap
• Portable and reliable
• Use of PCs
• Very small size
• Pipeline processing
• No A.C. needed
• Concept of the Internet was introduced
• Great developments in the field of networks
• Computers became easily available
Some computers of this generation were:
• DEC 10
• STAR 1000
• PDP 11
• CRAY-1 (Supercomputer)
• CRAY-X-MP (Supercomputer)

A DEC Computer

Fifth Generation (1980-Present): Artificial Intelligence


The fifth generation of computers runs from 1980 till date. In the fifth generation, the
VLSI (Very Large Scale Integration) technology became ULSI (Ultra Large Scale Integration)
technology, resulting in the production of microprocessor chips having ten million electronic
components. This generation is based on parallel processing hardware and AI (Artificial
Intelligence) software. AI is an emerging branch of computer science that tries to mimic
human beings and make computers think like humans. High-level languages
like C, C++, Java, .Net, etc. are used in this generation.

AI includes:
• Robotics
• Neural networks
• Game playing
• Development of expert systems to make decisions in real-life situations
• Natural language understanding and generation

The main features of the fifth generation are:

• ULSI technology
• Development of true artificial intelligence
• Development of natural language processing
• Advancement in parallel processing
• Advancement in superconductor technology
• More user-friendly interfaces with multimedia features
• Availability of very powerful and compact computers at cheaper rates
Some computer types of this generation are:
• Desktop
• Laptop
• Notebook
• Ultrabook
• Chromebook

Major Sub-units of a Computer

1. System Case
2. Power Supply
3. Fan
4. Motherboard
5. Drives
6. Cards
7. RAM
8. Processor
9. Peripheral Devices

System Case: The computer case (also called a tower or housing) is the box that encloses
many of the parts shown below. It has attachment points, slots and screws that allow these
parts to be fitted onto the case. The case is also sometimes called the CPU, since it houses
the CPU (central processing unit or processor), but this designation can lead to confusion.
Please see the description of the processor, below.

Power Supply: The power supply is used to connect all of the parts of the computer
described below to electrical power. It is usually found at the back of the computer case.

Fan: A fan is needed to disperse the significant amount of heat that is generated by the
electrically powered parts in a computer. It is important for preventing overheating of the
various electronic components. Some computers will also have a heat sink (a piece of fluted
metal) located near the processor to absorb heat from the processor.

Motherboard: The motherboard is a large electronic board that is used to connect the power
supply to various other electronic parts, and to hold these parts in place on the computer. The
computer’s memory (RAM, described below) and processor are attached to the motherboard.
Also found on the motherboard is the BIOS (Basic Input and Output System) chip that is
responsible for some fundamental operations of the computer, such as linking hardware and
software. The motherboard also contains a small battery (that looks like a watch battery) and
the chips that work with it to store the system time and some other computer settings.

Drives: A computer’s drives are the devices used for long term storage of information. The
main storage area for a computer is its internal hard drive (also called a hard disk). The
computer should also have disk drives for some sort of removable storage media. A floppy
disk drive was very common until recent years, and is still found on many older desktop
computers. It was replaced by CD-ROM and DVD drives, which have higher storage
capacities. The current standard is a DVD-RW drive, which can both read and write
information using both CD and DVD disks. The USB ports (described later) on a computer
can also be used to connect other storage devices such as flash drives and external hard
drives.

Cards: This term is used to describe important tools that allow your computer to connect and
communicate with various input and output devices. The term “card” is used because these
items are relatively flat in order to fit into the slots provided in the computer case. A
computer will probably have a sound card, a video card, a network card and a modem.

RAM: RAM is the abbreviation for random access memory. This is the short term memory
that is used to store documents while they are being processed. The amount of RAM in a
computer is one of the factors that affect the speed of a computer. RAM attaches to the
motherboard via some specific slots. It is important to have the right type of RAM for a
specific computer, as RAM has changed over the years.

Processor: The processor is the main “brain” of a computer system. It performs all of the
instructions and calculations that are needed and manages the flow of information through a
computer. It is also called the CPU (central processing unit), although this term can also be
used to describe a computer case along with all of the hardware found inside it. Another
name for the processor is a computer “chip” although this term can refer to other lesser
processors (such as the BIOS). Processors are continually evolving and becoming faster and
more powerful. The speed of a processor is measured in megahertz (MHz) or gigahertz
(GHz). An older computer might have a processor with a speed of 1000 MHz (equivalent to
1 GHz) or lower, but processors with speeds of over 2 GHz are now common. One processor
company, Intel, made a popular series of processors called Pentium. Many reconditioned
computers contain Pentium II, Pentium III and Pentium 4 processors, with Pentium 4 being
the fastest of these.

Peripheral Devices: Peripheral hardware is the name for the computer components that are
not found within the computer case. This includes input devices such as a mouse,
microphone and keyboard, which carry information from the computer user to the
processor, and output devices such as a monitor, printer and speakers, which display or
transmit information from the computer back to the user.

Figure: Inside of a Desktop Computer Case (showing the power supply, drives, fan, RAM, cards and housing; the processor is underneath, on the motherboard)

List of Computer Ports
i. Serial Port
ii. Parallel Port
iii. USB Port
iv. VGA Port
v. PS/2 port
vi. Multimedia Port
vii. Ethernet Port
Basic Operation of Computer
How a Computer Works
A computer is a fabulous instrument that turns human input into electronic information that
it can then store or share/distribute through various output devices. A computer performs
tasks (if instructed to do so) using information that a user provides (such as a typed sentence).
Data are input into the computer via the keyboard, mouse, microphone or any other input device.
The information is digitized, becoming a simple code that the computer can store. The
information is stored as part of the computer's memory, and further processing of the
information is done by the control unit and the arithmetic and logic unit. Information is shared
via the monitor, printer or projector.
Below is the block diagram of how a computer operates:

Block Diagram of Computer Operation: Input (via keyboard, mouse or microphone) → CPU (Control Unit, Arithmetic and Logic Unit, Storage Unit) → Output (information shared via monitor, printer or projector)

The five basic operations of a computer are tabulated below:

Sn   Operation             Description
1.   Take Input            The process of entering data and instructions into the computer system.
2.   Store Data            Saving data and instructions so that they are available for processing as and when required.
3.   Processing Data       Performing arithmetic and logical operations on data in order to convert them into useful information.
4.   Output Information    The process of producing useful information or results for the user, such as a printed report or visual display.
5.   Control the Workflow  Directs the manner and sequence in which all of the above operations are performed.
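
These five operations can be pictured as a short program. The Python sketch below is only a toy illustration of the cycle (the function names and the "double the number" processing step are invented for this example); it is not how any real computer is wired internally.

# A toy model of the five basic operations: input, store, process, output, control.
# The "control" role is played by main(), which decides the order in which the
# other steps run. All names here are illustrative only.

def take_input() -> str:
    # 1. Take Input: data enters the system (here, from the keyboard).
    return input("Enter a number: ")

def store_data(value: str, memory: dict) -> None:
    # 2. Store Data: keep the data so it is available when required.
    memory["raw"] = value

def process_data(memory: dict) -> None:
    # 3. Processing Data: arithmetic/logic turns data into information.
    memory["result"] = int(memory["raw"]) * 2   # example operation: doubling

def output_information(memory: dict) -> None:
    # 4. Output Information: present the result to the user.
    print("Twice your number is", memory["result"])

def main() -> None:
    # 5. Control the Workflow: direct the sequence of the operations above.
    memory = {}                 # stands in for the storage unit
    store_data(take_input(), memory)
    process_data(memory)
    output_information(memory)

if __name__ == "__main__":
    main()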

Block Diagram Showing the interconnection of the Sub-units of the motherboard

Memory: This is a computer device that is used in storing information.

TYPES OF MEMORY MODULE


a) SIMM: Single Inline Memory Module
This consists of 72 pins, with capacities ranging from 4MB to 64MB, and fills a corresponding slot;
some early computers used 30/32-pin memory modules (1MB-4MB). SIMM slots are
divided into banks 0, 1, 2, 3, 4 depending on the mainboard architecture; they can also be
labelled SIMM 1, SIMM 2 and so on.
SIMM Installation Procedure
• Insert the SIMM memory vertically into the SIMM slot.
• Push it in so that the clips at the sides close.
b) DIMM: Dual Inline Memory Module
This is a type of memory module that provides 168 pins corresponding to the slot provided on
the motherboard.
Kinds of DIMM Specification
i. DRAM (Dynamic Random Access Memory): a kind of DIMM technology available in
16MB, 32MB and 64MB sizes as a single module.
ii. SDRAM (Synchronous Dynamic Random Access Memory): available in sizes such as 64MB
and 128MB and rated by clock speed (MHz).

Types of Memory
1. RAM
2. ROM
RAM (Random-Access Memory)
This is the umbrella term for all memory that can be read from or written to in a nonlinear
fashion. However, it has come to refer specifically to chip-based memory, since all chip-
based memory is random-access. It is not the opposite of ROM. The computer can read
ROM; it can read and write to RAM.
SIMM (Single in-line Memory module)
DIMM (dual in-line memory module). SIMM and DIMM refer not to memory types, but to
modules (circuit boards plus chips) in which RAM is packaged. SIMMs, the older of the two,
offer a data path of 32 bits. Because Pentiums are designed to handle a much wider data path
than that, SIMMs must be used in pairs on Pentium motherboards (they can be used singly on
boards based on 486 or slower processors). DIMMs, which are of more recent origin, offer a
64-bit path, which makes them more suitable for use with the Pentium and other more recent
processors. From a buyer's standpoint, the good news is that one DIMM will handle the work
of two SIMMs and thus can be used singly on a Pentium motherboard. DIMMs are more
economical in the long run, because you can add one at a time to your system.
DRAM (Dynamic RAM)

Dynamic RAM is the standard main memory type in computers today and is what you're
referring to when you tell someone your PC has 32MB of RAM. In DRAM, information is
stored as a series of charges in a capacitor. Within a millisecond of being electronically
charged, the capacitor discharges and needs to be refreshed to retain its values. This constant
refreshing is the reason for the use of the term dynamic.
SDRAM (Synchronous Dynamic RAM)
Resources galore are being poured into SDRAM development, and it has begun making its
appearance in the PC ads. The reason for its increasing popularity is twofold. First, SDRAM
can handle bus speeds of up to 100 MHz, and these are fast approaching. Second, SDRAM is
synchronized with the system clock itself, a technical feat that has eluded PC engineers until
now. SDRAM technology allows two pages of memory to be opened simultaneously. A new
standard for SDRAM is being developed by the Association at Santa Clara University
(California) along with many industry leaders. Called SLDRAM, this technology improves
on SDRAM by offering a higher bus speed and by using packets (small packs of data) to take
care of address requests, timing, and commands to the DRAM. The result is less reliance on
improvements in DRAM chip design, and ideally a lower-cost solution for high-performance
memory. Watch for SLDRAM in the near future.
SRAM (Static Random-access memory)
The difference between SRAM and DRAM is that where DRAM must be refreshed
constantly, SRAM stores data without an automatic refresh. The only time a refresh occurs,
in fact, is when a write command is performed. If the write command doesn't occur, nothing
in the SRAM changes, which is why it's called static. The benefit of SRAM is that it's much
faster than DRAM, reaching speeds of 12 ns as compared with BEDO's 50 ns. The
disadvantage is that SRAM is much more expensive than DRAM. SRAM's most common use
in PCs is in the second-level cache, also called the L2 cache.
L2 cache
Caching is the art of predicting what data will be requested next and having that data already
in hand, thus speeding execution. When your CPU makes a data request, the data can be
found in one of four places: the L1 cache, the L2 cache, main memory, or in a physical
storage system (such as a hard disk). L1 cache exists on the CPU, and is much smaller than
the other three. The L2 cache (second-level cache) is a separate memory area, and is
configured with SRAM. Main memory is much larger and consists of DRAM, and the
physical storage system is much larger again but is also much, much slower than the other
storage areas. The data search begins in the L1 cache, and then moves out to the L2 cache,
then to DRAM, and then to physical storage. Each level consists of progressively slower
components. The function of the L2 cache is to stand between DRAM and the CPU, offering
faster access than DRAM but requiring sophisticated prediction technology to make it useful.
The term cache hit refers to a successful location of data in L2, not L1. The purpose of a
cache system is to bring the speed of accessing memory as close as possible to the speed of
the CPU itself.
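
The search order described above (L1 cache, then L2 cache, then main memory, then disk) can be sketched as a simple lookup chain. In the Python sketch below, the four storage levels and the relative access costs are invented placeholders used only to illustrate the idea of a cache hit.

# Illustrative model of the memory hierarchy search order.
# Each level is modelled as a dictionary; real caches work on addresses and
# cache lines, not Python keys. Access costs are made-up placeholders.

levels = [
    ("L1 cache",            {},                        1),       # smallest, fastest
    ("L2 cache",            {"x": 42},                 10),
    ("main memory (DRAM)",  {"x": 42, "y": 7},         100),
    ("disk",                {"x": 42, "y": 7, "z": 0}, 100000),
]

def read(key):
    # The search starts at L1 and moves outward; finding the data in L2
    # is what the text calls a "cache hit".
    for name, storage, cost in levels:
        if key in storage:
            print(f"{key!r} found in {name} (relative cost ~{cost})")
            return storage[key]
    raise KeyError(key)

read("x")   # satisfied by the L2 cache in this toy example
read("z")   # has to go all the way out to "disk"
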
VRAM (Video RAM)
VRAM is aimed precisely at video performance, and you'll find it primarily on video
accelerator cards or on motherboards that incorporate video technology. VRAM is used to
store the pixel values of a graphical display, and the board's controller reads continuously
from this memory to refresh the display. Its purpose is not only to give you faster video
performance than you'd get with a standard video board, but to reduce strain on the CPU.
VRAM is dual-ported memory; there are two access ports to the memory cells, with one used
to constantly refresh the display and the other used to change the data that will be displayed.
Two ports means a doubling of bandwidth, and faster video performance as a result. By
comparison, DRAM and SRAM have only one access port.
SGRAM (synchronous graphics RAM)
Unlike VRAM, despite the fact that its primary use is on video accelerator cards, SGRAM is
a single-ported RAM type. It speeds performance through a dual-bank feature, in which two
memory pages can be opened simultaneously; it therefore approximates dual-porting.
SGRAM is proving to be a significant player in 3-D video technology because of a block-
write feature that speeds up screen fills and allows fast memory clearing. Three-dimensional
video requires extremely fast clearing, in the range of 30 to 40 times per second.

ROM (Read Only Memory)

Read Only Memory is a type of memory that can permanently or semi-permanently hold data.
It is called read-only because it is either impossible or difficult to write to. ROM is also often
called non-volatile memory because any data stored in ROM remains even if the power is
turned off. As such, ROM is an ideal place to put the PC's start-up instructions, that is, the
software that boots the system (the BIOS).
PROM
Programmable ROM. Using special equipment, it is possible to program these chips once.
The PROM is created blank, and then the program can be added later. Once the program has
been set it cannot be changed.
EPROM
Erasable PROM. By exposing the EPROM to ultraviolet light for an extended time (about 15
minutes), the EPROM will be reset to all zeros. Then it can be reprogrammed.
EEPROM
Electrically Erasable PROM. Instead of using ultraviolet light, these chips can be erased by
applying electric pulses to them. Chips such as these can be used to store the BIOS of a
computer. In this way the BIOS can be upgraded using a software program, instead of
replacing the chip.

Memory

Memory (RAM) Upgrade


Memory upgrade refers to the process of replacing a memory module with one of a larger size
in order to enhance the performance of a computer. Example: replacement of 64MB with
128MB, 128MB with 256MB, 512MB with 1GB, etc.

PORTS
A port is generally a place for physically connecting some other device to a computer.
i. Serial Port
ii. Parallel Port
iii. USB Port
iv. VGA Port
v. PS/2 port
vi. Multimedia Port
vii. Ethernet Port, etc.

CPU
The CPU consists of;
1. Processor/internal memory
2. Control Unit
3. Arithmetic and Logical Unit

A microprocessor is a computer processor that incorporates the functions of a computer's
central processing unit (CPU) on a single integrated circuit (IC) or at most a few integrated
circuits. The microprocessor is a multipurpose, programmable device that accepts digital data
as input, processes it according to instructions stored in its memory, and provides results as
output. Microprocessors contain both combinational logic and sequential digital logic.
Microprocessors operate on numbers and symbols represented in the binary numeral system.
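
As a quick illustration of numbers and symbols "represented in the binary numeral system", the short Python sketch below shows the same values written in decimal and in binary; the numbers chosen are arbitrary.

# Binary representation: everything a microprocessor handles is ultimately
# a pattern of bits. The values below are arbitrary examples.
a, b = 13, 6
print(bin(a), bin(b))            # 0b1101 0b110
total = a + b                    # the ALU performs this on the bit patterns
print(total, bin(total))         # 19 0b10011
# A character is stored the same way, as a numeric code:
print(ord("A"), bin(ord("A")))   # 65 0b1000001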

The integration of a whole CPU onto a single chip or on a few chips greatly reduced the cost
of processing power. Before microprocessors, small computers had been implemented using
racks of circuit boards with many medium- and small-scale integrated circuits.
Microprocessors integrated this into one or a few large-scale ICs. Continued increases in
microprocessor capacity have since rendered other forms of computers almost completely
obsolete (see history of computing hardware), with one or more microprocessors used in
everything from the smallest embedded systems and handheld devices to the largest
mainframes and supercomputers.

POWER SUPPLY

The power supply unit, also referred to as the power pack, is used to provide electric power to
all the components inside the computer system.

There are two types of Power Pack;

AT (Advanced Technology)

ATX (Advanced Technology Extended)

Power Supply Unit

Motherboard Bios
All motherboards must have a special chip containing software called the ROM BIOS. This
ROM chip contains the startup programs and drivers used to get the system running and act
as the interface to the basic hardware in the system. When you turn on a system, the power on
self test (POST) in the BIOS also tests the major components in the system. Additionally, you
can run a setup program to store system configuration data in the CMOS memory, which is
powered by a battery on the motherboard. This CMOS RAM is often called NVRAM
(nonvolatile RAM) because it runs on about 1 millionth of an amp of electrical current and
can store data for years when powered by a tiny lithium battery. The BIOS is a collection of
programs embedded in one or more chips, depending on the design of your computer. That
collection of programs is the first thing loaded when you start your computer, even before the
operating system. Simply put, the
BIOS in most PCs has four main functions:
• POST (power-on self-test). The POST tests your computer's processor, memory,
chipset, video adapter, disk controllers, disk drives, keyboard, and other crucial
components.
• Setup. The system configuration and setup program is usually a menu-driven program
activated by pressing a special key during the POST, and it enables you to configure
the motherboard and chipset settings along with the date and time, passwords, disk
drives, and other basic system settings. You also can control the power-management
settings and boot-drive sequence from the BIOS Setup, and on some systems, you can
also configure CPU timing and clock-multiplier settings. Some older 286 and 386
systems did not have the Setup program in ROM and required that you boot from a
special setup disk.
• Bootstrap loader. A routine that reads the first physical sector of various disk drives
looking for a valid master boot record (MBR). If one meeting certain minimum
criteria (ending in the signature bytes 55AAh) is found, the code within it is executed
(see the sketch after this list). The MBR program code then continues the boot process
by reading the first physical sector of the bootable volume, which is the start of the
volume boot record (VBR). The VBR then loads the first operating system startup file,
which is usually IO.SYS (DOS/Windows 9x/Me) or NTLDR (Windows NT/2000/XP),
after which the operating system is in control and continues the boot process.
• BIOS (basic input/output system). This refers to the collection of actual drivers
used to act as a basic interface between the operating system and your hardware when
the system is booted and running. When running DOS or Windows in safe mode, you
are running almost solely on ROM-based BIOS drivers because none are loaded from
disk.
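
As a rough illustration of the bootstrap loader's first check, the Python sketch below reads the first 512-byte sector of a disk image file and tests for the 55AAh signature bytes mentioned above. The file name disk.img is hypothetical, and a real BIOS performs this check in firmware on the raw drive, not on a file.

# Minimal sketch: check whether a 512-byte boot sector ends with the
# 0x55 0xAA signature that marks a valid master boot record (MBR).
# "disk.img" is a hypothetical disk image file used for illustration.

def has_mbr_signature(path: str) -> bool:
    with open(path, "rb") as f:
        sector = f.read(512)          # the first physical sector
    return len(sector) == 512 and sector[510] == 0x55 and sector[511] == 0xAA

if __name__ == "__main__":
    print("Bootable MBR found:", has_mbr_signature("disk.img"))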

MODULE TWO

PC PORTS AND CONNECTORS


PC Ports
In a computer there are various connectors and ports, which help in establishing a
communication path between the CPU and various peripheral devices. Before learning about
the various available connectors, it is essential to be familiar with the following terms:
1. Cable is a wire
2. Socket is the female side of a connector.
3. Pin is the male side of a connector.
4. Port is generally a place for physically connecting to some other device usually with a
socket.
Serial Port. This port for use with 9 pin connectors is no longer commonly used, but is
found on many older computers. It was used for printers, mice, modems and a variety of
other digital devices.
Parallel Port. This long and slender port is also no longer commonly used, but was the most
common way of attaching a printer to a computer until the introduction of USB ports.

Serial Port (left) and Parallel Port (right)

Figure - Back of Desktop Computer Showing Ports (PS/2 ports, USB ports, VGA port, TRS mini-jack ports, phone/modem jack and Ethernet port)

VGA. The Video Graphics Array port is found on most computers today and is used to
connect video display devices such as monitors and projectors. It has three rows of holes, for
a 15 pin connector.
PS/2. Until recently, this type of port was commonly used to connect keyboards and mice to
computers. Most desktop computers have two of these round ports for six pin connectors,
one for the mouse and one for the keyboard.
USB. The Universal Serial Bus is now the most common type of port on a computer. It was
developed in the late 1990s as a way to replace the variety of ports described above. It can be
used to connect mice, keyboards, printers, and external storage devices such as DVD-RW
drives and flash drives. It has gone through three different models (USB 1.0, USB 2.0 and
USB 3.0), with USB 3.0 being the fastest at sending and receiving information. Older USB
devices can be used in newer model USB ports.
TRS. TRS (tip, ring and sleeve) ports are also known as ports for mini-jacks or audio jacks.
They are commonly used to connect audio devices such as headphones and microphones to
computers.
Ethernet. This port, which looks like a slightly wider version of a port for a phone jack, is
used to network computers via category 5 (CAT5) network cable. Although many computers
now connect wirelessly, this port is still the standard for wired networked computers. Some
computers also have the narrower port for an actual phone jack. These are used for modem
connections over telephone lines.
Parallel Port
Parallel ports can be used to connect a host of popular computer peripherals like:
• Printers • Scanners • CD burners • External hard drives • Iomega Zip removable drives
• Network adapters • Tape backup drives
Parallel ports were originally developed by IBM as a way to connect a printer to a PC. Parallel
ports are also known as LPT ports. When a PC sends data to a printer or any other device
using a parallel port, it sends 8 bits of data (1 byte) at a time. These 8 bits are transmitted
parallel to each other all at once. The standard parallel port is capable of sending 50 to 100
kilobytes of data per second. The original specification for parallel ports was unidirectional,
meaning that data only traveled in one direction for each pin. With the introduction of the
PS/2 in 1987, IBM offered a new bi-directional parallel port design. This mode is commonly
known as Standard Parallel Port (SPP) and has completely replaced the original design. Bi-
directional communication allows each device to receive data as well as transmit it.

Parallel Port
Serial Port
Serial ports, also called communication (COM) ports, support sequential data transmission
and are bi-directional. As explained above, bi-directional communication allows each device
to receive data as well as transmit it. The name "serial" comes from the fact that a serial port
"serializes" data. That is, it takes a byte of data and transmits the 8 bits in the byte one at a
time serially one after the other. The main advantage is that a serial port needs only one wire
to transmit the 8 bits (while a parallel port needs 8 because all 8 bits are sent in one go). The
disadvantage is that it takes 8 times longer to transmit the data than it would if there were 8
wires. Serial ports lower cable costs and make cables smaller. A serial port is commonly used
to connect external modems, scanners or the older computer mouse to the computer. It comes
in two versions, 9-pin and 25-pin. 25-pin COM connector is the older version while the 9-pin
connector is the current standard. Data travels over a serial port at 115 Kb per second. The
following is a 9-pin serial port.

Serial Port
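
To make the serial/parallel distinction concrete, the Python sketch below "serializes" one byte into its eight individual bits, the way a serial port does, and contrasts that with presenting all eight bits side by side, the way a parallel port does. It is purely conceptual; no real port is being driven.

# Conceptual contrast between parallel and serial transmission of one byte.
# A parallel port puts all 8 bits on 8 wires at once; a serial port sends
# the same 8 bits one after the other on a single wire.

byte = 0b01001101          # an arbitrary example byte

# Parallel: all eight bits presented simultaneously (one per wire).
parallel_wires = [(byte >> i) & 1 for i in range(8)]
print("parallel (8 wires at once):", parallel_wires)

# Serial: the bits are shifted out one at a time over a single wire.
for i in range(8):
    bit = (byte >> i) & 1
    print("serial wire, clock tick", i, "->", bit)
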
USB (Universal Serial Bus) Port
In the past, connecting devices to computers had been a real headache. Printers connected to
parallel printer ports, and most computers only came with one. Things like Zip drives, which
need a high-speed connection into the computer, would use the parallel port as well, often
with limited success and not much speed. The earlier version of Serial Port (COM Port) had 9
pins in it. Modems used the serial port, but so did some printers and a variety of odd things
like Palm Pilots and digital cameras. Most computers have at most two serial ports, and they
are very slow in most cases. Devices that needed faster connections came with their own
cards, which had to fit in a card slot inside the computer's case. Unfortunately, the number of
card slots is limited and a Ph.D. was needed to install the software for some of the cards.
USB, introduced in 1997, is a plug-and-play peripheral connection, which was invented to
solve all these headaches. It is used to connect various devices, for example, digital joystick,
a scanner, digital speakers, digital cameras, or a PC telephone etc. to the computer. USB is
generally a two-and-a half-inch long port on the back of computers or built into a hatch on
the front of a computer. The Universal Serial Bus provides a single, standardized, easy-to-
use way to connect up to 127 devices to a computer. Just about every peripheral made now
comes in a USB version. A sample list of USB devices that you can buy today includes:
• Printer • Scanner • Mic • Joystick • Flight yoke • Digital camera • WebCam • Scientific data
acquisition device • Modem • Speaker • Telephone • Video phone • Storage device such as
Zip drive • Network connection
Connecting a USB device to a computer is as simple as finding the USB connector on the
back of the machine and plugging the USB connector into it. If it is a new device, the
operating system auto-detects it and asks for the driver disk. If the device has already been
installed, the computer activates it and starts talking to it. USB devices can be connected and
disconnected at any time.
Many USB devices come with their own built-in cable, and the cable has an "A" connection
on it. If not, then the device has a socket on it that accepts a USB "B" connector.

A Typical ‘A’ Connector
The USB standard uses "A" and "B" connectors to avoid confusion: "A" connectors connect
towards the computer while the "B" connectors connect to individual devices. By using
different connectors it is impossible to ever get confused. Connect any USB cable's "B"
connector into a device, and it is sure to work. The Universal Serial Bus is the hottest product
in the computer market because of the following features:
• The computer acts as the host.
• Up to 127 devices can connect to the host, either directly or by way of USB hubs.
• Individual USB cables can run as long as 5 meters; with hubs, devices can be up to 30 meters (six cables' worth) away from the host.
• With USB 2, the bus has a maximum data rate of 480 megabits per second.
• A USB cable has two wires for power (+5 volts and ground) and a twisted pair of wires to carry the data.
• On the power wires, the computer can supply up to 500 milliamps of power at 5 volts.
• Low-power devices (such as mice) can draw their power directly from the bus. High-power devices (such as printers) have their own power supplies and draw minimal power from the bus. Hubs can have their own power supplies to provide power to devices connected to the hub.
• USB devices are hot-swappable, meaning you can plug them into the bus and unplug them any time.
• Many USB devices can be put to sleep by the host computer when the computer enters a power-saving mode.

A Typical 'B' Connector
Summary

USB 1.0 supports 1.5 Mbps
USB 1.1 supports 12 Mbps
USB 2.0 supports up to 480 Mbps
USB 3.0 supports up to 4.8 Gbps
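
A quick worked example using the rates above shows why the differences matter. The Python sketch below estimates the best-case time to move a 700 MB file at each nominal USB speed; real transfers are slower because of protocol overhead, and the file size is just an example.

# Best-case transfer time for a 700 MB file at the nominal USB bus speeds.
# Real-world throughput is lower because of protocol overhead.

file_size_bits = 700 * 10**6 * 8        # 700 megabytes expressed in bits

rates_mbps = {"USB 1.0": 1.5, "USB 1.1": 12, "USB 2.0": 480, "USB 3.0": 4800}

for name, mbps in rates_mbps.items():
    seconds = file_size_bits / (mbps * 10**6)
    print(f"{name}: about {seconds:,.1f} s")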

Firewire Port
This port was originally created by Apple and standardized in 1995 as the specification IEEE
1394 High Performance Serial Bus and is very similar to Universal Serial Bus (USB). The
most important features of Firewire port are:
• Fast transfer of data - the latest version achieves speeds up to 800 Mbps. At some time in the future, that number is expected to jump to an unbelievable 3.2 Gbps.
• Ability to put lots of devices on the bus. It is possible to connect up to 63 devices to a FireWire bus. Windows operating systems (98 and later) and Mac OS (8.6 and later) both support it.
• Hot-pluggable ability - they can be connected and disconnected at any time, even with the power on.
• Provision of power through the cable - FireWire allows devices to draw their power from their connection.
• Plug-and-play performance - if you connect a new FireWire device to your computer, the operating system auto-detects it and asks for the driver disc. If you've already installed the device, the computer activates it and starts talking to it.
• Low cabling cost
• Low implementation cost
• Ease of use.

FireWire Ports


The key difference between FireWire and USB is that FireWire is intended for devices
working with a lot more data -- things like camcorders, DVD players and digital audio
equipment. Implementing FireWire costs a little more than USB, which led to the adoption of
USB as the standard for connecting most peripherals that do not require a high-speed bus.
Speed aside, the big difference between FireWire and USB 2.0 is that USB 2.0 is host-based,
meaning that devices must connect to a computer in order to communicate. FireWire is peer-
to-peer, meaning that two FireWire cameras can talk to each other without going through a
computer.
PS/2 Port
IBM developed the PS/2 port. It is also called a mouse port. It is used to connect a computer
mouse or keyboard. A PS/2 connector is a round connector with 6 pins. Nowadays few
computers have two PS/2 ports, one for keyboard and one for mouse. A colour code is used
to identify them.

PS/2 Port for mouse

Keyboard Port
In earlier computers the keyboard was connected using a 5-pin DIN connector with a small
notch on one side. The purpose of keeping the notch was to avoid a wrong connection. With
the advent of the PS/2, this socket has become obsolete.

Monitor Socket
This connector is used to attach a computer display monitor to a computer's video card. The
connector has 15 holes.
Audio/Speaker and Microphone Socket
At the back of the computer system we can find three small sockets of blue, green and pink
colours used to connect speakers, audio input devices and microphones to the PC
respectively. The connectors for microphone and speakers look like as shown in the adjacent
figure. They are colour coded to help in troubleshooting.

BUS

A bus is a set of signal pathways that allow information to travel between components inside
or outside of a computer.

Types of Bus

External bus or Expansion bus allows the CPU to talk to the other devices in the computer
and vice versa. It is called that because it's external to the CPU.

Address bus allows the CPU to talk to a device. It will select the particular memory address
that the device is using and use the address bus to write to that particular address.

Data bus allows the device to send information to and from the CPU

Control bus allows for control of information from the CPU to the Address bus, Data bus
and external bus.

Peripheral Bus Diagram

Booting: Booting can be defined as the process of turning on a computer system.

MODULE THREE

PRINTER

A printer is a device that is used to print information or documents as hard copy. The
printing is done on a sheet of paper.

Printers are considered standard PC components and are referred to as computer peripherals;
they are parts of the computer system that are external to the computer case. Printers are
manufactured in several popular forms, and like other devices, each type has unique
advantages and disadvantages.

When evaluating printers, the user should keep the following issues in mind:

• Printer resolution: Resolution is usually measured in dots per inch (dpi). This indicates the number of vertical and horizontal dots that can be printed; the higher the resolution, the better the print quality (see the worked example after this list).
• Speed: This is usually given in pages printed per minute, where the page consists of plain text with five percent of the printable page covered in ink or toner.
• Graphics and printer-language support: If the device is used to print graphics, it should support one or more of the popular printer languages, such as Adobe PostScript and Hewlett Packard's LaserJet PCL (Printer Control Language).
• Paper capacity: The number and types of paper trays available, the number of pages that can be placed in them, and the sizes of pages that can be printed all vary widely among printers. Some smaller units hold as few as 10 sheets, while high-volume network printers hold several reams in different sizes. Some printers can also be set to automatically choose which tray to use based on the type of paper best suited for a job.
• Duty cycle: This is the number of sheets of paper the printer is rated to print per month. It is based on a plain-text page with five percent coverage and does not include graphics.
• Printer memory: Laser printers that will be used to print complex graphics and full-color images require larger amounts of memory than those which print simple text only. In many cases, this memory can be added as an option.
• Cost of paper: Will a printer require special paper? Some printers must use special paper to produce high-quality (photo-quality) images or even good text. Some paper stocks are too porous for ink-jet printers and will cause the ink to smear or distort, causing a blurred image.
• Cost of consumables: When comparing the cost of various printers, be sure to calculate and compare the total cost per page for printing, rather than just the cost of a replacement ink or toner cartridge.
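
Two of the points above, resolution and cost of consumables, lend themselves to quick arithmetic. The Python sketch below counts the dots on a full page at a given dpi and works out a total cost per page; the dpi figure, cartridge price, page yield and paper price are all invented example values, not quoted prices.

# Worked examples for two of the buying criteria above.
# All prices and yields are invented illustration values.

# Resolution: dots on an 8.5 x 11 inch page at 600 dpi.
dpi = 600
dots = (8.5 * dpi) * (11 * dpi)
print(f"At {dpi} dpi a full page holds about {dots:,.0f} addressable dots")

# Cost of consumables: total cost per page, not just the cartridge price.
cartridge_price = 60.00      # hypothetical toner cartridge price
pages_per_cartridge = 2000   # hypothetical rated yield
paper_cost_per_sheet = 0.01  # hypothetical paper price

cost_per_page = cartridge_price / pages_per_cartridge + paper_cost_per_sheet
print(f"Estimated cost per page: about ${cost_per_page:.3f}")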

Common Printer Terms

The following are some of the terms used during printer operation:

• ASCII (American Standard Code for Information Interchange): A standard code representing characters as numbers, used by most computers and printers.
• Font: A collection of characters and numbers in a given size, usually expressed in style name and size, and expressed in points (pts.), for example, Times Roman 10 pt. bold (one point equals 1/72 inch). Although many people think that bold and italic are variations of the same font, technically they are different fonts. Some printers are sold with limited fonts, such as bold-only or no-bold varieties of the typefaces.
• LPT (Line Printer Terminal) Port: Term that describes parallel printer ports on a computer.
• PCL: Hewlett-Packard's Printer Control Language for printers.
• PostScript: The most common page-description language (PDL). A method of describing the contents of a page as scalable elements, rather than bitmapped pixels on the page. The printer is sent a plain ASCII file containing the PostScript program; the PostScript interpreter in the printer makes the conversion from scale to bitmap at print time.
• Resolution Enhancement: Technology that improves the appearance of images and other forms of graphics by using such techniques as modifying tonal ranges, improving halftone placement, and smoothing the jagged edges of curves.
• Portrait: The vertical orientation of printing on a piece of paper so that the text or image is printed across the 8.5-inch width of the paper.
• Landscape: The horizontal orientation of printing on a piece of paper so that the text or image is printed across the 11-inch length of the paper.
• Duplexing: The ability to print on both sides of a page. This cuts operating costs and allows users to create two-sided documents quickly.

Printers can be grouped into two types, namely impact and non-impact printers.

1. Impact Printers - These printers have a mechanism that touches the paper in order to
create an image. The Dot Matrix Printer is an example of this type.

2. Non-impact Printers - These printers do not touch the paper when creating an image.

The inkjet and laser printers are examples of this type.

2.5.1 Dot Matrix Printer

Printers in this category print characters/images as dots through an inked ribbon. These
printers are very economical and require very little maintenance. The print quality of a dot
matrix printer is decided by the number of pins it has, which can vary from nine to twenty-four
depending on the kind of dot matrix printer. Compared to other kinds of printers, such as laser
printers or inkjet printers, the dot matrix printer is much cheaper to run. However, it has a
tendency to make a lot of noise compared to the other kinds of printers, which is why the dot
matrix printer is not very popular among customers. Print quality in this category is not very
high, but it is highly suitable for printing situations requiring multiple copies.

These printers work by hammering a pattern of dots through the inked ribbon and can thus
print multiple copies of a document if multiple sheets separated by carbon paper are inserted.
Note that this feature is not available with any other category of printers.

Basic Maintenance of Dot-Matrix Printer

Troubleshooting a dot-matrix printer usually requires a reference manual. If a manual for a
particular printer is unavailable, check the printer itself for instructions (sometimes there are
diagrams inside the printer). Usually, a thorough inspection of the mechanical parts will
uncover the problem.

Dot Matrix Printer and output performance

2.5.2 Inkjet/Deskjet/Bubble jet Printer

Printers in this category are the most popular. These printers are very low priced but have a
high running/maintenance cost. They work on liquid ink technology and print the image using
a circuit-controlled jet of ink: an inkjet sprays the ink onto the paper in tiny droplets to form
text and graphics. The printing speed of these printers is not very high compared to laser
printers. These printers are suitable for people with fewer printing jobs who still want a
desirable print quality. They are available in 'Coloured' and 'Black & White' options.
Different companies have branded their products using the same technology with different
names, e.g.

• Hewlett Packard (HP) manufactures DeskJet printers
• Epson manufactures Inkjet printers
• Canon manufactures Bubble Jet printers

2.5.3 Laser Printer

These printers use a technique that is a combination of laser and xerographic (photocopier)
technology. The technology involves a dry powder-based ink (toner), which adheres to a drum
through electrostatic charge; when paper is passed over the drum, the toner is released onto
that paper. These are the fastest printers available in this category and are most suitable for
uses requiring high-speed, high-quality output. All laser printers follow one basic engine
design, similar to the ones used in most office copiers. They are non-impact devices that
precisely place a fine plastic powder (the toner) on paper. Although they cost more to purchase
than most ink-jet printers, they are much cheaper to operate per page, and the "ink" is permanent.

2.5.4 Basic Troubleshooting and Repair of a Printer

Symptom: Printer does not function at all.
Possible cause: No AC power is getting to the printer; fuse is blown.

Symptom: Device does not print although power is on.
Possible cause: Printer is not online; printer is out of paper; printer cable is disconnected.

Symptom: Printer won't go online.
Possible cause: Printer is out of paper. (Check connections.)

Symptom: Paper slips around platen.
Possible cause: Paper is not being gripped properly. (Adjust paper-feed selector for size and type of paper.)

Symptom: Head moves, but does not print.
Possible cause: Ribbon is not installed properly or is out of ink.

Symptom: Head tears paper as it moves.
Possible cause: Pins are not operating properly. (Check pins; if any are frozen, the head needs to be replaced.)

Symptom: Paper bunches up around platen.
Possible cause: There is no reverse tension on paper.

Symptom: Paper has "dimples."
Possible cause: Paper is misaligned, or the tractor feed wheels are not locked in place.

Symptom: "Paper/Error" indicator flashes continuously.
Possible cause: There is an overload condition.

Symptom: Printout is double-spaced or there is no spacing between lines.
Possible cause: Printer configuration switch is improperly set. (Make sure it isn't set to output a carriage return or linefeed after each line.)

Symptom: Printer cannot print characters above ASCII code 127.
Possible cause: Printer configuration switch is improperly set.

Symptom: Print mode cannot be changed.
Possible cause: Printer configuration switch is improperly set.

Note: If visual inspection of the printer does not turn up an obvious fault, proceed to the
printer's self-test program. In most cases, you can initiate this routine by holding down a
specified combination of control keys on the printer (check the owner's manual for diagnostic
procedures) while you turn it on. If a test page prints successfully, the problem is most likely
associated with the computer, the cabling, or the network.

Figure 1.9:1 Printer and Faxes Dialog Box

Figure 2.9.2: Selection of Printer Name Dialog Box

Figure 2.9.3 Dialog Box showing Printing Preference

Soft copy: Soft copy is data/information that can only be viewed on the computer. It is seen
via the monitor screen and cannot be touched.

Hard copy: Hard copy is information that can be seen and handled by hand; it is printed out
on a sheet of paper via the printer.

MODULE FOUR

MONITORS
The monitor, commonly called a Visual Display Unit (VDU), is the main output device of a
computer. It forms images from tiny dots, called pixels, that are arranged in rectangular
form. The sharpness of the image depends upon the number of pixels.
There are two kinds of viewing screens used for monitors:
a. Cathode-Ray Tube (CRT)
b. Flat-Panel Display (LCD & LED)

Most people use computer monitors daily at work and at home. And while these come in a variety
of shapes, designs, and colours, they can also be broadly categorized into three types.

CRT (Cathode Ray Tube) Monitors


These monitors employ CRT technology, which was used most commonly in the
manufacturing of television screens. With these monitors, a stream of intense high energy
electrons is used to form images on a fluorescent screen. A cathode ray tube is basically a
vacuum tube containing an electron gun at one end and a fluorescent screen at another end.
While CRT monitors can still be found in some organizations, many offices have stopped
using them, largely because they are heavy, bulky, and costly to replace should they break. A
CRT monitor is suitable for people who cannot afford newer monitor technology, because it is cheaper.
The following are some disadvantages of CRT monitors:

1. Large in Size
2. High power consumption

LCD (Liquid Crystal Display) Monitors


The LCD monitor incorporates one of the most advanced technologies available today.
Typically, it consists of a layer of colour or monochrome pixels arranged schematically
between a couple of transparent electrodes and two polarizing filters. Optical effect is made
possible by polarizing the light in varied amounts and making it pass through the liquid
crystal layer. The two types of LCD technology available are active matrix (TFT) and
passive matrix technology. TFT generates better picture quality and is more secure and
reliable. Passive matrix, on the other hand, has a slow response time and is slowly becoming
outdated.
The advantages of LCD monitors include:
1. Compact size, which makes them lightweight.
2. They also don't consume as much electricity as CRT monitors, and
3. They can be run off batteries, which makes them ideal for laptops.

Images transmitted by these monitors don’t get geometrically distorted and have little flicker.
However, this type of monitor does have disadvantages, such as:

1. Its relatively high price,


2. An image quality which is not constant when viewed from different angles and a
monitor resolution that is not always constant, meaning any alterations can result in
reduced performance.

LED (Light-Emitting Diodes) Monitors


LED monitors are the latest types of monitors in the market today. These are flat panel, or
slightly curved displays which make use of light-emitting diodes for back-lighting, instead of
cold cathode fluorescent (CCFL) back-lighting used in LCDs. LED monitors are said to use
much less power than CRT and LCD monitors and are considered far more environmentally friendly.
The advantages of LED monitors are:

1. They produce images with higher contrast,
2. They have less negative environmental impact when disposed of,
3. They are more durable than CRT or LCD monitors, and feature a very thin design.
4. They also don’t produce much heat while running.
The only downside is that they can be more expensive, especially for the high-end monitors
like the new curved displays that are being released.

2.4 Screen Resolution


Screen resolution refers to the number of pixels that can be displayed. A pixel (a portmanteau
word from 'picture element') is one of the many tiny dots that make up the representation of a
picture in a computer's memory. Usually the dots are so small and so numerous that, when
printed on paper or displayed on a computer monitor, they appear to merge into a smooth
image. The colour and intensity of each dot is chosen individually by the computer to
represent a small area of the picture. For example, a resolution of 640x640 indicates that the
screen is covered by 640 dots across and 640 dots down. Resolution determines the quality of
the graphic display on the screen and also on printed pages.
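
Continuing the 640x640 example, the Python sketch below counts the pixels for a few example resolutions and estimates the memory needed to hold one full-colour frame at 3 bytes per pixel; the resolutions listed are illustrative only.

# Pixel counts and approximate frame sizes for example resolutions.
# 3 bytes per pixel corresponds to 24-bit colour (one byte each for R, G, B).

resolutions = [(640, 640), (640, 480), (1024, 768), (1920, 1080)]

for width, height in resolutions:
    pixels = width * height
    frame_bytes = pixels * 3
    print(f"{width}x{height}: {pixels:,} pixels, "
          f"~{frame_bytes / 1_000_000:.1f} MB per full-colour frame")
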
2.4.1 Display Properties Setting
The Display Properties dialog box allows the user to customize his/her desktop by changing
the screen saver, colors, font sizes and screen resolution. Open the Display Properties dialog
box by choosing the Display option in the Control Panel or by right clicking anywhere on the
desktop and choosing Properties option from the pop-up menu. The Display Properties dialog
box is displayed. The Display Properties dialog box has five tabs explained below:
• Themes tab - To apply a predefined theme (collection of desktop backgrounds, sounds, icons etc.) from an available list.
• Desktop tab - To change the desktop background image.
• Screen Saver tab - To change the screen saver.
• Appearance tab - To change the appearance of various windows, dialog boxes and icons.
• Settings tab - To set the screen resolution and colour quality properties.

Functions and Operations of Monitors

Fig 2.5 Display properties for Appearance on windows

Fig.2.6. Display Properties for Screen setting

Graphics Card

People across the globe are constantly becoming more dependent on technology and
computers. There are countless physical components inside a computer that make it
function. A graphics card, or video card, is most simply described as the component that is
responsible for displaying the image that the user sees on the monitor. In computer video
terms, an image display is a combination of hundreds or thousands of individual pixels. It is
the graphics card that tells the monitor exactly what colour, and for how long, to activate each
pixel. Without a graphics card to perform the necessary display calculations, the workload
would be too much for the computer to handle. This circuit board is responsible for the visual
output that will be displayed on the monitor. Nowadays, graphics cards have their own memory
modules and processor chips, by which they lessen the load on the CPU and RAM, enabling us
to see very detailed graphics and high-quality animations and video.

Physical Components of a Graphics Card

To better understand how a graphics card physically functions, it is beneficial to have
an understanding of the hardware that makes up a video card. A graphics card has only a
handful of individually critical pieces, although there are hundreds of sub-components that
make up a video card.

Typical Graphics Card

The Video RAM, commonly known as VRAM, is essentially the same as volatile computer RAM,
except that it is attached to the video card and can only be used by the video card for
graphics-related storage. It is the physical memory that stores the images while the card is
processing and generating the pictures.
The Monitor Cable connects the video card in the computer to the display. A desktop
computer comprises a system unit, an external mouse and keyboard, and a monitor. The
card transmits the image signal through this cord so the user can see the images on the
display.

Graphics Card to Display Output Connections


There are three common methods to connect video cards to computer peripherals, known as a
display or a monitor. The three primary methods are HDMI, DVI, and VGA.
High-Definition Multimedia Interface (HDMI) is the most recent of the three connection
methods, and it is rapidly becoming the most common too. HDMI has the unique ability to
transfer both high-definition audio and video through the same cord, whereas DVI and
VGA can only transmit video signals. HDMI cords are becoming ever more common in all
types of video cards: from computer video cards to the displays on televisions and video
game consoles.
Digital Visual Interface (DVI) is a digital display standard for displays such as high-
definition monitors, flat-panel LCD and plasma screens, and video projectors. DVI was one
of the earlier forms of high definition video output and it is still commonly used today. DVI,
like every form of video connection methods has a unique pin set on both the male and
female adapters of the video card and the connection cable.
Video Graphics Array (VGA) is one of the oldest analog video display standards,
which was adopted in the late 1980s. It is the least refined of the three connection methods,
and VGA was known to have problems with image distortion and sampling errors when
evaluating pixels. VGA connection also becomes continually more blurry as the display size
increases, which is why most modern displays use either HDMI or DVI connection methods.

The Role of a Graphics Card in an Image Display


A graphics card works in constant conjunction with the central processing unit (CPU) and the
RAM of a computer. The CPU is essentially the central command of a computer and RAM is
a form of volatile memory. The graphics card creates an image out of binary data received

from the CPU and produces a 3-D image displayed on the monitor. The graphics card
accomplishes this task by simultaneously utilizing four main components of a computer:
 A motherboard connection to serve as a physical data transfer connection
 A CPU processor to perform the calculations to physically decide what to do with
each individual pixel
 Memory to temporarily store pictures and movements of each pixel
 A physical display, or monitor, so the user can actually see the result
To create a 3-D image, a graphics card first creates a virtual outline of an object out of only
straight lines. Then it rasterizes the outline, i.e. converts the vector image into a bitmap
image, or simply put, fills in the remaining in-between pixels. After the image has become a
virtually solid object, the graphics card then adds lighting, texture, and color to the object.
Rasterization is a common technique when creating 3-D objects for video games and
computer aided design modelling because it is the only way to provide speed and efficiency
while delivering the image. The graphics card performs all of the necessary calculations to
process the images so the computer can handle the virtual load of delivering moving pictures
and images on a screen.

Card Graphics Processing Unit (GPU)


Just as a computer has a central processing unit (CPU), a graphics card has an equivalent
graphics processing unit (GPU). It is essentially the brain of the graphics card, just as the
CPU, or chip, is to the computer as a whole. A GPU is a specialized electronic circuit
“designed to rapidly manipulate and alter memory to accelerate the building of images in a
frame buffer intended for output to a display”. Modern GPUs are very efficient at
manipulating computer graphics because of their highly parallel structure, which makes them
more effective than a general-purpose CPU for algorithms that process large blocks of data in parallel.

MODULE FIVE

INTRODUCTION TO NETWORKING

What is a Network: A computer network consists of a collection of computers,


printers and other equipment that is connected together so that they can communicate
with each other and also share resources.

Every network includes:


 At least two computers, Server or Client workstation.
 Network Interface Cards (NICs)
 A connection medium, usually a wire or cable, although wireless communication
between networked computers and peripherals is also possible.
 Network Operating system software, such as Microsoft Windows NT or 2000, Novell
NetWare, UNIX and Linux.

TYPES OF NETWORK

Generally, networks are distinguished based on their geographical span. A network


can be as small as distance between your mobile phone and its Bluetooth headphone
and as large as the Internet itself, covering the whole geographical world, i.e. the
Earth.

Personal Area Network: A Personal Area Network, or simply PAN, is the smallest
network and is very personal to a user. This may include Bluetooth-enabled devices
or infra-red-enabled devices. A PAN has a connectivity range of up to 10 meters. A PAN may
include a wireless computer keyboard and mouse, Bluetooth-enabled headphones,
wireless printers and TV remotes, for example.

Personal Area Network (Bluetooth)

Local Area Network: LANs are networks usually confined to a geographic area, such
as a single building or a college campus.

LANs can be small, linking as few as three computers, but often link hundreds of
computers used by thousands of people. The development of standard networking
protocols and media has resulted in worldwide proliferation of LANs throughout
business and educational organizations. Resources like printers, file servers, scanners
and the Internet are easily shared among computers.

Local Area Network

Local Area Networks are composed of inexpensive networking and routing equipment. A LAN
may contain local servers providing file storage and other locally shared applications. It mostly
operates on private IP addresses and generally does not involve heavy routing. A LAN works
under its own local domain and is controlled centrally. LAN uses either Ethernet or Token Ring
technology; Ethernet is the most widely employed LAN technology and uses a star topology, while
Token Ring is rarely seen. A LAN can be wired, wireless, or both at once.

Metropolitan Area Network: A MAN generally spans a city, for example a cable TV
network. It can be in the form of Ethernet, Token Ring, ATM or FDDI. Metro Ethernet is a
service provided by ISPs. This service enables its users to expand their Local Area
Networks. For example, a MAN can help an organization to connect all of its offices across a city.

Metropolitan Area Network

The backbone of a MAN is high-capacity, high-speed fiber optics. A MAN works in between the
Local Area Network and the Wide Area Network, and provides an uplink for LANs to WANs or
the Internet.

Wide Area Network: Wide area networking combines multiple LANs that are
geographically separate. This is accomplished by connecting the different LANs using
services such as dedicated leased phone lines, dial-up phone lines (both synchronous and
asynchronous), satellite links, and data packet carrier services. Wide area networking can be
as simple as a modem and remote access server for employees to dial into, or it can be as
complex as hundreds of branch offices globally linked using special routing protocols and
filters to minimize the expense of sending data over vast distances. This network
provides connectivity to MANs and LANs; equipped with a very high-speed backbone, a WAN
uses very expensive network equipment.

Wide Area Network

WAN may use advanced technologies like Asynchronous Transfer Mode (ATM), Frame
Relay and SONET. WAN may be managed under one or more than one administration.

Internet: The Internet is a system of linked networks that are worldwide in scope and
facilitate data communication services such as remote login, file transfer, electronic mail, the
World Wide Web and newsgroups. With the meteoric rise in demand for connectivity, the
Internet has become a communications highway for millions of users. The Internet was
initially restricted to military and academic institutions, but now it is a full-fledged conduit
for any and all forms of information and commerce. Internet websites now provide personal,
educational, political and economic resources to every corner of the planet.
The Internet serves many purposes and is involved in many aspects of life. Some of them are:
 Web sites
 E-mail
 Instant Messaging

 Blogging
 Social Media
 Marketing
 Networking
 Resource Sharing
 Audio and Video Streaming

Intranet: With the advancements made in browser-based software for the Internet, many
private organizations are implementing intranets. An intranet is a private network utilizing
Internet-type tools, but available only within that organization. For large organizations, an
intranet provides an easy access mode to corporate information for employees.

Categories of Network
Network can be divided in to two main categories:
i. Peer-to-peer.
ii. Server – based.

Peer-to-peer: In peer-to-peer networking there are no dedicated servers or hierarchy among


the computers. All of the computers are equal and therefore known as peers. Normally each
computer serves as Client/Server and there is no one assigned to be an administrator
responsible for the entire network. Peer-to-peer networks are good choices for needs of small
organizations where the users are allocated in the same general area, security is not an issue
and the organization and the network will have limited growth within the foreseeable future.

Server – based: The term Client/server refers to the concept of sharing the work involved in
processing data between the client computer and the most powerful server computer.
The client/server network is the most efficient way to provide:
 Databases and management of applications such as Spreadsheets, Accounting,
Communications and Document management.
 Network management.
 Centralized file storage.

The client/server model is basically an implementation of distributed or cooperative


processing. At the heart of the model is the concept of splitting application functions between
a client and a server processor. The division of labour between the different processors
enables the application designer to place an application function on the processor that is most
appropriate for that function. This lets the software designer optimize the use of processors--
providing the greatest possible return on investment for the hardware.

Client/server application design also lets the application provider mask the actual location of
application function. The user often does not know where a specific operation is executing.
The entire function may execute in either the PC or server, or the function may be split
between them. This masking of application function locations enables system implementers
to upgrade portions of a system over time with a minimum disruption of application
operations, while protecting the investment in existing hardware and software.

Network LAN Technology

Ethernet: Ethernet is the most popular physical layer LAN technology in use today. Other
LAN types include Token Ring, Fast Ethernet, Fiber Distributed Data Interface (FDDI),
Asynchronous Transfer Mode (ATM) and LocalTalk. Ethernet is popular because it strikes a
good balance between speed, cost and ease of installation. These benefits, combined with
wide acceptance in the computer marketplace and the ability to support virtually all popular
network protocols, make Ethernet an ideal networking technology for most computer users
today. The Institute of Electrical and Electronics Engineers (IEEE) defines the Ethernet
standard as IEEE Standard 802.3. This standard defines rules for configuring an Ethernet
network as well as specifying how elements in an Ethernet network interact with one another.
By adhering to the IEEE standard, network equipment and network protocols can
communicate efficiently.
Ethernet is a Local Area Network implementation technology which is widely deployed. This
technology was invented by Bob Metcalfe and D.R. Boggs in the early 1970s and was standardized
as IEEE 802.3 in 1980. Ethernet is a network technology that uses shared media, and a network
which uses shared media has a high probability of data collisions. Ethernet uses CSMA/CD
technology to detect collisions. CSMA/CD stands for Carrier Sense Multiple Access/Collision
Detection. When a collision happens in Ethernet, the hosts involved back off, wait for a random
amount of time and then re-transmit the data. Ethernet network interface cards are equipped with
48-bit MAC addresses, which help Ethernet devices to identify and communicate with remote
devices on the Ethernet. Traditional Ethernet uses the 10BASE-T specification: 10 stands for a
speed of 10 Mbps, BASE stands for baseband signalling and T stands for twisted pair. 10BASE-T
Ethernet provides transmission speeds of up to 10 Mbps and uses Cat-3 or Cat-5 twisted-pair
cable with RJ-45 connectors. Ethernet follows a star topology with segment lengths of up to
100 meters; all devices are connected to a Hub/Switch in a star fashion.
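The collision handling described above (back off for a random time, then re-transmit) is the truncated binary exponential backoff used by CSMA/CD. The Python sketch below is only an illustration of that idea; the slot time and the loop are assumptions, not part of any real networking library:

import random

SLOT_TIME = 51.2e-6   # assumed slot time for 10 Mbps Ethernet, in seconds

def backoff_delay(attempt):
    # After the n-th collision, wait a random number of slot times
    # between 0 and 2**min(n, 10) - 1 (truncated binary exponential backoff).
    k = min(attempt, 10)
    return random.randint(0, 2**k - 1) * SLOT_TIME

for attempt in range(1, 6):
    print("after collision", attempt, "-> wait", round(backoff_delay(attempt) * 1e6, 1), "microseconds")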

Fast-Ethernet: To meet the needs of fast-emerging software and hardware technologies,
Ethernet was extended as Fast Ethernet. It can run on UTP or optical fiber, and can be wireless
too. It provides speeds of up to 100 Mbps. This standard is named 100BASE-T and is defined in
IEEE 802.3u, using Cat-5 twisted-pair cable. It uses the CSMA/CD technique for wired media sharing
among Ethernet hosts and the CSMA/CA (Collision Avoidance) technique for wireless Ethernet
LANs. Fast Ethernet on fiber is defined under the 100BASE-FX standard, which provides speeds
up to 100 Mbps on fiber. Ethernet over fiber can be extended up to 100 meters in half-duplex
mode and can reach a maximum of 2000 meters in full-duplex over multimode fiber.

Giga-Ethernet: After being introduced in 1995, Fast Ethernet could enjoy its high-speed
status for only three years until Gigabit Ethernet was introduced. Gigabit Ethernet provides speeds
of up to 1000 Mbps. IEEE 802.3ab standardizes Gigabit Ethernet over UTP using Cat-5, Cat-5e and
Cat-6 cables, while IEEE 802.3ah defines Gigabit Ethernet over fiber.
Token Ring: Token Ring is another form of network configuration which differs from
Ethernet in that all messages are transferred in a unidirectional manner along the ring at all
times. Data is transmitted in tokens, which are passed along the ring and viewed by each
device. When a device sees a message addressed to it, that device copies the message and
then marks that message as being read. As the message makes its way along the ring, it
eventually gets back to the sender who now notes that the message was received by the
intended device. The sender can then remove the message and free that token for use by

others. Various PC vendors have been proponents of Token Ring networks at different times
and thus these types of networks have been implemented in many organizations.

FDDI: FDDI (Fiber-Distributed Data Interface) is a standard for data transmission on fiber-
optic lines in a local area network that can extend in range up to 200 km (124 miles).
The FDDI protocol is based on the token ring protocol. In addition to being large
geographically, an FDDI local area network can support thousands of users.

Protocols
Network protocols are standards that allow computers to communicate. A protocol defines
how computers identify one another on a network, the form that the data should take in
transit, and how this information is processed once it reaches its final destination. Protocols
also define procedures for handling lost or damaged transmissions, or "packets." TCP/IP (for
UNIX, Windows NT, Windows 95 and other platforms), IPX (for Novell NetWare), DECnet
(for networking Digital Equipment Corp. computers), AppleTalk (for Macintosh computers),
and NetBIOS/NetBEUI (for LAN Manager and Windows NT networks) are the main types of
network protocols in use today.
Although each network protocol is different, they all share the same physical cabling. This
common method of accessing the physical network allows multiple protocols to peacefully
coexist over the network media, and allows the builder of a network to use common hardware
for a variety of protocols. This concept is known as "protocol independence."

Introduction to TCP/IP Networks:


TCP/IP-based networks play an increasingly important role in computer networks. Perhaps
one reason for their popularity is that they are based on an open specification that is not
controlled by any vendor.
What Is TCP/IP?

TCP stands for Transmission Control Protocol and IP stands for Internet Protocol. The term
TCP/IP is not limited just to these two protocols, however. Frequently, the term TCP/IP is
used to refer to a group of protocols related to the TCP and IP protocols such as the User
Datagram Protocol (UDP), File Transfer Protocol (FTP), Terminal Emulation Protocol
(TELNET), and so on.
The Origins of TCP/IP
In the late 1960s, DARPA (the Defense Advanced Research Project Agency), in the United
States, noticed that there was a rapid proliferation of computers in military communications.
Computers, because they can be easily programmed, provide flexibility in achieving network
functions that is not available with other types of communication equipment. The computers
then used in military communications were manufactured by different vendors and were
designed to interoperate with computers from that vendor only. Vendors used proprietary
protocols in their communication equipment. The military had a multi-vendor network but no
common protocol to support the heterogeneous equipment from different vendors.

IP Addressing:
An IP (Internet Protocol) address is a unique identifier for a node or host connection on an IP
network. An IP address is a 32-bit binary number usually represented as 4 decimal values,
each representing 8 bits, in the range 0 to 255 (known as octets), separated by decimal points.
This is known as "dotted decimal" notation. Example: 140.179.220.200
It is sometimes useful to view the values in their binary form.
140 .179 .220 .200
10001100.10110011.11011100.11001000
Every IP address consists of two parts, one identifying the network and one identifying the
node. The Class of the address and the subnet mask determine which part belongs to the
network address and which part belongs to the node address.
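As a quick illustration, the dotted-decimal and binary forms shown above can be produced with Python's standard ipaddress module (using the document's example address):

import ipaddress

addr = ipaddress.IPv4Address("140.179.220.200")          # validates the address
binary = ".".join(format(octet, "08b") for octet in addr.packed)
print(binary)   # 10001100.10110011.11011100.11001000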
Address Classes:
There are 5 different address classes. You can determine which class any IP address is in by
examining the first 4 bits of the IP address.
Class A addresses begin with 0xxx, or 1 to 126 decimal.
Class B addresses begin with 10xx, or 128 to 191 decimal.
Class C addresses begin with 110x, or 192 to 223 decimal.
Class D addresses begin with 1110, or 224 to 239 decimal.
Class E addresses begin with 1111, or 240 to 254 decimal.
Addresses beginning with 01111111, or 127 decimal, are reserved for loopback and for
internal testing on a local machine. (You can test this: you should always be able to ping
127.0.0.1, which points to yourself.) Class D addresses are reserved for multicasting. Class E
addresses are reserved for future use; they should not be used for host addresses.
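A small Python sketch (an illustration only) of reading off the default class from the first octet, following the ranges listed above:

def address_class(ip):
    # Return the default class of a dotted-decimal IPv4 address.
    first = int(ip.split(".")[0])
    if first == 127:
        return "loopback (reserved)"
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"
    return "E"

print(address_class("140.179.220.200"))   # B, matching the example in the text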

Now we can see how the Class determines, by default, which part of the IP address belongs
to the network (N) and which part belongs to the node (n).
Class A -- NNNNNNNN.nnnnnnnn.nnnnnnnn.nnnnnnnn
Class B -- NNNNNNNN.NNNNNNNN.nnnnnnnn.nnnnnnnn
Class C -- NNNNNNNN.NNNNNNNN.NNNNNNNN.nnnnnnnn
In the example, 140.179.220.200 is a Class B address so by default the Network part of the
address (also known as the Network Address) is defined by the first two octets (140.179.x.x)
and the node part is defined by the last 2 octets (x.x.220.200).
In order to specify the network address for a given IP address, the node section is set to all
"0"s. In our example, 140.179.0.0 specifies the network address for 140.179.220.200. When
the node section is set to all "1"s, it specifies a broadcast that is sent to all hosts on the
network; 140.179.255.255 is the broadcast address in our example. Note that this is true
regardless of the length of the node section.
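These rules can be checked with Python's ipaddress module, using the example address and its default Class B (/16) mask:

import ipaddress

net = ipaddress.IPv4Network("140.179.220.200/16", strict=False)
print(net.network_address)     # 140.179.0.0
print(net.broadcast_address)   # 140.179.255.255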
Private Subnets: There are three IP network addresses reserved for private networks. The
addresses are 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. They can be used by anyone
setting up internal IP networks, such as a lab or home LAN behind a NAT or proxy server or
a router. It is always safe to use these because routers on the Internet will never forward
packets coming from these addresses.
Subnetting an IP network can be done for a variety of reasons, including organization, use of
different physical media (such as Ethernet, FDDI, WAN, etc.), preservation of address space,
and security. The most common reason is to control network traffic. In an Ethernet network,
all nodes on a segment see all the packets transmitted by all the other nodes on that segment.
Performance can be adversely affected under heavy traffic loads, due to collisions and the
resulting retransmissions. A router is used to connect IP networks and to minimize the amount
of traffic each segment must carry. Performing a bit-wise logical AND operation between the
IP address and the subnet mask results in the Network Address or Number.
For example, using our test IP address and the default Class B subnet mask, we get:
10001100.10110011.11011100.11001000 140.179.220.200 Class B IP Address
11111111.11111111.00000000.00000000 255.255.000.000 Default Class B Subnet Mask
10001100.10110011.00000000.00000000 140.179.000.000 Network Address
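The same AND operation can be written directly in Python (again with the example address and the default Class B mask):

import ipaddress

ip = int(ipaddress.IPv4Address("140.179.220.200"))
mask = int(ipaddress.IPv4Address("255.255.0.0"))
network = ipaddress.IPv4Address(ip & mask)   # bit-wise AND of address and mask
print(network)                               # 140.179.0.0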

Default subnet masks:


Class A - 255.0.0.0 - 11111111.00000000.00000000.00000000
Class B - 255.255.0.0 - 11111111.11111111.00000000.00000000
Class C - 255.255.255.0 - 11111111.11111111.11111111.00000000

CIDR -- Classless Inter-Domain Routing.


CIDR was invented several years ago to keep the internet from running out of IP addresses.
The "classful" system of allocating IP addresses can be very wasteful; anyone who could

reasonably show a need for more than 254 host addresses was given a Class B address block
of 65,534 host addresses. Even more wasteful were the companies and organizations that were
allocated Class A address blocks, which contain over 16 million host addresses! Only a tiny
percentage of the allocated Class A and Class B address space has ever been actually
assigned to a host computer on the Internet.
People realized that addresses could be conserved if the class system was eliminated. By
accurately allocating only the amount of address space that was actually needed, the address
space crisis could be avoided for many years.
This was first proposed in 1992 as a scheme called Supernetting.
The use of a CIDR-notated address is the same as for a classful address. Classful addresses
can easily be written in CIDR notation (Class A = /8, Class B = /16, and Class C = /24).
It is currently almost impossible for an individual or company to be allocated its own IP
address blocks. You will simply be told to get them from your ISP. The reason for this is the
ever-growing size of the internet routing table.
Just 5 years ago, there were less than 5000 network routes in the entire Internet. Today, there
are over 90,000. Using CIDR, the biggest ISPs are allocated large chunks of address space
(usually with a subnet mask of /19 or even smaller); the ISP's customers (often other, smaller
ISPs) are then allocated networks from the big ISP's pool. That way, all the big ISP's
customers (and their customers, and so on) are accessible via 1 network route on the Internet.
It is expected that CIDR will keep the Internet happily in IP addresses for the next few years
at least. After that, IPv6, with 128 bit addresses, will be needed. Under IPv6, even sloppy
address allocation would comfortably allow a billion unique IP addresses for every person on
earth.
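Python's ipaddress module understands CIDR notation directly; the /19 block below is an arbitrary illustration, and the module also recognizes the reserved private ranges mentioned earlier:

import ipaddress

block = ipaddress.ip_network("192.168.0.0/19")
print(block.netmask)          # 255.255.224.0
print(block.num_addresses)    # 8192 addresses in a /19

print(ipaddress.ip_address("10.1.2.3").is_private)         # True
print(ipaddress.ip_address("140.179.220.200").is_private)  # False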

Examining your network with commands:


PING
Ping is used to check for a response from another computer on the network. It can tell you a
great deal of information about the status of the network and the computers you are
communicating with.
Ping returns different responses depending on the computer in question. The responses are
similar depending on the options used.
Ping uses IP to request a response from the host. It does not use TCP. It takes its name from a
submarine sonar search - you send a short sound burst and listen for an echo - a ping - coming
back.
In an IP network, ping sends a short data burst - a single packet - and listens for a single
packet in reply. Since this tests the most basic function of an IP network (delivery of a single
packet), it's easy to see how you can learn a lot from some pings.

To stop ping, type control-c. This terminates the program and prints out a nice summary of
the number of packets transmitted, the number received, and the percentage of packets lost,
plus the minimum, average, and maximum round-trip times of the packets.
Sample ping session
PING localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=5 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=6 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=7 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=8 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=9 ttl=255 time=2 ms
localhost ping statistics
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 2/2/2 ms

The Time-To-Live (TTL) field can be interesting. The main purpose of this is so that a packet
doesn't live forever on the network and will eventually die when it is deemed "lost." But for
us, it provides additional information. We can use the TTL to determine approximately how
many router hops the packet has gone through. In this case the hop count is roughly 255 minus N,
where N is the TTL of the returning Echo Replies. If the TTL field varies in successive pings, it could
indicate that the successive reply packets are going via different routes, which isn't a great
thing.
The time field is an indication of the round-trip time to get a packet to the remote host. The
reply is measured in milliseconds. In general, it's best if round-trip times are under 200
milliseconds. The time it takes a packet to reach its destination is called latency. If you see a
large variance in the round-trip times (which is called "jitter"), you are going to see poor
performance talking to the host.
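As a rough illustration of the arithmetic described above, the TTL value and the round-trip samples below are made-up numbers, not taken from a real trace:

# Estimate hop count from the TTL of a returned Echo Reply, assuming the
# remote host sent the reply with an initial TTL of 255 (as in the text).
reply_ttl = 247
print("approximately", 255 - reply_ttl, "router hops")

# Jitter: the variation in round-trip times across several pings.
rtts = [2.1, 2.3, 9.8, 2.2, 2.4]   # sample round-trip times in milliseconds
print("latency", min(rtts), "to", max(rtts), "ms, jitter", round(max(rtts) - min(rtts), 1), "ms")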

NSLOOKUP
NSLOOKUP is an application that facilitates looking up hostnames on the network. It can
reveal the IP address of a host or, using the IP address, return the host name.
It is very important when troubleshooting problems on a network that you can verify the
components of the networking process. Nslookup allows this by revealing details within the
infrastructure.
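The same kind of lookup can be scripted with Python's standard socket module; the hostname and address below are only examples:

import socket

print(socket.gethostbyname("www.example.com"))   # forward lookup: name -> IP address

# Reverse lookup: IP address -> (hostname, aliases, addresses); raises socket.herror
# if the address has no PTR record.
name, aliases, addresses = socket.gethostbyaddr("8.8.8.8")
print(name)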

NETSTAT
NETSTAT is used to look up the various active connections within a computer. It is helpful
to understand what computers or networks you are connected to. This allows you to further
investigate problems. One host may be responding well but another may be less responsive.

IPconfig
This is a Microsoft Windows NT/2000 command. It is very useful in determining what could
be wrong with a network. This command, when used with the /all switch, reveals an enormous
amount of troubleshooting information about the system, e.g.:
Windows 2000 IP Configuration
Host Name . . . . . . . . . . . . : cowder
Primary DNS Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Broadcast
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
Connection-specific DNS Suffix. :
Description . . . . . . . . . . . :
WAN (PPP/SLIP) Interface
Physical Address. . . . . . . . . : 00-53-45-00-00-00
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 12.90.108.123
Subnet Mask . . . . . . . . . . . : 255.255.255.255
Default Gateway . . . . . . . . . : 12.90.108.125
DNS Servers . . . . . . . . . . . : 12.102.244.2
204.127.129.2

Network Cables
In a network you will commonly find three types of cable in use: coaxial cable,
fiber optic and twisted pair.

Coaxial Cable
Coaxial cable has two copper conductors. The core wire lies in the center and is a solid
conductor. The core is enclosed in an insulating sheath; over the sheath a second conductor is
wrapped, which is in turn encased in an insulating sheath, and the whole is covered by a plastic
jacket. Because of this structure, coax cables can carry higher-frequency signals than twisted-pair
cables. The wrapped structure provides a good shield against noise and crosstalk. Coaxial cables
provide high bandwidth rates of up to 450 Mbps. There are three categories of coax cable, namely
RG-59 (cable TV), RG-58 (Thin Ethernet) and RG-11 (Thick Ethernet); RG stands for Radio
Government. Cables are connected using BNC connectors and BNC-Ts. A BNC terminator is
used to terminate the wire at the far ends.

Fiber Optics: Fiber optics works on the properties of light. When a light ray hits the boundary
between the core and its cladding at more than the critical angle, it is reflected back into the core
instead of passing through; fiber optic cable exploits this property. The core of a fiber optic cable
is made of high-quality glass or plastic. Light is emitted from one end, travels through the core,
and at the other end a light detector detects the light stream and converts it into electrical data.
Fiber optics provides the highest speeds. It comes in two modes: single-mode fiber and multimode
fiber. Single-mode fiber carries a single ray of light, whereas multimode fiber is capable of
carrying multiple beams of light.

Twisted Pair Cables


These come in two flavors of unshielded and shielded.
Shielded Twisted Pair (STP)
It is more common in high-speed networks. The biggest difference between UTP and STP is
that STP uses metallic shield-wrapping to protect the wires from interference. Something else to
note about these cables is that they are also graded by category numbers: the bigger the number,
the better the protection from interference. Most networks should go with no less than Cat 3,
and Cat 5 is most recommended.
Now that you know about cables, you need to know about connectors. This is pretty important,
and you will most likely need the RJ-45 connector. This is a cousin of the phone jack connector
and looks very similar, except that the RJ-45 is bigger. Most commonly, connectors come in two
flavors: the BNC (Bayonet Neill-Concelman) connector used with coaxial Ethernet, and the
RJ-45 used in smaller networks using UTP/STP.

Unshielded Twisted Pair (UTP)
This is the most popular form of cables in the network and the cheapest form that you can go
with. The UTP has four pairs of wires and all inside plastic sheathing. The biggest reason that
we call it Twisted Pair is to protect the wires from interference from themselves. Each wire is
only protected with a thin plastic sheath.

Ethernet Cabling
Now, to familiarize you further with Ethernet and its cabling, we need to look at the 10's.
10Base2 is considered thin Ethernet, thinnet or thinwire, and uses thin coaxial cable
to create a 10 Mbps network. The cable segments in this network can't be over 185 meters in
length. These cables connect with the BNC connector. Also, as a note, unused
connections must have a terminator, which will be a 50-ohm terminator.
10Base5 is considered thicknet and is used with a coaxial cable arrangement such as the
BNC connector. The good side to coaxial cable is the high-speed transfer, and cable
segments can be up to 500 meters between nodes/workstations. You will typically see the
same speed as 10Base2 but larger cable lengths for more versatility.
10BaseT, where the “T” stands for twisted as in UTP (Unshielded Twisted Pair), uses this cabling
for 10 Mbps of transfer. The downside is that you can only have cable lengths of 100 meters
between nodes/workstations. The good side to this network is that it is easy to set up and
cheap! This is why these networks are so common and ideal for small offices or homes.
100BaseT, considered Fast Ethernet, uses twisted-pair cabling (UTP or STP) and reaches data
transfer rates of 100 Mbps. This system is a little more expensive but remains as popular as
10BaseT and cheaper than most other types of network. This one, of course, is the cheap,
fast option.
10BaseF, this little guy has the advantage of fiber optics, and the F stands for just that. This
arrangement is a little more complicated and uses special connectors and NICs along with
hubs to create its network.
An important part of designing and installing an Ethernet is selecting the appropriate Ethernet
medium. There are four major types of media in use today: Thickwire for 10BASE5
networks, thin coax for 10BASE2 networks, unshielded-twisted pair (UTP) for 10BASE-T
networks and fiber optic for 10BASE-FL or Fiber-Optic Inter-Repeater Link (FOIRL)
networks. This wide variety of media reflects the evolution of Ethernet and also points to the
technology's flexibility.
Thickwire was one of the first cabling systems used in Ethernet, but it was expensive and
difficult to use. This evolved to thin coax, which is easier to work with and less expensive.

Network Interface Cards: Network interface cards, commonly referred to as NICs, are
used to connect a PC to a network. The NIC provides a physical connection between the
networking cable and the computer's internal bus.
Different computers have different bus architectures; PCI bus master slots are most
commonly found on 486/Pentium PCs and ISA expansion slots are commonly found on 386
and older PCs. NICs come in three basic varieties: 8-bit, 16-bit, and 32-bit. The larger the
number of bits that can be transferred to the NIC, the faster the NIC can transfer data to the
network cable.
Many NIC adapters comply with Plug-n-Play specifications. On these systems, NICs are
automatically configured without user intervention, while on non-Plug-n-Play systems,
configuration is done manually through a setup program and/or DIP switches.
Cards are available to support almost all networking standards, including the latest Fast
Ethernet environment. Fast Ethernet NICs are often 10/100 capable, and will automatically set
to the appropriate speed. Full duplex networking is another option, where a dedicated
connection to a switch allows a NIC to operate at twice the speed.

Network Topologies
A network topology is the way computer systems or network equipment are connected to
each other. Topologies may define both the physical and logical aspects of the network. The
logical and physical topologies of a given network may be the same or different.

Point-to-point: Point-to-point networks contain exactly two hosts (computers, switches,
routers or servers) connected back to back using a single piece of cable. Often, the receiving
end of one host is connected to the sending end of the other, and vice-versa.

Point-to-point Topology
If the hosts are connected point-to-point logically, they may have multiple intermediate
devices between them. But the end hosts are unaware of the underlying network and see each
other as if they were connected directly.

Bus Topology: In contrast to point-to-point, in bus topology all devices share a single
communication line or cable. All devices are connected to this shared line. Bus topology may
have problems when more than one host sends data at the same time; therefore, bus
topology either uses CSMA/CD technology or recognizes one host as Bus Master to solve
the issue. It is one of the simple forms of networking, where a failure of one device does not
affect the other devices. But failure of the shared communication line makes all the other devices fail.

Bus Topology

Both ends of the shared channel have a line terminator. The data is sent in only one direction,
and as soon as it reaches the extreme end, the terminator removes the data from the line.

Star Topology: All hosts in star topology are connected to a central device, known as Hub
device, using a point-to-point connection. That is, there exists a point to point connection
between hosts and Hub. The hub device can be Layer-1 device (Hub / repeater) or Layer-2
device (Switch / Bridge) or Layer-3 device (Router / Gateway).

Star Topology
As in bus topology, the hub acts as a single point of failure: if the hub fails, connectivity of all
hosts to all other hosts fails. Every communication between hosts goes through the hub only.
Star topology is not expensive, as connecting one more host requires only one cable and
configuration is simple.

Ring Topology: In ring topology, each host machine connects to exactly two other machines,
creating a circular network structure. When one host tries to communicate with or send a message
to a host which is not adjacent to it, the data travels through all the intermediate hosts. To connect
one more host to the existing structure, the administrator needs only one more cable.

Ring Topology
Failure of any host results in failure of the whole ring; thus every connection in the ring is a
point of failure. There are methods which employ an additional backup ring.

Mesh Topology: In this type of topology, a host is connected to one or more other hosts. This
topology may have hosts with a point-to-point connection to every other host, or hosts with
point-to-point connections to only a few of the other hosts.

Full Mesh Topology

Hosts in mesh topology also work as relays for other hosts which do not have direct point-to-
point links. Mesh topology comes in two flavors (see the short calculation after this list):
 Full Mesh: All hosts have a point-to-point connection to every other host in the
network; thus a full mesh of n hosts requires n(n-1)/2 cables (connections). It
provides the most reliable network structure among all network topologies.
 Partially Mesh: Not all hosts have a point-to-point connection to every other host.
Hosts connect to each other in some arbitrary fashion. This topology is used where we
need to provide reliability to some hosts while it is not as necessary for the others.
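As a quick check of the n(n-1)/2 formula given for a full mesh:

def full_mesh_cables(n):
    # Number of point-to-point links needed to fully mesh n hosts.
    return n * (n - 1) // 2

for n in (2, 5, 10):
    print(n, "hosts ->", full_mesh_cables(n), "cables")
# 2 hosts -> 1, 5 hosts -> 10, 10 hosts -> 45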

Tree Topology: Also known as hierarchical topology, this is the most common form of network
topology in use today. It imitates an extended star topology and inherits properties of the bus
topology. This topology divides the network into multiple levels/layers. Mainly in LANs, a
network is divided among three types of network devices. The lowest is the access layer, where
users' computers are attached. The middle layer is the distribution layer, which works as a
mediator between the upper and lower layers. The highest layer is the core layer, the central
point of the network, i.e. the root of the tree from which all nodes fork.

Tree Topology

Hybrid Topology: A network structure whose design contains more than one topology is
said to be Hybrid Topology. Hybrid topology inherits merits and demerits of all the
incorporating topologies.

Hybrid Topology
The above picture represents an arbitrary hybrid topology. The combining topologies may
contain attributes of star, ring, bus and daisy-chain topologies. Most WANs are connected
by means of a dual-ring topology, and the networks connected to them are mostly star topology
networks. The Internet is the best example of a large hybrid topology.

Network Models

The OSI Model: The Open Systems Interconnection (OSI) reference model has become an
international standard and serves as a guide for networking. This model is the best known and
most widely used guide to describe networking environments. Vendors design network
products based on the specifications of the OSI model. It provides a description of how
network hardware and software work together in a layered fashion to make communications

possible. It also helps with troubleshooting by providing a frame of reference that describes
how components are supposed to function.
There are seven layers in the OSI model to get familiar with: the physical layer, data
link layer, network layer, transport layer, session layer, presentation layer, and the
application layer.

OSI Model

1. Physical Layer, is just that: the physical parts of the network, such as wires and cables and
their media, along with their lengths. This layer also covers the electrical signals
that transmit data through the system.
2. Data Link Layer, this layer is where we actually assign meaning to the electrical
signals in the network. The layer also determines the size and format of data sent to
printers and other devices, which are also called nodes
in the network. This layer also defines
the error detection and correction schemes that ensure data was sent and received correctly.
3. Network Layer, this layer provides the definition for the connection of two dissimilar
networks.
4. Transport Layer, this layer allows data to be broken into smaller packages for data to
be distributed and addressed to other nodes (workstations).
5. Session Layer, this layer helps with the task of carrying information from one node
(workstation) to another. A session has to be established before we can
transport information to another computer.
6. Presentation Layer, this layer is responsible for coding and decoding data sent to the node.
7. Application Layer, this layer allows you to use an application that communicates
with, say, the operating system of a server. A good example would be using your web
browser to interact with the operating system on a server such as Windows NT, which
in turn gets the data you requested.
Internet Model
The Internet uses the TCP/IP protocol suite, also known as the Internet suite. This defines the
Internet Model, which has a four-layered architecture. The OSI Model is a general communication
model, while the Internet Model is what the Internet uses for all its communication. The Internet
is independent of its underlying network architecture, and so is its model. This model has the following layers:

 Application Layer: This layer defines the protocol which enables user to interact
with the network such as FTP, HTTP etc.
 Transport Layer: This layer defines how data should flow between hosts. Major
protocol at this layer is Transmission Control Protocol. This layer ensures data
delivered between hosts is in-order and is responsible for end to end delivery.
 Internet Layer: IP works on this layer. This layer facilitates host addressing and
recognition. This layer defines routing.
 Link Layer: This layer provides mechanism of sending and receiving actual data. But
unlike its OSI Model’s counterpart, this layer is independent of underlying network
architecture and hardware.

Network – Security
Introduction
When networking was first used, it was limited to the military and universities for research and
development purposes. Later, when networks merged together and formed the Internet, users'
data began to travel through public transit networks, where the users are no longer only scientists
or computer-science scholars. Their data can be highly sensitive, such as bank credentials,
usernames and passwords, personal documents, online shopping details or secret official
documents. All security threats are intentional, i.e. they occur only if intentionally triggered.
Security threats can be divided into the categories mentioned below:
 Interruption:

Interruption is a security threat in which available resources are attacked. For example, a user
is unable to access its web-server or the web-server is hijacked.
 Privacy-breach:

In this threat, the privacy of a user is compromised. Someone who is not the authorized
person is accessing or intercepting data sent or received by the original authenticated user.
 Integrity:

This type of threat includes any alteration or modification in the original context of
communication. The attacker intercepts and receives the data sent by the Sender and the
attacker then either modifies or generates false data and sends to the receiver. The receiver
receives data assuming that it is being sent by the original Sender.
 Authenticity:

When an attacker or security breacher represents himself as the authentic person and
accesses resources or communicates with other authentic users.
No technique in the present world can provide 100% security, but steps can be taken to
secure data while it travels over an unsecured network or the Internet. The most widely used
technique is cryptography.

Cryptography is a technique of encrypting plain-text data so that it is difficult to
understand and interpret. There are several cryptographic approaches available today, as
described below:
 Secret Key
 Public Key
 Message Digest

Secret Key Encryption


Both sender and receiver share one secret key. This secret key is used to encrypt the data at
the sender's end. After encrypting the data, it is then sent over the public domain to the receiver.
Because the receiver knows and has the secret key, the encrypted data packets can easily be
decrypted. An example of secret key encryption is DES. Secret key encryption requires a
separate key for each host on the network, making key management difficult.
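As a toy illustration of the idea (this is not DES or any real cipher, just a sketch showing that the same shared key both encrypts and decrypts):

def xor_cipher(data, key):
    # Toy symmetric cipher: XOR each byte of the data with the shared key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"shared-secret"                       # known to sender AND receiver
ciphertext = xor_cipher(b"transfer 100 naira", secret_key)
plaintext = xor_cipher(ciphertext, secret_key)      # the same key decrypts
print(plaintext)                                    # b'transfer 100 naira'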

Public Key Encryption


In this encryption system, every user has his own secret key, which is not in the shared
domain; the secret key is never revealed in the public domain. Along with the secret key, every
user also has a public key. The public key is always made public and is used by senders to
encrypt the data. When the user receives the encrypted data, he can easily decrypt it by using
his own secret key. An example of public key encryption is RSA.
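A toy RSA example with deliberately tiny numbers makes the idea concrete; real RSA keys are thousands of bits long, and the primes used here (61 and 53) are purely for illustration:

# Toy RSA: p=61, q=53 -> n=3233, phi=3120, public exponent e=17, private exponent d=2753
n, e, d = 3233, 17, 2753

message = 65                        # any number smaller than n
ciphertext = pow(message, e, n)     # encrypt with the PUBLIC key (e, n)
decrypted = pow(ciphertext, d, n)   # decrypt with the PRIVATE key (d, n)
print(ciphertext, decrypted)        # 2790 65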

Message Digest
In this method, the actual data is not sent; instead, a hash value is calculated and sent. The
user at the other end computes his own hash value and compares it with the one just received. If
both hash values match, the data is accepted; otherwise it is rejected. An example of a message
digest is MD5 hashing. It is mostly used in authentication, where a user's password is
cross-checked with the one saved on the server.
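For example, Python's standard hashlib module can compute an MD5 digest; the password string below is obviously just an example:

import hashlib

digest = hashlib.md5(b"my-password").hexdigest()
print(digest)   # the 128-bit MD5 digest written as 32 hexadecimal characters
# A server would store this digest and compare it with the digest of whatever
# password the user types in at login.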
