CSC213 Network Update-2
OF
COMPUTER NETWORKING
AND
DATA COMMUNICATION
PART ONE
TABLE OF CONTENTS
Chapter One
1. Networks
Network facilities and connection
Network Protocols
Chapter Two
2. Network and Communication
Signal and Protocol
OSI Reference Model
Data Encapsulation
Chapter three
3. Network topology
Types and Connectivity
LAN topology
WAN topology
Chapter four
4. Network Cables
Coaxial Cable
Twisted Pair Cable
Unshielded Twisted Pair Cable
Shielded Twisted Pair Cable
Categories of Cables
Fiber Optics Cable
Chapter five
5. LAN Transmission Media
Line Impairment, Crosstalk
Nonlinear Distortion, Jitter, Transient
Impulse Noise, Dropout
Gain Hits, Phase Hits
Attenuation Distortion, Attenuation vs Frequency
Chapter six
6. Network Connectivity Architecture and Devices
Hub
Bridge
Switch
Routers
Gateway
Brouters
Chapter Seven
7. Data communication
Types of Transmission at Different Layers
CSMA/CD bus
Token Ring, Token Bus
WAN characteristics
ISDN
Internetworking
Network interface Adapter and Network interface Card
Chapter Eight
8. Network Practical
Cable construction
Cable connectivity
Network configuration
Testing and Troubleshooting
Any other Network practice
CHAPTER ONE
Life on this planet has grown, as a whole, toward higher levels of complexity. Over billions of years, neural
structures and "sensing" organs have become more and more elaborate. One could say that consciousness has
grown from this (or is consciousness driving it?). Our own neural systems have developed over millions of
years as we "awakened" to the world around us. Now we are very busy creating a global communications
network that looks a lot to me like an extension of our neural system. It potentially links everybody on the
planet, and it extends our ability to communicate and to "sense" the world. For example, you can connect to
a weather station in Antarctica or view traffic conditions from cameras pointed at city intersections.
"If you look as if you were on another planet and you saw the proliferation of networks,
Internet and Intranet, you would see a fantastic project that the human race has undertaken. It is like one
gigantic project, and it dwarfs the building of the cathedrals and the pyramids."
—Alvin Toffler
Recently, a device was released that transmits information through your body. When you touch another
person, the device sends electronic information about you through your finger to the other person's device.
It's an on-contact electronic calling card. In another report, scientists had wired some brain cells to
microchips! What's next?
Networks and the Internet are no longer the exclusive interest of information systems managers and the
technically advantaged. Web protocols have made information systems available to everyone. People tap into
multimedia information and contribute their own knowledge. They participate in chat forums, exchange
messages in discussion groups, post Web pages, contribute to information warehouses, use collaborative
applications, and videoconference with one another. We are creating a globally accessible information
warehouse. This is not a dictionary, so you won't find an entry for every industry protocol, product, or
technology. However, you'll find many terms and concepts are discussed within the text.
So, welcome to the world of Networking, a survey of computer networking concepts, technologies, and
services. This work is intended to be a comprehensive, accurate, and timely resource for students, system
engineers, network administrators, IT implementers, and computing professionals from all walks of life.
Before we outline its scope of coverage, however, I’ll ask a simple question that surprisingly has no easy
answer: What is networking?
A network is a communication system that allows users to access resources on other computers and exchange
messages with other users. It allows users to share resources on their own systems with other network users
and to access information on centrally located systems or systems that are located at remote offices. It may
provide connections to the Internet or the networks of other organizations. Network connections allow users
to operate from their home or on the road.
In the simplest sense, networking means connecting computers so that they can share files, printers,
applications, and other computer-related resources. That includes anything that enables two or more
computers to communicate with each other and/or with other devices. Networking enables users to share
information, collaborate on work items, share printers, and even communicate directly through individually
addressed messages. The advantages of networking computers together are pretty obvious:
❖ Users can save their important files and documents on a file server, which is more secure than storing
them on their workstations because a file server can be backed up in a single operation.
❖ Users can share a network printer, which costs much less than having a locally attached printer for
each user’s computer.
❖ Users can share groupware applications running on application servers, which enables users to share
documents, send messages, and collaborate directly.
❖ The job of administering and securing a company’s computer resources is simplified since they are
concentrated on a few centralized servers.
This definition of networking focuses on the basic goals of networking computers, which include
1. Increased manageability
2. Security
3. Efficiency
4. Cost-effectiveness over non-networked systems.
Networks consist of many components, including both hardware and software. Some components can even
be completely intangible. It is important to note that networks have evolved into two broad categories: Local
Area Networks (LANs) and Wide Area Networks (WANs). However, there are several types of network on
which we could focus:
❖ Local area networks (LANs), which can range from a few desktop workstations in a small
office/home office (SOHO) to several thousand workstations and dozens of servers deployed
throughout dozens of buildings on a university campus or in an industrial park
❖ Wide area networks (WANs), which might be a company’s head office linked to a few branch
offices or an enterprise spanning several continents with hundreds of offices and subsidiaries
❖ Metropolitan area networks (MANs), defined through IEEE Project 802, which connect LANs
across the geographic distances of a city or metropolitan area
❖ The Internet, the world’s largest network and the “network of networks”
We could also focus on the networking architectures in which these types of networks can be implemented,
or we could look at the networking technologies used to implement each networking architecture:
❖ LAN technologies such as Ethernet, ARCNET, Token Ring, Banyan Vines, Fast Ethernet, Gigabit
Ethernet, and Fiber Distributed Data Interface (FDDI)
❖ WAN technologies such as Integrated Services Digital Network (ISDN), T1 leased lines, X.25,
frame relay, Synchronous Optical Network (SONET), Digital Subscriber Line (DSL), and
Asynchronous Transfer Mode (ATM)
❖ Wireless communication technologies, including cellular systems such as Global System for Mobile
Communications (GSM), Code Division Multiple Access (CDMA), Personal Communications
Services (PCS), and infrared systems based on the standards developed by the Infrared Data
Association (IrDA)
The basic hardware components of a network fall into three types of devices:
1. Transmission facilities
2. Access devices
3. Devices that repeat transmitted signals
Transmission facilities are the media used to transport a network's signals to their destination. Media types
include cables as well as light, radio, and microwave signals. Access devices are responsible for connecting
systems to the transmission facilities.
In a LAN, the access device is known as a network interface card (NIC). The NIC is a circuit board that is
installed in a computer and occupies an I/O slot on its motherboard. In a WAN, the access device is a
router. Routers operate at layer 3 of the OSI reference model and work with two types of protocols: routing
protocols and routable (routed) protocols.
A repeater is a device that accepts transmitted signals, regenerates them, and puts them back on the network.
In a LAN, a multiport repeater is commonly referred to as a hub. These components are basic in that all
networks must either contain them or, at minimum, work around them. We could also consider these devices
that are used to implement these technologies:
❖ LAN devices such as repeaters, concentrators, bridges, hubs, switches, routers, and Multistation
Access Units (MAUs)
❖ WAN devices such as modems, ISDN terminal adapters, Channel Service Units (CSUs), Data
Service Units (DSUs), packet assembler/disassemblers (PADs), frame relay access devices
(FRADs), multiplexers (MUXes), and inverse multiplexers (IMUXes)
❖ Equipment for organizing, protecting, and troubleshooting LAN and WAN hardware, such as racks,
cabinets, surge protectors, line conditioners, uninterruptible power supplies (UPS’s), KVM
switches, and cable testers
❖ Cabling technologies such as coaxial cabling, twinax cabling, twisted-pair cabling, fiber-optic
cabling, and associated equipment such as connectors, patch panels, wall plates, and splitters
❖ Unguided media technologies such as infrared communication, wireless cellular networking, and
satellite networking, and their associated hardware
❖ Data storage technologies such as RAID, network-attached storage (NAS), and storage area
networks (SANs), and the technologies used to connect them, such as Small Computer System
Interface (SCSI) and Fibre Channel
❖ Technologies for securely interfacing private corporate networks with unsecured public ones, such
as firewalls, proxy servers, and packet-filtering routers
❖ Technologies for increasing availability and reliability of access to network resources, such as
clustering, caching, load balancing, and fault-tolerant technologies
❖ Network management technologies such as the Simple Network Management Protocol (SNMP) and
Remote Network Monitoring (RMON)
On a more general level, networking also involves the standards and protocols that underlie the technologies
and hardware mentioned, including the Open Systems Interconnection (OSI) networking model of the
International Organization for Standardization (ISO); the X-series, V-series, and G-series standards of the
International Telecommunication Union (ITU); Project 802 of the Institute of Electrical and Electronics
Engineers (IEEE); the Requests for Comments (RFCs) of the Internet Engineering Task Force (IETF); and
others from the World Wide Web Consortium (W3C), the ATM Forum, and the Gigabit Ethernet Alliance.
We could dig still deeper into the technologies and talk about the fundamental engineering concepts that
underlie networking services and technologies, including
❖ Impedance, attenuation, shielding, near-end crosstalk (NEXT), and other characteristics of cabling
systems
❖ Signals and how they can be multiplexed using time-division, frequency-division, statistical, and
other multiplexing techniques
❖ Bandwidth, throughput, latency, jabber, jitter, backbone, handshaking, hop, dead spots, dark fiber,
and late collisions
❖ Balanced vs. unbalanced signals, baseband vs. broadband transmission, data communications
equipment (DCE) vs. data terminal equipment (DTE), circuit switching vs. packet switching,
connection-oriented vs. connectionless communication, unicast vs. multicast and broadcast, point-
to-point vs. multipoint links, direct sequencing vs. frequency hopping methods, and switched virtual
circuit (SVC) vs. permanent virtual circuit (PVC)
We could also look at who provides networking technologies (especially WAN technologies):
❖ Internet service providers (ISPs), application service providers (ASPs), integrated communications
providers (ICPs), and so on
❖ The central office (CO) of the local telco (through an existing local loop connection), a cable
company, or a wireless networking provider
❖ Local exchange carriers (LECs) or Regional Bell Operating Companies (RBOCs) through their
points of presence (POPs) and Network Access Points (NAPs)
❖ Telecommunications service providers that supply dedicated leased lines, circuit-switched
connections, or packet-switching services
We could also look at the vendor-specific software technologies that make computer networking possible
(and useful):
❖ Powerful network operating systems (NOS’s) such as Windows NT and Windows 2000, Novell
NetWare, various flavors of UNIX, and free operating systems such as Linux and FreeBSD
❖ Specialized operating systems such as Cisco Systems’ Internetwork Operating System (IOS), which
runs on Cisco routers
❖ Directory systems such as the domain-based Windows NT Directory Services (NTDS) for Windows
NT, Active Directory in Windows 2000, and Novell Directory Services (NDS) for Novell NetWare
❖ File systems such as NTFS on Windows platforms and distributed file systems such as the Network
File System (NFS) developed by Sun Microsystems
❖ Programming languages and architectures for distributed computing, such as the C and Java
languages, ActiveX and Jini technologies, Hypertext Markup Language (HTML) and Extensible
Markup Language (XML), the Distributed Component Object Model (DCOM) and COM+, and
Remote Procedure Calls (RPCs) and other forms of interprocess communication (IPC)
❖ Tools and utilities for integrating technologies from different vendors in a heterogeneous
networking environment, such as Gateway Services for NetWare (GSNW), Services for Macintosh,
Services for UNIX on the Windows NT and Windows 2000 platforms, and Microsoft SNA Server
for connectivity with mainframe systems
On a more detailed level, we could look at the tools and utilities that you can use to administer various NOS’s
and their networking services and protocols, including the following:
❖ Microsoft Management Console (MMC) and its administrative snap-ins in Windows 2000 or the
utilities in the Administrative Tools program group in Windows NT
❖ TCP/IP command-line utilities such as ping, ipconfig, traceroute, arp, netstat, nbtstat, finger, and
nslookup (a scripted check in the same spirit is sketched after this list)
❖ Platform-specific command-line utilities such as the Windows commands for batch administration
(including the at command, which you can use with ntbackup to perform scheduled network backups
of Windows NT servers)
❖ Cross-platform scripting languages that can be used for system and network administration,
including JavaScript, VBScript, and Perl
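Checks like these can also be scripted. The Python sketch below is only an illustration in that spirit (it is not one of the tools listed above): it resolves a host name, much as nslookup does, and times a TCP connection as a rough stand-in for a ping round trip. The host name and port are arbitrary example values.

# Minimal scripted connectivity check in the spirit of nslookup and ping.
# The host name and port below are arbitrary examples, not taken from the text.
import socket
import time

def check_host(hostname, port=80, timeout=2.0):
    # Name resolution, roughly what nslookup reports.
    address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {address}")

    # Timed TCP connection, a rough stand-in for a ping round trip.
    start = time.perf_counter()
    with socket.create_connection((address, port), timeout=timeout):
        elapsed = (time.perf_counter() - start) * 1000
    print(f"Connected to {address}:{port} in {elapsed:.1f} ms")

if __name__ == "__main__":
    check_host("www.example.com")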
We could also look at applications that are network-aware, such as the Microsoft BackOffice network
applications suite that includes Microsoft Exchange Server, Microsoft SQL Server, Microsoft SNA Server,
and Microsoft Proxy Server. We could look at some of the terminology and technologies associated with
these applications, how the software is licensed, and the GUI or command-line tools used to administer them.
As you can see, there’s more to networking than hubs and cables. In fact, the field of computer networking
is almost overwhelming in its scope and complexity, and one could spend a lifetime studying only one small
aspect of it. But it hasn’t always been this way. Let’s take a look at how we got to this point.
Host Computer: The host computer provides a centralised computing environment for the execution of
application programs. Terminals and peripheral devices are connected to it via front-end processors. It is a
large-scale mainframe computer with associated disk and tape storage devices.
Terminal: A terminal is an input device used by a user. It consists of a keyboard and console screen. Most
terminals are non-intelligent, in that all processing is performed by the host computer. Data typed at the
keyboard is sent to the host computer for processing, and screen updates are sent from the host computer to
the console screen of the terminal. Terminals are generally connected via a serial link, though it is now
becoming popular to connect terminals via network connections like Token Ring or Ethernet. A terminal is
dependent upon the host computer. It cannot work as a standalone device; all processing is done by the host
computer.
Local: Local refers to the location being the same site as the host computer. For instance, local processing
means that all processing is done locally, on site, without the need for sending the data somewhere else to be
processed. Often, it will be located in the same room or building.
Remote: Remote refers to the location being an external location other than the present physical site. To
support terminals in different cities, these remote terminals are connected to the host computer via data
communication links. In general, we could say that a remote terminal is connected via a data communication
link (utilising a modem).
Front End Processor: A front end processor (sometimes referred to as a communications processor) is a
device which interfaces a number of peripheral devices (terminals, disk units, printers, tape units) to the host
computer. Data is transferred between the host computer and front end processor using a high speed parallel
interface. The front end processor communicates with peripheral devices using slow speed serial interfaces.
The purpose of the front end processor is to off-load the work of managing peripheral devices from the host
computer, allowing it to concentrate on running applications software. The FEP provides the interface
between the host computer and the data communications network which comprises all the terminals or other
host computers.
Concentrator: A concentrator is a device which combines a number of low speed devices onto a smaller
number of high speed channels. It does this by combining the data from each sub-channel and sending it over the links.
Concentrators take advantage of the idle time during transfers, and use this idle time by allocating it to another
sub-channel. This means that concentrators must have sufficient buffer storage for holding the data from each
sub-channel. It can be thought of as a many to few device. The purpose of using a concentrator is for
combining circuits.
Fig: Concentrator
Multiplexor: A multiplexor is a device which shares a communications link between a number of users. It
does this by time or frequency division. It is costly to provide a separate circuit for each device (terminal):
imagine having 200 remote terminals and supplying 200 physical lines, one for each terminal.
Rather than provide a separate circuit for each device, the multiplexor combines each low speed circuit onto
a single high speed link. The cost of the single high speed link is less than the required number of low speed
links.
In time division, the communications link is subdivided in terms of time. Each sub-circuit is given the channel
for a limited amount of time, before it is switched over to the next user, and so on.
Here it can be seen that each sub-channel occupies the entire bandwidth of the channel, but only for a portion
of the time.
In frequency division multiplexing, each sub-channel is separated by frequency (each channel is allocated
part of the bandwidth of the main link).
The speed or bandwidth of the main link is the sum of the individual channel speeds or bandwidths. A
multiplexor can be thought of as a many-to-one device.
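The idea behind time-division multiplexing can be sketched in a few lines of Python. The slot size and sub-channel data below are made-up values, used only to show how slots from several low speed channels are interleaved onto one high speed link and then separated again at the far end.

# Toy time-division multiplexer: each sub-channel gets the link for one
# fixed-size slot in turn. Slot size and channel data are made-up examples.
SLOT = 4  # characters carried per time slot

channels = {
    0: "TERMINAL-ONE-DATA",
    1: "terminal-two-data",
    2: "3333333333",
}

def multiplex(channels, slot=SLOT):
    frames = []
    position = 0
    while any(position < len(data) for data in channels.values()):
        for channel_id, data in channels.items():
            chunk = data[position:position + slot]
            if chunk:                       # idle sub-channels send nothing
                frames.append((channel_id, chunk))
        position += slot
    return frames

def demultiplex(frames):
    rebuilt = {}
    for channel_id, chunk in frames:
        rebuilt[channel_id] = rebuilt.get(channel_id, "") + chunk
    return rebuilt

link = multiplex(channels)
print(link)               # interleaved slots sharing the single high speed link
print(demultiplex(link))  # original sub-channel data recovered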
Modems
Modems are devices which allow digital data signals to be transmitted across an analogue link. Modem stands
for modulator/demodulator. A modem changes the digital signal to an analogue frequency, and sends this
tone across the analogue link. At the other end, another modem receives the signal and converts it back to
digital.
Smart Modems: This is a standard modem with a micro-processor to provide both data communications and
automatic dialling in one unit. These modems offer a range of features:
❖ auto dial, the modem can seize the phone line and dial a number
❖ auto answer, the modem can answer an incoming call
❖ auto fallback, the modem can use a lower speed if the line is noisy
❖ accept commands, the modem can be reconfigured
Modem Commands: Modems can receive and act on commands from the computer terminal. Commands
begin with the symbol AT and are terminated with a carriage return. The range of commands is called the
AT Command Set. Depending upon how the modem is configured (whether to echo results of commands),
the modem will return the symbol OK in response to a command request if it is performed.
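A minimal sketch of issuing an AT command from a program is shown below. It assumes the third-party pyserial package is installed and that the modem is reachable on the serial port named in the code; both are illustrative assumptions rather than details taken from the text.

# Minimal AT command exchange with a modem.
# Assumes the third-party pyserial package is installed and that the modem
# is reachable on the port named below; both are illustrative assumptions.
import serial

PORT = "/dev/ttyUSB0"   # hypothetical serial port name

with serial.Serial(PORT, baudrate=9600, timeout=2) as modem:
    modem.write(b"AT\r")          # commands start with AT, end with a carriage return
    response = modem.read(64)     # a configured modem typically replies "OK"
    print(response.decode(errors="replace"))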
Acoustic Modem: The acoustic modem interfaces to the telephone handset. The acoustic modem picks up
the sound signals generated by the earphone of the telephone handset, and generates sound which is picked
up by the mouthpiece of the telephone handset. The advantage of this type of modem is its easy interface
to existing phones. Its disadvantage is that, because the signals are coupled acoustically, it is suitable only for
very low data rates up to about 300 bits per second.
Modem Status Lights: On external modems, a number of lights are provided to indicate modem status.
• HS: High Speed, the modem is operating at its highest available speed
• AA: Auto Answer, the modem will automatically answer incoming calls
• CD: Carrier Detect, Means it has detected a carrier signal from a remote computer
• OH: Off Hook, lights when the modem takes control of the phone line
• MR: Modem Ready, the modem is turned on and is ready
• RD: Receive Data, Flickers to indicate incoming data
• SD: Send Data, Flickers to indicate data is being transmitted
Summary
Analogue signals vary in time and strength. Examples of analogue signals are speech and music. The telephone
network is designed to handle analogue signals. Digital signals have two states, on or off. In simplex circuits,
data only travels one way. In half-duplex circuits, data travels in both directions but not at the same time. In
full-duplex circuits, data can travel in both directions at the same time.
Parallel circuits use a separate wire for each bit of data, and also use wires to convey timing information.
Serial circuits use the same wire for all data bits, and timing information is sent along with the data. Parallel
transmission is faster. Examples of parallel circuits in computers are the address, data and control bus.
In asynchronous signals, each data element like a character is prefixed with a start and stop bit. In
synchronous signals, each group or block of characters is prefixed with start and stop codes. This means
higher speeds (more characters per second) can be achieved using synchronous signals than asynchronous
signals.
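The framing difference can be seen in a small sketch. In the Python example below, each character of an asynchronous stream is wrapped in its own start and stop bit, while the synchronous version frames the whole block once; the bit patterns used are simplified placeholders rather than the exact line coding of any particular standard.

# Simplified view of asynchronous versus synchronous framing.
# The start/stop bits and sync pattern are illustrative placeholders.

def async_frame(text):
    # One start bit (0) and one stop bit (1) around every 8-bit character.
    frames = []
    for ch in text:
        bits = format(ord(ch), "08b")
        frames.append("0" + bits + "1")
    return "".join(frames)

def sync_frame(text, sync="01111110"):
    # A single framing sequence around the whole block of characters.
    payload = "".join(format(ord(ch), "08b") for ch in text)
    return sync + payload + sync

message = "HI"
print("asynchronous:", async_frame(message))   # 10 bits sent per character
print("synchronous: ", sync_frame(message))    # framing cost paid once per block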
A number of different connections are provided by network companies. For users connecting to the internet,
dial-up is a preferred choice. For small businesses, ISDN could be an option. For large companies connecting
to the internet, leased lines of various speeds are a good solution. A modem is a device which allows computer
data to be sent over the telephone (dial-up) networks.
Network Facilities
Point to Point Circuits: Point to point circuits are sometimes called two-point circuits, and are used where
the remote terminal has sufficient data to warrant using a dedicated single connection to the host computer
(think of star networks as being point to point).
Multidrop/Multipoint Circuits: Multiple devices are attached to a common connection. This allows a
number of devices to share an expensive line. Only one device can use the line at any one time, and the line
is often managed using polling.
Switched Circuits: Switched circuits are offered by all public communications carriers. Dial-up telephone
circuits are an example of switched circuits. Connections are made via a central switching center, which
connects the two circuits together (like terminal and host computer).
Packet Switched Circuits: Packet switching is a store and forward data transmission technique in which a
message is broken up into small parts, each called a packet. Each packet is transmitted through the network
(and may take a different route from the previously sent packet), independent of other transmitted packets.
They are reassembled at the destination end into the original message. The facility which converts a message
into packets, and reassembles packets into messages is called a Packet Assembly/Disassembly (PAD) facility.
Charges are related to either time, the number of packets transmitted, or a combination of both. Distance
charges are relatively non-existent. Packet switching is very reliable, and public communications carriers
offer packet switching as an alternative to other forms of connections (like leased lines for example). There
is a high degree of error detection and control built into the packet switching protocol used to transfer packets
from point to point in a network.
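The basic job of a PAD can be sketched in a few lines of Python: the functions below break a message into numbered packets and reassemble it even when the packets arrive out of order. The packet size and header fields are invented for illustration.

# Toy packet assembly/disassembly (PAD): split a message into numbered
# packets and reassemble them, even if they arrive out of order.
# Packet size and header fields are invented for illustration.
import random

PACKET_SIZE = 8  # bytes of payload per packet

def disassemble(message: bytes):
    return [
        {"seq": i, "payload": message[offset:offset + PACKET_SIZE]}
        for i, offset in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets):
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

message = b"Packet switching is a store and forward technique."
packets = disassemble(message)
random.shuffle(packets)            # packets may take different routes
assert reassemble(packets) == message
print(f"{len(packets)} packets reassembled into the original message")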
Integrated Services Digital Network (ISDN): ISDN is a form of network which integrates voice,
data and image in a digitized form. It offers the customer a single interface to support integrated voice and
data traffic. Rather than using separate lines for voice (PABX system) and data (leased lines), ISDN uses a
single digital line to accommodate these requirements. The basic circuit provided in ISDN is the 2B+D
circuit. This is two B channels (each of 64 Kbit/s) and one D channel (of 16 Kbit/s). The D channel is normally
used for supervisory and signaling purposes, whilst the B circuits are used for voice and data.
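As a quick worked figure, the sketch below simply adds up the channels of a basic rate 2B+D circuit.

# Aggregate capacity of a basic rate ISDN (2B+D) circuit, per the text.
B_CHANNEL_KBPS = 64
D_CHANNEL_KBPS = 16

total = 2 * B_CHANNEL_KBPS + D_CHANNEL_KBPS
print(f"2B+D capacity: {total} Kbit/s")   # 144 Kbit/s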
Local Area Network: A LAN is a group of microcomputers or other workstation devices located in the same
general area and connected by a common cable. A LAN is used to share resources amongst a group of
individuals or company employees. These resources are typically:
o files
o application software
o printers
o computing power
o modems
o fax machines
Wide Area Network: A Wide area network is similar to a LAN, but geographically spread over a wider
area. The different segments of the WAN are interconnected by communication links (high speed serial, X.25
etc). In fact, a WAN often comprises a number of interconnected LANs at various sites. The characteristics
of a WAN are
❖ geographically expansive
❖ large number of computers and multiple host machines
❖ sophisticated support devices like routers and gateways for interconnecting the various segments
Each computer attaches to the network via a network interface card, which occupies an expansion slot.
Appropriate driver software provides an interface between the PC operating system and the network
interface card.
Once the network software is loaded, access is provided to other machines on the network. There are three
main types of network access provided.
❖ Peer to Peer Services: Each PC in the network can directly communicate and share resources with
any other machine in the network. Examples of this are
▪ Lantastic
▪ Lan Manager
▪ Windows For WorkGroups
▪ Windows NT
▪ Windows Server
▪ OS/2
▪ UNIX
Software programs are executed either locally, or on the peer computer. A user can take advantage of a faster
computer by specifying that the program run on the faster machine, rather than their own.
❖ File Server Services: A number of machines are designated as HOST servers. Users login to the
host machines and access files and application software stored on them. Users have no access to
other non host machines. Examples of this are
▪ Novell NetWare
▪ (Also LAN Manager, Windows NT, Windows Server and LANtastic)
Software is executed on the user's machine. The file server only provides file access;
the file is downloaded from the file server into the workstation, where it is then
executed.
❖ Client/Server Services: This is used extensively in database applications, where a front end
software program sends commands to a server for execution, with the results being displayed on the
user's machine. It utilizes the high speed of the server machine to do number crunching and complex
operations.
Running a network also involves ongoing responsibilities such as:
❖ user training
❖ support (user and system)
❖ configuration
Summary
Network devices like computers can be connected in one of FOUR topologies. A topology defines how they
are connected together. The four types are Ring, Star, Bus and Mesh.
Mesh topologies are used for networks which must be highly redundant and capable of withstanding failure.
This is why telephone companies interconnect their telephone exchanges in a mesh type arrangement.
Bus networks use a common cable which is shared by all devices. A protocol called Carrier Sense Multiple
Access with Collision Detection (CSMA/CD) is used to transmit data onto the common cable.
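The collision-and-backoff behaviour of CSMA/CD can be illustrated with a highly simplified, slotted simulation; the station count, timing model, and backoff range below are invented for illustration and do not reflect real Ethernet parameters.

# Highly simplified, slotted simulation of CSMA/CD behaviour: stations that
# become ready in the same slot collide and back off for a random number of
# slots before retrying. All values are invented for illustration.
import random

random.seed(1)
stations = {"A": 0, "B": 0, "C": 0}   # station -> slot of its next attempt

for slot in range(12):
    ready = [s for s, when in stations.items() if when <= slot]
    if len(ready) == 1:
        print(f"slot {slot}: {ready[0]} transmits successfully")
        stations[ready[0]] = slot + 13          # nothing more to send for now
    elif len(ready) > 1:
        print(f"slot {slot}: collision between {', '.join(ready)}")
        for s in ready:                          # random backoff before retry
            stations[s] = slot + 1 + random.randint(1, 4)
    # with no stations ready, the bus is idle for this slot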
In token passing networks, a free or empty token is passed from each device to the next device on the ring.
A device wanting to send data waits for the arrival of a free token and then fills it with the data it wants to send.
In a polled network, a master device queries each other device in the network at regular intervals to see if
that device has any data it wants to send.
A Local Area Network connects a number of computers and devices together in the same office or building.
It allows users to share resources like files, data, and printers.
CHAPTER TWO
NETWORK COMMUNICATION
When you connect two or more computers so they can communicate with each other, you create a data
network. This is true whether you connect the computers using a cable, a wireless technology such as infrared
or radio waves, or even modems and telephone lines. The technology that connects the computers together,
no matter what form it takes, is called the network medium. Copper-based cables are the most common form
of network medium, and for this reason the term network cable is often used to refer to any kind of network
medium.
Computers can communicate over a network in many ways and for many reasons, but a great deal that goes
on in the networking process is unconcerned with the nature of the data passing over the network medium.
By the time the data generated by the transmitting computer reaches the cable or other medium, it has been
reduced to signals that are native to that medium. These might be electrical voltages for a copper cable
network, pulses of light for a fiber optic network, or infrared or radio waves for a wireless network. These signals form a code that the network
interface in each receiving computer converts back into the binary data understood by the software running
on that computer. The computer then interprets the binary code into information it can use in a variety of
ways. Of course there is a great deal more to this process than this description indicates, and there is a lot
going on to make it possible for the e-mail you just sent to your mother to get reduced to electrical voltages,
transmitted halfway across the country, and then reconstituted into text on her computer.
In some cases, a network consists of identical computers running the same version of the same operating
system and using all the same applications, whereas other networks consist of many different computing
platforms running entirely different software. It might seem that it would be easier for the identical computers
to communicate than it would be for the different ones, and in some ways it is. But no matter what kind of
computers the network uses and what software the computers are running, they must have a common
language to understand each other. These common languages are called protocols, and computers use many
of them during even the simplest exchanges of network data. Just as two people must speak a common
language to communicate, two computers must have one or more protocols in common to exchange data.
A network protocol can be relatively simple or highly complex. In some cases, a protocol is simply a code—
such as a pattern of electrical voltages—that defines the binary value of a bit of data: 0 or 1. The concept is
the same as that of Morse code, in which a pattern of dots and dashes represents a letter of the alphabet. More
complicated networking protocols can provide a variety of services, including the following:
• Packet acknowledgment. This is the transmission of a return message by the recipient to verify
the receipt of a packet or packets. A packet is the fundamental unit of data transmitted over a LAN.
• Segmentation. This is the division of a lengthy data stream into segments sufficiently small for
transmission over the network inside packets.
• Flow control. This is the generation by a receiving system of messages that instruct the sending
system to speed up or slow down its rate of transmission.
• Error detection. This is the inclusion of special codes in a packet that the receiving system uses to
verify that the content of the packet wasn't damaged in transit (a checksum-based sketch appears after this list).
• Error correction. This is the generation by a receiving system of messages that inform the sender
that specific packets were damaged and must be retransmitted.
• Data compression. This is a mechanism for reducing the amount of data transmitted over a network
by eliminating redundant information.
• Data encryption. This is a mechanism for protecting the data transmitted over a network by
encrypting it using a key already known by the receiving system.
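As an illustration of the error-detection service mentioned above, the toy sketch below appends a one-byte checksum to a packet and verifies it at the receiving end; real protocols typically use stronger codes such as cyclic redundancy checks (CRCs).

# Toy error-detection service: append a one-byte checksum to a packet and
# verify it on receipt. Real protocols use stronger codes such as CRCs.

def add_checksum(payload: bytes) -> bytes:
    checksum = sum(payload) % 256
    return payload + bytes([checksum])

def verify(packet: bytes) -> bool:
    payload, received = packet[:-1], packet[-1]
    return sum(payload) % 256 == received

packet = add_checksum(b"hello, network")
print(verify(packet))                       # True: packet intact

damaged = bytes([packet[0] ^ 0x01]) + packet[1:]
print(verify(damaged))                      # False: the receiver would request retransmission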
In most cases, protocols are based on public standards developed by an independent committee, not a single
manufacturer or developer. These public standards ensure that different types of systems can use them
without incurring any obligation to a particular company. There are still a few protocols, however, that are
proprietary, having been developed by a single company and never released into the public domain.
One of the most important things to remember about networking is that every computer on a network uses
many different protocols during the communications process. The functions provided by the various
protocols are divided into the layers that make up the Open Systems Interconnection (OSI) reference model,
described in Lesson 2, later in this chapter. You might see references to Ethernet networks in books and
articles, for example. Ethernet is a protocol running on those networks, but it is not the only protocol running
on them. Ethernet is, however, the only protocol running at one particular layer (called the data-link layer).
Some layers, however, can have multiple protocols running on them simultaneously.
Protocol Interaction
The protocols operating at the various OSI layers are often referred to as a protocol stack. The protocols
running on a networked computer work together to provide all of the services required by a particular
application. Generally speaking, the services provided by the protocols are not redundant. If, for example, a
protocol at one layer provides a particular service, the protocols at the other layers do not provide exactly the
same service. Protocols at adjacent layers in the stack provide services to each other, depending on the
direction in which the data is flowing. As illustrated in the figure below, the data on a transmitting system
originates in an application at the top of the protocol stack and works its way down through the layers. Each
protocol provides a service to a protocol operating at the layer below it. At the bottom of the protocol stack
is the network medium itself, which carries the data to another computer on the network.
When the data arrives at its destination, the receiving computer performs the same procedure as the
transmitting computer, except in reverse. The data is passed up through the layers to the receiving application,
with each protocol providing an equivalent service to the protocol in the layer above it. For example, if a
protocol at layer three on the transmitting computer is responsible for encrypting data, the same protocol at
layer three of the receiving system is responsible for decrypting it. In this way, protocols at the various layers
in the transmitting system communicate with their equivalent protocols operating at the same layer in the
receiving system.
A LAN is a group of computers located in a relatively small area and connected by a common medium. Each
of the computers and other communicating devices on the LAN is called a node. A LAN is characterized by
three primary attributes: its topology, its medium, and its protocols. The topology is the pattern used to
connect the computers together. With a bus topology, a network cable connects each computer to the next
one, forming a chain. With a star topology, each of the computers is connected to a central nexus called a
hub or switch. A ring topology is essentially a bus network with the two ends joined together.
The network medium, as defined earlier, is the actual physical connection between the networked computers.
The topology and the medium used on a particular network are specified by the protocol operating at the
data-link layer of the OSI model, such as Ethernet or Token Ring. Ethernet, for example, supports several
different topologies and media. When you select one combination of topology and medium for a LAN, such
as unshielded twisted pair (UTP) cable in a star topology, you must (in most cases) use the same topology
and medium for all of the computers on that LAN. There are some hardware products that enable you to
connect computers to the same LAN with different media, but this is only true for closely related
technologies. You can't connect a bus Ethernet computer to a star Ethernet computer and have both systems
be part of the same LAN.
In the same way, all of the computers on a LAN must share common protocols. You can't connect an Ethernet
computer to a Token Ring computer on the same LAN, for example. The same is true for the protocols
operating at the other layers of the OSI model. If the systems on the LAN don't have common protocols at
every layer of the stack, communication among them is not possible.
In most cases, a LAN is confined to a room, a floor, or perhaps a building. To expand the network beyond
these limits, you can connect multiple LANs together using devices called routers. This forms an
internetwork, which is essentially a network of networks. A computer on one LAN can communicate with
the systems on another LAN because they are all interconnected. By connecting LANs in this way, you can
build an internetwork as large as you need. Many sources use the term network when describing a LAN, but
just as many use the same term when referring to an internetwork.
In many cases, an internetwork is composed of LANs in distant locations. To connect remote LANs, you use
a different type of network connection: a WAN connection. WAN connections can use telephone lines, radio
waves, or any one of many other technologies. WAN connections are usually point-to-point connections,
meaning that they connect only two systems. They are unlike LANs, which can connect many systems. An
example of a WAN connection would be a company with two offices in distant cities, each with its own LAN
and connected by a leased telephone line. Each end of the leased line is connected to a router and the routers
are connected to individual LANs. Any computer on either of the LANs can communicate with any one of
the other computers at the other end of the WAN link or with a computer on its own LAN.
Internetwork
In contrast to a LAN, an internetwork is a collection of LANs that are connected by routers. Broadcasts on
one LAN do not propagate across the routers to other LANs, but routers will forward packets that are
specifically addressed to devices on other interconnected LANs.
Each LAN of an internetwork is called a subnetwork. An internetwork protocol such as TCP/IP (Transmission
Control Protocol/Internet Protocol) or IPX (Internetwork Packet Exchange) is required to allow devices to
communicate across router-connected internetworks. The IP portion of the TCP/IP protocol suite provides
an addressing scheme for assigning each LAN in an internetwork a unique address. Routers then learn these
addresses. Packets traversing the internetwork must have an address that identifies a particular network and
a node on that network. Routers then forward the packets to the appropriate network.
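Python's standard ipaddress module can be used to illustrate how such an address identifies both a network and a node on that network; the addresses below are arbitrary documentation-style examples, not values taken from the text.

# How an IP address identifies both a network (subnetwork) and a node on it.
# The addresses below are arbitrary documentation-style examples.
import ipaddress

interface = ipaddress.ip_interface("192.0.2.45/24")

print("node address:   ", interface.ip)        # 192.0.2.45
print("network portion:", interface.network)   # 192.0.2.0/24
print("same LAN?       ", ipaddress.ip_address("192.0.2.99") in interface.network)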
In most cases, LANs use a shared network medium. The cable connecting the computers can carry one signal
at a time, and all of the systems take turns using it. This type of network is called a baseband network. To
make a baseband network practical for many computers to share, the data transmitted by each system is
broken into separate units called packets. If you were to tap into the cable of a baseband network and examine
the signals as they flow by, you would see a succession of packets generated by various systems and destined
for various systems. When your computer transmits an e-mail message, for example, it might be broken into
many packets, and the computer transmits each packet separately. If another computer on the network also
wants to transmit, it would also send one packet at a time. When all of the packets constituting a particular
transmission reach their destination, the receiving computer reassembles them back into your original e-mail.
This is the basis for a packet-switching network.
The alternative to a packet-switching network is a circuit-switching network, in which the two systems
wanting to communicate establish a path through the network that connects them (called a circuit) before
they transmit any information. That circuit remains open throughout the life of the exchange and is broken
only when the two systems are finished communicating. This is an impractical solution for computers on a
baseband network, because two systems connected by a circuit could conceivably monopolize the network
medium for long periods of time, preventing other systems from communicating. Circuit switching is more
common in environments like the Public Switched Telephone Network (PSTN), in which the connection
between your telephone and that of the person you're calling remains open for the entire duration of the call.
To make circuit switching practical, telephone companies use broadband networks. A broadband network is
the opposite of a baseband network, in that it carries multiple signals in a single cable at the same time. One
broadband network that you probably use every day is the one operated by your local cable television
company. A cable TV service runs a single cable into a user's home, but that one cable carries the signals for
dozens of TV channels simultaneously, and often provides Internet access as well. If you have more than one
TV in your home, the fact that you can watch a different program on each proves that the one cable carries
multiple signals at the same time. Broadband technologies are almost never used for local area networking,
but they are becoming an increasingly popular solution for wide area networking.
When two computers communicate over a LAN, data typically travels in only one direction at a time, because
the baseband network used for most LANs supports only a single signal. This is called half-duplex
communication. By contrast, two systems that can communicate in both directions simultaneously are
operating in full-duplex mode. The most common example of a full-duplex network is, once again, the
telephone system. Both parties can speak simultaneously during a telephone call and each party can also hear
the other at the same time. An example of a half-duplex communication system is a two-way radio like a CB
radio, in which only one party can transmit at any one time, and each party must say "over" to signal that he
or she has finished talking and is switching from transmit to receive mode.
With the right equipment, full-duplex communication is possible on certain types of LANs. The first
requirement is a separate channel for traffic running in each direction. Whether this is possible depends on
the network medium. Coaxial cable, for example, contains a single conductor and a ground, so there is no
physical way that traffic could run in both directions, unless you were to install two cable runs for each
connection. Twisted pair cable, on the other hand, contains four separate wire pairs within a single sheath,
one of which is dedicated to incoming traffic and one to outgoing traffic. Networks that use this type of cable
can therefore theoretically operate in full-duplex mode, and some manufacturers are making Ethernet
equipment that makes this possible. Full-duplex Ethernet essentially doubles the throughput of the existing
network.
When a small network begins to grow, it is possible to connect LANs together in a haphazard manner for a
while. However, building a large enterprise network by connecting many LANs is a complex undertaking
that requires careful planning. One of the most common designs for a network of this type is a series of
segment LANs connected by a backbone LAN.
The term segment is sometimes used synonymously with LAN or network to refer to any collection of
networked computers, but in this context it refers to a LAN composed of user workstations and other end-
user devices, such as printers. An enterprise network would consist of many such LANs, all of which are
connected to another LAN called a backbone. The backbone exists primarily as a conduit that enables the
segments to communicate with each other. One common configuration for an office building with multiple
floors calls for a horizontal segment connecting all of the workstations on each floor and a backbone running
vertically from the top of the building to the bottom, connecting all of the segments.
This type of configuration increases the efficiency of the network by using the backbone to carry all of the
traffic going from one network to another. No packet has to traverse more than three LANs using this model.
By contrast, if you were to connect each of the horizontal segments to the adjacent segment, daisy chain
fashion, most of the internetwork packets would have to travel through many more segments to reach their
destinations, burdening the intermediate segments with through traffic for no good reason.
In many cases, the backbone network runs at a higher speed than the segments and may also use a different
type of network medium. For example, a typical network might use 10Base-T Ethernet, running at 10
megabits per second (Mbps) over copper UTP cable, for the segments, and it might use 100Base-FX Ethernet,
running at 100 Mbps over fiber optic cable, for the backbone. There are two reasons for using a different type
of network for the backbone. First, the backbone by definition must carry all of the internetwork traffic
generated by all of the segments, and a faster protocol can prevent the backbone from becoming a bottleneck.
Second, the backbone may have to span a much longer distance than the segments, and a network that uses
fiber optic cable can handle long distances better.
Computers can interact with each other on a network in different ways and fulfill different roles. There are
two primary networking models used to define this interaction, called client/server and peer-to-peer. On a
client/server network, certain computers act as servers and others act as clients. A server is simply a computer
(or more precisely, an application running on a computer) that provides a service to other computers. The
most basic network functions are the sharing of files and the sharing of printers; the machines that do this are
called file servers and print servers. There are many other types of servers as well: application servers, e-mail
servers, Web servers, database servers, and so on. A client is a computer that avails itself of the services
provided by servers.
NOTE: Although servers are often thought of as computers, they are actually applications. A single computer
can conceivably run several different server applications at the same time and, in most cases, perform client
operations as well.
At one time, it was common for computers to be limited to either client or server roles. Novell NetWare,
which was the most popular network operating system for many years, consists of a separate server operating
system and clients that run on DOS and Microsoft Windows workstations. The server computer functions
only as a server and the clients only as clients. The most popular network operating systems today, however,
include both client and server functions. All of the current versions of Windows (Windows 95, Windows 98,
Windows Me, Windows NT, Windows 2000, Windows XP, Windows Vista, and Windows 7), for
example, can function as both clients and servers. How to utilize each system is up to the network
administrator.
You can construct a client/server network by designating one or more of the networked computers as a server
and the rest as clients, even when all of the computers can perform both functions. In most cases, servers are
better equipped systems, and on a large network many administrators connect them to the backbone so that
all of the segments have equal access to them. A client/server network typically uses a directory service to
store information about the network and its users. Users log on to the directory service instead of logging on
to individual computers, and administrators can control access to the entire network using the directory
service as a central resource.
On a peer-to-peer network, every computer is an equal and functions as both a client and a server. This means
that any computer can share its resources with the network and access the shared resources on other
computers. You can therefore use any of the Windows versions mentioned earlier for this type of network,
but you cannot use a dedicated client/server operating system like NetWare. Peer-to-peer networks should
be generally limited to 10 or 15 nodes or fewer on a single LAN, because each system has to maintain its
own user accounts and other security settings.
There are many other topics with which anyone who wants to become a network designer, administrator, or
expert must be familiar; some of them are listed here:
❖ Client/Server Computing The predominant model for designing network operating systems and
applications for networks
❖ Data Communication Concepts The basics of how devices communicate with networks and other
links
❖ Communication Services Describes the technologies and services for building wide area networks
❖ Distributed Computer Networks Techniques for making information more accessible to users no
matter where they are on the network
❖ Distributed Object Computing Describes new object-oriented technologies for intranets and the
Internet
❖ Enterprise Networks Networks that span an entire organization and connect all of its information
resources
❖ Intranets and Extranets The new network paradigm, built with Internet standards and protocols
❖ Mobile Computing How to support users on the move
❖ Network Architecture The structural elements of network design
❖ Network Design and Construction How to build a network
❖ Network Management How to manage a network
❖ Network Operating Systems Operating systems designed with networks in mind
❖ Protocol Concepts How protocols work
❖ Security How to protect your network
❖ Servers Information about network server devices
❖ Transport Protocols and Services How two systems hold a “conversation” with one another to
communicate information and transmit data
❖ Virtual Networks How to build networks that emulate other networks
❖ Voice/Data Networks How networks can transmit both voice and data
❖ VPN (Virtual Private Network) How to build private wide area networks over the Internet
❖ WAN (Wide Area Network) How to build wide area networks
❖ Web Technologies and Concepts How Internet and Web technologies are changing networks
❖ Wireless Communications How to build networks without wires
Although the OSI reference model is still widely used for teaching, the OSI protocol suite itself never
achieved widespread adoption, for several reasons:
• The standards process was relatively closed compared with the open standards process used by the
Internet Engineering Task Force (IETF) to develop the TCP/IP protocol suite.
• The model was overly complex. Some functions (such as connectionless communication) were
neglected, while others (such as error correction and flow control) were repeated at several layers.
• The growth of the Internet and TCP/IP—a simpler, real-world protocol model—pushed the OSI
reference model out.
The OSI reference model illustrates the networking process as divided into seven layers. This theoretical
construct makes it easier to learn and understand the concepts involved. At the top of the model is the
application that requires access to a resource on the network, and at the bottom is the network medium itself.
As data moves down through the layers of the model, the various protocols operating there prepare and
package it for transmission over the network. Once the data arrives at its destination, it moves up through the
layers on the receiving system, where the same protocols perform the same process in reverse.
The U.S. government tried to require compliance with the OSI reference model for U.S. government
networking solutions in the late 1980s by implementing standards called Government Open Systems
Interconnection Profiles (GOSIPs). This effort was abandoned in 1995, however, and now few real-world
implementations of OSI networking protocols exist outside of Europe.
In 1983, the International Organization for Standardization (ISO) and what is now the Telecommunication
Standardization Sector of the International Telecommunication Union (ITU-T) published a document called
"The Basic Reference Model for Open Systems Interconnection." The model described in that document
divides a computer's networking functions into seven layers. Originally, this seven-layer structure was to be
the model for a new protocol stack, but this never materialized in a commercial form. Instead, the OSI model
has come to be used with the existing network protocols as a teaching and reference tool.
The OSI reference model is best seen as an idealized model of the logical connections that must occur in
order for network communication to take place. Most protocol suites used in the real world, such as TCP/IP,
DECnet, and Systems Network Architecture (SNA), map somewhat loosely to the OSI reference model. The
OSI model is a good starting point for understanding how various protocols within a protocol suite function
and interact. The OSI reference model has seven logical layers, as shown in the following table.
Layer 7: Application
Layer 6: Presentation
Layer 5: Session
Layer 4: Transport
Layer 3: Network
Layer 2: Data-link
Layer 1: Physical
You can think of each layer as being logically connected to the same layer on a different computer on the
network. For example, the application layer on one machine communicates with the application layer on
another machine. But this communication is logical only; physical communication occurs when packets of
data are sent down from the application layer of the transmitting computer, encapsulated with header
information by each lower layer, and then put on the wire at the physical layer of the transmitting computer.
After traveling along the wire, the packets are picked up by the physical layer of the receiving computer,
passed up the seven layers while each layer strips off its associated header information, and then passed to
the application layer of the receiving computer, where the receiving application can process the data.
Most of the protocols commonly used today predate the OSI model, so they don't conform exactly to the
seven-layer structure. In most cases, single protocols combine the functions of two or more of the layers in
the model, and the boundaries between protocols often don't exactly conform to the model's layer boundaries.
However, the model remains an excellent tool for studying the networking process, and professionals
frequently make reference to functions and protocols associated with specific layers.
Data Encapsulation
The primary interaction between the protocols operating at the various layers of the OSI model takes the form
of each protocol adding headers (and in one case, a footer) to the information it receives from the layer above
it. For example, when an application generates a request for a network resource, it passes the request down
through the protocol stack. When the request reaches the transport layer, the transport layer protocol adds its
own header to the request. The header consists of fields containing information that is specific to the
functions of that protocol, and the original request becomes the data field, or payload, for the transport layer
protocol.
The transport layer protocol, after adding its header, passes the request down to the network layer. The
network layer protocol then adds its own header in front of the transport layer protocol's header. The original
request and the transport layer protocol header thus become the payload for the network layer protocol. This
entire construct then becomes the payload for the data-link layer protocol, which typically adds both a header
and a footer. The final product, a packet, is then ready for transmission over the network. After the packet
reaches its destination, the entire process is repeated in reverse. The protocol at each successive layer of the
stack (traveling upward this time) processes and removes the header applied by its equivalent protocol in the
transmitting system. When the process is complete, the original request arrives at the application for which
it was destined in the same condition as when it was generated.
Data encapsulation is the process by which the protocols add their headers and footer to the request generated
by the application (see Figure 1.8). The procedure is functionally similar to the process of preparing a letter
for mailing. The application request is the letter itself, and the protocol headers represent the process of
putting the letter into an envelope, addressing it, stamping it, and mailing it.
Figure: As data travels down through the protocol stack, it is encapsulated by the protocols operating at
the various layers
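As a rough illustration of this layering, the following Python sketch wraps an application request in made-up transport, network, and data-link "headers" on the way down and strips them again on the way up; the field names are invented and do not correspond to any real protocol format.

# A minimal sketch of data encapsulation, assuming invented header strings;
# real protocol headers (TCP, IP, Ethernet) are binary and far more detailed.
def encapsulate(request: bytes) -> bytes:
    transport_segment = b"TRANSPORT-HDR|" + request                 # transport layer adds its header
    network_datagram = b"NETWORK-HDR|" + transport_segment          # network layer adds its header
    frame = b"DATALINK-HDR|" + network_datagram + b"|DATALINK-FTR"  # data-link layer adds header and footer
    return frame                                                    # ready for the physical layer

def decapsulate(frame: bytes) -> bytes:
    # Each layer on the receiving side strips the header applied by its peer on the sending side.
    network_datagram = frame.removeprefix(b"DATALINK-HDR|").removesuffix(b"|DATALINK-FTR")
    transport_segment = network_datagram.removeprefix(b"NETWORK-HDR|")
    return transport_segment.removeprefix(b"TRANSPORT-HDR|")

original = b"GET /index.html"
assert decapsulate(encapsulate(original)) == original               # the request arrives unchanged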
The functions of the OSI model layers are covered in the following sections.
The Physical Layer
The physical layer, at the bottom of the OSI model, is, as the name implies, the layer that defines the nature
of the network's hardware elements, such as what medium the network uses, how the network is installed,
and the nature of the signals used to transmit binary data over the network. The physical layer also defines
what kind of network interface adapter must be installed in each computer and what kind of hubs (if any) to
use. Physical layer options include various types of copper or fiber optic cable, as well as many different
wireless solutions. In the case of a LAN, the physical layer specifications are directly related to the data-link
layer protocol used by the network. When you select a data-link layer protocol, you must use one of the
physical layer specifications supported by that protocol.
For example, Ethernet is a data-link layer protocol that supports several different physical layer options. You
can use one of two types of coaxial cable with Ethernet, any one of several types of twisted pair cable, or
fiber optic cable. The specifications for each of these options include a great deal of detailed information
about the physical layer requirements, such as the exact type of cable and connectors to use, how long the
cables can be, how many hubs you can have, and many other factors. These specific conditions are required
for the protocol to function properly. A cable segment that is too long, for example, can prevent an Ethernet
system from detecting packet collisions. When the system can't detect errors, it can't correct them, and data
is lost.
Some aspects of the physical layer are defined in the data-link layer protocol standard, but others are defined
in separate specifications. One of the most commonly used physical layer specifications is the "Commercial
Building Telecommunications Cabling Standard," published jointly by the American National Standards
Institute (ANSI), the Electronics Industry Association (EIA), and the Telecommunications Industry
Association (TIA) as EIA/TIA 568A. This document includes detailed specifications for installing cables for
data networks in a commercial environment, including the required distances from sources of electromagnetic
interference and other general cabling policies. In most cases, large network cabling jobs are outsourced to
specialized contractors, and any such contractor you hire for a LAN cabling job should be intimately familiar
with EIA/TIA 568A and other such documents, including your local building codes.
The other communications element found at the physical layer is the particular type of signaling used to
transmit data over the network medium. For copper-based cables, these signals are electrical charges. For
fiber optic cables, the signals are pulses of light. Other types of network media can use radio frequencies,
infrared pulses, and other types of signals. In addition to the physical nature of the signals, the physical layer
dictates the signaling scheme that the computers use. The signaling scheme is the pattern of electrical charges
or light pulses used to encode the binary data generated by the upper layers. Ethernet systems use a signaling
scheme called Manchester encoding, and Token Ring systems use a scheme called Differential Manchester.
The Data-Link Layer
The protocol at the data-link layer is the conduit between the computer's networking hardware and its
networking software. Network layer protocols pass their outgoing data down to the data-link layer protocol,
which packages it for transmission over the network. When the other systems on the network receive the
transmitted data, their data-link layer protocols process it and pass it up to the network layer.
When it comes to designing and building a LAN, the data-link layer protocol you choose is the single most
important factor in determining what hardware you buy and how you install it. To implement a data-link
layer protocol, you need the following hardware and software:
• Network interface adapters (When an adapter is a discrete card plugged into a bus slot, it is referred
to as a NIC.)
• Network adapter drivers
• Network cables (or other media) and ancillary connecting hardware
• Network hubs (in some cases)
Network interface adapters and hubs are both designed for specific data-link layer protocols and are not
interchangeable with products for other protocols. Some network cables are protocol-specific, whereas others
can be used with various protocols.
By far the most popular data-link layer LAN protocol in use today (and throughout the history of the LAN)
is Ethernet. Token Ring is a distant second, followed by other protocols such as the Fiber Distributed Data
Interface (FDDI). Data-link layer protocol specifications typically include the following three basic elements:
• A format for the frame (that is, the header and footer applied to the network layer data before
transmission)
• A mechanism for controlling access to the network medium
• One or more physical layer specifications for use with the protocol
Frame Format
The data-link layer protocol encapsulates the data it receives from the network layer protocol by adding a
header and footer to it, forming what is called a frame (see Figure 1.9). Using the mail analogy given earlier,
the header and footer are the equivalent of the envelope that you use to mail a letter. They contain the address
of the system sending the packet and the address of its destination system. For LAN protocols like Ethernet
and Token Ring, these addresses are 6-byte hexadecimal strings assigned to network interface adapters by
their manufacturers. The addresses are referred to as hardware addresses or Media Access Control (MAC)
addresses, to distinguish them from addresses used at other layers of the OSI model.
NOTE : Protocols operating at different layers of the OSI model have different names for the data structures
they create by adding a header to the data they receive from the layer above. What the data-link layer protocol
calls a "frame," for example, the network layer protocol calls a "datagram." "Packet" is a more generic term
for the unit of data created at any layer.
It is important to understand that data-link layer protocols are limited to communications with computers on
the same LAN. The hardware address in the header always refers to a computer on the same local network,
even if the data's ultimate destination is a system on another network.
The other primary functions of the data-link layer frame are to identify the network layer protocol that
generated the data in the packet and to provide error detection information. A computer can use multiple
protocols at the network layer, and the data-link layer protocol frame usually contains a code that specifies
which network layer protocol generated the data in the packet so that the data-link layer protocol on the
receiving system can pass the data to the appropriate protocol at its own network layer.
The error detection information takes the form of a cyclic redundancy check (CRC) computation performed
on the payload data by the transmitting system, the results of which are included in the frame's footer. On
receiving the packet, the receiving system performs the same computation and compares its results to those
in the footer. If the results match, the data has been transmitted successfully. If they do not, the receiving
system assumes that the packet is corrupted and discards it.
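As a sketch of this check, the following Python fragment uses the standard library's binascii.crc32 as a stand-in for the frame check sequence; the exact polynomial and field sizes vary by protocol, so this is illustrative only.

import binascii

def build_frame_footer(payload: bytes) -> int:
    # Transmitting system computes a CRC over the payload and places it in the footer.
    return binascii.crc32(payload)

def frame_is_valid(payload: bytes, footer_crc: int) -> bool:
    # Receiving system repeats the computation and compares the result with the footer value.
    return binascii.crc32(payload) == footer_crc

crc = build_frame_footer(b"some network-layer datagram")
assert frame_is_valid(b"some network-layer datagram", crc)        # results match: transmission succeeded
assert not frame_is_valid(b"some network-layer dAtagram", crc)    # mismatch: the frame would be discarded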
The computers on a LAN usually share a common half-duplex network medium, making it possible for two
computers to transmit data at the same time. When this happens, a packet collision is said to occur, and the
data in both packets is lost. One of the main functions of the data-link layer protocol in this type of network
is to provide a mechanism that regulates access to the network medium. This mechanism, called a MAC
mechanism, provides each computer with an equal opportunity to transmit its data while minimizing the
occurrence of packet collisions.
The MAC mechanism is one of the primary defining characteristics of a data-link layer protocol. Ethernet
uses a MAC mechanism called Carrier Sense Multiple Access with Collision Detection (CSMA/CD). Several
other protocols, including Token Ring, use a scheme called token passing.
NOTE : For more information about specific MAC mechanisms, see "Data-Link Layer Protocols."
The data-link layer protocols used on LANs often support more than one network medium, and the protocol
standard includes one or more physical layer specifications. The data-link layer and physical layer are closely
related, because the characteristics of the network medium have a profound effect on the functionality of the
protocol's MAC mechanism. For this reason, you can say that the data-link layer protocols used on a LAN
also encompass the functions of the physical layer. There are other data-link layer protocols used for WAN
links, however, such as the Serial Line Internet Protocol (SLIP) and the Point-to-Point Protocol (PPP), which
do not include physical layer information.
The Network Layer
At first glance, the network layer seems to duplicate some of the functions of the data-link layer. This is not
so, however, because network layer protocols are responsible for end-to-end communications, whereas data-
link layer protocols function only on the local LAN. To say that network layer protocols are responsible for
end-to-end communications means that the network layer protocol is responsible for a packet's complete
journey from the system that created it to its final destination. Depending on the nature of the network, the
source and destination systems can be on the same LAN, on different LANs in the same building, or on LANs
separated by thousands of miles. When you connect to a server on the Internet, for example, the packets your
computer creates may pass through dozens of different networks before reaching their destination. The data-
link layer protocol may change many times to accommodate those dozens of networks, but the network layer
protocol remains intact throughout the trip.
The Internet Protocol (IP) is the cornerstone of the Transmission Control Protocol/Internet Protocol (TCP/IP)
suite, and the most commonly used network layer protocol. Novell NetWare has its own network layer
protocol, called Internetwork Packet Exchange (IPX), and the NetBIOS Enhanced User Interface (NetBEUI)
protocol is often used on small Microsoft Windows networks. Most of the functions attributed to the network
layer are based on the capabilities of IP.
Like the data-link layer protocol, the network layer protocol applies a header to the data it receives from the
layer above it, as shown in Figure 1.10. The unit of data created by the network layer protocol, which consists
of the transport layer data plus the network header, is called a datagram. The functions associated with the
network layer are discussed in the following sections.
Addressing
The network layer protocol header contains source address and destination address fields, just as the data-link
layer protocol does. However, in this case, the destination address is the packet's final destination, which may
be different from the data-link layer protocol header's destination address. For example, when you type the
address of a Web site in your browser, the packet your system generates contains the address of the Web
server as its network layer destination, but the data-link layer destination is the address of the router on your
LAN that provides you with Internet access.
IP has its own addressing system that is completely separate from the data-link layer addresses. Each
computer on an IP network is assigned a 32-bit IP address by an administrator or an automated service. This
address identifies both the network on which the computer is located and the computer itself, so that one
address can uniquely identify any computer. IPX, on the other hand, uses a separate address to identify the
network on which a computer is located and uses the hardware address to identify a computer on the network.
NetBEUI identifies computers using a NetBIOS name assigned to each system during its installation.
Fragmenting
Network layer datagrams may have to pass through many different networks on the way to their destinations,
and the data-link layer protocols that the datagrams encounter can have different properties and limitations.
One of these limitations is the maximum packet size permitted by the protocol. For example, Token Ring
frames can be as large as 4,500 bytes, but Ethernet frames can carry no more than 1,500 bytes of data. When a large datagram
that originated on a Token Ring network is routed to an Ethernet network, the network layer protocol must
split it into pieces no larger than 1500 bytes each. This process is called fragmentation.
During the fragmentation process, the network layer protocol splits the datagram into as many pieces as
necessary to make them small enough for transmission using the data-link layer protocol. Each fragment
becomes a packet in itself that continues the journey to the network layer destination. The fragments are
not reassembled until all of the packets that make up the datagram reach the destination system. In some cases,
datagrams may be fragmented, and their fragments may be fragmented again repeatedly before reaching their
destination.
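The following sketch shows only the splitting step, ignoring the offset and flag fields that a real network layer protocol such as IP records so that the destination can reassemble the fragments.

def fragment(datagram: bytes, max_size: int = 1500) -> list:
    # Split the datagram into pieces no larger than the data-link protocol can carry.
    return [datagram[i:i + max_size] for i in range(0, len(datagram), max_size)]

# A 4,500-byte datagram from a Token Ring network becomes three Ethernet-sized fragments.
fragments = fragment(b"x" * 4500, max_size=1500)
assert len(fragments) == 3
assert b"".join(fragments) == b"x" * 4500     # reassembly happens only at the destination system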
Routing
Routing is the process of directing a datagram from its source, through an internetwork, and to its ultimate
destination using the most efficient path possible. On complex internetworks such as the Internet or a large
corporate network, there are often many possible routes to a given destination. Network designers
deliberately create redundant links so that, if one of the routers on the network fails, traffic can still find its
way to its destination.
Routers connect the individual LANs that make up an internetwork. The function of a router is to receive
incoming traffic from one network and transmit it to a particular destination on another network. There are
two types of systems involved in internetwork communications, end systems and intermediate systems. End
systems are the source of individual packets and also their ultimate destination. Routers are the intermediate
systems. End systems utilize all seven layers of the OSI model, whereas packets arriving at intermediate
systems rise only as high as the network layer. The router then processes the packet and sends it back down
through the stack to be transmitted to its next destination, as shown in Figure 1.11.
To properly direct a packet to its destination, routers maintain information about the network in tables that
they store in memory. The information in the tables can be either supplied manually by an administrator or
gathered automatically from other routers using specialized routing protocols. A typical routing table entry
specifies the address of another network and the router that packets should use to get to that network. Routing
table entries also contain a metric that indicates the comparative efficiency of that particular route. If there
are two or more routes to a particular destination, the router selects the more efficient one and passes the
datagram down to the data-link layer for transmission to the router specified in the table entry. On large
networks, routing can be an extraordinarily complicated process, but most of it is automated and invisible to
the average user.
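A routing table can be pictured as a mapping from destination networks to next-hop routers and metrics. The sketch below uses invented addresses and metrics purely to show how the lowest-metric route is selected.

# Hypothetical routing table: destination network -> list of (next-hop router, metric) pairs.
# Real tables hold prefixes, interfaces, and routes learned by protocols such as RIP or OSPF.
routing_table = {
    "192.168.2.0": [("10.0.0.1", 2), ("10.0.0.2", 5)],   # two possible routes with different metrics
    "192.168.3.0": [("10.0.0.2", 1)],
}

def select_route(destination_network: str):
    routes = routing_table.get(destination_network)
    if not routes:
        return None                                       # no route: the packet cannot be forwarded
    return min(routes, key=lambda route: route[1])        # prefer the most efficient (lowest-metric) route

assert select_route("192.168.2.0") == ("10.0.0.1", 2)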
Just as the data-link layer header specifies the network layer protocol that generates the data that it transports,
the network layer header identifies the transport layer protocol from which it receives the data that it carries.
With this information, the receiving system can pass the incoming datagrams to the correct transport layer
protocol.
The Transport Layer
The transport layer protocols provide services that complement those provided by the network layer. The
transport and network layer protocols used to transmit data are often thought of as a matched pair, as seen in
the case of TCP/IP. These protocols include TCP, which runs at the transport layer, plus IP, which runs at
the network layer. Most protocol suites provide two or more transport layer protocols that provide different
levels of service. The alternative to TCP is the User Datagram Protocol (UDP). The IPX protocol suite also
provides a choice between transport layer protocols, including the NetWare Core Protocol (NCP) and
Sequenced Packet Exchange (SPX).
The difference between the protocols provided at the transport layer within a particular protocol suite is that
some are connection-oriented and some are connectionless. A connection-oriented protocol is one in which
the two communicating systems exchange messages to establish a connection before they transmit any
application data. This ensures that the systems are both active and ready to exchange messages. TCP, for
example, is a connection-oriented protocol. When you use a Web browser to connect to an Internet server,
the browser and the server first perform what is known as a three-way handshake to establish the connection.
Only then does the browser transmit the address of the desired Web page to the server. When the data
transmission is completed, the systems perform a similar handshake to break down the connection.
Connection-oriented protocols also provide additional services such as packet acknowledgment, data
segmentation, flow control, and end-to-end error detection and correction. Systems generally use this type of
protocol to transmit relatively large amounts of information that can't tolerate even a single bit error, such as
data or program files, and these services ensure the correct transmission of the data. Because of these services,
connection-oriented protocols are often said to be reliable, used here as a technical term that refers to the fact
that each packet transmitted using the protocol has been acknowledged by the recipient and verified as having
been transmitted without error. The drawback of this type of protocol is that it greatly increases the amount
of control data exchanged by the two systems. In addition to the extra messages needed to establish and
terminate the connection, the header applied by a connection-oriented protocol is substantially larger than
that of a connectionless one. In the case of the TCP/IP transport layer protocols, TCP uses a 20-byte header
and UDP uses only an 8-byte one.
A connectionless protocol is one in which there is no preliminary communication between the two systems
before the transmission of application data. The sender simply transmits its data to the destination without
knowing if the system is ready to receive data, or even if the system exists. Systems generally use
connectionless protocols, such as UDP, for brief transactions that consist only of single requests and
responses. The response from the recipient functions as a tacit acknowledgment of the transmission.
NOTE : Connection-oriented and connectionless protocols are not limited to the transport layer. Network
layer protocols are usually connectionless, for example, because they leave the reliability functions to the
transport layer.
Transport layer protocols typically provide a path through the layers above, just as network and data-link
layer protocols do. The headers for both TCP and UDP, for example, include port numbers that identify the
applications from which the packet originated and for which it is destined.
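The port numbers are ordinary header fields. The UDP header, for instance, is just four 16-bit fields (source port, destination port, length, checksum); the sketch below packs and unpacks such a header with Python's struct module, leaving the checksum at zero, which a real implementation would compute over the data.

import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = 8 + len(payload)                        # header (8 bytes) plus payload, in bytes
    return struct.pack("!HHHH", src_port, dst_port, length, 0)   # checksum left at zero in this sketch

header = build_udp_header(49152, 53, b"dns query bytes")
src, dst, length, checksum = struct.unpack("!HHHH", header)
assert (src, dst, length) == (49152, 53, 23)         # the receiver reads the ports to find the right application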
The Session Layer
The session layer is the point at which the actual protocols used on networks begin to differ substantially
from the OSI model. There are no separate session layer protocols as there are at the lower layers. Session
layer functions are instead integrated into other protocols that also include presentation and application layer
functions. The transport, network, data-link, and physical layers are concerned with the proper transmission
of data across the network, but the protocols at the session layer and above are not involved in that part of
the communications process. The session layer provides 22 services, many of which are concerned with the
ways in which networked systems exchange information. The most important of these services are dialog
control and dialog separation.
The exchange of information between two systems on the network is called a dialog, and dialog control is
the selection of a mode that the systems will use to exchange messages. When the dialog is begun, the systems
can choose one of two modes, two-way alternate (TWA) mode or two-way simultaneous (TWS) mode. In
TWA mode, the two systems exchange a data token, and only the computer in possession of the token is
permitted to transmit data. This eliminates problems caused by messages that cross in transit. TWS mode is
more complex, because there is no token and both systems can transmit at any time, even simultaneously.
Dialog separation is the process of creating checkpoints in a data stream that enable communicating systems
to synchronize their functions. The difficulty of checkpointing depends on whether the dialog is using TWA
or TWS mode. Systems involved in a TWA dialog perform minor synchronizations that require only a single
exchange of checkpointing messages, but systems using a TWS dialog perform a major synchronization
using a major/activity token.
The Presentation Layer
There is only one function found at the presentation layer: the translation of syntax between different systems.
In some cases, computers communicating over a network use different syntaxes, and the presentation layer
enables them to negotiate a common syntax for the network communications. When the communicating
systems establish a connection at the presentation layer, they exchange messages containing information
about the syntaxes they have in common, and together they choose the syntax they will use during the session.
Both of the systems involved in the connection have an abstract syntax, which is their native form of
communication. Computers running on different platforms can have different abstract syntaxes. During the
negotiation process, the systems choose a transfer syntax, which is an alternative syntax that the two have in
common. The transmitting system converts its abstract syntax to the transfer syntax, and after the
transmission, the receiving system converts the transfer syntax to its own abstract syntax. When called for,
the systems can select a transfer syntax that provides additional services, such as data compression or
encryption.
The Application Layer
The application layer is the entrance point that programs use to access the OSI model and utilize network
resources. Most application layer protocols provide services that programs use to access the network, such
as the Simple Mail Transfer Protocol (SMTP), which most e-mail programs use to send e-mail messages. In
some cases, as with File Transfer Protocol (FTP), the application layer protocol is a program in itself.
Application layer protocols often include the session and presentation layer functions. As a result, a typical
protocol stack consists of four separate protocols that run at the application, transport, network, and data-link
layers.
CHAPTER THREE
A LAN is a network that is located in a relatively small area, such as a department or building. Technically,
a LAN consists of a shared medium to which workstations attach and communicate with one another using
broadcast methods. With broadcasting, any device on the LAN can transmit a message that all other devices
on the LAN can listen to. Only the device to which the message is addressed actually accepts and processes the message. See
“LAN (Local Area Network)” for more details.
A bridge extends a LAN to create a much larger broadcast domain, but the bridge filters traffic between
segments by dropping frames that are not addressed to devices on the other connected segment. Alternatively,
several LANs can be interconnected at a centrally located hub device that handles the delivery of all inter-
LAN traffic. See “Bridges and Bridging” and “Hubs/Concentrators/MAUs” for more information. Cables
branch from a central internetwork hub to departmental hubs. This system of interconnecting cables and hub
is often referred to as the backbone network. See “Backbone Networks” for more information.
The two most popular LAN technologies are Ethernet and token ring. See “Ethernet” and “Token Ring
Network” for more details.
A WAN connects an organization’s remote offices over public and private data communication channels. In
the not too distant past, there were only a few choices for connecting remote offices. You could connect them
with slow dial-up modems or with dedicated leased lines. Dedicated lines can provide high throughput but
can be expensive since the price increases with distance.
Network Topologies
A network topology describes the configuration of a network (how the network components are connected
together). There are FOUR main topologies.
Star: The star topology uses a central hub through which all components are connected. In a computer
network, the central hub is the host computer, and at the end of each connection is a terminal.
A star network uses a significant amount of cable (each terminal is wired back to the central hub, even if two
terminals are side by side several hundred meters away from the host). All routing decisions are made by the
central hub, and all other workstations can be simple.
An advantage of the star topology is that failure of one terminal does not affect any other terminal; however,
failure of the central hub affects all terminals. This type of topology is frequently used to connect
terminals to a large time-sharing host computer.
Ring: The ring topology connects workstations in a closed loop. Each terminal is connected to TWO other
terminals (the next and the previous), with the last terminal being connected to the first. Data is transmitted
around the ring in one direction only, each station passing on the data to the next station till it reaches its
destination.
Faulty workstations can be isolated from the ring. When the workstation is powered on, it connects itself into
the ring. When power is off, it disconnects itself from the ring and allows the information to bypass the
workstation.
Information travels around the ring from one workstation to the next. Each packet of data sent on the ring is
prefixed by the address of the station to which it is being sent. When a packet of data arrives, the
workstation checks to see if the packet address is the same as its own. If it is, it grabs the data in the packet.
If the packet does not belong to it, it sends the packet to the next workstation in the ring.
Ring systems use 4 pair cables (separate send/receive). The common implementation of this topology is
token ring. A break in the ring causes the entire network to fail. Individual workstations can be isolated from
the ring.
Bus: The bus topology connects workstations using a single cable. Each workstation is connected to the next
workstation in a point to point fashion. All workstations connect to the same cable.
If one workstation goes faulty, all workstations are affected. Workstations share the same cable for the
sending and receiving of information. The cabling costs of bus systems are the lowest of all the different
topologies. Each end of the cable is terminated using a special terminator. The common implementation of
this topology is Ethernet. A message transmitted by one workstation is heard by all the other workstations.
Mesh: The mesh topology connects all computers to each other. The cable requirements are high, but there
are redundant paths built in. Any failure of one computer allows all others to continue, as they have alternative
paths to other computers.
Mesh topologies are used for critical connections between host computers (typically telephone exchanges). Alternate
paths allow each computer to balance the load to other computer systems in the network by using more than
one of the connection paths available.
Network Protocols
This section describes the protocols used in different network topologies. Remember that a protocol defines
the rules for sending data from one point to another.
Carrier Sense Multiple Access with Collision Detection (CSMA/CD): This protocol is commonly used in
bus (Ethernet) implementations. Multiple access refers to the fact that in bus systems, each station has access
to the common cable. Carrier sense refers to the fact that each station listens to see if no other station is
transmitting before sending data. Collision detection refers to the principle of listening to see if other stations
are transmitting whilst we are transmitting.
In bus systems, all stations have access to the same cable medium. It is therefore possible that a station may
already be transmitting when another station wants to transmit. Rule 1 is that a station must listen to determine
if another station is transmitting before initiating a transmission. If the network is busy, then the station must
back off and wait a random interval before trying again.
Rule 2 is that a station, which is transmitting, must monitor the network to see if another station has begun
transmission. This is a collision, and if this occurs, both stations must back off and retry after a random time
interval. As it takes a finite time for signals to travel down the cable, it is possible for more than one station
to think that the network is free and both grab it at the same time. CSMA/CD models what happens in the
real world. People involved in group conversation tend to obey much the same behaviour.
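The two rules can be modelled very loosely in Python: a shared flag stands in for the cable, collisions are simulated with a random draw, and the back-off is a random sleep. Real Ethernet adapters do all of this electrically, so treat this only as a sketch of the logic.

import random
import time

medium_busy = False          # stands in for "a carrier is present on the shared cable"

def transmit(station: str, max_attempts: int = 5) -> bool:
    global medium_busy
    for attempt in range(max_attempts):
        if medium_busy:                                   # Rule 1: listen before transmitting
            time.sleep(random.uniform(0.0, 0.01))         # busy: back off a random interval and retry
            continue
        medium_busy = True                                # start transmitting
        collision = random.random() < 0.1                 # Rule 2: monitor for another transmitter
        medium_busy = False
        if collision:
            time.sleep(random.uniform(0.0, 0.01) * (attempt + 1))   # both stations back off and retry
            continue
        return True                                       # frame sent successfully
    return False                                          # give up after repeated collisions

print(transmit("Station A"))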
Token Ring: This protocol is widely used in ring networks for controlling station access to the ring. A short
message (called a token) is circulated around the ring, being passed from station to station (it originates from
a controller or master station which inserts it onto the ring).
A station that wants to transmit waits for the token to arrive. When the token arrives, the station changes it
from a token to a connector message and appends its own message on the end. This is then placed on the outgoing side of the
ring.
Each station passes on received tokens if they have nothing to transmit. They monitor connector messages to
see if the message is addressed to them. If connector messages are addressed to them, they copy the message,
modify it to signify its receipt, and then send it on around the ring. Connector messages that are not addressed
to them are passed directly onto the next station in the ring. When the connector message travels full circle
and arrives at the original sending station, it checks the message to see if it has been received. It then discards
the message and replaces it with a token.
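A short, purely illustrative model of the token's circulation is given below; station names are invented, and the "connector message" of the text is represented as a small dictionary.

# Illustrative token-passing model: stations sit in a ring and may transmit only when holding the token.
stations = ["A", "B", "C", "D"]
pending = {"B": ("D", "hello from B")}            # station B wants to send a message to station D

frame = {"type": "token"}
for _ in range(2 * len(stations)):                # walk the ring a couple of times
    for station in stations:
        if frame["type"] == "token" and station in pending:
            dest, data = pending.pop(station)     # change the token into a connector message
            frame = {"type": "message", "src": station, "dst": dest, "data": data, "received": False}
        elif frame["type"] == "message":
            if frame["dst"] == station:
                frame["received"] = True          # copy the data and mark the message as received
            elif frame["src"] == station and frame["received"]:
                frame = {"type": "token"}         # sender sees its message came back; reinsert the token
print(frame)                                      # the token is circulating again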
Polling: This protocol uses a central station to monitor other stations in the network. It is used predominantly
in star networks.
The master station sends a message to each slave station in turn. If the slave station has information to send,
it passes it to the master station. The master station then sends the information to the desired slave station, or
keeps it if the information is for itself.
Slave stations are very simple and have limited functionality. Polling is appropriate for industrial and process
control situations. A more detailed view of protocols is given later.
Analogue Signals: Analogue signals are what we encounter every day of our life. Speech is an analogue
signal, and varies in amplitude (volume) and frequency (pitch). The main characteristics of analogue signals
are:
Amplitude: This is the strength of the signal. It can be expressed in a number of different ways (for example,
as volts or decibels). The higher the amplitude, the stronger (louder) the signal.
Frequency: This is the rate of change the signal undergoes every second, expressed in Hertz (Hz), or cycles
per second. A 30Hz signal changes thirty times a second. In speech, we refer to it as the number of vibrations
per second, as the air that we expel out of our mouth pushes the surrounding air at a specific rate.
One cycle is measured from a point on the wave to the next identical point. The number of cycles per second
determines the frequency of the signal.
Digital Signals: Digital signals are the language of modern day computers. Digital signals normally comprise
only two states. These are expressed as ON or OFF, 1 or 0 respectively.
The following is a discussion on the three main types of transmission circuits, simplex, half duplex and full
duplex.
Simplex: Data in a simplex channel is always one way. Simplex channels are not often used because it is not
possible to send back error or control signals to the transmit end. An example of a simplex channel in a
computer system is the interface between the keyboard and the computer, in that key codes need only be sent
one way from the keyboard to the computer system.
Half Duplex: A half duplex channel can send and receive, but not at the same time. It's like a one-lane bridge
where two-way traffic must give way in order to cross. Only one end transmits at a time; the other end
receives.
Full Duplex: Data can travel in both directions simultaneously. There is no need to switch from transmit to
receive mode as in half duplex. It's like a two-lane bridge on a two-lane highway.
Data may be transmitted between two points in two different ways. Let's consider sending 8 bits of digital
data (1 byte).
Parallel: Each bit uses a separate wire, so all eight bits travel side by side.
To transfer data on a parallel link, a separate line is used as a clock signal. This serves to inform the receiver
when data is available. In addition, another line may be used by the receiver to inform the sender that the
data has been used, and its ready for the next data.
Serial: Each bit is sent over a single wire, one after the other.
No signal lines are used to convey clock (timing information). There are two ways in which timing
information is encoded with the signal so that the sender and receiver are synchronised (working on the same
data at the same time). If no clock information were sent, the receiver would misinterpret the arriving data
(for example, bits would be lost if the receiver sampled too slowly or too quickly).
Parallel transmission is obviously faster, in that all bits are sent at the same time, whereas serial transmission
is slower, because only one bit can be sent at a time. Parallel transmission is very costly for anything except
short links.
There are two main methods of sending data, asynchronous and synchronous.
Asynchronous Serial Transmission: Because no signal lines are used to convey clock (timing) information,
this method groups data together into a sequence of bits (five - eight), then prefixes them with a start bit and
a stop bit.
Start and stop bits were originally introduced for the old electromechanical teletypewriters. These used
motors driving cams, which actuated solenoids that sampled the signal at specific time intervals. The motors
took a while to get up to speed, so prefixing the character with a start bit gave the motors time to reach speed
and provided a reference point for the start of the first data bit.
At the end of the character sequence, a stop bit was used to allow the motors and cams to return to their rest
position. It was also needed to fill in time in case the character was an end of line, when the teletypewriter
would need to return to the beginning of a new line. Without the stop bit, the machine could not complete
this before the next character arrived.
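A start/stop framing sketch in Python follows, assuming the common convention of one start bit, eight data bits sent least significant bit first, and one stop bit.

def frame_byte(value: int) -> list:
    # One start bit (0), eight data bits least-significant-bit first, one stop bit (1).
    data_bits = [(value >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1]

def unframe(bits: list) -> int:
    assert bits[0] == 0 and bits[-1] == 1, "framing error"
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

framed = frame_byte(ord("A"))        # ten line states on the wire carry one 8-bit character
assert unframe(framed) == ord("A")   # the two extra bits per byte are pure framing overhead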
Synchronous Serial Transmission: In synchronous transmission, the line idle state is changed to a known
character sequence (7E), which is used to synchronise the receiver to the sender. The start and stop bits are
removed, and each character is combined with others into a data packet. The data packet is prefixed with a
header field, and suffixed with a trailer field which includes a checksum value (used by the receiver to check
for errors in sending).
The header field is used to convey address information (sender and receiver), packet type and control data.
The data field contains the user's data (if it does not fit in a single packet, it is split across multiple numbered
packets) or control data. Generally, it has a fixed size. The trailer field contains checksum information which the
receiver uses to check whether the packet was corrupted during transmission.
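A simplified synchronous-style packet can be sketched as header + data + trailer, with the trailer carrying a checksum that the receiver recomputes. The field layout below is invented for illustration; real bit-oriented protocols such as HDLC define their own formats.

import binascii
import struct

def build_packet(sender: int, receiver: int, data: bytes) -> bytes:
    header = struct.pack("!BBH", sender, receiver, len(data))     # addresses and data length
    trailer = struct.pack("!I", binascii.crc32(header + data))    # checksum over header and data
    return header + data + trailer

def check_packet(packet: bytes) -> bytes:
    header, body, trailer = packet[:4], packet[4:-4], packet[-4:]
    (checksum,) = struct.unpack("!I", trailer)
    assert binascii.crc32(header + body) == checksum, "packet corrupted during transmission"
    return body

packet = build_packet(sender=1, receiver=2, data=b"user data")
assert check_packet(packet) == b"user data"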
Channel: A channel is a portion of the communications medium allocated to the sender and receiver for
conveying information between them. The communications medium is often subdivided into a number of
separate paths, each of which is used by a sender and receiver for communication purposes.
Baud Rate: Baud rate is the reciprocal of the duration of the shortest signal element (a measure of the number of line changes
which occur every second). For a binary signal of 20Hz, this is equivalent to 20 baud (there are 20 changes
per second).
Bits Per Second: This is an expression of the number of bits per second. Where a binary signal is being used,
this is the same as the baud rate. When the signal is changed to another form, it will not be equal to the baud
rate, as each line change can represent more than one bit (either two or four bits).
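The relationship can be stated directly: bits per second equals the baud rate multiplied by the number of bits each line change represents, as the small sketch below shows.

def bits_per_second(baud_rate: int, bits_per_change: int = 1) -> int:
    # For a plain binary signal each line change carries one bit, so bps equals baud.
    return baud_rate * bits_per_change

assert bits_per_second(20) == 20          # binary signalling: 20 baud carries 20 bps
assert bits_per_second(2400, 4) == 9600   # four bits per line change: 2,400 baud carries 9,600 bps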
Bandwidth: Bandwidth is the frequency range of a channel, measured as the difference between the highest
and lowest frequencies that the channel supports. The maximum transmission speed is dependent upon the
available bandwidth. The larger the bandwidth, the higher the transmission speed.
Protocols: A protocol is a set of rules which governs how data is sent from one point to another. In data
communications, there are widely accepted protocols for sending data. Both the sender and receiver must use
the same protocol when communicating. By convention, the least significant bit is transmitted first.
Asynchronous protocols: Asynchronous systems send data bytes between the sender and receiver. Each
data byte is preceded with a start bit, and suffixed with a stop bit. These extra bits serve to synchronise the
receiver with the sender.
Transmission of these extra bits (2 per byte) reduces data throughput. Synchronisation is achieved for each
character only. When the sender has no data to transmit, the line is idle and the sender and receiver are NOT
in synchronisation. Asynchronous protocols are suited for low speed data communications.
Synchronous protocols: Synchronous protocols involve sending timing information along with the data
bytes, so that the receiver can remain in synchronisation with the sender. When the sender has no data to
transmit, the sender transmits idle flags (a sequence of alternating 0's and 1's) to maintain sender/receiver
synchronisation. Data bytes are packaged into small chunks called packets, with address fields being added
at the front (header) and checksums at the rear of the packet. There are TWO main types of synchronous
protocols used today: character-oriented (BISYNC) and bit-oriented (HDLC).
Public communication carriers offer a number of different circuits to clients. The following section discusses
some of the common circuits available.
Switched Dial-Up Lines: The most common circuits provided by public communication carriers are dial-up
telephone circuits. Subscribers send routing information (i.e., the dialled number) to the network, which
connects them to the receiver, and then follow this with the information itself (speech).
Switched circuits are not permanent. They exist only for the duration of the connection, and are switched by
the public network (it connects the circuits). Switched dial-up lines are not generally suited to data
transmission, but are used heavily for some types of services (e.g., bulletin boards). Using a modem, a user
can dial up a network provider over the phone line and connect to the Internet. At present, speeds up to
56 Kbps are possible over standard dial-up telephone circuits.
Datel (data over dial-up telephone circuit): The public communications carrier provides a dial-up line and
modem for the user. The line may be used for speech or data, but not at the same time. The circuit is non-
permanent and is switched. The user dials the number, and when the other end modem replies, flicks a switch
which connects the modem and disconnects the telephone. It is suited to low speed, intermittent, low volume
data requirements. Datel ran at 300 bits per second and is no longer offered.
Leased Lines: A leased line is a permanent non-switched end to end connection. Data is sent from one end
to the other. There is no requirement to send routing information along with the data. Leased lines provide
an instant guaranteed method of delivery. They are suited to high volume, high speed data requirements. The
cost of the line (which is leased per month), is offset against that of toll or other rental charges. In addition,
the leased line offers a significantly higher data transmission rate than the datel circuit.
Very high speeds can be achieved on leased lines. The cost varies, and goes up according to the capacity
(speed in bits per second) that the customer requires.
Packet Switched Circuits: Connection is made to the public communications carrier packet network. This
is a special network which connects users who send data grouped in packets. Packet technology is suited
to medium speed, medium volume data requirements. It offers cheaper cost than the datel circuit, but for
large volumes, is more expensive than the leased circuit. Special hardware and software is required to
packetize the data before transmission, and depacketize the data on arrival. This adds to the cost (though this
equipment can be leased). Packet switched circuits exist for the duration of the call.
ISDN: ISDN refers to a new form of network which integrates voice, data and image in a digitised form. It
offers the customer a single interface to support integrated voice and data traffic. Rather than using separate
lines for voice (PABX system) and data (leased lines), ISDN uses a single digital line to accommodate these
requirements.
The basic circuit provided in ISDN is the 2B+D circuit. This is two B channels (each of 64 Kbit/s) and one D
channel (of 16 Kbit/s). The D channel is normally used for supervisory and signalling purposes, whilst the B
circuits are used for voice and data. In New Zealand, users pay a connection fee and for the amount of data
they transfer, a time charge is also applied. ISDN is great for transferring a reasonable amount of data in a
short time. It is a dial-up-on-demand service. Many radio stations use ISDN to transfer songs, and
companies use it for tasks such as dialling up a mail server at another site periodically during the day
to transfer email. ISDN is not suited for permanent 24-hour connections. It becomes too costly if used for
extended periods. A leased line is a far better option if permanent connection is required.
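A quick, illustrative calculation shows why ISDN suits short, bursty transfers: bonding both B channels gives 2 x 64 = 128 Kbit/s, compared with 56 Kbit/s for a dial-up modem. The 5 MB file size below is an arbitrary example.

def transfer_seconds(size_bytes: int, rate_kbits: int) -> float:
    # Time to move a file at a given line rate, ignoring call setup and protocol overhead.
    return (size_bytes * 8) / (rate_kbits * 1000)

song = 5 * 1024 * 1024                           # a 5 MB file, purely for illustration
print(round(transfer_seconds(song, 56), 1))      # roughly 749 seconds over a 56 Kbps modem
print(round(transfer_seconds(song, 128), 1))     # roughly 328 seconds over a 2B (128 Kbps) ISDN call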
This section guides you through a variety of topics related to network design and construction. If you are
expanding or upgrading an existing network or building a new network, it is important to be aware of new
technologies that can help you build efficient and “future-proof” networks (assuming that is possible). New
network designs are developed to improve bandwidth and traffic flow, something that is certainly needed as
more and more users run collaborative network and Web applications that introduce large amounts of traffic
to networks. This section jumps around a bit in order to cover the new network technologies, but you are
encouraged to refer to the other referenced sections to continue your research.
To accommodate these users, the following should be considered as part of your network design. Note that
these issues are elaborated on in this topic, as well as elsewhere in the book.
❖ Consider one cabling system that supports voice, video, and data instead of maintaining
separate telephone lines, data trunks, and video feeds.
❖ Provide bandwidth and traffic controls so real-time traffic (live voice and video) can be delivered
without delay.
❖ Support a single protocol—TCP/IP—and support Web technologies, intranets, and extranets
(business-to-business networks over the Internet).
❖ Upgrade from slower shared networks to switched networks that reduce contention.
❖ Implement VLAN (virtual LAN) schemes in which users and servers can be anywhere on the
network and configured into any VLAN.
❖ Implement schemes that reduce the burden on routers, or implement new router switches that
support high-speed internetworking.
❖ Centralize servers for better management, security, and data protection, rather than keep servers in
departments.
❖ Support advanced services such as a global directory service and a distributed file system.
This section evaluates traditional network designs. A single Ethernet LAN provides a single broadcast
domain in which multiple users share the same network topology. This scheme works well as long as the
number of users remains small. Adding more users increases the number of attempts to access the shared
medium. Collisions may occur, reducing network performance.
To regain performance, you split the network with a bridge or router. Now, fewer users are contending for
the shared media in each segment, but they can still communicate across the bridge or router. Broadcasts are
contained within each segment, thus traffic meant for one segment does not propagate to other segments.
Reducing contention and containing broadcasts are key design goals.
As networks grow, more bridges may be added, but routers are a better choice because they give
administrators more control over network traffic and provide security barriers between networks. Figure N-
15(c) illustrates a set of router-connected network segments. However, traffic flowing from the segment on
the left crosses the intermediate network segment and adds unnecessary traffic to it.
To avoid this, a high-speed backbone network is installed, as shown in Figure N-15(d). Now, traffic flowing
from one network to another crosses a router to the backbone. The backbone is dedicated to handling
internetwork traffic. The simplest backbones are FDDI (Fiber Distributed Data Interface) and Fast Ethernet
networks running at 100 Mbits/sec.
Hierarchical Wiring
The backbone may be either distributed or collapsed, as shown in Figure N-16. A distributed backbone is a
physical cable that snakes through an entire building or across a campus. Subnetworks are attached to this
cable. A collapsed backbone is a bus or a silicon-based circuit that exists on the backplane of a wiring hub.
Each workgroup hub is then connected to the central hub with a dedicated cable.
Networks with centralized cores require structured, hierarchical wiring systems. The section “TIA/EIA
Structured Cabling Standards” describes such a system. It provides a standard that specifies the type of cable
and the hierarchy of the cabling system.
Hierarchical cabling and network designs provide better troubleshooting and fault isolation. They also
improve traffic flow and may help avoid congestion. Users in the same group can communicate with one
another through the same hub without sending traffic all the way to the core hub, but the core hub provides
a connection point to every other point in the network.
Centralizing Services
With the introduction of a high-speed backbone, it makes sense to attach network servers and peripherals to
the backbone. Network administrators can then physically locate servers in a central location where they are
easier to manage. Server centers (or server farms) can be staffed full time and have backup power supplies
and fire-extinguishing systems.
It is better to leave some servers attached to the LAN segment of the users that access the servers. This keeps
network traffic local. But if users on other subnets access a server often, it should be moved to the backbone.
Otherwise, traffic for server requests will move across the backbone and into other subnets, adding additional
traffic to those subnets.
Switching Models
Increasing bandwidth requirements are driving the design and construction of networks today. Internetwork
traffic has increased. The old 80/20 rule, in which 80 percent of the traffic stayed local, is now reversed.
Now, 80 percent of the traffic crosses the internetwork. Typically, users are accessing servers across
internetwork boundaries, and router-connected networks are failing to provide the performance requirements
demanded by users.
One way to gain performance is to reduce the number of routers on a network. Normally, removing routers
would mean that more users would be contending for the network and that broadcasts would propagate to
more of the network. Switching technologies can reduce these problems while helping to reduce router
dependence.
A switch is a multiport box in which each port on the box is essentially its own network segment. The switch
can quickly bridge any two ports together so devices on those ports become part of the same broadcast
network. A hub with multiple computers can also be attached to a switch port. The best performance is
obtained when a single computer is attached to the port, since that computer is then the only contender for
the port. Switches are often called layer 2 (data link layer) technologies while routing is a layer 3 (network
layer) technology. Switches may be installed in workgroups to increase the performance of links between
shared hubs in those workgroups or links to the internetwork routers.
By building a large switched network and eliminating routers, you create a flat network topology. Note that
no routers exist on this network (at least not yet in this discussion). Basically, any workstation can connect
with any other workstation without going through a router. This is good for access and performance, but for
security reasons it may not be good that any user can reach every station.
In addition, there are practical limitations to the size of a flat network. If class C Internet addressing is used,
only 254 usable host addresses are available on the network. Additional networks with their own class C
Internet addresses will be required to support additional stations, and these networks must be interconnected
with routers.
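The limit follows from the address arithmetic: a class C network leaves 8 bits for the host portion, and two of the 256 combinations (the network address and the broadcast address) cannot be assigned to devices.

def usable_hosts(host_bits: int) -> int:
    # All-zeros (network address) and all-ones (broadcast address) are reserved.
    return 2 ** host_bits - 2

assert usable_hosts(8) == 254    # a single class C network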
The advantage of LANs is that they limit network broadcasts to a specific group of workstations, but if a flat,
switched network is created, this advantage is lost. Administrators can, however, create VLANs or ELANs
(emulated LANs) by using hardware or software techniques to restore the functionality of LANs to the
switched network. There are a number of ways to create ELANs or VLANs, as discussed under “LANE
(LAN Emulation)” and “VLAN (Virtual LAN).”
Once VLANs or ELANs are created, routing is required to forward packets from one LAN to the next. A
number of interesting routing solutions are being devised to handle this, as discussed in the following
sections.
Routing is necessary with VLANs to contain broadcasts and add security. Several solutions have been
proposed. The usual technique is to avoid router hops as much as possible, following the mantra “route once,
switch many.” Alternatively, new hardware-based router switches running on Gigabit Ethernet networks
eliminate the problem altogether, as discussed later in this topic under “Router Switching.”
Since the early 1990s, ATM (Asynchronous Transfer Mode) has been considered the ultimate network
architecture. It was assumed that at some point, every workstation would sport an ATM network adapter
rather than an Ethernet or token ring adapter. That has not occurred, and it probably will not occur in the near
future. Still, many organizations have installed ATM backbone switches because of their ability to handle
network traffic at high speeds.
The only problem with this scheme is that ATM is a virtual circuit-based, connection-oriented, cell-based
networking scheme while Ethernet (assuming that is the predominant LAN scheme) is a connectionless
frame-based scheme. A number of overlay schemes have been developed to allow “legacy” Ethernet LANs
to be connected to ATM backbones.
The usual method is to use layer 3 routing algorithms to discover paths through the network, then set up layer
2 virtual circuits through the ATM fabric that can deliver datagrams to the destination without going through
a router. The technique is often called shortcut routing. Assuming IP is your internetwork routing protocol,
you can refer to “IP over ATM” for additional information.
One of the problems with these techniques is knowing when to route and when to switch the traffic at layer
2. If a transmission is long, it makes sense to switch it. If it is short, it may be more efficient just to route it.
However, traffic at layer 3 does not explicitly identify itself as a long stream that might be worth switching,
so the layer 3 protocol must identify a stream, usually by inspecting the contents of datagrams. A reservation
protocol may also be used. Techniques for detecting flow include 3Com FastIP, Ipsilon IP Switching, and
other techniques discussed under “IP Switching.”
All of these extra protocols and techniques add complexity to network construction and management. Many
vendors and network administrators want to simply improve what they already have. This is what router
switching is all about, as discussed under “Router Switching” below.
Gigabit Ethernet
Gigabit Ethernet has already made a name for itself as a high-speed alternative to ATM. One of its main
attractions is its compatibility with legacy Ethernet networks, allowing it to avoid some of the overlay
schemes required to connect legacy LANs with ATM backbones.
Gigabit Ethernet can even exceed the performance of ATM because it uses the same frame size as legacy
Ethernet. ATM breaks frames up into small cells, and each cell adds overhead that reduces throughput. In
addition, Gigabit Ethernet can provide some of the QoS (Quality of Service) features that ATM provides
through integrated bandwidth management and the RSVP (Resource Reservation Protocol). Gigabit Ethernet
allows organizations to scale up their existing Ethernet networks. Some administrators have reconsidered
their ATM plans with the emergence of Gigabit Ethernet, RSVP, and IP Multicast. See “Gigabit Ethernet”
for more details.
Router Switching
Along with Gigabit Ethernet, several vendors have created router switches that can perform layer 3 routing
at the same speed as layer 2 switching. These routers use new hardware-based circuitry implemented in
ASICs (application-specific integrated circuits) that can perform routing at multigigabit speeds. Older routers
rely on a single shared CPU (central processing unit) to inspect and forward packets. Note that router switches
still implement CPUs to run normal routing protocols such as RIP (Routing Information Protocol) and OSPF
(Open Shortest Path First).
Rapid City Communications (now part of Bay Networks) developed a family of router switches that provide
packet-relay capabilities with the control of IP routing. The devices provide the bandwidth to accommodate
increased volumes of internetwork traffic and support high-speed Gigabit Ethernet. Its f1200 has a 15-
Gbit/sec shared memory switch fabric with over 7-Gbit/sec throughput. The device will forward 7 million
packets/sec whether routing, switching, or doing both. A traditional router will forward 500,000 packets per
second. At these speeds, most network managers can build switched networks with any routing configuration
necessary.
Switched Topology
A switch is a multiport data link layer (OSI Reference Model Layer 2) device. A switch "learns" MAC
addresses and stores them in an internal lookup table. Temporary, switched paths are created between the
frame's originator and its intended recipient, and the frames are forwarded along that temporary path.
The typical LAN with a switched topology features multiple connections to a switching hub. Each port, and
the device it connects to, has its own dedicated bandwidth. Although switches originally forwarded frames
based upon the MAC address, technological advances are rapidly changing this. Switches are available today
that can switch cells (a fixed-length, Layer 2 data-bearing structure). Switches can also be triggered by Layer
3 protocols, IP addresses, or even physical ports on the switching hub. Switches can improve the performance
of a LAN in two important ways. First, they increase the aggregate bandwidth available throughout that
network. For example, a switched Ethernet hub with 8 ports contains 8 separate collision domains of 10Mbps
each, for an aggregate of 80Mbps of bandwidth.
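A minimal sketch of the MAC "learning" behavior described above, assuming a frame is represented only by its source MAC, destination MAC, and ingress port; real switches implement this in hardware and also age out stale entries and handle VLANs.

class LearningSwitch:
    """Toy layer 2 switch: learn source MACs, then forward or flood frames."""
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}                    # MAC address -> port number

    def receive(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port      # learn: sender is reachable via in_port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # known destination: forward on one port
        # unknown destination: flood out every port except the one the frame came in on
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=8)
print(sw.receive("aa:aa", "bb:bb", in_port=1))  # bb:bb unknown, so the frame is flooded
print(sw.receive("bb:bb", "aa:aa", in_port=5))  # aa:aa was learned on port 1 -> [1]
# Each of the 8 ports is its own 10 Mbps collision domain: 8 x 10 = 80 Mbps aggregate.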
The second way that switches improve LAN performance is by reducing the number of devices that are
forced to share each segment of bandwidth. Each switch-delineated collision domain is inhabited by only
two devices: the networked device and the port on the switching hub to which it connects. These are the only
two devices that can compete for the 10Mbps of bandwidth on their segment. In networks that do not utilize
a contention-based media access method, such as Token Ring or FDDI, the tokens circulate among a much
smaller number of networked machines. One area for concern with large switched implementations is that
switches do not isolate broadcasts. They bolster performance solely by segmenting collision, not broadcast,
domains. Excessive broadcast traffic can significantly and adversely impact LAN performance.
These four basic topologies are the building blocks of local area networking. They can be combined,
extended, and implemented in a kaleidoscopic array of ways. The right topology for your LAN is the one
that is best suited to your clients' particular network performance requirements. More likely than not, this
ideal topology will be some combination of the basic topologies.
Complex Topologies
Complex topologies are extensions and/or combinations of basic physical topologies. Basic topologies, by
themselves, are adequate for only very small LANs. The scaleability of the basic topologies is extremely
limited. Complex topologies are formed from these building blocks to achieve a custom-fitted, scaleable
topology.
Daisy Chains
The simplest of the complex topologies is developed by serially interconnecting all the hubs of a network.
This is known as daisy chaining. This simple approach uses ports on existing hubs for interconnecting the
hubs. Thus, no incremental cost is incurred during the development of such a backbone.
Small LANs can be scaled upward by daisy-chaining hubs together. Daisy chains are easily built and don't
require any special administrator skills. Daisy chains were, historically, the interconnection method of choice
for emerging, first-generation LANs.
The limits of daisy chaining can be discovered in a number of ways. LAN technology specifications, such as
802.3 Ethernet, dictate the maximum size of the LAN in terms of the number of hubs and/or repeaters that
may be strung together in sequence. The distance limitations imposed by the physical layer, multiplied by
the number of devices, dictate the maximum size of a LAN. This size is referred to as a maximum network
diameter. Scaling beyond this diameter will adversely affect the normal functioning of that LAN. Maximum
network diameters frequently limit the number of hubs that can be interconnected in this fashion. This is
particularly true of contemporary high-performance LANs, such as Fast Ethernet, that place strict limitations
on network diameter and the number of repeaters that can be strung together.
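The diameter arithmetic can be made concrete with a rough calculation. The sketch below assumes the classic 10 Mbps Ethernet "5-4-3" rule (at most five segments and four repeaters in any path) and treats the maximum diameter as simply the segment length times the number of segments; real 802.3 specifications add further timing constraints on top of this.

def max_network_diameter(segment_length_m, max_segments_in_path):
    """Longest allowed cable path between any two stations (rough estimate)."""
    return segment_length_m * max_segments_in_path

# Classic 10 Mbps "5-4-3" rule: up to 5 segments (4 repeaters) in any path.
print(max_network_diameter(500, 5))   # 10Base5, 500 m segments -> 2500 m diameter
print(max_network_diameter(100, 5))   # 10Base-T, 100 m segments -> 500 m diameter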
Daisy-chaining networks that use a contention-based media access method can become problematic long
before network diameter is compromised, however. Daisy chaining increases the number of connections, and
therefore the number of devices, on a LAN. It does not increase aggregate bandwidth or segment collision
domains. Daisy chaining simply increases the number of machines sharing the network's available
bandwidth. Too many devices competing for the same amount of bandwidth can create collisions and quickly
bring a LAN to its knees. This topology is best left to LANs with less than a handful of hubs and little, if any,
wide area networking.
Hierarchies
Hierarchical topologies consist of more than one layer of hubs. Each layer serves a different network function.
The bottom tier would be reserved for user station and server connectivity. Higher-level tiers provide
aggregation of the user-level tier. A hierarchical arrangement is best suited for medium- to large-sized LANs
that must be concerned with scaleability of the network and traffic aggregation.
Hierarchical Rings
Ring networks can be scaled up by interconnecting multiple rings in a hierarchical fashion. User station and
server connectivity can be provided by as many limited size rings as is necessary to provide the required level
of performance. A second-tier ring, either Token Ring or FDDI, can be used to interconnect all the user-level
rings and to provide aggregated access to the WAN.
Small ring LANs can be scaled by interconnecting multiple rings hierarchically. In this figure, 16Mbps Token
Ring (shown logically as a loop, rather than in its topologically correct star) is used to interconnect the user
stations and FDDI loops are used for the servers and backbone tier.
Hierarchical Stars
Star topologies, too, can be implemented in hierarchical arrangements of multiple stars. Hierarchical stars
can be implemented as a single collision domain or segmented into multiple collision domains using either
switches or bridges. A hierarchical star topology uses one tier for user and server connectivity and the second
tier as a backbone.
Hierarchical Combinations
Overall network performance can be enhanced by not force-fitting all the functional requirements of the LAN
into a single solution. Mixing multiple technologies is enabled by today's high-end switching hubs. New
topologies can be introduced by inserting the appropriate circuit board into the high-bandwidth backplane. A
hierarchical topology lends itself to such combination of topologies.
Topological variation can be an important way to optimize network performance for each of the various
functional areas of a LAN. LANs contain four distinct functional areas: station connectivity, server
connectivity, WAN connectivity, and backbone. Each may be best served by a different basic or complex
topology.
Station Connectivity
The primary function of most LANs is station connectivity. Station connectivity tends to have the least
stringent performance requirements of the LAN's functional areas. There are obvious exceptions to this, such
as CAD/CAM workstations, desktop videoconferencing, and so on. In general, compromises in the cost and
performance of this part of a LAN's technology and topology are less likely to adversely affect the network's
performance.
Providing connectivity to machines that have divergent network performance requirements may require the
use of multiple LAN technologies. Fortunately, many of today's hub manufacturers can support multiple
technologies from the same hub chassis. LANs provide basic connectivity to user stations and the peripherals
that inhabit them. Differences in the network performance requirements of user station equipment can
necessitate a mixed topology/technology solution.
Server Connectivity
Servers tend to be much more robust than user workstations. Servers tend to be a point of traffic aggregation
and must serve many clients and/or other servers. In the case of high-volume servers, this aggregation must
be designed into a LAN's topology; otherwise, clients and servers suffer degraded network performance.
Network connectivity to servers, typically, should also be more robust than station connectivity in terms of
available bandwidth and robustness of access method. LAN topologies can also be manipulated to
accommodate the robust network performance requirements of servers and server clusters. The "server farm"
is interconnected with a small FDDI loop, while the less-robust user stations are interconnected with Ethernet.
WAN Connectivity
A frequently overlooked aspect of a LAN's topology is its connection to the wide area network. In many
cases, WAN connectivity is provided by a single connection from the backbone to a router.
The LAN's connection to the router that provides WAN connectivity is a crucial link in a building's overall
LAN topology. Improper technology selection at this critical point can result in unacceptably deteriorated
levels of performance for all traffic entering or exiting the building. LAN technologies that use a contention-
based access method are highly inappropriate for this function.
Networks that support a high degree of WAN-to-LAN and LAN-to-WAN traffic benefit greatly from having
the most robust connection possible in this aspect of their overall topology. The technology selected should
be robust in terms of its nominal transmission rate and its access method. Contention-based technologies
should be avoided at all costs. The use of a contention-based media, even on a dedicated switched port, may
become problematic in high-usage networks. This is the bottleneck for all traffic coming into, and trying to
get out of, the building.
Backbone Connectivity
A LAN's backbone is the portion of its facilities used to interconnect all the hubs. A backbone can be
implemented in several topologies and with several different network components, as shown in Figure 5.13.
The LAN's backbone provides a critical function. It interconnects all the locally networked resources and, if
applicable, the WAN. This logical depiction of a backbone can be implemented in a wide variety of ways.
Determining which backbone topology is correct for your LAN is not easy. Some options are easier to
implement, very affordable, and easy to manage. Others can be more costly to acquire and operate. Another
important difference lies in the scaleability of the various backbone topologies. Some are easy to scale, up to
a point, and then require reinvestment to maintain acceptable levels of performance.
Each option must be examined individually, relative to your particular situation and requirements.
Serial Backbone
A serial backbone is nothing more than a series of hubs daisy-chained together. As described in the preceding
section, this topology is inappropriate for all but the smallest of networks.
The hubs interconnecting users and servers may be serially connected to each other to form a primitive
backbone. This, as previously mentioned, is what is known as daisy chaining.
Distributed Backbone
A distributed backbone is a form of hierarchical topology that can be built by installing a backbone hub in a
central location. A building's PBX room usually serves as the center of its wiring topology. Consequently, it
is the ideal location for a distributed backbone hub. Connections from this hub are distributed to other hubs
throughout the building.
A distributed backbone can be developed by centrally locating the backbone hub. Connections are distributed
from this hub to other hubs throughout the building. Unlike the serial backbone, this topology enables LANs
to span large buildings without compromising maximum network diameters.
Distributing the backbone in this fashion requires an understanding of the building's wire topology and
distance limitations of the various LAN media choices. In medium to large locations, the only viable option
for implementing a distributed backbone will likely be fiber-optic cabling.
Collapsed Backbone
A collapsed backbone topology features a centralized router that interconnects all the LAN segments in a
given building. The router effectively creates multiple collision and broadcast domains, thereby increasing
the performance of each of the LAN segments.
Routers operate at Layer 3 of the OSI Reference Model. They are incapable of operating as quickly as hubs.
Consequently, they can limit effective throughputs for any LAN traffic that originates on one LAN segment
but terminates on another.
Collapsed backbones also introduce a single point of failure in the LAN. This is not a fatal flaw. In fact,
many of the other topologies also introduce a single point of failure into the LAN. Nevertheless, it must be
considered when planning a network topology.
LAN segments can be interconnected by a router that functions as a collapsed backbone. This topology offers
centralized control over the network, but introduces delays and a single point of failure.
An important consideration in collapsed backbone topologies is that user communities are seldom
conveniently distributed throughout a building. It is more probable that multiple LAN segments will be
needed for any given community. It is equally probable that multiple segments will exist in close proximity.
Collapsed backbone topologies need to be carefully planned. Hastily or poorly constructed topologies will
have adverse effects on network performance.
Parallel Backbone
In some of the cases where collapsed backbones are untenable solutions, a modified version may prove ideal.
This modification is known as the parallel backbone. The reasons for installing a parallel backbone are many.
User communities may be widely dispersed throughout a building, some groups and/or applications may
have stringent network security requirements, or high network availability may be required. Regardless of
the reason, running parallel connections from a building's collapsed backbone router to the same telephone
closet enables supporting multiple segments from each closet.
The parallel backbone topology is a modification of the collapsed backbone. Multiple segments can be
supported in the same telephone closet or equipment room. This marginally increases the cost of the network,
but can increase the performance of each segment, and satisfy additional network criteria, like security.
Careful understanding of the performance requirements imposed by customers, stratified by LAN functional
area, is the key to developing the ideal topology for any given set of user requirements. The potential
combinations are limited only by one's imagination. Continued technological innovation will only increase
the topological variety available to network designers.
Miscellaneous Criteria
Numerous other criteria, both technical and financial, are also factors in the selection of a LAN topology.
The overall topology should be determined by customer performance requirements. These options should be
used to refine and/or temper topological design decisions.
Cost
It doesn't take much imagination to conjure up a network topology that can't be cost-justified. Even
large, well-funded network implementations have finite budgets. The implemented topology must balance
cost against satisfaction of existing user requirements.
Legacy Drag
The ideal topology may prove impossible to implement for a number of reasons. The physical wire and
distribution throughout a building may be inappropriate for the planned network. Rewiring might be cost
prohibitive. Similarly, if your company has an extensive financial commitment to legacy technologies, it
might not be feasible to implement an "ideal" network and topology. Lastly, the lack of adequate budgeting
can quickly scale back network plans.
These are valid reasons for tempering an idealistic topology. Therefore, they should be examined and factored
in before hardware is purchased.
Future Expectations
It is foolish to design a network without first considering what is likely to occur in the foreseeable future.
Innovations in network and computing technologies, changes in traffic volume and/or patterns, and a myriad
of other factors could greatly alter the users' expectations for network performance in the future. The network
and its topology must be flexible enough to accommodate expected future changes.
Summary
LAN topology is one of the most critical components of aggregate LAN performance. The four basic
topologies--switched, star, ring, and bus--can be implemented in a dizzying array of variations and
combinations. These combinations are not limited to just those that were presented in this chapter. Many of
today's LAN technologies lend themselves quite readily to creative arrangement and combinations. It is
important to understand the strengths and weaknesses of each topology relative to the LAN's desired
performance and underlying technologies. These must be balanced against the realities of a building's
physical layout, cable availability, cable paths, and even cable and wire types.
Ultimately, however, the successful topology is driven by the users' required performance levels and
tempered with other considerations like cost, expected future growth, and technology limitations. The biggest
challenge is translating the users' requirements into megabits per second (Mbps) and other network
performance metrics.
This section explains in detail the types of network cables used in computer networks. It covers the
specifications, standards, and features of coaxial cable, twisted-pair cable, and fiber optic
cable.
There are three primary types of cable used to build LANs: coaxial, twisted-pair, and fiber optic. Coaxial and
twisted-pair cables are copper-based and carry electrical signals, and fiber optic cables use glass or plastic
fibers to carry light signals.
Coaxial Cable
Coaxial cable is so named because it contains two conductors that share a common axis within the sheath. Unlike other two-conductor
cables, however, coaxial cable has one conductor inside the other, as illustrated in Figure C1. At the center
of the cable is the copper core that actually carries the electrical signals. The core can be solid copper or
braided strands of copper. Surrounding the core is a layer of insulation, and surrounding that is the second
conductor, which is typically made of braided copper mesh. This second conductor functions as the cable's
ground. Finally, the entire assembly is encased in an insulating sheath made of PVC or Teflon.
Coaxial cable consists of two conductors. The inner conductor is held inside an insulator with the other
conductor woven around it providing a shield. An insulating protective coating called a jacket covers the
outer conductor.
The outer shield protects the inner conductor from outside electrical signals. The distance between the outer
conductor (shield) and inner conductor plus the type of material used for insulating the inner conductor
determine the cable properties or impedance. Typical impedances for coaxial cables are 75 ohms for Cable
TV, and 50 ohms for Ethernet Thinnet and Thicknet. The excellent control of the impedance characteristics of
the cable allows higher data rates to be transferred than with twisted-pair cable.
CAUTION: The outer sheath—also called a casing—of electrical cables can be made of different types of
materials, and the sheath you use should depend on local building codes and the location of the cables in the
network's site. Cables that run through a building's air spaces (called plenums) usually must have a sheath
made of a material that doesn't generate toxic gases when it burns. Plenum cable costs more than standard
PVC-sheathed cable and is somewhat more difficult to install, but it's an important feature that should not
be overlooked when you are purchasing cable.
Figure C1: Coaxial cable consists of two electrical conductors sharing the same axis, with insulation in
between and encased in a protective sheath
This cable contains a conductor, insulator, braiding, and sheath. The sheath covers the braiding, the braiding
covers the insulation, and the insulation covers the conductor.
Sheath
This is the outer layer of the coaxial cable. It protects the cable from physical damage.
Braided shield
This shield protects signals from external interference and noise. This shield is built from the same metal that
is used to build the core.
Insulation
Insulation protects the core. It also keeps the core separate from the braided shield. Since both the core and
the braided shield use the same metal, without this layer, they will touch each other and create a short-circuit
in the wire.
Conductor
The conductor carries electromagnetic signals. Based on the conductor, a coaxial cable can be categorized into
two types: single-core coaxial cable and multi-core coaxial cable.
A single-core coaxial cable uses a single central metal (usually copper) conductor, while a multi-
core coaxial cable uses multiple thin strands of metal wires. The following image shows both types of cable.
Coaxial cables were not developed primarily for computer networks. These cables were developed for
general purposes, and they were in use even before computer networks came into existence. They are still widely
used today, even though their use in computer networks has been discontinued.
At the beginning of computer networking, when there were no dedicated media cables available for computer
networks, network administrators began using coaxial cables to build computer networks.
Because of their low cost and durability, coaxial cables were used in computer networking for nearly two
decades (the 1980s and 1990s). Coaxial cables are no longer used to build new computer networks.
Coaxial cables have been in use for the last four decades. During these years, based on several factors such
as the thickness of the sheath, the metal of the conductor, and the material used in insulation, hundreds of
specifications have been created to specify the characteristics of coaxial cables.
From these specifications, only a few were used in computer networks. The following terms are used to
describe and compare them.
i. Coaxial cables use an RG rating to identify the materials used in the shielding and conducting cores.
ii. RG stands for Radio Guide. Coaxial cable mainly carries radio frequencies in transmission.
iii. Impedance is the resistance that controls the signals. It is expressed in ohms.
iv. AWG stands for American Wire Gauge. It is used to measure the size of the core. The larger the
AWG number, the smaller the diameter of the core wire.
There are two types of coaxial cable that have been used in local area networking: RG-8, also known as thick
Ethernet, and RG-58, which is known as thin Ethernet. These two cables are similar in construction but differ
primarily in thickness (0.405 inches for RG-8 versus 0.195 inches for RG-58) and in the types of connectors
they use (N connectors for RG-8 and bayonet-Neill-Concelman [BNC] connectors for RG-58). Both cable
types are wired using the bus topology.
Because of their differences in size and flexibility, thick and thin Ethernet cables are installed differently. On
a thick Ethernet network, the RG-8 cable usually runs along a floor, and separate AUI cables run from the
RG-8 trunk to the network interface adapter in the computer. The RG-58 cable used for thin Ethernet
networks is thinner and much more flexible, so it's possible to run it right up to the computer's network
interface, where it attaches using a T fitting with a BNC connector to preserve the bus topology.
NOTE: Thick Ethernet and thin Ethernet are also known as 10Base5 and 10Base2, respectively. These
abbreviations indicate that the networks on which they are used run at 10 Mbps, use baseband transmissions,
and are limited to maximum cable segment lengths of 500 and 200 (actually 185) meters, respectively.
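The naming convention described in the note can be unpacked mechanically. The tiny parser below is purely illustrative; the function name is hypothetical, and it handles only the coaxial names mentioned here, with 10Base2 treated as the well-known 185-meter exception.

def describe_coax_ethernet(name):
    """Hypothetical helper: unpack names such as '10Base5' or '10Base2'."""
    speed, length_code = name.split("Base")
    # "2" suggests 200 m, but the actual 10Base2 segment limit is 185 m.
    segment_m = 185 if length_code == "2" else int(length_code) * 100
    return {
        "speed_mbps": int(speed),      # e.g. "10" -> 10 Mbps
        "signaling": "baseband",       # "Base" stands for baseband transmission
        "max_segment_m": segment_m,
    }

print(describe_coax_ethernet("10Base5"))  # {'speed_mbps': 10, 'signaling': 'baseband', 'max_segment_m': 500}
print(describe_coax_ethernet("10Base2"))  # {'speed_mbps': 10, 'signaling': 'baseband', 'max_segment_m': 185}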
Coaxial cable is used today for many applications, most noticeably cable television networks. It has fallen
out of favor as a LAN medium due to the bus topology's fault-tolerance problems and the size and relative
inflexibility of the cables, which make them difficult to install and maintain.
Twisted-Pair Cable
Modern grades of twisted-pair cable were developed primarily for computer networks. This cable is also known as Ethernet
cable. Almost all modern LAN computer networks use this cable.
This cable consists of color-coded pairs of insulated copper wires. Every two wires are twisted around each
other to form a pair. Usually, there are four pairs. Each pair has one solid-color wire and one striped wire.
The solid colors are blue, brown, green, and orange. In the striped wire, the solid color is combined with
white.
Based on how the pairs are wrapped in the plastic sheath, there are two types of twisted-pair cable: UTP and STP.
In the UTP (Unshielded twisted-pair) cable, all pairs are wrapped in a single plastic sheath.
In the STP (Shielded twisted-pair) cable, each pair is wrapped with an additional metal shield, then all pairs
are wrapped in a single outer plastic sheath.
• Both STP and UTP can transmit data at 10Mbps, 100Mbps, 1Gbps, and 10Gbps.
• Since the STP cable contains more materials, it is more expensive than the UTP cable.
• Both cables use the same RJ-45 (registered jack) modular connectors.
• Both cables can accommodate a maximum of 1024 nodes in each segment.
• The STP provides more noise and EMI resistance than the UTP cable.
• The maximum segment length for both cables is 100 meters or 328 feet.
Twisted-pair cable, wired in a star topology, is the most common type of network medium used in LANs
today. Most new LANs use UTP cable, but there is also a shielded twisted pair (STP) variety for use in
environments more prone to electromagnetic interference. Unshielded twisted pair cable contains eight
separate copper conductors, as opposed to the two used in coaxial cable. Each conductor is a separate
insulated wire, and the eight wires are arranged in four pairs, twisted at different rates. The twists prevent the
signals on the different wire pairs from interfering with each other (called crosstalk) and also provide
resistance to outside interference. The four wire pairs are then encased in a single sheath. The connectors
used for twisted-pair cables are called RJ45s; they are the same as the RJ11 connectors used on standard
telephone cables, except that they have eight electrical contacts instead of four or six.
Note: UTP cable has four separate wire pairs, each individually twisted, enclosed in a protective sheath
Twisted-pair cable has been used for telephone installations for decades; its adaptation to LAN use is
relatively recent. Twisted-pair cable has replaced coaxial cable in the data networking world because it has
several distinct advantages. First, because it contains eight separate wires, the cable is more flexible than the
more solidly constructed coaxial cable. This makes it easier to bend, which simplifies installation. The second
major advantage is that there are thousands of qualified telephone cable installers who can easily adapt to
installing LAN cables as well. In new construction, the same contractor often installs telephone and LAN
cables simultaneously.
Unshielded twisted pair cable comes in a variety of different grades, called categories by the Electronics
Industry Association (EIA) and the Telecommunications Industry Association (TIA), the combination being
referred to as EIA/TIA. These categories are listed in Table 2.1. The two most significant UTP grades for
LAN use are Category 3 and Category 5. Category 3 cable was designed for voice-grade telephone networks
and eventually came to be used for Ethernet. Category 3 cable is sufficient for 10-Mbps Ethernet networks
(where it is called 10Base-T), but it is generally not used for Fast Ethernet (except with special equipment).
If you have an existing Category 3 cable installation, you can use it to build a standard Ethernet network, but
virtually all new UTP cable installations today use at least Category 5e cable.
The degree of reduction in noise interference is determined specifically by the number of turns per foot.
Increasing the number of turns per foot reduces the noise interference. To further improve noise rejection, a
foil or wire braid "shield" is woven around the twisted pairs. This shield can be woven around individual
pairs or around a multi-pair conductor (several pairs).
Cables with a shield are called shielded twisted pair and are commonly abbreviated STP. Cables without a
shield are called unshielded twisted pair or UTP. Twisting the wires together results in a characteristic
impedance for the cable. A typical impedance for UTP is 100 ohm for Ethernet 10BaseT cable.
UTP or unshielded twisted pair cable is used on Ethernet 10BaseT and can also be used with Token Ring. It
uses the RJ line of connectors (RJ45, RJ11, etc.).
STP or shielded twisted pair is used with the traditional Token Ring cabling or ICS - IBM Cabling System.
It requires a custom connector. IBM STP (shielded twisted pair) has a characteristic impedance of 150 ohms.
CAUTION: Most Ethernet networks use only two of the four wire pairs in the UTP cable, one for
transmitting data and one for receiving it. However, this does not mean that you are free to utilize the other
two pairs for another application, such as voice telephone traffic. The presence of signals on the other two
wire pairs is almost certain to increase the amount of crosstalk on the cable, which could lead to signal
damage and data loss.
Category Use
1 Voice-grade telephone networks only; not for data transmissions
2 Voice-grade telephone networks, as well as IBM dumb-terminal connections to mainframe computers
3 Voice-grade telephone networks, 10-Mbps Ethernet, 4-Mbps Token Ring, 100Base-T4 Fast Ethernet, and 100Base-VG-AnyLAN
4 16-Mbps Token Ring networks
5 100Base-TX Fast Ethernet, Synchronous Optical Network (SONET), and Optical Carrier (OC-3) Asynchronous Transfer Mode (ATM)
5e 1000Base-T (Gigabit Ethernet) networks
TIP: When you install a network with a particular grade of cable, you must be aware of more than the
category of the cable. You must also be sure that all of the connectors, wall plates, and patch panels you use
are rated for the same category as the cable. A network connection is only as strong as its weakest link.
Category 5 UTP is suitable for 100Base-TX Fast Ethernet networks running at 100 Mbps, as well as for
slower protocols. The standard for Category 5e UTP cable was ratified in 1999 and is intended for use on
1000Base-T networks. 1000Base-T is the Gigabit Ethernet standard designed to run on UTP cable with 100-
meter segments, making it a suitable upgrade path from Fast Ethernet. The Category 5e standard does not
call for an increase in the frequency supported by the cable over that of Category 5 (both are 100 MHz), but
it does elevate the requirements for some of the other Category 5 testing parameters and adds other new
parameters. In addition to the officially ratified EIA/TIA categories, there are other UTP cable grades
available that have not yet been standardized. A series of numbered cable standards (called levels) from
Anixter, Inc. is currently being used as the basis for UTP cables that go beyond the performance levels of
Category 5e.
NOTE: There is a Fast Ethernet protocol called 100Base-T4 that is designed to use Category 3 UTP cable
and run at 100 Mbps. This is possible because 100Base-T4 uses all four wire pairs in the cable, whereas
100Base-TX uses only two pairs. See, "Data-Link Layer Protocols," for more information.
Shielded twisted pair cable is similar in construction to UTP, except that it has only two pairs of wires and it
also has additional foil or mesh shielding around each pair. The additional shielding in STP cable makes it
preferable to UTP in installations where electromagnetic interference is a problem, often due to the proximity
of electrical equipment. IBM, which developed the Token Ring protocol that originally used them,
standardized the various types of STP cable. STP networks use Type 1A cables for longer runs and Type 6A
cables for patch cables. Type 1A contains two pairs of 22 gauge solid wires with foil shielding, and Type 6A
contains two pairs of 26 gauge stranded wires with foil or mesh shielding. Token Ring STP networks also
use large, bulky connectors called IBM data connectors (IDCs). However, most Token Ring LANs today use
UTP cable.
The wires in twisted pair cabling are twisted together in pairs. Each pair consists of a wire used for the +ve
data signal and a wire used for the -ve data signal. Any noise that appears on one wire of the pair will also occur
on the other wire. Because the wires carry opposite polarities, they are 180 degrees out of phase (180 degrees
is the phasor definition of opposite polarity). When the noise appears on both wires, it cancels or nulls itself out
at the receiving end. Twisted pair cables are most effectively used in systems that use a balanced line method
of transmission: polar line coding (Manchester Encoding) as opposed to unipolar line coding (TTL logic).
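A small numerical sketch of why the balanced pair cancels noise: the same interference couples onto both wires, and the receiver looks only at the difference between them. The voltage levels and noise range below are arbitrary values chosen for illustration.

import random

def transmit_bit(bit, noise):
    """Send one bit over a balanced pair and recover it despite common-mode noise."""
    v = 1.0 if bit else -1.0            # arbitrary signal amplitude
    wire_plus = +v + noise              # the same noise couples onto both conductors
    wire_minus = -v + noise
    return wire_plus - wire_minus       # differential receiver: the noise cancels out

for bit in (1, 0, 1, 1):
    noise = random.uniform(-5.0, 5.0)   # a large common-mode noise spike
    received = transmit_bit(bit, noise)
    print(bit, received)                # received is +2.0 or -2.0, regardless of the noise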
NOTE: Token Ring networks, both UTP and STP, use the ring topology implemented in a MAU, even though
the cable is installed in the form of a star.
CATEGORY 1 CABLING
The lowest grade of unshielded twisted-pair (UTP) cabling. Category 1 cabling was designed to support
analog voice communication only. Category 1 cabling was used prior to 1983 for wiring installations of
analog telephone systems, otherwise known as the Plain Old Telephone Service (POTS). The electrical
characteristics of category 1 cabling make it unsuitable for networking purposes, and it is never installed as
premise wiring. Instead, all premise wiring must use either category 3 cabling, category 4 cabling, or category
5 cabling, with category 5 or enhanced category 5 cabling preferred for all new installations.
CATEGORY 2 CABLING
The second-lowest grade of unshielded twisted-pair (UTP) cabling. Category 2 cabling was designed to
support digital voice and data communication. Category 2 cabling was capable of data transmissions up to 4
Mbps. It was used primarily in the installation of premise wiring for legacy Token Ring networks from IBM.
The electrical characteristics of category 2 cabling make it unsuitable for most networking purposes today,
thus it is no longer installed as premise wiring. Instead, all premise wiring today must use only category 3
cabling, category 4 cabling, or category 5 cabling, with category 5 or enhanced category 5 cabling preferred
for all new installations.
CATEGORY 3 CABLING
The third-lowest grade of unshielded twisted-pair (UTP) cabling. Category 3 cabling was designed to support
digital voice and data communication at speeds up to 10 Mbps. It uses 24-gauge copper wires in a
configuration of four twisted-pairs enclosed in a protective insulating sheath. Category 3 cabling is the lowest
grade of UTP cabling that can support standard 10BaseT types of Ethernet networks and was often used for
legacy 4-Mbps Token Ring installations.
Category 3 cabling still has an installed base in older buildings where it is often cheaper to use the existing
cabling than to upgrade to newer grades. Installing higher-grade cabling for backbone cabling in vertical rises
and elevator shafts can extend the life of work areas that still use category 3 cabling. However, if greater
speeds are required at users’ workstations, the best solution is to rewire the work areas using category 5
cabling or enhanced category 5 cabling.
NOTE: The following table summarizes the electrical characteristics of category 3 cabling at different
frequencies, which correspond to different data transmission speeds. Note that attenuation increases with
frequency, while near-end crosstalk (NEXT) decreases.
CATEGORY 4 CABLING
The second-highest grade of unshielded twisted-pair (UTP) cabling. Category 4 cabling was designed to
support digital voice and data communication at speeds up to 16 Mbps. It uses 22-gauge or 24-gauge copper
wires in a configuration of four twisted-pairs enclosed in a protective insulating sheath. Category 4 cabling
can support standard 10BaseT types of Ethernet networks. It was also commonly used in older 16-Mbps
Token Ring installations.
Category 4 cabling still has an installed base in older buildings where it is often cheaper to use the existing
cabling than to upgrade to newer grades. Installing higher-grade cabling for backbone cabling in vertical rises
and elevator shafts can extend the life of work areas that still use category 4 cabling. However, if greater
speeds are required at users’ workstations, the best solution is to rewire the work areas using category 5
cabling or enhanced category 5 cabling.
NOTE: The following table summarizes the electrical characteristics of category 4 cabling at different
frequencies, which correspond to different data transmission speeds. Note that attenuation increases with
frequency, while near-end crosstalk (NEXT) decreases.
CATEGORY 5 CABLING
The highest and most commonly used grade of unshielded twisted-pair (UTP) cabling in networking today.
Category 5 cabling was designed to support digital voice and data communication at speeds up to 100 Mbps.
It uses 22-gauge or 24-gauge copper wires in a configuration of four twisted-pairs enclosed in a protective
insulating sheath. Category 5 cabling is the standard grade of UTP cabling for networks such as
❖ The standard 10BaseT variety of Ethernet
❖ Fast Ethernet networks of the 100BaseTX variety
Category 5 cabling is always recommended for new installations of premise cabling and for upgrading
existing premise wiring for higher-speed networks, because of its superior electrical characteristics. It is the
highest grade of UTP cabling currently recognized by the Electronic Industries Alliance (EIA) and
Telecommunications Industry Association (TIA), although proposals have been made for higher category 6
and category 7 grades. Many vendors offer an enhanced category 5 cabling grade with electrical
characteristics exceeding those of standard category 5. Enhanced category 5 cabling supports data
transmission up to frequencies of 350 MHz, and new standards are under development to allow even higher
data transmission frequencies.
NOTE: The following table summarizes the electrical characteristics of category 5 cabling at different
frequencies, which correspond to different data transmission speeds. Note that attenuation increases with
frequency, while near-end crosstalk (NEXT) decreases.
TIP: Category 5 cabling is usually referred to simply as “CAT5.” UTP cables using CAT5 should be no
more than 90 meters in length for typical Ethernet and Fast Ethernet installations, and patch cords should
be no longer than 10 meters.
This section explains how the twisted-pair cable works and how it is used to connect different networking
devices in a network.
The TIA/EIA specifies standards for the twisted-pair cable. The first standards were released in 1991, known
as TIA/EIA 568. Since then, these standards have been continually revised to cover the latest technologies
and developments of the transmission media.
The TIA/EIA 568 standard divides the twisted-pair cable into several categories. The following points summarize the most
common and popular categories of twisted-pair cable.
• Cat 1, 2, 3, 4, 5 are outdated and not used in any modern LAN network.
• Cat 7 is still a new technology and not commonly used.
• Cat 5e, 6, 6a are the commonly used twisted-pair cables.
Before we learn how to make a straight-through or cross-cable, let's understand how the UTP cable transfers
the data.
In a UTP cable, electronic signals are used to transmit and receive data. A UTP cable connects two nodes.
In a data transmission, one node sends data and the other node receives it. The NIC of the sender node
converts the data stream into electronic signals and places them on the copper wire of the UTP cable. The NIC
of the receiver node reads those signals from the wire and converts them back into a data stream.
Electronic signals or electric currents flow in a circuit. In an electric circuit, two wires are used. The first
wire is used to carry the electrons or current from the source to the load. The second wire is used to complete
the circuit between the load and the source. When electrons or current passes through the load, the load
performs its functions.
Let’s take a simple example. Suppose we have an LED bulb, two wires, and a battery. To light this bulb, we
connect it to the battery using the wires. We connect the positive side and negative side of the battery to
the bulb separately. The following image shows this example.
The same mechanism is used in UTP cable to transfer the data. Two wires of a UTP cable create an electric
circuit between nodes.
In this circuit:
• The NIC of the node that sends the data works as the source.
• The NIC of the node that receives the data works as the load.
• The first wire carries the current from the sender node to the receiver node.
• The second wire completes the electric circuit.
The following image shows how the electric circuit builds between the sender and receiver nodes.
Once the electric circuit is built, both the sender and receiver nodes use this electric circuit to transfer the
data. End devices, usually PCs or servers, store and process data in digital or binary format. To transfer binary
data through the electric circuit, NICs of both sender and receiver nodes use an encoding scheme.
An encoding scheme is a language that both NICs understand. In the encoding scheme, the sender node
changes the electrical signal over time, while the receiver node interprets those changes as binary data.
For example, to transfer a binary digit 0, NIC of the sender node drops the voltage to the lower voltage during
the middle of a 1/10,000,000th-of-a-second interval. NIC of the receiver node detects this change and
interprets it as a binary digit 0. Just like this, to transfer binary digit 1, NIC of the sender node raises the
voltage to the higher voltage. Current in an electric circuit always flows in one direction; from the source to
the load. For this reason, only the sender node (source) can send its data to the receiver node (load). If the
receiver node wants to send its own data, it must create its own circuit.
The following image shows how both nodes create and use their circuits to transfer the data.
Thus, for a two-way data transfer, two electrical circuits are required. To create two electrical circuits, four
wires are required. In the section below, we will see which wires of the UTP cable are used to build the
electric circuits between the nodes.
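The encoding idea described above can be sketched as a toy two-level line code in which the sender holds a high voltage for a binary 1 and a low voltage for a binary 0 during each bit interval. Real NICs use standardized line codes such as Manchester encoding; the voltages, interval, and function names here are placeholders for illustration only.

BIT_INTERVAL_S = 1 / 10_000_000        # one bit every 1/10,000,000th of a second, as in the text
HIGH_V, LOW_V = 1.0, -1.0              # assumed voltage levels for binary 1 and 0

def encode(bits):
    """Sender NIC: convert a bit string into one voltage level per bit interval."""
    return [HIGH_V if b == "1" else LOW_V for b in bits]

def decode(samples):
    """Receiver NIC: interpret each interval's voltage back into a bit."""
    return "".join("1" if v > 0 else "0" for v in samples)

signal = encode("10110001")
print(signal)                          # voltages placed on the wire, one per bit interval
print(decode(signal))                  # -> 10110001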
UTP cable
A UTP cable contains 8 wires. These wires are grouped into four pairs. Each pair consists of two twisted wires.
The first wire has a single color-coded plastic coating while the other wire has that color plus white color
striped plastic coating. For example, for the brown wire pair, one wire’s coating is all brown, while the other
wire’s coating is brown-and-white striped.
Both the NIC and the switch port have an eight-pin socket. To connect these pins with the wires of a UTP cable, a
connector known as the RJ-45 connector is used. The RJ-45 connector has eight physical locations, known
as pin positions or simply pins, into which the eight wires of the UTP cable can be inserted. These pins create
a place where the ends of the copper wires can touch the pins of NIC or switch port.
A NIC uses pins 1 and 2 to transmit data. To receive data, it uses pins 3 and 6. A switch does the opposite:
it receives data on pins 1 and 2 and transmits data on pins 3 and 6. Based on the types of end devices,
a UTP cable can be made in two ways. The first type of cable, known as the straight-through cable, connects
two different types of end devices, such as a PC to a switch. The second type of cable, known as the cross-over
cable, connects two end devices of the same type, such as a PC to a PC or a switch to a switch.
In the straight-through cable, wires are placed in the same position at both ends. The wire at pin 1 on one end of the cable
connects to pin 1 at the other end of the cable. The wire at pin 2 connects to pin 2 on the other end of the
cable; and so on.
The following table lists the wire positions of the straight-through cable on both sides.
Pin   Side A         Side B
1     Green White    Green White
2     Green          Green
3     Orange White   Orange White
4     Blue           Blue
5     Blue White     Blue White
6     Orange         Orange
7     Brown White    Brown White
8     Brown          Brown
A straight-through cable is typically used to connect the following devices:
i. PC to Switch
ii. PC to Hub
iii. Router to Switch
iv. Switch to Server
v. Hub to Server
In the cross-over cable, the transmitting pins of one side connect to the receiving pins of the other side.
The wire at pin 1 on one end of the cable connects to pin 3 at the other end of the cable. The wire at pin 2
connects to pin 6 on the other end of the cable. The remaining wires connect to the same positions at both ends.
The following table lists the wire positions of the cross-over cable on both sides.
Pin   Side A         Side B
1     Green White    Orange White
2     Green          Orange
3     Orange White   Green White
4     Blue           Blue
5     Blue White     Blue White
6     Orange         Green
7     Brown White    Brown White
8     Brown          Brown
A cross-over cable is typically used to connect the following devices (see the sketch after this list):
i. Two computers
ii. Two hubs
iii. A hub to a switch
iv. A cable modem to a router
v. Two router interfaces
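The two wiring tables above can be captured in a pair of small lookup structures, together with the rule of thumb that unlike devices take a straight-through cable and like devices take a cross-over cable. The device groupings below are illustrative assumptions, and auto-MDI/MDI-X ports on modern equipment make the choice automatic.

# Only the data-carrying pins are shown (1, 2, 3, 6).
STRAIGHT_THROUGH = {1: 1, 2: 2, 3: 3, 6: 6}   # each pin maps to the same pin on the far end
CROSS_OVER = {1: 3, 2: 6, 3: 1, 6: 2}         # transmit pair swapped with receive pair

# Illustrative device groupings: PCs, routers, servers, and cable modems transmit
# on pins 1/2, while switches and hubs transmit on pins 3/6.
TRANSMIT_ON_1_2 = {"pc", "router", "server", "cable modem"}
TRANSMIT_ON_3_6 = {"switch", "hub"}

def cable_for(device_a, device_b):
    """Unlike devices take a straight-through cable; like devices take a cross-over."""
    a = device_a in TRANSMIT_ON_1_2
    b = device_b in TRANSMIT_ON_1_2
    return "straight-through" if a != b else "cross-over"

print(cable_for("pc", "switch"))       # -> straight-through
print(cable_for("switch", "switch"))   # -> cross-over
print(cable_for("pc", "pc"))           # -> cross-over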
Fiber Optic Cable
Fiber optic cable is a completely different type of network medium than twisted-pair or coaxial cable. Instead
of carrying signals over copper conductors in the form of electrical voltages, fiber optic cables transmit pulses
of light over a glass or plastic filament. Fiber optic cable is completely resistant to the electromagnetic
interference that so easily affects copper-based cables. Fiber optic cables are also much less subject to
attenuation—the tendency of a signal to weaken as it travels over a cable—than are copper cables. On copper
cables, signals weaken to the point of unreadability after 100 to 500 meters (depending on the type of cable).
Some fiber optic cables, by contrast, can span distances up to 120 kilometers without excessive signal
degradation. Fiber optic cable is thus the medium of choice for installations that span long distances or
connect buildings on a campus. Fiber optic cable is also inherently more secure than copper because it is
impossible to tap into a fiber optic link without affecting normal communication over that link.
Optical fiber consists of thin glass fibers that can carry information at frequencies in the visible light spectrum
and beyond. The typical optical fiber consists of a very narrow strand of glass called the core. Around the
core is a concentric layer of glass called the cladding. A typical core diameter is 62.5 microns (1 micron =
10^-6 meters). The cladding typically has a diameter of 125 microns. Covering the cladding is a protective
coating consisting of plastic, called the jacket.
A fiber optic cable, illustrated in the Figure below, consists of a clear glass or a clear plastic core that actually
carries the light pulses, surrounded by a reflective layer called the cladding. Surrounding the cladding is a
plastic spacer layer, a protective layer of woven Kevlar fibers, and an outer sheath.
Figure: Fiber optic cable has a glass or plastic core surrounded by cladding that reflects the light pulses
back and forth along the cable's length
There are two primary types of fiber optic cable, singlemode and multimode, with the thickness of the core
and the cladding being the main difference between them. The measurements of these two thicknesses are
the primary specifications used to identify each type of cable. Singlemode fiber typically has a core diameter
of 8.3 microns, and the thickness of the core and cladding together is 125 microns. This is generally referred
to as 8.3/125 singlemode fiber. Most of the multimode fiber used in data networking is rated as 62.5/125.
Singlemode fiber uses a single-wavelength laser as a light source, and as a result, it can carry signals for
extremely long distances. For this reason, singlemode fiber is more commonly found in outdoor installations
that span long distances, such as telephone and cable television networks. This type of cable is less suited to
LAN installations because it is much more expensive than multimode cable and it has a higher bend radius,
meaning that it cannot be bent around corners as tightly. Multimode fiber, by contrast, uses a light-emitting
diode (LED) as a light source instead of a laser and carries multiple wavelengths. Multimode fiber cannot
span distances as long as singlemode, but it bends around corners better and is much cheaper. Fiber optic
cables use one of two connectors, the straight tip (ST) connector or the subscriber connector (SC), as shown
in the figure below.
Installing fiber optic cable is very different from copper cable installation. The tools and testing equipment
required for installation are different, as are the cabling guidelines. Generally speaking, fiber optic cable is
more expensive than twisted-pair or coaxial cable in every way, although prices have come down in recent
years.
An important characteristic of fiber optics is refraction. Refraction is the bending of light as it passes from
one medium into another; depending on the angle at which the light strikes the boundary, a material will either
pass or reflect the light. An example of this is when we look into a pond of water.
If the angle of incidence is small, the light rays are reflected and do not pass into the water. If the angle of
incidence is great, light passes through the medium but is bent or refracted.
Optical fibers work on the principle that the core refracts the light and the cladding reflects the light. The
core refracts the light and guides the light along its path. The cladding reflects any light back into the core
and stops light from escaping through it - it bounds the medium!
There are three primary types of transmission modes using optical fiber. They are
a. Step Index
b. Graded Index
c. Single Mode
Step index has a large core, so the light rays tend to bounce around inside the core, reflecting off the cladding.
This causes some rays to take a longer or shorter path through the core. Some take the direct path with hardly
any reflections while others bounce back and forth taking a longer path. The result is that the light rays arrive
at the receiver at different times. The received signal is therefore spread out, becoming longer than the original signal. LED light sources are
used. Typical Core: 62.5 microns.
Graded index has a gradual change in the core's refractive index. This causes the light rays to be gradually
bent back into the core path. This is represented by a curved reflective path in the attached drawing. The
result is a better receive signal than with step index. LED light sources are used. Typical Core: 62.5 microns.
Note: Both step index and graded index allow more than one light source to be used (different colors
simultaneously), so multiple channels of data can be run at the same time!
Single mode has separate distinct refractive indexes for the cladding and core. The light ray passes through
the core with relatively few reflections off the cladding. Single mode is used for a single source of light (one
color) operation. It requires a laser and the core is very small: 9 microns.
We don't use frequency to talk about speed any more, we use wavelengths instead. The wavelength of light
sources is measured in nanometers or 1 billionth of a meter.
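Wavelength and frequency are two views of the same quantity, related by the speed of light (c equals wavelength times frequency). The conversion below uses a few common fiber operating wavelengths purely as example inputs.

C = 3.0e8                              # speed of light, metres per second (approximate)

def wavelength_nm_to_thz(wavelength_nm):
    """Convert an optical wavelength in nanometres to a frequency in terahertz."""
    return C / (wavelength_nm * 1e-9) / 1e12

for nm in (850, 1310, 1550):           # common fiber operating wavelengths, used as examples
    print(nm, "nm ->", round(wavelength_nm_to_thz(nm), 1), "THz")
# 850 nm -> 352.9 THz, 1310 nm -> 229.0 THz, 1550 nm -> 193.5 THz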
Fiber optic cable offers several advantages over copper media:
1. Noise immunity: RFI and EMI immune (RFI - Radio Frequency Interference, EMI - Electromagnetic
Interference)
2. Security: cannot tap into cable.
3. Large Capacity due to BW (bandwidth)
4. No corrosion
5. Longer distances than copper wire
6. Smaller and lighter than copper wire
7. Faster transmission rate
The cost of optical fiber is a trade-off between capacity and cost. At higher transmission capacity, it is cheaper
than copper. At lower transmission capacity, it is more expensive.
Line Impairments
Line Impairments are faults in the line that occur due to either improper line terminations or equipment out
of specifications. These cannot be conditioned out, but can be measured to determine the amount of the
impairment.
Crosstalk
Crosstalk is when one line induces a signal into another line. In voice communications, we often hear this as
another conversation going on in the background. In digital communication, this can cause severe disruption
of the data transfer. Crosstalk can be caused by the overlapping of bands in a multiplexed system, or by poor
shielding of cables running close to one another. There are no specific communications standards that are
applied to the measurement of crosstalk.
All media have a preferred termination condition for perfect transfer of signal power. The signal arriving at
the end of a transmission line should be fully absorbed, otherwise it will be reflected back down the line to
the sender (and appear as an Echo). Echo Suppressors are often fitted to transmission lines to reduce this
effect.
Usually during data transmission, these suppressors must be disabled or they will prevent return
communication in full duplex mode. Echo suppressors are disabled on the phone line if they hear carrier for
400ms or more. If the carrier is absent for 100 mSec, the echo suppressor is re-enabled. Echo Cancellers are
currently used in Modems to replicate the echo path response. These cancellers then combine the results to
eliminate the echo (thus, no signal interruption is necessary).
Frequency Shift
Frequency shift is the difference between the transmitted frequency and the received frequency. This is
caused by the lack of synchronization of the carrier oscillators.
Nonlinear Distortion
Nonlinear distortion changes the wave shape of the signal. If the signal was transmitted as a sine wave (and
arrived as a square wave), it would be an example of severe nonlinear distortion. Amplitude modulated
carriers would suffer drastically if the original wave shape was distorted.
Amplitude Jitter is the small constantly changing swing in the amplitude of a signal. It is principally caused
by power supply noise (60 Hz) and ringing tone (20 Hz) on the signal.
Phase Jitter is the small constantly changing swing in the phase of a signal. It may result in the pulses
moving into time slots that are allocated to other data pulses (when used with Time Domain Multiplexing).
Telephone company standards call for no more than 10 degrees between 20 and 300 Hz and no more than 15
degrees between 4 and 20 Hz.
Transients are irregular-timed impairments. They appear randomly, and are very difficult to troubleshoot.
There are 4 basic types of Transients.
1. Impulse Noise
2. Gain Hits
3. Dropouts
4. Phase Hits
Impulse Noise
Impulse noise is a sharp and quick spike on the signal that can come from many sources: electromagnetic
interference, lightning, sudden power switching, electromechanical switching, and so on. These appear on the
telephone line as clicks and pops: they're not a problem for voice communication, but can appear as a loss of
data (or even as wrong data bits) during data transfers. Impulse noise has a duration of less than 1 mSec, and
its effect is dissipated within 4 mSec.
Gain Hits
Gain Hits are sudden increases in amplitude that last more than 4 mSec. Telephone company standards allow
for no more than 8 gain hits in any 15 minute interval. A gain hit would be heard on a voice conversation as
if the volume were turned up for just an instance. Amplitude modulated carriers are particularly sensitive to
Gain Hits.
Dropouts
Dropouts are sudden losses of signal amplitude that are greater than 12 db, and last longer than 4 mSec. They
cause more errors than any other type of transients. Telephone company standards allow no more than 1
dropout for every 30 minute interval. Dropouts can be heard on a voice conversation (similar to call waiting),
where the line goes dead for a 1/2 second. This is a sufficient loss of signal for some digital transfer protocols
(such as SLIP), where the connection is lost and would then have to be re-established.
Phase Hits
Phase Hits are sudden, large changes (20 degrees or more) in the received signal's phase or frequency
that last longer than 4 mSec. Phase Hits generally occur when switching between Telcos, common carriers,
or transmitters. FSK and PSK are particularly sensitive to Phase Hits. The data may be incorrect until the
out-of-phase condition is rectified. The telephone company standard allows no more than 8 phase hits in any
15 minute period.
Telephone lines are not perfect devices due to their analog nature. The quality of the telephone line determines
the rate that modulated data can be transferred. Good noise-free lines allow faster transfer rates (such as 14.4
kbps) while poor quality lines require the data transfer rate to be stepped down to 9600 bps or less. Phone
lines have several measurable characteristics that determine the quality of the line.
1. Attenuation Distortion
2. Propagation Delay
3. Envelope Delay Distortion
Attenuation Distortion
Attenuation Distortion is the change in amplitude of the transmitted signal over the Voice Band: it is the
frequency response curve of the Voice Band.
To measure Attenuation Distortion, a test signal is swept from 0 to 4 kHz and transmitted into the line at a
standard amplitude of 0 db. The loss of signal, or attenuation, is measured at the
receiving end and compared to a standard reference frequency of 1004 Hz.
Decibel (db) is a relative, logarithmic unit of measure: a ratio between output and input levels. For power, it
is calculated by the following formula:
db = 10 x log10 (Pout/Pin)
so a +3 db gain indicates roughly double the reference power. Expressed in terms of voltage (measured across
the same impedance), the formula becomes db = 20 x log10 (Vout/Vin), so doubling the signal amplitude
corresponds to a gain of about +6 db.
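To make the arithmetic concrete, here is a minimal Python sketch (the function names and sample values are
ours, purely for illustration) that computes decibel gain from power and voltage ratios and expresses
attenuation at a test frequency relative to the 1004 Hz reference:

    import math

    def db_from_power(p_out, p_in):
        # Gain in decibels from a power ratio: +3 db is roughly double the power.
        return 10 * math.log10(p_out / p_in)

    def db_from_voltage(v_out, v_in):
        # Gain in decibels from a voltage ratio: +6 db is roughly double the amplitude.
        return 20 * math.log10(v_out / v_in)

    def attenuation_distortion(loss_at_freq_db, loss_at_1004hz_db):
        # Attenuation Distortion: loss at a test frequency relative to the 1004 Hz reference.
        return loss_at_freq_db - loss_at_1004hz_db

    print(round(db_from_power(2.0, 1.0), 2))     # 3.01
    print(round(db_from_voltage(2.0, 1.0), 2))   # 6.02
    print(attenuation_distortion(9.5, 1.5))      # 8.0 db worse than the reference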
The resulting information is graphed on an Attenuation vs. Frequency chart. Attenuation is a loss of signal
amplitude (the received signal has a smaller amplitude than the transmitted signal) and is indicated by a
positive db figure. It is also possible for a signal to arrive at the receiving end with a larger amplitude than
when it started (indicated by a negative db figure).
Attenuation occurs because the signal has to pass through many pieces of electronic equipment and
transmission media. Some can amplify the signal (make it a larger amplitude) and some may attenuate the
signal (make it smaller).
There are maximum and minimum acceptable limits for the Attenuation Distortion on phone lines. Under
Basic channel conditioning, the acceptable loss in the frequency range of 500-2500 Hz runs from 8 db of loss
to -2 db of loss (referenced to the amplitude at the 1004 Hz reference frequency). On an Attenuation vs.
Frequency chart this range is plotted as -8 db to +2 db in signal amplitude, because a +3 db attenuation equals
a -3 db change in signal amplitude and a +8 db attenuation equals a -8 db change in signal amplitude.
Propagation Delay
Signals transmitted down a phone line will take a finite time to reach the end of the line. The delay from the
time the signal was transmitted to the time it was received is called Propagation Delay. If the propagation
delay were exactly the same across the frequency range, there would be no problem. This would imply that
all frequencies from 300 to 3000 Hz have the same amount of delay in reaching their destination over the
phone line. They would arrive at the destination at the same time, but delayed by a small amount of
propagation delay.
For example, this delay is heard when talking on long distance telephones. In this instance, we have to wait
a little longer before we speak (to ensure that the other person hasn't already started to talk). Actually, all
phone lines have propagation delay.
If the Propagation Delay is long enough, the modem or communications package may time-out and close the
connection. In other words, it may think that the receive end has shut off!
If the Propagation Delay changes with frequency, we would then have the condition where the lower
frequencies--such as 300 Hz-- may arrive earlier or later than the higher frequencies--such as 3000 Hz. For
voice communication, this would probably not be noticeable. However, for data communication using
modems, this could affect the phase of the carrier or the modulation technique that's used to encode the data.
When the Propagation Delay varies across the frequency range, we call this Envelope Delay Distortion. It is
measured in microseconds (us), as the difference between the worst-case (longest) and best-case (shortest)
delays across the band.
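A minimal sketch, assuming we already have one-way delay measurements (in microseconds) at several
voice-band frequencies, shows that Envelope Delay Distortion is simply the spread between the worst-case
and best-case delays; the sample figures below are invented for illustration:

    def envelope_delay_distortion(delays_us):
        # Spread between the worst-case (longest) and best-case (shortest)
        # propagation delay across the measured band, in microseconds.
        return max(delays_us.values()) - min(delays_us.values())

    # Hypothetical measurements: frequency in Hz -> one-way delay in microseconds.
    measured = {300: 2400, 1004: 1750, 1800: 1700, 3000: 2100}
    print(envelope_delay_distortion(measured))   # 700 us between best and worst case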
CHAPTER FOUR
Hubs
As you continue in your quest to build the perfect networking environment for your organization, you will
be faced with many decisions. One of the most basic yet most important is the topology of your network.
That decision sets the stage for everything that is yet to come: what levels of performance and reliability you
can expect, what kind of administrative and support issues you will have to face, how you can scale or expand
your network, and how large a dent the network will make in your operating budget.
In many cases, the thought of having to purchase hubs prevents people from building 10base-T
networks. While a fear of the unknown is natural, the fear of hubs is sometimes more like an uncontrollable
phobia. If you are in the position of having to upgrade an existing 10base-2 LAN, you should not let the
additional cost of hubs stop you from enjoying the increased performance and flexibility of your
infrastructure that hubs can offer. In all probability, you will spend more on wiring than you will on any other
component--especially if you have to wire an entire building and want things to look neat.
What Is a Hub?
As its name implies, a hub is a center of activity. In more specific network terms, a hub, or concentrator, is a
common wiring point for networks that are based around a star topology. Hubs can also be called either Multi
port Repeaters or Concentrators. They are physical hardware devices. Some hubs are basic devices with
minimal intelligence (i.e. no microprocessors). Arcnet, 10base-T, and 10base-F, as well as many other
proprietary network topologies, all rely on the use of hubs to connect different cable runs and to distribute
data across the various segments of a network. Hubs basically act as signal splitters: they take the signals
they receive through one port and redistribute them out through all of the other ports. Some hubs actually
regenerate weak signals before re-transmitting them. Other hubs retime the signal to provide true synchronous
data communication between all ports. Hubs with multiple 10base-F connectors actually use mirrors to split
the beam of light among the various ports. Intelligent Hubs can perform basic diagnostics, and test the nodes
to see if they are operating correctly. If they are not, the Smart Hubs (or Intelligent Hubs) will remove the
node from the network. Some Smart Hubs can be polled and managed remotely.
Purpose of Hubs
Hubs are used to provide a Physical Star Topology. The Logical Topology is dependent on the Medium
Access Control Protocol. At the center of the star is the Hub, with the network nodes located on the tips of
the star.
Star Topology
The Hub is installed in a central wiring closet, with all the cables extending out to the network nodes. The
advantage of having a central wiring location is that it's easier to maintain and troubleshoot large networks.
All of the network cables come to the central hub. This way, it is especially easy to detect and fix cable
problems. You can easily move a workstation in a star topology by changing the connection to the hub at
the central wiring closet.
Hubs are multi port repeaters, and as such they obey the same rules as repeaters (See previous section OSI
Operating Layer). They operate at the OSI Model Physical Layer.
For example, on a 10base-T network, all of your devices will be physically wired to one or more hubs using
unshielded, twisted-pair cabling. Your hub will have multiple ports and possibly multiple types of ports so
that you can connect many devices to it. You may need to connect multiple hubs together. If that is the case,
you may want to use one of the other higher-speed ports on your hubs to build a backbone to your network.
Each hub--and possibly your network servers--should connect directly to your high-speed backbone. Because
the majority of communication on most LANs is between workstations and the primary servers, your
backbone of network wire or segment will play a very important part in the overall performance of your
network.
Token-Ring networks also have devices on them that can be referred to as hubs. A Multi-Station Access
Unit, or MSAU, can be considered a type of hub because it serves a similar purpose to an Ethernet hub.
However, MSAUs use mechanical switches and relays to route packets to each active device serially, not in
parallel as Ethernet hubs do. To reduce confusion between types of networks, Token-Ring MSAUs will
not be discussed as hubs.
There is a simple way to determine whether you need a hub on your LAN. If you are building a network with
a star topology and you have two or more machines, you need a hub. There is, however, an exception to this
rule. If you are building a 10base-T network and you have only two machines, you can connect them to each
other without using a hub. You do, however, need to have a special jumper cable to match the transmitter
leads of one system with the receiver leads of the other system and vice-versa. These kinds of cables are
becoming more and more common. If you cannot find one, however, you can easily make one. It is also
important to point out that the parts required to assemble them are relatively inexpensive. If you have a small
length of twisted-pair cable, two RJ-45 connectors, and a crimping tool, you can build one yourself using
Table below as a guide. It shows how the pins must be connected on a section of Ethernet cable to allow
connections between two systems without the use of a hub. This type of Ethernet jumper is commonly
referred to as a cross-over cable.
Table: Connecting the pins on a section of Ethernet cable to allow connections between two systems without
the use of a hub.
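The table's entries are not reproduced here, but the wiring it describes is the standard 10base-T cross-over:
the transmit pair on one connector (pins 1 and 2) lands on the receive pair of the other (pins 3 and 6), and
vice versa. The sketch below captures that mapping (the helper function is ours, purely for illustration):

    # Standard 10base-T cross-over mapping between the two RJ-45 connectors.
    CROSSOVER_PINOUT = {
        1: 3,   # TX+ on one end -> RX+ on the other
        2: 6,   # TX- -> RX-
        3: 1,   # RX+ -> TX+
        6: 2,   # RX- -> TX-
    }

    def is_symmetric(pinout):
        # A valid cross-over map must be its own inverse: pin a -> b implies b -> a.
        return all(pinout.get(b) == a for a, b in pinout.items())

    print(is_symmetric(CROSSOVER_PINOUT))   # True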
When you add a third machine to your mix, you will need a hub and two additional lengths of 10base-T
cable. Remember to save your dual-system 10base-T jumper--you never know when you may need it.
Types of Hubs
As you may have already guessed, hubs provide a crucial function on networks with a star topology. There
are many different types of hubs, each offering specific features that allow you to provide varying levels of
service. In the next section, we talk about some of the standard features of most hubs, the differences between
passive, active, and intelligent hubs, as well as some of the additional features found in today's more high-
performance hubs.
Basic Specifications
All hubs have a basic set of features that is determined in part by the types of cabling that run to the hub. In
many respects, a hub is simply another network device that must perform within the standard parameters of
the particular type of network to which the hub is connected. Although hubs provide more services to a
network than a simple interface does, they must still follow the restrictions placed on the medium by the
IEEE.
The majority of connections to most hubs are through RJ-45 jacks. RJ-45 jacks are the standard connector
type for many types of Ethernet that rely on twisted-pair cabling. From 10base-T to 100base-T, the cabling
that runs from most workstations, printers, and other devices on your LAN to the hub is more than likely
some type of twisted-pair cable, depending on the speed of the network. At either end of that cable is an RJ-
45 connector.
NOTE: An RJ-45 connector looks similar to the connector that comes out of most North-American
telephones except it is just a little bit wider. Although primarily used to connect devices that rely on twisted-
pair Ethernet, RJ-45 connectors can also be used to connect Token Ring devices.
The length of each cable run to a hub is limited by the medium in use (see the table below). For example, no
run of 10base-T cabling can exceed 100 meters (roughly 330 feet). This is a limitation in the
specification of 10base-T from the IEEE, not a limitation of any particular hub. For example, if your hub has
a 10base-F connector to connect the hub to a high-speed backbone, the maximum run of that connection may
be as far as 2 kilometers--as defined by the IEEE specification for 10base-F.
Table: The maximum distances of cable runs for different types of Ethernet, as determined by the IEEE.
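The table's original entries are not reproduced here; for illustration, the sketch below uses the commonly
cited IEEE 802.3 maximum segment lengths (confirm against the relevant specification before relying on
them) and wraps them in a small check for a proposed cable run:

    # Commonly cited IEEE 802.3 maximum segment lengths, in metres.
    MAX_SEGMENT_M = {
        "10base-5":   500,    # thick coax
        "10base-2":   185,    # thin coax
        "10base-T":   100,    # unshielded twisted pair
        "100base-TX": 100,    # Category 5 twisted pair
        "10base-FL":  2000,   # fibre link segment
    }

    def run_is_legal(media, length_m):
        return length_m <= MAX_SEGMENT_M[media]

    print(run_is_legal("10base-T", 90))    # True
    print(run_is_legal("10base-T", 120))   # False -- add a repeater or move the hub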
Table above lists Ethernet specifications and the maximum length that a single run of each type can be.
Remember that most of these runs can be extended with the use of Ethernet repeaters. Of course, there are
other standard requirements. Since hubs are electronic devices that take a single signal and broadcast it to
multiple ports, hubs need a power source. Most hubs have LEDs that can be used to monitor various
conditions. The two most common are LEDs to monitor power and active connections at particular ports.
Other hubs have additional LEDs to monitor traffic on a particular port, as well as packet collisions on the
LAN in general.
NOTE: One way to get in trouble with the FAA is to set up a LAN on an airplane using a battery-powered
hub. While the promise of six-hours of multi-player Quake or Interstate '76 on a trip from New York to Los
Angeles may seem like an acceptable risk, believe me when I say, "It is not!" Apparently, connecting
electronic devices (such as computers and hubs) using external cables (such as unshielded twisted pair) is
against FAA regulations.
Passive Hubs
Passive hubs, as the name suggests, are rather quiescent creatures. They do not do very much to enhance the
performance of your LAN, nor do they do anything to assist you in troubleshooting faulty hardware or finding
performance bottlenecks. They simply take all of the packets they receive on a single port and rebroadcast
them across all ports--the simplest thing that a hub can do.
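Logically, that behaviour is easy to model. The minimal sketch below (the class and method names are ours)
simply repeats whatever arrives on one port out of every other port, without looking at the frame at all:

    class PassiveHub:
        # Logical model of a passive hub: a frame received on one port is
        # repeated, unchanged, out of every other port.
        def __init__(self, n_ports):
            self.ports = list(range(n_ports))

        def receive(self, in_port, frame):
            # The hub does not inspect the frame; it simply rebroadcasts it.
            return [(out_port, frame) for out_port in self.ports if out_port != in_port]

    hub = PassiveHub(8)
    print(hub.receive(0, b"hello"))   # the same frame reappears on ports 1 through 7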
Passive hubs commonly have one 10base-2 port in addition to the RJ-45 connectors that connect each LAN
device. As you have already read, 10base-2 is 10Mbps Ethernet run over thin coax. This 10base-2
connector can be used as your network backbone. Other, more advanced passive hubs have AUI ports that
can be connected to the transceiver of your choice to form a backbone that you may find more advantageous.
Most passive hubs are excellent entry-level devices that you can use as your starting points in the world of
star topology Ethernet. Most eight-port passive hubs cost less than $200, and if you are upgrading from
10base-2, even the most inexpensive 10base-T setup will deliver a whole new world of performance.
Active Hubs
Active hubs actually do something other than simply rebroadcasting data. Generally, they have all of the
features of passive hubs, with the added bonus of actually watching the data being sent out. Active hubs take
a larger role in Ethernet communications by implementing a technology called store & forward where the
hubs actually look at the data they are transmitting before sending it. This is not to say that the hub prioritizes
certain packets of data; it does, however, repair certain "damaged" packets and will retime the distribution of
other packets.
If a signal received by an active hub is weak but still readable, the active hub restores the signal to a stronger
state before rebroadcasting it. This feature allows certain devices that are not operating within optimal
parameters to still be used on your network. If a device is not broadcasting a signal strong enough to be seen
by other devices on a network that uses passive hubs, the signal amplification provided by an active hub may
allow that device to continue to function on your LAN. Additionally, some active hubs will report devices
on your network that are not fully functional. In this way, active hubs also provide certain diagnostic
capabilities for your network.
Active hubs will also retime and resynchronize certain packets when they are being transmitted. Certain cable
runs may experience electromagnetic (EM) disturbances that prevent packets from reaching the hub or the
device at the end of the cable run in a timely fashion. In other situations, the packets may not reach the
destination at all. Active hubs can compensate for packet loss by retransmitting packets on individual ports
as they are called for and retiming packet delivery for slower, more error-prone connections. Of course,
retiming packet delivery slows down overall network performance for all devices connected to that particular
hub, but sometimes that is preferable to data loss--especially since the retiming can actually lower the number
of collisions seen on your LAN. If data does not have to be broadcast over and over again, the LAN is
available for use for new requests more frequently. Again, it is important to point out that active hubs can
help you diagnose bad cable runs by showing which port on your hub warrants the retransmission or retiming.
Active hubs provide certain performance benefits and, sometimes, additional diagnostic capabilities. Active
hubs are more expensive than simple, passive hubs and can be purchased in many configurations with various
numbers and types of ports.
Intelligent Hubs
Intelligent hubs offer many advantages over passive and active hubs. Organizations looking to expand their
networking capabilities so users can share resources more efficiently and function more quickly can benefit
greatly from intelligent hubs. The technology behind intelligent hubs has only become available in recent
years and many organizations may not have had the chance to benefit from them; nevertheless intelligent
hubs are a proven technology that can deliver unparalleled performance for your LAN.
In addition to all of the features found in active hubs, incorporating intelligent hubs into your network
infrastructure gives you the ability to manage your network from one central location. If a problem develops
with any device on a network that is connected to an intelligent hub, you can easily identify, diagnose, and
remedy the problem using the management information provided by each intelligent hub--that is, in the event
it is a problem that cannot be remedied by the hub itself. This is a significant improvement over standard
active hubs. Troubleshooting a large enterprise-scale network without a centralized management tool that
can help you visualize your network infrastructure usually leaves you running from wiring closet to wiring
closet trying to find poorly functioning devices.
Another significant and often overlooked feature of intelligent hubs is their ability to offer flexible
transmission rates to various devices. Of course, intelligent hubs have additional ports for connecting high-
speed backbones--just like other types of hubs. However, the intelligent hubs support standard transmission
rates of 10, 16 and 100Mbps to desktop systems using standard topologies such as Ethernet, Token Ring or
FDDI. That means that you can gradually upgrade your systems from 10Mbps connections to 100Mbps
connections, or simply deliver faster transmission speeds to devices that need faster services.
In addition, to boost the flexibility in configuration and management of networks of mixed media and mixed
levels of technology, intelligent hubs have incorporated support for other technologies such as terminal
servers, bridges, routers, and switches. Additionally, modern intelligent hubs provide more comprehensive
and easier-to-use network management software, which make them a crucial component of most
comprehensive network management systems.
To understand the Ethernet segment-to-segment characteristics of a hub, consider how Ethernet Hubs
operate. Logically, they appear as a Bus Topology, and physically as a Star Topology. Looking inside an
Ethernet Hub, we can see that it consists of an electronic printed circuit board (which doesn't tell us much).
A functional drawing, however, makes it clear how the physical Star and logical Bus topologies arise.
Understanding that the inside of a Hub is just a collection of repeaters, we can conclude that all connections
attached to a Hub are on the same Segment (and have the same Segment Number). A single repeater is
considered to exist from any port to any other port, even though the path is drawn as passing through 2
repeaters.
Connecting Hubs together through their RJ-45 ports creates Cascading Hubs. One Master Hub (Level 1) is
connected to many Level 2 (Slave) Hubs, which are in turn masters to Level 3 (Slave) Hubs, in a hierarchical
tree (or clustered star). The maximum number of stations in a Cascaded Hub Network is limited to 128.
Backbone Networks
In a Backbone Network, there is no Master Hub. The Level 1 Hubs are connected through their AUI port to
a Coax Backbone. For Thin Coax, up to 30 Hubs can be connected together. For Thick Coax, up to 100 Hubs
can be connected to the backbone. The Backbone is considered to be a populated segment.
Level 2 Hubs are allowed to be connected to the Level 1 Hubs' 10BaseT ports. This connection between the
two Hubs is considered an unpopulated segment, or link segment. Up to 1024 stations (or nodes) can be
attached to the Level 2 Hubs' 10BaseT ports.
All stations and segments would appear as 1 Logical segment, with 1 Network Number. In the real world,
you would never attach 1024 stations to 1 segment; the resulting traffic would slow the network to a crawl.
Hub Addressing
Again, because a Hub is just many repeaters in the same box, any network traffic between nodes is heard
over the complete network. As far as the stations are concerned, they are connected on 1 long logical bus
(wire).
Normal Ethernet operation is Half-Duplex: only 1 station or node is talking at a time. The stations take turns
talking on the bus (CSMA/CD bus arbitration).
Full-Duplex Ethernet Hubs are Hubs which allow two-way communication, thus doubling the available
bandwidth from 10 Mbps to 20 Mbps. Full duplex Hubs are proprietary products, and normally only work
within their own manufacturer's line.
For example, if A wanted to talk to C, a direct 10 Mbps line would be connected through the 2 switching
hubs. Simultaneously, if D wanted to talk to B, another direct 10 Mbps line (in the opposite direction) would
be connected through the two switching Hubs (doubling the available bandwidth to 20 Mbps).
There are no official standards for Full-Duplex Ethernet (proprietary standards do exist).
Switching Hubs
Switching hubs are hubs that will directly switch ports to each other. They are similar to full duplex hubs,
except that they allow dedicated 10 Mbps channels between ports.
If A wanted to communicate with B, a dedicated 10 Mbps connection would be established between the two.
If C wanted to communicate with D, another dedicated 10 Mbps connection would be established.
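A minimal sketch of that idea, assuming the switching hub already knows which station sits behind which
port (the class name and station names are invented), shows how two dedicated port-to-port paths can exist
at the same time:

    class SwitchingHub:
        # Toy model of a switching hub: each conversation gets its own
        # dedicated port-to-port path, so several can run simultaneously.
        def __init__(self, station_to_port):
            self.station_to_port = station_to_port   # e.g. {"A": 1, "B": 2}

        def connect(self, src, dst):
            return (self.station_to_port[src], self.station_to_port[dst])

    hub = SwitchingHub({"A": 1, "B": 2, "C": 3, "D": 4})
    print(hub.connect("A", "B"))   # (1, 2) -- one dedicated 10 Mbps path
    print(hub.connect("C", "D"))   # (3, 4) -- a second, simultaneous path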
Advanced Features
There are many additional features supported by some of the more high-end hubs from various
manufacturers. Some hubs feature redundant AC power supplies. If one should fail, the other takes over--and
is fully capable of powering the entire unit. Other hubs have built-in DC power supplies to function in the
event of a power outage. Redundant fans in some hubs provide cooling of the hub in the event that either fan
fails. Other more commonly encountered advanced features in some intelligent hubs include automatic
termination for coaxial connections, full hot-swap capabilities for connector modules, as well as the ability
to automatically reverse the polarity of improperly wired 10base-T connections. More advanced intelligent
hubs have features such as redundant configuration storage and redundant clocks. The redundant clocks allow
any hub with an onboard clock on the network to act as a master to help in timing packet delivery. The
redundant configuration storage feature available on some intelligent hubs is used when assigning various
properties to various ports. Each similar hub on the network stores the configuration of one other hub,
allowing configuration information to be restored by way of the intelligent link between hubs. Additionally,
some manufacturers deliver modules for routing and bridging services that can live inside of the same chassis
as your larger enterprise-level hubs.
Choosing a Hub
One of the most fascinating aspects of the computer industry is the rapid pace at which available products
manage to evolve, consistently beating IT professionals in their attempt to maintain cutting-edge networks.
Unlike most other industries, where core product lines experience revolutionary growth every five to fifteen
(or potentially more) years, even the most conservative projections show networking technology undergoing
logarithmic growth every year and a half--sometimes even more frequently. Part of the reason for this
explosive growth, other than the obvious demand factors, is the rather large number of software publishers
and hardware manufacturers dedicated to improving the speed and performance of computing. Because
network computing is such a large component of computing in the business world, it is quite easy to see why
there are so many vendors who specialize in networking hardware and software--all dedicated to making
your LAN and WAN perform as well as they possibly can.
When it comes time for you to choose a vendor for your networking hardware, not necessarily just your hubs,
there are three things you want to consider before making any purchases: the breadth and scalability of the
vendor's product line, the availability of consulting and installation services, and the overall cost.
As you begin your search for the perfect hub, you will undoubtedly encounter more vendors than you know
what to do with, many of whom will be offering almost identical product lines. If your intent is to network
only a few machines together, perhaps fewer than 12, you will more than likely be satisfied by selecting any
of the many passive hubs offered on the market. If you have a long-term goal for your workgroup that includes
a larger, more high-end network infrastructure, you may wish to make a more educated purchasing decision.
Finding a vendor who makes a highly scalable workgroup- to enterprise-level intelligent hub system and
additional products such as routers, bridges, and network interface adapters that all function under the same
network management system, might prove to be a bit of a challenge. While there are many vendors that make
hubs for specific audiences, and it may be really easy to purchase any old product from your local computer
store to get you started, it is important to look at the big picture. This step is critically important if you are on
the verge of expansion or are just now implementing a star topology on your LAN as part of your overall
upgrade path. Besides, many vendors will argue that their products work best in a brand-homogeneous
environment.
Many larger vendors also provide networking consulting and installation services through local distributors
and certified resellers. If you are in the position of having to build a network or upgrade an existing one, and
you really do not want to get involved much further than paying the bills, you may find these services quite
beneficial. If you feel that your LAN implementation is going to be a one-time-only type of project and have
no immediate plans to upgrade until the technology significantly improves, you really have nothing to lose
by picking any product that will fulfill your needs from any vendor. At the same time, there is no telling how
any level of assistance from a qualified professional can benefit your implementation. Having someone to
bounce an idea off of may change your whole outlook on what you are trying to accomplish with your
network. Remember, you know what you need your network to do so that you can get your job done; however,
only an experienced network engineer can help you build a network computing environment that will fulfill
your needs in the best possible way.
Many of the more high-end manufacturers of intelligent hubs and comprehensive network hardware solutions
are significantly more expensive than their nearest competitors with seemingly similar product lines. While
entry-level passive hubs obviously will be less than entry-level intelligent hubs, the three-digit price
difference for the same number of ports and the same overall specified speed may seem like wasted money.
However, depending on the scope of your project or the performance that you need from your LAN, the
additional expense in hardware may save you lots of time, money, and aggravation when it comes down to
adding additional nodes, adding routing capabilities, trying to troubleshoot problem connections, increasing
operating speed on certain systems, or simply using your network. You should be aware that certain increased
costs in hardware can offer substantial savings in operation later, when you need to be more flexible and
open.
Summary
You can see how crucial hubs are to even the most basic LAN. All of the communicating you do over a star
topology has to be routed through one or more hubs and it is crucial that your center of activity performs in
a manner that is most beneficial for you. Basic, passive hubs are great for small networks--even in mixed
protocol and/or multiple-operating system environments. 10base-T networks based upon the simplest and
least expensive hubs can easily deliver performance and services that can rival even the most advanced
10base-2 networks. As your network grows and your needs increase, you will clearly see how better, more
advanced systems will enhance your network performance. If you add active hubs, collisions and
retransmissions decrease substantially. You may even see a performance increase in systems that were more
problematic under basic, passive hubs. As your organization expands, and your network infrastructure grows,
you will most likely take advantage of certain features of active hubs, and want to move as quickly as possible
to more intelligent hubs that will become the nucleus of your overall network management system. It is for
that very reason that the hub can make or break your network computing environment.
Bridges
This section provides an overview of bridges and the way that they work, covering both transparent and
source route bridging. The spanning tree algorithm is also explained--this is the basis of the way that
transparent bridges are controlled.
Bridges operate on the OSI Model Data Link Layer. They look at the MAC addresses for Ethernet and Token
Ring, and determine whether or not to forward--or ignore--a packet.
Purpose of a Bridge
The purposes of a Bridge are the following:
❖ Isolates networks by MAC addresses
❖ Manages network traffic by filtering packets
❖ Translates from one protocol to another
What Is a Bridge?
Bridges are both hardware and software devices. They can be stand-alone devices--separate boxes
specifically designed for bridging applications--or they can be dedicated PCs (with 2 NICs and bridging
software). Most server software will automatically act as a bridge when a second NIC card is installed.
Bridges, which operate at the data link layer, connect two LANs (local area networks) together, and forward
frames according to their MAC (media access control) address. Often the concept of a router is more familiar
than that of a bridge; it may help to think of a bridge as a "low-level router" (routers operate at the network
layer, forwarding by addresses such as an IP address).
A remote bridge connects two remote LANs (bridge 1 and 2) over a link that is normally slow (for example,
a telephone line), while a local bridge connects two locally adjacent LANs together (bridge 3). With a local
bridge, performance is an issue, but for a remote bridge, the capability to operate over a long connecting line
is often more important.
Bridges do not know about the higher level protocols inside the frames that they forward. This means that
they will deal with IP, IPX, and so on all at the same time (and in a consistent manner) together with any new
protocols that come along. Bridges also provide a way to segment networks that are using nonroutable
protocols, such as NetBEUI.
The fact that routers deal with data at the network level means that it is much easier for them to interconnect
different data-link layers, such as connecting a Token Ring segment to an Ethernet segment.
Bridges are often more difficult to control than routers. Protocols such as IP have highly sophisticated routing
protocols associated with them, allowing the network administrator to exercise tight control over routing.
Protocols such as IP also provide more information about how networks should be logically segmented (even
in the addresses themselves). Bridges are inherently more difficult to control--they only have the MAC
address and the physical topology to work with. For this reason, bridges are generally more suitable for
smaller, simpler networks.
Transparent Bridges
Transparent bridges are mostly used to interconnect Ethernet segments. The bridge passes traffic that needs
to go between different segments, but isolates traffic that is local to the segment on which it is received. The
bridge thus reduces the total amount of traffic on the network. Bridges have two or more interfaces to the
network--each of these is called a port.
Transparent Bridges examine the MAC address of the frames to determine whether the packet is on the local
Segment or on the distant Segment. Early bridges required the system administrator to manually build the
routing table to tell a bridge which addresses were on which side of the bridge. Manually building a routing
table is called fixed or static routing. Modern bridges are self-learning: they listen to the network frame source
addresses to determine which side of the bridge the node is on, and build a routing table that way.
The following network will be used as an example of a self-learning transparent bridge's routing table
construction.
As frames flow on Bridge #1's local port, Bridge #1 examines the source address of each frame. Eventually,
after all nodes on the local port have become active, Bridge #1 associates their address as being on the local
port. Any frames with a destination address (other than the nodes on the local port) are forwarded to the
remote port. As far as Bridge #1 is concerned, nodes on Bridge #2's local port appear as if they were on
Bridge #1's remote port.
Bridge #2 builds its routing table in a similar manner to Bridge #1. Note the differences.
Transparent Bridges can only work with one path between segments: loops are not allowed. A loop would
confuse the bridge as to which side of the bridge a node was really on (i.e. local or remote?).
Transparent Bridges are not acceptable for use on MANs or WANs, because many paths can be taken to
reach a destination. In the above example, it is simple to determine that a loop occurs, but in a large corporate
network (with several hundred bridges), it may be next to impossible to determine. As such, Bridges are most
commonly used in LAN to LAN connectivity (and not in MANs or WANs).
This section describes bridge operation when there are no loops in the network and there is only one path
between any two given hosts.
The bridge is called transparent because it appears to all hosts on the network as though it is not there. As far
as the network layer (IP for example) is concerned, all networks connected by a bridge might as well be
physically connected.
How is this transparency maintained? The "default" action for a bridge is to forward any received frame. The
only situation where frames will not be forwarded is when the bridge knows that the destination host is
connected via the same bridge port as the source host (for example, if a frame is received on port 1 that is
also destined to go out only on port 1). Fortunately, the bridge can use this rule to eliminate the forwarding
of many frames.
For each of the bridge's ports, a list of MAC addresses connected to that port is maintained. The bridge knows
that host G is connected to port 3 if it receives a frame from host G on port 3. In case hosts move their position
on the network, each entry in this list has a TTL (time to live) associated with it, and it will expire after a set
time. Whenever a frame is received from that MAC address, the TTL for the relevant entry will be reset.
The simplest bridge configuration is one bridge connecting two subnets. However, in the initial learning
phase, the situation is not quite as simple. Imagine that the bridge has just been turned on. All of its data
tables are empty--it does not know where any other hosts are. The bridge's tables through an initial packet
sequence might be as follows:
Host A sends a frame to host B. The bridge receives the frame on port 1, but does not know where host B is
located, so forwards the frame to port 2 as well. The bridge also updates its table for port 1, adding an entry
for host A.
Host B sends a reply back to host A. The bridge receives the frame on port 1, but it is not forwarded to port
2, because the bridge knows that host A is also on port 1. The bridge updates its table for port 1, adding an
entry for host B.
Host A sends a frame to host B. The bridge receives the frame on port 1, but this time it knows where host B
is located (also on port 1), so it does not forward the frame to port 2. The only change to the bridge's tables
is to reset the TTL on the entry for host A.
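The whole procedure can be summarised in a short sketch. The following minimal Python model (the names
and TTL value are ours, for illustration only) learns source addresses, filters frames whose destination is on
the same port, forwards frames whose destination is known to be elsewhere, and floods everything else:

    import time

    class LearningBridge:
        def __init__(self, ports, ttl=300):
            self.ports = ports
            self.ttl = ttl
            self.table = {}   # MAC address -> (port, time last seen)

        def receive(self, in_port, src_mac, dst_mac):
            # Learn (or refresh) the source address.
            self.table[src_mac] = (in_port, time.time())

            entry = self.table.get(dst_mac)
            if entry and time.time() - entry[1] < self.ttl:
                out_port, _ = entry
                if out_port == in_port:
                    return []          # filter: destination is on the same port
                return [out_port]      # forward to the known port only
            # Unknown (or expired) destination: flood to every other port.
            return [p for p in self.ports if p != in_port]

    bridge = LearningBridge(ports=[1, 2])
    print(bridge.receive(1, "A", "B"))   # [2]  -- B unknown, so the frame is flooded
    print(bridge.receive(1, "B", "A"))   # []   -- A is known to be on port 1, so filtered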
For example, you have one segment called Segment 100: it has 50 users (in several departments) using this
network segment. The Engineering Dept. is CAD (Computer Aided Design)-oriented, while the Accounting
Dept. is into heavy number crunching (year end reports, month end statements, etc.).
On this network, any traffic between Client A, B or C and the Accounting File Server (in the Accounting
Dept.) will be heard across the Segment 100. Likewise, any traffic between the Engineering Dept. Clients G,
H or I (to the CAD File Server) will be heard throughout the Network Segment. The result is that "Other"
Department accesses to the Generic File Server are incredibly slow: this is because of the unnecessary traffic
that's being generated from other departments (Engineering & Accounting).
Note: The designations A, B, and C are used (instead of MAC addresses) for brevity. The actual MAC
addresses would be hexadecimal numbers, such as 08-00-EF-45-DC-01.
The solution is to use one Bridge to isolate the Accounting Dept., and another bridge to isolate the
Engineering Department. The Bridges will only allow packets to pass through that are not on the local
segment. The bridge will first check its "routing" table to see if the packet's destination is on the local
segment. If it is, it
will ignore the packet, and not forward it to the remote segment. If Client A sent a packet to the Accounting
File Server then Bridge #1 will check its routing table (to see if the Accounting File Server is on the local
port). If it is on the local port, then Bridge #1 will not forward the packet to the other segments.
If Client A sent a packet to the Generic File Server, Bridge #1 will again check its routing table to see if the
Generic File Server is on the local port. If it is not, then Bridge #1 will forward the packet to the remote port.
Note: The terms local and remote ports are arbitrarily chosen to distinguish between the two network ports
available on a bridge.
In this manner, the network is segmented, and the local department traffic is isolated from the rest of the
network. Overall network bandwidth increases because the Accounting Dept. does not have to fight with the
Engineering Dept. (for access to the segment). Each segment has reduced the amount of traffic on it and the
result is faster access. Each department still has complete access to the other segments, but only when
required.
Bridges listen to the network traffic, and build an image of the network on each side of the bridge. This image
of the network indicates the location of each node (and the bridge's port that accesses it). With this
information, a bridge can make a decision whether to forward the packet across the bridge --if the destination
address is not on the same port-- or, it can decide to not forward the packet (if the destination is on the same
port).
This process of deciding whether or not to forward a packet is termed "filtering packets." Network traffic is
managed by deciding which packets can pass through the bridge; the bridge filters packets.
The MAC layer also contains the bus arbitration method used by the network. This can be CSMA/CD, as
used in Ethernet, or Token Passing, as used in Token Ring. Bridges are aware of the Bus Arbitration and
special translation bridges can be used to translate between Ethernet and Token Ring.
The simple transparent bridge described in the previous section will function well, even with a much more
complex network with many bridges. However, when loops in the network or multiple paths between any
two points are created, the model breaks down. In Figure 7.4, a network is shown with two bridges, both
connecting two subnets together, and thereby creating a loop. This type of design would be useful to create
redundancy, in case one of the bridges fails.
Now imagine the following situation: All bridge tables are empty to start with, and host A sends a frame to
host B (transmission 1 on the diagram).
Bridge J receives transmission 1 on port 1--not knowing where B is, it forwards the frame to port 2
(transmission 2 on the diagram). It adds host A to its table for port 1. Bridge K also receives transmission 1
on port 3; not knowing where B is, it forwards the frame to port 4 (transmission 3 on the diagram). It adds
host A to its table for port 3.
Bridge J receives transmission 3 on port 2--not knowing where B is, it forwards the frame to port 1
(transmission 4 on the diagram). It has received a frame from host A on port 2, so it must update its tables.
Bridge K receives transmission 2 on port 4; not knowing where B is, it forwards the frame to port 3
(transmission 5 on the diagram). It has received a frame from host A on port 4, so it must update its tables.
It is clear that a loop has been created, along with a large amount of unnecessary traffic. Worse still, there
are two packets going around in circles for each packet sent, and the bridges' tables are being continuously
updated.
The problem is caused by the presence of more than one bridge forwarding traffic between the same two
subnets, and this is clearly an unacceptable situation. The chosen way to resolve this problem is the "Spanning
Tree Algorithm" defined by IEEE 802.1d.
Spanning Trees
When all the arrows are used, loops can be seen. The black arrows form a spanning tree by using a subset of
the links. Notice that all nodes are directly or indirectly connected to each other, but there are no loops. This
is the definition of a spanning tree.
Spanning trees are not always unique. Given sufficient redundancy in the network, it is normally possible to
draw a different spanning tree. A resilient network design will ensure that it is possible to draw a spanning
tree in the absence of any given link. The paranoid (or the military) may try to achieve a network that contains
a spanning tree despite the absence of any two links (or more!). Figure 7.6 shows an alternative spanning
tree.
It turns out that it is fairly easy to prove mathematically (graph theory) that a spanning tree always requires
one fewer link than the number of networks it is connecting. It is also easy to see that a completely different
set of three links is gray between the two diagrams. In fact, there is no single link in the diagram that could
be turned gray (that is, broken), which would prevent us from drawing a spanning tree (or in other words,
interconnecting the networks).
It can be proven mathematically that given any set of networks that are connected to one another, it is possible
to find a subset of the links that form a spanning tree (that is, a set that still connects all the networks, but
with no loops).
Bridges physically separate network segments by managing traffic based on the MAC address.
Bridges are store and forward devices. They receive a packet on the local segment, store it, and wait for the
remote segments to be clear before forwarding the packet. The two physical types of bridges are Local and
Remote Bridges.
Local Bridges are used (as in the previous examples) where the network is being locally (talking physical
location now) segmented. The 2 segments are physically close together: same building, same floor, etc. Only
one bridge is required.
Remote Bridges are used in pairs, and also used where the network is remotely segmented (again, talking
physical locations). The two segments are physically far apart: different buildings, different floors, etc. Two
half bridges are required, one at each segment. Each remote bridge is half of a normal bridge, and the pair
may use several different communications media in between.
Bridge Methodologies
There are 3 primary bridging methodologies used by bridges for connecting local area networks:
1. Transparent bridges
2. Spanning Tree Protocol
3. Source Routing
Transparent Bridges were originally developed to support the connection of Ethernet networks. The spanning
tree protocol was developed to improve transparent bridging. Source Routing Bridges are used by Token
Ring. Source routing bridges require a solid understanding of Token Ring concepts, and as such will be
covered under the section discussing Token Ring.
Now that you know a subset of the bridged links exists that will allow any network to operate without loops
(a spanning tree), how do the bridges determine a spanning tree and decide which spanning tree to use?
Bridges communicate via messages called Bridge Protocol Data Units (BPDUs). Before the bridges in the
network can make sensible decisions about how to configure themselves, each bridge and each port need
some configuration data: each bridge is given a bridge ID (a priority value), and each port is given a path
cost and a port priority.
Having configured each bridge, the bridges will automatically determine a spanning tree to use. The
configuration parameters that you have set will determine which spanning tree is chosen.
The bridge with the lowest bridge ID is selected as the root bridge. Bridge IDs are supposed to be unique,
but if two bridges share the lowest ID, the one with the lowest MAC address is used as a tie-breaker. In
Figure 7.7, bridge 1 is selected as the root bridge.
On every bridge except for the root bridge, a root port must be selected. This is supposed to be the best port
for the bridge to communicate with the root bridge. The lowest cost path from each of the bridge's ports to
the root bridge is calculated. On each bridge, the port with the lowest cost path to the root bridge is selected-
-marked as (Root) in Figure 7.7.
If there is only one bridge connecting to a given LAN, it must be the designated bridge for that LAN (for
example, bridge 3 is the designated bridge for LAN G). If there is more than one bridge connected to a given
LAN, the bridge with the lowest cost path to the root bridge is chosen (for example, bridge 4 is chosen over
bridge 3 for LAN F). The designated port connects the designated bridge to the relevant LAN (if there are
multiple ports, the one with the lowest priority is chosen).
Note that a port must be one (and only one) of the following:
1. Root port
2. Designated port for a LAN
3. Blocked
Note that a root port is never a designated port for a LAN (port 7 on bridge 3 is not the designated port for
LAN E, for example). The root port is a path to the root bridge, so there must be another bridge closer to the
root bridge attached to this LAN (in this case, bridge 2). This other bridge would therefore be the designated
bridge for the LAN, and would hold the designated port (port 6).
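A minimal sketch of the two central selections, assuming each bridge ID is a (priority, MAC address) pair
and the path costs from each port to the root are already known (all names and figures below are invented),
looks like this:

    def elect_root(bridges):
        # Root bridge = lowest bridge ID; the MAC address breaks ties.
        # `bridges` maps a bridge name to its (priority, MAC address) identifier.
        return min(bridges, key=lambda name: bridges[name])

    def choose_root_port(port_costs):
        # Root port = the port with the lowest cost path to the root bridge.
        # `port_costs` maps a port number to its path cost to the root.
        return min(port_costs, key=port_costs.get)

    bridges = {
        "bridge 1": (100, "08-00-EF-45-DC-01"),
        "bridge 2": (100, "08-00-EF-45-DC-02"),
        "bridge 3": (200, "08-00-EF-45-DC-03"),
    }
    print(elect_root(bridges))               # bridge 1 (same priority, lower MAC)
    print(choose_root_port({6: 10, 7: 20}))  # 6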
When a bridge is switched on, it assumes that it is the root bridge. The bridge transmits a configuration bridge
protocol data unit (CBPDU), stating the bridge ID of the bridge it perceives to be the root bridge.
A bridge receiving a CBPDU frame with a lower bridge ID than its known root bridge will update its tables.
If the frame was received upon the bridge's root port (upstream), the information is disseminated to all
designated ports (downstream).
If the given bridge ID is higher than its known root bridge, the information will be discarded. If the frame was
received on a designated port (downstream), a reply is sent, containing the lower bridge ID of the real root
bridge.
If the network is reconfigured, either deliberately or due to a link failure, the process will be repeated, and a
new spanning tree decided upon.
The Spanning Tree Protocol was developed to address the problem of loops in Transparent Bridging. It was
defined by the IEEE (Institute of Electrical and Electronics Engineers) in the 802.1D standard.
The Spanning Tree Protocol (STP) converts a loop into a tree topology by disabling a bridge link. This action
ensures that there is a unique path from any node to every other node (in a MAN or WAN). Disabled bridges
are kept in a stand-by mode of operation until a network failure occurs. At that time, the Spanning Tree
Protocol will attempt to construct a new tree, using any of the previously disabled links.
The Spanning Tree Protocol is a Bridge-to-Bridge communication where all bridges cooperate to form the
overall bridge topology. The Spanning Tree algorithm is dynamic, and periodically checks every one to four
seconds to see if the bridge topology has changed.
Bridges #3 and #5 are stand-by bridges, and have their links disabled. This results in only a single path to each
network segment.
Each bridge is assigned an arbitrary number that assigns priority to the bridge in the Internetwork. The
number is concatenated with the bridge MAC address. If 2 bridges have the same priority, the MAC address
is used as a tie breaker mechanism. The lower the assigned number, the higher the bridge priority.
During initial power-up, a Bridge Protocol Data Unit (BPDU) is flooded out each network port of the bridge.
The BPDU contains the following: the current spanning tree root, the distance to the root (measured in hops
through other bridges), the bridge address information, and the age of the information in the BPDU. Bridge
priorities are usually controlled manually so as to configure the traffic flow--over the Internetwork--on a
preferred path.
Problems can arise where, for example, the Spanning Tree Algorithm may select a path from Los Angeles to
New York City --and back to San Francisco--rather than the preferred route of Los Angeles to San Francisco.
Bridge Addressing
Bridges work at the Data Link Layer and they recognize the MAC addresses. Spanning Tree Protocol adds a
Bridge Protocol Data Unit (BPDU) for Bridge to Bridge communications. Source Route Bridges and Token
Ring provide special Data Link layer communication (and will be discussed later).
Collapsed Backbones
Collapsed Backbones take the network backbone and electronically collapse it into a high speed electronic
card cage. Usually, Collapsed Backbones operate at 100 Mbps. The card cage holds plug-in cards for
repeaters, hubs, bridges, routers, brouters and gateways.
Software is provided to remotely configure all plug-in cards using SNMP (Simple Network Management
Protocol). SNMP is a network management standard that lets intelligent network devices communicate their
configuration to administrators operating from remote workstations. The workstations can be located
thousands of miles away!
Source routing bridges operate on a different principle than transparent bridges. Transparent bridges present
the illusion of one continuous network segment to the connected hosts. Source route bridges do not make any
decisions about where to forward packets, and do not build up lists of host MAC addresses. Though the
principle of routing is different, source route bridges must still be configured with identification information.
Each bridge is given a unique number (the bridge ID). Each LAN (Token Ring) is also given a unique
identifier (the ring ID). This can be seen clearly in Figure 7.8. Note that some bridges use different bases for
identifiers (that is, some are in hexadecimal, and some are in decimal).
Any station wishing to send a frame to a station on a remote network must specify which bridges the frame
should traverse. For instance, if host A wishes to send a frame to host D, it could specify any one of several
sequences of bridges and rings to traverse.
There is not always such a large choice of routes--the network in Figure 7.8 is highly robust, providing a
large number of alternative routes. In a simpler network, there might only be one available route. The choice
of which route to use is strictly the responsibility of the sending host.
What would transparent bridging do with this network? The spanning tree algorithm would force some of the
bridge ports to be blocked. You might well end up with a spanning tree where bridges 1, 4, 5, and 8 are totally
redundant (until a failure occurs). However, this would put a massive load on Token Ring 3. With source
route bridging, a more flexible routing scheme can be achieved at the cost of the hosts managing all the
routing information.
The IEEE 802.5 standard for Token Ring defines several fields of interest in the frame header. The I/G bit at
the head of the source address is set if routing information is present in the frame. The route itself is defined
in the routing information section as a list of 2-byte route designators.
Path Discovery
If the stations are to specify the path to be taken to the remote host, they must have a way of finding the path.
This function is performed by sending out path discovery messages. Path discovery need not be performed
for each packet sent, but rather the path information is cached and reused.
The transmitting host sends out an All Routes Explorer (ARE) frame with a blank list of route designators.
Each bridge receiving the frame adds the bridge ID and the network ID to the list of route designators, and
forwards the frame to all ports other than the port on which it was received.
The receiving host will receive one ARE frame for each possible route, from the transmitter to the receiver.
For each ARE frame arriving at its destination, an SRF (specified route frame) is sent in reply to the original
host. A path to the destination is then chosen by the original host.
The Spanning Tree Explorer (STE) frame relies upon a spanning tree being defined. An STE frame is
broadcast from the originating host, which is passed across a spanning tree by the bridge network (see the
previous section on spanning trees). This means that exactly one copy of the STE frame will arrive at each
LAN on the network. The destination host will therefore receive only one copy of the STE frame, with a
copy of the route taken in it. The destination host responds with an ARE frame to the originating host.
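To make the explorer mechanics concrete, here is a minimal sketch of ARE flooding on an invented
three-ring topology (the bridge and ring identifiers are ours): each bridge copies the explorer onto every other
ring it connects, appending a (bridge, ring) designator, and every copy that reaches the destination ring
represents one discovered route:

    def discover_routes(bridges, src_ring, dst_ring):
        # All Routes Explorer sketch: flood from the source ring, never revisit
        # a ring, and record every route that reaches the destination ring.
        routes = []

        def explore(ring, route, visited):
            if ring == dst_ring:
                routes.append(route)
                return
            for bridge, rings in bridges.items():
                if ring in rings:
                    for next_ring in rings - {ring}:
                        if next_ring not in visited:
                            explore(next_ring, route + [(bridge, next_ring)],
                                    visited | {next_ring})

        explore(src_ring, [], {src_ring})
        return routes

    # Hypothetical topology: bridge -> set of ring IDs it connects.
    topology = {"B1": {1, 2}, "B2": {1, 3}, "B3": {2, 3}}
    for route in discover_routes(topology, src_ring=1, dst_ring=3):
        print(route)
    # [('B1', 2), ('B3', 3)]  then  [('B2', 3)]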
Source route bridges put the requirement on the host to determine all the routing information and route
discovery. This means that more traffic is generated by routing information--assuming there are more hosts
than bridges on the network!
Transparent bridges require no input from the host, and therefore no modifications to the network stack.
However, the set of paths used is often suboptimal. The method used to avoid loops is simply to disable
some ports!
Summary
Bridges operate at the data link layer, allowing them to be independent of the network layer protocols being
used. There are two main types of bridges: transparent bridges and source routing bridges. Transparent
bridges make decisions about frame routing for themselves and are most commonly found on Ethernet
networks. Source route bridges rely on the host for routing decisions, and are most commonly found on
Token Ring networks.
Switches
One of the most amazing, and most aggravating, aspects of networking in the late 1990s is the rapid pace at
which the underlying technologies consistently manage to evolve. In other industries, core technologies
experience revolutionary growth every 5 to 15 years (or more), but even the most conservative projections
show the underpinnings of the information technology industry growing by leaps and bounds every 18
months, sometimes even more frequently! Most recently, companies have become locked in a number of
heated battles over which vendors will provide the hardware and software that will lead the industry into the
twenty-first century. These companies have been releasing at breakneck speeds attractive yet bloated
browsers, plug-and-play desktop operating systems, and proprietary high-speed modem designs, in addition
to wiring various cities with cutting-edge communications media, including ATM (Asynchronous Transfer
Mode), ADSL (Asymmetric Digital Subscriber Line), and more. The race to provide fast, robust, and cost-
effective data-sharing solutions is on, and everyone--from vendors to corporate clientele--is playing for
keeps.
The one disheartening factor associated with this tremendous growth is that the budgets of most corporate IT
managers and network engineers continue to shrink. If you think about it from the technologically illiterate
standpoint of most executives, unless system problems become chronic and network reliability begins to
suffer noticeably, there's obviously no need to invest tens of thousands of additional dollars for infrastructure
build-outs. After all, if it ain't broke, why fix it? Rarely, if ever, are any long-term plans or proactive
procedures developed for identifying upcoming network issues and resolving them in a timely and low-cost
manner.
Unfortunately, this hands-off attitude leaves the network engineer in a rather precarious position. Financially,
your hands are tied: You'll be lucky to get even enough funding to maintain your current network
infrastructure, let alone make the serious improvements to cabling, hardware, and software necessary to
support your ever-growing user community. So how do you stave off latency, device time-outs, intermittent
connections, and crotchety users, all while not spending thousands of dollars to physically increase
bandwidth, add new servers, or hire a staff of thousands? The solution can be described in two words:
Ethernet switching.
As executive management becomes increasingly tight-fisted when it comes to providing financial assistance
for the support, maintenance, and expansion of network resources, you'll frequently be confronted by a
number of different problems that will require innovative or unorthodox solutions--often your only route
when funding is scarce. Two factors will significantly affect your network. You'll need to pay special
attention to both of them, or you're liable to suddenly find yourself between a rock and a hard place.
First, computers, workstations and servers alike, simply aren't what they used to be. No longer are desktop
machines simply dumb terminals or systems that are only slightly more intelligent than the average toaster.
Today's class of desktops, workstations, and network servers have been rebuilt bigger, faster, and stronger
than their predecessors, with the memory and processing capacity to crush many of the server-class machines
that were in wide use only five years ago. This has resulted in newer, more demanding roles for these
machines, roles that require the systems to be constantly transmitting across your network to access remote
files, surf the Internet, use shared devices, and so on.
As if this weren't enough of a challenge all by itself, overall network growth will, if left unchecked, work to
cripple your network infrastructure. As your network grows in terms of nodes, users, and services provided,
sooner or later you'll be faced with the mother of all network problems: insufficient bandwidth. Performance
will undoubtedly degrade, users will gripe about slow network response times, client/server applications will
grind to a halt, and cross-network connections will be few and far between. This is not good for your network
or for your hopes for a promotion. You've got two options at this point--work for a company that doesn't care
how much money you spend and can afford long spans of network downtime, or adopt Ethernet switching.
Using bridges to separate multiple network segments has long been seen as an excellent method of reducing
cross-segment traffic and realizing modest performance increases in growing or mature networks. Although
bridging allows network engineers to subdivide saturated networks into more manageable mini-networks,
the gains achieved through bridge solutions hold only up to a certain point, after which the initial saturation problems recur. This generally happens when large numbers of users, on any of your segments, begin to demand significant numbers of intersegment connections. This is when the bridge's inability to provide simultaneous cross-segment connections starts to hobble the effectiveness of the bridge solution. Thankfully,
one of the strengths of switches revolves around multiple cross-segment communications, so there is a route
out of your misery. Despite all this, it is important to recognize that in certain situations, bridges offer a better
solution than switches--don't reject bridges out of hand. If your subnets will require little, if any, cross-
segment contact, a bridge may just do the trick.
In an Ideal World
In an ideal world, you'd never be faced with critical network congestion. You'd always have plenty of time,
staff, and financial support to isolate and neutralize even the most nascent of problems. You say that your
company has just acquired your major competitor and that you have to integrate an additional 12,000 nodes
into your currently overburdened token ring network? Just scrap the damn thing and install FDDI. You've
just taken over three new floors in your building and are hiring a bunch of new employees? Heck, why not
just rip out your old wiring, run some CAT5, and sing the praises of Fast Ethernet? Pardon the sarcasm, but
on what planet are you currently living?
Alas, we operate in anything but an ideal world. Fast Ethernet, ATM, FDDI, and ADSL solutions (among
others) are simply too expensive and resource-intensive to be used as fast-response weapons in your fight
against network congestion. Although your IS team is probably quite versatile, the task of designing,
implementing, and supporting a network based on these new technologies requires a good deal of time and
effort, is anything but a straightforward procedure, and no doubt will require significant outlays of cash for
staff training. Not to mention, of course, transition downtime, the inevitable glitches associated with large-
scale installations of new technology, as well as any of a host of unanticipated issues that will need immediate
attention. Maybe, just maybe, you can pull it off successfully. But if this is your only contingency plan, you
can start looking for a new job soon.
Another avenue that is frequently explored when trying to alleviate network congestion centers on the
installation of bridges and routers to segregate intersegment traffic. Although this is a valid solution that can
often yield at least marginal results, configuring bridges and routers to provide optimal performance takes
high degrees of skill, patience, and lots of network traffic analysis. Again, this option should be exercised
only if time and money are on your side.
In reality, you can't simply dump your entire infrastructure every time problems crop up with the current iteration of your network. Your boss won't stand for it and most likely won't (or can't) pay for new hardware and software, and your company certainly can't afford one transition after another, with all the downtime and disruption that entails, every time you want to take the easy way out of a networking fiasco.
In many instances, Ethernet switching has emerged as the de facto solution for dealing with these types of
network congestion issues, often saving network administrators time, money, and frustration in the process.
The redesign of networking infrastructure to integrate Ethernet segment switches into a traditional,
nonswitched networking environment can yield surprising results, not only in terms of increased overall
network performance but also in light of the low price/performance trade-offs that will be required to
implement switching solutions, as opposed to FDDI, Fast Ethernet, or other high-speed networking
technologies. The benefits of Ethernet switching are many: It is relatively inexpensive compared to other
options; it can reap tangible improvements in network performance, regaining lost bandwidth and allowing
for full duplex (20Mbps) networking; it can be implemented in a proportionately shorter period of time than
FDDI, Fast Ethernet, or other technologies; and it allows you to retain your investments in current network
infrastructure. All in all, this sounds like a pretty good option when the demon of clogged networks is staring
you in the face.
How do switches work? In a very broad sense, Ethernet switches function by helping you break down greater,
traffic-intensive networks into smaller, more controllable subnetworks. Instead of each device constantly
vying for attention on a single saturated segment of 10Mbps Ethernet, switches allow single devices (or
groups of devices) to "own" their own dedicated 10Mbps segments connected directly to the high-speed
switch, which then facilitates intersegment communication. Although this sounds a lot like a bridge, there
are some important distinctions that make switches much more dynamic and useful pieces of hardware.
Switches themselves are hardware devices not entirely different in appearance from routers, hubs, and
bridges. However, three important factors separate switches from their networking brethren: overall speed
(switches are much faster); forwarding methodology or electronic logic (smarter); and higher port counts. In
contrast to the functionality of bridges and routers, which traditionally utilize the less effective and more
expensive microprocessor and software methods, switches direct data frames across the various segments in
a faster and more efficient manner through an extensive reliance upon on-board logic in the form of Application-Specific Integrated Circuits (ASICs).
Like bridges, switches subdivide larger networks and prevent the unnecessary flow of network traffic from
one segment to another, or in the case of cross-segment traffic, switches direct the frames only across the
segments containing the source and destination hosts. In a traditional nonswitched Ethernet situation, each time a particular device transmits, or "talks," on the network, every other device must defer until the medium is free again--the CSMA/CD behavior defined in the IEEE 802.3 specification, which is how collisions are avoided. Although this ensures
the integrity of your data, it does nothing to increase overall network speed. Switches help to ensure additional
network access opportunities for attached devices (increasing speed and reducing latency) by restricting data
flows to local segments unless frames are destined for a host located on another segment. In such a case, the
switch would examine the destination address and forward the requisite frames only across the destination
segment, leaving all additional segments attached to that switch free from that particular broadcast and
(theoretically) able to facilitate local-segment traffic. Rather than being a passive connection between
multiple segments, the switch works to ensure that network traffic burdens the fewest number of segments
possible.
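To make the forwarding decision just described more concrete, here is a minimal sketch, in Python, of the logic a learning switch applies to each frame. The class name and port numbers are purely illustrative--a real switch performs this lookup in dedicated ASIC hardware, not in software.

# Minimal sketch of a dynamic (learning) switch's forwarding decision.
# All names here are illustrative; a real switch does this in hardware.

class LearningSwitch:
    def __init__(self, num_ports):
        self.table = {}            # MAC address -> port number
        self.num_ports = num_ports

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn: remember which port the source address was seen on.
        self.table[src_mac] = in_port

        # Forward: if the destination is known, send only to that port;
        # otherwise flood to every port except the one the frame came in on.
        out_port = self.table.get(dst_mac)
        if out_port is not None and out_port != in_port:
            return [out_port]
        elif out_port == in_port:
            return []              # source and destination share a segment
        return [p for p in range(self.num_ports) if p != in_port]

switch = LearningSwitch(num_ports=4)
print(switch.handle_frame(0, "AA:AA", "BB:BB"))  # unknown destination -> flood [1, 2, 3]
print(switch.handle_frame(1, "BB:BB", "AA:AA"))  # destination learned -> [0]

The key point is the two-step learn-then-forward behavior: the table is built as a side effect of ordinary traffic, which is exactly why such switches need so little manual configuration.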
Switch Properties
By now you should be convinced of the important role that switches can play as part of your Ethernet network.
If not, you may want to reread the chapter up to this point. Switches may save your job when your network
starts down the inevitable road toward total collapse.
If you've whole-heartedly embraced this chapter's recommendations and are ready to go switch-shopping, a
question presents itself: How do you pick a switch that will suit your needs? You'll need to spend a little time
getting to know switches, some of their more important features, and how they do that which they do so well.
Once you've got that information under your belt, you should be in a fairly good position to make authoritative
choices about switch purchases.
If you've gotten to the point where your network is extremely congested and you've called vendors in for
demonstrations and quotes, be extraordinarily wary if their solutions depend on static switches. Although the
devices that you evaluate during the course of re-engineering your network may or may not be explicitly
referred to as static switches, take a good look at the functionality of the particular piece or pieces of
hardware. If the products perform in a fashion that appears to make them nothing more than glorified hubs,
chances are that you really don't want to invest in that type of switch. After all, the point of this whole
operation is to segment and intelligently control intersegment traffic, thus reducing congestion. Static
switches just don't hit the mark.
On the other end of the spectrum are the products that you do want to consider seriously: dynamic switches.
Dynamic switches not only pay special attention to the forwarding of packets to their proper destination, but
also maintain a table that associates individual nodes with the specific ports to which they are connected.
This information, updated each time a particular machine transmits across the network, or perhaps at
operator-defined intervals, keeps the switch's information as to node/port combinations up to date, allowing
the switch to quickly direct frames across the proper segments, rather than across all segments on the switch.
NOTE: Dynamic switches will continue to save you huge amounts of time and energy long after you first
integrate them into your network. Because dynamic switches update their forwarding tables every time
devices broadcast across the network, you can rearrange your network, switching workstations from port to
port to port, until your network is configured in the manner that suits you best, or you're blue in the face,
whichever comes first! The tables will be updated automatically and your network won't go down!
There's a great ongoing debate about whether segment switching or port switching provides the optimum
solution for resolving network congestion crises. It all boils down to a question of cash on hand: If you've
got the cash, go with port switching; if not, then segment switching will be the order of the day. What's great
about the segment-versus-port debate is that, for a change, you win either way.
Segment switches are able to handle the traffic from an entire network segment on each port, allowing you
to connect a higher number of workstations or segments with fewer switches/physical ports. The great aspect
of segment switches is that they are also capable of handling a single workstation on each port (in essence, a
segment with one node). This will allow the network engineer to prearrange machines requiring only
intermittent network access along the same segment, sharing one (relatively) low-traffic 10Mbps pipe. At the
same time, high-end machines, such as network and database servers, optical drives, and other devices can
be connected with a one device/one port scheme, allowing these high-bandwidth and critical devices their
own dedicated path to the greater network without having to compete with someone's Internet game for
network access. Because of the inevitable cost controls that you encounter on a daily basis, segment switching
is the preferred and most readily implemented solution because it requires little in the way of additional
expenditures for hardware, additional cabling, and so on.
Port switches (also referred to as switching hubs) are designed to accommodate a single device on each
physical port. This is a network manager's dream--each workstation, server, and random device would have
its own dedicated, 10Mbps path to the rest of the network. However, implementing a port-switching solution
demands a good deal of capital for additional wiring (cable runs are needed from each device directly to the
switch) and enough switches to provide the requisite number of physical ports. Additionally, as your network
grows, you'll be faced with significantly increased expansion costs because you'll need new cable runs and
possibly entirely new switches every few months. Again, if you've got lots of cash, this is a great option;
you'll have quite the impressive network. However, whatever route you choose, you'll certainly end up with
a much better network than you had prior to implementing switching.
Cut-Through Switching
Although switches by themselves will provide impressive gains in your overall network performance, there
will occasionally be certain situations in which you will want (or need) to squeeze just a little more juice out
of the system. Instead of looking at your boss and screaming in despair, an excellent alternative is to
implement a cut-through switching solution.
Cut-through switching helps speed network communication by forwarding packets much sooner than
traditional switching configurations will allow. This is achieved by forwarding packets to their destination
machine prior to receiving them in their entirety, sending them on as soon as the switch is able to determine
the destination address. Although this generally reduces network latency, cut-through switching can often
allow many bad packets to eat up available bandwidth. To prevent this, reconfigure your switch to allow for
a marginally longer delay between the receipt and forwarding of packets. Ideally, as soon as the switch
receives the packet, it should buffer 64 bytes to ensure that the possibility of packet errors has been
eliminated. After the possibility of these errors has passed, the switch can then forward the packets across
the appropriate segment to the destination host. This slightly increases network latency, though it will provide
for faster forwarding than a store-and-forward switch. Unfortunately, if yours is an extraordinarily busy network, the
benefits of cut-through switching will be less noticeable, and will reach their limits much sooner than in a
less intensive environment.
Store-and-forward switching devices, as the archnemesis of cut-through switches, take an entirely different
approach. It's very much like the tortoise and the hare, with store-and-forward devices playing the slower,
yet more dependable, role of the two.
Instead of the faster send-it-as-soon-as-you-can rule used by cut-through devices, store-and-forward devices
wait until the entire packet is received by the switch, only then sending it on to its destination. This lets the
switch verify the packet's CRC and eliminate the possibility of other transmission errors, allowing for highly
reliable data transmission across your network. Although this doesn't strictly increase network performance,
it does eliminate the additional transmissions that must occur as a result of packet errors that otherwise would
have occupied network resources, thus providing an associated speed increase.
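The trade-off between the two approaches comes down to how much of the frame must arrive before forwarding can begin. The sketch below illustrates that under a few simplifying assumptions: a standard Ethernet frame layout, the 64-byte runt threshold for the modified cut-through mode (often called fragment-free), and an ordinary CRC-32 standing in for the frame check sequence a real switch verifies.

# Illustrative comparison of when each switching mode can start forwarding
# a frame. Byte counts assume a standard Ethernet frame layout; zlib's CRC
# simply stands in for the frame check sequence.
import zlib

DEST_ADDR_LEN = 6      # destination MAC is the first 6 bytes of the frame
RUNT_THRESHOLD = 64    # collision fragments are always shorter than 64 bytes

def forwarding_point(frame: bytes, mode: str) -> int:
    """Return how many bytes must arrive before forwarding can begin."""
    if mode == "cut-through":
        return DEST_ADDR_LEN               # forward as soon as dst is known
    if mode == "fragment-free":
        return RUNT_THRESHOLD              # wait out possible collision runts
    if mode == "store-and-forward":
        payload, fcs = frame[:-4], frame[-4:]
        assert zlib.crc32(payload).to_bytes(4, "little") == fcs, "bad frame"
        return len(frame)                  # whole frame received and checked
    raise ValueError(mode)

frame = bytes(60) + zlib.crc32(bytes(60)).to_bytes(4, "little")
for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, forwarding_point(frame, mode))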
As with just about any area of networking technology, there are a number of additional issues that must be
considered when implementing a switching solution for your network. These topics go above and beyond the
simple selection of a basic switch and instead take a holistic view of networking in order to create a more
powerful and more efficient finished package. The following section briefly covers high-speed network
interfaces, competing solutions for high-speed Ethernet, network management issues, and virtual network
options that can work in concert to make your network perform well above your expectations.
High-Speed Interfaces
When building (or rebuilding) a network with high-speed switching as the centerpiece of the endeavor, it's
important to upgrade and to standardize as many of your network connections as you can, based on your
particular hardware and financial constraints. In addition to the host adapters in each of these instances, it is
also important to provide the associated high-speed connection, such as CAT5. Three areas need to be
addressed: servers, workstations and attached devices, and interswitch connections.
Network Servers
Because the NIC on a server is one of the prime areas where bottlenecks occur, it's important to install a high-
speed interface on the server to alleviate NIC congestion. Because your network is only as fast as its slowest
part, it's up to you to ensure that easily upgraded items, such as host adapters, do not pose significant
performance barriers in and of themselves. This will allow for fast data transfers to and from the server.
Workstations and Attached Devices
As the second factor in the network equation, workstations and other devices that are attached to your network
can also make or break a network's performance based on the type and speed of network adapters that are
installed throughout your workstation community. High-speed interfaces are important because they allow
for faster connections and data transfers with switches and servers, and will free up the network for other
devices in a much shorter time frame. Additionally, your network, switches notwithstanding, will experience
degraded performance if there is a significant differential among the interface speeds of your various network
devices. The reason for this is that fast ports usually can't transmit prior to receiving the whole transmission
from a slow port, and slow ports cannot even begin to utilize the full bandwidth provided by a fast port. In
either case, the faster port is at the mercy of the slower port, and the extra bandwidth simply goes to waste.
Interswitch Connections
Just as it is important to provide high-speed interfaces to servers and workstations, it is equally important to
ensure that your switches, when interconnected, can communicate with one another at similarly fast rates.
After all, your network redesign is for naught if your switches themselves become the very bottleneck that
you have been trying to avoid!
Streamlining interswitch connections is generally not the most urgent issue that you'll encounter when
integrating switches into your network, simply because the majority of installations will require only a single
switch, or will have switches that themselves are supporting completely isolated segments. However, when
switches need to be interconnected, in effect creating a miniature high-speed backbone between each other,
it's important to provide the fastest interfaces and cabling possible to ensure plentiful bandwidth and low-
latency network access.
Two of the more recent developments in Ethernet networking deserve a paragraph or two when discussing
switched Ethernetworks. Both technologies can afford significant increases in overall network performance
with only modest expenditures in labor and hardware necessary to implement them.
Designing a switched Ethernet network that combines port and segment switching, coupled with full duplex
(20Mbps) Ethernet operation will increase overall network throughput and will greatly decrease network
latency, providing a much more responsive network for your user community. Full duplex Ethernetworks are
implemented by disabling the collision detection procedures involved with the traditional (half duplex)
Ethernet CSMA/CD schema. Full duplex Ethernetworks function across existing cable infrastructure,
allowing you to retain a good portion of your current investment in network technology. Of course, there's
always a catch. Although you can use your current wiring, you will need to purchase newer, high-speed
network interface cards (NICs) for PCs, network and other servers, and any other devices attached to your
network.
Another of the amazing technologies that have been increasing the speed at which network communications
are taking place is Fast Ethernet, which is a new system that allows Ethernet-based traffic to cross your
network at speeds at or near 100Mbps, well beyond the speed of traditional Ethernet (10Mbps) or Token
Ring (4 or 16Mbps) networks. The 100BASE-TX implementation of Fast Ethernet operates over CAT5 cable (existing CAT3 wiring is supported only by the 100BASE-T4 variant), and CAT5 is preferred because its higher signal-to-noise ratio ensures a more reliable level of communication.
As we've already discussed, one of the places where network congestion most frequently occurs is at the
network server. Although switched networks are the first step in lessening this bottleneck, supplementing
your switch with high-speed Ethernet can provide greater performance gains than can be realized by either
technology by itself. If you're up to the task of rewiring your entire office building, Fast Ethernet is the least
expensive and most easily implemented solution available for providing 100Mbps networking throughout
your organization. ATM, FDDI, and ADSL are nice, but without a lot of time for training, a large budget,
and a whole lot of patience, they're not likely to be deployed in your company any time soon!
Network Management
When evaluating your switching solutions, be sure that your manufacturer supports (or hopefully provides)
some type of SNMP-compliant management tools so that you can easily and effectively monitor and
troubleshoot your switches. Although network management resources will vary to some degree from vendor
to vendor, make sure that your particular switch can readily supply you with performance, error, and other
related information so that you can easily spot and address network trouble.
Virtual Networks
If proactively monitoring your network, deploying speed-saving cut-through switching, and installing high-
speed interfaces throughout your network still hasn't delivered the fantasy performance levels that you've
been pining for, you're not out of options just yet. Another step you can take will require a lot more time and
effort, but if you're working in a particularly large and unwieldy switched environment, then perhaps virtual
networking is the right solution for your particular needs.
The reality of virtual networking is not far off from what you're probably envisioning right now. Virtual
networking is the creation of multiple logical networks out of a single physical network or grouping of
segments connected to a single switching device. This can be accomplished by configuring the switch to
allow certain workstations or segments access only to specifically delineated segments, thereby denying
access to all other segments connected to that particular switch.
Virtual networking can also be used as a management and security tool, enabling an administrator to group
segments together into logical networks based on department (legal, production, customer service, and so
on), physical location in the building, or simply to further filter excess traffic from busier parts of the network.
What's more, virtual networking can be implemented in such a way as to allow only packets from certain
predefined hosts access to restricted segments. In this way, the flow of data from potentially hostile machines
can be eliminated long before those machines pose a threat to corporate data or the performance of your
network.
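Conceptually, the switch is simply consulting a port-to-VLAN table before it forwards anything. The following rough sketch shows the idea; the VLAN names and port numbers are invented for illustration and do not correspond to any particular vendor's configuration syntax.

# A rough sketch of the port-to-VLAN grouping described above: frames are
# only forwarded between ports that belong to the same virtual network.
port_vlan = {
    1: "legal", 2: "legal",
    3: "production", 4: "production", 5: "production",
    6: "customer-service",
}

def may_forward(in_port: int, out_port: int) -> bool:
    """Allow a frame to cross from in_port to out_port only within a VLAN."""
    return port_vlan.get(in_port) == port_vlan.get(out_port)

print(may_forward(1, 2))   # True  -- both ports are in the "legal" VLAN
print(may_forward(1, 3))   # False -- legal traffic never reaches production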
Summary
As local area networks become increasingly complex and crowded beasts, the limits, financial and
otherwise, that are imposed on technology support teams require that these troubleshooters approach network
problems in new and often unconventional ways. Rather than scrapping your current network in favor of the
revolutionary and often enticing technologies that are constantly flooding the market from a variety of
vendors, one of the best ways to address network performance issues is to redesign your current network to
include Ethernet switches.
By including Ethernet switches as part of your greater network, you'll gain a wide variety of benefits,
including decreased latency, faster file transfers, fewer collisions and other transmission errors, and
significantly easier management of the greater network. Your users will love you for providing a much more
user-friendly network environment, your bosses will be thrilled that you've managed to keep the
network together and responsive with a minimum of cash, threats, and ultimatums, and you'll be happy to
avoid the gripes, groans, and midnight pages from the network operations center that tend to go hand in hand
with slow-performing networks.
Routers
The simplest of networks can be imagined as a one-wire bus, where each computer can talk to any other by
sending a packet out onto that bus.
But as you increase the number of computers on the network, this becomes impractical. There are several main problems: every host must share the same fixed amount of bandwidth, every host must contend with every other host for access to the wire, and a fault or an unusually talkative station anywhere on the bus affects the entire network.
Segmenting the network helps solve all of these problems. But if you break the network into separate
segments, you must provide a mechanism for the different segment hosts to communicate. This normally
involves selectively passing data between segments at some layer of the ISO network stack. Let's look again
at the network stack to see where routers fit in.
Routers are both hardware and software devices. They can be cards that plug into a collapsed backbone,
stand-alone devices (rack mount or desktop) or software that would run on a file server with 2 NICs. Routers
operate at the network layer. This chapter assumes that the network layer is IP (version 4), as this is by far
the most popular protocol. The concepts involved are similar to those behind other network layer protocols.
Purpose of Routers
The purpose of a router is to connect nodes across an Internetwork, regardless of the Physical Layer and Data
Link Layer protocol that is used. Routers are hardware and topology-independent. Routers are not aware of
the type of medium or frame that is being used (Ethernet, Token Ring, FDDI, X.25, etc.). Routers are aware
of the Network Layer protocol that's used (e.g. Novell's IPX, Unix's IP, XNS, Apple's DDP, and so on).
Routers operate on the OSI Model's Network Layer. The Internetwork must use the same Network Layer
protocol. Routers allow the transportation of the Network Layer PDU through the Internetwork, even though
the Physical and Data Link Frame size and addressing scheme may change.
Routing is a higher-level concept than layer 2 switching/bridging--you are further removed from the physical
details of the network. Any machine on a routed network has the same network layer address format (for
example, an IP address) whether it is communicating over an Ethernet, Token Ring, FDDI, or WAN link.
Data link layer addresses (for example, MAC addresses) are just unique identifying tags for a particular
network interface within a particular layer 3 network (they may also be globally unique--for example,
Ethernet addresses). Network layer addresses usually hold more information than this--they consist of two
parts: a network address and a host address. (For example, the IP address of my network interface card is
158.84.81.39, the network address is 158.84.81, and the host address is 39).
NOTE: Here the network address is taken to mean the whole of the number specifying the network (that is,
including any subnet address).
A bridge can only connect networks with the same (or very similar) data link layer protocols. A router
transcends this problem. It can connect any two networks, provided that the hosts use the same network layer
protocol.
Routers that only know Novell IPX (Internetwork Packet Exchange) will not forward Unix's IP (Internet Protocol) PDUs, and vice versa. Routers only see the Network Layer protocol that they have been configured
for. This means that a network can have multiple protocols running on it (e.g. SPX/IPX, TCP/IP, Appletalk,
XNS, etc.).
Consider a network in which Router #3 is a Novell SPX/IPX router; it only sees the Network Layer protocol IPX. This means that any TCP/IP PDUs will not pass through: the router does not recognize the PDUs and doesn't know what to do with them.
Routers #1 and #2 are TCP/IP routers, and they recognize only the IP protocol. This keeps SPX/IPX traffic off of "Segment 300". (The segment name is in quotation marks because TCP/IP uses a different network numbering scheme than IPX.)
Important Point: Routers allow network traffic to be isolated--or segmented--based on the Network Layer
Protocol. This provides a functional segmentation of the network.
Routers that can see only one protocol are called Protocol-Dependent Routers. Routers that can see two or more different protocols are called Multiprotocol Routers.
Router Addressing
Routers combine the Network Number and the Node Address to make Source and Destination addresses (in
routing Network Layer PDUs across a network). Routers have to know the name of the segment that they are
on, and the segment name or number where the PDU is going. They also have to know the Node Address:
MAC Address for Novell, and the IP address for TCP/IP.
For Novell's SPX/IPX (Sequenced Packet eXchange/Internetwork Packet eXchange), the Network Layer
PDU's address is composed of the Network Address (32 bit number) and the Host address (48 bit - MAC
address).
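As a small illustration of how those two pieces fit together, the sketch below formats an IPX-style internetwork address from a 32-bit network number and a 48-bit node (MAC) address. The values used are hypothetical.

# Sketch of how an IPX internetwork address combines the 32-bit network
# number with the 48-bit node (MAC) address, as described above.
def ipx_address(network: int, mac: bytes) -> str:
    assert network < 2**32 and len(mac) == 6
    return f"{network:08X}:{mac.hex().upper()}"

# Hypothetical values: network number 0x00001234, node address from the NIC.
print(ipx_address(0x00001234, bytes.fromhex("00A0C9112233")))
# -> 00001234:00A0C9112233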
Routing Protocols
Routing Protocols are a "sub-protocol" of the Network Layer Protocol. They deal specifically with the routing
of packets from the source to the destination (across an Internetwork). Examples of Routing Protocols are:
RIP, IGRP and OSPF.
RIP was one of the first routing protocols to gain widespread acceptance. It is described in RFC 1058, an Internet standard. RFC stands for Request for Comments; RFC 1058 is simply the 1,058th RFC published. Commercial NOS vendors, such as Novell, Apple, Banyan, and 3Com, use RIP (or a close variant of it) as the base routing algorithm for their respective protocol suites.
RIP is a distance vector algorithm. Routers maintain a detailed view of their locally attached network segments and only a summarized view of the rest of the network. The routers record the number of hops to each segment. A hop is one traversal of a router: pass through a router, and the hop count increases by 1.
The routers are updated every 30 seconds, when each router sends out a RIP broadcast. This advertisement
process is what enables RIP routing to be dynamic. Dynamic routers can change routing tables on the fly (as
the network configuration changes). By using the Hop Count information from their routing tables, routers
can select the shortest path (the least number of hops) to the destination. Apple uses RTMP (routing table
maintenance protocol); this adds a good, bad or suspect route status indicator, depending on the age of the
route information.
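A much-simplified sketch of that update process is shown below: each route learned from a neighbour costs one extra hop, and a shorter path always replaces a longer one. The table layout and names are illustrative only; a count of 16 hops is treated as unreachable, a limit discussed shortly.

# Simplified sketch of a RIP-style distance-vector update: a router adds one
# hop to every route a neighbour advertises and keeps whichever path is
# shorter. A hop count of 16 stands for "unreachable".
INFINITY = 16

def rip_update(my_table, neighbour, neighbour_table):
    """my_table / neighbour_table map network -> (hop count, next hop)."""
    for network, (hops, _) in neighbour_table.items():
        new_hops = min(hops + 1, INFINITY)
        current = my_table.get(network, (INFINITY, None))
        if new_hops < current[0]:
            my_table[network] = (new_hops, neighbour)
    return my_table

table = {"137.89.0.0": (1, "direct")}
advert = {"201.66.32.0": (2, "..."), "137.89.0.0": (3, "...")}
print(rip_update(table, "router-B", advert))
# keeps the 1-hop route to 137.89.0.0, learns 201.66.32.0 at 3 hops via router-B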
Novell adds ticks to the RIP algorithm: Ticks are dynamically assigned values that represent the delay
associated with a given route. Each tick is considered 1/18 of a second.
LAN segments are typically assigned a value of 1 tick. A T1 link may have a value of 5 to 6 ticks and a 56
Kbps line may have a value of 20 ticks. A larger number of ticks indicates a slower routing path. The three most common problems that can occur with RIP are shown below:
1. Routing loops: the router indicates that the shortest path is back the way the packet came from.
2. Slow Route Convergence: routers have delay timers that start counting after the RIP advertising packet is broadcast. This gives the routers time to receive the advertisements and formulate a proper routing table from the other routers. If the delay timer is too short, the routing table can be built from incomplete data, causing routing loops.
3. Hop Count Exceeded: the maximum usable hop count for RIP is 15. A hop count of 16 is classified as unreachable, which makes RIP unsuitable for large networks where paths longer than 15 hops are normal.
IGRP (Interior Gateway Routing Protocol) was created by Cisco to solve many of the problems with RIP, and it has been widely deployed on Cisco-based internetworks. IGRP is an enhanced distance vector protocol; it uses up to 5 metrics (conditions) to determine the best route, as shown below:
Bandwidth
Hop Count (Delay) - maximum of 255
Maximum Packet size
Reliability
Traffic (Load)
These routing metrics are much more realistic indicators (of the best routes) than simple hop counts.
OSPF is a link-state protocol: it defines several classes of routers that are linked together in a hierarchical routing model. Let's take a look at this model in the following paragraphs.
At the top of the hierarchy is the autonomous system boundary router, which connects to other autonomous systems (the Internet). Next are the backbone routers, which sit in the backbone, the highest area in the OSPF system. Area border routers are attached to multiple areas and run multiple copies of the routing algorithm, one per area. Last are the internal routers, which run a single routing database for one area.
Basically, by dividing the network into a routing hierarchy, both substantial reduction of routing update
traffic--and faster route convergence--result on a local basis. Each level has a smaller routing table, and less
to update.
Underlying the network layer is the data link layer. For the layers to interoperate, they need some "glue"
protocols. ARP (Address Resolution Protocol) is used to map network layer (layer 3) addresses to data link
layer (layer 2) addresses (see the description in the following section). RARP (Reverse Address Resolution
Protocol) is used to map layer 2 addresses to layer 3 addresses.
The most common use of ARP is to resolve IP addresses, though the protocol is defined in such a way that it
is independent of the network layer protocol. The most common data link layer is Ethernet. Accordingly, the
examples in the ARP and RARP sections are based on IP and Ethernet, though the concepts are identical for
use with other protocols.
Network layer addresses are an abstract mapping defined by the network administrator--the network layer
doesn't have to worry which data link layer it is running over. However, network interfaces can only
communicate with each other according to the layer 2 address, which is dependent on the network type. These
layer 2 (hardware) addresses are derived from the layer 3 address by the Address Resolution Protocol (ARP).
An ARP request is not necessary for every datagram sent--the responses are cached in the local ARP table,
which keeps a list of <IP address, hardware address> pairs. This keeps the number of ARP packets on the
network very low. ARP is a simple, low-maintenance protocol that raises few problems; trouble normally arises only when there are conflicting layer 3 addresses on the network.
If interface A wants to send a datagram to interface B, and it only has the IP address for B (B-IP), it must
first find the hardware address for B (B-hard). Interface A sends an ARP broadcast specifying B-IP and
requesting B-hard. Interface B receives the broadcast, and replies with a unicast to A, giving the correct B-
hard for B-IP.
Note that only interface B responds to the request, even though other interfaces on the network may have the
relevant information. This ensures that responses are correct and do not provide out-of-date information.
It is important to understand that ARP requests are sent out only for the next-hop gateway, not always for the
destination IP address. Thus, if interface A wants to send a datagram to interface B, but its routing table tells
it that traffic must pass through router C, it sends out an ARP request for router C's address, not for interface
B's address.
When an ARP packet is received, the receiving host follows a simple procedure. Note that the <IP address, hardware address> pair of the sender is inserted into the local ARP table in addition to a reply being sent; if A wishes to talk to B, then it is very likely that B also needs to talk to A.
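The exchange can be modelled in a few lines. In the toy sketch below, only the owner of the requested IP address answers, and the reply is cached so that later datagrams need no further ARP traffic; all addresses other than the 158.84.81.39 interface mentioned earlier are invented for the example.

# Toy model of the ARP exchange: A broadcasts a request for an IP address,
# only the owner replies, and A caches the <IP address, hardware address> pair.
arp_table_A = {}                       # A's cache: IP address -> MAC address
interfaces = {                         # hosts on the segment (illustrative)
    "158.84.81.40": "00:A0:C9:00:00:02",   # interface B
    "158.84.81.41": "00:A0:C9:00:00:03",   # host C
}

def arp_request(sender_ip, sender_mac, target_ip):
    # Every host sees the broadcast, but only the owner of target_ip replies.
    if target_ip in interfaces:
        return (target_ip, interfaces[target_ip])
    return None

reply = arp_request("158.84.81.39", "00:A0:C9:00:00:01", "158.84.81.40")
if reply:
    ip, mac = reply
    arp_table_A[ip] = mac              # cache it; no ARP needed next time
print(arp_table_A)                     # {'158.84.81.40': '00:A0:C9:00:00:02'}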
IP Address Conflicts
The most commonly seen error produced by ARP is caused by a conflicting IP address. This is where two
different stations claim to own the same IP address--IP addresses must be unique on any connected set of
networks.
IP address conflicts are apparent when two replies come in answer to an ARP request--each reply specifying
a different hardware address. This is a serious error, with no easy solution--which hardware address do you
send the datagrams to?
To avoid IP address conflicts, when interface A is first initialized it sends out an ARP request for its own IP
address. If no response is sent back, interface A can assume that the IP address is not in use. However,
suppose interface B is already using the IP address in question: B sends an ARP reply with the hardware
address B-hard. Interface A now knows that the IP address is already in use--it must not use the address and
must flag an error.
There is still a problem though. Suppose that host C had an entry for the disputed IP address, mapping it to
B-hard. On receipt of the ARP broadcast from interface A, host C updates its ARP table to map the address to A-hard. To correct such errors, interface B (the "defending" system)
sends out an ARP request broadcast for the IP address again. Host C now updates its ARP entry for the
disputed IP address to B-hard again. The network state is now back as before, but host C may have sent IP
datagrams intended for host B to host A by mistake while the ARP table was (briefly) incorrect. This is
unfortunate, but as IP does not guarantee delivery, this situation does not cause major problems.
The ARP cache table is a list of <IP address, hardware address> pairs, indexed by IP address. The table can
often be managed with the arp command. Common syntax for this command includes
Add a static entry to the cache table--arp -s <IP address> <hardware address>
Delete an entry from the cache table--arp -d <IP address>
Display all entries in the cache table--arp -a
Dynamic entries in the ARP cache table (that is, those that have not been manually added with arp -s) are
normally deleted after a period of time. This period is determined by the specific TCP/IP implementation,
but an entry would commonly be destroyed if unused for a fixed time period (for example, five minutes).
One typical use of a static ARP entry is to set up a standalone printer server. These small units can normally
be configured by way of Telnet, but first they need an IP address. There is no obvious way to feed them this
initial information, except by using the built-in serial port. However, it is often inconvenient to find an
appropriate terminal and serial cable, set up baud rates, parity settings, and so on.
Suppose we wish to set up a print server, P, with an IP address of P-IP, and that we know the print server's
hardware address to be P-hard. A static ARP entry is created on workstation A to map P-IP to P-hard. Any
IP traffic from workstation A to P-IP is now sent to P-hard, though the print server does not yet know its IP
address. We can now telnet to P-IP, which connects to the print server, and configure its IP address. Then
we can tidy up by deleting the static ARP entry.
It is often useful to configure the print server on one subnet, but use it on another. This is easy to achieve by
a process very similar to the one just described. The IP address of the print server on the
subnet it uses is P-IP. Allocate a temporary IP address, T-IP, on the subnet that you wish to configure the
print server on, and attach the print server to that subnet. On a workstation (A) connected to the configuring
subnet, create the static ARP entry mapping T-IP onto P-hard, and telnet to T-IP. Configure the print server
to use IP address P-IP. Move the print server to the subnet it will be used on, and tidy up by deleting the
static ARP entry.
Proxy ARP
It is possible to avoid configuring the routing tables on every host by using proxy ARP. This is particularly
useful where subnetting is being used, but not all hosts are capable of understanding subnetting (see the
section on subnetting later in this chapter).
The basic idea is that a workstation sends out ARP requests even for machines that are not on its own subnet.
The ARP proxy server (often the gateway) responds with the hardware address of the gateway. Proxy ARP
makes the management of host configurations much simpler. However, it increases network traffic (though
not significantly) and potentially requires a much larger ARP cache. An entry for each IP address off the
local subnet is created, all mapping to the gateway's hardware address. In the eyes of a workstation using
proxy ARP, the world is just one large physical network, with no routers in sight!
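The decision the proxy ARP server makes is a simple one, sketched below: if the requested address is off the local subnet, answer with the gateway's own hardware address. The subnet and MAC address shown are invented for illustration.

# Sketch of the proxy ARP decision made by a gateway: answer ARP requests
# for off-subnet addresses with the gateway's own hardware address.
import ipaddress

LOCAL_SUBNET = ipaddress.ip_network("137.89.15.0/24")
GATEWAY_MAC = "00:A0:C9:AA:BB:CC"

def proxy_arp_reply(target_ip: str):
    if ipaddress.ip_address(target_ip) in LOCAL_SUBNET:
        return None          # local host: let it answer for itself
    return GATEWAY_MAC       # off-subnet: the gateway answers on its behalf

print(proxy_arp_reply("137.89.15.20"))   # None
print(proxy_arp_reply("201.66.37.1"))    # 00:A0:C9:AA:BB:CC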
IP Addressing
In a routable network layer protocol, the protocol address must hold two pieces of information: the network
address and the host address. The most obvious way to store this information is in two separate fields. We
must cope with the largest possible case in both fields, perhaps allocating 16 bits for each field. Some
protocols (such as IPX) behave like this, and it works well for small- to medium-sized networks.
Another solution would be to keep the host address field small, perhaps allocating 24 bits for the network
address and just 8 for the host address. This would allow plenty of networks, but not many hosts on each
network. However, for networks with more than 256 (2^8) hosts, you could allocate multiple addresses. The
problem with this scheme is that the large number of networks created tends to place an intolerable load on
the network's routers.
IP packs the network address and host address together into one 32-bit field. Sometimes the host address
portion is short, sometimes it is long. This allows very efficient use of the address space, keeping IP addresses
short, and the total number of networks fairly low. There are two different ways of splitting the address back
into its two parts--class-based addressing and classless addressing. These are discussed in the following
section.
The distinction between hosts and gateways often causes some confusion. This is because of a shift in the
meaning of the term "host." As defined by the original RFCs (1122/3 and 1009):
A host is a device connected to one or more networks. It can send and receive traffic on any of these networks,
but it never passes traffic from one network to another.
A gateway is a device connected to more than one network. It selectively forwards traffic from one network
to another.
In other words, the terms host and gateway used to be mutually exclusive--computers were not generally
powerful enough to act as both a host and a gateway. The host was a computer that a user did some work on,
or that perhaps acted as a file server. Modern computers are powerful enough to both act as a gateway and
do useful work for a user; therefore, a more modern definition of a host might be the following:
❖ A host is a device connected to one or more networks. It can send and receive traffic on any of these
networks. It may function as a gateway, but this is not its sole purpose.
❖ A router is a dedicated gateway. The hardware is specially designed to allow the router to pass high
volumes of traffic, and with little delay for each packet (latency). However, a gateway can also be
a standard computer with multiple network interfaces, where the operating system's network layer
allows it to forward packets. Now that dedicated routing hardware is becoming less expensive, the
use of computers as gateways is becoming much less common. At a very small site with only a
cheap dial-up connection, a user's computer might be used as a nondedicated gateway.
Class-Based Addressing
When IP was first designed, the address was split into its composite parts according to the first byte of the address: a first byte of 1 to 126 indicates a class A address (8-bit network, 24-bit host), 128 to 191 a class B address (16-bit network, 16-bit host), and 192 to 223 a class C address (24-bit network, 8-bit host).
NOTE: Part of the first-byte range (224 to 239) is for multicast addresses, sometimes referred to as class D addresses. For the
sake of simplicity, these are not discussed here.
If you needed a large network you were given a class A address, but if you only had a few hosts, you were
given a class C address. For example, 10.0.0.0 is a class A network, 137.89.0.0 is class B, and 201.66.32.0 is class C.
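Because the split depends only on the first byte, it is easy to express; the short sketch below classifies an address the way the original scheme did.

# Class-based address interpretation: the first byte alone decides how the
# 32-bit address is split into network and host portions.
def address_class(ip: str) -> str:
    first = int(ip.split(".")[0])
    if 1 <= first <= 126:
        return "A (8-bit network, 24-bit host)"
    if first == 127:
        return "loopback"
    if 128 <= first <= 191:
        return "B (16-bit network, 16-bit host)"
    if 192 <= first <= 223:
        return "C (24-bit network, 8-bit host)"
    return "D/E (multicast or reserved)"

print(address_class("10.1.2.3"))       # A
print(address_class("137.89.15.88"))   # B
print(address_class("201.66.37.1"))    # C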
Subnetting
Although the class-based addressing system worked well for the Internet service provider, it was impossible
to do any routing inside a network. The intention was that a network would use layer 2 (bridging/switching)
to direct packets within a network. The lack of routing was a particular problem if you had a large class A
network, as bridging/switching on a large network becomes very difficult to manage.
The logical solution is to break down some larger networks into smaller segments, but this was not possible
within the original confines of the class-based addressing system. In the previous example, the network
address 137.89 is treated as a class B address, so it is not possible to route different parts of this network to
different sites.
To solve this problem, a new field called a subnet mask was introduced and associated with every address.
The subnet mask indicated which portion of the address was the network address, and which was the host
address (instead of deciding by the first byte).
In the subnet mask, binary 1 indicates a network address bit, and binary 0 indicates a host address bit. Thus
for the 137.89.15.88 example given earlier, the format would be an address of 137.89.15.88 with a subnet mask of 255.255.0.0.
The subnet mask given indicates that the first two bytes are the network address, and the second two bytes are the host address. Thus your traditional class addresses have subnet masks of 255.0.0.0 (class A), 255.255.0.0 (class B), and 255.255.255.0 (class C).
If you wish to use the 137.89.0.0 network address as a set of distinct class C-sized networks, an address such as 137.89.15.88 with a subnet mask of 255.255.255.0 would be used.
Breaking a network into subnetworks using a longer subnet mask (for example, 255.255.255.0 instead of
255.255.0.0) is called subnetting. Be aware that some very old software won't support subnetting, as it doesn't
understand subnet masks. For instance UNIX's routed routing daemon normally uses a routing protocol called
RIP version 1, which was designed before subnet masks.
Non-Byte-Aligned Subnetting
So far, I have only discussed subnet masks of 255.0.0.0, 255.255.0.0, and 255.255.255.0. These are referred
to as byte-aligned subnet masks, because they split the network and host portions on a byte boundary.
However, it is also possible (though slightly more difficult to work with) to split the address inside a byte
(using non-byte-aligned subnet masks), for instance 137.89.15.88 with a subnet mask of 255.255.255.240.
Now you have only 4 bits for the host address, giving 16 possible addresses within the network. One address
is reserved for the network itself, and one for the broadcast address, leaving a possible 14 hosts.
Note that the network address (137.89.15.80) no longer ends in the familiar 0 and the broadcast address
(137.89.15.95) no longer ends in the familiar 255. Although they don't look like the familiar network and
broadcast addresses, they are formed in exactly the same way (setting the host portion of the address to all
1s or all 0s, respectively). The subnet mask also looks unfamiliar, but it, too, is formed in exactly the same
way.
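You can check this arithmetic with Python's ipaddress module, which applies exactly the mask operation described above to the 137.89.15.88 example:

# The 137.89.15.88 example worked out with the standard ipaddress module.
import ipaddress

iface = ipaddress.ip_interface("137.89.15.88/255.255.255.240")
net = iface.network
print(net.network_address)    # 137.89.15.80  (host bits set to all 0s)
print(net.broadcast_address)  # 137.89.15.95  (host bits set to all 1s)
print(net.prefixlen)          # 28 network bits, leaving 4 host bits
print(list(net.hosts())[:3])  # 137.89.15.81, .82, .83 ... (14 usable hosts)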
This extension of the subnetting technique makes new sizes of network possible, including tiny networks for
point-to-point links (mask 255.255.255.252, giving 30 bits network, and 2 bits host), or medium-sized
networks (for example, mask 255.255.240.0, giving 20 bits network, and 12 bits hosts--4,096 possible hosts).
Humans would find the subnet mask system a lot easier to understand (and to read on a day-to-day basis) if the mask were just represented as the number of bits allocated to the host portion of the address (for instance, in the preceding example the mask 255.255.240.0 would be 12); modern CIDR prefix notation does much the same thing, but counts the network bits instead (/20). Unfortunately, history has handed us a system whose representation makes it easy for computers to do an AND operation, not for us to read.
You can soon learn to think in binary though!
Writing subnet masks in hexadecimal (base 16) rather than decimal (base 10) can also help greatly--a subnet
mask of FF.FF.F0.00 is easier to work with than 255.255.240.0, as it is easier to convert between hexadecimal
and binary than between decimal and binary.
If you are using DNS (domain name system) to map between host names and IP addresses, be slightly wary
of non-byte-aligned subnet masks. They may prevent you from delegating control of the reverse lookup
records, used to map from IP addresses to host names. DNS is designed to allow only a delegation split on
an IP address byte boundary (inside the in-addr.arpa. domain).
Supernetting
Supernetting is a very similar concept to subnetting--the IP address is split into separate network address and
host address portions according to the subnet mask. However, instead of breaking down larger networks into
several smaller subnets, you group smaller networks together to make one larger supernet. Imagine that I am
given a bank of 16 class C networks, ranging from 201.66.32.0 to 201.66.47.0--my whole network can be
addressed as 201.66.32.0 with a subnet mask of 255.255.240.0 (any address on the network has the same
initial 20 bits as 201.66.32.0--to address the network, set the host address to 0). Unfortunately it's not possible
to allocate totally arbitrary groups of addresses--a range of 16 class C networks from 201.66.71.0 to
201.66.86.0 doesn't have a single network address (try to find one!). Why is this? Given the required subnet
mask of 255.255.240.0, the host portion of the beginning of the address range is not 0: the third byte of 201.66.71.0 is 71 (binary 01000111), and its last four bits (0111) fall inside the host portion of the address.
Fortunately this isn't a real problem--given a sensible address allocation strategy, you can find suitable blocks
of addresses.
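The same ipaddress module confirms both halves of the example: the sixteen networks starting at 201.66.32.0 collapse into a single supernet, while a block starting at 201.66.71.0 does not.

# Checking the supernetting examples above with the ipaddress module.
import ipaddress

block = [ipaddress.ip_network(f"201.66.{n}.0/24") for n in range(32, 48)]
print(list(ipaddress.collapse_addresses(block)))
# [IPv4Network('201.66.32.0/20')]  -- one supernet covers all 16 networks

misaligned = [ipaddress.ip_network(f"201.66.{n}.0/24") for n in range(71, 87)]
print(list(ipaddress.collapse_addresses(misaligned)))
# several networks -- no single /20 network address covers this range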
If you want to split your network into multiple subnetworks of unequal size, you can use a variable length
subnet mask (VLSM). This slightly intimidating acronym just means that each of your subnetworks can have
a different length subnet mask. If you were splitting a company's network by department, some networks
might have 255.255.255.0 (for most departments), while others might have 255.255.252.0 (for a particularly
large department).
As the Internet has taken off, the number of hosts attached to the network has grown beyond all expectations.
Although there are still far fewer than 2^32 hosts connected directly to the Internet, there is a shortage of
addresses. RFC 1519, Classless Inter-Domain Routing (CIDR), was published in 1993 in an attempt to
address inefficiencies in the allocation of the address space.
CIDR is an attempt to extend the life span of IP v4; it does not address the eventual exhaustion of the whole
address space. IP v6 addresses the eventual exhaustion of the address space by using a 128-bit address rather
than a 32-bit address. However, implementing IP v6 is a mammoth task, which the Internet is not yet ready
for. CIDR gives us time to implement IP v6.
The class-based address system worked well. It provided a reasonable compromise between efficient address
usage and a low number of networks for routers to cope with. However, two major problems were caused by
the unexpected growth of the Internet:
The increased number of allocated networks meant that the number of entries in routing tables became
unmanageably large, and slowed down routers considerably. Much of the address space was being wasted by
the allocation policy--allocating inflexible blocks of 256 = 2^8 (class C), 65,536 = 2^16 (class B), or 16,777,216 = 2^24 (class A) addresses resulted in many wasted addresses. This has resulted particularly in a shortage of class B
addresses.
To solve the second problem, it is possible to allocate multiple smaller networks instead of one larger
network: for instance, multiple class C networks instead of one class B. Although this results in much more
efficient address allocation, it exacerbates the growth of routing tables (the first problem). Under CIDR,
addresses are assigned according to the topology of the network. This means that a consecutive group of
network addresses would be allocated to a particular service provider, allowing the whole group to be covered
by one (probably supernetted) network address.
For example, a service provider is given a bank of 256 class C networks, ranging from 213.79.0.0 to
213.79.255.0. The service provider allocates one class C address to each customer, but routing tables external
to the service provider know all of these routes by just one entry in the routing table--213.79.0.0 with a
network mask of 255.255.0.0.
This method obviously greatly reduces the growth of routing tables for each new address that is allocated.
Estimates given by the authors of the CIDR RFC (1519) indicate that if 90% of service providers used CIDR,
routing tables might grow by 54% over a 3-year period, as opposed to 776% growth without CIDR (these
figures assume CIDR is not in place at the start of the period).
If renumbering the existing address space were possible, the number of advertised routes that the Internet
backbone routers had to deal with could be massively reduced. Unfortunately, this is unlikely to be practical,
due to the huge amount of administrative effort involved.
Routing Tables
If a host has several network interfaces, how does it decide which interface to use for packets to a particular
IP address? The answer lies in the routing table. Consider the following routing table:
This example only covers hosts that are connected directly to you--what if the host in question is on a remote
network? If you are connected to network 73.0.0.0 by way of a router with an IP address of 201.66.37.254,
you can add an entry to the routing table:
This tells the machine to route packets for any hosts on the 73.0.0.0 network through 201.66.37.254--note
that there must be another entry in the table, telling the host how to send packets to 201.66.37.254! The G
(gateway) flag just means that this routing entry directs traffic through an external gateway. Similarly, a route
to a specific host through a gateway can be added, and it receives the H (host) flag:
This example covers all the basics of the routing table, apart from a few special entries:
The first of these is the loopback interface, for traffic from the host to itself. This is used for testing, and for
communications for applications that are designed to operate over IP but that happen to be communicating
locally. It is a host route to the special address 127.0.0.1 (the interface lo0 refers to a "fake" network card
internal to the IP stack).
The second entry is more interesting. To save having a route defined on the host to every possible network
on the Internet, a default route can be defined. If no other entry in the routing table matches the destination
address, the packet is sent to the default gateway (given in the default route). Most hosts in a simple setup
are connected by way of one interface card to a LAN, which has only one router to other networks. This
results in just three entries in the routing table: the loopback entry, the entry for the local subnet, and the
default entry (pointing to the router).
Overlapping Routes
The routes are said to be overlapping because all four include the address 1.2.3.4. So if I send a packet to
1.2.3.4, which route is chosen? In this case, it is the first route, through gateway 201.66.37.253; the route
with the longest (most specific) subnet mask is always chosen. Similarly, packets to 1.2.3.5 are sent by the
second route in preference to the third.
IMPORTANT: This rule applies only to indirect routes (those routing packets through gateways). Having
two interfaces defined on the same subnet is not legal in many implementations of software. The following
setup is normally illegal (though some software will attempt to load-balance over the two interfaces):
The policy on overlapping routes is extremely useful; it allows the default route to work as just a route with
a destination of 0.0.0.0, and a subnet mask of 0.0.0.0, rather than having to implement it as a special case for
the routing software.
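To make the longest-match rule concrete, here is a minimal Python sketch. The table below is illustrative rather than the exact table discussed above: the 201.66.37.253 gateway comes from the text, while the other gateway addresses are assumptions reused purely for illustration.

import ipaddress

# Illustrative overlapping routes (destination prefix -> gateway)
routes = {
    ipaddress.ip_network("1.2.3.4/32"): "201.66.37.253",  # host route (from the text)
    ipaddress.ip_network("1.2.3.0/24"): "201.66.37.254",  # subnet route (assumed gateway)
    ipaddress.ip_network("0.0.0.0/0"):  "201.66.39.254",  # default route (assumed gateway)
}

def select_route(destination):
    """Choose the matching route with the longest (most specific) mask."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routes if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return best, routes[best]

print(select_route("1.2.3.4"))   # the /32 host route wins
print(select_route("1.2.3.5"))   # the /24 route wins over the default
print(select_route("9.9.9.9"))   # only the 0.0.0.0/0 default matches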
Looking back at CIDR, let's take the preceding example where a service provider is given a bank of 256 class
C networks, ranging from 213.79.0.0 to 213.79.255.0. Routing tables that are external to the service provider
know all of these routes by just one entry in the routing table--213.79.0.0, with a network mask of
255.255.0.0.
But suppose that one customer moves to a different service provider. The customer had the network address 213.79.61.0--must it now get a new network address from the new service provider's range of addresses? That would mean renumbering every machine in the organization, changing every DNS entry, and so on.
Fortunately, there is an easy solution. The old service provider keeps the route 213.79.0.0 (with subnet mask
255.255.0.0), while the new service provider advertises the route 213.79.61.0 (with subnet mask
255.255.255.0). As the new route has a longer subnet mask than the old service provider's route, it overrides
the old route.
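The same longest-match behaviour resolves the provider-change case. A short sketch (the destination host address is arbitrary):

import ipaddress

# The old provider's aggregate and the new provider's more specific route
old_route = ipaddress.ip_network("213.79.0.0/16")
new_route = ipaddress.ip_network("213.79.61.0/24")

destination = ipaddress.ip_address("213.79.61.7")   # arbitrary host on the moved network
matching = [net for net in (old_route, new_route) if destination in net]

# The route with the longer mask (the new provider's /24) is preferred
print(max(matching, key=lambda net: net.prefixlen))   # 213.79.61.0/24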
Static Routing
Looking back at the routing table that you have been building up, there are now six entries in it. These are
listed next, and a diagram of the network is given in Figure 9.9:
NOTE: A routing table can usually be listed like this by using the command netstat -rn. See your vendor's documentation for netstat.
How did these entries get there? The first one is added by the network software when the routing table is
initialized. The second and third are created automatically when the network interface cards are bound to
their IP addresses. However, the last three must be added specifically. On a UNIX system, this is done by
issuing the route command, either manually by a user, or by the rc scripts upon bootup.
All these methods involve static routing. Routes are generally added on bootup, and the routing table remains
unchanged, unless manual intervention occurs.
Routing Protocols
Both hosts and gateways can use a technique called dynamic routing. This allows the routing table to be
automatically altered if, for instance, a router fails. Another router could be used instead, without user
intervention, providing a much more resilient system.
Dynamic routing requires a routing protocol, which adds and deletes entries from the routing table. The
routing table still works the same way as in static routing, but entries are added and removed automatically
rather than manually.
There are two types of routing protocols: interior and exterior. Interior protocols route inside an autonomous
system (AS), while exterior protocols route between autonomous systems. An autonomous system is a
network normally under one administrative control, perhaps by a large company or a university. Small sites
tend to be part of their Internet service provider's AS.
Only interior protocols are discussed here; few people ever have to deal with (or have even heard of!) exterior
protocols. The most common exterior protocols are EGP (Exterior Gateway Protocol) and BGP (Border
Gateway Protocol). BGP is the newer protocol, and it is slowly replacing EGP.
ICMP Redirects
ICMP is not normally considered to be a routing protocol, but ICMP redirects act in much the same way as
a routing protocol, so I'll discuss them here. Suppose that you have a routing table with the six entries given
earlier. A packet is sent to the host 201.66.43.33. Looking through the table, this does not match any route except the default route, so the packet is sent to the default router. However, that router has full knowledge of the network, and knows that all packets for the 201.66.43.0 subnet should go through 201.66.39.254, another router on the host's local network. Accordingly, it forwards the packet to the appropriate router. But it would have been much more efficient if the host had sent the packet straight to 201.66.39.254.
The router can instruct the host to use a different route by sending an ICMP redirect. The router knows that
there is a better route, because it is sending the packet back out on the same interface it came in on. Though
the router knows that the whole of the 201.66.43.0 subnet should go by way of 201.66.39.254, it normally
only sends an ICMP redirect for a particular host (in this case 201.66.43.33). The host creates a new entry in
the routing table:
Notice the D (redirect) flag--this is set on all routes created by an ICMP redirect. In the future, all packets
will be sent by the new route.
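The following Python fragment is only a conceptual sketch of what the host's IP stack does with the redirect--it installs a host route flagged D--and is not a real routing implementation; the table structure and the "default-router" placeholder are assumptions.

import ipaddress

# Toy routing table: destination network -> (gateway, flags)
routing_table = {
    ipaddress.ip_network("0.0.0.0/0"): ("default-router", "UG"),  # placeholder gateway
}

def apply_redirect(destination_host, better_gateway):
    # An ICMP redirect normally installs a host route; H = host, D = redirect
    host_route = ipaddress.ip_network(destination_host + "/32")
    routing_table[host_route] = (better_gateway, "UGHD")

apply_redirect("201.66.43.33", "201.66.39.254")
print(routing_table[ipaddress.ip_network("201.66.43.33/32")])
# ('201.66.39.254', 'UGHD') -- subsequent packets use the better router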
RIP
RIP (Routing Information Protocol) is a simple interior routing protocol that has been around for many years and is widely implemented (the UNIX routed daemon uses RIP). It is a distance-vector algorithm, which means that its routing decisions are based
purely upon the number of "hops" between two points. Traversing a router is considered to be one hop.
Both hosts and gateways can run RIP, although hosts only receive information; they do not send it.
Information can be specifically requested from a gateway, but it is also broadcast every 30 seconds in order to keep routing tables current. RIP uses UDP port 520 to exchange information between hosts and gateways. The information passed between gateways is used to build up a routing table. The route chosen by RIP
is always the one with the shortest number of hops to the destination.
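A minimal sketch of the distance-vector idea follows; the neighbour addresses and advertised hop counts are invented purely for illustration.

# Hypothetical advertisements received from two neighbouring gateways,
# each reporting its own hop count to the network 73.0.0.0.
INFINITY = 16                      # RIP treats 16 hops as unreachable

advertisements = {
    "201.66.37.254": 3,            # neighbour A is 3 hops from 73.0.0.0
    "201.66.39.254": 1,            # neighbour B is 1 hop from 73.0.0.0
}

best_gateway, best_metric = None, INFINITY
for gateway, reported_hops in advertisements.items():
    metric = reported_hops + 1     # add one hop for the link to that neighbour
    if metric < best_metric:
        best_gateway, best_metric = gateway, metric

print(best_gateway, best_metric)   # 201.66.39.254 with a metric of 2 hops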
RIP version 1 works reasonably well on simple, fairly small networks. However, it exhibits several problems on larger networks; some of these are rectified in RIP v2, but others are inherent in its design. In the following discussion, points applicable to both versions are referred to simply as RIP, while
RIP v1 or RIP v2 refer to the specific versions.
RIP doesn't have any concept of link quality; all links are considered to be the same. Thus a low-speed serial line is considered to be as good as a high-speed fiber-optic link. RIP gives preference to the route with the fewest hops; thus, when given a choice between crossing two high-speed links (two hops) or a single low-speed serial line (one hop), RIP will choose the latter. RIP also has no concept of the traffic levels on a link; given a choice between two
Ethernet links, one of which is very busy, and one of which has no traffic at all, RIP will quite happily use
the busier link.
The maximum hop count that RIP can represent is 15; any route longer than this is considered to be unreachable. Thus, on very large autonomous systems, where the number of hops on a useful route may exceed 15, RIP is impractical.
RIP v1 does not support subnetting; the subnet mask is not transmitted with each route. The method for
determining the subnet mask for each given route varies from implementation to implementation. RIP v2
corrects this shortcoming.
RIP updates are only sent every 30 seconds, so information about the failure of a link can take some time to
propagate through a large network. The time for routing information to settle down to a stable state can be
even longer, and routing loops can occur during this period of change. We can conclude that RIP is a simple
routing protocol, with some restrictive limitations, especially in version 1. However, it is often the only choice
for particular operating systems.
Summary
ARP (Address Resolution Protocol) is used to map IP addresses to hardware addresses (MAC addresses). It
is a transparent protocol that is normally only seen by the user when an IP address conflict occurs. In special
situations, the ARP cache table can be controlled manually via the arp command. An IP address splits into
two parts: the network part and the host part. How the address is split used to depend on the network class,
given by the first byte of the address. Modern implementations of IP hold an extra field called the subnet
mask, which is used to determine how the address is split. This greatly enhances the functionality of IP
routing, but also adds a lot of complexity.
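As a quick recap of how the subnet mask splits an address, here is a minimal Python sketch (the address and mask are purely illustrative):

import ipaddress

# An interface address with its subnet mask
iface = ipaddress.ip_interface("201.66.37.130/255.255.255.192")

print(iface.network)                                 # network part: 201.66.37.128/26
print(int(iface.ip) & int(iface.network.hostmask))   # host part: 2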
Routing tables can be static or dynamic. Static routes are controlled manually, or by a sequence of commands
in a bootup script. Dynamic routes are controlled by a daemon running a routing protocol, such as RIP or
OSPF. Though, strictly speaking, ICMP isn't a routing protocol, it can still alter the routing tables in response
to a redirect message.
Gateways
Gateways were once a readily understood part of the network. In the original Internet, the term gateway
referred to a router. The router was the only tangible sign of the cyberworld beyond the local domain. This
"gateway" to the unknown was (and still is) capable of calculating routes and forwarding packetized data
across networks spanning far beyond the visible horizon of the originating local area network (LAN). Thus,
it was regarded as the gateway to the Internet.
Over time, routers became less of a mystery. The emergence and maturation of corporate IP-based wide area
networks (WANs) witnessed the proliferation of routers. Technological innovation, too, bred even more
familiarity. The routing function can now reside in servers and even in network switching hubs. The original
"gateway" no longer held the same mystique. Instead, the router grew into a multipurpose network device
that did everything from segmenting LANs into smaller segments, to interconnecting related LANs in private
WANs, to interconnecting private WANs with the Internet. Thus the router lost its synonymity with the term
gateway.
The term gateway, however, has lived on. It has been applied and reapplied so frequently, and to so many
different functions, that defining a gateway is no longer a simple task. Currently, there are three main
categories of gateways:
1. Protocol gateways
2. Application gateways
3. Security gateways
The only commonality remaining is that a gateway functions as an intermediary between two dissimilar
domains, regions, or systems. The nature of the dissimilarity you are attempting to overcome dictates the
type of gateway that is required.
Protocol Gateways
Protocol gateways usually convert protocols between network regions that use dissimilar protocols. This
physical conversion can occur at Layer 2 of the OSI Reference Model (the Data Link Layer), Layer 3 (the Network Layer), or between Layers 2 and 3. Two types of protocol gateways do not provide a conversion
function: security gateways and tunnels.
Security gateways that interconnect technically similar network regions are a necessary intermediary because
of logical dissimilarities between the two interconnected network regions. For example, one might be a
private WAN and the other a public one, like the Internet. This exception is discussed later in this chapter,
under the heading "Combination Filtration Gateways." The remainder of this section focuses on protocol
gateways that perform a physical protocol conversion.
Tunneling Gateways
Tunneling is a relatively common technique for passing data through an otherwise incompatible network region. Data packets are encapsulated with framing that is recognized by the network that will be transporting them. The original framing and formatting are retained, but are treated as data. Upon reaching its destination, the recipient host unwraps the packet and discards the wrapper, restoring the packet to its original format. For example, IPv4 packets can be wrapped in IPv6 by Router A for transmission through an IPv6 WAN and delivery to an IPv4 host; Router B removes the IPv6 wrapper and presents the restored IPv4 packet to the destination host.
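The sketch below illustrates only the encapsulate-and-strip idea; the dictionary "header" fields are stand-ins, and no real IPv6 processing is performed.

# Conceptual tunnelling: the original packet rides as opaque payload inside
# an outer packet, and the wrapper is discarded at the far end.
def encapsulate(ipv4_packet: bytes, outer_src: str, outer_dst: str) -> dict:
    return {"outer_src": outer_src,        # Router A's tunnel endpoint
            "outer_dst": outer_dst,        # Router B's tunnel endpoint
            "payload": ipv4_packet}        # original packet treated as data

def decapsulate(outer_packet: dict) -> bytes:
    return outer_packet["payload"]         # wrapper discarded, original restored

original = b"\x45\x00example IPv4 packet bytes"
tunnelled = encapsulate(original, "2001:db8::a", "2001:db8::b")
assert decapsulate(tunnelled) == original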
Tunneling techniques may be used for just about every Layer 3 protocol, from SNA to IPv6, as well as many
individual protocols within those aforementioned suites. As beneficial as tunneling can be to overcoming the
limitations of any given network topology, it has its dark side. The very nature of tunneling consists of hiding
unacceptable packets by disguising them in an acceptable format. Quite simply, tunneling can be used to
defeat firewalls by encapsulating protocols that would otherwise be blocked from entry to a private network
region.
Proprietary Gateways
A myriad of proprietary gateway products have been available to bridge the gap between legacy mainframe
systems and the emerging distributed computing environment. The typical proprietary gateway connected
PC-based clients to a protocol converter at the edge of the LAN. This converter then provided access to
mainframe systems using an X.25 network. The gateway in Figure 10.2 demonstrates tn3270 emulation
sessions from client PCs to the gateway. The gateway dumps the IP sessions onto an X.25 WAN for transport
to the mainframe.
These gateways were usually inexpensive, single-function circuit boards that needed to be installed in a LAN-
attached computer. This kept the cost of the gateway to a minimum, and facilitated technology migrations.
Layer 2 protocol gateways provide a LAN-to-LAN protocol conversion. They are usually referred to as
translating bridges rather than protocol gateways. This type of conversion may be required to permit
interconnection of LANs with different frame types or clock rates.
Frame Mismatches
Local area networks that are IEEE 802-compliant share common data-link elements, such as 48-bit MAC addressing, but their frame structures and media access mechanisms make them unable to interoperate directly. A translating bridge takes advantage of these Layer 2 commonalities, such as the MAC address, and enables interoperability by providing on-the-fly translation of the dissimilar portions of the frame structures.
First-generation LANs required a separate device to provide translating bridges. The current generation of
multiprotocol switching hubs usually provides a high-bandwidth backplane that functions as a translating
bridge between dissimilar frame types.
The automated behind-the-scenes nature of modern-day translation bridging has all but obscured this aspect
of protocol conversion. Discrete translation devices are no longer required. Instead, the multitopology
switching hub functions innately as a Layer 2 protocol converting gateway.
An alternative to using a Layer 2-only device like a translating bridge or multitopology switching hub would
be to use a Layer 3 device: a router. Routers have long established themselves as a viable collapsed LAN
backbone. Given that routers interconnect LANs to the WAN, they typically support standard LAN
interfaces. Configured properly, a router can easily provide the translation between mismatched frame types.
The drawback to this approach is that using a Layer 3 device, a router, requires table look-ups. This is a
software function. Layer 2-only devices like switches and hubs operate in the silicon of the device's hardware,
and are able to operate significantly faster.
Many of the older LAN technologies have been treated to an upgraded transmission rate. For example, IEEE
802.3 Ethernet is now available in 10Mbps and 100Mbps versions, and soon will be supported in a 1Gbps
version, too. The frame structures and interframe gaps are identical. The primary differences lie in their physical layers and, consequently, in their media-access mechanisms. Of these, the transmission rate is the most obvious.
Token Ring, too, has been upgraded to operate at higher transmission rates. The original Token Ring
specification operated at 4Mbps. Contemporary versions transmit at 16Mbps. FDDI, a 100Mbps LAN,
descended directly from Token Ring and is frequently used as a backbone transport for Token Ring LANs.
These disparities in clock rates of otherwise identical LAN technologies require a mechanism to provide a
speed-buffering interface between two otherwise compatible LANs. Many of today's multitopology, high-
bandwidth switching hubs provide a robust backplane that can buffer speed mismatches.
The current generation of multitopology LAN switching hubs can provide internal speed buffering for
different transmission rate versions of the same LAN technology. They can also provide Layer 2 frame
conversion for different 802-compliant LANs.
Routers, too, can buffer transmission rate differences. They offer an advantage over switching hubs in that
their memory is expandable. This memory buffers incoming and outgoing packets long enough for the router
to determine which, if any, of the router's access list permissions apply, and to determine the next hop. This
memory can also be used to buffer the speed mismatches (see Figure 10.7) that may exist between the various
network technologies that are internetworked by the router.
Application Gateways
Application gateways are systems that translate data between two dissimilar formats. Typically, these
gateways are intermediate points between an otherwise incompatible source and destination. The typical
application gateway accepts input in one format, translates it, and ships the output in a new format, as
shown in Figure 10.8. The input and output interfaces can either be separate or use the same network
connection.
A single application can have multiple application gateways. For example, electronic mail can be
implemented in a wide variety of formats. Servers that provide electronic mail may be required to interact
with other mail servers, regardless of their format. The only way to do this is to support multiple gateway
interfaces.
Application gateways can also be used to connect LAN clients to external data sources. This type of gateway
provides for local connectivity to an interactive application. Keeping the application's logic and executable code on the same LAN as the client base avoids traversing lower-bandwidth, higher-latency WANs. This, in turn,
enables better response time to clients. The application gateway would then ship an I/O request to the
appropriate networked computer that housed the data requested by the client. The data is fetched and
reformatted as needed for presentation to the client.
This section does not conduct an exhaustive review of all the potential application gateway configurations,
but these few examples should adequately convey the network ramifications of application gateways. They
usually represent a point in the network at which traffic aggregates. To adequately support such traffic points,
an appropriate combination of network technologies and LAN and/or WAN topologies is required.
Security Gateways
Security gateways are an interesting mix of technologies that are important, and distinct, enough to warrant
their own category. They range from protocol-level filtration to fairly sophisticated application-level filtration. The three main mechanisms are packet filters, circuit-level gateways, and application-level gateways.
NOTE: Only one of these three mechanisms is a filter. The remainder are gateways.
These three mechanisms are frequently used in combination. A filter is the screening mechanism(s) used to
discriminate between packets that have legitimate need to access the specified destination port, and spurious
packets that represent an unacceptable level of risk. Each has its own capabilities and limitations that must
be carefully evaluated against the security requirements.
Packet Filters
Packet filtration is the most basic form of security screening. Routing software enables the establishment of
permissions, per packet, based on the packet's source address, destination address, or port number. Filtering
on well-known port numbers provides the ability to block or enable specific services and protocols, such as FTP, rlogin, and so on.
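The following sketch shows the kind of per-packet decision a filter makes; the rule set, addresses, and ports are illustrative, and real access lists are considerably richer.

import ipaddress

# Illustrative access list: first matching rule wins, default is deny.
RULES = [
    # (source prefix, destination prefix, destination port, action)
    ("0.0.0.0/0", "201.66.37.0/24", 25,   "permit"),   # allow inbound mail
    ("0.0.0.0/0", "0.0.0.0/0",      None, "deny"),     # deny everything else
]

def filter_packet(src, dst, dport):
    for rule_src, rule_dst, rule_port, action in RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(rule_src)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(rule_dst)
                and (rule_port is None or rule_port == dport)):
            return action
    return "deny"

print(filter_packet("198.51.100.9", "201.66.37.10", 25))   # permit
print(filter_packet("198.51.100.9", "201.66.37.10", 23))   # deny (telnet blocked)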
Filtration can be performed on incoming and/or outgoing packets. Implementing filtering at the network layer
means that a preexisting, general-purpose machine (the router) can provide some security screening function
for all applications traversing the network.
As a resident component of the router, this filter is available for use free of charge in any routed network.
This should not be misconstrued as a preexisting panacea! Packet filtration suffers from multiple
vulnerabilities. It is better than no filtering, but not by much.
NOTE: The term router is used logically to describe a network function. The actual device performing that
function may be a router or a host.
Packet filters can be deceptively difficult to implement well, particularly if your security requirements are
poorly defined and lack critical detail.
This filter is also defeated remarkably easily. A packet filter evaluates each packet and makes a "go/no go"
determination based solely on the packet's header information relative to the router's programmed access
lists. This technique suffers from several potential vulnerabilities.
First and foremost, it is directly dependent on the router administrator to correctly program the desired set of
permissions. Typographical errors, in this situation, are hardly benign. They create holes in one's perimeter
defenses that require few, if any, special skills to exploit.
Even if the administrator programmed the set of permissions accurately, the logic behind those permissions
must be flawless. Though it may seem trivially easy to program a router, developing and maintaining a
complex, lengthy set of permissions can be quite cumbersome. The day-to-day change in a networked
computing environment must be understood and assessed relative to the firewall's set of permissions. New
servers that were not explicitly protected at the firewall may find themselves vulnerable to attack.
Over time, access-list look-ups can degrade the speed with which routers forward packets. As soon as a router
receives a packet, it must identify the next hop for that packet to reach its specified destination. This must be
accompanied by another CPU-intensive task: checking the access list to determine if it is permitted to access
its destination. The longer the access list, the more time this process will take.
The second vulnerability of packet filtration is that it accepts the packet header information as valid, and has
no way of authenticating the packet's origin. Header information is easily falsified by network-savvy hackers.
This falsification is commonly known as spoofing.
The myriad weaknesses of packet filtration leave it ill-prepared to adequately defend your networked
computing resources. It is best used in combination with other, more sophisticated filtering mechanisms,
rather than used to the exclusion of other mechanisms.
Circuit Gateways
Circuit-level gateways are ideal for protecting outbound requests that originate in a private, secured network
environment. The gateway intercepts the TCP request, or even certain UDP requests. It then retrieves the
requested information on behalf of the originator. A proxy server, for example, accepts requests for information stored
on the World Wide Web and fulfills those requests on behalf of the originator. In effect, the gateway acts like
a wire that links the originator with the destination, without exposing the originator to the risks associated
with traversing an unsecured network region.
Proxying requests in this manner simplifies the management of border gateway security. If access lists are
implemented, all outgoing traffic can be blocked, except for the proxy server. Ideally, this server would have
a unique address and not be a part of any other internally used network address range. This would absolutely
minimize the amount of information that would otherwise be inadvertently, and subtly, advertised throughout
the unsecured region. Specifically, only the network address of the proxy server would become known, and
not the network addresses of each network-connected computer in the secured domain.
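Here is a toy sketch of that idea; fetch_from_outside is a stand-in for the real outbound connection, and the proxy address is an assumption.

PROXY_ADDR = "203.0.113.1"    # assumed stand-alone address for the proxy server

def fetch_from_outside(url, source_addr):
    # Stand-in for the proxy's real outbound connection; it just records
    # which address the outside world sees as the requester.
    return f"contents of {url} (requested by {source_addr})"

def proxy_request(client_addr, url):
    # The client's internal address never crosses the border; only the
    # proxy's address is exposed to the unsecured network.
    return fetch_from_outside(url, source_addr=PROXY_ADDR)

print(proxy_request("10.1.2.3", "http://example.com/"))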
Application gateways are almost the polar opposite of packet filtration. Packet filtration provides general-purpose protection of all traffic that crosses the network-level packet-filtering device. Application gateways place
highly specialized application software at each host to be protected. This avoids the traps of packet filtration
and enables tighter security per host.
One example of an application gateway is the virus scanner. This specialized software has become a staple
of desktop computing. It is loaded into memory at boot time and stays resident in the background. It
continuously monitors files for known viruses, or even altered system files. Virus scanners are designed to
shield the user from the potential damage of a virus before damage can be inflicted.
This level of protection would be virtually impossible to implement at the network layer. It would require
examining the contents of each and every packet, authenticating its source, determining the appropriate
network path, and determining whether the contents were as purported or spurious. This process would incur
an unsupportable level of overhead that would severely compromise network performance.
Application gateways offer the ability to create copious logs of all inbound and outbound traffic. Residing in
a host also affords access to CPU, disk, RAM, and other useful resources that can be applied.
The principal limitation of application gateways is that they are applied on a per-host or per-application basis.
In a host and/or application-rich environment, this can become quite expensive. Economics should dictate,
however, that the most important applications are protected. Network-level security gateways, conversely,
apply protection almost equally with a broad brush.
Combination Filtration Gateways
Gateways that utilize a combination filtration approach typically provide a fairly tight set of access controls
by implementing redundant, overlapping filters. These can include any combination of packet-, circuit-, and
application-level mechanisms.
One of the more common implementations of such a security gateway is as a network sentry that guards the
point(s) of ingress and egress along the edges of a private network domain. Such gateways are more
commonly referred to as border gateways or firewalls. This critical function often requires multiple filtration
techniques to develop an adequate defense. A typical arrangement is a two-component security gateway: a router and a
processor. In combination, they can provide protocol-, circuit-, and application-level protection.
This specialized form of gateway does not necessarily provide a conversion function, as do the other types
of protocol gateways. Given that border gateways are used at the borders of one's network, they are
responsible for regulating both ingress and egress traffic. Ostensibly, both the internal and external WANs
linked by the gateway will be using the Internet Protocol (IP). Thus, no conversion should be necessary: the
two are directly compatible. Filtration, however, becomes critical.
The reasons for protecting a network from unauthorized, external access are obvious. It's done for the same
reason that corporate employees are frequently issued identification badges and/or access cards. It provides
a mechanism for differentiating between legitimate members of the corporation who require access to its
facilities, and pretenders whose motives are assumed to be suspicious. Any inbound attempts to access
networked computing resources must be evaluated to determine the authenticity of the request.
The reasons for regulating outbound access may not be quite as obvious. Under some circumstances, it may
be necessary to provide a filtration of outbound packets. For example, the proliferation of browsers among
the user base may result in a significant increase in WAN traffic. If left unregulated, browsing, newscasting,
or any other Web-based traffic could easily compromise the WAN's ability to transport other applications.
Thus, it may be necessary to block this form of traffic, either in whole or in part.
IP (the dominant internetworking protocol) is an open protocol. It was designed to facilitate communication
across network domains. This is both its primary strength and its greatest weakness. Providing any
interconnectivity between two IP WANs, in essence, creates one big IP WAN. The sentry left to guard one's
network borders, the firewall, is tasked with discriminating between legitimate internetworking traffic and
spurious traffic that can't be trusted.
Implementation Considerations
Implementing a security gateway is not a task to be taken lightly. Success depends upon careful definition of requirements, thorough planning, and flawless implementation. The first task must be establishing
a comprehensive set of rules that define an understood and accepted compromise between security and cost.
These rules constitute the security policy.
This policy can be lax, severe, or anything in between. On one extreme, a security policy can start with the
basic premise that everything is allowable. Exceptions are expected to be few and manageable. These
exceptions are explicitly added to the security infrastructure. This policy is easy to implement, requires almost
no forethought, and guarantees that even the amateurs will find a way around the minimal protection.
The other extreme is almost draconian. This policy requires that all connectivity across the network is
explicitly denied. This is relaxed, carefully and intentionally, to accommodate the user community's access
requirements. Only these requirements are permitted. This approach is difficult to implement, won't win any
popularity contests with the users, and can be extremely expensive to maintain. It will, however, provide the
intangible benefit of a secured network. From the "net.police" perspective, this is the only acceptable security
policy.
In between these extremes lies an infinite series of compromises between ease of implementation and use,
and cost of maintenance. Most implementations will fall into this broad category of compromise, either
intentionally or by accident. The right balance requires careful evaluation of the risks and costs.
Summary
As networked computing continues to evolve, it is likely that more and more mission-critical applications
will find themselves internetworked in an open network environment. To the extent that such applications
may continue to rely upon disparate protocols, protocol conversion gateways will be essential. The increased
reliance on an open internetworking protocol for support of distributed computing also directly increases the
need for security gateways. All forms of security gateways, application, packet, and circuit level, can provide
much needed protection.
Regardless of their form and implementation, gateways are indispensable adjuncts to any network. Properly
selected and implemented, gateways are one of the keys to unleashing the potential of high-performance
networks.
Brouters
Brouters are protocol-dependent devices. When a brouter receives a frame to be forwarded to a remote segment, it checks to see if it recognizes the Network layer protocol. If the brouter does, it acts like a router and finds the shortest path. If it doesn't recognize the Network layer protocol, it acts like a bridge and forwards the frame to the next segment.
The key advantage of brouters is the ability to act as both a bridge and a router. A brouter can replace separate bridges and routers, thus saving money--provided, of course, that it can satisfactorily accomplish both functions.
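A minimal sketch of the forwarding decision (the protocol names and frame structure are illustrative):

# Network-layer protocols this hypothetical brouter knows how to route
ROUTABLE_PROTOCOLS = {"IP", "IPX"}

def handle_frame(frame):
    if frame["network_protocol"] in ROUTABLE_PROTOCOLS:
        return "route: forward along the shortest path to the destination network"
    return "bridge: forward the frame unchanged to the next segment"

print(handle_frame({"network_protocol": "IP"}))        # routed
print(handle_frame({"network_protocol": "NetBEUI"}))   # not routable, so bridged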
Repeaters
Repeaters are physical hardware devices whose primary function is to regenerate the electrical signal by:
❖ Reshaping the waveform
❖ Amplifying the waveform
❖ Retiming the signal
Purpose of a Repeater
The purpose of a repeater is to extend the LAN Segment beyond its physical limits (as defined by the Physical
Layer's Standards: e.g. Ethernet is 500m for 10Base5). A LAN Segment is a logical path, such as the logical
bus used by all 802.3 Ethernet types. A LAN Segment is given an identification number, called a Segment
Number or Network Number, to differentiate it from other segments.
Typically, repeaters are used to connect two physically close buildings together (when they are too far apart
to just extend the segment). They can be used to connect floors of a building that would normally surpass the
maximum allowable segment length. Note: for large extensions, as in the above example, two Repeaters are
required. For shorter extensions, only one Repeater may be required.
Repeaters work only on the same type of Physical Layer: Ethernet-to-Ethernet, or Token Ring-to-Token
Ring. They can connect 10Base5 to 10BaseT because they both use the same 802.3 MAC layer.
You can run into problems with the transfer rate (1 Mbps vs. 10 Mbps) when you connect 1Base5 to 10BaseT.
A repeater cannot connect Token Ring to Ethernet because the Physical Layer is different for each network
topology.
The MAC Layer Address is used to identify the Network Card to the Network. The Repeater is transparent to both sides of the segment, and both sides can "see" all the MAC addresses (regardless of which side they are on). This means that any network traffic on Floor 1 will also appear on Floor 5, and vice versa. Nodes A & B could be furiously exchanging files; this network traffic would also appear on Floor 1.
Repeaters don't provide isolation between segments (there is only one collision domain).
Because Repeaters provide no isolation between segments, and the repeater is transparent to both sides of the
segment, both sides of the repeater appear as 1 long segment. The Network Number, or Segment Number, is
the same on both sides of the Repeater.
When using repeaters, make sure that the overall propagation delay does not exceed the Physical Layer
Standard that is being used. Repeaters will also add a propagation delay to the signal that is being repeated.
Check that rules, such as the 5-4-3 Rule for IEEE 802.3, are not broken. For instance, check--for XNS
Ethernet--that a maximum of only 2 Repeaters are located between any 2 nodes.
You are allowed to parallel Segments using multiport repeaters. Multiport repeaters have several
inputs/outputs. Notice that all floors have the same Segment Number. You are not allowed to create a loop
between two segments (e.g. by using two repeaters).
Fiber Optic Repeaters join 2 segments together with a fiber optic link. The Transfer rate is not changed
through the fiber. The advantages are noise immunity and longer distances. Segments can be joined up to
3000m apart, and still be within the propagation delay specification for the Physical Layer. Two fiber optic
repeaters are required (one at each end of the fiber).
Network Interface Adapters
The network interface adapter (called the NIC when installed in a computer's expansion slot) is the component that provides the link between a computer and the network of which it is a part. Every computer must have an adapter that connects to the system's expansion bus and provides an interface to the network medium.
Some computers have the network interface adapter integrated into the motherboard, but in most cases the
adapter takes the form of an expansion card that plugs into the system's Industry Standard Architecture (ISA),
Peripheral Component Interconnect (PCI), or PC Card bus. The network interface itself is, in most cases, a
cable jack such as an RJ45 for UTP cables or a BNC or AUI connector for a coaxial cable connection, but it
can also be a wireless transmitter of some sort.
The network interface adapter, in cooperation with its device driver, is responsible for performing most of
the functions of the data-link layer protocol and the physical layer. When you buy a NIC for a computer, you
must select one for a particular data-link layer protocol, such as Ethernet or Token Ring; they are not
interchangeable. You must also be sure to select a NIC that supports the specific variant of your data-link
layer protocol. In the case of twisted-pair Ethernet, for example, a NIC can support standard Ethernet, Fast
Ethernet (100Base-TX or 100Base-T4), Full-Duplex Fast Ethernet, or 1000Base-T Gigabit Ethernet. You
must also select a card that plugs into the appropriate type of bus slot in the computer and has the proper
connector for the network medium.
NICs are configured in one of three ways:
❖ jumper configurable
❖ software configurable
❖ Plug n Play (PnP)
Jumper configurable cards have physical jumpers that you use to select the IRQ, I/O address, upper memory
block, and transceiver type (10BaseT, 10Base2 or 10Base5). Older cards also allow selecting a DMA channel--this was used with XT and 286 PCs.
Software configurable NICs have a proprietary software program that sets the NIC's "internal jumpers". They
are usually menu driven, and have an auto configuration mode where the program will attempt to determine
the most suitable configuration. These programs are not foolproof: you still need to have a thorough
knowledge of the PC's architecture.
Plug n Play NICs will attempt to auto-configure themselves during the boot-up sequence, immediately after
installation. They also come with a proprietary software program, in case anything goes wrong and you have
to manually configure them.
A combination (combo) NIC has the option of connecting to the network using either Twisted Pair (10BaseT),
Coax (10Base2) or AUI (Attachment Unit Interface for 10Base5). The NIC can only connect to one medium
type at a time: the configuration software allows you to select which medium interface to connect to. Newer
NICs will auto detect the cabling type used.
NOTE: Not all network interface adapters are intended to connect computers to standard client/server LANs.
There are also NICs available that connect computers and other devices to a specialized network called a
Storage Area Network (SAN). A SAN is a separate network dedicated to communications between servers
and external storage devices, such as redundant array of independent disks (RAID) arrays. Most SAN
adapters use a protocol called Fibre Channel rather than one of the standard LAN protocols, such as
Ethernet and Token Ring.
Network interface cards that plug into a PCI bus slot are generally preferable because the slots are self-
configuring and the bus is much faster than ISA, but you may use an ISA card if the computer has only ISA
slots available. For portable systems, the PC Card bus is usually your only choice, but you should be sure to
purchase a NIC that supports the CardBus standard if your computer supports it. CardBus is an interface
specification that provides the equivalent of PCI performance to PC Card peripherals. There are also network
interface adapters on the market that plug into a computer's universal serial bus (USB) port, but the USB
interface runs at a maximum of 12 Mbps and provides relatively poor performance, even when compared to
ISA. You should always ensure that the data rate of the NICs you select is compatible with the other network
components.
Network interface cards have different network cable connectors depending on the types of cables they
support. Some NICs have more than one cable connector, which enables you to connect to different types of
network media. For example, it is common to find combination Ethernet NICs with as many as three cable
connectors (RJ45, BNC, and AUI), especially in small stores that would rather stock a single card instead of
three different ones. You can only use one of the connectors at a time, however, and these combination NICs
can be much more expensive than those with only one connector.
TIP : One of the few scenarios in which combination NICs are practical is when several cards are needed
for an internetwork that uses multiple cable types and it is cheaper to buy the NICs in quantity. Many NIC
manufacturers sell their products in multiunit packs that are deeply discounted.
Network interface adapters perform a variety of functions that are crucial to getting data to and from the
computer over the network. These functions are as follows:
• Data encapsulation. The network interface adapter and its driver are responsible for building the
frame around the data generated by the network layer protocol in preparation for transmission. The
network interface adapter also reads the contents of incoming frames and passes the data to the
appropriate network layer protocol.
• Signal encoding and decoding. The network interface adapter implements the physical layer
encoding scheme that converts the binary data generated by the network layer—now encapsulated
in the frame—into electrical voltages, light pulses, or whatever other signal type the network
medium uses, and converts received signals to binary data for use by the upper layer protocols.
• Data transmission and reception. The primary function of the network interface adapter is to
generate and transmit signals of the appropriate type over the network and to receive incoming
signals. The nature of the signals depends on the network medium and the data-link layer protocol.
On a typical LAN, every computer receives all of the packets transmitted over the network, and the
network interface adapter examines the data-link layer destination address in each packet to see if it
is intended for that computer. If so, the network interface adapter passes the packet to the computer
for processing by the next layer in the protocol stack; if not, the network interface adapter discards
the packet (see the sketch following this list).
• Data buffering. Network interface adapters transmit and receive data one frame at a time, so they
have built-in buffers that enable them to store data arriving either from the computer or from the
network until a frame is complete and ready for processing.
• Serial/parallel conversion. The communication between the computer and the network interface
adapter usually runs in parallel (that is, either 16 or 32 bits at a time), depending on the bus the
adapter uses. (Only USB adapters communicate with the computer serially.) Network
communications, however, are serial (running one bit at a time), so the network interface adapter is
responsible for performing the conversion between the two types of transmissions.
• Media Access Control (MAC). The network interface adapter also implements the MAC
mechanism that the data-link layer protocol uses to regulate access to the network medium. The
nature of the MAC mechanism depends on the protocol used.
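As promised in the list above, here is a minimal sketch of the destination-address check; the addresses are invented, and a real adapter also accepts multicast frames it has been programmed to receive.

MY_MAC = "00:a0:c9:12:34:56"           # this adapter's (invented) hardware address
BROADCAST = "ff:ff:ff:ff:ff:ff"

def accept_frame(frame):
    """Pass the frame up the stack only if it is addressed to this adapter."""
    dest = frame["dest_mac"].lower()
    return dest in (MY_MAC, BROADCAST)

print(accept_frame({"dest_mac": "00:A0:C9:12:34:56"}))   # True: ours
print(accept_frame({"dest_mac": "00:11:22:33:44:55"}))   # False: discarded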
Installing a NIC
The process of installing a NIC consists of physically inserting the card into the computer, configuring the
card to use appropriate hardware resources, and installing the card's device driver. Depending on the age and
capabilities of the computer, these processes can be very simple or quite a chore.
CAUTION: Before touching the internal components of the computer or removing the NIC from its
protective bag, be sure to ground yourself by touching the metal frame of the computer's power supply, or
use a wrist strap or static-dissipative mat to protect the equipment from damage due to electrostatic
discharge.
1. Turn off the power to the computer. Inserting a NIC in a slot while the computer is on can destroy
the NIC. Accidentally dropping a screw or slot cover can also cause serious damage if the computer
is powered up.
2. Open the computer case. In some instances, this can be the most difficult part of the installation
process. You may have to remove several screws to loosen the case cover and wrestle with the
computer a bit to get the cover off. Many newer systems, on the other hand, secure the case cover
with thumbscrews and are much easier to open.
3. Locate a free slot. There are both ISA and PCI NICs on the market, and you must check to see what
type of slots the computer has available before you select a card. An ISA card is sufficient for
average network use, but this bus is gradually being phased out and replaced by PCI. The PCI bus is preferable if you are planning to connect the computer to a Fast Ethernet or other 100-Mbps network.
4. Remove the slot cover. Empty slots are protected by a metal cover that prevents them from being
exposed through the back of the computer. Loosen the screw securing the slot cover in place, and
remove both the screw and slot cover.
5. Insert the card into the slot. Line up the edge connector on the card with the slot and press it down
until it is fully seated, as shown in Figure 2.12.
6. Secure the card. Replace the screw that held the slot cover on. This secures the card firmly in the
slot. This is a step that network technicians frequently omit, but an important one, as a yank on the
network cable can pull the card partially out of the slot, causing intermittent problems that are
difficult to diagnose.
7. Replace the computer case and secure it with the fasteners provided.
TIP: It's usually a good idea to fully test the network card by connecting it to the LAN and running it before
you close the case and return the computer to its original location. It seems that newly installed components
are more likely to malfunction if you put the cover on before testing them.
The procedure just described is for installing a NIC into a standard expansion slot on a desktop computer. If
you are working with a laptop, the network interface adapter takes the form of a PC Card, which you install
simply by inserting it into a PC Card slot from the outside of the computer.
Configuring a network interface adapter is a matter of configuring it to use certain hardware resources, such
as the following:
• Interrupt requests (IRQs). These are hardware lines that peripheral devices use to send signals to
the system processor, requesting its attention.
• Input/output (I/O) port addresses. These locations in memory are assigned for use by particular
devices to exchange information with the rest of the computer.
• Memory addresses. These areas of upper memory are used by particular devices, usually for
installation of a special-purpose basic input/output system (BIOS).
• Direct memory access (DMA) channels. These are system pathways used by devices to transfer
information to and from system memory.
Network interface adapters do not usually use memory addresses or DMA channels, but this is not impossible.
Every network interface adapter requires an IRQ and an I/O port address to communicate with the computer.
When you have a computer and a network interface adapter that both support the Plug and Play standard, the
resource configuration process is automatic. The computer detects the adapter, identifies it, locates free
resources, and configures the adapter to use them. However, it is important for a network support technician
to understand more about the configuration process, because you may run into computers or network interface
adapters that do not support Plug and Play, or you may encounter situations in which Plug and Play doesn't
quite work as advertised. Improper network interface adapter configuration is one of the main reasons a
computer fails to communicate with the network, so knowing how to troubleshoot this problem is a useful
skill.
For a network interface adapter (or any type of adapter) to communicate with the computer in which it is
installed, the hardware (the adapter) and the software (the adapter driver) must both be configured to use the
same resources. Before the availability of Plug and Play, this meant that you had to configure the network
interface adapter itself to use a particular IRQ and I/O port and then configure the network interface adapter
driver to use the same settings. If the settings of the network interface adapter and the driver do not match,
it's like dialing the wrong number on a phone; the devices are speaking to someone, but it isn't the person
they expected. In addition, if the network interface adapter is configured to use the same resources as another
device in the computer, both of the conflicting devices are likely to malfunction.
On older NICs, you configure the hardware resources by installing jumper blocks or setting Dual Inline
Package (DIP) switches. If you are working with a card like this, you must configure the card before you
install it in the computer. In fact, you may have to remove the card from the slot to reconfigure it if you find
that the settings you've chosen are unavailable. Newer NICs use a proprietary software utility supplied by the
manufacturer to set the card's resource settings. This makes it easier to reconfigure the settings in the event
of a conflict. The Plug and Play cards available today usually include a configuration utility, but you won't
need to use it unless your computer doesn't properly support Plug and Play.
When you're working with older equipment, determining the right resource settings for the NIC can be a trial-
and-error process. Older NICs often have a relatively limited number of available settings, and you might
have to try several before you find a configuration that works. Newer cards have more settings to choose
from, and when you're working with a newer computer running an operating system like Microsoft Windows
XP, Microsoft Windows 2000, Microsoft Windows 98, Microsoft Windows 95, or Microsoft Windows Me,
you have better tools to help you resolve hardware resource conflicts. The Device Manager utility (illustrated
in Figure 2.13) lists the resource settings for all of the components in the computer, and can even inform you
when a newly installed NIC is experiencing a resource conflict. You can use Device Manager to find out
which device the NIC is conflicting with and which resource you need to adjust.
The device driver is an integral part of the network interface adapter, as it enables the computer to
communicate with the adapter and implements many of the required functions. Virtually all network interface
adapters come with driver software to support all of the major operating systems, but in many cases you won't
even need the software because operating systems like Windows include a collection of drivers for most
popular network interface adapter models.
In addition to configuring the network interface adapter's hardware resource settings, Plug and Play also
installs the appropriate driver, assuming that the operating system includes one. If it doesn't, you'll have to
supply the driver software included with the card. Like any piece of software, network interface adapter
drivers are upgraded from time to time, and you can usually obtain the latest driver from the adapter
manufacturer's Web site. However, it is not necessary to install every new driver release unless you're
experiencing problems, and the new driver is designed to address those problems. In other words, network
interface adapter drivers are usually subject to the "if it's not broken, don't fix it" rule.
When a computer fails to communicate with the network, the network interface adapter can conceivably be
at fault, but it's far more likely that some other component is causing the problem. Before addressing the
network interface adapter itself, check for the following alternative problems first:
• Make sure the network cable is firmly seated into the connector on the network interface adapter. If
you're using a hub, check the cable connection there as well. Loose connections are a common cause
of communications problems.
• Try using a different cable that you know works. If you are using a permanently installed cable run,
plug another properly working computer into it and use different patch cables. It is possible for the
cable to be causing the problem, even if there is no visible fault.
• Make sure that you have the proper driver installed on the computer. You might want to check the
driver documentation and the network interface adapter manufacturer's Web site for information on
possible driver problems on your operating system before you open up the computer.
• Check to see that all of the other software components required for network communications, such
as clients and protocols, are properly installed on the computer.
If you can find no problem with the driver, the cable, or the network configuration parameters, it's time to
start looking at the NIC itself. Before you open the computer case, check to see if the NIC manufacturer has
provided its own diagnostic software. In some cases, the same utility you use to configure the NIC's hardware
resources manually also includes diagnostic features that test the functions of the card. If you're using Plug
and Play, you might not have even looked at the disk included with the NIC, but this is an appropriate time
to do so. In troubleshooting a hardware component like this, you should exhaust all other options before you
actually open the computer.
If the NIC diagnostics indicate that the card is functioning properly, and assuming that the software providing
the upper layer protocols is correctly installed and configured, the problem you're experiencing is probably
caused by the hardware resource configuration. Either there is a resource conflict between the network
interface adapter and another device in the computer, or the network interface adapter is not configured to
use the same resources as the network interface adapter driver. Use the configuration utility supplied with
the adapter to see what resources the network interface adapter is physically configured to use, and then
compare this information with the driver configuration. You may have to adjust the settings of the card or
the driver, or even those of another device in the computer, to accommodate the card.
If the diagnostics program finds a problem with the card itself, it is time to open up the computer and
physically examine the NIC. If the NIC is actually malfunctioning due to a static discharge or a
manufacturer's defect, for example, there is not much you can do except replace it. Before you do this,
however, you should check to see that the NIC is fully seated in the slot, as this is a prime cause of
communication problems. If the card is not secured with a screw, press it down firmly into the slot at both
ends and secure it. If the problem persists, try removing the card from the slot, cleaning out the slot with a
can of compressed air, and installing the card again. If there is still a problem, you can try using another slot,
if available. After exhausting all of these avenues, try installing a different card in the computer. You can use
either a new one or one from another computer that you know is working properly. If the replacement card
functions, then you know that the card itself is to blame, and you should obtain a replacement.
When a NIC is configured, you are setting the parameters. These parameters tell the computer network
software where to find the adapter (base address), and who is "tapping the CPU on the shoulder" (IRQ). The
base address is the pointer to the rest of the world that says: "Here I am at base address xxx!." The IRQ is the
"tap on the shoulder" to the CPU that says: "Hey, it's IRQx. I've got something important to say!". The Upper
Memory Block is the NIC's BIOS (the actual program in the NIC's ROM). It is set to a free area of memory in
the PC's upper memory: this serves to avoid conflicts with other devices (e.g. video cards, internal modems,
SCSI drivers, etc.).
An IRQ is a hardware interrupt: there is a physical line run to each of the ISA slots on the motherboard. There
are 2 types of ISA slots, 8 bit and 16 bit. The 16 bit consists of the 8 bit slot plus a 16 bit extension slot. There
are 8 IRQ (IRQ0-7) lines that run to the 8 bit ISA slot. There are 8 more (IRQ8-15) that run to the 16 bit ISA
extension slot. This provides a total of 16 IRQs in a typical ISA-bus PC.
IRQ0 has the highest priority, and IRQ7 the lowest. IRQ8-15 have a "special" priority, explained next. When IBM introduced the AT computer, they added IRQ8-15. In order to make AT (286) PCs
backward-compatible with 8 bit XT (8088) PCs, and to "up" the priority of the new IRQ lines, they cascaded
two interrupt controllers. This resulted in IRQ8-15 having the same priority as IRQ2. Priority means if two
IRQs are active at the same time, the one with the higher priority is serviced first.
IMPORTANT: An IRQ can be assigned to only ONE active device at a time. If 2 devices share the same
IRQ, it's called a CONFLICT. This means that when the IRQ line becomes active, the CPU does not know
which device needs to "talk".
For example, a conflict can occur if a modem and a NIC both use IRQ5. When the modem has information that needs to be passed on to the CPU, it sets IRQ5 active. The CPU then does not know whether it should talk to the NIC or to the modem. The computer may hang, or nothing may happen.
*** IRQ conflicts are the NUMBER 1 source of PC problems! ***
A standard set of IRQ assignments serves as a rule of thumb or guideline for selecting IRQs for your
peripherals (IRQ3 for COM2/COM4, IRQ4 for COM1/COM3, IRQ11 for a SCSI adapter, and so on). If the PC does
not use a SCSI adapter, then IRQ11 is available for another NIC (or another device). Most auto-detecting
software and operating systems expect to see the IRQs assigned according to these conventions.
Note that COM1 (DB9 on the back of the PC) and COM3 share IRQ4. This is allowed as long as only one
device is active at a time. This means that if you are running a mouse on COM1, then you cannot use COM3
for an internal modem: you'll run into a conflict.
Some communication packages will allow you to do this, but most will choke or cause flaky operation. A
common symptom occurs when you move the mouse and see garbage on your terminal program. COM2
(DB25 on the back of the PC) and COM4 have a similar problem except that most people don't use COM2.
It is usually safe to configure an internal modem to COM4. If COM2 is used, it is typically used for either an
external modem or a plotter. Usually, both are not active at the same time.
DMA stands for Direct Memory Access. This is a method that allows channels to be opened by the peripheral
(to read/write directly to memory without going through the CPU). This off-loads some of the work from the
CPU to allow it to do more important tasks.
There are 8 DMA channels available in the PC: DMA0-7. They are divided into 8 bit channels and 16 bit
channels, based on the 8 bit ISA slot and 16 bit ISA slot. As with IRQs, a standard set of assignments
serves as a rule of thumb for selecting DMA channels.
Like IRQs, you are only allowed to assign one DMA channel to an active device at a time. Otherwise, you
will have a conflict, and things will not work properly. You may have one DMA channel assigned to two
devices ONLY if one device is active at a time.
Base Addresses
Base addresses are also called I/O ports, I/O addresses, I/O port addresses and base ports. They are memory
locations that provide an interface between the operating system and the I/O device (peripheral). The
peripheral communicates with the operating system through the base address. Each peripheral must have a
UNIQUE base address. Standard base address assignments are written in hexadecimal (for example, 300h is a
common default for a network card).
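The uniqueness rules for IRQs, DMA channels, and base addresses can be expressed as a simple check. The following is a minimal sketch in Python, not any vendor's configuration utility; the device names and resource values are made up for illustration.

# Minimal sketch (Python): checking a hypothetical set of peripheral
# configurations for IRQ, DMA, and base address conflicts.

def find_conflicts(devices, key):
    """Return {value: [device names]} for any value claimed by 2+ devices."""
    claims = {}
    for name, resources in devices.items():
        value = resources.get(key)
        if value is not None:
            claims.setdefault(value, []).append(name)
    return {value: names for value, names in claims.items() if len(names) > 1}

devices = {
    "COM1 (mouse)":   {"irq": 4, "base": 0x3F8},
    "Internal modem": {"irq": 4, "base": 0x2E8},   # COM3 sharing IRQ4
    "NIC":            {"irq": 5, "base": 0x300, "dma": 5},
    "Sound card":     {"irq": 5, "base": 0x220, "dma": 1},
}

for key in ("irq", "dma", "base"):
    for value, names in find_conflicts(devices, key).items():
        print(f"{key.upper()} {hex(value)} is claimed by: {', '.join(names)}")

Running the sketch reports the IRQ4 clash between the mouse on COM1 and the internal modem on COM3, and the IRQ5 clash between the NIC and the sound card, which mirrors the conflict examples above.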
Legacy NICs
Before installing a legacy (polite way of saying old) NIC, a PC diagnostic program (Checkit or MSD) should
be run to determine available IRQs, Base Addresses and UMBs. After determining which IRQs, Base
Addresses and UMBs are available, you would configure the NIC (hopefully) according to the rule-of-thumb
assignments discussed previously. In the case of the Upper Memory Block, you would also allocate that memory block using
EMM386.EXE in config.sys (x800 block size).
Ex: device=c:\dos\emm386.exe x=C000-C800
This ensures that EMM386.EXE prevents any other program, Windows, or TSR from using the same memory
block--thus avoiding memory conflicts. This is a typical job interview question: "What do you do to
config.sys when installing a legacy network card?".
NICs come with software diagnostic tools that allow you to check the operation of the NIC. They are usually
called Internal Diagnostics, Loopback Test and Echo Server Test. The Internal Diagnostics checks the
internal hardware on the NIC card. It usually checks about a dozen or more different aspects of the network
card (up to the transmit/receive circuitry).
Internal Diagnostics
Loopback Test checks to see if the NIC can transmit and receive data properly. This test is usually applicable
to 10Base2 (coax) only, as a BNC TEE with 2 terminations is required. There is no 10BaseT loopback test
because you can't terminate at the NIC.
Loopback Test
Note: The first two diagnostic routines are performed while not connected to the physical network. This
prevents faulty NICs from disrupting normal network traffic. The last diagnostic routine is the Echo Server,
or Network test. Two NICs are required: a known working NIC acts as an Echo Server and the NIC under
test is the Echo client. The echo client sends a packet to the echo server, who then echoes the packet back.
This is tested on the network, and can be used for any cabling type (not just 10Base2 as per the example).
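To illustrate the echo-server idea only, here is a minimal Python sketch that plays both roles on the loopback address using UDP sockets. Real NIC diagnostics talk to the adapter hardware directly; the address, port, and test payload below are assumptions made for the example.

# Minimal sketch (Python): the Echo Server / Echo Client idea.
import socket, threading, time

HOST, PORT = "127.0.0.1", 9999   # assumed test address and port

def echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind((HOST, PORT))
        data, addr = srv.recvfrom(1024)   # wait for one test packet
        srv.sendto(data, addr)            # echo it straight back

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)                           # give the server time to bind

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as client:
    probe = b"NIC echo test packet"
    client.settimeout(2.0)
    client.sendto(probe, (HOST, PORT))
    reply, _ = client.recvfrom(1024)
    print("PASS" if reply == probe else "FAIL")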
Network Interface Card Drivers are the software interface between the Network Card Hardware/Firmware
and the Network Operating System Data Link layer. The Network Card device driver is a device driver loaded
in config.sys. The Network Card consists of Firmware and Hardware.
The Firmware is the program stored on the network card's ROM (BIOS), and configuration information
stored in EEPROM. The configuration information would be the IRQ, Base Memory Address, Transceiver
Type, etc. for the Network Card. The Hardware would be the physical components: ICs, connectors, and so
on.
NDIS stands for Network Driver Interface Specification. NDIS drivers are used by Microsoft-based Network
Operating Systems, such as Microsoft LAN Manager, Windows NT, Windows for Workgroups, and IBM's
OS/2.
ODI stands for Open Datalink Interface. ODI drivers are used by Novell's NetWare network operating system
and by Apple.
Packet drivers use software interrupts to interface to the network card. Many non-commercial programs
(shareware and freeware) use Crynwr packet driver interfaces.
The three Network Driver Types are not compatible with each other but most Network Operating Systems
(Novell, WFWG, etc.) can use either NDIS or ODI. The NOS (Network Operating System) determines which
type of Network Driver can be used. Regardless of the Network Driver type used, all have a network device
driver loaded into memory during boot up (and a network protocol bound to the network card).
The purpose of the Network Drivers is to decouple the network adapter's device driver from the higher layer
protocols. The higher layer protocols can be IPX/SPX for Novell, Netbios for Microsoft, TCP/IP for Unix
and so on.
Traditionally (in the old days, circa 1990), the Network Card Device Driver and NOS' Data Link layer were
generated as 1 software program--specific to the computer that it was generated on.
For example: with Novell 3.11 and earlier, a special program (called WSGen: workstation generator) was
run, that would generate a Workstation Shell. The Workstation Shell would be a software program running
as a TSR. The TSR would be a combination of the Network Card Device Driver and Novell's IPX protocol.
The Workstation Shell was specific to the computer that it was generated on, and could not be used on another
computer. This meant that every PC in a network would need to have its Workstation Shell recompiled with
every new version of Novell! In a small network, this would not be a problem, but in large networks (100+
PCs), it becomes a logistic nightmare!
Another problem emerged: the Workstation Shells directly controlled the network card, and were specific to
only one NOS. This meant that only one NOS protocol could be run. In Novell's case, the protocol was IPX.
Therefore, interconnecting Networks became a major problem.
Still another problem arose when trying to run more than 1 network card in a computer (this is done typically
in bridges, routers and servers). The Workstation Shells did not have the provision to easily allow the NOS
Protocol to "bind" to more than one Network Card. The NIDS and ODI Network Card Driver specifications
were implemented to address the following specific areas:
❖ Provide a standard separate interface between the Network Card Device Driver and the Data Link
Layer.
❖ Allow more than one NOS Protocol to access the Network Card Device Driver.
❖ Allow the NOS to "bind" to more than one Network Card Device Driver.
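As a rough illustration of the binding goals above, the following Python sketch models a table that binds several protocol stacks to several adapters. The adapter and protocol names are invented; this is not the NDIS or ODI programming interface.

# Minimal sketch (Python): several protocols bound to several adapters
# through one driver interface, in the spirit of NDIS/ODI binding.

bindings = {                       # adapter -> protocols bound to it
    "Ethernet0":  ["IPX/SPX", "TCP/IP"],
    "LocalTalk0": ["AppleTalk"],
}

def deliver(adapter, protocol_id, frame):
    """Hand an incoming frame to a protocol only if it is bound to the adapter."""
    if protocol_id in bindings.get(adapter, []):
        print(f"{adapter}: frame passed up to {protocol_id}: {frame!r}")
    else:
        print(f"{adapter}: no binding for {protocol_id}, frame discarded")

deliver("Ethernet0", "TCP/IP", b"\x45\x00...")
deliver("Ethernet0", "AppleTalk", b"...")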
NDIS Drivers
The NDIS (Network Driver Interface Specification) standard was developed jointly (by Microsoft and 3Com)
for implementation in Microsoft's NOS and IBM's OS/2.
The Microsoft implementation of NDIS modifies the config.sys and autoexec.bat files, and creates two
important initialization files: SYSTEM.INI and PROTOCOL.INI.
Microsoft loads the IFSHLP.SYS file as a device driver in the CONFIG.SYS file. The IFSHLP.SYS is the
installable file system helper file, and contains the network redirector for the NDIS interface. The
LASTDRIVE command (in the config.sys file) identifies (to the operating system) the last available drive
that can be used for mapping network drives.
ODI Drivers
The Open Datalink Interface (ODI) is a software standard developed by Novell and Apple. It provides a
layered approach that complies with the ISO Open Systems Interconnection (OSI) model for the
Physical, Datalink and Network layers.
The Open Datalink Interface was developed to overcome several limitations of the previous network
interface card driver software. Prior to the ODI standard, each workstation was required to "compile" its
own workstation IPX.COM shell (using Novell's "WSGEN" program, or workstation generation program).
This resulted in a single program that contained the network card driver, Datalink interface and Network
layer protocol (IPX/SPX: commonly called the "workstation shell"). This approach limited the workstation
to one network card and only one Network layer protocol. Multiple network cards and Network layer protocols
were not allowed under "WSGEN".
The ODI standard broke the "workstation shell" into manageable parts that permit multiple network cards
and protocols. For example, one workstation/client can have an Ethernet 10BaseT card running IPX/SPX
protocols (Novell), and a Farallon Localtalk card for running Appletalk (Macintosh).
The ODI standard can be compared layer for layer with the OSI Model.
Novell Lite (very old and defunct) is Novell's peer-to-peer Network Operating system. Peer-to-peer Networks
use DOS's File Allocation Table (FAT), and Novell Lite is no exception (Novell Netware has its own high
performance disk operating system). Novell Lite follows Novell's Netware structure for the Network,
Datalink, and Physical layers. It is an excellent example of an ODI compliant NOS (Network Operating
System). At the Transport layer, it uses peer-to-peer Client and Server software instead of Novell's Netware
Transport layer software.
Packet Drivers
Packet drivers use software interrupts to identify the network cards to the data link layer. Packet drivers are
free software drivers. They were developed to address the problems of running multiple protocols over one
network card. NDIS and ODI are proprietary schemes--developed by 3Com/Microsoft and Novell/Apple,
respectively--that address the same problem.
The Crynwr collection of packet drivers is available throughout the Internet. The drivers are free to use,
unlike shareware and commercial products.
Advantages:
❖ Run multiple applications across the same board: TCP/IP, NetBIOS, NetWare.
❖ One board fits all; there is no need to buy different boards for different applications.
❖ No more reconfiguring and rebooting to change applications.
❖ Connect to a Novell File Server (or servers) and still run TCP/IP or PC-NFS, with the Novell systems
remaining active and available for file serving and printing.
The Packet Driver acts as a fast and smart secretary, bothering clients only when packets arrive specifically
for them.
Software Interrupts
Software interrupts are interrupts generated by software (unlike hardware interrupts that are physical lines
running to each device). The software interrupts available for packet drivers are 0x60 to 0x66.
A packet driver binds a software interrupt to the network interface card during the boot-up process
(usually in the autoexec.bat file). For a 3c503 card, the autoexec.bat file would contain this line:
3c503 0x60 5 0x300
where:
3c503 calls up the packet driver 3c503.com
0x60 is the software interrupt assigned to the NIC
5 is the hardware interrupt of the NIC
0x300 is the I/O address of the NIC
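For clarity, the following minimal Python sketch parses such a packet driver line into its four fields. It is purely illustrative and assumes the exact format shown above.

# Minimal sketch (Python): parsing "3c503 0x60 5 0x300" into its parts.

def parse_packet_driver_line(line):
    driver, sw_int, irq, io_addr = line.split()
    return {
        "driver":       driver + ".com",   # e.g. 3c503.com
        "software_int": int(sw_int, 16),   # 0x60 - 0x66
        "irq":          int(irq),          # hardware interrupt of the NIC
        "io_address":   int(io_addr, 16),  # NIC base address
    }

print(parse_packet_driver_line("3c503 0x60 5 0x300"))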
Summary
This tutorial explains which network media cables use which connectors. Learn the specifications of the most
common types of network media connectors. There are several types of network cables. Each type of network
cable uses specific types of connectors to connect to another network cable or network interface card. To join
two network cables or to connect a network cable to a NIC, you need appropriate connectors. In the following
section, we will discuss some of the most common and popular network media connectors.
Barrel connectors
Barrel connectors are used to join two cables. Barrel connectors are female connectors on both sides. They
allow you to extend the length of a cable. If you have two small cables, you can make a long cable by joining
them through the barrel connector.
Barrel connectors that are used to connect coaxial cables are known as BNC barrel connectors. The
following image shows BNC barrel connectors.
Barrel connectors that are used to connect STP or UTP cables are known as Ethernet LAN
joiners or couplers. The following image shows Ethernet LAN joiners or couplers.
Barrel connectors do not amplify the signals. This means that, after joining, the total cable length must not
exceed the maximum supported length of the cable. For example, a standard UTP cable supports a maximum
distance of 100 meters, so two UTP cables can be joined only if their combined length is no more than 100 meters:
Cable 1 (45 meters) + cable 2 (30 meters) = joint cable of 75 meters -- within the limit
Cable 1 (65 meters) + cable 2 (45 meters) = joint cable of 110 meters -- exceeds the limit and is not allowed
F connectors
An F connector is used to attach a coaxial cable to a device. F connectors are mostly used to install home
appliances such as dish TV, cable internet, CCTV camera, etc. The following image shows F connectors.
Terminator connectors
When a device places signals on a coaxial cable, the signals travel toward the end of the cable. If another
device is connected to the other end of the cable, that device receives the signals. But if the other end of the
cable is open, the signals bounce back in the same direction they came from. To stop signals from
bouncing back, all endpoints must be terminated.
A terminator connector is used to terminate the endpoint of a coaxial cable. The following image shows
terminator connectors.
T type connectors
A T connector creates a connection point on the coaxial cable. The connection point is used to connect a
device to the cable.
RJ-11 Connectors
RJ-11 connectors have the capacity for six small pins. However, in many cases, only two or four pins are
used. For example, a standard telephone connection uses only two pins, and a DSL modem connection uses
four pins. They have a small plastic flange on top of the connector to ensure a secure connection.
RJ-45 connectors
RJ-45 connectors look like RJ-11 connectors, but they are different. They have 8 pins. They are also bigger
than RJ-11 connectors. RJ-45 connectors are mostly used in computer networks. They are used with STP and
UTP cables. Some old Ethernet implementations use only four of the eight pins. Modern Ethernet
implementations use all 8 pins to achieve the fastest data transfer speeds.
DB-9 connectors
A DB-9 or RS-232 connector connects a device over a serial port. It has 9 pins. It is available in both male
and female connectors. It is used for asynchronous serial communication. The other side of the cable can be
connected to any popular connector type. For example, you can connect one side of the cable with a DB-9
connector and the other side of the cable with another DB-9 connector or with an RJ-45 connector or with a
USB connector.
One of the most popular uses of a DB-9 connector is to connect the serial port on a computer with an external
modem.
USB connectors
USB connectors are the most popular. A single USB bus supports up to 127 devices. All modern computers have
USB ports. Most devices that you can connect to the system have USB ports. Some examples of devices that
support or have USB ports are mice, printers, network cards, digital cameras, keyboards, scanners, mobile
phones, and flash drives.
If the device has a USB port, you can use a cable that has a USB connector on both ends to connect the device
to the computer. If the device does not have a USB port, you can still connect the device to the USB port.
For that, you can use a cable that has a USB connector on one side and the corresponding connector on the
other.
Fiber optic connectors
A variety of connectors are used to connect fiber cables. Some popular connectors are ST, SC, LC, and
MTRJ. Let's discuss these connectors.
SC connectors
SC connectors are also known as subscriber connectors, standard connectors, or square connectors. An
SC connector connects to a terminating device by pushing the connector into the terminating device, and it
can be removed by pulling the connector from the terminating device. It uses a push-pull connector similar
to audio and video plugs and sockets.
ST connectors
Straight tip (ST) connectors are also known as bayonet connectors. They have a long tip extending from the
connector. They are commonly used with MMF cables. They use a half-twist bayonet type of lock. An ST
connector connects to a terminating device by pushing the connector into the terminating equipment and then
twisting the connector housing to lock it in place.
LC connectors
LC connectors are known as Lucent Connectors. For a secure connection, they have a flange on top, similar
to an RJ-45 connector. An LC connector connects to a terminating device by pushing the connector into the
terminating device, and it can be removed by pressing the tab on the connector and pulling it out of the
terminating device.
MTRJ connectors
An MTRJ connector connects to a terminating device by pushing the connector into the terminating device,
and it can be removed by pulling the connector from the terminating device. It includes two fiber strands: a
transmit strand and a receive strand in a single connector.
CHAPTER FIVE
DATA COMMUNICATION
Data communications is all about transmitting information from one device to another. All the controls and
procedures for communicating information are handled by communication protocols. At the most basic level,
information is converted into signals that can be transmitted across a guided (copper or fiber-optic cable) or
unguided (radio transmission) medium. At the highest level, users interact with applications. In between is
software that defines and controls how applications take advantage of the underlying network.
This section outlines data communication technologies and makes reference to other sections in this book. If
you are interested in a specific aspect of data communications, you can refer to the related topics listed at the
end of each subsection for more information.
Communication Protocols
Any discussion of data communications must begin with a discussion of protocols. Communication protocols
are the rules and procedures that networked systems use to communicate on a transmission medium.
Communication protocols are responsible for establishing and maintaining communication sessions. Two
computers engage in a session to coordinate the transfer of data. Sessions are connection-oriented. In contrast,
a connectionless transmission occurs when data is sent to a device without the sender first establishing contact
with the receiver. The Internet is a connectionless system. Connection-oriented and connectionless sessions
are discussed under “Connection-Oriented and Connectionless Services.”
Communication protocols can be compared to the diplomatic protocols used by foreign embassies. Diplomats
of various rank handle different types of negotiations. They communicate with peer diplomats in other
embassies. Likewise, communication protocols have a layered structure in which protocols at one layer in
the transmitting system communicate with protocols in the same peer layer of the receiving system. A
simplified diagram is pictured in Figure D-1. Note the top layer is a high-level, network-enabled application
where users make requests for network services. This layer talks with its peer layer in the computer it is
communicating with. The messages sent by this layer travel down the protocol stack, across the wire and up
through the protocol stack to the destination.
The top layer is where applications interact with the network and is called the application layer protocol. The
middle protocol layer, generically called the transport layer in this case, is responsible for keeping the
communication session alive and running and for coordinating the transfer of information. It also provides
“services” to the upper application layer. The lower layer defines connections to the physical transmission
medium and the signaling techniques used on the medium. Note that the physical layer might provide modem
connections, network connections, or even connections to satellites.
For later reference, you should know that data passes through the protocol stack in blocks. For example, a
file transfer might be broken up into any number of pieces, then transmitted one piece at a time. If one of the
pieces is lost, it can be re-sent without needing to retransmit the entire file. Technically, pieces of data passing
through the protocol stack are called PDUs (protocol data units). This is discussed further under “Protocol
Concepts.”
In more general terms, people talk about packets of data moving from one system to another. Another term
you will encounter is frames, which has to do with dividing serial streams of data into manageable blocks for
transmission as discussed later in this topic under the subheading “Framing in Data Transmissions.”
The reason for layering the protocol stacks is simple. Protocols are published as worldwide standards so that
one vendor can create network hardware or software that will work with another vendor’s hardware or
software. A developer references a particular part of the protocol stack that is appropriate for the product
being developed.
Long ago, the ISO (International Organization for Standardization) developed the seven-layer OSI (Open
Systems Interconnection) model. This model was supposed to have provided a framework for integrating
data processing systems everywhere. However, to date it has only served as a very useful model for discussing
how other more popular protocols operate and work together. References are made to the layers of this model
throughout this text, so you may want to refer to “OSI (Open Systems Interconnection) Model” for more
information.
The Internet protocols, including TCP/IP, are now commonly used throughout the world. Only a few years
ago, a number of other protocols were vying for this top spot, including the OSI protocols. Other network
protocol suites include Novell’s IPX/SPX, AppleTalk, and IBM SNA.
The remainder of this section looks at layers of the protocol stack from the bottom physical layer to the upper
application layer, with an emphasis on TCP/IP and other Internet protocols. Each section explains the basic
terminology only and refers you to appropriate headings in this book.
A communication system consists of a transmission medium and the devices that connect to it. The medium
may be guided or unguided, where guided media is a metal or optical cable and unguided media refers to
transmitting signals through air or the vacuum of space.
A communication system that connects two devices is said to be a point-to-point system. In contrast, a shared
system connects a number of devices that can transmit on the same medium, but only one at a time. Both
systems are illustrated in Figure D-2. Note that system A and system Z have an end-to-end link that crosses
over several individual data links.
Devices are connected to a transmission medium with an adapter that generates signals for transmitting data
over some medium. For digital communication systems, discrete high- and low-voltage values are generated
to provide the signaling for binary 1s and 0s, respectively.
In contrast, an analog communication system like the voice telephone network transmits continuous analog
signals that vary in amplitude and frequency over time. The frequency of these sine wave signals is measured
in cycles per second, or Hz (hertz). As you’ll see, the frequency of the signals plays a role in the amount of
data that can be transmitted without distortion over an analog telephone line. The bandwidth of a system
refers to its data-carrying capacity.
A modem (modulator/demodulator) is a device that can be used to transmit digital signals over analog
transmission lines. A modem is required at both ends of a transmission to modulate, then demodulate the
signal. The transmitting modem converts a digital signal into an analog signal and the receiving modem
converts the signal back to discrete digital signals.
There are a number of factors that limit the data rate (bandwidth) of a transmission system. One is the
frequency allowed on the channel. It may be limited for a number of reasons, including government
restrictions or the specifications of the transmission system. The telephone system has bandwidth limitations
due to its use as a voice communication system. When transmitting digital data over analog systems, the
higher the frequency, the higher the data rate. If the channel's bandwidth is too limited, the discrete signal is
poorly represented, and this results in distortion at the receiving end.
Data Encoding
In its simplest form, digital data is transmitted as high- or low-voltage pulses. In a one-to-one relationship, a
binary 0 may be transmitted as a zero-voltage level, and a binary 1 may be transmitted as +5V voltage level.
However, special encoding schemes are used to more efficiently transmit signals. In these encoding schemes,
1s are not always represented by a high voltage and 0s by a low voltage (or vice versa). Instead, a change in
polarity may reverse the scheme at any time, depending on the bit value. This is explained next.
A scheme called Manchester encoding is used on Ethernet LANs. Its most important feature is that it provides
a way for sender and receiver to synchronize and track the exact location of bits in a transmission without
the need for a clocking mechanism. Note in Figure D-5 that a bit transition always takes place in the middle
of transmitting a single bit. This transition serves as a built-in clocking mechanism that the receiver can track.
This also divides each bit period into two intervals in which bits are represented as follows:
❖ A binary 1 is represented by the first interval set high and the second interval set low.
❖ A binary 0 is represented by the first interval set low and the second interval set high.
Note that Manchester encoding is not the most efficient of the encoding schemes, but it is easy to implement
and is used on many LANs today. This topic is covered further under “Modems” and “Signals.”
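The encoding rule described above can be captured in a few lines. The following Python sketch encodes and decodes a short bit string using the convention given here (a 1 as high-then-low, a 0 as low-then-high); it models the signal as a list of levels rather than real voltages.

# Minimal sketch (Python): Manchester encoding as described above --
# every bit period contains a mid-bit transition the receiver can use
# for clock recovery.

HIGH, LOW = 1, 0

def manchester_encode(bits):
    signal = []
    for bit in bits:
        signal += [HIGH, LOW] if bit == 1 else [LOW, HIGH]
    return signal

def manchester_decode(signal):
    bits = []
    for first, second in zip(signal[0::2], signal[1::2]):
        bits.append(1 if (first, second) == (HIGH, LOW) else 0)
    return bits

data = [1, 0, 1, 1, 0]
line = manchester_encode(data)
print(line)                      # [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
assert manchester_decode(line) == data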
Synchronous and Asynchronous Transmissions
Not all transmissions are a steady flow of characters. A transmission that consists of many starts and stops is
an asynchronous transmission. Assume you are back in the 1960s, sitting at a dumb terminal connected to a
mainframe computer. As you type, each character is transmitted to the computer over an asynchronous link.
You pause and the transmission pauses. Because the systems operate in asynchronous mode, the receiver is
not expecting a steady stream of bits. It waits for further transmission at any time and does not assume that
the link has been disrupted when transmissions stop.
In contrast, a synchronous transmission is characterized by a long string of bits in which each character in
the string is demarcated with a timing signal. Both types of transmissions are commonly used to connect
computer systems over telephone lines or other channels. The choice of one over the other depends on the
installation. In fact, modems that provide asynchronous operation for users may switch to synchronous mode
for extended transmissions. See “Asynchronous Communications” and “Synchronous Communications.”
Serial Interfaces
A standard interface is required to connect communication devices like modems to computers. The most
common interface for modems is the EIA-232 standard, which was originally called RS-232. In this scheme,
computers or other similar devices are called DTE (data terminal equipment) and devices like modems are
called DCE (data circuit-terminating equipment). The interface connector has 25 pins that are wired through
to the opposite connector. Each pin represents a channel on which data is transferred or a specific control
signal is sent. For example, pin 4 is the request to send line and the DTE uses it to signal that it wants to
transmit. Pin 5 is the clear to send line and the DCE uses it to indicate that it is ready to receive. See “Serial
Communication and Interfaces” for more information about this subject.
Transmission Media
There are a variety of transmission media including copper cable, fiber-optic cable, and unguided wireless
techniques. Each has transmission characteristics that restrict data transmission rates. Some of these
restrictions are imposed by the designers of the communication systems on which the cables are used, based
on various factors such as a need to reduce signal emanation. Other restrictions are based on signal loss over
distance or even the curvature of the earth in the case of ground-based microwave transmission systems.
Designers of communication systems take all of these factors into consideration when designing network
systems such as Ethernet, token ring, FDDI (Fiber Distributed Data Interface), and others. Therefore,
networks should be assembled within the standard specifications to avoid problems.
Computer data can be transmitted over RFs (radio frequencies) in cases where wires are impractical. These
RF transmissions take place between a transmitter and a receiver within a single room or across town. RF
networks provide unique solutions for campus and business park environments where links are required
across roads, rivers, and physical space (in general, where it is not practical to run a cable). Terrestrial
microwave systems are commonly seen on the top of buildings and towers everywhere. The telephone
companies have built networks of microwave transmitters and receivers for the telephone network.
Satellite communication systems provide another solution for long-distance communication. See
“Microwave Communications,” “Satellite Communication Systems,” and “Wireless Communications” for
more details. Cable characteristics, impairments, and other factors related to transmission media are covered
further under “Transmission Media, Methods, and Equipment.” See also “Network Design and
Construction.”
The telephone system has always been an integral part of data communications. If an organization needs 24-
hour connections to remote sites, it can lease dedicated digital transmission lines from the telephone company
or other service providers or it can take advantage of packet-switched networks. These options are discussed
further under “Telecommunications and Telephone Systems” and “WAN (Wide Area Network).” An
emerging trend is to build VPNs (virtual private networks) over the Internet. This saves much of the cost of
leasing long-distance lines. Refer to “VPN (Virtual Private Network)” for more details.
COMMUNICATION IN THE ISO - OSI MODEL
The ISO (International Organization for Standardization) has created a layered model, called the OSI (Open
Systems Interconnection) model, to describe defined layers in a network operating system. The purpose of the layers is
to provide clearly defined functions that can improve Internetwork connectivity between "computer"
manufacturing companies. Each layer has a standard defined input and a standard defined output.
Understanding the function of each layer is instrumental in understanding data communication within local,
metropolitan, or wide area networks.
This is a top-down explanation of the OSI Model. It starts with the user's PC and it follows what happens to
the user's file as it passes through the different OSI Model layers. The top-down approach was selected
specifically (vs. starting at the Physical Layer and working up to the Application Layer) for ease of
understanding. It is used here to show how the user's files are transformed (through the layers) into a bit
stream for transmission on the network.
A basic PC logic flowchart is shown in Fig. 1. The Keyboard & Application are shown as inputs to the CPU
(requesting access to the hard disk). The Keyboard requests access through user inquiries (such as "DIR"
commands) and the Application seeks access through "File Openings" and "Saves". The CPU, through the
Disk Operating System, sends and receives data from the local hard disk ("C:" in this example).
A PC setup as a network workstation has a software "Network Redirector" (the actual name depends on the
network - we will use a generic term here) placed between the CPU and DOS (as shown in Fig 2.). The
Network Redirector is a TSR (Terminate and Stay Resident) program: it presents the network hard disk as
another local hard disk ("G:" in this example) to the CPU. All CPU requests are intercepted by the "Network
Redirector". The Network Redirector checks to see if either a local or a network drive is requested. If a local
drive is requested, the request is passed on to DOS. However, if a network drive is requested, the request is
then passed on to the network operating system (NOS).
Electronic mail (E-Mail), client-server databases, games played over the network, print and file servers,
remote logons, and network management programs (or any "network aware" applications) are all aware of
the network redirector. They have the ability to communicate directly with other "network applications" on
the network. The "Network Aware Applications" and the "Network Redirector" make up Layer 7 (the
Application layer of the OSI Model, as shown in Fig).
The Network Redirector sends data in the workstation operating system's native format to the network
operating system; this coding and format is not directly usable by the network operating system, so the
Presentation layer translates it. The data consists of file transfers and network calls made by network-aware
programs.
For example, when a dumb terminal is used as a workstation (in a mainframe or minicomputer network), the
network data is translated into (and from) the format that the terminal can use. The Presentation layer presents
data to and from the terminal using special control characters to control the screen display (LF-line feed, CR-
carriage return, cursor movement, etc..). The presentation of data on the screen would depend on the type of
terminal that's used: VT100, VT52, VT420, etc.
Similarly, the Presentation layer strips the pertinent file from the workstation operating system's file
envelope. The control characters, screen formatting, and workstation operating system envelope are all
stripped or added to the file (if the workstation is receiving or transmitting data to the network). This could
also include translating ASCII file characters from a PC world to EBCDIC in an IBM Mainframe world.
The Presentation Layer also controls security at the file level: this provides both file locking and user security.
The DOS Share program is often used for file locking. When a file is in use, it is locked from other users to
prevent 2 copies of the same file from being generated. If 2 users both modified the same file, and User A
saved it, then User B saved it, then User A's changes would be erased! At this point, the data is contiguous
and complete (i.e. one large data file). See Fig. above
The Session layer manages the communications between the workstation and the network. The Session layer
directs the information to the correct destination, and identifies the source to the destination. The Session
layer identifies the type of information as data or control. The Session layer manages the initial start-up of a
session, and the orderly closing of a session. The Session layer also manages Log on procedures and Password
recognition (See Fig. below).
In order for the data to be sent across the network, the file must be broken up into usable small data segments
(typically 512 - 18K bytes). The Transport layer breaks up the file into segments for transport to the network,
and combines incoming segments into a contiguous file. The Transport layer does this logically, not
physically, and it is done in software as opposed to hardware.
The Transport layer provides error checking at the segment level (frame control sequence). This makes sure
that the datagrams are in the correct order: the Transport layer will correct out of order datagrams. The
Transport layer guarantees an error-free host to host connection. It is not concerned with the path between
machines.
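The following Python sketch illustrates the segmentation idea: a block of data is broken into numbered segments, the segments arrive out of order, and they are reassembled by sequence number. The segment size is an arbitrary choice for the example.

# Minimal sketch (Python): Transport-layer-style segmentation and
# reassembly by sequence number.
import random

def segment(data, size=8):
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(segments):
    return b"".join(payload for _, payload in sorted(segments))

original = b"This file is split into small numbered segments."
segments = segment(original)
random.shuffle(segments)                 # simulate out-of-order arrival
assert reassemble(segments) == original
print(f"{len(segments)} segments reassembled correctly")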
The Network layer is concerned with the path through the network. It is responsible for routing, switching,
and controlling the flow of information between hosts. The Network layer converts the segments into
datagrams small enough for the network to handle: network hardware source and destination addresses are also added.
The Network layer does not guarantee that the datagram will reach its destination.
The Data Link layer is a firmware layer of the network interface card. The Data Link layer puts the datagrams
into packets (frames of bits: 1s & 0s) for transmission, and assembles received packets into datagrams. The
Data Link layer works at the bit level, and adds start / stop flags and bit error checking (CRC or parity) to the
packet frame. Error checking is at the bit level only: packets with errors are discarded and a request for re-
transmission is sent out. The Data Link layer is primarily concerned with bit sequence.
The Physical layer concerns itself with the transmission of bits. It also manages the network card's hardware
interface to the network. The hardware interface involves the type of cabling (coax, twisted pair, etc.),
frequency of operation (1 Mbps, 10 Mbps, etc.), voltage levels, cable terminations, topology (star, bus,
ring, etc.), etc. Examples of Physical layer protocols are as follows: 10Base5 - Thicknet, 10Base2 - Thinnet,
10BaseT - twisted pair, ArcNet, FDDI, etc. (see Fig. below)
Layer-Specific Communication
Each layer may add a Header and a Trailer to its Data (which consists of the next higher layer's Header,
Trailer and Data as it moves through the layers). The Headers contain information that specifically addresses
layer-to-layer communication. For example, the Transport Header (TH) contains information that only the
Transport layer sees. All other layers below the Transport layer pass the Transport Header as part of their
Data.
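The following Python sketch illustrates this layering: each layer wraps the data it receives in its own header (and, at the Data Link layer, a trailer), and the receiving side strips them off in reverse order. The header contents are placeholders, not real protocol fields.

# Minimal sketch (Python): layer-by-layer encapsulation and decapsulation.

def encapsulate(user_data):
    pdu = user_data
    pdu = b"AH|" + pdu                  # Application header
    pdu = b"PH|" + pdu                  # Presentation header
    pdu = b"SH|" + pdu                  # Session header
    pdu = b"TH|" + pdu                  # Transport header
    pdu = b"NH|" + pdu                  # Network header
    pdu = b"DH|" + pdu + b"|DT"         # Data Link header and trailer
    return pdu

def decapsulate(frame):
    frame = frame.removeprefix(b"DH|").removesuffix(b"|DT")
    for header in (b"NH|", b"TH|", b"SH|", b"PH|", b"AH|"):
        frame = frame.removeprefix(header)
    return frame

frame = encapsulate(b"user file data")
print(frame)        # b'DH|NH|TH|SH|PH|AH|user file data|DT'
assert decapsulate(frame) == b"user file data"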
Protocol Concepts
Network communication protocols are defined within the context of a layered architecture, usually called a
protocol stack. The OSI (Open Systems Interconnection) protocol stack is often used as a reference to define
the different types of services that are required for systems to communicate.
The lowest layers define physical interfaces and electrical transmission characteristics. The middle layers
define how devices communicate, maintain a connection, check for errors, and perform flow control to ensure
that one system does not receive more data than it can process. The upper layers define how applications can
use the lower network layer services.
The protocol stack defines how communication hardware and software interoperate at various levels.
Layering is a design approach that specifies different functions and services at levels in the protocol stack.
Layering allows vendors to build products that interoperate with products developed by other vendors.
Each layer in a protocol stack provides services to the protocol layer just above it. The service accepts data
from the higher layer, adds its own protocol information, and passes it down to the next layer. Each layer
also carries on a “conversation” with its peer layer in the computer it is communicating with. Peers exchange
information about the status of the communication session in relation to the functions that are provided by
their particular layer.
As an analogy, imagine the creation of a formal agreement between two embassies. At the top, formal
negotiations take place between ambassadors, but in the background, diplomats and officers work on
documents, define procedures, and perform other activities. Diplomats have rank, and diplomats at each rank
perform some service for higher-ranking diplomats. The ambassador at the highest level passes orders down
to a lower-level diplomat. That diplomat provides services to the ambassador and coordinates his or her
activities with a diplomat of equal rank at the other embassy. Likewise, diplomats of lower rank, who provide
services to higher-level diplomats, also coordinate their activities with peer diplomats in the other embassy.
Diplomats follow established diplomatic procedures based on the ranks they occupy. For example, a
diplomatic officer at a particular level may provide language translation services or technical documentation.
This officer communicates with a peer at the other embassy regarding translation and documentation
procedures.
In the diplomatic world, a diplomat at one embassy simply picks up the phone and calls his or her peer at the
other embassy. In the world of network communication, software processes called entities occupy layers in
the protocol stack instead of diplomats of rank. However, these entities don’t have a direct line of
communication between one another. Instead, they use a virtual communication path in which messages are
sent down the protocol stack, across the wire, and up the protocol stack of the other computer, where they
are retrieved by the peer entity. This whole process is illustrated in Figures P-12 and P-13. Note that the
terminology used here is for the OSI protocol stack. The more popular TCP/IP protocol suite uses slightly
different terminology, but the process is similar.
As information passes down through the protocol layers, it forms a packet called the PDU (protocol data
unit). Entities in each layer add PCI (protocol control information) to the PDU in the form of messages that
are destined for peer entities in the other system. Although entities communicate with their peers, they must
utilize the services of lower layers to get those messages across. SAPs (service access points) are the
connection points that entities in adjacent layers use to communicate messages; they are like addresses that
entities in other layers or other systems can use when sending messages to a system. When the packet arrives
at the other system, it moves up through the protocol stack, and information for each entity is stripped off the
packet and passed to the entity.
Using the previous diplomatic analogy, assume the ambassador wants to send a message to the ambassador
at the other embassy. He or she creates the letter and passes it to an assistant, who is a diplomat at the next
rank down. This diplomat places the letter in an envelope and writes an instructional message on the envelope
addressed to his or her peer at the other embassy. This package then goes down to the next-ranking diplomat,
who puts it in yet another envelope and writes some instructions addressed to his or her peer at the other
embassy. This process continues down the ranks until it reaches the “physical” level, where the package is
delivered by a courier to the other embassy. At the other embassy, each diplomat reads the message addressed
to him or her and passes the enclosed envelope up to the next-ranking officer.
Each layer performs a range of services. In particular, you should refer to “Data Communication Concepts,”
“Data Link Protocols,” “Network Layer Protocols,” and “Transport Protocols and Services” for more
information. The sections “IP (Internet Protocol)” and “TCP (Transmission Control Protocol)” also provide
some insight into the functions of the two most important layers as related to the Internet protocol suite.
Now the discussion moves up the protocol stack above the hardware level. The next layer up is commonly
referred to as the data link layer. The primary purpose of the data link layer is to manage the flow of bits
between systems that are connected to a transmission medium. It is helpful to think of water flowing through
a hose. Once transmission starts, the physical network sends raw bits through the hose to the receiver.
Interference and electrical problems can disturb an electrical transmission, just like a kink in a hose can
disrupt the flow of water. Also, the “bit buckets” on the receiving end may fill up quickly and overflow before
the receiving system can process the data.
The data link layer can provide a mechanism for controlling the transmission of bits across the physical layer.
If necessary, it can detect and correct errors in transmission and tell the sending system to slow down or stop
sending data until the receiving system catches up. On the other hand, performing all these tasks in the data
link layer can reduce performance, so many networks only rely on the data link layer for fast data
transmission. Higher-level protocols in the transport layer handle error detection and recovery.
Framing
Framing provides a controlled method for transmitting bits across a physical medium and provides error
control and data retransmission in the event of an error. It is helpful to think of a freight train. A block of bits
is put into each frame and delivered to the destination. A checksum is appended so the frame can be checked
for corruption. If a frame is corrupted or lost, only that frame needs to be re-sent, rather than the entire set of
data.
Frames have a specific structure, depending on the data link protocol in use. Consider the frame structure of
a popular data link protocol called HDLC (High-level Data Link Control). Note that the information field is where data
is placed, and it is variable in length. An entire packet of information may be placed into the information
field. The beginning flag field indicates the start of the frame. The address field holds the address of the
destination, and the control field describes whether the information field holds data, commands, or responses.
The FCS field contains error-detection coding.
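The following Python sketch builds a frame with this general layout -- flag, address, control, information, FCS, flag. It uses CRC-32 from the standard library as a stand-in for HDLC's 16-bit FCS, so the field sizes are illustrative rather than exact.

# Minimal sketch (Python): an HDLC-like frame with an error-detection field.
import zlib

FLAG = b"\x7e"

def build_frame(address, control, information):
    body = bytes([address, control]) + information
    fcs = zlib.crc32(body).to_bytes(4, "big")    # error-detection coding
    return FLAG + body + fcs + FLAG

def check_frame(frame):
    body, fcs = frame[1:-5], frame[-5:-1]
    return zlib.crc32(body).to_bytes(4, "big") == fcs

frame = build_frame(address=0x03, control=0x00, information=b"network packet")
print(check_frame(frame))                        # True
corrupted = frame[:5] + b"X" + frame[6:]
print(check_frame(corrupted))                    # False: request retransmission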
The data link layer is also responsible for error detection and control. One error control method is to detect
errors and then request a retransmission. This method is easy to implement, but if errors are high, it affects
network performance. Another method is for the receiver to detect an error and then rebuild the frame. This
latter method requires that enough additional information be sent with the frame so the receiver can rebuild
it if an error is detected. This method is used when retransmissions are impractical, such as a transmission to
a space probe. These techniques are discussed further under “Error Detection and Correction.”
Flow Control
Finally, we get to flow control. As mentioned earlier, if a data transmission is like water flowing through a
hose, some control is needed to prevent the bucket at the other end from overflowing. In this analogy, the
bucket is the data buffer that the receiver uses to hold data until it can be processed. The buffers on some
NICs (network interface cards) are large enough to hold an entire transmission until the processor can get to
it. When buffers overflow, frames are usually dropped, so it is useful for the receiver to have some way to
tell the sender to slow down or stop sending frames. See “Acknowledgments” and “Flow-Control
Mechanisms” for more information.
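The following Python sketch illustrates the basic flow-control idea: a receiver with a small buffer signals the sender to wait whenever the buffer is full. The buffer size and frame names are invented for the example; real protocols use acknowledgments and window sizes for the same purpose.

# Minimal sketch (Python): a receiver that tells the sender to pause
# when its buffer is full, so no frames overflow and are dropped.
from collections import deque

class Receiver:
    def __init__(self, buffer_size=4):
        self.buffer = deque()
        self.buffer_size = buffer_size

    def ready(self):                      # "clear to send" indication
        return len(self.buffer) < self.buffer_size

    def accept(self, frame):
        self.buffer.append(frame)

    def process_one(self):
        if self.buffer:
            self.buffer.popleft()

receiver = Receiver()
frames = [f"frame-{n}" for n in range(10)]
sent = 0
while sent < len(frames):
    if receiver.ready():
        receiver.accept(frames[sent])
        sent += 1
    else:
        print("receiver buffer full -- sender waits")
        receiver.process_one()            # receiver catches up
print("all frames delivered without overflow")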
Network Access and Logical Link Control for Shared LANs
Access methods are necessary on networks that are shared by multiple devices. Only one device can transmit
on the network at a time, so a medium access control method is needed to provide arbitration.
In the local area network environments defined by the IEEE, medium access protocols reside in a sublayer
of the data link layer called the MAC (Medium Access Control) sublayer. The MAC sublayer sits below the
LLC sublayer, which provides the data link control for any installed MAC drivers below it.
The MAC sublayer supports a variety of different network types, each of which has a specific way of
arbitrating access to the network. Three different access methods are described here.
❖ Carrier sense methods: With this technique, devices listen on the network for transmissions and wait until
the line is free before transmitting their own data. If two stations attempt to transmit at the same time, both
devices back off and wait a random amount of time before retransmitting (a sketch of this backoff follows the list).
❖ Token access methods: A token ring network forms a logical ring on which each transmission travels
around the ring from station to station. Only a station that has possession of a special token can transmit.
❖ Reservation methods: In this scheme, every transmitting device has a specific slot of time or frequency
allotted to it. A device can choose to place data in the slot for transmission. This technique can waste
bandwidth if a device has nothing to transmit.
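The sketch below, in Python, illustrates the carrier-sense backoff behavior mentioned in the first item: after a collision, the station waits a random number of slot times before trying again, roughly in the style of Ethernet's binary exponential backoff. The collision probability here is simulated, and the slot time shown is the classic 10 Mbps Ethernet value.

# Minimal sketch (Python): random backoff after a collision.
import random

SLOT_TIME_US = 51.2      # classic 10 Mbps Ethernet slot time, microseconds

def send_with_backoff(max_attempts=16):
    for attempt in range(1, max_attempts + 1):
        collided = random.random() < 0.3          # pretend 30% collision rate
        if not collided:
            return attempt
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        print(f"attempt {attempt}: collision, backing off "
              f"{slots * SLOT_TIME_US:.1f} us")
    raise RuntimeError("too many collisions, frame dropped")

print("frame sent on attempt", send_with_backoff())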
Refer to “Data Link Protocols,” “MAC (Medium Access Control),” and “Medium Access Control
Methods” for more information.
Bridging
A bridge is a device that connects two network segments. In this discussion, the segments are IEEE 802.x
LANs. A bridge can extend the distance of a LAN and can be used to split a shared LAN into two segments
so there are fewer stations trying to share each segment of the medium.
Bridges operate in the data link layer (MAC sublayer) of the protocol stack. Note that two Ethernet networks are joined by a
bridge. A frame from the Ethernet LAN enters one port of the bridge and exits out the other port for
transmission on the adjoining Ethernet segment. The bridge will only forward packets that have a destination
address on the destination segment, thus minimizing unnecessary packet deliveries. For more information,
refer to “Bridges and Bridging.”
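The forwarding decision can be sketched as a small table that maps MAC addresses to ports. The following Python sketch is a simplified model of a learning bridge, with made-up addresses and two ports; real bridges also age out entries and handle broadcasts.

# Minimal sketch (Python): a learning bridge. It records which port each
# source MAC address was seen on and filters frames whose destination is
# on the same segment they arrived from.

class Bridge:
    def __init__(self):
        self.mac_table = {}                        # MAC address -> port

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn the source
        out_port = self.mac_table.get(dst_mac)
        if out_port == in_port:
            return f"{dst_mac} is on the same segment: frame filtered"
        target = out_port if out_port is not None else "all other ports"
        return f"frame forwarded to {target}"

bridge = Bridge()
print(bridge.handle_frame(1, "AA:AA", "BB:BB"))    # BB:BB unknown: flooded
print(bridge.handle_frame(2, "BB:BB", "AA:AA"))    # AA:AA known: sent to port 1
print(bridge.handle_frame(1, "CC:CC", "AA:AA"))    # same segment: filtered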
Switching
As mentioned, a bridge can be used to split a LAN into two segments, which effectively makes two smaller
shared segments. A switch is a device that expands on this concept. Whereas a traditional bridge has two
ports to join two LAN segments, a switch has an array of ports for joining more than two segments. A hub is
usually attached to a port. Then, only the workstations on that hub contend for access to the LAN segment.
If a workstation needs to transmit to a workstation on another port, the switch will quickly set up a temporary
connection between the ports so that all the workstations attached to the two ports share what is essentially a
dedicated network segment.
The purpose of switching is to boost LAN performance by reducing the number of workstations on each
LAN segment. The switch itself moves frames between ports at very high speeds so it does not introduce any
delay to the network. The best performance is achieved with one workstation per port so that there is no
contention at all when that workstation wants to transmit. The switch sets up a port connection between the
sender and receiver for the duration of the transmission.
Note that a switch operates in the data link layer relative to the OSI protocol. The industry refers to this as
Layer 2 switching. The technique of dividing LANs is often called microsegmentation because a network can
be split into smaller and smaller segments up to the point where a single port segment may be dedicated to a
single computer.
Most switching devices provide a way to configure VLANs (virtual LANs) as well. With a traditional hub,
all the connected workstations are part of the same LAN segment. In a VLAN-capable network, workstations
can be configured to belong to one or more logical LANs.
Only a few years ago, bridges were essential devices in corporate networks. Today, routers are more often
selected because they provide a better way to connect the individual networks an organization may have
installed over the years. Internetworking is all about joining networks with routers. Routers provide the
following important services:
❖ Limit broadcast traffic between networks and intelligently forward packets between networks.
❖ Provide a security barrier between networks (i.e., routers can filter traffic based on IP address,
application, etc.).
❖ Provide connections to wide area networks.
❖ Provide a way to build a network with redundant paths, as shown in Figure D-10.
Routers join the autonomous networks of the Internet. Each individual network has its own network address
as defined by the IP (Internet Protocol). What IP offers is a higher-level internetwork addressing scheme
similar to the way U.S. ZIP codes provide a way to identify individual cities throughout the nation. In this
analogy, each individual network attached to the Internet is like a city or town. Routers examine the IP address
and determine the port on which to forward the packet.
To understand the role of routers, it may be useful to consider how the Internet joins the autonomous networks
of organizations throughout the globe. The TCP/IP addressing scheme is an important part of the Internet
because it provides a way to assign a unique address to all the networks and hosts attached to it. Keep in mind
that individual networks already have a MAC layer addressing scheme that identifies individual nodes on
that network. IP identifies individual networks in an internetwork.
See “Internetworking,” “Packet and Cell Switching,” “Routers,” “Routing Protocols and Algorithms,” and
“WAN (Wide Area Network).”
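The following Python sketch illustrates the ZIP-code idea: the router matches the destination address against the network portion of each entry in a small routing table and forwards out the corresponding port. The networks and port names are invented for the example, and a real router would use a longest-prefix match.

# Minimal sketch (Python): picking an outgoing port from the network
# portion of a destination IP address.
import ipaddress

routing_table = {
    ipaddress.ip_network("192.168.1.0/24"): "port 1",
    ipaddress.ip_network("10.0.0.0/8"):     "port 2",
}

def route(destination):
    addr = ipaddress.ip_address(destination)
    for network, port in routing_table.items():
        if addr in network:                       # first match wins here
            return f"{destination} is in {network}: forward out {port}"
    return f"{destination}: no route, send to default gateway"

print(route("192.168.1.77"))
print(route("10.45.6.7"))
print(route("172.16.0.1"))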
The transport layer provides a unique service. It allows two systems to set up a “conversational” session with
one another so they can reliably exchange data. The session achieves reliability because the transport layer
processes in each system exchange messages about the status of the session.
The network layer IP protocol is a connectionless service while the transport layer provides reliable
connection-oriented services, in some cases over highly unreliable networks. For example, if a network link
temporarily fails, a connection-oriented session does not immediately give up the connection, but attempts
to keep it alive until the underlying link is reestablished or until a time-out occurs. Once the session is
reestablished, data transmission continues from where it was interrupted. A connection-oriented session is
actively monitored and dynamically managed to ensure proper delivery of data. While connection-oriented
virtual circuits take time to set up, they are appropriate for lengthy “conversations” and data transmissions.
In contrast, connectionless services like IP send datagrams to recipient systems without first notifying
them. The recipient is expected to accept the datagrams and handle them as appropriate. If datagrams are lost,
the recipient must detect that a packet is missing and request a retransmission from the sender. Interestingly,
the Internet is based on IP, an unreliable protocol, but TCP adds reliability to the Internet. For more
information on the procedures and processes implemented in the transport layer, refer to “Transport Protocols
and Services.”
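The contrast can be seen in how the two kinds of senders are written. The following Python sketch shows a connection-oriented (TCP) sender next to a connectionless (UDP) sender; the host and port are placeholders with nothing listening, so it is meant for reading rather than running against a real service.

# Minimal sketch (Python): connection-oriented vs. connectionless senders.
import socket

HOST, PORT = "example.net", 7000      # hypothetical receiver

# Connection-oriented: a session is established before any data moves,
# and delivery is monitored for the life of the connection.
def send_tcp(payload: bytes):
    with socket.create_connection((HOST, PORT), timeout=5) as conn:
        conn.sendall(payload)

# Connectionless: the datagram is simply addressed and sent; the receiver
# is not contacted first, and any loss must be handled by a higher layer.
def send_udp(payload: bytes):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (HOST, PORT))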
Applications that run at the highest level of the protocol stack are not really involved in communications, but
they do use communication services and so have appropriate features and user interfaces that take advantage
of the underlying network. Network file-sharing services like NCP (NetWare Core Protocol), NFS (Network
File System) in the UNIX environment, or SMB (Server Message Blocks) in the Windows environment are
specifically designed to use network services so that users can share files over networks. These systems are
designed to work with most underlying networks. Having reached the application layer, our discussion of
network concepts ends. You can refer to a number of topics related to applications for more information,
such as “Collaborative Computing,” “Electronic Mail,” “Groupware,” and “Workflow Management.”
10base-2
The version of Ethernet that uses thin coaxial cable. The name is derived from the speed of the network
(10Mbps), the signaling type (baseband), and the maximum cable length (almost 200 meters).
10base-T
The version of Ethernet that uses twisted-pair cabling. The name is derived from the speed of the network
(10Mbps), the signaling type (baseband), and the cable type (twisted pair).
100base-T
An extension to the 10base-T standard describing twisted-pair networks that operate at 100Mbps.
802.3
The IEEE standard that describes the CSMA/CD medium access method used in Ethernet networks.
802.5
The IEEE standard that describes the medium access method used in Token Ring networks.
A service allowing voice mail to be viewed on your PC screen. Instead of pressing number keys on the
telephone to access voice mail functions, you can use your PC to view and control incoming voice mail. A
special communications server on the network handles the incoming voice mail.
ADSL (Asymmetric Digital Subscriber Line)
A high-speed modem technology that provides data services, such as Internet access, over existing telephone
lines. ADSL has a downstream (to the subscriber) data transfer rate of at least 1.5Mbps. Subscribers located
within two miles of the telephone office can attain downstream speeds as high as 6.2Mbps. Upstream data
rates vary from 16Kbps to 640Kbps, depending on line distance. See also asymmetrical transmission.
ANSI (American National Standards Institute)
A private organization involved in setting US standards, which are often referred to as ANSI standards.
anonymous FTP
A protocol that allows users to transfer files between TCP/IP-connected computers. A user will log in to an
FTP server using anonymous as the user ID and guest as the password. This process gets a user into a special,
usually restricted, area of the FTP server.
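A minimal Python sketch of the anonymous login convention described above, using the standard ftplib module; the server name is hypothetical.

from ftplib import FTP

ftp = FTP("ftp.example.com")                 # placeholder server
ftp.login(user="anonymous", passwd="guest")  # the anonymous/guest convention
print(ftp.nlst())                            # list the (usually restricted) public area
ftp.quit()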
AppleTalk
A seven-layer protocol stack designed by Apple Computers that allows the sharing of files and printers and
the sending of traffic between computers. Its primary design goal was to give the AppleTalk user a simple
plug-and-play environment in which the user does not need to be concerned with the details of network
configuration.
application layer
Layer 7 of the seven-layer OSI model. The application layer is responsible for interfacing with the user and
directing input from the user to the lower OSI layers. It is the part of the OSI model the user interacts with
directly.
ARP (Address Resolution Protocol)
A protocol, described in RFC 826, used to determine the hardware address of another computer on a network.
ARP is used when a computer may know the destination computer's IP address, but does not know the
destination computer's hardware address. The sender broadcasts an ARP packet and the device that
recognizes its own IP address responds with the unknown hardware address.
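The ARP request itself is a small, fixed-format packet. The following Python sketch builds the 28-byte request body defined in RFC 826; the MAC and IP addresses are made up for illustration, and sending the frame on a real network would additionally require an Ethernet broadcast header.

import socket, struct

def build_arp_request(sender_mac, sender_ip, target_ip):
    # hardware type 1 (Ethernet), protocol type 0x0800 (IPv4),
    # address lengths 6 and 4, operation 1 (request)
    return struct.pack("!HHBBH6s4s6s4s",
                       1, 0x0800, 6, 4, 1,
                       sender_mac, socket.inet_aton(sender_ip),
                       b"\x00" * 6,                 # target MAC is unknown
                       socket.inet_aton(target_ip))

frame = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "192.168.1.10", "192.168.1.1")
print(len(frame), frame.hex())                      # 28 bytes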
ASCII (American Standard Code for Information Interchange)
A character set in which each letter, number, or control character is made up of a 7-bit sequence. The term
ASCII is sometimes erroneously used when referring to Extended ASCII, an 8-bit character set.
asymmetrical transmission
A transmission method developed to overcome the high cost of high-speed full-duplex transmission. In
essence, a line's bandwidth is broken up into two subchannels: the main channel and the secondary channel.
The main channel contains the majority of the line's bandwidth, the secondary channel contains only a small
portion. The unequal division of bandwidth results in an unequal data transfer rate, but allows service
providers to overcome signal-coupling problems in large telephone cable plants.
asynchronous communications
A type of data transmission in which each character transmitted (8 bits) is framed by a start and stop bit.
These two control bits delineate the beginning and end of a character. Though asynchronous transmission offers more flexibility, it is much less efficient: each 8-bit character requires 10 transmitted bits, so the control bits increase the transmitted data size by 25 percent.
AT command set
The modem command set developed by Hayes, Inc. that has become the de facto standard for programming
modems.
ATM (Asynchronous Transfer Mode)
A high-speed connection-oriented switching technology that uses 53-byte cells (packets) to simultaneously transmit different types of data, including video and voice. ATM is an attractive technology because it provides dedicated bandwidth at speeds ranging from 25Mbps to 622Mbps.
AUI (Attachment Unit Interface) cable
The cable that attaches from a MAU or transceiver to a computer. The AUI cable consists of 15-pin D-shell
type connectors, female on the computer end and male on the transceiver end.
authentication
The computer security process of verifying a user's identity or the user's eligibility to access network
resources. See also public key encryption.
autonomous system
A group of routers or networks that fall under one network administrative organization. Autonomous systems
usually run on a single routing protocol.
B-channel
A 64Kbps ISDN channel used to transmit voice or data. The standard BRI connection contains two B-
channels, for a total uncompressed capacity of 128Kbps.
backbone
A network that interconnects individual LANs and that typically has a higher capacity than the LANs being
connected. One exception is a T-1 backbone acting as a WAN link between two 100Mbps Ethernet LANs, one at either end of the backbone. In this case, the LANs have a much higher capacity than the backbone.
backoff
In CSMA/CD, when a collision occurs on a network, the computer sensing the collision calculates a time delay before trying to transmit again. This time delay is referred to as backoff.
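Ethernet uses truncated binary exponential backoff: after the nth consecutive collision, a station waits a random number of slot times between 0 and 2^min(n,10) - 1. The Python sketch below illustrates the calculation, assuming the 51.2-microsecond slot time of 10Mbps Ethernet.

import random

SLOT_TIME_US = 51.2                       # one 10Mbps Ethernet slot time (512 bit times)

def backoff_delay(collision_count):
    k = min(collision_count, 10)          # the exponent is capped ("truncated") at 10
    slots = random.randint(0, 2 ** k - 1) # random slot count grows with each collision
    return slots * SLOT_TIME_US

for n in range(1, 5):
    print("collision", n, "-> wait", backoff_delay(n), "microseconds")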
balun
An impedance-matching device used when connecting different types of cable to each other. For example, a
balun is required to connect twisted pair cable to coaxial cable on an Ethernet network.
bandwidth
The width of the passband, or the difference between the highest and lowest frequencies in a given range. For example, the human voice has a passband of approximately 50Hz to 15,000Hz, which translates to a bandwidth of 14,950Hz.
baseband
A type of transmission that uses digital signals to move data. Because the signal is digital, the entire
bandwidth of the cable is used.
BER (Bit Error Rate)
The ratio of received bits that are in error. Diagnostic cable-checking tools sense BER by transmitting a
stream of data on one end of a cable and reading the output from the other end.
best-effort delivery
A network function where an attempt is made at delivering data; however, if an error such as line failure
occurs, it does not attempt recovery. There is no mechanism in best-effort delivery to buffer data then
retransmit it once the failure has been resolved.
BISDN (Broadband ISDN)
The next generation of ISDN service. BISDN is a fiber-optic-based service using asynchronous transfer mode
(ATM) over SONET-based transmission circuits. The service is designed to handle high-bandwidth
applications, such as video, at rates of 155Mbps, 622Mbps, and higher.
An ISDN consortium name and the technique of inverse multiplexing they developed. Data is broken up into
a stream of frames, each stream using a portion of the total available bandwidth. If your ISDN configuration
has two B-channels, each with 64Kbps, your equipment will allow a data rate of 128Kbps by splitting the
data.
BOOTP (Bootstrap Protocol)
A protocol designed to allow diskless workstations to boot onto an IP network. A single BOOTP message
contains many pieces of information needed by a workstation at startup, such as an IP address, the address
of a gateway, and the address of a server. A workstation that boots up requests this information from a
BOOTP server.
BRI (Basic Rate Interface)
The ISDN interface, often comprised of two B-channels and one D-channel, used for circuit-switched
communications of voice, data, and video. Depending on connection requirements and the local telephone
company, it is possible to purchase just one B-channel.
bridge
A device that interconnects two or more LANs. A bridge is often used to segment a LAN to increase
bandwidth on the new segments. Although the segments operate logically as one LAN, the repartitioning
prevents data from being broadcast indiscriminately across the entire network.
broadband
A type of transmission using coaxial cable and analog or radio-frequency signals. Broadband uses a
frequency band that is divided into several narrower bands, so different kinds of transmission (data, voice,
and video) can be transmitted at the same time.
brouter
This term has various definitions, but it usually refers to a device that performs the functions of both a bridge
and a multiprotocol router. The term is often misused to describe a bridge with more than two LAN
connections.
buffer
A location in memory set aside to temporarily hold data. It is often used to compensate for a difference in
data flow rates between devices or skews in event timings; many network devices such as network interface
cards (NIC) and routers have integrated buffer storage.
cable modem
A specialized, currently experimental modem service offered by cable companies that provides Internet
access at speeds of 10Mbps downstream (to the subscriber) and 768Kbps upstream. The cabling infrastructure
is already in place, but the service requires the cable company to replace existing equipment with expensive
two-way transmission hardware.
capacity planning
The process of determining the future requirements of a network. Capacity planning is important if a network is to function properly and at peak performance, especially when users or equipment are added to the network.
category cable
Cable that complies with standard network cable specifications and is rated category 1 through 5. The higher
the number, the higher the speed capability of the cable. The wire may be shielded or unshielded and always
has an impedance of 100 ohms.
CAT-5 (Category 5)
A cabling standard for use on networks at speeds of up to 100Mbps, including FDDI and 100base-T. The 5 refers to the cable's category rating in the EIA/TIA cabling specifications. See also category cable.
CERT (Computer Emergency Response Team)
Formed in 1988 by the Defense Advanced Research Projects Agency (DARPA) to help facilitate and resolve
Internet security issues. CERT was formed in response to the Internet worm written by Robert Morris, Jr.,
which infected thousands of Internet computers in 1988.
circuit switching
A method of transmission in which a fixed path is established between the nodes communicating. This fixed
path permits exclusive use of the circuit between the nodes until the connection is dropped. The public
telephone network uses circuit switching.
client/server model
A common way to describe the rules and concepts behind many network protocols. The client, usually a
user's computer and its software, makes requests for information or programs from a server located
somewhere on the network.
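A minimal sketch of the model using Python sockets: the server waits for requests and answers them, and the client initiates the exchange. The host name and port are arbitrary placeholders.

import socket

def run_server(port=9000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", port))
    srv.listen(1)
    conn, addr = srv.accept()                 # wait for a client to ask for something
    request = conn.recv(1024)
    conn.sendall(b"reply to: " + request)     # the server supplies the information
    conn.close()
    srv.close()

def run_client(host="localhost", port=9000):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((host, port))
    cli.sendall(b"GET some-resource")         # the client makes the request
    print(cli.recv(1024))
    cli.close()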
collision
The result of two or more computers trying to access the network medium at the same time. Ethernet uses
CSMA/CD to handle collisions and to coordinate retransmission.
community string
A password used by the Simple Network Management Protocol (SNMP) that allows an SNMP manager
station access to an agent's Management Information Base (MIB) database.
configuration management
The process of retrieving data from network devices and using the information to manage the setup of the
devices. For example, SNMP has the ability to automatically or manually retrieve data from SNMP-enabled
network devices. Based on this data, a network manager can decide whether configuration changes are
necessary to maintain network performance.
connection-oriented communications
The transmission of data across a path that stays established until one of the nodes drops the connection. This
type of logical connection guarantees that all blocks of data will be delivered reliably. Telnet is an example
of connection-oriented communications.
connectionless communications
The transmission of data across a network in which each packet is individually routed to its destination, based
on information contained in the packet header. The path the data takes is generally unknown because there
is no established connection between the computers that are communicating. Connectionless services can
drop packets or deliver them out of sequence if each of the packets gets routed differently.
cookie
A piece of information sent by content providers on the Internet that gets written to the user's local disk. The
content providers often use this information to track where visitors link to on their Web site. Most browsers
can be configured to disallow the writing of such data to user's disks.
CSMA/CD (Carrier Sense Multiple Access with Collision Detection)
The medium access method used in Ethernet to avoid having more than one host transmitting on a LAN
segment at a time. The transmitting host first listens for traffic on the cable and then transmits, if no traffic is
detected. If two hosts transmit at the same time, a collision occurs. Each host then waits for a random length
of time before listening and transmitting again.
CSU (Channel Service Unit)
The hardware interface to a Digital Data Service, for example, a T-1 line. The CSU provides line termination,
signal amplification, and has the diagnostic ability to loop a signal back to its source. See also DSU (Data
Service Unit).
datagram
A method of sending data in which parts of the message may arrive out of order. The destination computer has the task of reassembling the parts in the correct sequence. The datagram is a connectionless,
single packet message used by the Internet Protocol (IP). A datagram is comprised of a source network
address, a destination network address, and information.
D-channel
The ISDN channel used to deliver network control information; often referred to as out-of-band signaling.
Because many telephone companies are not configured for out-of-band signaling, they combine the D-
channel information with a B-channel. The result of this combination is lower data rates, 56Kbps and
112Kbps, because of the overhead added to the B-channel.
data link layer
Layer 2 of the seven-layer OSI model. The data link layer is concerned with managing network access, for
example, performing collision sensing and network control. Also, if the data link layer detects an error, it
arranges to have the sending computer resend the corrupt packet.
DDS (Digital Data Service)
A leased digital transmission line offering speeds ranging from switched 56Kbps, to T-1 (1.544Mbps), or to
T-3 service operating at 44.736Mbps. When DDS is employed, special digital modems called CSUs and
DSUs are used to interface between the DDS line and the LAN.
DES (Data Encryption Standard)
An encryption algorithm based on a 64-bit key (56 key bits plus 8 parity bits). DES was long regarded as a strong encryption algorithm, but it is not the easiest to implement and maintain.
digital ID
An emerging technology using public-key cryptography to make Internet and intranet transactions secure.
DLCI (Data Link Connection Identifier)
A Frame Relay term describing the identifier given to each connection point. The DLCI is used so a node
can communicate with the first Frame Relay machine. Then that machine maps the data to another DLCI it
uses for its link with the next Frame Relay machine, and so on, until the destination node is reached.
DN (Directory Number)
The directory number is the address for the ISDN line assigned by the telephone company. The type of
equipment the telephone company uses at its central office determines whether each of the two B-channels
will be assigned their own directory numbers.
DNS (Domain Name Server)
A computer used to map IP addresses to computer system names. A network administrator creates a list on
the domain name server where each line contains a specific computer's IP address and a name associated
with that computer. When someone wants to access another computer, either the IP address or the name of
the computer is used. Using names is easier than remembering scores of IP addresses.
domain
Part of the naming hierarchy used on the Internet, syntactically represented by a series of names separated by dots. Take, for example, the domain name CATJO.BONZO.BOBO.COM. Read right-to-left, the address provides the path from the top-level domain (COM), to a company named BOBO, to a company network named BONZO, and finally to the destination computer named CATJO.
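In practice, a program hands such a name to a resolver, which queries a domain name server and returns the IP address. A short Python illustration, using a placeholder host name rather than the fictional example above:

import socket

name = "www.example.com"                     # hypothetical host name
address = socket.gethostbyname(name)         # the resolver asks a domain name server
print(name, "->", address)

labels = name.split(".")                     # reading right-to-left walks the hierarchy
print(list(reversed(labels)))                # ['com', 'example', 'www']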
The connection services offered by the telephone companies through T-carriers, more commonly known as
T-1, T-2, T-3, and T-4.
DSL (Digital Subscriber Line)
Modems on either end of a single twisted-pair wire that deliver ISDN Basic Rate Access. A DSL transmits
duplex data at 160Kbps over 24-gauge copper lines at distances up to 18,000 feet. The multiplexing and de-
multiplexing of this data stream creates two B-channels (64Kbps each), a D-channel (16Kbps), and some
overhead that takes place for attached terminal equipment. DSL employs echo cancellation to separate the
transmit signal and the receive signal at both ends.
DSS1 (Digital Subscriber Signaling System 1)
A set of protocols in ISDN designed so your equipment can ask for specific services across the network.
Directed at the carrier's switching equipment, DSS1 sends message types that provide the specific control
(for example, connect, hold, and restart) to be taken.
DSU (Data Service Unit)
A DSU provides the interface between the Data Terminal Equipment (DTE) and the Channel Service Unit
(CSU) when a network is connected to a Digital Data Service (DDS). The DSU's primary functions are to
properly convert a DTE's output signals to the format required by the DDS and to provide control signaling.
DVMRP (Distance Vector Multicast Routing Protocol)
A protocol used to support IP Multicast. As users join or leave multicast groups, data is broadcast to each
router in the internetwork. The routers prune out the users who do not want further transmissions.
encapsulation
A method of wrapping data in a particular protocol header. For example, Ethernet data is wrapped in a special
Ethernet header before transmission. Encapsulation is also used when sending data across dissimilar
networks. When a frame arrives at the router, it is encapsulated with the header used by the link-layer protocol
of the receiving network before it is transmitted.
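The idea can be illustrated with a few lines of Python: a header is packed in front of the payload on the way out, and stripped off again on the way in. The 14-byte header below is modeled loosely on an Ethernet-style frame (destination, source, type) and is meant only as a sketch.

import struct

def encapsulate(dst, src, proto, payload):
    header = struct.pack("!6s6sH", dst, src, proto)   # wrap the data in a header
    return header + payload

def decapsulate(frame):
    dst, src, proto = struct.unpack("!6s6sH", frame[:14])
    return dst, src, proto, frame[14:]                # strip the header, recover the data

frame = encapsulate(b"\xff" * 6, b"\xaa" * 6, 0x0800, b"original datagram")
print(decapsulate(frame))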
encryption
Ethernet
The most widely used type of LAN environment, with common operating speeds of 10Mbps and 100Mbps.
Ethernet uses the Carrier Sense, Multiple Access with Collision Detection (CSMA/CD) discipline.
Ethernet switch
A hub-like device that reads the destination address in the header of an Ethernet packet and redirects the
packet to the proper destination port. By sending the packets only to the destination port and not all other
ports, an Ethernet switch increases the amount of data that can be transmitted on the network at one time.
Contrast a switch with a standard repeating hub, which takes incoming traffic and repeats it across all ports
regardless of the intended destination.
fast-Ethernet switch
fault tolerance
The ability of a network to function even after some hardware or software components have failed and are
not available to the user. Fault-tolerant networks attempt to maintain availability by using component
redundancy (hardware and/or software) and the concept of atomicity (that is, either all parts of a transaction
occur or none at all).
FDDI (Fiber Distributed Data Interface)
A 100Mbps fiber-optic LAN standard that operates on Token Ring mechanics and is usually installed as a
backbone. A full duplex (send and receive simultaneously) configuration is possible, which doubles the
transmission throughput to 200Mbps.
file server
A computer attached to a network that provides mass disk storage and file services to users. Most often a file
server is set up so that only select users or groups of users can access the resource.
firewall
A hardware and software device that protects and controls the connection of one network to other networks.
The firewall prevents unwanted or unauthorized traffic from entering a network and also allows only selected
traffic to leave a network.
fractional T-1
A full T-1 line consists of 24 64Kbps channels. It is possible to purchase only a portion of a T-1 line,
depending on resource needs; hence the term fractional T-1.
fragment
Part of a data packet. If a router sends data to a network that has a maximum packet size smaller than the
packet itself, the router is forced to break up the packet into smaller fragments.
Frame Relay
A technique using virtual connections to transport data between networks attached to a WAN. Packets are
routed to their destination based on the DLCI number assigned to each of the nodes that are members of the
Frame Relay cloud. The cloud is the part of the network the telephone company handles. To the user, it's
unknown what happens inside the cloud; data goes in the cloud, then comes out and arrives at the correct
destination.
FDM (Frequency Division Multiplexing)
The technique of dividing a specific frequency range into smaller parts, with each part maintaining enough
bandwidth to carry one channel.
fubar or foobar
Fouled Up Beyond All Repair. There are also other more colorful versions of this slang term.
full-duplex
The capability of transmitting data in both directions (send and receive) simultaneously. Contrast to half-duplex.
gateway
A network device that performs protocol conversion between dissimilar networks. A gateway is typically
used to provide access to wide area networks over asynchronous links from a LAN environment.
half-duplex
A method of two-way transmission, but data can only travel in one direction at a time. Contrast to full-duplex.
hardware address
Also called the physical address, it is a data link address associated with a particular network device.
HDLC (High-level Data Link Control)
The most widely used synchronous data link protocol in existence. It supports both half-duplex and full-
duplex transmission, point-to-point configurations, and switched or non-switched channels.
HDSL (High-bit-rate Digital Subscriber Line)
Modems on either end of one or more twisted-pair wires that deliver T-1 or E-1 transmission speeds.
Presently, T-1 service requires two lines and E-1 requires three.
HIPPI (High-Performance Parallel Interface)
A standard that extends a computer bus over short distances at speeds of 800 to 1600Mbps. HIPPI is often
associated with supercomputers.
hop
A routing term that refers to the number of times data travels through a router before reaching its destination.
hub
A device that connects to several other devices, usually in a star topology. For example, a 12-port hub
attached to a 100base-T LAN backbone allows 12 devices or segments to connect to the LAN. There are two types of hubs: dumb hubs, which simply act as repeaters, and smart hubs, which offer sophisticated features such as SNMP support or built-in bridging or routing functions.
ICMP (Internet Control Message Protocol)
The protocol that handles errors and control messages at the Internet Protocol (IP) layer. For example, when
a data packet is transmitted with incorrect destination information, the router attached to the network responds
with an ICMP message indicating an error occurred within the transmission.
IGRP (Interior Gateway Routing Protocol)
A protocol developed by Cisco Systems that is used on networks that are under common administration. This
protocol was designed to operate on large, complex topology networks with segments that have different
bandwidth and delay characteristics. As with other routing protocols, IGRP determines where to send data
packets that have destination addresses outside the local network.
interoperability
The ability of applications and hardware combinations on two or more computer systems to communicate
successfully with each other. Standards set by groups such as the IEEE are the reason why devices from
different vendors operating across multiple platforms are capable of working with each other.
intranet
A private network that applies Web technology, using servers and browsers to set up an internal version of the Internet within an organization.
IP (Internet Protocol)
A network layer protocol that contains addressing information and some control information so packets can be routed across an internetwork. The ICMP control and message functions are also closely integrated with IP.
IP Multicast
A method of sending data simultaneously to a selected group of recipients. Multicast makes efficient use of bandwidth because a single stream reaches all intended recipients, rather than a separate unicast stream being sent to each one, and broadcasting to unnecessary destinations is avoided.
IPng or IPv6
The next generation (ng) of Internet addressing. The current 32-bit Internet addressing scheme (IPv4) is severely strained by current Internet growth. IPv6, with 128-bit addresses, is one proposed next-generation method of increasing the number of available Internet addresses while also providing additional functionality.
IP switching
An ATM switch capable of routing IP. Standard ATM switches cannot accommodate IP without complicated
and difficult-to-manage software translation. By implementing the IP protocol stack on ATM hardware, full
compatibility with existing IP networks is maintained while reaping the benefits of the high-speed
throughputs associated with ATM.
IPX (Internetwork Packet Exchange)
A protocol suite developed by Novell, Inc. and used by computer systems attached to a network running the
NetWare operating system. IPX provides a best-effort delivery service and is equivalent to the IP of TCP/IP.
ISDN (Integrated Services Digital Network)
A type of network provided by the telephone companies that allows both voice and digital services to be combined over a single medium. Basic rate ISDN services are delivered over standard POTS lines at speeds of up to 128Kbps.
isochronous service
A transmission service in which the data channel has a guaranteed bandwidth. Bandwidth on an isochronous
service is preallocated and stays fixed, whether the bandwidth is used or not, guaranteeing that the required
bandwidth is available when it is needed. FDDI and ATM, handling audio and video data, are examples of
technologies that support isochronous service.
ISP (Internet Service Provider)
A company that provides direct access to the Internet as opposed to an online service (for example, America
Online or CompuServe) that provides Internet access through a gateway. ISPs usually offer a large range of
services, such as Gopher, Archie, Telnet, FTP, or WWW.
jabber
jam signal
In Ethernet, a signal generated by a network interface to let other devices know that a collision has occurred.
keep alive
A message sent over an idle network link. The message tells a remote computer that the local computer
remains operational and is waiting.
Kerberos
An authentication system used for open systems and networks. Developed at MIT, Kerberos can be added
onto any existing protocol. The system uses an adaptation of DES (Data Encryption Standard) and tickets to
protect messages sent on a network by a user and by the system. Kerberos never transmits passwords over
the network. Contrast Kerberos to public key encryption.
LATA (Local Access and Transport Area)
Telephone companies operate within specific geographical regions divided into areas called LATAs. A
connection made between two points within the same LATA implies that a connection is local. A connection
outside the LATA requires the use of an Interexchange Carrier or long-distance company.
LDAP (Lightweight Directory Access Protocol)
A new protocol, also known as X.500 Lite, that simplifies the complex structure of Internet directories
(databases) that handle client information about users and e-mail addresses.
leased line
A permanent circuit provided by the telephone company. Communications on a leased line are not established
by dialing and are usually configured as a direct point-to-point connection. A T-1 connection is an example
of a leased line.
local loop
The copper twisted-pair cable from the telephone company's central office to an end user's location. The local
loop is the determining factor in the data rate associated with your use of the telephone system.
MAC (Media Access Control)
The lower portion of the data link layer, responsible for controlling access to the physical medium.
managed object
Devices on a network such as workstations, hubs, servers, and routers that are all monitored via SNMP. Each
device contains hardware or software that allows it to communicate with the SNMP manager station
responsible for tracking all the managed network components.
MAU (Medium Attachment Unit)
A device that physically attaches to a LAN and allows the connection of computers or additional LAN
segments. A MAU is often referred to as a transceiver and attaches to a computer through an AUI cable.
MIB (Management Information Base)
In SNMP, the MIB is the database where information about the managed objects is stored. The structure of
an MIB is complex and can contain information about many aspects of the device being managed.
MIME (Multipurpose Internet Mail Extensions)
A standard set of definitions designed to handle non-ASCII e-mail. MIME specifies how binary data, such
as graphical images, can be attached to Internet e-mail. The process of attaching binary data to e-mail requires
encoding between two types of data formats. It is MIME's responsibility to handle the encoding and the
decoding at the destination.
modem (modulator-demodulator)
A communication device that performs conversion of digital signals into analog signals (transmission) and
analog signals into digital signals (receiving). This conversion is necessary if communication over standard
POTS is attempted.
multicast
The process of sending messages to a defined set of destinations. Unlike a broadcast, which is read by all destinations that receive it, a multicast is received only by those destinations that are part of a predefined group configured to receive multicast messages.
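The sketch below shows how a receiver joins such a predefined group using standard Python sockets; the group address (from the 224.0.0.0-239.255.255.255 multicast range) and the port are hypothetical. Hosts that never join the group simply never see the traffic.

import socket, struct

GROUP, PORT = "239.1.1.1", 5007                       # placeholder group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group is what makes this host one of the intended destinations.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1024)
print("multicast from", sender, ":", data)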
multicast multimedia
A multicast transmission of video. Rather than sending individual streams of video to each user (unicast), multicast multimedia transmission sends a single stream of video that is shared among users, provided each user is configured to receive such transmissions. See also multicast.
multimode fiber
A type of fiber-optic cable. The word mode is synonymous with ray; you can think of multimode fiber as
transmitting multiple rays. Multimode fiber typically has a core diameter of 62.5 microns and is usually
selected for short haul networks (less than 2km).
multiplexer
A device used to combine data transmitted from many low-to-medium speed devices onto one or more high-
speed paths for retransmission. There are various techniques for achieving this, such as time division,
frequency division, statistical time division, and wavelength division multiplexing. A multiplexer is sometimes called a concentrator.
multiport repeater
A type of hub used to join multiple LAN segments. When a segment exceeds its maximum allowable nodes,
a repeater is often used to expand the network. See also segmentation.
NetBIOS (Network Basic Input/Output System)
Software developed by IBM that extends the interface between the PC operating system and the PC I/O bus
to include attachment to a network. Since its design, NetBIOS has become a de facto standard, providing the
basic framework for PCs to operate on a LAN.
network layer
Layer 3 of the seven-layer OSI model. The network layer plans the routing of packets and is responsible for
addressing and delivering messages from the sender to the final destination. A simple network comprised of
a few LANs linked by bridges would not need a Layer 3 at all, because there is no routing involved.
network management
The job of controlling a network so it can be used in an efficient manner. Network management is divided
into five management categories: performance, fault, accounting, security, and configuration.
OC (Optical Carrier)
The high-speed optical carrier networks used by the telephone companies. OC services provide much higher
speeds than T-carrier services such as T-1 or T-2.
optical router
A device demonstrated by British Telecom that is capable of routing data on fiber-optic cable at 100Gbps.
The router works by reading the destination address of the encoded pulses of light and switching the data to
the appropriate output path toward the destination. Because the data rates are about 100 times faster than
current non-optical routers, this technology has significant implications for high-speed networks in the future.
optical switch
A device that simply cross-connects one or more fiber-optic cables. This type of switch allows a network to
be reconfigured quickly and easily to accommodate specific requirements or workgroup moves. For example,
an ATM LAN could be connected to other multiple protocol networks within a building by optical switches,
if needed.
OSI model
A concept developed by ISO and CCITT used to develop standards for data networking that promote
multivendor equipment interoperability. The OSI model is separated into seven layers that relate to the
interconnection of computer systems. See also application layer, presentation layer, session layer, transport
layer, network layer, data link layer, and physical layer.
OSPF (Open Shortest Path First)
A protocol that routers use to communicate between themselves. OSPF has the ability to learn the network topology, adapt to changes in the internetwork, and balance traffic loads by determining which routes offer the
best service.
OTDR (Optical Time Domain Reflectometer)
Diagnostic equipment used to calculate the length and attenuation of a fiber-optic cable. By sending a short
duration laser pulse into one end of the fiber, the fiber's length is calculated by measuring the amount of time
it takes for a reflection to return from the other end.
packet
A group of bits comprised of address, data, and control information that is combined and transmitted as one
unit. The terms frame and packet are often used synonymously.
packet-switched network
A networking technique where data is broken into small packets and then transmitted to other networks over
a WAN to computers configured as packet switches where the data is then reassembled. The packets get
routed and rerouted, depending on the size of the network or the distance the packets travel to their
destination.
passband
The range of frequencies a data line is capable of handling. Passband is often confused with bandwidth, the
width of a channel contained within the passband.
peer-to-peer
Communication between computers in which neither computer has control over the other.
performance management
The process of analyzing the characteristics of a network to monitor and increase its efficiency. For example,
a network manager may monitor a network using a Sniffer and develop statistics from that data in hopes of
finding ways to increase available bandwidth on a crowded network.
physical layer
Layer 1 of the seven-layer OSI model, which specifies the physical medium of a network. It is the wire on
which data is transmitted and it is the connectors, hubs, and repeaters that comprise the network. Some refer
to the physical layer as the hardware layer.
ping
A utility program used to determine whether a remote computer is reachable by sending it multiple ICMP
echo requests and then waiting for a response.
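Under the hood, ping builds an ICMP echo request: type 8, code 0, a checksum, an identifier, and a sequence number. The Python sketch below constructs such a packet; actually sending it requires a raw socket (normally root or administrator privileges), so that step is shown only as a comment, and the address used there is a placeholder.

import os, socket, struct

def internet_checksum(data):
    # ones'-complement sum of 16-bit words, folded back into 16 bits
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(seq, payload=b"ping"):
    ident = os.getpid() & 0xFFFF
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)      # checksum field set to 0 first
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload

# sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.getprotobyname("icmp"))
# sock.sendto(build_echo_request(1), ("192.0.2.1", 0))       # placeholder address
print(build_echo_request(1).hex())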
POP (Point of Presence)
The connection site where entry to a WAN or the public switched network occurs. The term is most often
heard when referring to Internet service providers (ISPs) and their dial-up access locations.
PPP (Point-to-Point Protocol)
A point-to-point circuit is a network configuration where a connection exists only between two points. PPP is the protocol for transmitting datagrams over synchronous or asynchronous point-to-point circuits, and it allows different vendors' equipment to interoperate over point-to-point circuits.
PPTP (Point-to-Point Tunneling Protocol)
A secure remote access protocol, developed by Ascend Communications, Inc. and touted by Microsoft Corp.
for their Windows platforms, that allows remote users to access their corporate network(s) via the Internet.
PPTP makes use of encryption to secure the virtual private connection between the user and the corporate
network. The tunneling nature of PPTP allows users to piggyback IPX and NetBEUI on IP packets.
presentation layer
Layer 6 of the seven-layer OSI model. The presentation layer makes sure that data sent to the application
layer is in the correct format. If some conversion were required between different data types, it would take
place at this layer. Translation and byte reordering is sometimes necessary when different computers (for
example, IBM, Apple, NeXT) want to share information.
PRI (Primary Rate Interface)
An ISDN interface consisting of 23 B-channels, operating at 64Kbps each, and one 64Kbps D-channel.
Companies installing multiple ISDN lines often use PRI to provide sufficient bandwidth for their network(s).
PRI service is referred to as 23B+D.
protocol
A set of rules governing how information flows within a network. Protocols control format, timing, and error
correction. They are essential for a device to be able to interpret incoming information. Suites of protocols
are often used in networks, with each protocol responsible for one part of a communications function.
protocol emulator
A computer that generates the protocols required by another computer. The term, protocol converter, is often
used in place of protocol emulator. A converter is slightly different in that it translates data between two
dissimilar protocols so that different systems can communicate with each other.
proxy agent
In SNMP, a device that gathers information about other SNMP-enabled devices on the network. At some
predetermined time, the proxy agent will relay the stored information to the SNMP management station for
analysis.
public key encryption
A form of asymmetric encryption in which encryption and decryption are performed using two separate keys.
One key is referred to as the public key, the other as the private key. The public key is made available to
everyone and is used to encrypt a message. The owner of the public key receives a message encrypted with
his public key and then decrypts the message with his private key, the only key that can decrypt the message.
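The arithmetic behind one public key system (RSA) can be shown with deliberately tiny textbook numbers; real keys are hundreds of digits long, so this Python snippet is a sketch of the idea, not usable cryptography.

# public key  = (e, n), handed out to everyone
# private key = (d, n), kept secret; e*d = 1 mod (p-1)*(q-1)
p, q = 61, 53
n = p * q                             # 3233
e, d = 17, 2753

message = 65                          # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)       # anyone can encrypt with the public key -> 2790
recovered = pow(ciphertext, d, n)     # only the private key can decrypt      -> 65
print(ciphertext, recovered)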
punch down block
A wire termination device in which wire is placed across a Y-shaped connector and then connected or
punched down using a special tool. The connections made on a punch down block are very reliable.
PVC (Permanent Virtual Circuit)
A circuit that is permanently dedicated, such as a leased line. The virtual aspect of a PVC is that a user does
not know what path the data took to get to its destination after the data has entered the circuits of the telephone
company's central office.
RARP (Reverse Address Resolution Protocol)
The logical reverse of ARP. RARP is used to determine the IP address of a computer on a TCP/IP network
when only the hardware address is known.
repeater
A device used to increase the length of a LAN or to increase the distance between devices attached to the
LAN. The span can be increased because a repeater regenerates the signals before retransmitting them.
RFC (Request for Comments)
Documents outlining standards and procedures for the Internet. These numbered documents are controlled
by the Internet Activities Board (IAB) and are available in hard-copy from the Defense Data Network,
Network Information Center, (DDN/NIC) or electronically over the Internet.
RG58
50 ohm coaxial cable used in 10base-2 Ethernet networks. Often referred to as ThinNet or CheapNet.
RJ45
A standard 8-conductor (8-pin) modular plug. The RJ45 connector is replacing the RJ11 (6-pin) connector for
use in 10base-T networks. RJ45 connectors look very similar to the old RJ11 modular jack used on
telephones.
RMON (Remote Monitoring) MIB
The standard that defines the information sent to and from devices within a network using SNMP. To ease
the difficulties in managing networks spanning large geographical areas, remote management devices or
probes are placed on remote segments to act as the eyes and ears of the network management system. RMON
MIB defines what data passes between the remote devices and the SNMP manager.
router
In general terms, a router makes decisions about which of several possible network paths data will follow. In
a TCP/IP network, a router reads IP destination addresses to determine routes.
routing table
A directory contained in a router's memory that contains the addresses of other networks or devices and how
to reach them.
RPC (Remote Procedure Call)
A facility that allows a local process or program to invoke a process on a remote computer.
SCSI (Small Computer System Interface)
A high-performance bus for connecting peripherals to a computer. The SCSI interface, or host card, allows
multiple SCSI-compatible devices to attach to the bus. SCSI's design intent is two-fold: increase throughput
speed and decrease the number of problems associated with hardware compatibility.
SDSL
HDSL over a single telephone line. This name has not been set by any standards group, and may not stick.
SDSL operates over POTS and would be suitable for symmetric services to the premises of individual
customers.
segment
A bus LAN term meaning an electrically continuous piece of the bus. Segments can be joined together using
repeaters or bridges.
segmentation
The process of splitting a network into multiple segments. A multiport repeater is one device often used to
segment LANs. In diagnostic terms, segmenting a network minimizes the difficulty of analyzing network
faults. Rather than the whole network being inoperable, only the segment with the fault ceases to function.
serial link
A connection where the data bits are transmitted sequentially over a single channel.
session layer
Layer 5 of the seven-layer OSI model. The session layer defines the session type between two computers and
controls the dialogue between the applications on those two computers. For example, when a user accesses
another computer, a session that allows computer applications to inform each other of any problems is created
and controlled by Layer 5.
singlemode fiber
A type of fiber-optic cable. Singlemode fiber typically has a core diameter of 8 microns and is usually selected
for high bandwidth, long haul networks (greater than 2 km). It is also the most difficult optical cable to splice
and terminate because of its small core diameter.
SLIP (Serial Line Internet Protocol)
An Internet protocol used to run IP over serial lines, such as telephone circuits, that connect two
computers. Though similar to PPP, SLIP supports only IP and is not as efficient as PPP.
SMDS (Switched Multimegabit Data Service)
Pronounced "smuds," SMDS is a high-speed, datagram-based, public data network. SMDS currently allows
several remotely located LANs to communicate with each other at 45Mbps (T-3) speeds.
SMTP (Simple Mail Transfer Protocol)
The TCP/IP standard protocol used to transfer e-mail from one computer to another. SMTP manages mail
functions such as establishing a sender's credentials and ensuring a recipient's mailbox is correct.
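A minimal Python sketch using the standard smtplib module; every address and the mail server name below are placeholders.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "hello"
msg.set_content("Sent with SMTP.")

with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)     # HELO, MAIL FROM, RCPT TO and DATA happen behind this call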
Sniffer
Originally the name for the protocol analyzer from Network General, but now incorrectly used to describe
protocol analyzers in general. A Sniffer decodes and interprets frames on LANs with more than one protocol.
A user programs the Sniffer with search criteria and starts the packet capture process. When the capture is
complete, the results are displayed on the screen.
SNMP (Simple Network Management Protocol)
A network management framework designed to collect report information, configuration information, and
performance data with the use of SNMP managers and agents. An agent is a device such as a hub, a router,
or even a computer that has the capability to store SNMP data, such as information about whether the device
is functioning properly. A manager is the device that retrieves SNMP data from the agent devices installed
on the network.
SONET (Synchronous Optical Network)
A high-speed fiber-optic network used to interconnect high-speed networks. SONET can carry data 50 times
faster than T-3 rates while providing higher-quality signals. SONET operates by multiplexing low-speed
lines onto high-speed trunk lines.
spanning tree
An algorithm used by bridges to automatically develop routing tables, a list of possible data paths, and update
that table anytime the network topology changes. Spanning tree is used to avoid network loops by ensuring
there is only one route between any two LANs in the bridged network.
SPID (Service Profile Identifier)
A number used to identify the ISDN device to the telephone network, much as an Ethernet address uniquely
identifies a network interface card. A SPID is assigned to each channel of an ISDN line.
spooling
The process of controlling data, usually to a printer. Spooling uses buffer storage to reduce processing delays
when transferring large amounts of data between printers and computers. The term is derived from the expression simultaneous peripheral operations online.
SS7 (Signaling System 7)
A signaling system based on the use of a dumb switch and a smart database. By using this database and
switch combination, the number of network features is significantly increased. Another advantage of SS7 is
that networks can be easily customized because more knowledge can be contained in the database than can
be embedded cost effectively in hardware.
subnet mask
A 32-bit mask used to separate the network portion of an IP address from the host portion.
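Applying the mask is a bitwise AND. The Python sketch below shows the operation by hand and then lets the standard ipaddress module perform the same interpretation; the address and mask are arbitrary examples.

import ipaddress

host = ipaddress.IPv4Address("192.168.10.37")
mask = ipaddress.IPv4Address("255.255.255.0")

network = ipaddress.IPv4Address(int(host) & int(mask))   # bitwise AND keeps the network bits
print(network)                                           # 192.168.10.0

iface = ipaddress.ip_interface("192.168.10.37/255.255.255.0")
print(iface.network)                                     # 192.168.10.0/24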
subscriber loop
The connection between the user's equipment and a telephone company's central office.
SVC (Switched Virtual Circuit)
In packet switching, an SVC gives the user the appearance of an actual connection. An SVC is dynamically
established when needed.
synchronous transmission
A method of data transfer in which characters are blocked together for transmission as a group. Special
synchronization characters are placed at the beginning and end of each block to delineate the start and end of
the block. Contrast with asynchronous transmission.
T-1
T-3
T-Carrier
The U.S. standard for digital transmission lines. The line types are of the form T-n, as in T-1 or T-3, and the corresponding line signal standards are of the form DS-n, as in DS-1 or DS-3.
TA (Terminal Adapter)
The terminal adapter's function is to adapt non-ISDN equipment to ISDN. For example, you will often see a
terminal adapter marked with an R interface that is a connection point typically for an analog phone, a
modem, or other devices that are not ISDN compliant.
Tap
The connecting device on cable-based LANs, such as Ethernet, linking to the main transmission medium.
For example, taps are used to connect multiport repeaters to 10base-5 coaxial cabling.
TCP/IP (Transmission Control Protocol/Internet Protocol)
The two best-known Internet protocols, often mistaken for a single protocol. TCP corresponds to the
transport layer (Layer 4 of the OSI model) and is responsible for the reliable transmission of data. IP
corresponds to the network layer (Layer 3) and provides for the connectionless service of data transmission.
terminal server
A device that connects terminals and modems to a network. Terminal server is synonymous with access
server.
TFTP (Trivial File Transfer Protocol)
A simplified version of FTP that transfers files from one computer to another without the need for
authentication. TFTP is sometimes used to help boot diskless workstations by retrieving boot images from a
remote server.
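The protocol is simple enough that a read request fits in a few bytes: a 2-byte opcode (1 for read), the file name, and the transfer mode, each terminated by a zero byte. The Python sketch below builds and sends such a request; the server address and file name are placeholders.

import socket, struct

def build_rrq(filename, mode="octet"):
    # opcode 1 = read request (RRQ), per RFC 1350
    return struct.pack("!H", 1) + filename.encode() + b"\x00" + mode.encode() + b"\x00"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(build_rrq("bootimage.bin"), ("192.0.2.10", 69))   # TFTP listens on UDP port 69
data, server = sock.recvfrom(516)                             # first DATA block or an ERROR
print(data[:4].hex())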
Token Ring
A popular LAN type in which access to the network is controlled by use of a token. A computer can transmit
only if it has possession of the token. Data is attached to the token and the token is passed to the next computer
in the sequence. Token Ring network topology is typically star-shaped but, because of the sequential nature
of token passing, the network operates logically as a ring.
topology
The physical structure and organization of a network. The most common topologies are bus, tree, ring, and
star.
transport layer
Layer 4 of the seven-layer OSI model. The transport layer is responsible for ensuring that data is delivered
reliably between nodes. Also, if more than one packet is in process at any one time, the transport layer
sequences the packets to ensure the packets get rebuilt in the correct order.
tunneling
A method of encapsulating data so it can be transmitted across a network that operates with a different
protocol.
twisted pair
A transmission medium consisting of two shielded or unshielded copper wires that are arranged in a precise
spiral pattern. The spiral pattern is an important aspect of twisted-pair cables in order to minimize crosstalk
or interference between adjoining wires. See also CAT-5.
UDP (User Datagram Protocol)
A connectionless transport protocol used by IP networks that allows an application program on one computer to send a datagram or packet to an application program on another computer. Unlike IP packets, UDP packets include a checksum (error-checking data) covering the data being sent.
Usenet
The large group of computers set up to exchange information in the form of newsgroups. Any user that
connects to the Internet and has the proper software can access Usenet. It is not controlled by any person or
organization, so the content of each newsgroup is determined by its users.
virtual channel
A channel that appears to the user to be a simple, direct connection, but in fact is implemented in a more
complex manner.
WDM (Wavelength Division Multiplexing)
A technique using an optical multiplexer to combine light sources of different wavelengths onto a fiber-optic
cable. When the light reaches the end of the cable, an optical demultiplexer separates the original signals by
wavelength and passes them to detector circuits for conversion back into electrical signals.
WAN (Wide Area Network)
A data communications network designed to work over a large geographical area. Corporate WANs can
connect employees across many branch offices by using various telecommunication link technologies.
wiring closet
A room that often serves as the central location for network devices. For example, a wiring closet could be
located in the middle of a small building. All the network wiring would originate from this room and all the
connections to the routers, hubs, and other network devices are easily accessed in one location.
worm
A program that copies itself from one computer to another, usually over a network. Like viruses, worms may
damage data or degrade performance by overloading system resources. One famous worm in the late 1980s
virtually brought down the global WAN of a large computer company by tying up network resources each
time unwitting users opened their e-mail.