Computer Network Notes
The term topology in computer networking refers to the way in which a network is laid out
physically. Two or more devices connect to a link; two or more links form a topology. The
topology of a network is the geometric representation of the relationship of all links and linking
devices (usually called nodes) to one another. The cost and flexibility of a network installation are partly determined by the topology chosen, as is system reliability. Many network topologies are commonly used, but they all have certain similarities. Information is carried either through space (wireless) or over cable. The network must control the movement of information so that data can be transmitted in a reliable manner. The basic topologies are bus, star, ring, mesh, and hybrid.
Bus Topology
The bus topology consists of a single cable that runs to every workstation; it is also known as a linear bus. In other words, all the nodes (computers and servers) are connected to a single cable (called the bus) with the help of interface connectors. This central cable is the backbone of the network, and every workstation communicates with the other devices through this bus.
In bus topology, nodes are connected to the bus cable by drop lines and taps. See figure 11. A drop line is a connection running between a device and the main cable. A tap is a connector that either splices into the main cable or punctures the sheathing of the cable to create a contact
with the metallic core. As a signal travels along the backbone, some of its energy is transformed
into heat. Therefore, it becomes weaker and weaker as it travels farther and farther. For this
reason there is a limit on the number of taps a bus can support and on the distance between those
taps.
Ring Topology
The ring topology connects computers on a single circle of cable. There are no terminated ends.
A ring topology connects one host to the next and the last host to the first. The signal travels
around the loop in one direction and passes through each computer. Unlike the passive bus
topology, each computer acts like a repeater to boost the signal and send it on to the next
computer. Because the signal passes through each computer, the failure of one computer can
impact the entire network.
One method of transmitting data around a ring is called token passing. The token is passed from
computer to computer until it gets to a computer that has data to send. The sending computer
modifies the token, puts an electronic address on the data, and sends it around the ring.
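The token-passing idea described above can be sketched in a few lines of Python. This is only an illustrative toy, not a real MAC-layer implementation; the station names, frame fields, and function names are invented for the example.

```python
# Minimal sketch of token passing on a ring (illustrative only; the
# real Token Ring MAC protocol is far more involved).

stations = ["A", "B", "C", "D"]        # hosts around the ring, in order
pending = {"B": ("D", "hello")}        # station B has data queued for D

def circulate_token(start=0):
    """Pass the token around the ring once. A station holding data
    seizes the empty token, addresses a frame, and sends it on; all
    other stations simply pass the token along."""
    events = []
    for i in range(len(stations)):
        holder = stations[(start + i) % len(stations)]
        if holder in pending:
            dst, payload = pending.pop(holder)
            events.append(f"{holder} sends frame to {dst}: {payload}")
        else:
            events.append(f"{holder} passes empty token")
    return events

for event in circulate_token():
    print(event)
```

Because only the token holder may transmit, no two stations ever send at the same time, which is why collisions cannot occur on a properly functioning ring.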
Advantages of Ring Topology
1. This type of network topology is very organized. Each node gets to send data when it receives an empty token. This helps to reduce the chances of collision. Also, in a ring topology all the traffic flows in only one direction at very high speed.
2. Even when the load on the network increases, its performance is better than that of Bus
topology.
3. There is no need for a network server to control the connectivity between workstations.
4. Additional components do not affect the performance of the network.
5. Each computer has equal access to resources.
Star Topology
In the star topology, computers are connected by cable segments to a centralized component,
called a hub or switch. Signals are transmitted from the sending computer through the hub or
switch to all computers on the network. This topology originated in the early days of computing
with computers connected to a centralized mainframe computer. It is now a common topology in
microcomputer networking. Each device has a dedicated point-to-point link only to a central
controller, usually called a hub. The devices are not directly linked to one another. Unlike a mesh
topology, a star topology does not allow direct traffic between devices. The controller acts as an
exchange: If one device wants to send data to another, it sends the data to the controller, which
then relays the data to the other connected device.
The star network offers centralized resources and management. However, because each
computer is connected to a central point, this topology requires a great deal of cable in a large
network installation. Also, if the central point fails, the entire network goes down.
Advantages of Star Topology
1. As compared to bus topology it gives far better performance; signals don't necessarily get transmitted to all the workstations. A sent signal reaches the intended destination after passing through no more than 3-4 devices and 2-3 links. Performance of the network is dependent on the capacity of the central hub.
2. Easy to connect new nodes or devices. In star topology, new nodes can be added easily without affecting the rest of the network. Similarly, components can also be removed easily.
3. Centralized management. It helps in monitoring the network.
4. Failure of one node or link doesn't affect the rest of the network. At the same time it is easy to detect the failure and troubleshoot it.
Mesh Topology
In a mesh topology, every device has a dedicated point-to-point link to every other device. The
term dedicated means that the link carries traffic only between the two devices it connects. In a mesh topology with n nodes, node 1 must be connected to (n - 1) nodes, node 2 must be connected to (n - 1) nodes, and finally node n must be connected to (n - 1) nodes, giving n(n - 1) link ends in total. Since each physical link is shared by the two devices it connects, a mesh topology needs n(n - 1)/2 physical links.
To accommodate many links, every device on the network must have (n – 1) input/output (I/O)
ports to be connected to the (n – 1) stations as shown in Figure above. For these reasons a mesh
topology is usually implemented in a limited fashion, as a backbone connecting the main
computers of a hybrid network that can include several other topologies. One practical example
of a mesh topology is the connection of telephone regional offices in which each regional office
needs to be connected to every other regional office.
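The link and port counts above are easy to check numerically. Here is a short Python sketch; the function names are ours, not from the text:

```python
def mesh_links(n):
    """Number of dedicated point-to-point links in a full mesh of n nodes.
    Each of the n nodes connects to (n - 1) others; every link is shared
    by two nodes, hence the division by 2."""
    return n * (n - 1) // 2

def ports_per_device(n):
    """Each device needs one I/O port per neighbour, i.e. (n - 1) ports."""
    return n - 1

# Five regional telephone offices in a full mesh:
print(mesh_links(5))        # 10 physical links
print(ports_per_device(5))  # 4 ports on every office
```

The quadratic growth of `mesh_links` (10 links for 5 nodes, but 4,950 for 100 nodes) is exactly why a full mesh is usually limited to a small backbone.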
Tree Topology
Tree topology can be derived from the star topology. A tree has a hierarchy of various hubs, like the branches in a tree; hence the name. In Figure 1.6, every node is connected to some hub. However, only a few nodes are connected directly to the central hub.
The central hub contains a repeater, which looks at the incoming bits and regenerates them afresh as full-blown signals for 0 or 1 as required. This allows the digital signals to traverse longer distances. Therefore, the central hub is also called an active hub. The tree topology also
contains many secondary hubs, which may be active hubs or passive hubs. The merits and
demerits of tree topology are almost similar to those of the star topology.
Advantages of Tree Topology
1. It is an extension of the star and bus topologies, so in networks where these topologies can't be implemented individually for reasons related to scalability, tree topology is the best alternative.
2. Expansion of Network is possible and easy.
3. Here, we divide the whole network into segments (star networks), which can be easily
managed and maintained.
4. Error detection and correction is easy.
5. Each segment is provided with dedicated point-to-point wiring to the central hub.
6. If one segment is damaged, other segments are not affected.
Hybrid Topology
A network topology is a connection of various links and nodes, communicating with each other for the transfer of data. We have also seen the various advantages and disadvantages of the star, bus, ring, and mesh topologies.
Hybrid, as the name suggests, is a mixture of two different things. Similarly, in this type of
topology we integrate two or more different topologies to form a resultant topology which has
good points (as well as weaknesses) of all the constituent basic topologies rather than having
characteristics of one specific topology. This combination of topologies is done according to the
requirements of the organization.
For example, if there is an existing ring topology in one office department while a bus topology
in another department, connecting these two will result in a hybrid topology. Remember that connecting two similar topologies cannot be termed a hybrid topology. Star-ring and star-bus networks are the most common examples of hybrid networks.
Advantages of Hybrid Network Topology
1. Reliable: Unlike other networks, fault detection and troubleshooting are easy in this type of topology. The part in which a fault is detected can be isolated from the rest of the network and the required corrective measures can be taken WITHOUT affecting the functioning of the rest of the network.
2. Scalable: It's easy to increase the size of the network by adding new components, without disturbing the existing architecture.
3. Flexible: Hybrid Network can be designed according to the requirements of the organization
and by optimizing the available resources. Special care can be given to nodes where traffic is
high as well as where chances of fault are high.
4. Effective: Hybrid topology is the combination of two or more topologies, so we can design it in such a way that the strengths of the constituent topologies are maximized while their weaknesses are neutralized. For example, we saw that ring topology has good data reliability (achieved by the use of tokens) and star topology has high fault tolerance (as each node is connected to the others not directly but through a central device), so these two can be used effectively in a hybrid star-ring topology.
Networking Devices
All but the most basic of networks require devices to provide connectivity and functionality.
Understanding how these networking devices operate and identifying the functions they perform
are essential skills for any network administrator and requirements for a networking certification candidate.
This chapter introduces commonly used networking devices, and although it is true that you are not likely to encounter all of the devices mentioned here on the exam, you can be assured of working with at least some of them.
What we Learn
✓ Describe how hubs and switches work
✓ Explain how hubs and switches can be connected to create larger networks
✓ Describe how bridges, routers, and gateways work
✓ Describe how routing protocols are used for dynamic routing
✓ Explain the purpose of other networking components such as Channel Service Unit/Digital
Service Unit (CSU/DSU) and gateways
✓ Describe the purpose and function of network cards
✓ Describe how to identify a MAC address
✓ Understand the function of a transceiver
✓ Describe the purpose of a firewall
Hubs
Hubs are simple network devices, and their simplicity is reflected in their low cost. Small hubs with four or five ports, together with the requisite cables, provide everything needed to create a small network. Hubs with more ports are available for networks that require greater capacity.
At the bottom of the networking food chain, so to speak, are hubs. Hubs are used in networks
that use twisted-pair cabling to connect devices. Hubs can also be joined together to create larger
networks. Hubs are simple devices that direct data packets to all devices connected to the hub, regardless of whether the data packet is destined for that device. This makes them inefficient devices and can create a performance bottleneck on busy networks.
In its most basic form, a hub does nothing except provide a pathway for the electrical signals to
travel along. Such a device is called a passive hub. Far more common nowadays is an active hub,
which, as well as providing a path for the data signals, regenerates the signal before it forwards it
to all of the connected devices. A hub does not perform any processing on the data that it
forwards, nor does it perform any error checking.
Hubs come in a variety of shapes and sizes. Small hubs with five or eight connection ports are
commonly referred to as workgroup hubs. Others can accommodate larger numbers of devices
(normally up to 32). These are referred to as high-density devices. Because hubs don’t perform
any processing, they do little except enable communication between connected devices. For
today’s high-demand network applications, something with a little more intelligence is required.
That’s where switches come in.
Regeneration of the signal aside, the basic function of a hub is to take data from one of the
connected devices and forward it to all the other ports on the hub. This method of operation is
inefficient because, in most cases, the data is intended for only one of the connected devices.
Due to the inefficiencies of the hub system and the constantly increasing demand for more
bandwidth, hubs are slowly but surely being replaced with switches. As you will see in the next
section, switches offer distinct advantages over hubs.
Network Switches
On the surface, a switch looks much like a hub. Despite their similar appearance, switches are far
more efficient than hubs and are far more desirable for today’s network environments. As with a
hub, computers connect to a switch via a length of twisted-pair cable. Multiple switches are often
interconnected to create larger networks. Despite their similarity in appearance and their
identical physical connections to computers, switches offer significant operational advantages
over hubs.
Rather than forwarding data to all the connected ports, a switch forwards data only to the port on
which the destination system is connected. It looks at the Media Access Control (MAC)
addresses of the devices connected to it to determine the correct port. A MAC address is a unique
number that is stamped into every NIC. By forwarding data only to the system to which the data
is addressed, the switch decreases the amount of traffic on each network link dramatically. In
effect, the switch literally channels (or switches, if you prefer) data between the ports.
Switches can also further improve performance over the performance of hubs by using a
mechanism called full-duplex. On a standard network connection, the communication between
the system and the switch or hub is said to be half-duplex. In a half-duplex connection, data can
be either sent or received on the wire but not at the same time. Because switches manage the data
flow on the connection, a switch can operate in full-duplex mode—it can send and receive data
on the connection at the same time. In a full-duplex connection, the maximum data throughput is
double that for a half-duplex connection—for example, 10Mbps becomes 20Mbps, and 100Mbps
becomes 200Mbps. As you can imagine, the difference in performance between a 100Mbps
network connection and a 200Mbps connection is considerable.
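The MAC-learning and forwarding behaviour described above can be sketched as follows. This is a hedged illustration only; the class name, frame fields, and port numbers are invented, and real switches implement this in hardware with aging timers and VLAN logic.

```python
# Sketch of switch forwarding: learn which port each MAC address lives
# on, then forward frames only to that port. Unknown destinations are
# flooded to all other ports (hub-like behaviour).

class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}                # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        """Handle one frame; return the list of ports it goes out on."""
        self.mac_table[src_mac] = in_port  # learn the sender's location
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # known: one port only
        # Unknown destination: flood to every port except the ingress.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.receive(0, "AA:AA", "BB:BB"))   # BB:BB unknown -> flood [1, 2, 3]
print(sw.receive(1, "BB:BB", "AA:AA"))   # AA:AA learned -> [0]
print(sw.receive(0, "AA:AA", "BB:BB"))   # BB:BB learned -> [1]
```

After the first exchange, traffic between the two hosts occupies only their own links, which is the traffic reduction the text describes.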
Switching Methods
Switches use three methods to deal with data as it arrives:
» Cut-through — In a cut-through switching environment, the packet begins to be forwarded as soon as it is received. This method is very fast, but creates the possibility of errors being propagated through the network, as there is no error checking.
» Store-and-forward — In a store-and-forward switching environment, the entire packet is received and error checked before being forwarded. The error checking prevents bad packets from propagating, but the additional processing makes this the slowest of the three methods.
» Fragment-free — To take advantage of the error checking of store-and-forward switching, but still offer performance levels nearing that of cut-through switching, fragment-free switching can be used. In a fragment-free switching environment, enough of the packet is read so that the
switch can determine whether the packet has been involved in a collision. As soon as the
collision status has been determined, the packet is forwarded.
Bridges
Bridges are networking devices that connect networks. Sometimes it is necessary to divide a network into subnets, to reduce the amount of traffic on each subnet or for security reasons. Once the network is divided, a bridge connects the two subnets and manages the traffic flow between them. Today, network switches have largely replaced bridges.
A bridge functions by blocking or forwarding data, based on the destination MAC address
written into each frame of data. If the bridge believes the destination address is on a network
other than that from which the data was received, it can forward the data to the other networks to
which it is connected. If the address is not on the other side of the bridge, the data is blocked
from passing. Bridges “learn” the MAC addresses of devices on connected networks by
“listening” to network traffic and recording the network from which the traffic originates.
The advantages of bridges are simple and significant. By preventing unnecessary traffic from
crossing onto other network segments, a bridge can dramatically reduce the amount of network
traffic on a segment. Bridges also make it possible to isolate a busy network from a not-so-busy
one, thereby preventing pollution from busy nodes.
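The block-or-forward decision can be sketched in Python. This is a simplified illustration; the segment names and addresses are invented, and a real bridge also handles flooding, aging, and spanning tree.

```python
# Simplified bridge filtering: forward a frame across the bridge only if
# the destination is on the other segment or has not been learned yet.

bridge_table = {}        # MAC address -> segment ("left" or "right")

def bridge_frame(segment, src_mac, dst_mac):
    """Return 'forward' or 'block' for a frame heard on `segment`."""
    bridge_table[src_mac] = segment        # learn by listening to traffic
    known_segment = bridge_table.get(dst_mac)
    if known_segment == segment:
        return "block"                     # local traffic stays local
    return "forward"                       # other side, or not yet learned

print(bridge_frame("left", "PC1", "PC2"))   # PC2 unknown    -> forward
print(bridge_frame("left", "PC2", "PC1"))   # PC1 is local   -> block
print(bridge_frame("right", "SRV", "PC1"))  # PC1 on far side -> forward
```

The second call shows the key benefit: once the bridge has learned that both hosts sit on the same segment, their traffic never crosses onto the other segment.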
How a bridge works in segregating networks
» Placement — Bridges should be positioned in the network using the 80/20 rule. This rule
dictates that 80% of the data should be local and that the other 20% should be destined for
devices on the other side of the bridge.
» Bridging loops — Bridging loops can occur when more than one bridge is implemented on the
network. In this scenario, the bridges can confuse each other by leading one another to believe
that a device is located on a certain segment when it is not. To combat the bridging loop
problem, the IEEE 802.1d Spanning Tree protocol enables bridge interfaces to be assigned a
value that is then used to control the bridge-learning process.
Types of Bridges
Three types of bridges are used in networks:
» Transparent bridge — A transparent bridge is invisible to the other devices on the network.
Transparent bridges perform only the function of blocking or forwarding data based on the MAC
address; the devices on the network are oblivious to these bridges’ existence. Transparent bridges
are by far the most popular types of bridges.
» Translational bridge — A translational bridge can convert from one networking system to
another. As you might have guessed, it translates the data it receives. Translational bridges are
useful for connecting two different networks, such as Ethernet and Token Ring networks.
Depending on the direction of travel, a translational bridge can add or remove information and
fields from the frame as needed.
» Source-route bridge — Source-route bridges were designed by IBM for use on Token Ring
networks. The source-route bridge derives its name from the fact that the entire route of the
frame is embedded within the frame. This allows the bridge to make specific decisions about
how the frame should be forwarded through the network. The diminishing popularity of Token
Ring makes the chances that you’ll work with a source-route bridge very slim.
Routers
Routers are an increasingly common sight in any network environment, from a small home
office that uses one to connect to an Internet service provider (ISP) to a corporate IT
environment where racks of routers manage data communication with disparate remote sites.
Routers make internetworking possible, and in view of this, they warrant detailed attention.
Routers are network devices that literally route data around the network. By examining data as it
arrives, the router can determine the destination address for the data; then, by using tables of
defined routes, the router determines the best way for the data to continue its journey. Unlike
bridges and switches, which use the hardware-configured MAC address to determine the
destination of the data, routers use the software-configured network address to make decisions.
This approach makes routers more functional than bridges or switches, and it also makes them
more complex because they have to work harder to determine the information.
The basic requirement for a router is that it must have at least two network interfaces. If they are
LAN interfaces, the router can manage and route the information between two LAN segments.
More commonly, a router is used to provide connectivity across wide area network (WAN) links.
A router can route data it receives from one network onto another. When a router
receives a packet of data, it reads the header of the packet to determine the destination address.
Once it has determined the address, it looks in its routing table to determine whether it knows
how to reach the destination and, if it does, it forwards the packet to the next hop on the route.
The next hop might be the final destination, or it might be another router.
Routing tables play a very important role in the routing process. They are the means by which
the router makes its decisions. For this reason, a routing table needs to be two things. It must be
up-to-date, and it must be complete.
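The lookup described above can be sketched with Python's standard ipaddress module. The prefixes and next-hop addresses below are invented for illustration; a production router also handles metrics, interfaces, and route updates.

```python
import ipaddress

# A tiny static routing table: destination prefix -> next-hop address.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.1.254",  # default route
}

def next_hop(dst):
    """Longest-prefix match: of all routes containing dst, pick the
    most specific (longest) prefix, as real routers do."""
    dst = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.1.2.3"))   # matches /8, /16 and /0 -> 192.168.1.2
print(next_hop("10.9.9.9"))   # matches /8 and /0      -> 192.168.1.1
print(next_hop("8.8.8.8"))    # only the default route -> 192.168.1.254
```

The default route (0.0.0.0/0) is what keeps this toy table "complete": every destination matches at least one entry, so the router always has a next hop.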
There are two ways that the router can get the information
for the routing table—
1. Static routing
2. Dynamic routing
Static Routing
In environments that use static routing, routes and route information are entered into the routing
tables manually. Not only can this be a time-consuming task, but also errors are more common.
Additionally, when there is a change in the layout, or topology, of the network, statically
configured routers must be manually updated with the changes. Again, this is a time consuming
and potentially error-laden task. For these reasons, static routing is suited to only the smallest
environments with perhaps just one or two routers. A far more practical solution, particularly in
larger environments, is to use dynamic routing.
Dynamic Routing
In a dynamic routing environment, routers use special routing protocols to communicate. The
purpose of these protocols is simple; they enable routers to pass on information about themselves
to other routers so that other routers can build routing tables. There are two types of routing
protocols used—the older distance vector protocols and the newer link state protocols.
Gateways
The term gateway is applied to any device, system, or software application that can perform the
function of translating data from one format to another. The key feature of a gateway is that it
converts the format of the data, not the data itself.
A router that can route data from an IPX network to an IP network is, technically, a gateway.
The same can be said of a translational bridge that, as described earlier in this chapter, converts
from an Ethernet network to a Token Ring network and back again.
Software gateways can be found everywhere. Many companies use an email system such as
Microsoft Exchange or Novell GroupWise. These systems transmit mail internally in a certain
format. When email needs to be sent across the Internet to users using a different email system,
the email must be converted to another format, usually to Simple Mail Transfer Protocol
(SMTP). This conversion process is performed by a software gateway.
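In miniature, a software gateway's job looks like the sketch below. This is a toy illustration: the internal record format and its field names are invented, and real SMTP gateways are far more involved. Note that only the format of the message changes, not its content.

```python
# Toy mail gateway: translate an invented "internal" mail record into
# SMTP-style message text (headers, blank line, body).

def to_smtp(internal_msg):
    """Convert an internal mail dict into RFC 822 / SMTP-style text."""
    headers = [
        f"From: {internal_msg['sender']}",
        f"To: {internal_msg['recipient']}",
        f"Subject: {internal_msg['subject']}",
    ]
    # SMTP uses CRLF line endings and a blank line before the body.
    return "\r\n".join(headers) + "\r\n\r\n" + internal_msg["body"]

msg = {"sender": "alice@example.com", "recipient": "bob@example.org",
       "subject": "Status", "body": "All links up."}
print(to_smtp(msg))
```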
Wireless Access Point (WAP)
Wireless access points, referred to as either WAPs or wireless APs, are transmitter and receiver (transceiver) devices used for wireless LAN (WLAN) radio signals. A WAP is typically a
separate network device with a built-in antenna, transmitter, and adapter. WAPs use the wireless
infrastructure network mode to provide a connection point between WLANs and a wired
Ethernet LAN. WAPs also typically have several ports allowing a way to expand the network to
support additional clients.
Depending on the size of the network, one or more WAPs may be required. Additional WAPs
are used to allow access to more wireless clients and to expand the range of the wireless network.
Each WAP is limited by its transmission range, the distance a client can be from a WAP and still
get a useable signal. The actual distance depends on the wireless standard being used and the
obstructions and environmental conditions between the client and the WAP.
Modems
Modems can be internal add-in expansion cards, external devices that connect to the serial or USB port of a system, PCMCIA cards designed for use in laptops, or proprietary devices designed for use on other devices such as portables and handhelds.
For external modems, you need not concern yourself directly with these port assignments, as the modem connects to the serial port and uses the resources assigned to it. This is a much more straightforward approach and one favored by those who work with modems on a regular basis. For PCMCIA and USB modems, the plug-and-play nature of these devices makes them simple to configure, and no manual resource assignment is required. Once the modem is installed and recognized by the system, drivers must be configured to enable use of the device.
Two factors directly affect the speed of the modem connection—the speed of the modem itself
and the speed of the Universal Asynchronous Receiver/Transmitter (UART) chip in the
computer that is connected to the modem. The UART chip controls the serial communication of
a computer, and although modern systems have UART chips that can accommodate far greater
speeds than the modem is capable of, older systems should be checked to make sure that the
UART chip is of sufficient speed to support the modem speed. The UART chip installed in the
system can normally be determined by looking at the documentation that comes with the system.
Transceivers
Although transceivers are found in network cards, they can be external devices as well. As far as networking is concerned, transceivers can ship as a module or chip type. Chip transceivers are small and are inserted into a system board or wired directly on a circuit board. Module transceivers are external to the network card and are installed and function similarly to other computer peripherals, or they can function as standalone devices.
Firewalls
A firewall is a networking device, either hardware or software based, that controls access to your
organization’s network. This controlled access is designed to protect data and resources from an
outside threat. To do this, firewalls are typically placed at entry/exit points of a network—for
example, placing a firewall between an internal network and the Internet. Once there, it can
control access in and out of that point.
Although firewalls typically protect internal networks from public networks, they are also used
to control access between specific network segments within a network—for example, placing a
firewall between the Accounts and the Sales departments.
Hardware firewalls are used in networks of all sizes today. Hardware firewalls are often
dedicated network devices that can be implemented with very little configuration and protect all
systems behind the firewall from outside sources. Hardware firewalls are readily available and
often combined with other devices today. For example, many broadband routers and wireless
access points have firewall functionality built in. In such cases, the router or WAP might have a number of ports available to plug systems in to.
TDMA/FDD Standards
1. Global System for Mobile (GSM)
The GSM standard, introduced by Groupe Special Mobile, was aimed at designing a uniform
pan-European mobile system. It was the first fully digital system utilizing the 900 MHz
frequency band. The initial GSM had 200 kHz radio channels, 8 full-rate or 16 half-rate TDMA
channels per carrier, encryption of speech, low speed data services and support for SMS for
which it gained quick popularity.
CDMA/FDD Standard
» Interim Standard 95 (IS-95)
The IS-95 standard, also popularly known as cdmaOne, uses 64 orthogonally coded users whose code words are transmitted simultaneously on each of its 1.25 MHz channels. Certain services that
have been standardized as a part of IS-95 standard are: short messaging service, slotted paging,
over-the-air activation (meaning the mobile can be activated by the service provider without any
third party intervention), enhanced mobile station identities etc.
2.5G networks also brought to the market some popular applications, a few of which are: Wireless Application Protocol (WAP), General Packet Radio Service (GPRS), High Speed Circuit Switched Data (HSCSD), Enhanced Data rates for GSM Evolution (EDGE), etc.
3G networks enable network operators to offer users a wider range of more advanced services
while achieving greater network capacity through improved spectral efficiency. Services include
wide-area wireless voice telephony, video calls, and broadband wireless data, all in a mobile
environment. Additional features also include HSPA data transmission capabilities able to deliver speeds of up to 14.4 Mbit/s on the downlink and 5.8 Mbit/s on the uplink.
3G networks are wide area cellular telephone networks which evolved to incorporate high-speed
internet access and video telephony. IMT-2000 defines a set of technical requirements for the
realization of such targets, which can be summarized as follows:
» high data rates: 144 kbps in all environments and 2 Mbps in low-mobility and indoor
environments
» symmetrical and asymmetrical data transmission
» circuit-switched and packet-switched-based services
» speech quality comparable to wire-line quality
» improved spectral efficiency
» several simultaneous services to end users for multimedia services
» seamless incorporation of second-generation cellular systems
» global roaming
» open architecture for the rapid introduction of new services and technology.
These days, in 3G, we can access the internet through our mobile phones with the help of various technologies such as Wi-Fi, WiMAX, GPRS, EDGE, WAP and WiBro. The problem is that if you are accessing the internet through your mobile phone with any of these technologies and you move to a place where there is no interoperability between the different networks, you are stuck. If you are using 4G, you can access the net through any of the aforesaid technologies even while moving from one place to another. The issues expected to be resolved by 4G mobile technology are as follows:
» IP features are expected to be embedded in the handset for greater security, as high data rates are sent and received through the phone using 4G mobile technology.
» 4G mobile technology is expected to support download rates of 100 Mbps for mobile access and 1 Gbps for low-mobility, local wireless access.
» Instead of the hybrid technology used in 3G (a combination of CDMA and IS-95), a new technology, OFDMA, is introduced in 4G. In OFDMA the concept is again division-based multiple access, but the division is neither by time (as in TDMA) nor by code (as in CDMA); rather, the channel is divided in the frequency domain.
» TDMA sends data through one channel by dividing it into time slots, while CDMA also sends data through one channel, identifying the receiver with the help of a code. In 4G mobile technology, OFDMA instead divides the channel into narrow frequency bands and sends data packets over them; this greater efficiency is a prominent feature of 4G.
» IEEE 802.16m, an evolution of IEEE 802.16e, is being developed under the 4G brand and is described as WMBA (Wireless Mobile Broadband Access). This is a plain indicator of internet availability. Implementation work is in progress to avoid call interference during data downloads from a website. It proposes a 128 Mbps downlink data rate and a 56 Mbps uplink data rate, which is an extraordinary step in 4G mobile technology. The service will be limited, as the availability of a hotspot is a condition for internet connectivity.
» In parallel with WiMAX, LTE is intended to be incorporated in 4G mobiles. It is also a wireless technology for broadband access. The difference between WiMAX and LTE is that LTE is built around IP addressing; it follows the same TCP/IP concept inherited from networking technology. Being restricted to IP addresses, it provides strong security as well as high data transferability, avoids latency, and has the ability to adjust the bandwidth. LTE is compatible with CDMA, so it is able to pass data back and forth between both networks.
» The 3GPP organization is going to introduce two major wireless standards: LTE and IEEE 802.16m. The former has been granted permission for further processing, while the latter is under consideration and will become a part of 4G mobile technology.
» IPv6 was approved as a 4G standard in June 2009.
This paper concludes by looking back at existing wireless technologies and summarizing the
next generation wireless communication media in the following table. These technologies,
indeed, have a long way to go and exciting and amazing products are bound to emerge in the
years to come.
WiMax
» Provides up to 70 Mb/sec symmetric broadband speed without the need for cables. The
technology is based on the IEEE 802.16 standard (also called WirelessMAN)
» WiMAX can provide broadband wireless access (BWA) up to 30 miles (50 km) for fixed
stations, and 3 - 10 miles (5 - 15 km) for mobile stations. In contrast, the WiFi/802.11 wireless
local area network standard is limited in most cases to only 100 - 300 feet (30 - 100m).
» The 802.16 specification applies across a wide range of the RF spectrum, and WiMAX could
function on any frequency below 66 GHz (higher frequencies would decrease the range of a Base
Station to a few hundred meters in an urban environment).
Zigbee
» ZigBee is the specification for a suite of high level communication protocols using small, low-
power digital radios based on the IEEE 802.15.4-2006 standard for wireless personal area
networks (WPANs), such as wireless headphones connecting with cell phones via short-range
radio.
» This technology is intended to be simpler and less expensive than other WPANs such as
Bluetooth. ZigBee is targeted at radio-frequency (RF) applications that require a low data rate,
long battery life, and secure networking.
» ZigBee operates in the industrial, scientific and medical (ISM) radio bands; 868 MHz in
Europe, 915 MHz in countries such as the USA and Australia, and 2.4 GHz in most jurisdictions worldwide.
Wibree
» Wibree is a digital radio technology (intended to become an open standard of wireless
communications) designed for ultra low power consumption (button cell batteries) within a
short range (10 meters / 30 ft) based around low-cost transceiver microchips in each device.
» Wibree is also known as Bluetooth low energy technology.
» It operates in 2.4 GHz ISM band with physical layer bit rate of 1 Mbps.
Multiple Access Techniques
Multiple access techniques are used to allow a large number of mobile users to share the
allocated spectrum in the most efficient manner. As the spectrum is limited, so the sharing is
required to increase the capacity of cell or over a geographical area by allowing the available
bandwidth to be used at the same time by different users. And this must be done in a way such
that the quality of service does not degrade for the existing users.
A cellular system divides any given area into cells where a mobile unit in each cell
communicates with a base station. The main aim in the cellular system design is to be able to
increase the capacity of the channel i.e. to handle as many calls as possible in a given bandwidth
with a sufficient level of quality of service.
There are several different ways to allow access to the channel :
» Frequency division multiple-access (FDMA)
» Time division multiple-access (TDMA)
» Code division multiple-access (CDMA)
» Space Division Multiple access (SDMA)
FDMA,TDMA and CDMA are the three major multiple access techniques that are used to share
the available bandwidth in a wireless communication system. Depending on how the available
bandwidth is allocated to the users, these techniques can be classified as narrowband and
wideband systems.
Features of FDMA
FDMA/FDD in AMPS
The first U.S. analog cellular system, AMPS (Advanced Mobile Phone System) is based on
FDMA/FDD. A single user occupies a single channel while the call is in progress, and the single
channel is actually two simplex channels which are frequency duplexed with a 45 MHz split.
When a call is completed or when a handoff occurs the channel is vacated so that another mobile
subscriber may use it. Multiple or simultaneous users are accommodated in AMPS by giving
each user a unique channel. Voice signals are sent on the forward channel from the base station to
the mobile unit, and on the reverse channel from the mobile unit to the base station. In AMPS,
analog narrowband frequency modulation (NBFM) is used to modulate the carrier.
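The fixed 45 MHz duplex split can be sketched with a tiny helper; the frequencies come from the AMPS/IS-95 figures in these notes, while the function name is ours:

```python
# AMPS FDD channel pairing: the forward (base-to-mobile) channel sits a
# fixed 45 MHz above the reverse (mobile-to-base) channel.
DUPLEX_SPLIT_MHZ = 45.0

def forward_freq(reverse_mhz):
    # frequencies in MHz
    return reverse_mhz + DUPLEX_SPLIT_MHZ

print(forward_freq(824.0))  # 869.0, the bottom of the forward band
```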
FDMA/TDD in CT2
Using FDMA, CT2 system splits the available bandwidth into radio channels in the assigned
frequency domain. In the initial call setup, the handset scans the available channels and locks on
to an unoccupied channel for the duration of the call. Using TDD(Time Division Duplexing ),
the call is split into time blocks that alternate between transmitting and receiving.
TDMA
After FDMA, TDMA is the second most popular mechanism for communication using satellites.
In case of TDMA, there is no modulation of frequencies (unlike FDMA). Instead, the
transmitting earth station transmits data in the form of packets of data. These data packets arrive
at the satellite one by one (hence the name TDMA). By the same logic that was discussed during
TDM, TDMA is also a digital form of data transmission; TDMA operates in time domain, rather
than frequency domain. Bit rates of 10-100 Mbps are common for TDMA transmissions. This
can be translated into roughly 1800 simultaneous voice calls using 64 Kbps PCM.
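The arithmetic behind these figures can be checked with a back-of-envelope sketch (our own helper, ignoring framing and guard-time overhead); note that 1800 calls at 64 kbps corresponds to about 115 Mbps of raw voice, at the top of the quoted 10-100 Mbps range:

```python
# Rough TDMA capacity check: how many 64 kbps PCM voice calls fit into a
# given aggregate bit rate, ignoring all framing overhead.
def voice_calls(aggregate_bps, call_bps=64_000):
    return aggregate_bps // call_bps

for rate_mbps in (10, 100):
    # 10 Mbps -> 156 calls, 100 Mbps -> 1562 calls
    print(rate_mbps, "Mbps ->", voice_calls(rate_mbps * 1_000_000), "calls")
```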
In digital systems, continuous transmission is not required because users do not use the allotted
bandwidth all the time. In such cases, TDMA is a complementary access technique to FDMA.
Global Systems for Mobile communications (GSM) uses the TDMA technique. In TDMA, the
entire bandwidth is available to the user but only for a finite period of time. In most cases the
available bandwidth is divided into fewer channels compared to FDMA and the users are allotted
time slots during which they have the entire channel bandwidth at their disposal.
TDMA requires careful time synchronization since users share the bandwidth in the time
domain. Because the number of channels is smaller, inter-channel interference is almost negligible. TDMA
uses different time slots for transmission and reception. This type of duplexing is referred to as
Time division duplexing(TDD).
CDMA/FDD in IS-95
In this standard, the frequency range is: 869-894 MHz (for Rx) and 824-849 MHz (for Tx). In
such a system, there are a total of 20 channels and 798 users per channel. For each channel, the
bit rate is 1.2288 Mbps. For orthogonality, it usually combines 64 Walsh-Hadamard codes with
an m-sequence.
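The orthogonality that Walsh-Hadamard codes provide can be illustrated with a small sketch using the Sylvester construction; a real IS-95 system uses 64-chip codes, but 8 chips are enough to show the idea:

```python
# Generate Walsh-Hadamard codes (Sylvester construction) and show that
# distinct codes are orthogonal: their chip-wise dot product is zero.
def hadamard(n):
    # n must be a power of two; returns n code rows of +/-1 chips
    h = [[1]]
    while len(h) < n:
        h = ([row + row for row in h] +
             [row + [-c for c in row] for row in h])
    return h

codes = hadamard(8)
dot = sum(a * b for a, b in zip(codes[2], codes[5]))
print(dot)  # 0 -- true for any two distinct rows
```

Because the dot product of two different codes is zero, a receiver correlating with its own code nulls out every other user's signal, which is what lets all users share the same band simultaneously.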
Layered Tasks
We use the concept of layers in our daily life. As an example, let us consider two friends who
communicate through postal mail. The process of sending a letter to a friend would be complex if
there were no services available from the post office. Below Figure shows the steps in this task.
In Figure we have a sender, a receiver, and a carrier that transports the letter. There is a hierarchy
of tasks.
The OSI model has seven layers. The principles that were applied to arrive at the seven layers
can be briefly summarized as follows:
1. A layer should be created where a different abstraction is needed.
2. Each layer should perform a well-defined function.
3. The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
4. The layer boundaries should be chosen to minimize the information flow across the interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown
together in the same layer out of necessity and small enough that the architecture does not
become unwieldy.
Application Layer
This layer is responsible for providing interface to the application user. This layer encompasses
protocols which directly interact with the user.
Presentation Layer
This layer defines how data in the native format of the remote host should be presented in the
native format of the local host.
Session Layer
This layer maintains sessions between remote hosts. For example, once user/password
authentication is done, the remote host maintains this session for a while and does not ask for
authentication again in that time span.
Transport Layer
This layer is responsible for end-to-end delivery between hosts.
Network Layer
This layer is responsible for address assignment and for uniquely addressing hosts in a network.
Physical Layer
This layer defines the hardware, cabling, wiring, power output, pulse rate etc.
Physical Layer
The physical layer, the lowest layer of the OSI model, is concerned with the transmission and
reception of the unstructured raw bit stream over a physical medium. It describes the
electrical/optical, mechanical, and functional interfaces to the physical medium, and carries the
signals for all of the higher layers.
Data encoding
Modifies the simple digital signal pattern (1s and 0s) used by the PC to better accommodate the
characteristics of the physical medium, and to aid in bit and frame synchronization. It
determines:
» What signal state represents a binary 1
» How the receiving station knows when a "bit-time" starts
» How the receiving station delimits a frame
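One common answer to the first two questions is Manchester encoding, which these notes do not name explicitly but which is a classic physical-layer scheme; the sketch below uses the IEEE 802.3 convention (1 = low-to-high mid-bit transition):

```python
# Manchester encoding sketch: every bit becomes a pair of half-bit signal
# levels with a guaranteed mid-bit transition, so the receiver gets a clock
# edge in every bit-time (solving bit synchronization).
def manchester(bits):
    # 1 -> low then high, 0 -> high then low (802.3 convention)
    return [(0, 1) if b else (1, 0) for b in bits]

print(manchester([1, 0, 1]))  # [(0, 1), (1, 0), (0, 1)]
```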
Physical medium
Physical medium attachment, accommodating various possibilities in the medium:
» Will an external transceiver (MAU) be used to connect to the medium?
» How many pins do the connectors have and what is each pin used for?
Transmission technique
Determines whether the encoded bits will be transmitted by baseband (digital) or broadband
(analog) signaling.
Another issue that arises in the data link layer (and most of the higher layers as well) is how to
keep a fast transmitter from drowning a slow receiver in data. Some traffic regulation mechanism
is often needed to let the transmitter know how much buffer space the receiver has at the
moment. Frequently, this flow regulation and the error handling are integrated.
Broadcast networks have an additional issue in the data link layer: how to control access to the
shared channel. A special sublayer of the data link layer, the medium access control sublayer,
deals with this problem.
Addressing
Headers and trailers are added, containing the physical addresses of the adjacent nodes, and
removed upon a successful delivery.
Flow Control
This avoids overwriting on the receiver's buffer by regulating the amount of data that can be sent.
Error control
It checks the CRC to ensure the correctness of the frame. If incorrect, it asks for retransmission.
Again, here there are multiple schemes (positive acknowledgement, negative acknowledgement,
go-back-n, sliding window, etc.).
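The CRC check mentioned above boils down to modulo-2 (XOR) long division; the sketch below uses a hypothetical 4-bit generator polynomial (x^3 + x + 1, i.e. 1011) for illustration, not the generator of any particular data-link standard:

```python
# Modulo-2 division behind a CRC: append (poly_len - 1) zero bits to the
# data, divide by the generator using XOR, and keep the remainder.
def crc_remainder(data_bits, poly=0b1011, poly_len=4):
    reg = data_bits << (poly_len - 1)        # append zero bits
    for shift in range(reg.bit_length() - 1, poly_len - 2, -1):
        if reg & (1 << shift):               # leading bit set: subtract (XOR)
            reg ^= poly << (shift - poly_len + 1)
    return reg                               # remainder sent with the data

rem = crc_remainder(0b11010011101100)
print(bin(rem))  # 0b100
```

The sender transmits the data followed by this remainder; the receiver repeats the division and asks for retransmission if its remainder differs.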
Node-to-node delivery
Finally, it is responsible for error free delivery of the entire frame/ packet to the next adjacent
node (node to node delivery)
Network Layer
The network layer controls the operation of the subnet. A key design issue is determining how
packets are routed from source to destination. Routes can be based on static tables that are "wired
into" the network and rarely changed. They can also be determined at the start of each
conversation, for example, a terminal session (e.g., a login to a remote machine). Finally, they
can be highly dynamic, being determined anew for each packet to reflect the current network
load.
If too many packets are present in the subnet at the same time, they will get in one another's way,
forming bottlenecks. The control of such congestion also belongs to the network layer. More
generally, the quality of service provided (delay, transit time, jitter, etc.) is also a network layer
issue.
When a packet has to travel from one network to another to get to its destination, many problems
can arise. The addressing used by the second network may be different from the first one. The
second one may not accept the packet at all because it is too large. The protocols may differ, and
so on. It is up to the network layer to overcome all these problems to allow heterogeneous
networks to be interconnected.
In broadcast networks, the routing problem is simple, so the network layer is often thin or even
nonexistent.
Transport Layer
The basic function of the transport layer is to accept data from above, split it up into smaller
units if need be, pass these to the network layer, and ensure that the pieces all arrive correctly at
the other end. Furthermore, all this must be done efficiently and in a way that isolates the upper
layers from the inevitable changes in the hardware technology.
The transport layer also determines what type of service to provide to the session layer, and,
ultimately, to the users of the network. The most popular type of transport connection is an error-
free point-to-point channel that delivers messages or bytes in the order in which they were sent.
However, other possible kinds of transport service are the transporting of isolated messages, with
no guarantee about the order of delivery, and the broadcasting of messages to multiple
destinations. The type of service is determined when the connection is established. (As an aside,
an error-free channel is impossible to achieve; what people really mean by this term is that the
error rate is low enough to ignore in practice.)
The transport layer is a true end-to-end layer, all the way from the source to the destination. In
other words, a program on the source machine carries on a conversation with a similar program
on the destination machine, using the message headers and control messages. In the lower layers,
the protocols are between each machine and its immediate neighbors, and not between the
ultimate source and destination machines, which may be separated by many routers. This is the
difference between layers 1 through 3, which are chained, and layers 4 through 7, which are end
to end.
Session Layer
The session layer allows users on different machines to establish sessions between them.
Sessions offer various services, including dialog control (keeping track of whose turn it is to
transmit), token management (preventing two parties from attempting the same critical
operation at the same time), and synchronization (check pointing long transmissions to allow
them to continue from where they were after a crash).
» Sessions and sub-sessions : The layer divides a session into sub-sessions, adding a
check-pointing feature to avoid retransmission of entire messages.
» Synchronization : The session layer decides the order in which data need to be passed to the
transport layer.
» Dialog control : The session layer also decides which user/application sends data, at what
point of time, and whether the communication is simplex, half duplex or full duplex.
» Session closure : The session layer ensures that the session between the hosts is closed
gracefully.
Presentation Layer
Unlike lower layers, which are mostly concerned with moving bits around, the presentation layer
is concerned with the syntax and semantics of the information transmitted. In order to make it
possible for computers with different data representations to communicate, the data structures to
be exchanged can be defined in an abstract way, along with a standard encoding to be used "on
the wire." The presentation layer manages these abstract data structures and allows higher-level
data structures (e.g., banking records) to be defined and exchanged.
» Encryption : The presentation layer performs data encryption and decryption for security.
» Compression : For the efficient transmission, the presentation layer performs data
compression before sending and decompression at the destination.
Application Layer
The application layer contains a variety of protocols that are commonly needed by users. One
widely used application protocol is HTTP (Hyper Text Transfer Protocol), which is the basis for
the World Wide Web. When a browser wants a Web page, it sends the name of the page it wants
to the server using HTTP. The server then sends the page back. Other application protocols are
used for file transfer, electronic mail, and network news.
The responsibilities of the application layer are as follows :
» Network abstraction : The application layer provides an abstraction of the underlying
network to an end user and an application.
» File access and transfer : It allows a user to access, download or upload files from/to a remote
host.
» World Wide Web (WWW) : Accessing the Web pages is also a part of this layer.
Unlike the OSI reference model, the TCP/IP reference model has only 4 layers :
1. Host-to-Network Layer
2. Internet Layer
3. Transport Layer
4. Application Layer
Host-to-Network Layer
The TCP/IP reference model does not really say much about what happens here, except to point
out that the host has to connect to the network using some protocol so it can send IP packets to it.
This protocol is not defined and varies from host to host and network to network.
Internet Layer
This layer, called the internet layer, is the linchpin that holds the whole architecture together. Its
job is to permit hosts to inject packets into any network and have them travel independently to the
destination (potentially on a different network). They may even arrive in a different order than
they were sent, in which case it is the job of higher layers to rearrange them, if in-order delivery
is desired. Note that ''internet'' is used here in a generic sense, even though this layer is present in
the Internet.
The internet layer defines an official packet format and protocol called IP (Internet Protocol).
The job of the internet layer is to deliver IP packets where they are supposed to go. Packet
routing is clearly the major issue here, as is avoiding congestion. For these reasons, it is
reasonable to say that the TCP/IP internet layer is similar in functionality to the OSI network
layer. Fig. shows this correspondence.
The protocols used to determine who goes next on a multi-access channel belong to a sub-layer
of the data link layer called the MAC (Medium Access Control) sub-layer. The MAC sub-layer
is especially important in LANs, many of which use a multi-access channel as the basis for
communication. For most people, understanding protocols involving multiple parties is easier
after two party protocols are well understood. For that reason we have deviated slightly from a
strict bottom-up order of presentation.
Access Methods
Access method is the term given to the set of rules by which networks arbitrate the use of a
common medium. It is the way the LAN keeps different streams of data from crashing into each
other as they share the network.
Networks need access methods for the same reason streets need traffic lights: to keep people from
hitting each other. Think of the access method as traffic law. The network cable is the street.
Traffic law (or the access method) regulates the use of the street (or cable), determining who can
drive (or send data) where and at what time. On a network, if two or more people try to send data
at exactly the same time, their signals will interfere with each other, ruining the data being
transmitted. The access method prevents this.
The access method works at the data-link layer (layer 2) because it is concerned with the use of
the medium that connects users. The access method doesn't care what is being sent over the
network, just like the traffic law doesn't stipulate what you can carry. It just says you have to
drive on the right side of the road and obey the traffic lights and signs.
Three traditional access methods are used today, although others exist and may become
increasingly important. They are Ethernet, Token Ring, and ARCnet. Actually, these
technologies encompass wider-ranging standards than their access methods. They also define
other features of network transmission, such as the electrical characteristics of signals, and the
size of data packets sent. Nevertheless, these standards are best known by the access methods
they employ.
ALOHA
In the 1970s, Norman Abramson and his colleagues at the University of Hawaii devised a new
and elegant method to solve the channel allocation problem. Their work has been extended by
many researchers since then (Abramson, 1985).
Although Abramson's work, called the ALOHA system, used ground-based radio broadcasting,
the basic idea is applicable to any system in which uncoordinated users are competing for the use
of a single shared channel. There are two versions of ALOHA: pure and slotted. They differ with
respect to whether time is divided into discrete slots into which all frames must fit. Pure ALOHA
does not require global time synchronization; slotted ALOHA does.
Pure ALOHA
The basic idea of an ALOHA system is simple: let users transmit whenever they have data to be
sent. There will be collisions, of course, and the colliding frames will be damaged. However, due
to the feedback property of broadcasting, a sender can always find out whether its frame was
destroyed by listening to the channel, the same way other users do. If the frame was destroyed,
the sender just waits a random amount of time and sends it again.
The waiting time must be random or the same frames will collide over and over, in lockstep.
Systems in which multiple users share a common channel in a way that can lead to conflicts are
widely known as contention systems.
We have made the frames all the same length because the throughput of ALOHA systems is
maximized by having a uniform frame size rather than by allowing variable length frames.
Whenever two frames try to occupy the channel at the same time, there will be a collision and
both will be garbled. If the first bit of a new frame overlaps with just the last bit of a frame
almost finished, both frames will be totally destroyed and both will have to be retransmitted
later. The checksum cannot (and should not) distinguish between a total loss and a near miss.
Bad is bad.
When a user types a line, the station transmits a frame containing it and checks the channel to
see if the transmission was successful. If so, the user sees the reply and goes back to typing. If
not, the user continues to wait and the frame is retransmitted until it has been sent. Let the ''frame
time'' denote the amount of time needed to transmit the standard, fixed-length frame. At this
point we assume that the infinite population of users generates new frames according to a
Poisson distribution with mean N frames per frame time. If N > 1, the user community is
generating frames at a higher rate than the channel can handle, and nearly every frame will suffer
a collision. For reasonable throughput we would expect 0 < N < 1. In addition to the new frames,
the stations also generate retransmissions of frames that previously suffered collisions.
Advantages :
» Superior to fixed assignment when there are large number of bursty stations.
» Adapts to varying number of stations.
Disadvantages :
» Theoretically proven throughput maximum of 18.4%.
» Requires queuing buffers for retransmission of packets.
Slotted ALOHA
In 1972, Roberts published a method for doubling the capacity of an ALOHA system (Roberts,
1972). His proposal was to divide time into discrete intervals, each interval corresponding to one
frame. This approach requires the users to agree on slot boundaries. One way to achieve
synchronization would be to have one special station emit a pip at the start of each interval, like a
clock.
In Roberts' method, which has come to be known as slotted ALOHA, in contrast to Abramson's
pure ALOHA, a computer is not permitted to send whenever a carriage return is typed. Instead, it
is required to wait for the beginning of the next slot. Thus, the continuous pure ALOHA is turned
into a discrete one. Since the vulnerable period is now halved, the probability of no other traffic
during the same slot as our test frame is e^-G, which leads to
S = G e^-G
The slotted ALOHA peaks at G = 1, with a throughput of S =1/e or about 0.368, twice that of
pure ALOHA as shown in figure 3.4. If the system is operating at G = 1, the probability of an
empty slot is 0.368. The best we can hope for using slotted ALOHA is 37 percent of the slots
empty, 37 percent successes, and 26 percent collisions. Operating at higher values of G reduces
the number of empties but increases the number of collisions exponentially. To see how this
rapid growth of collisions with G comes about, consider the transmission of a test frame.
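The throughput figures quoted above can be checked directly from the two formulas, S = G e^-2G for pure ALOHA and S = G e^-G for slotted ALOHA:

```python
import math

# Throughput S as a function of offered load G for the two ALOHA variants.
def pure_aloha(G):
    return G * math.exp(-2 * G)    # vulnerable period of two frame times

def slotted_aloha(G):
    return G * math.exp(-G)        # vulnerable period halved to one slot

# Pure ALOHA peaks at G = 0.5, slotted ALOHA at G = 1.
print(round(pure_aloha(0.5), 3))     # 0.184
print(round(slotted_aloha(1.0), 3))  # 0.368
```

These maxima are exactly the 18.4% and 36.8% theoretical throughput limits listed in the advantages/disadvantages above.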
Advantages :
» Doubles the efficiency of pure ALOHA.
» Adaptable to a changing station population.
Disadvantages :
» Theoretically proven throughput maximum of 36.8%.
» Requires queuing buffers for retransmission of packets.
Synchronization required
» Synchronous system: time divided into slots
» Slot size equals fixed packet transmission time
» When Packet ready for transmission, wait until start of next slot
» Packets overlap completely or not at all
This protocol, known as CSMA/CD (CSMA with Collision Detection) is widely used on LANs
in the MAC sublayer. In particular, it is the basis of the popular Ethernet LAN, so it is worth
devoting some time to looking at it in detail. CSMA/CD, as well as many other LAN protocols,
uses the conceptual model of Fig.5. At the point marked t0, a station has finished transmitting its
frame. Any other station having a frame to send may now attempt to do so. If two or more
stations decide to transmit simultaneously, there will be a collision. Collisions can be detected by
looking at the power or pulse width of the received signal and comparing it to the transmitted
signal.
CSMA/CD can be in one of three states: contention, transmission, or idle.
After a station detects a collision, it aborts its transmission, waits a random period of time, and
then tries again, assuming that no other station has started transmitting in the meantime.
Therefore, our model for CSMA/CD will consist of alternating contention and transmission
periods, with idle periods occurring when all stations are quiet (e.g., for lack of work).
Now let us look closely at the details of the contention algorithm. Suppose that two stations both
begin transmitting at exactly time t0. How long will it take them to realize that there has been a
collision? The answer to this question is vital to determining the length of the contention period
and hence what the delay and throughput will be. The minimum time to detect the collision is
then just the time it takes the signal to propagate from one station to the other.
Based on this reasoning, you might think that a station not hearing a collision for a time equal to
the full cable propagation time after starting its transmission could be sure it had seized the
cable. By ''seized,'' we mean that all other stations knew it was transmitting and would not
interfere. This conclusion is wrong. Consider the following worst-case scenario. Let the time for
a signal to propagate between the two farthest stations be τ. At t0, one station begins transmitting.
At τ - ε, an instant before the signal arrives at the most distant station, that station also begins
transmitting. Of course, it detects the collision almost instantly and stops, but the little noise
burst caused by the collision does not get back to the original station until time 2τ - ε. In other
words, in the worst case a station cannot be sure that it has seized the channel until it has
transmitted for 2τ without hearing a collision. For this reason we will model the contention
interval as a slotted ALOHA system with slot width 2τ. On a 1-km long coaxial cable, τ ≈ 5 μsec.
For simplicity we will assume that each slot contains just 1 bit. Once the channel has been
seized, a station can transmit at any rate it wants to, of course, not just at 1 bit per sec.
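The contention-slot width 2τ for a given cable length can be computed directly, assuming a signal propagation speed of about 2 x 10^8 m/s (a typical figure for coaxial cable):

```python
# CSMA/CD contention slot: twice the one-way propagation delay between the
# two farthest stations on the cable.
def contention_slot(cable_m, prop_speed=2e8):
    tau = cable_m / prop_speed     # one-way propagation delay, seconds
    return 2 * tau

print(contention_slot(1000) * 1e6)  # ~10 microseconds for a 1 km cable
```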
Collision-Free Protocols
Although collisions do not occur with CSMA/CD once a station has unambiguously captured the
channel, they can still occur during the contention period. These collisions adversely affect the
system performance, especially when the cable is long and the frames are short. And CSMA/CD
is not universally applicable. In this section, we will examine some protocols that resolve the
contention for the channel without any collisions at all, not even during the contention period.
Most of these are not currently used in major systems, but in a rapidly changing field, having
some protocols with excellent properties available for future systems is often a good thing. In the
protocols to be described, we assume that there are exactly N stations, each with a unique
address from 0 to N - 1 ''wired'' into it. It does not matter that some stations may be inactive part
of the time. We also assume that propagation delay is negligible.
A Bit-Map Protocol
In this collision-free protocol, the basic bit-map method, each contention period consists of
exactly N slots. If station 0 has a frame to send, it transmits a 1 bit during the zeroth slot. No
other station is allowed to transmit during this slot. Regardless of what station 0 does, station 1
gets the opportunity to transmit a 1 during slot 1, but only if it has a frame queued. In general,
station j may announce that it has a frame to send by inserting a 1 bit into slot j. After all N slots
have passed by, each station has complete knowledge of which stations wish to transmit. At that
point, they begin transmitting in numerical order.
Since everyone agrees on who goes next, there will never be any collisions. After the last ready
station has transmitted its frame, an event all stations can easily monitor, another N bit
contention period is begun. If a station becomes ready just after its bit slot has passed by, it is out
of luck and must remain silent until every station has had a chance and the bit map has come
around again. Protocols like this in which the desire to transmit is broadcast before the actual
transmission are called reservation protocols.
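The bit-map reservation round described above can be sketched as a toy model; the set of ready stations below is a hypothetical example:

```python
# Basic bit-map protocol: during an N-slot contention period each ready
# station j announces itself by setting bit j; ready stations then transmit
# in numerical order, so there are never any collisions.
def bitmap_round(n_stations, ready):
    bitmap = [1 if j in ready else 0 for j in range(n_stations)]
    order = [j for j, bit in enumerate(bitmap) if bit]
    return bitmap, order

bitmap, order = bitmap_round(8, {1, 3, 7})
print(bitmap)  # [0, 1, 0, 1, 0, 0, 0, 1]
print(order)   # stations transmit as 1, 3, 7
```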
Binary Countdown
A problem with the basic bit-map protocol is that the overhead is 1 bit per station, so it does not
scale well to networks with thousands of stations. We can do better than that by using binary
station addresses. A station wanting to use the channel now broadcasts its address as a binary bit
string, starting with the high-order bit. All addresses are assumed to be the same length. The bits
in each address position from different stations are BOOLEAN ORed together. We will call this
protocol binary countdown. It implicitly assumes that the transmission delays are negligible so
that all stations see asserted bits essentially instantaneously. To avoid conflicts, an arbitration
rule must be applied: as soon as a station sees that a high-order bit position that is 0 in its address
has been overwritten with a 1, it gives up.
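The arbitration can be sketched as a toy simulation; the station addresses below are hypothetical examples, and the channel OR is modeled with a bitwise OR:

```python
# Toy binary countdown: stations broadcast their addresses from the
# high-order bit down; the channel ORs the bits, and a station drops out the
# moment a 0 in its own address is overwritten by a 1. Highest address wins.
def binary_countdown(addresses, width=4):
    alive = set(addresses)
    for bit in range(width - 1, -1, -1):
        channel = 0
        for a in alive:
            channel |= (a >> bit) & 1              # bits ORed on the channel
        if channel:
            alive = {a for a in alive if (a >> bit) & 1}  # zeros give up
    return alive.pop()

print(binary_countdown([2, 4, 9, 7]))  # 9 (binary 1001) wins
```

Note the implicit priority: higher-numbered stations always beat lower-numbered ones, which is a known fairness drawback of the scheme.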
Stations giving up in binary countdown
Error Detection and Correction
Even if we know what types of errors can occur, we cannot simply recognize them. One way is
to compare the received copy with another copy of the intended transmission. In this mechanism
the source data block is sent twice; the receiver compares the two blocks with the help of a
comparator, and if they differ, a request for re-transmission is made. To achieve forward error
correction, three copies of the same data block are sent and a majority decision selects the
correct block. These methods are very inefficient and increase the traffic two or three times.
Fortunately there are more efficient error detection and correction codes. There are two basic
strategies for dealing with errors. One way is to include enough redundant information (extra bits
are introduced into the data stream at the transmitter on a regular and logical basis) along with
each block of data sent to enable the receiver to deduce what the transmitted character must have
been. The other way is to include only enough redundancy to allow the receiver to deduce that
error has occurred, but not which error has occurred and the receiver asks for retransmission. The
former strategy uses Error-Correcting Codes and latter uses Error-detecting Codes.
Types of errors
These interferences can change the timing and shape of the signal. If the signal is carrying binary
encoded data, such changes can alter the meaning of the data.
These errors can be divided into two types :
1. Single-bit error
2. Burst error.
Single-bit Error
The term single-bit error means that only one bit of a given data unit (such as a byte, character, or
data unit) is changed from 1 to 0 or from 0 to 1.
Single-bit errors are the least likely type of error in serial data transmission. To see why, imagine
a sender sends data at 10 Mbps. This means that each bit lasts only 0.1 μs (micro-second). For
a single-bit error to occur, noise must have a duration of only 0.1 μs, which is very
rare. However, a single-bit error can happen if we are having a parallel data transmission. For
example, if 16 wires are used to send all 16 bits of a word at the same time and one of the wires
is noisy, one bit is corrupted in each word.
Burst Error
The term burst error means that two or more bits in the data unit have changed from 0 to 1 or
vice versa. Note that a burst error does not necessarily mean that the errors occur in consecutive
bits. The length of the burst is measured from the first corrupted bit to the last corrupted bit;
some bits in between may not be corrupted.
Burst Error
Burst errors are most likely to happen in serial transmission. The duration of the noise is
normally longer than the duration of a single bit, which means that when noise affects data, it
affects a set of bits. The number of bits affected depends on the data rate and the duration of the
noise.
Blocks of data from the source are passed through a parity-bit generator, where a parity bit of 1
is appended to the block if it contains an odd number of 1's (ON bits), and a 0 is appended if it
contains an even number of 1's. At the receiving end the parity bit is computed from the received
data bits and compared with the received parity bit. This scheme makes the total number of 1's
even, which is why it is called even parity checking. Considering a 4-bit word, the different
combinations of the data words and the corresponding code words are shown below.
Even-parity checking scheme
An observation of the table reveals that to move from one code word to another, at least two data
bits must be changed. Hence this set of code words is said to have a minimum distance
(Hamming distance) of 2, which means that a receiver that has knowledge of the code word set
can detect all single-bit errors in each code word. However, if two errors occur in a code word, it
becomes another valid member of the set; the decoder will see only a valid code word and know
nothing of the error. Thus errors in more than one bit cannot be detected. In fact, it can be shown
that a single parity check code can detect only an odd number of errors in a code word.
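The scheme above can be sketched in a few lines of Python (the helper names are illustrative,
not from any library). Note how a single flipped bit is caught but a second flip restores a valid
code word, matching the minimum-distance-2 observation:

```python
def add_even_parity(data_bits):
    """Append a parity bit so the total number of 1s is even."""
    parity = sum(data_bits) % 2          # 1 if the data has an odd number of 1s
    return data_bits + [parity]

def check_even_parity(code_word):
    """Return True if the code word contains an even number of 1s."""
    return sum(code_word) % 2 == 0

word = [1, 0, 1, 1]                      # 4-bit data word with three 1s
code = add_even_parity(word)             # becomes [1, 0, 1, 1, 1]
assert check_even_parity(code)

corrupted = code[:]
corrupted[0] ^= 1                        # single-bit error: detected
assert not check_even_parity(corrupted)

corrupted[1] ^= 1                        # second error: slips through undetected
assert check_even_parity(corrupted)
```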
Two-dimensional parity checking increases the likelihood of detecting burst errors. As shown in
Fig. 3.2.4, a 2-D parity check of n bits can detect a burst error of n bits. A burst error of more
than n bits is also detected by the 2-D parity check with high probability. There is, however, one
pattern of error that remains elusive: if two bits in one data unit are damaged and two bits in
exactly the same positions in another data unit are also damaged, the 2-D parity checker will not
detect the error.
For example, take two data units 11001100 and 10101100. If the first bit and the second-from-last
bit in each of them are changed, making the data units 01001110 and 00101110, the errors cannot
be detected by the 2-D parity check.
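This blind spot can be reproduced with a short sketch (illustrative function names; rows carry a
row-parity bit and a final column-parity row):

```python
def two_d_parity(rows):
    """Append a row-parity bit to each row, then a column-parity row."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    col_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [col_parity]

a = [1, 1, 0, 0, 1, 1, 0, 0]
b = [1, 0, 1, 0, 1, 1, 0, 0]
original = two_d_parity([a, b])

# Flip the first bit and the second-from-last bit of BOTH units
a2, b2 = a[:], b[:]
for r in (a2, b2):
    r[0] ^= 1
    r[-2] ^= 1
damaged = two_d_parity([a2, b2])

# Every row parity and the column-parity row agree with the original,
# so the four flipped bits go unnoticed
assert [row[-1] for row in original] == [row[-1] for row in damaged]
assert original[-1] == damaged[-1]
```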
Checksum
In the checksum error detection scheme, the data is divided into k segments of m bits each. At
the sender's end the segments are added using 1's complement arithmetic to get the sum, and the
sum is complemented to get the checksum. The checksum segment is sent along with the data
segments. At the receiver's end, all received segments are added using 1's complement
arithmetic to get the sum, and the sum is complemented. If the result is zero, the received data is
accepted; otherwise it is discarded.
The checksum detects all errors involving an odd number of bits. It also detects most errors
involving an even number of bits.
(a) Sender’s end for the calculation of the checksum, (b) Receiving end for checking the
checksum
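Both ends of the scheme can be sketched as follows (a minimal illustration assuming 8-bit
segments; the helper names are hypothetical):

```python
def ones_complement_sum(segments, bits=8):
    """Add m-bit segments with 1's complement (end-around carry) arithmetic."""
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> bits)   # wrap any carry back in
    return total

def make_checksum(segments, bits=8):
    """Sender: complement of the 1's complement sum of the segments."""
    return ~ones_complement_sum(segments, bits) & ((1 << bits) - 1)

data = [0b10110011, 0b01101100, 0b11100010]        # k = 3 segments of m = 8 bits
checksum = make_checksum(data)

# Receiver adds every segment, checksum included; the complement must be zero
received_sum = ones_complement_sum(data + [checksum])
assert ~received_sum & 0xFF == 0                   # accepted

# A single corrupted bit makes the check fail
data[0] ^= 0b00000100
assert ~ones_complement_sum(data + [checksum]) & 0xFF != 0   # discarded
```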
The mathematical operation performed is illustrated by dividing a sample 4-bit number by the
coefficients of the generator polynomial x^3 + x + 1, i.e. 1011, using modulo-2 arithmetic.
Modulo-2 arithmetic is a binary addition process without any carry-over, which is just the
Exclusive-OR operation. Consider the case where k = 1101. We divide 1101000 (i.e. k appended
with 3 zeros) by 1011, which produces the remainder r = 001, so that the bit frame (k+r) =
1101001 is actually transmitted through the communication channel. At the receiving end, the
received number 1101001 is divided by the same generator 1011; if the remainder is 000, it can
be assumed that the data is free of errors.
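The modulo-2 long division just described can be sketched as a short routine (the function name
is illustrative). It reproduces both the sender's remainder 001 and the receiver's all-zero check:

```python
def crc_remainder(data_bits, divisor_bits):
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    data = list(data_bits)
    n = len(divisor_bits)
    for i in range(len(data) - n + 1):
        if data[i] == '1':                     # divide only when the leading bit is 1
            for j in range(n):
                data[i + j] = str(int(data[i + j]) ^ int(divisor_bits[j]))
    return ''.join(data[-(n - 1):])

# k = 1101, generator x^3 + x + 1 -> 1011; append 3 zeros before dividing
assert crc_remainder('1101' + '000', '1011') == '001'

# Transmitted frame k+r = 1101001 leaves remainder 000 at the receiver
assert crc_remainder('1101001', '1011') == '000'
```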
Cyclic Redundancy Checks
The transmitter can generate the CRC by using a feedback shift register circuit. The same circuit
can also be used at the receiving end to check whether any error has occurred. All the values can
be expressed as polynomials of a dummy variable X. For example, for P = 11001 the
corresponding polynomial is X^4 + X^3 + 1. A polynomial is selected to have at least the following
properties:
o It should not be divisible by X.
o It should not be divisible by (X+1)
The first condition guarantees that all burst errors of a length equal to the degree of polynomial
are detected. The second condition guarantees that all burst errors affecting an odd number of
bits are detected.
CRC is a very effective error detection technique. If the divisor is chosen according to the
previously mentioned rules, its performance can be summarized as follows:
→ CRC can detect all single-bit errors.
→ CRC can detect all double-bit errors, provided the divisor contains at least three 1's.
→ CRC can detect any odd number of errors, provided the divisor satisfies the (X+1) condition above.
→ CRC can detect all burst errors of length less than the degree of the polynomial.
→ CRC detects most of the larger burst errors with a high probability.
→ For example, CRC-12 detects 99.97% of burst errors of length 12 or more.
In theory it is possible to correct any number of errors automatically. Error-correcting codes are
more sophisticated than error-detecting codes and require more redundant bits. The number of
bits required to correct a multiple-bit or burst error is so high that in most cases it is inefficient
to do so. For this reason, most error correction is limited to one-, two- or at most three-bit errors.
To calculate the number of redundant bits (r) required to correct d data bits, let us find the
relationship between the two. We have (d+r) total bits to be transmitted; r must then be able to
indicate at least d+r+1 different values. Of these, one value means no error, and the remaining
d+r values indicate the location of an error in each of the d+r positions. So, d+r+1 states must
be distinguishable by r bits, and r bits can indicate 2^r states. Hence, 2^r must be at least d+r+1:
2^r >= d + r + 1
The value of r must be determined by putting in the value of d in the relation. For example, if d is
7, then the smallest value of r that satisfies the above relation is 4. So the total bits, which are to
be transmitted is 11 bits ( d + r = 7 + 4 = 11).
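The relation can be checked with a tiny search (illustrative function name):

```python
def redundant_bits(d):
    """Smallest r satisfying 2**r >= d + r + 1."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

assert redundant_bits(7) == 4    # 7 data bits need 4 check bits: 11 bits in total
assert redundant_bits(4) == 3    # the Hamming(7,4) case: 4 data bits, 3 check bits
```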
Now let us examine how we can manipulate these bits to discover which bit is in error. A
technique developed by R.W.Hamming provides a practical solution. The solution or coding
scheme he developed is commonly known as Hamming Code. Hamming code can be applied to
data units of any length and uses the relationship between the data bits and redundant bits as
discussed.
Positions of redundancy bits in Hamming code
The Hamming code is used here to correct 4-bit data words (d4d3d2d1) with the help of three
redundant bits (r4, r2, r1) placed at bit positions 4, 2 and 1. For the example data 1010, first r1
(= 0) is calculated from the parity of bit positions 1, 3, 5 and 7. Then parity bit r2 is calculated
from bit positions 2, 3, 6 and 7. Finally, parity bit r4 is calculated from bit positions 4, 5, 6 and 7
as shown. If any corruption occurs in the transmitted code word 1010010, the bit position in
error can be found by recalculating r4r2r1 at the receiving end. For example, if the received code
word is 1110010, the recalculated value of r4r2r1 is 110, which indicates that the bit position in
error is 6, the decimal value of 110.
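The encoding and syndrome calculation just described can be sketched as follows (function
names are illustrative; the list holds bit positions 7 down to 1, left to right):

```python
def hamming_encode(d4, d3, d2, d1):
    """Place data at positions 7,6,5,3 and even-parity bits at 4,2,1."""
    r1 = d1 ^ d2 ^ d4            # covers positions 1, 3, 5, 7
    r2 = d1 ^ d3 ^ d4            # covers positions 2, 3, 6, 7
    r4 = d2 ^ d3 ^ d4            # covers positions 4, 5, 6, 7
    return [d4, d3, d2, r4, d1, r2, r1]

def hamming_syndrome(code):
    """Recompute the checks; the result is the 1-based error position (0 = ok)."""
    bit = {p: code[7 - p] for p in range(1, 8)}   # map position -> bit value
    c1 = bit[1] ^ bit[3] ^ bit[5] ^ bit[7]
    c2 = bit[2] ^ bit[3] ^ bit[6] ^ bit[7]
    c4 = bit[4] ^ bit[5] ^ bit[6] ^ bit[7]
    return c4 * 4 + c2 * 2 + c1

code = hamming_encode(1, 0, 1, 0)            # data 1010
assert code == [1, 0, 1, 0, 0, 1, 0]         # transmitted word 1010010
assert hamming_syndrome(code) == 0           # no error

received = [1, 1, 1, 0, 0, 1, 0]             # 1110010: position 6 flipped
assert hamming_syndrome(received) == 6       # syndrome 110 points at bit 6
```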
Example:
Let us consider an example for 5-bit data. Here 4 parity bits are required. Assume that during
transmission bit 5 has been changed from 1 to 0. The receiver receives the code word and
recalculates the four new parity bits using the same sets of bits used by the sender, plus the
relevant parity (r) bit for each set. It then assembles the new parity values into a binary number
in order of r positions (r8, r4, r2, r1).
Calculations :
Parity recalculated (r8 r4 r2 r1) = 0101 in binary = 5 in decimal.
Hence, the 5th bit of the code word is in error.
So, the correct code word which was transmitted is recovered by inverting the 5th bit of the
received code word.
1. Confidentiality : Requires that data only be accessible by authorized parties. This type of
access includes printing, displaying, and other forms of disclosure, including simply revealing
the existence of an object.
2. Integrity : Requires that only authorized parties can modify data. Modification includes
writing, changing, changing status, deleting, and creating.
3. Availability : Requires that data are available to authorized parties.
4. Authenticity : Requires that a host or service be able to verify the identity of a user.
1. Attacks against IP
A number of attacks against IP are possible. Typically, these exploit the fact that IP does not
provide a robust mechanism for authentication, that is, for proving that a packet came from
where it claims it did. A packet simply claims to originate from a given address, and there isn't a way to
be sure that the host that sent the packet is telling the truth. This isn't necessarily a weakness, per
se, but it is an important point, because it means that the facility of host authentication has to be
provided at a higher layer on the ISO/OSI Reference Model. Today, applications that require
strong host authentication (such as cryptographic applications) do this at the application layer.
2. Denial-of-Service
DoS (Denial-of-Service) attacks are probably the nastiest and the most difficult to address:
they're very easy to launch, difficult (sometimes impossible) to track, and it isn't easy to refuse
the requests of the attacker without also refusing legitimate requests for
service. The premise of a DoS attack is simple: send more requests to the machine than it can
handle. There are toolkits available in the underground community that make this a simple
matter of running a program and telling it which host to blast with requests. The attacker's
program simply makes a connection on some service port, perhaps forging the packet's header
information that says where the packet came from, and then dropping the connection. If the host
is able to answer 20 requests per second, and the attacker is sending 50 per second, obviously the
host will be unable to service all of the attacker's requests, much less any legitimate requests (hits
on the web site running there, for example). Such attacks were fairly common in late 1996 and
early 1997, but are now becoming less popular. Some things that can be done to reduce the risk
of being stung by a denial of service attack include :
» Not running your visible-to-the-world servers at a level too close to capacity
» Using packet filtering to prevent obviously forged packets from entering into your network
address space. Obviously forged packets would include those that claim to come from your own
hosts, addresses reserved for private networks as defined in RFC 1918 and the loopback network
(127.0.0.0).
» Keeping up-to-date on security-related patches for your hosts' operating systems.
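The second recommendation can be sketched as a simple ingress filter. The addresses and the
"own" network block below are illustrative examples, not a complete or recommended rule set:

```python
import ipaddress

# Hypothetical example: drop inbound packets whose claimed source is obviously forged
OWN_NETWORK = ipaddress.ip_network("203.0.113.0/24")      # this site's address block
FORGED_SOURCES = [
    OWN_NETWORK,                                          # claims to be one of our hosts
    ipaddress.ip_network("10.0.0.0/8"),                   # RFC 1918 private ranges
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("127.0.0.0/8"),                  # loopback network
]

def allow_inbound(source_ip):
    """Return True only if the claimed source is not an obviously forged address."""
    src = ipaddress.ip_address(source_ip)
    return not any(src in net for net in FORGED_SOURCES)

assert allow_inbound("198.51.100.7")        # plausible external source: pass
assert not allow_inbound("192.168.1.5")     # private source arriving from outside: drop
assert not allow_inbound("203.0.113.9")     # claims to come from our own network: drop
assert not allow_inbound("127.0.0.1")       # loopback: drop
```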
3. Unauthorized Access
Unauthorized access is a very high-level term that can refer to a number of different sorts of
attacks. The goal of these attacks is to access some resource that your machine should not
provide the attacker. For example, a host might be a web server, and should provide anyone with
requested web pages. However, that host should not provide command shell access without
being sure that the person making such a request is someone who should get it, such as a local
administrator.
5. Confidentiality Breaches
We need to examine the threat model: what is it that you're trying to protect yourself against?
There is certain information that could be quite damaging if it fell into the hands of a competitor,
an enemy, or the public. In these cases, it's possible that compromise of a normal user's account
on the machine can be enough to cause damage (perhaps in the form of PR, or obtaining
information that can be used against the company, etc.)
While many of the perpetrators of these sorts of break-ins are merely thrill-seekers interested in
nothing more than to see a shell prompt for your computer on their screen, there are those who
are more malicious, as we'll consider next. (Additionally, keep in mind that it's possible that
someone who is normally interested in nothing more than the thrill could be persuaded to do
more: perhaps an unscrupulous competitor is willing to hire such a person to hurt you.)
6. Destructive Behavior
Among the destructive sorts of break-ins and attacks, there are two major categories :
Data Diddling : The data diddler is likely the worst sort, since the fact of a break-in might not be
immediately obvious. Perhaps he's toying with the numbers in your spreadsheets, or changing the
dates in your projections and plans. Maybe he's changing the account numbers for the auto-
deposit of certain paychecks. In any case, rare is the case when you'll come in to work one day,
and simply know that something is wrong. An accounting procedure might turn up a discrepancy
in the books three or four months after the fact. Trying to track the problem down will certainly
be difficult, and once that problem is discovered, how can any of your numbers from that time
period be trusted? How far back do you have to go before you think that your data is safe?
Data Destruction : Some of those who perpetrate attacks are simply twisted jerks who like to delete
things. In these cases, the impact on your computing capability -- and consequently your
business -- can be nothing less than if a fire or other disaster caused your computing equipment
to be completely destroyed.
Cryptographic Algorithms
There are several ways of classifying cryptographic algorithms. For purposes of this paper, they
will be categorized based on the number of keys that are employed for encryption and
decryption, and further defined by their application and use. The three types of algorithms that
will be discussed are :
» Secret Key Cryptography (SKC) : Uses a single key for both encryption and decryption.
» Public Key Cryptography (PKC) : Uses one key for encryption and another for decryption.
» Hash Functions : Uses a mathematical transformation to irreversibly "encrypt" information.
Cryptographic Algorithms
Public-Key Cryptography
PKC depends upon the existence of so-called one-way functions: mathematical functions that
are easy to compute whereas their inverse functions are relatively difficult to compute. In PKC,
one of the keys is designated the public key and may be advertised as widely as the owner wants.
The other key is designated the private key and is never revealed to another party. It is
straightforward to send messages under this scheme. Suppose Shams wants to send Aadil a
message. Shams encrypts some information using Aadil's public key; Aadil decrypts the
ciphertext using his private key. This method can also be used to prove who sent a message:
Shams, for example, could encrypt some plaintext with his private key; when Aadil decrypts it
using Shams's public key, he knows that Shams sent the message and Shams cannot deny having
sent it (non-repudiation).
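Both uses can be illustrated with a toy RSA key pair. The primes here are tiny textbook values
chosen purely for illustration; real keys use primes hundreds of digits long, and one party would
normally hold each key pair:

```python
# Toy RSA key generation with tiny, illustrative primes
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (2753), Python 3.8+

message = 65

# Confidentiality: Shams encrypts with Aadil's public key (e, n);
# only the matching private key d can recover the plaintext.
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# Non-repudiation: Shams "signs" with his private key; anyone holding the
# public key can verify, so only Shams could have produced the signature.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```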
Hash Functions
Hash functions, also called message digests or one-way encryption, are algorithms that, in some
sense, use no key. Instead, a fixed-length hash value is computed from the plaintext in such a
way that neither the contents nor the length of the plaintext can be recovered from it. Hash
algorithms are typically used to provide a digital fingerprint of a file's contents, often used to
ensure that the file has not been altered by an intruder or virus. Hash functions are also
commonly employed by many operating systems to encrypt passwords. Hash functions, then,
provide a measure of the integrity of a file.
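A short sketch using Python's standard hashlib shows the fixed-length, tamper-evident
fingerprint (the message text is an invented example):

```python
import hashlib

# Fixed-length digest regardless of input size
digest = hashlib.sha256(b"transfer $100 to account 42").hexdigest()
assert len(digest) == 64                  # 256 bits rendered as 64 hex characters

# Any change to the contents yields a completely different fingerprint
tampered = hashlib.sha256(b"transfer $900 to account 42").hexdigest()
assert digest != tampered
```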
» Hash functions : Hash functions are well-suited for ensuring data integrity because any change
made to the contents of a message will result in the receiver calculating a different hash value
than the one placed in the transmission by the sender. Since it is highly unlikely that two
different messages will yield the same hash value, data integrity is ensured to a high degree of
confidence.
» Secret key cryptography : Secret key cryptography is ideally suited to encrypting messages,
thus providing privacy and confidentiality. The sender can generate a session key on a per-
message basis to encrypt the message; the receiver, of course, needs the same session key to
decrypt the message.
» Public-key cryptography : Public-key (asymmetric) schemes can also be used for
non-repudiation and user authentication; if the receiver can obtain the session key encrypted with
the sender's private key, then only this sender could have sent the message. Public-key
cryptography could, theoretically, also be used to encrypt messages although this is rarely done
because secret-key cryptography operates about 1000 times faster than public-key cryptography.
Figure puts all of this together and shows how a hybrid cryptographic scheme combines all of
these functions to form a secure transmission comprising digital signature and digital envelope.
In this example, the sender of the message is Shams and the receiver is Bello.
Combination of Encryption Techniques
A digital envelope comprises an encrypted message and an encrypted session key. Shams uses
secret key cryptography to encrypt his message using the session key, which he generates at
random with each session. Shams then encrypts the session key using Bello's public key. The
encrypted message and encrypted session key together form the digital envelope. Upon receipt,
Bello recovers the session secret key using his private key and then decrypts the encrypted
message.
The digital signature is formed in two steps. First, Shams computes the hash value of his
message; next, he encrypts the hash value with his private key. Upon receipt of the digital
signature, Bello recovers the hash value calculated by Shams by decrypting the digital signature
with Shams's public key. Bello can then apply the hash function to Shams's original message,
which he has already decrypted. If the resultant hash value is not the same as the value supplied
by Shams, then Bello knows that the message has been altered; if the hash values are the same,
Bello should believe that the message he received is identical to the one that Shams sent.
This scheme also provides nonrepudiation since it proves that Shams sent the message; if the
hash value recovered by Bello using Shams's public key proves that the message has not been
altered, then only Shams could have created the digital signature. Bello also has proof that he is
the intended receiver; if he can correctly decrypt the message, then he must have correctly
decrypted the session key, meaning that he holds the correct private key.
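The whole hybrid flow can be sketched end-to-end. This is a deliberately simplified model: the
XOR "cipher" stands in for a real secret-key algorithm, the hash is truncated to one byte so it
fits the toy RSA modulus, and every key and message value is an illustrative invention:

```python
import hashlib

# Hypothetical toy RSA key pairs (tiny primes, for illustration only)
def make_keys(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)          # (public, private)

shams_pub, shams_priv = make_keys(61, 53)
bello_pub, bello_priv = make_keys(67, 71)

def rsa(value, key):
    exp, n = key
    return pow(value, exp, n)

def xor_encrypt(data, key):
    return bytes(b ^ key for b in data)          # toy stand-in for a secret-key cipher

message = b"pay Bello 100"
session_key = 173                                # generated at random per message

# Digital envelope: message under the session key, plus the session key
# wrapped with Bello's public key
envelope = (xor_encrypt(message, session_key & 0xFF), rsa(session_key, bello_pub))

# Digital signature: (truncated) hash of the message, encrypted with Shams's private key
h = int.from_bytes(hashlib.sha256(message).digest()[:1], "big")
signature = rsa(h, shams_priv)

# Bello opens the envelope with his private key...
key = rsa(envelope[1], bello_priv)
plaintext = xor_encrypt(envelope[0], key & 0xFF)
assert plaintext == message

# ...and checks the signature with Shams's public key
assert rsa(signature, shams_pub) == int.from_bytes(
    hashlib.sha256(plaintext).digest()[:1], "big")
```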
Firewall
A firewall is simply a group of components that collectively form a barrier between two
networks: a system that prevents unauthorized access to or from a network. Firewalls can be
implemented in hardware, software, or a combination of both. They are frequently used to
prevent unauthorized Internet users from accessing private networks connected to the Internet.
All data entering or leaving the Intranet pass through the
firewall, which examines each packet and blocks those that do not meet the specified security
criteria.
Types of Firewall
Firewalls can be divided into five basic types :
1. Packet filters
2. Stateful Inspection
3. Proxies
4. Dynamic
5. Kernel firewall
These divisions, however, are not well defined, as most modern firewalls have a mix of abilities
that place them in more than one of the categories listed.
To simplify, experts break the most commonly used firewalls down into three categories :
i. Application firewalls
ii. Network layer firewalls
iii. Proxy firewall
One important difference about many network layer firewalls is that they route traffic directly
through them, which means in order to use one, you either need to have a validly-assigned IP
address block or a private Internet address block. Network layer firewalls tend to be very fast and
almost transparent to their users.
In some cases, having an application in the way may impact performance and may make the
firewall less transparent. Early application layer firewalls are not particularly transparent to end-
users and may require some training. However, more modern application layer firewalls are
often totally transparent. Application layer firewalls tend to provide more detailed audit reports
and tend to enforce more conservative security models than network layer firewalls.
The future of firewalls sits somewhere between both network layer firewalls and application
layer firewalls. It is likely that network layer firewalls will become increasingly aware of the
information going through them, and application layer firewalls will become more and more
transparent. The end result will be kind of a fast packet-screening system that logs and checks
data as it passes through.
Proxy Firewalls
Proxy firewalls offer more security than other types of firewalls, but this is at the expense of
speed and functionality, as they can limit which applications your network can support. Why are
they more secure? Unlike stateful firewalls, or application layer firewalls, which allow or block
network packets from passing to and from a protected network, traffic does not flow through a
proxy. Instead, computers establish a connection to the proxy, which serves as an intermediary
and initiates a new network connection on behalf of the request. This prevents direct connections
between systems on either side of the firewall and makes it harder for an attacker to discover
where the network is, because they will never receive packets created directly by their target
system. Proxy firewalls also provide comprehensive, protocol-aware security analysis for the
protocols they support. This allows them to make better security decisions than products that
focus purely on packet header information.