Computer Network Notes

Computer Network Topologies

The term topology in computer networking refers to the way in which a network is laid out
physically. Two or more devices connect to a link; two or more links form a topology. The
topology of a network is the geometric representation of the relationship of all links and linking
devices (usually called nodes) to one another. The cost and flexibility of a network installation
are partly affected by its topology, as is system reliability. Many network topologies are
commonly used, but they all have certain similarities. Information is carried either through
space (wireless) or along a cable. The network must control the movement of information on the
medium so that data can be transmitted in a reliable manner. The basic topologies are bus, star,
ring, mesh, tree, and hybrid.

Bus Topology
The bus topology consists of a single cable that runs to every workstation; it is also known as a
linear bus. In other words, all the nodes (computers and servers) are connected to a single cable
(called the bus) with the help of interface connectors. This central cable is the backbone of the
network, and every workstation communicates with the other devices through this bus.

Computers on a bus topology network communicate by addressing data to a particular computer
and putting that data on the cable in the form of electronic signals. To understand how computers
communicate on a bus you need to be familiar with three concepts:
i. Sending the signal
ii. Signal Bounce
iii. The Terminator

i. Sending the signal


Network data in the form of electronic signals is sent to all of the computers on the network;
however, the information is accepted only by the computer whose address matches the address
encoded in the original signal. Only one computer at a time can send messages.
ii. Signal Bounce
Because the data, or electronic signal, is sent to the entire network, it will travel from one end of
the cable to the other. If the signal were allowed to continue uninterrupted, it would keep
bouncing back and forth along the cable and prevent other computers from sending signals.
Therefore, the signal must be stopped.

iii. The Terminator


To stop the signal from bouncing, a component called a terminator is placed at each end of the
cable to absorb free signals. Absorbing the signal clears the cable so that other computers can
send data. Every cable end on the network must be plugged into something. For example, a cable
end could be plugged into a computer or a connector to extend the cable length. Any open cable
ends (ends not plugged into something) must be terminated to prevent signal bounce.

In bus topology nodes are connected to the bus cable by drop lines and taps. See figure 11. A
drop line is a connection running between the device and the main cable. A tap is a connector
that either splices into the main cable or punctures the sheathing of a cable to create a contact
with the metallic core. As a signal travels along the backbone, some of its energy is transformed
into heat. Therefore, it becomes weaker and weaker as it travels farther and farther. For this
reason there is a limit on the number of taps a bus can support and on the distance between those
taps.

Advantages of Linear Bus Topology


1. It is easy to set up and extend a bus network.
2. The cable length required for this topology is the least compared to other topologies.
3. Bus topology is very cheap.
4. A linear bus network is mostly used in small networks.

Disadvantages of Linear Bus Topology


1. There is a limit on the central cable length and the number of nodes that can be connected.
2. Dependency on the central cable has its disadvantages: if the main cable (i.e. the bus)
encounters a problem, the whole network breaks down.
3. Proper termination is required to absorb signals; the use of terminators is a must.
4. It is difficult to detect and troubleshoot a fault at an individual station.
5. Maintenance costs can get higher with time.
6. The efficiency of a bus network decreases as the number of devices connected to it increases.
7. It is not suitable for networks with heavy traffic.
8. Security is very low because all the computers receive the signal sent from the source.

Ring Topology
The ring topology connects computers on a single circle of cable. There are no terminated ends.
A ring topology connects one host to the next and the last host to the first. The signal travels
around the loop in one direction and passes through each computer. Unlike the passive bus
topology, each computer acts like a repeater to boost the signal and send it on to the next
computer. Because the signal passes through each computer, the failure of one computer can
impact the entire network.

One method of transmitting data around a ring is called token passing. The token is passed from
computer to computer until it gets to a computer that has data to send. The sending computer
modifies the token, puts an electronic address on the data, and sends it around the ring.
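
To make the idea concrete, here is a minimal Python sketch of token passing around a ring. The
station names and pending frames are invented for illustration; this is not the real Token Ring
protocol logic, only the circulating-token idea described above.

# Minimal token-passing sketch (illustrative only, not real Token Ring).
# A single token circulates; only the station holding it may transmit.
stations = ["A", "B", "C", "D"]                      # fixed ring order
pending = {"B": ("D", "hello"), "C": ("A", "hi")}    # station -> (dest, data)

def circulate(rounds=2):
    token_at = 0                                     # station holding the token
    for _ in range(rounds * len(stations)):
        name = stations[token_at]
        if name in pending:                          # has data: seize the token
            dest, data = pending.pop(name)
            print(f"{name} sends {data!r} to {dest} around the ring")
        token_at = (token_at + 1) % len(stations)    # pass token to next station

circulate()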
Advantages of Ring Topology
1. This type of network topology is very organized. Each node gets to send data when it
receives an empty token, which helps to reduce the chances of collision. Also, in a ring topology
all the traffic flows in only one direction at very high speed.
2. Even when the load on the network increases, its performance is better than that of Bus
topology.
3. There is no need for a network server to control the connectivity between workstations.
4. Additional components do not affect the performance of network.
5. Each computer has equal access to resources.

Disadvantages of Ring Topology


1. Each packet of data must pass through all the computers between source and destination. This
makes it slower than Star topology.
2. If one workstation or port goes down, the entire network gets affected.
3. Network is highly dependent on the wire which connects different components.
4. MAUs and Token Ring network cards are expensive compared to Ethernet cards and hubs.

Star Topology
In the star topology, computers are connected by cable segments to a centralized component,
called a hub or switch. Signals are transmitted from the sending computer through the hub or
switch to all computers on the network. This topology originated in the early days of computing
with computers connected to a centralized mainframe computer. It is now a common topology in
microcomputer networking. Each device has a dedicated point-to-point link only to a central
controller, usually called a hub. The devices are not directly linked to one another. Unlike a mesh
topology, a star topology does not allow direct traffic between devices. The controller acts as an
exchange: If one device wants to send data to another, it sends the data to the controller, which
then relays the data to the other connected device.

The star network offers centralized resources and management. However, because each
computer is connected to a central point, this topology requires a great deal of cable in a large
network installation. Also, if the central point fails, the entire network goes down.
Advantages of Star Topology
1. Compared to bus topology it gives far better performance; signals don’t necessarily
get transmitted to all the workstations. A sent signal reaches the intended destination after
passing through no more than 3-4 devices and 2-3 links. Performance of the network is
dependent on the capacity of the central hub.
2. Easy to connect new nodes or devices. In star topology new nodes can be added easily without
affecting rest of the network. Similarly components can also be removed easily.
3. Centralized management. It helps in monitoring the network.
4. Failure of one node or link doesn’t affect the rest of network. At the same time it is easy to
detect the failure and troubleshoot it.

Disadvantages of Star Topology


1. Too much dependency on the central device has its own drawbacks: if it fails, the whole
network goes down.
2. The use of a hub, a router, or a switch as the central device increases the overall cost of the
network.
3. Performance, as well as the number of nodes that can be added to such a topology, depends
on the capacity of the central device.

Mesh Topology
In a mesh topology, every device has a dedicated point-to-point link to every other device. The
term dedicated means that the link carries traffic only between the two devices it connects. In a
mesh topology of n nodes, node 1 must be connected to (n - 1) nodes, node 2 must be connected
to (n - 1) nodes, and finally node n must be connected to (n - 1) nodes, giving n(n - 1) link ends
in total. Because each physical link connects two devices and is counted once at each end, a
mesh topology needs n(n - 1)/2 physical links.
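
The short Python sketch below simply evaluates these formulas, showing how quickly the link
count grows with n.

# Number of physical links and per-device I/O ports in a full mesh of n nodes.
def mesh_requirements(n):
    links = n * (n - 1) // 2    # each duplex link is shared by two devices
    ports = n - 1               # each device links to every other device
    return links, ports

for n in (4, 8, 16):
    links, ports = mesh_requirements(n)
    print(f"n={n}: {links} links, {ports} I/O ports per device")
# n=4: 6 links; n=8: 28 links; n=16: 120 links -- growth is O(n^2)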

To accommodate many links, every device on the network must have (n – 1) input/output (I/O)
ports to be connected to the (n – 1) stations as shown in Figure above. For these reasons a mesh
topology is usually implemented in a limited fashion, as a backbone connecting the main
computers of a hybrid network that can include several other topologies. One practical example
of a mesh topology is the connection of telephone regional offices in which each regional office
needs to be connected to every other regional office.

Advantages of Mesh topology


1. Data can be transmitted from different devices simultaneously. This topology can withstand
high traffic.
2. Even if one of the links fails, an alternative path is always present, so data transfer
is not affected.
3. Expansion and modification in topology can be done without disrupting other nodes.

Disadvantages of Mesh topology


1. There is a large amount of redundancy in many of the network connections.
2. The overall cost of this network is far higher than that of other network topologies.
3. Set-up and maintenance of this topology are very difficult. Even administration of the network
is tough.

Tree Topology
Tree topology can be derived from the star topology. A tree has a hierarchy of various hubs,
like the branches of a tree; hence the name. As shown in Figure 1.6, in this topology every node
is connected to some hub. However, only a few nodes are connected directly to the central hub.

The central hub contains a repeater, which looks at the incoming bits and regenerates them
afresh as full-blown signals for 0 or 1 as required. This allows digital signals to traverse longer
distances. Therefore, the central hub is also called an active hub. The tree topology also contains
many secondary hubs, which may be active hubs or passive hubs. The merits and demerits of
tree topology are almost similar to those of the star topology.
Advantages of Tree Topology
1. It is an extension of the star and bus topologies, so in networks where these topologies can't
be implemented individually for reasons related to scalability, tree topology is the best alternative.
2. Expansion of Network is possible and easy.
3. Here, we divide the whole network into segments (star networks), which can be easily
managed and maintained.
4. Error detection and correction is easy.
5. Each segment is provided with dedicated point-to-point wiring to the central hub.
6. If one segment is damaged, other segments are not affected.

Disadvantages of Tree Topology


1. Because of its basic structure, tree topology relies heavily on the main bus cable; if it breaks,
the whole network is crippled.
2. As more and more nodes and segments are added, the maintenance becomes difficult.
3. Scalability of the network depends on the type of cable used.

Hybrid Topology
A network topology is a connection of various links and nodes, communicating with each other
for the transfer of data. We have also seen the various advantages and disadvantages of the star,
bus, ring, and mesh topologies. Hybrid, as the name suggests, is a mixture of two different
things. Similarly, in this type of topology we integrate two or more different topologies to form
a resultant topology which has the good points (as well as the weaknesses) of all the constituent
basic topologies rather than having the characteristics of one specific topology. This
combination of topologies is done according to the requirements of the organization.

For example, if there is an existing ring topology in one office department and a bus topology
in another department, connecting these two will result in a hybrid topology. Remember that
connecting two similar topologies cannot be termed a hybrid topology. Star-ring and star-bus
networks are the most common examples of hybrid networks.
Advantages of Hybrid Network Topology
1. Reliable: Unlike in other networks, fault detection and troubleshooting are easy in this type of
topology. The part in which a fault is detected can be isolated from the rest of the network and
the required corrective measures can be taken without affecting the functioning of the rest of the
network.
2. Scalable: It’s easy to increase the size of the network by adding new components, without
disturbing the existing architecture.
3. Flexible: A hybrid network can be designed according to the requirements of the organization
and by optimizing the available resources. Special care can be given to nodes where traffic is
high as well as where the chances of fault are high.
4. Effective: Hybrid topology is the combination of two or more topologies, so we can design it
in such a way that the strengths of the constituent topologies are maximized while their
weaknesses are neutralized. For example, we saw that ring topology has good data reliability
(achieved by the use of tokens) and star topology has high fault tolerance (as each node is not
directly connected to the others but through a central device), so these two can be used
effectively in a hybrid star-ring topology.

Disadvantages of Hybrid Topology


1. Complexity of design: One of the biggest drawbacks of hybrid topology is its design. It is not
easy to design this type of architecture, and it’s a tough job for designers. The configuration and
installation process needs to be very efficient.
2. Costly hubs: The hubs used to connect two distinct networks are very expensive. These hubs
are different from usual hubs, as they need to be intelligent enough to work with different
architectures and should function even if a part of the network is down.
3. Costly infrastructure: As hybrid architectures are usually larger in scale, they require a lot of
cables, cooling systems, sophisticated network devices, etc.

Networking Devices
All but the most basic of networks require devices to provide connectivity and functionality.
Understanding how these networking devices operate and identifying the functions they perform
are essential skills for any network administrator and requirements for a Network+ candidate.
This section introduces commonly used networking devices, and, although it is true that you are
not likely to encounter all of the devices mentioned here on the exam, you can be assured of
working with at least some of them.

The main Networking Devices are:


✓ Hubs
✓ Switches
✓ Bridges
✓ Routers
✓ Gateways
✓ CSU/DSU (Channel Service Unit/Data Service Unit)
✓ NICs (Network Interface Card)
✓ ISDN (Integrated Services Digital Network) adapters
✓ WAPs (Wireless Access Point)
✓ Modems
✓ Transceivers (media converters)
✓ Firewalls

What we Learn
✓ Describe how hubs and switches work
✓ Explain how hubs and switches can be connected to create larger networks
✓ Describe how bridges, routers, and gateways work
✓ Describe how routing protocols are used for dynamic routing
✓ Explain the purpose of other networking components such as Channel Service Unit/Digital
Service Unit (CSU/DSU) and gateways
✓ Describe the purpose and function of network cards
✓ Describe how to identify a MAC address
✓ Understand the function of a transceiver
✓ Describe the purpose of a firewall

Hubs
Hubs are simple network devices, and their simplicity is reflected in their low cost. Small hubs
with four or five ports, together with the requisite cables, provide everything needed to create a
small network. Hubs with more ports are available for networks that require greater capacity.

At the bottom of the networking food chain, so to speak, are hubs. Hubs are used in networks
that use twisted-pair cabling to connect devices. Hubs can also be joined together to create larger
networks. Hubs are simple devices that direct data packets to all devices connected to the hub,
regardless of whether the data packet is destined for the device. This makes them inefficient
devices and can create a performance bottleneck on busy networks.

In its most basic form, a hub does nothing except provide a pathway for the electrical signals to
travel along. Such a device is called a passive hub. Far more common nowadays is an active hub,
which, as well as providing a path for the data signals, regenerates the signal before it forwards it
to all of the connected devices. A hub does not perform any processing on the data that it
forwards, nor does it perform any error checking.

Hubs come in a variety of shapes and sizes. Small hubs with five or eight connection ports are
commonly referred to as workgroup hubs. Others can accommodate larger numbers of devices
(normally up to 32). These are referred to as high-density devices. Because hubs don’t perform
any processing, they do little except enable communication between connected devices. For
today’s high-demand network applications, something with a little more intelligence is required.
That’s where switches come in.

Regeneration of the signal aside, the basic function of a hub is to take data from one of the
connected devices and forward it to all the other ports on the hub. This method of operation is
inefficient because, in most cases, the data is intended for only one of the connected devices.
Due to the inefficiencies of the hub system and the constantly increasing demand for more
bandwidth, hubs are slowly but surely being replaced with switches. As you will see in the next
section, switches offer distinct advantages over hubs.

Network Switches
On the surface, a switch looks much like a hub. Despite their similar appearance, switches are far
more efficient than hubs and are far more desirable for today’s network environments. As with a
hub, computers connect to a switch via a length of twisted-pair cable. Multiple switches are often
interconnected to create larger networks. Despite their similarity in appearance and their
identical physical connections to computers, switches offer significant operational advantages
over hubs.

Rather than forwarding data to all the connected ports, a switch forwards data only to the port on
which the destination system is connected. It looks at the Media Access Control (MAC)
addresses of the devices connected to it to determine the correct port. A MAC address is a unique
number that is stamped into every NIC. By forwarding data only to the system to which the data
is addressed, the switch decreases the amount of traffic on each network link dramatically. In
effect, the switch literally channels (or switches, if you prefer) data between the ports.
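
A rough Python sketch of this forwarding decision follows. The port numbers and MAC strings
are invented for illustration; real switches also age out stale table entries and do all of this in
hardware.

# Sketch of a learning switch's forwarding decision (simplified).
mac_table = {}                              # MAC address -> port number

def handle_frame(src_mac, dst_mac, in_port, n_ports=8):
    """Return the list of ports the frame is sent out of."""
    mac_table[src_mac] = in_port            # learn where the sender lives
    if dst_mac in mac_table:                # known destination: one port only
        return [mac_table[dst_mac]]
    # Unknown destination: flood to every port except the incoming one,
    # which is exactly what a hub does with every frame.
    return [p for p in range(1, n_ports + 1) if p != in_port]

print(handle_frame("aa:aa", "bb:bb", in_port=1))  # bb:bb unknown -> flooded
print(handle_frame("bb:bb", "aa:aa", in_port=2))  # aa:aa learned  -> [1]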

Switches can also further improve performance over the performance of hubs by using a
mechanism called full-duplex. On a standard network connection, the communication between
the system and the switch or hub is said to be half-duplex. In a half-duplex connection, data can
be either sent or received on the wire but not at the same time. Because switches manage the data
flow on the connection, a switch can operate in full-duplex mode—it can send and receive data
on the connection at the same time. In a full-duplex connection, the maximum data throughput is
double that for a half-duplex connection—for example, 10Mbps becomes 20Mbps, and 100Mbps
becomes 200Mbps. As you can imagine, the difference in performance between a 100Mbps
network connection and a 200Mbps connection is considerable.

Switching Methods
Switches use three methods to deal with data as it arrives:
» Cut-through — In a cut-through switching environment, the packet begins to be forwarded as
soon as it is received. This method is very fast, but it creates the possibility of errors being
propagated through the network, as there is no error checking.

» Store-and-forward — Unlike cut-through, in a store-and-forward switching environment, the
entire packet is received and error checked before being forwarded. The upside of this method is
that errors are not propagated through the network. The downside is that the error checking
process takes a relatively long time, and store-and-forward switching is considerably slower as a
result.

» Fragment-free — To take advantage of the error checking of store-and-forward switching, but
still offer performance levels nearing that of cut-through switching, fragment-free switching can
be used. In a fragment-free switching environment, enough of the packet is read so that the
switch can determine whether the packet has been involved in a collision. As soon as the
collision status has been determined, the packet is forwarded.
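
The contrast between the three methods comes down to how much of the frame the switch
examines before forwarding, which the Python sketch below models. The 6-byte and 64-byte
thresholds correspond to the destination MAC address and the minimum Ethernet frame size; the
checksum test is a simplified stand-in for full error checking.

# How much of a frame each switching method inspects before forwarding.
def cut_through_ready(received):
    # Forward once the 6-byte destination MAC is in; no error checking,
    # so corrupted frames are propagated.
    return len(received) >= 6

def fragment_free_ready(received):
    # Read the first 64 bytes (minimum Ethernet frame size): collision
    # fragments are shorter than this, so they are filtered out.
    return len(received) >= 64

def store_and_forward_ready(frame, fcs_ok):
    # Buffer the entire frame and verify its checksum before forwarding.
    return fcs_ok(frame)

print(cut_through_ready(b"\xff" * 6), fragment_free_ready(b"\x00" * 10))  # True False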

Bridges
Bridges are networking devices that connect networks. Sometimes it is necessary to divide a
large network into subnets, either to reduce the amount of traffic on each subnet or for security
reasons. Once the network is divided, the bridge connects the two subnets and manages the
traffic flow between them. Today, network switches have largely replaced bridges.

A bridge functions by blocking or forwarding data, based on the destination MAC address
written into each frame of data. If the bridge believes the destination address is on a network
other than that from which the data was received, it can forward the data to the other networks to
which it is connected. If the address is not on the other side of the bridge, the data is blocked
from passing. Bridges “learn” the MAC addresses of devices on connected networks by
“listening” to network traffic and recording the network from which the traffic originates.

The advantages of bridges are simple and significant. By preventing unnecessary traffic from
crossing onto other network segments, a bridge can dramatically reduce the amount of network
traffic on a segment. Bridges also make it possible to isolate a busy network from a not-so-busy
one, thereby preventing pollution from busy nodes.
Working of a bridge in segregating networks

Bridge Placement and Bridging Loops


There are two issues that you must consider when using bridges. The first is the bridge
placement, and the other is the elimination of bridging loops:

» Placement — Bridges should be positioned in the network using the 80/20 rule. This rule
dictates that 80% of the data should be local and that the other 20% should be destined for
devices on the other side of the bridge.

» Bridging loops — Bridging loops can occur when more than one bridge is implemented on the
network. In this scenario, the bridges can confuse each other by leading one another to believe
that a device is located on a certain segment when it is not. To combat the bridging loop
problem, the IEEE 802.1d Spanning Tree protocol enables bridge interfaces to be assigned a
value that is then used to control the bridge-learning process.

Types of Bridges
Three types of bridges are used in networks:
» Transparent bridge — A transparent bridge is invisible to the other devices on the network.
Transparent bridges perform only the function of blocking or forwarding data based on the MAC
address; the devices on the network are oblivious to these bridges’ existence. Transparent bridges
are by far the most popular types of bridges.
» Translational bridge — A translational bridge can convert from one networking system to
another. As you might have guessed, it translates the data it receives. Translational bridges are
useful for connecting two different networks, such as Ethernet and Token Ring networks.
Depending on the direction of travel, a translational bridge can add or remove information and
fields from the frame as needed.

» Source-route bridge — Source-route bridges were designed by IBM for use on Token Ring
networks. The source-route bridge derives its name from the fact that the entire route of the
frame is embedded within the frame. This allows the bridge to make specific decisions about
how the frame should be forwarded through the network. The diminishing popularity of Token
Ring makes the chances that you’ll work with a source-route bridge very slim.

Routers
Routers are an increasingly common sight in any network environment, from a small home
office that uses one to connect to an Internet service provider (ISP) to a corporate IT
environment where racks of routers manage data communication with disparate remote sites.
Routers make internetworking possible, and in view of this, they warrant detailed attention.

Routers are network devices that literally route data around the network. By examining data as it
arrives, the router can determine the destination address for the data; then, by using tables of
defined routes, the router determines the best way for the data to continue its journey. Unlike
bridges and switches, which use the hardware-configured MAC address to determine the
destination of the data, routers use the software-configured network address to make decisions.
This approach makes routers more functional than bridges or switches, and it also makes them
more complex because they have to work harder to determine the information.
The basic requirement for a router is that it must have at least two network interfaces. If they are
LAN interfaces, the router can manage and route the information between two LAN segments.
More commonly, a router is used to provide connectivity across wide area network (WAN) links.

A router can route data it receives from one network onto another. When a router
receives a packet of data, it reads the header of the packet to determine the destination address.
Once it has determined the address, it looks in its routing table to determine whether it knows
how to reach the destination and, if it does, it forwards the packet to the next hop on the route.
The next hop might be the final destination, or it might be another router.
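
A sketch of that lookup in Python is shown below, using the standard ipaddress module with a
longest-prefix-match rule; the routes and next-hop addresses are invented for illustration, not
taken from any real configuration.

# Routing-table lookup with longest-prefix match (routes are illustrative).
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.168.1.1",    # coarse route
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",    # more specific
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.1.254",  # default route
}

def next_hop(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)     # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.2.3"))   # -> 192.168.1.2 (the /16 beats the /8)
print(next_hop("8.8.8.8"))    # -> 192.168.1.254 (only the default matches)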

Routing tables play a very important role in the routing process. They are the means by which
the router makes its decisions. For this reason, a routing table needs to be two things: it must be
up-to-date, and it must be complete. There are two ways that the router can get the information
for the routing table:
1. Static routing
2. Dynamic routing

Static Routing
In environments that use static routing, routes and route information are entered into the routing
tables manually. Not only can this be a time-consuming task, but also errors are more common.
Additionally, when there is a change in the layout, or topology, of the network, statically
configured routers must be manually updated with the changes. Again, this is a time-consuming
and potentially error-laden task. For these reasons, static routing is suited to only the smallest
environments with perhaps just one or two routers. A far more practical solution, particularly in
larger environments, is to use dynamic routing.

Dynamic Routing
In a dynamic routing environment, routers use special routing protocols to communicate. The
purpose of these protocols is simple; they enable routers to pass on information about themselves
to other routers so that other routers can build routing tables. There are two types of routing
protocols in use: the older distance-vector protocols and the newer link-state protocols.

Gateways
The term gateway is applied to any device, system, or software application that can perform the
function of translating data from one format to another. The key feature of a gateway is that it
converts the format of the data, not the data itself.

A router that can route data from an IPX network to an IP network is, technically, a gateway.
The same can be said of a translational bridge that, as described earlier in this chapter, converts
from an Ethernet network to a Token Ring network and back again.

Software gateways can be found everywhere. Many companies use an email system such as
Microsoft Exchange or Novell GroupWise. These systems transmit mail internally in a certain
format. When email needs to be sent across the Internet to users using a different email system,
the email must be converted to another format, usually to Simple Mail Transfer Protocol
(SMTP). This conversion process is performed by a software gateway.
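
As a sketch of the idea, the Python fragment below converts a hypothetical internal message
format (a plain dictionary whose field names are made up) into a standard RFC 822 message
using the standard email library. Only the format is changed, not the content, which is exactly
the gateway's job as described above.

# Sketch of a software gateway: internal format in, SMTP-ready format out.
from email.message import EmailMessage

def to_smtp_format(internal_msg):
    msg = EmailMessage()
    msg["From"] = internal_msg["sender"]       # map internal fields onto
    msg["To"] = internal_msg["recipient"]      # standard RFC 822 headers
    msg["Subject"] = internal_msg["title"]
    msg.set_content(internal_msg["body"])
    return msg

internal = {"sender": "alice@corp.example", "recipient": "bob@example.com",
            "title": "Status", "body": "All systems normal."}
print(to_smtp_format(internal).as_string())    # ready for an SMTP relay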
Wireless Access Point (WAP)
Wireless access points, referred to as either WAPs or wireless APs, are transmitter and receiver
(transceiver) devices used for wireless LAN (WLAN) radio signals. A WAP is typically a
separate network device with a built-in antenna, transmitter, and adapter. WAPs use the wireless
infrastructure network mode to provide a connection point between WLANs and a wired
Ethernet LAN. WAPs also typically have several ports allowing a way to expand the network to
support additional clients.

Depending on the size of the network, one or more WAPs may be required. Additional WAPs
are used to allow access to more wireless clients and to expand the range of the wireless network.
Each WAP is limited by its transmission range, the distance a client can be from a WAP and still
get a usable signal. The actual distance depends on the wireless standard being used and the
obstructions and environmental conditions between the client and the WAP.

An infrastructure wireless network uses a WAP


Saying that an AP is used to extend a wired LAN to wireless clients doesn’t give you the
complete picture. A wireless AP today can provide different services in addition to just being an
access point. Today, APs might provide many ports that can be used to easily increase the size
of the network. Systems can be added to and removed from the network with no effect on other
systems on the network. Also, many APs provide firewall capabilities and DHCP service. When
they are hooked up, they will provide client systems with a private IP address and then prevent
Internet traffic from accessing client systems. So, in effect, the AP is a switch, a DHCP server, a
router, and a firewall.
Modems
A modem, short for modulator/demodulator, is a device that converts the digital signals
generated by a computer into analog signals that can travel over conventional phone lines. The
modem at the receiving end converts the signal back into a format the computer can understand.
Modems can be used as a means to connect to an ISP or as a mechanism for dialing up to a LAN.

Modems can be internal add-in expansion cards, external devices that connect to the serial or
USB port of a system, PCMCIA cards designed for use in laptops, or proprietary devices
designed for use on other devices such as portables and handhelds.

The configuration of a modem depends on whether it is an internal or external device. For
internal devices, the modem must be configured with an interrupt request (IRQ) and a memory
I/O address. It is common practice, when installing an internal modem, to disable the built-in
serial interfaces and assign the modem the resources of one of those (typically COM2).

For external modems, you need not concern yourself directly with these port assignments, as the
modem connects to the serial port and uses the resources assigned to it. This is a much more
straightforward approach and one favored by those who work with modems on a regular basis.
For PCMCIA and USB modems, the plug-and-play nature of these devices makes them simple to
configure, and no manual resource assignment is required. Once the modem is installed and
recognized by the system, drivers must be configured to enable use of the device.

Two factors directly affect the speed of the modem connection—the speed of the modem itself
and the speed of the Universal Asynchronous Receiver/Transmitter (UART) chip in the
computer that is connected to the modem. The UART chip controls the serial communication of
a computer, and although modern systems have UART chips that can accommodate far greater
speeds than the modem is capable of, older systems should be checked to make sure that the
UART chip is of sufficient speed to support the modem speed. The UART chip installed in the
system can normally be determined by looking at the documentation that comes with the system.

Transceivers (Media Converters)


The term transceiver does describe a separate network device, but the technology can also be
built into and embedded in devices such as network cards and modems. In a network
environment, a transceiver gets its name from being both a transmitter and a receiver of signals:
thus the name transceiver. Technically, on a LAN, the transceiver is responsible for placing
signals onto the network media and also detecting incoming signals traveling through the same
wire. Given the description of the function of a transceiver, it makes sense that the technology
would be found in network cards.

Although transceivers are found in network cards, they can be external devices as well. As far as
networking is concerned, transceivers can ship as a module or chip type. Chip transceivers are
small and are inserted into a system board or wired directly on a circuit board. Module
transceivers are external to the network and are installed and function similarly to other computer
peripherals, or they can function as standalone devices.

There are many types of transceivers: RF transceivers, fiber optic transceivers, Ethernet
transceivers, wireless (WAP) transceivers, and more. Though each of these media types is
different, the function of the transceiver remains the same. Each type of transceiver used has
different characteristics, such as the number of ports available to connect to the network and
whether full-duplex communication is supported.
Listed with transceivers in the CompTIA objectives are media converters. Media converters are
a technology that allows administrators to interconnect different media types—for example,
twisted pair, fiber, and thin or thick coax—within an existing network. Using a media converter,
it is possible to connect newer 100Mbps, Gigabit Ethernet, or ATM equipment to existing
networks such as 10BASE-T or 100BASE-T. They can also be used in pairs to insert a fiber
segment into copper networks to increase cabling distances and enhance immunity to
electromagnetic interference (EMI).

Firewalls
A firewall is a networking device, either hardware or software based, that controls access to your
organization’s network. This controlled access is designed to protect data and resources from an
outside threat. To do this, firewalls are typically placed at entry/exit points of a network—for
example, placing a firewall between an internal network and the Internet. Once there, it can
control access in and out of that point.
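
A first-match packet filter of this kind can be sketched in a few lines of Python. The rules below
are invented for illustration, and a real firewall matches on many more fields (protocol,
direction, connection state).

# Sketch of a packet-filtering firewall's rule check (rules are illustrative).
RULES = [
    # (action, source prefix, destination port)
    ("allow", "192.168.", 80),     # internal hosts may reach the web server
    ("allow", "192.168.", 443),
    ("deny",  "",         23),     # block telnet from anywhere
]

def check(src_ip, dst_port):
    for action, src_prefix, port in RULES:     # first matching rule wins
        if src_ip.startswith(src_prefix) and dst_port == port:
            return action
    return "deny"                              # default-deny policy

print(check("192.168.0.7", 443))    # allow
print(check("203.0.113.9", 23))     # deny
print(check("203.0.113.9", 8080))   # deny (no matching rule)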

Although firewalls typically protect internal networks from public networks, they are also used
to control access between specific network segments within a network—for example, placing a
firewall between the Accounts and the Sales departments.

As mentioned, firewalls can be implemented through software or through a dedicated hardware
device. Organizations implement software firewalls through network operating systems (NOS)
such as Linux/UNIX, Windows servers, and Mac OS servers. The firewall is configured on the
server to allow or deny certain types of network traffic. In small offices and for regular home
use, a firewall is commonly installed on the local system and configured to control traffic. Many
third-party firewalls are available.

Hardware firewalls are used in networks of all sizes today. Hardware firewalls are often
dedicated network devices that can be implemented with very little configuration and protect all
systems behind the firewall from outside sources. Hardware firewalls are readily available and
often combined with other devices today. For example, many broadband routers and wireless
access points have firewall functionality built in. In such cases, the router or WAP might have a
number of ports available to plug systems into.

Wireless or Mobile Computing


Wireless communication started in the 1970s and has been continuously upgraded towards 5G;
that is, over four decades, mobile wireless technology has evolved from the 1G to the 5G
generation. Fifth generation wireless mobile communication systems offer very high bandwidth
that users have never experienced before, together with new advanced features that make 5G
most powerful and in huge demand in the future. Current wireless and mobile communications
technologies include third generation mobile networks (UMTS - Universal Mobile
Telecommunication System, along with CDMA2000), the fourth generation mobile technology
LTE (Long Term Evolution) and WiMAX, as well as sensor and personal area networks (e.g.
Bluetooth). Figure 1 shows the evolution of mobile communication systems, with more services,
data, uses and benefits in each generation beyond 4G. 5G will be a smarter technology that
interconnects the whole world without limits.

Evolution of mobile communication systems


The future wireless communication system, the fifth generation of wireless mobile multimedia
internet networks, can be completely wireless communication without limitation, which makes
for a perfect wireless real world: the World Wide Wireless Web (WWWW). The fifth generation
is based on 4G technologies. The 5th generation wireless mobile internet networks are a real
wireless world and shall be supported by LAS-CDMA (Large Area Synchronized Code-Division
Multiple Access), OFDM (Orthogonal Frequency-Division Multiplexing), MC-CDMA
(Multi-Carrier Code Division Multiple Access), UWB (Ultra-Wideband), Network-LMDS
(Local Multipoint Distribution Service), and IPv6. Fifth generation technologies offer
tremendous data capabilities, unrestricted call volumes, and infinite data broadcast within the
latest mobile operating system. The fifth generation should make an important difference and
add more services and benefits to the world over the fourth generation; it should be a more
intelligent technology that interconnects the entire world without limits. This generation is
expected to be released around 2020. The world of universal, uninterrupted access to
information, entertainment and communication will open a new dimension to our lives and
change our lifestyle significantly.
Evolution of mobile communication systems
Wireless mobile communication systems have become more popular due to rapid changes in
mobile technology. The fast development of wireless communication systems is due to the very
high increase in telecom customers. The revolution of mobile communications runs from 1G
(the first generation) through 2G (the second generation), 3G (the third generation), and 4G (the
fourth generation) to 5G (the fifth generation).

First Generation (1G)


The first generation of mobile communication technology emerged in the 1980s. First
generation mobile communication systems used analog transmission for speech services. In
1979, the first cellular system in the world was operated by Nippon Telegraph and Telephone
(NTT) in Tokyo, Japan. At that time the most popular analog systems were Nordic Mobile
Telephones (NMT) and Total Access Communication Systems (TACS); some other analog
systems were also introduced in the 1980s across Europe. The main drawback of the first
generation was that, although all of those systems offered handover and roaming capabilities,
the cellular networks were unable to interoperate between countries. In 1982 the Advanced
Mobile Phone System (AMPS) was launched in the United States. AMPS and TACS use the
Frequency Modulation (FM) technique and frequency division duplex (FDD) for radio
transmission. This generation uses Frequency Division Multiple Access (FDMA) with a channel
bandwidth of 30 kHz.

Second Generation (2G)


The second generation provided services such as text messages, picture messages, and
Multimedia Messages (MMS) for various mobile phone networks. The second generation
telecommunication networks were commercially launched on the Global System for Mobile
Communications (GSM) standard in 1991. Three primary benefits of 2G networks over their
predecessors were that phone conversations were digitally encrypted; 2G systems were
significantly more efficient on the spectrum, allowing for far greater mobile phone penetration
levels; and 2G introduced data services for mobile, starting with SMS text messages. Second
generation standards can be divided into two groups by the multiple access method used: TDMA
based and CDMA based. 2.5G was GPRS, which enabled much faster communications, using
both the packet-switching and circuit-switching domains to provide data rates up to 144 kbps. In
less populous areas, the weaker digital signal may not be sufficient to reach a cell tower. This
tends to be a particular problem on 2G systems deployed on higher frequencies, but is mostly not
a problem on 2G systems deployed on lower frequencies.

Third Generation (3G)


Work on third generation technology was carried out by the International Telecommunication
Union (ITU), starting in the 1980s. 3G communication uses frequency spectrum between
400 MHz and 3 GHz. 3G technology was approved by both governments and communication
companies unanimously. The 3G technical specifications were made available to the public
under the name International Mobile Telecommunications-2000 (IMT-2000). The first
commercial 3G technology was launched by NTT DoCoMo in Japan on 1 October 2001, using
W-CDMA. It was initially somewhat limited in scope; broader availability of the system was
delayed by apparent concerns over its reliability. The third generation is a set of standards used
for mobile devices and mobile telecommunication services and networks that comply with
IMT-2000. Advantages of the third generation include fixed wireless Internet access, wireless
voice telephony, video calls, mobile Internet access, and mobile TV.

Fourth Generation (4G)


The increasing growth of user demand and the emergence of new technologies in mobile
communications have prompted researchers and industry to come up with comprehensive
manifestations of the upcoming fourth generation (4G) of wireless communications in mobile
technology. The main concept in the fourth generation's transition to All-IP is to have a
common platform for all the technologies that have developed so far, and to harmonize it with
user expectations of the many services to be provided. The main difference between All-IP and
GSM/3G is that the functionality of the RNC and BSC is now distributed to the BTS and a set of
servers and gateways. In contrast to 3G, the new 4G framework will try to accomplish new
levels of user experience and multi-service capacity by also integrating all the mobile
technologies that exist (e.g. GSM, GPRS, IMT-2000, Wi-Fi, Bluetooth, ZigBee). 4G data
transfer will be much faster and less expensive. 4G will be smart enough for friendly operation,
flexibility, and any desired service with reasonable quality of service at any time, anywhere.
Fourth generation mobile communication technology started in 2010 but reached the mass
market in about 2015-2016. Fourth generation technology may provide a peak data rate of
1 Gbps for the downlink and 500 Mbps for the uplink. 4G is realized as Long Term Evolution
(LTE) and adds to the features of 3G things like wireless broadband access, Multimedia
Messaging Service (MMS), video chat, mobile TV, HDTV content, Digital Video Broadcasting
(DVB), and minimal services such as voice and data. It is widely accepted that in 4G the
individual (wireless and/or wired) access networks will interface to core and/or backbone
network elements over the IP protocol, the lingua franca of networking technology. Regardless
of their particular technological blueprints, these wireless access networks are expected to have
a dynamic address assignment mechanism capable of associating a short-lived or long-lived IP
address with the respective wireless interface at the mobile terminal, and a transparent IP
forwarding service that is accessible over the logical termination of the IP layer at the mobile
terminal and one or more gateways at the wireless access network infrastructure.

Fifth Generation (5G)


Fifth generation technology is very fast and reliable, and promises to be a new revolution in the
mobile market. All network services and applications are going to be accessed through a single
IP platform: telephony, gaming, and many other multimedia applications. Through 5G
technology, worldwide cellular technology comes under one umbrella. 5G networks carry
extraordinary data capabilities and have the ability to tie together unrestricted call volumes and
infinite data broadcast within the upcoming mobile operating system. Fifth generation mobile
with Nanocore is a convergence of nanotechnology, cloud computing, and an all-IP platform.
The fifth generation requires secure and reliable service providers, capabilities that operators
have deep expertise in. 5G technology provides subscriber supervision tools, offers high
resolution for cell phone users, and supports bi-directional large bandwidth. Uploading and
downloading data speeds will touch new peaks.

Second Generation Networks


Digital modulation formats were introduced in this generation, with TDMA/FDD and
CDMA/FDD as the main technologies. The 2G systems introduced three popular TDMA
standards and one popular CDMA standard in the market.

TDMA/FDD Standards
1. Global System for Mobile (GSM)
The GSM standard, introduced by Groupe Special Mobile, was aimed at designing a uniform
pan-European mobile system. It was the first fully digital system utilizing the 900 MHz
frequency band. The initial GSM had 200 kHz radio channels, 8 full-rate or 16 half-rate TDMA
channels per carrier, encryption of speech, low-speed data services, and support for SMS, for
which it gained quick popularity.

2. Interim Standard 136 (IS-136)


It was popularly known as the North American Digital Cellular (NADC) system. In this system,
there were 3 full-rate TDMA users on each 30 kHz channel. The main aim of this system was to
increase the capacity over the earlier analog (AMPS) system.

3. Pacific Digital Cellular (PDC)


This standard was developed as the counterpart of NADC in Japan. The main advantage of this
standard was its low transmission bit rate which led to its better spectrum utilization.

CDMA/FDD Standard
» Interim Standard 95 (IS-95)
The IS-95 standard, also popularly known as CDMA One, supports 64 orthogonally coded users
whose code words are transmitted simultaneously on each 1.25 MHz channel. Certain services
that have been standardized as part of the IS-95 standard are: short messaging service, slotted
paging, over-the-air activation (meaning the mobile can be activated by the service provider
without any third party intervention), enhanced mobile station identities, etc.

» 2.5G Mobile Networks


In an effort to retrofit the 2G standards with increased throughput rates that could support
modern Internet applications, new data-centric standards were developed to be overlaid on the
2G standards; these are known as the 2.5G standards.
The main upgrades are:
» supporting higher data rate transmission for web browsing
» supporting e-mail traffic
» enabling location-based mobile service

2.5G networks also brought to market some popular applications, a few of which are:
Wireless Application Protocol (WAP), General Packet Radio Service (GPRS), High Speed
Circuit Switched Data (HSCSD), Enhanced Data rates for GSM Evolution (EDGE), etc.

Third Generation Networks


3G is the third generation of mobile phone standards and technology, superseding 2.5G. It is
based on the International Telecommunication Union (ITU) family of standards under the
International Mobile Telecommunications-2000 (IMT-2000). ITU launched the IMT-2000
program, which, together with the main industry and standardization bodies worldwide, aims to
implement a global frequency band that would support a single, ubiquitous wireless
communication standard for all countries, and to provide the framework for the definition of the
3G mobile systems. Several radio access technologies have been accepted by ITU as part of the
IMT-2000 framework.

3G networks enable network operators to offer users a wider range of more advanced services
while achieving greater network capacity through improved spectral efficiency. Services include
wide-area wireless voice telephony, video calls, and broadband wireless data, all in a mobile
environment. Additional features also include HSPA data transmission capabilities able to
deliver speeds up to 14.4 Mbit/s on the downlink and 5.8 Mbit/s on the uplink.

3G networks are wide area cellular telephone networks which evolved to incorporate high-speed
internet access and video telephony. IMT-2000 defines a set of technical requirements for the
realization of such targets, which can be summarized as follows:
» high data rates: 144 kbps in all environments and 2 Mbps in low-mobility and indoor
environments
» symmetrical and asymmetrical data transmission
» circuit-switched and packet-switched-based services
» speech quality comparable to wire-line quality
» improved spectral efficiency
» several simultaneous services to end users for multimedia services
» seamless incorporation of second-generation cellular systems
» global roaming
» open architecture for the rapid introduction of new services and technology.

3G Standards and Access Technologies


There are several different radio access technologies defined within ITU, based on either CDMA
or TDMA technology. An organization called 3rd Generation Partnership Project (3GPP) has
continued that work by defining a mobile system that fulfills the IMT-2000 standard. This
system is called Universal Mobile Telecommunications System (UMTS). After trying to
establish a single 3G standard, ITU finally approved a family of five 3G standards as part of the
framework known as IMT-2000; the main ones are:
» W-CDMA
» CDMA2000
» TD-SCDMA

Fourth Generation Networks


4G is a concept of interoperability between different sorts of networks, and is all about high
speed data transfer, for example 0-100 Mbps when either the server or the receiver is moving at
a speed of 60 km/h. If the server and the receiver are stationary, the data transfer rate would be a
minimum of 1 Gbps. 4G is the next generation of wireless networks that will replace 3G
networks sometime in the future. In another context, 4G is simply an initiative by academic and
R&D labs to move beyond the limitations and problems of 3G, which is having trouble getting
deployed and meeting its promised performance and throughput.

These days, in 3G, we can access the Internet through our mobile phones with the help of
various technologies such as Wi-Fi, WiMAX, GPRS, EDGE, WAP and WiBro. The problem is
that if you are accessing the Internet through your mobile phone with the help of any of these
technologies and you move to a place where interoperability between the different networks
does not exist, you are stuck. If you are using 4G, you can access the net through any of the
aforesaid technologies even while moving from one place to another. Issues expected to be
resolved by 4G mobile technology are as follows:
» The IP feature is embedded in the handset for greater security, since high data rates are sent
and received through the phone using 4G mobile technology.
» 4G mobile technology is going to be able to download at a rate of 100 Mbps for mobile access
and, with less mobility, 1 Gbps for local wireless access.
» Instead of the hybrid technology used in 3G, with its combination of CDMA and IS-95, a new
technology, OFDMA, is introduced in 4G. In OFDMA, the concept is again division multiple
access, but the division is neither in time (as in TDMA) nor by code (as in CDMA); rather, a
frequency domain equalization process characterizes OFDMA.
» TDMA sends data through one channel with time divided into three slots, while CDMA sends
data through one channel, identifying the receiver with the help of a code. In 4G, OFDMA sends
data packets by dividing the channel into narrow bands for greater efficiency, which is a
prominent feature of 4G mobile technology.
» IEEE 802.16m, in progress as the successor to IEEE 802.16e under the 4G brand, will be
defined as WMBA (Wireless Mobile Broadband Access). This is a plain indicator of Internet
availability. Implementation is in progress to avoid call interference in the case of data
downloads from a website. It proposes a 128 Mbps downlink data rate and a 56 Mbps uplink
data rate, which is an extraordinary step in 4G mobile technology. The service will be limited by
the availability of hotspots, which is a condition for Internet connectivity.
» In parallel with WiMAX, LTE is intended to be incorporated in 4G mobiles. It is also a
wireless technology for broadband access. The difference between WiMAX and LTE is that
LTE goes for the IP address; it follows the same TCP/IP concept inherited from networking
technology. Built around IP addresses, it will provide great security as well as high data
transferability, avoid latency, and have the ability to adjust the bandwidth. LTE is compatible
with CDMA, so it is able to pass data back and forth between both networks.
» Two major wireless standards are being introduced: LTE (from 3GPP) and IEEE 802.16m.
The former has been granted permission for further processing, while the second is under
consideration and will become a part of 4G mobile technology.
» IPv6 was approved as a 4G standard in June 2009.

Fifth Generation (5G)


5G (5th generation mobile networks or 5th generation wireless systems) is a name used in some
research papers and projects to denote the next major phase of mobile telecommunications
standards beyond the upcoming 4G standards, which are expected to be finalized between
approximately 2011 and 2013. Currently 5G is not a term officially used for any particular
specification or in any official document yet made public by telecommunication companies or
standardization bodies such as 3GPP, WiMAX Forum or ITU-R. New 3GPP standard releases
beyond 4G and LTE Advanced are in progress, but not considered as new mobile generations.

The 5G network is assumed to be the perfection level of wireless communication in mobile
technology. The cable network has now become a memory of the past. Mobiles are not only a
communication tool but also serve many other purposes. All the previous wireless technologies
offered the ease of telephony and data sharing, but 5G brings a new touch and makes life truly
mobile. The new 5G network is expected to improve the services and applications offered by it.

This section concludes by looking back at existing wireless technologies and summarizing the
next generation of wireless communication media. These technologies,
indeed, have a long way to go and exciting and amazing products are bound to emerge in the
years to come.

Wireless Transmission Protocols


There are several wireless transmission protocols designed to achieve different
application-oriented tasks.

Wireless Local Loop (WLL) and LMDS


Microwave wireless links can be used to create a wireless local loop. The local loop can be
thought of as the "last mile" of the telecommunication network that resides between the central
office (CO) and the individual homes and businesses in close proximity to the CO. An advantage
of WLL technology is that once the wireless equipment is paid for, there are no additional costs
for transport between the CO and the customer premises equipment. Many new services have
been proposed and this includes the concept of Local Multipoint Distribution Service (LMDS),
which provides broadband telecommunication access in the local exchange.
Bluetooth
» Facilitates ad-hoc data transmission over short distances from fixed and mobile devices
» Uses a radio technology called frequency-hopping spread spectrum. It chops up the data being
sent and transmits chunks of it on up to 79 different frequencies. In its basic mode, the
modulation is Gaussian frequency shift keying (GFSK). It can achieve a gross data rate of 1
Mb/s.
» Primarily designed for low power consumption, with a short range (power class-dependent: 1
meter, 10 meters, 100 meters) based on low-cost transceiver microchips in each device

Wireless Local Area Networks (W-LAN)


» IEEE 802.11 WLAN uses the 2.4 GHz ISM band (2.400-2.4835 GHz)
» Uses 11 Mcps DS-SS spreading and 2 Mbps user data rates (falling back to 1 Mbps in noisy
conditions)
» IEEE 802.11a standard provides up to 54Mbps throughput in the 5GHz band. The DS-SS IEEE
802.11b has been called Wi-Fi. Wi-Fi networks have limited range. A typical Wi-Fi home router
using 802.11b or 802.11g with a stock antenna might have a range of 32 m (120 ft) indoors and
95 m (300 ft) outdoors. Range also varies with frequency band.
» IEEE 802.11g uses Complementary Code Keying / Orthogonal Frequency Division Multiplexing
(CCK-OFDM) in the 2.4 GHz band.

WiMax
» Provides up to 70 Mb/sec symmetric broadband speed without the need for cables. The
technology is based on the IEEE 802.16 standard (also called WirelessMAN)
» WiMAX can provide broadband wireless access (BWA) up to 30 miles (50 km) for fixed
stations, and 3 - 10 miles (5 - 15 km) for mobile stations. In contrast, the WiFi/802.11 wireless
local area network standard is limited in most cases to only 100 - 300 feet (30 - 100 m).
» The 802.16 specification applies across a wide range of the RF spectrum, and WiMAX could
function on any frequency below 66 GHz (higher frequencies would decrease the range of a Base
Station to a few hundred meters in an urban environment).

Zigbee
» ZigBee is the specification for a suite of high level communication protocols using small, low-
power digital radios based on the IEEE 802.15.4-2006 standard for wireless personal area
networks (WPANs), such as wireless headphones connecting with cell phones via short-range
radio.
» This technology is intended to be simpler and cheaper than other WPANs such as Bluetooth.
ZigBee is targeted at radio-frequency (RF) applications that require a low data rate, long
battery life, and secure networking.
» ZigBee operates in the industrial, scientific and medical (ISM) radio bands: 868 MHz in
Europe, 915 MHz in countries such as the USA and Australia, and 2.4 GHz in most jurisdictions
worldwide.
Wibree
» Wibree is a digital radio technology (intended to become an open standard of wireless
communications) designed for ultra low power consumption (button cell batteries) within a
short range (10 meters / 30 ft) based around low-cost transceiver microchips in each device.
» Wibree is also known as Bluetooth low energy technology.
» It operates in 2.4 GHz ISM band with physical layer bit rate of 1 Mbps.
Multiple Access Techniques
Multiple access techniques are used to allow a large number of mobile users to share the
allocated spectrum in the most efficient manner. As the spectrum is limited, sharing is
required to increase the capacity of a cell or a geographical area by allowing the available
bandwidth to be used at the same time by different users. This must be done in such a way
that the quality of service does not degrade for the existing users.

Multiple Access Techniques for Wireless Communication


In wireless communication systems it is often desirable to allow the subscriber to send
information to the base station while simultaneously receiving information from the base station.

A cellular system divides any given area into cells where a mobile unit in each cell
communicates with a base station. The main aim in the cellular system design is to be able to
increase the capacity of the channel i.e. to handle as many calls as possible in a given bandwidth
with a sufficient level of quality of service.
There are several different ways to allow access to the channel :
» Frequency division multiple-access (FDMA)
» Time division multiple-access (TDMA)
» Code division multiple-access (CDMA)
» Space Division Multiple access (SDMA)

FDMA, TDMA and CDMA are the three major multiple access techniques that are used to share
the available bandwidth in a wireless communication system. Depending on how the available
bandwidth is allocated to the users, these techniques can be classified as narrowband and
wideband systems.

Frequency Division Multiple Access


FDMA splits the total bandwidth into multiple channels. Each ground station on the earth is
allocated a particular frequency group (or a range of frequencies). Within each group, the ground
station can allocate different frequencies to individual channels, which are used by different
stations connected to that ground station. Before the transmission begins, the transmitting ground
station looks for an empty channel within the frequency range that is allocated to it and once it
finds an empty channel, it allocates it to the particular transmitting station.
This is the most popular method for communication using satellites. The transmission station on
earth combines multiple signals to be sent into a single carrier signal of a unique frequency by
multiplexing them, similar to the way a TV transmission works. The satellite receives this single
multiplexed signal. The satellite, in turn, broadcasts the received signal to the receiving earth
station. The receiving station is also supposed to agree to this carrier frequency, so that it can
demultiplex the received signals.

Basic concept of FDMA

The features of FDMA are as follows:


» The FDMA channel carries only one phone circuit at a time. If an FDMA channel is not in use,
it sits idle and cannot be used by other users to increase or share capacity. After the
assignment of the voice channel, the BS and the MS transmit simultaneously and continuously.
» The bandwidths of FDMA systems are generally narrow, i.e. FDMA is usually implemented in
a narrowband system.
» The symbol time is large compared to the average delay spread.
» The complexity of FDMA mobile systems is lower than that of TDMA mobile systems.
» FDMA requires tight filtering to minimize adjacent channel interference.

Features of FDMA
FDMA/FDD in AMPS
The first U.S. analog cellular system, AMPS (Advanced Mobile Phone System) is based on
FDMA/FDD. A single user occupies a single channel while the call is in progress, and the single
channel is actually two simplex channels which are frequency duplexed with a 45 MHz split.
When a call is completed or when a handoff occurs the channel is vacated so that another mobile
subscriber may use it. Multiple or simultaneous users are accommodated in AMPS by giving
each user a unique channel. Voice signals are sent on the forward channel from the base station to
the mobile unit, and on the reverse channel from the mobile unit to the base station. In AMPS,
analog narrowband frequency modulation (NBFM) is used to modulate the carrier.
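
As a quick numeric illustration of FDMA/FDD channelization, the short Python sketch below
computes how many channels fit in an AMPS-style allocation. The 25 MHz per direction and the
30 kHz channel spacing are commonly cited AMPS figures assumed here, not values given in the
text above.

# Minimal sketch: FDMA carves a fixed band into fixed-width channels.
def fdma_channels(total_bandwidth_hz, channel_bandwidth_hz):
    # Each channel gets an exclusive slice of spectrum for the whole call.
    return int(total_bandwidth_hz // channel_bandwidth_hz)

# Assumed AMPS-style figures: 25 MHz per direction, 30 kHz per channel.
print(fdma_channels(25e6, 30e3))  # -> 833 (about 832 usable after guard bands)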

FDMA/TDD in CT2
Using FDMA, CT2 system splits the available bandwidth into radio channels in the assigned
frequency domain. In the initial call setup, the handset scans the available channels and locks on
to an unoccupied channel for the duration of the call. Using TDD (Time Division Duplexing),
the call is split into time blocks that alternate between transmitting and receiving.

Time Division Multiple Access


Unlike FDMA, TDMA allows access to the full bandwidth of the frequency spectrum. In
TDMA, each transmitter is allocated a predefined time slot. Each transmitter receives the time
slot in turn and it is allowed to transmit data for the duration of the time slot.

Time Division Multiple Access

After FDMA, TDMA is the second most popular mechanism for communication using satellites.
In case of TDMA, there is no modulation of frequencies (unlike FDMA). Instead, the
transmitting earth station transmits data in the form of packets of data. These data packets arrive
at the satellite one by one (hence the name TDMA). By the same logic that was discussed during
TDM, TDMA is also a digital form of data transmission; TDMA operates in the time domain rather
than the frequency domain. Bit rates of 10-100 Mbps are common for TDMA transmissions. This
can be translated into roughly 1800 simultaneous voice calls using 64 Kbps PCM.

In digital systems, continuous transmission is not required because users do not use the allotted
bandwidth all the time. In such cases, TDMA is a complementary access technique to FDMA.
Global Systems for Mobile communications (GSM) uses the TDMA technique. In TDMA, the
entire bandwidth is available to the user but only for a finite period of time. In most cases the
available bandwidth is divided into fewer channels compared to FDMA and the users are allotted
time slots during which they have the entire channel bandwidth at their disposal.

TDMA requires careful time synchronization since users share the bandwidth in the time
domain. As the number of channels is smaller, inter-channel interference is almost negligible. TDMA
uses different time slots for transmission and reception. This type of duplexing is referred to as
Time division duplexing(TDD).

Features of TDMA includes the following :


» TDMA shares a single carrier frequency with several users, where each user makes use of
non-overlapping time slots. The number of time slots per frame depends on several factors,
such as the modulation technique and the available bandwidth.
» Data transmission in TDMA is not continuous but occurs in bursts. This results in low battery
consumption, since the subscriber transmitter can be turned off when not in use.
» Because of the discontinuous transmission in TDMA, the handoff process is much simpler for a
subscriber unit, since it is able to listen to other base stations during idle time slots.
» TDMA uses different time slots for transmission and reception; thus duplexers are not required.
» TDMA has the advantage that it is possible to allocate different numbers of time slots per
frame to different users. Thus bandwidth can be supplied on demand to different users by
concatenating or reassigning time slots based on priority.
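
To make the slot arithmetic concrete, here is a minimal Python sketch of TDMA timing: with a
frame of N equal slots, simple modular arithmetic decides which user owns any instant. The
GSM-like frame length of 4.615 ms with 8 slots is an assumed example, not a value given above.

FRAME_MS = 4.615        # assumed GSM-like frame duration
SLOTS_PER_FRAME = 8     # assumed number of users sharing the carrier

def slot_owner(t_ms):
    # The slot (and hence the user) owning time t repeats every frame.
    slot_ms = FRAME_MS / SLOTS_PER_FRAME
    return int((t_ms % FRAME_MS) // slot_ms)

for t in (0.1, 1.0, 2.5, 4.0):
    print(t, "ms -> slot", slot_owner(t))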

Code Division Multiple Access


In CDMA, the same bandwidth is occupied by all the users; however, they are all assigned
separate codes, which differentiate them from each other. CDMA utilizes a spread spectrum
technique in which a spreading signal (which is uncorrelated with the message signal and has a
large bandwidth) is used to spread the narrowband message signal.

Direct Sequence Spread Spectrum (DS-SS)


This is the most commonly used technology for CDMA. In DS-SS, the message signal is
multiplied by a Pseudo Random Noise Code. Each user is given his own code word which is
orthogonal to the codes of the other users, and in order to detect the user, the receiver must
know the code word used by the transmitter. There are, however, two problems in such systems,
which are discussed below.
Basic concept of CDMA
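
The spreading and despreading steps can be sketched in a few lines of Python. The two length-4
Walsh codes and the data bits below are illustrative; real systems use much longer codes.

import numpy as np

c1 = np.array([1, 1, 1, 1])    # Walsh code of user 1
c2 = np.array([1, -1, 1, -1])  # Walsh code of user 2 (dot product with c1 is 0)

b1, b2 = 1, -1                 # one data bit per user, mapped to +1/-1

# Each bit is spread by its user's code; the chip streams add on the channel.
channel = b1 * c1 + b2 * c2

# Despreading: correlate with the wanted code and normalize by its length.
print(np.dot(channel, c1) / len(c1))  # -> 1.0, user 1's bit
print(np.dot(channel, c2) / len(c2))  # -> -1.0, user 2's bit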

CDMA/FDD in IS-95
In this standard, the frequency range is: 869-894 MHz (for Rx) and 824-849 MHz (for Tx). In
such a system, there are a total of 20 channels and 798 users per channel. For each channel, the
bit rate is 1.2288 Mbps. For orthogonality, it usually combines 64 Walsh-Hadamard codes and an
m-sequence.

CDMA and Self-interference Problem


In CDMA, self-interference arises from the presence of delayed replicas of signal due to
multipath. The delays cause the spreading sequences of the different users to lose their
orthogonality, as by design they are orthogonal only at zero phase offset. Hence, in despreading a
given user's waveform, nonzero contributions to that user's signal arise from the transmissions of
the other users in the network. This is distinct from both TDMA and FDMA, wherein for
reasonable time or frequency guard bands, respectively, orthogonality of the received signals can
be preserved.

CDMA and Near-Far Problem


The near-far problem is a serious one in CDMA. This problem arises from the fact that signals
closer to the receiver of interest are received with smaller attenuation than are signals located
further away. Therefore the strong signal from the nearby transmitter will mask the weak signal
from the remote transmitter. In TDMA and FDMA, this is not a problem since mutual
interference can be filtered. In CDMA, however, the near-far effect combined with imperfect
orthogonality between codes (e.g. due to different time shifts), leads to substantial interference.
Accurate and fast power control appears essential to ensure reliable operation of multiuser
DS-CDMA systems.

Space Division Multiple Access


SDMA utilizes the spatial separation of the users in order to optimize the use of the frequency
spectrum. A primitive form of SDMA is when the same frequency is reused in different cells in a
cellular wireless network. The radiated power of each user is controlled by Space division
multiple access. SDMA serves different users by using spot beam antennas. These areas may be
served by the same frequency or different frequencies. However for limited co-channel
interference it is required that the cells be sufficiently separated. This limits the number of cells a
region can be divided into and hence limits the frequency re-use factor. A more advanced
approach can further increase the capacity of the network. This technique would enable
frequency re-use within the cell. In a practical cellular environment it is improbable to have just
one transmitter fall within the receiver beam width. Therefore it becomes imperative to use other
multiple access techniques in conjunction with SDMA. Within the area covered by one antenna
beam, TDMA or CDMA is employed; where different areas are covered by different beams, the
frequency can be re-used, or FDMA can be used with different frequencies per beam.

Function of Space Division Multiple Access


» All users can communicate at the same time using the same channel.
» SDMA is completely free from interference.
» A single satellite can communicate with multiple satellite receivers using the same frequency.
» The directional spot-beam antennas are used and hence the base station in SDMA, can track a
moving user.
» Controls the radiated energy for each user in space.

Computer Network Model


Network models define a set of network layers and how they interact. There are several different
network models depending on what organization or company started them. The Open Systems
Interconnection model (OSI) is a conceptual model that characterizes and standardizes the
internal functions of a communication system by partitioning it into abstraction layers. The
model is a product of the Open Systems Interconnection project at the International Organization
for Standardization (ISO), maintained under the identification ISO/IEC 7498-1. The model groups
communication functions into seven logical layers. A layer serves the layer above it and is served
by the layer below it. For example, a layer that provides error-free communications across a
network provides the path needed by applications above it, while it calls the next lower layer to
send and receive packets that make up the contents of that path. Two instances at one layer are
connected by a horizontal connection on that layer.

There are two main network models :


1. Open Systems Interconnection model (OSI) Model
2. TCP/IP Model

Layered Tasks
We use the concept of layers in our daily life. As an example, let us consider two friends who
communicate through postal mail. The process of sending a letter to a friend would be complex if
there were no services available from the post office. The figure below shows the steps in this task.

In the figure we have a sender, a receiver, and a carrier that transports the letter. There is a hierarchy
of tasks.

At the Sender Site


The activities that take place at the sender site :
» Higher layer : The sender writes the letter, inserts the letter in an envelope, writes the sender
and receiver addresses, and drops the letter in a mailbox.
» Middle layer : The letter is picked up by a letter carrier and delivered to the post office.
» Lower layer : The letter is sorted at the post office; a carrier transports the letter.
On the Way
The letter is then on its way to the recipient. On the way to the recipient's local post office, the
letter may actually go through a central office. In addition, it may be transported by truck, train,
airplane, boat, or a combination of these.

At the Receiver Site


» Lower layer : The carrier transports the letter to the post office.
» Middle layer : The letter is sorted and delivered to the recipient's mailbox.
» Higher layer : The receiver picks up the letter, opens the envelope, and reads it.

The OSI Model


The OSI model (minus the physical medium) is based on a proposal developed by the
International Standards Organization (ISO) as a first step toward international standardization of
the protocols used in the various layers (Day and Zimmermann, 1983). It was revised in 1995.
The model is called the ISO OSI (Open Systems Interconnection) Reference Model because it
deals with connecting open systems-that is, systems that are open for communication with other
systems. We will just call it the OSI model for short.

The OSI model has seven layers. The principles that were applied to arrive at the seven layers
can be briefly summarized as follows:
1. A layer should be created where a different abstraction is needed.
2. Each layer should perform a well-defined function.
3. The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
4. The layer boundaries should be chosen to minimize the information flow across the interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown
together in the same layer out of necessity and small enough that the architecture does not
become unwieldy.

This model has seven layers


Layers of OSI Model

Application Layer
This layer is responsible for providing interface to the application user. This layer encompasses
protocols which directly interact with the user.

Presentation Layer
This layer defines how data in the native format of the remote host should be presented in the
native format of the local host.

Session Layer
This layer maintains sessions between remote hosts. For example, once user/password
authentication is done, the remote host maintains this session for a while and does not ask for
authentication again in that time span.

Transport Layer
This layer is responsible for end-to-end delivery between hosts.

Network Layer
This layer is responsible for address assignment and uniquely addressing hosts in a network.

Data Link Layer


This layer is responsible for reading and writing data from and onto the line. Link errors are
detected at this layer.

Physical Layer
This layer defines the hardware, cabling, wiring, power output, pulse rate etc.

Physical Layer
The physical layer, the lowest layer of the OSI model, is concerned with the transmission and
reception of the unstructured raw bit stream over a physical medium. It describes the
electrical/optical, mechanical, and functional interfaces to the physical medium, and carries the
signals for all of the higher layers.

Data encoding
Modifies the simple digital signal pattern (1s and 0s) used by the PC to better accommodate the
characteristics of the physical medium, and to aid in bit and frame synchronization. It
determines:
» What signal state represents a binary 1
» How the receiving station knows when a "bit-time" starts
» How the receiving station delimits a frame

Physical medium
Physical medium attachment, accommodating various possibilities in the medium:
» Will an external transceiver (MAU) be used to connect to the medium?
» How many pins do the connectors have and what is each pin used for?

Transmission technique
Determines whether the encoded bits will be transmitted by baseband (digital) or broadband
(analog) signaling.

Physical medium transmission


Transmits bits as electrical or optical signals appropriate for the physical medium, and
determines:
» What physical medium options can be used
» How many volts/db should be used to represent a given signal state, using a given physical
medium

Data Link Layer


The main task of the data link layer is to transform a raw transmission facility into a line that
appears free of undetected transmission errors to the network layer. It accomplishes this task by
having the sender break up the input data into data frames (typically a few hundred or a few
thousand bytes) and transmit the frames sequentially. If the service is reliable, the receiver
confirms correct receipt of each frame by sending back an acknowledgement frame.

Another issue that arises in the data link layer (and most of the higher layers as well) is how to
keep a fast transmitter from drowning a slow receiver in data. Some traffic regulation mechanism
is often needed to let the transmitter know how much buffer space the receiver has at the
moment. Frequently, this flow regulation and the error handling are integrated.

Broadcast networks have an additional issue in the data link layer: how to control access to the
shared channel. A special sublayer of the data link layer, the medium access control sublayer,
deals with this problem.

Addressing
Headers and trailers are added, containing the physical addresses of the adjacent nodes, and
removed upon a successful delivery.

Flow Control
This avoids overwriting on the receiver's buffer by regulating the amount of data that can be sent.

Media Access Control (MAC)


In LANs it decides who can send data, when and how much.
Synchronization
Headers have bits, which tell the receiver when a frame is arriving. It also contains bits to
synchronize its timing to know the bit interval to recognize the bit correctly. Trailers mark the
end of a frame, apart from containing the error control bits.

Error control
It checks the CRC to ensure the correctness of the frame. If incorrect, it asks for retransmission.
Again, here there are multiple schemes (positive acknowledgement, negative acknowledgement,
go-back-n, sliding window, etc.).

Node-to-node delivery
Finally, it is responsible for error-free delivery of the entire frame/packet to the next adjacent
node (node-to-node delivery).

Network Layer
The network layer controls the operation of the subnet. A key design issue is determining how
packets are routed from source to destination. Routes can be based on static tables that are "wired
into" the network and rarely changed. They can also be determined at the start of each
conversation, for example, a terminal session (e.g., a login to a remote machine). Finally, they
can be highly dynamic, being determined anew for each packet, to reflect the current network
load.

If too many packets are present in the subnet at the same time, they will get in one another's way,
forming bottlenecks. The control of such congestion also belongs to the network layer. More
generally, the quality of service provided (delay, transit time, jitter, etc.) is also a network layer
issue.

When a packet has to travel from one network to another to get to its destination, many problems
can arise. The addressing used by the second network may be different from the first one. The
second one may not accept the packet at all because it is too large. The protocols may differ, and
so on. It is up to the network layer to overcome all these problems to allow heterogeneous
networks to be interconnected.

In broadcast networks, the routing problem is simple, so the network layer is often thin or even
nonexistent.

Transport Layer
The basic function of the transport layer is to accept data from above, split it up into smaller
units if need be, pass these to the network layer, and ensure that the pieces all arrive correctly at
the other end. Furthermore, all this must be done efficiently and in a way that isolates the upper
layers from the inevitable changes in the hardware technology.

The transport layer also determines what type of service to provide to the session layer, and,
ultimately, to the users of the network. The most popular type of transport connection is an error-
free point-to-point channel that delivers messages or bytes in the order in which they were sent.
However, other possible kinds of transport service are the transporting of isolated messages, with
no guarantee about the order of delivery, and the broadcasting of messages to multiple
destinations. The type of service is determined when the connection is established. (As an aside,
an error-free channel is impossible to achieve; what people really mean by this term is that the
error rate is low enough to ignore in practice.)

The transport layer is a true end-to-end layer, all the way from the source to the destination. In
other words, a program on the source machine carries on a conversation with a similar program
on the destination machine, using the message headers and control messages. In the lower layers,
the protocols are between each machine and its immediate neighbors, and not between the
ultimate source and destination machines, which may be separated by many routers. This is the
difference between layers 1 through 3, which are chained, and layers 4 through 7, which are
end-to-end.

The responsibilities of the transport layer are as follows :


» Host-to-host message delivery : Ensuring that all the packets of a message sent by a source
node arrive at the intended destination.
» Application to application communication : The transport layer enables communication
between two applications running on different computers.
» Segmentation and reassembly : The transport layer breaks a message into packets and numbers
them by adding sequence numbers, so that the destination can reassemble the original message.
» Connection : The transport layer might create a logical connection between the source and the
destination for the duration of the complete message transfer for better control over the message
transfer.

The network layer performs the following functions


» Routing : as discussed earlier.
» Congestion control : as discussed before.
» Logical addressing : source and destination logical addresses (e.g. IP addresses).
» Address transformations : interpreting logical addresses to get their physical equivalents
(e.g. the ARP protocol).
» Source-to-destination error-free delivery of a packet.

Session Layer
The session layer allows users on different machines to establish sessions between them.
Sessions offer various services, including dialog control (keeping track of whose turn it is to
transmit), token management (preventing two parties from attempting the same critical
operation at the same time), and synchronization (checkpointing long transmissions to allow
them to continue from where they were after a crash).

» Sessions and sub-sessions : The layer divides a session into sub-sessions to avoid
retransmission of entire messages by adding a checkpointing feature.

» Synchronization : The session layer decides the order in which data need to be passed to the
transport layer.

» Dialog control : The session layer also decides which user/application sends data, at what
point of time, and whether the communication is simplex, half duplex or full duplex.

» Session closure : The session layer ensures that the session between the hosts is closed
gracefully.

Presentation Layer
Unlike lower layers, which are mostly concerned with moving bits around, the presentation layer
is concerned with the syntax and semantics of the information transmitted. In order to make it
possible for computers with different data representations to communicate, the data structures to
be exchanged can be defined in an abstract way, along with a standard encoding to be used "on
the wire." The presentation layer manages these abstract data structures and allows higher-level
data structures (e.g., banking records); to be defined and exchanged.

The responsibilities of the presentation layer are as follows :


» Translation : The translation between the sender's and the receiver's message formats is done
by the presentation layer if the two formats are different.

» Encryption : The presentation layer performs data encryption and decryption for security.

» Compression : For the efficient transmission, the presentation layer performs data
compression before sending and decompression at the destination.

Application Layer
The application layer contains a variety of protocols that are commonly needed by users. One
widely used application protocol is HTTP (Hyper Text Transfer Protocol), which is the basis for
the World Wide Web. When a browser wants a Web page, it sends the name of the page it wants
to the server using HTTP. The server then sends the page back. Other application protocols are
used for file transfer, electronic mail, and network news.
The responsibilities of the application layer are as follows :
» Network abstraction : The application layer provides an abstraction of the underlying
network to an end user and an application.

» File access and transfer : It allows a user to access, download or upload files from/to a remote
host.

» Mail services : It allows the users to use the mail services.

» Remote login : It allows logging into a remote host.

» World Wide Web (WWW) : Accessing the Web pages is also a part of this layer.

TCP/IP Reference Model


The TCP/IP reference model was developed prior to the OSI model. The Internet uses the TCP/IP
protocol suite, also known as the Internet suite, which defines the Internet model with its
four-layered architecture. The OSI model is a general communication model, but the Internet
model is what the internet uses for all its communication. The internet is independent of its
underlying network architecture, and so is its model.
The major design goals of this model were :
1. To connect multiple networks together so that they appear as a single network.
2. To survive after partial subnet hardware failures.
3. To provide a flexible architecture.

Unlike the OSI reference model, the TCP/IP reference model has only 4 layers :
1. Host-to-Network Layer
2. Internet Layer
3. Transport Layer
4. Application Layer

Host-to-Network Layer
The TCP/IP reference model does not really say much about what happens here, except to point
out that the host has to connect to the network using some protocol so it can send IP packets to it.
This protocol is not defined and varies from host to host and network to network.

Internet Layer
This layer, called the internet layer, is the linchpin that holds the whole architecture together. Its
job is to permit hosts to inject packets into any network and have them travel independently to the
destination (potentially on a different network). They may even arrive in a different order than
they were sent, in which case it is the job of higher layers to rearrange them, if in-order delivery
is desired. Note that ''internet'' is used here in a generic sense, even though this layer is present in
the Internet.

The internet layer defines an official packet format and protocol called IP (Internet Protocol).
The job of the internet layer is to deliver IP packets where they are supposed to go. Packet
routing is clearly the major issue here, as is avoiding congestion. For these reasons, it is
reasonable to say that the TCP/IP internet layer is similar in functionality to the OSI network
layer. Fig. shows this correspondence.

The Transport Layer


The layer above the internet layer in the TCP/IP model is now usually called the transport layer.
It is designed to allow peer entities on the source and destination hosts to carry on a
conversation, just as in the OSI transport layer. Two end-to-end transport protocols have been
defined here. The first one, TCP (Transmission Control Protocol), is a reliable connection-
oriented protocol that allows a byte stream originating on one machine to be delivered without
error on any other machine in the internet. It fragments the incoming byte stream into discrete
messages and passes each one on to the internet layer. At the destination, the receiving TCP
process reassembles the received messages into the output stream. TCP also handles flow control
to make sure a fast sender cannot swamp a slow receiver with more messages than it can handle.
The second protocol, UDP (User Datagram Protocol), is an unreliable, connectionless protocol
for applications that do not want TCP's sequencing or flow control and wish to provide their own.
Relation between OSI & TCP/IP Model
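
As a small illustration of the service TCP offers to the layer above, the Python sketch below
opens a reliable byte stream and sends a minimal HTTP request over it; the host and the exact
request are illustrative, not part of the text above.

import socket

# TCP gives the application an ordered, error-checked byte stream;
# sequencing, retransmission and flow control all happen below this line.
with socket.create_connection(("example.com", 80)) as s:
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(s.recv(200).decode(errors="replace"))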

The Application Layer


The TCP/IP model does not have session or presentation layers. On top of the transport layer is
the application layer. It contains all the higher-level protocols. The early ones included virtual
terminal (TELNET), file transfer (FTP), and electronic mail (SMTP), as shown in Fig.6.2. The
virtual terminal protocol allows a user on one machine to log onto a distant machine and work
there. The file transfer protocol provides a way to move data efficiently from one machine to
another. Electronic mail was originally just a kind of file transfer, but later a specialized protocol
(SMTP) was developed for it. Many other protocols have been added to these over the years: the
Domain Name System (DNS) for mapping host names onto their network addresses, NNTP, the
protocol for moving USENET news articles around, and HTTP, the protocol for fetching pages
on the World Wide Web, and many others.

Medium Access Methods


The medium access sublayer, which is part of the data link layer, deals with determining who
may use the network next when the network consists of a single shared channel, as in most
broadcast networks. This layer is also known as the Medium Access Control sublayer. Networks
can be divided into two categories: those using point-to-point connections and those using broadcast
divided into two categories: those using point-to-point connections and those using broadcast
channels. In any broadcast network, the key issue is how to determine who gets to use the
channel when there is competition for it. To make this point clearer, consider a conference call in
which six people, on six different telephones, are all connected so that each one can hear and talk
to all the others. It is very likely that when one of them stops speaking, two or more will start
talking at once, leading to chaos. When only a single channel is available, determining who
should go next is much harder.

The protocols used to determine who goes next on a multi-access channel belong to a sub-layer
of the data link layer called the MAC (Medium Access Control) sub-layer. The MAC sub-layer
is especially important in LANs, many of which use a multi-access channel as the basis for
communication. For most people, understanding protocols involving multiple parties is easier
after two party protocols are well understood. For that reason we have deviated slightly from a
strict bottom-up order of presentation.

Access Methods
Access method is the term given to the set of rules by which networks arbitrate the use of a
common medium. It is the way the LAN keeps different streams of data from crashing into each
other as they share the network.

Networks need access methods for the same reason streets need traffic lights: to keep people from
hitting each other. Think of the access method as traffic law. The network cable is the street.
Traffic law (or the access method) regulates the use of the street (or cable), determining who can
drive (or send data) where and at what time. On a network, if two or more people try to send data
at exactly the same time, their signals will interfere with each other, ruining the data being
transmitted. The access method prevents this.

The access method works at the data-link layer (layer 2) because it is concerned with the use of
the medium that connects users. The access method doesn't care what is being sent over the
network, just like the traffic law doesn't stipulate what you can carry. It just says you have to
drive on the right side of the road and obey the traffic lights and signs.

Three traditional access methods are used today, although others exist and may become
increasingly important. They are Ethernet, Token Ring, and ARCnet. Actually, these
technologies encompass wider-ranging standards than their access methods. They also define
other features of network transmission, such as the electrical characteristics of signals, and the
size of data packets sent. Nevertheless, these standards are best known by the access methods
they employ in accessing channels.

ALOHA
In the 1970s, Norman Abramson and his colleagues at the University of Hawaii devised a new
and elegant method to solve the channel allocation problem. Their work has been extended by
many researchers since then (Abramson, 1985).

Although Abramson's work, called the ALOHA system, used ground-based radio broadcasting,
the basic idea is applicable to any system in which uncoordinated users are competing for the use
of a single shared channel. There are two versions of ALOHA: pure and slotted. They differ with
respect to whether time is divided into discrete slots into which all frames must fit. Pure ALOHA
does not require global time synchronization; slotted ALOHA does.

Pure ALOHA
The basic idea of an ALOHA system is simple: let users transmit whenever they have data to be
sent. There will be collisions, of course, and the colliding frames will be damaged. However, due
to the feedback property of broadcasting, a sender can always find out whether its frame was
destroyed by listening to the channel, the same way other users do. If the frame was destroyed,
the sender just waits a random amount of time and sends it again.

The waiting time must be random or the same frames will collide over and over, in lockstep.
Systems in which multiple users share a common channel in a way that can lead to conflicts are
widely known as contention systems.

We have made the frames all the same length because the throughput of ALOHA systems is
maximized by having a uniform frame size rather than by allowing variable length frames.
Pure ALOHA

Whenever two frames try to occupy the channel at the same time, there will be a collision and
both will be garbled. If the first bit of a new frame overlaps with just the last bit of a frame
almost finished, both frames will be totally destroyed and both will have to be retransmitted
later. The checksum cannot (and should not) distinguish between a total loss and a near miss.
Bad is bad.

In the original system, a station transmits a frame containing the user's typed line and then
checks the channel to see if the transmission was successful. If so, the user sees the reply and
goes back to typing. If not, the station waits and the frame is retransmitted over and over until it
has been successfully sent. Let the ''frame
time'' denote the amount of time needed to transmit the standard, fixed-length frame. At this
point we assume that the infinite population of users generates new frames according to a
Poisson distribution with mean N frames per frame time. If N > 1, the user community is
generating frames at a higher rate than the channel can handle, and nearly every frame will suffer
a collision. For reasonable throughput we would expect 0 < N < 1. In addition to the new frames,
the stations also generate retransmissions of frames that previously suffered collisions.

Advantages :
» Superior to fixed assignment when there are large number of bursty stations.
» Adapts to varying number of stations.

Disadvantages :
» Theoretically proven throughput maximum of 18.4%.
» Requires queuing buffers for retransmission of packets.
Slotted ALOHA
In 1972, Roberts published a method for doubling the capacity of an ALOHA system (Robert,
1972). His proposal was to divide time into discrete intervals, each interval corresponding to one
frame. This approach requires the users to agree on slot boundaries. One way to achieve
synchronization would be to have one special station emit a pip at the start of each interval, like a
clock.
In Roberts' method, which has come to be known as slotted ALOHA, in contrast to Abramson's
pure ALOHA, a computer is not permitted to send whenever a carriage return is typed. Instead, it
is required to wait for the beginning of the next slot. Thus, the continuous pure ALOHA is turned
into a discrete one. Since the vulnerable period is now halved, the probability of no other traffic
during the same slot as our test frame is e^(-G), which leads to
S = G e^(-G)
The slotted ALOHA peaks at G = 1, with a throughput of S = 1/e or about 0.368, twice that of
pure ALOHA as shown in figure 3.4. If the system is operating at G = 1, the probability of an
empty slot is 0.368. The best we can hope for using slotted ALOHA is 37 percent of the slots
empty, 37 percent successes, and 26 percent collisions. Operating at higher values of G reduces
the number of empties but increases the number of collisions exponentially. To see how this
rapid growth of collisions with G comes about, consider the transmission of a test frame.

Slotted ALOHA

Advantages :
» Doubles the efficiency of Aloha.
» Adaptable to a changing station population.
Disadvantages :
» Theoretically proven throughput maximum of 36.8%.
» Requires queuing buffers for retransmission of packets.

Synchronization required
» Synchronous system: time divided into slots
» Slot size equals fixed packet transmission time
» When Packet ready for transmission, wait until start of next slot
» Packets overlap completely or not at all
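
A minimal Python sketch evaluating the slotted-ALOHA formula above together with the standard
pure-ALOHA formula S = G e^(-2G) (assumed here; it is the result behind the 18.4% maximum
cited earlier) confirms both throughput peaks:

import math

def pure_aloha(G):
    return G * math.exp(-2 * G)    # vulnerable period of two frame times

def slotted_aloha(G):
    return G * math.exp(-G)        # vulnerable period of one slot

print(pure_aloha(0.5))     # -> 0.1839..., the 18.4 % maximum at G = 0.5
print(slotted_aloha(1.0))  # -> 0.3678..., the 36.8 % maximum at G = 1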

CSMA with Collision Detection


Persistent and non persistent CSMA protocols are clearly an improvement over ALOHA because
they ensure that no station begins to transmit when it senses the channel busy. Another
improvement is for stations to abort their transmissions as soon as they detect a collision. In
other words, if two stations sense the channel to be idle and begin transmitting simultaneously,
they will both detect the collision almost immediately. Rather than finish transmitting their
frames, which are irretrievably garbled anyway, they should abruptly stop transmitting as soon as
the collision is detected. Quickly terminating damaged frames saves time and bandwidth.

This protocol, known as CSMA/CD (CSMA with Collision Detection) is widely used on LANs
in the MAC sublayer. In particular, it is the basis of the popular Ethernet LAN, so it is worth
devoting some time to looking at it in detail. CSMA/CD, as well as many other LAN protocols,
uses the conceptual model of Fig.5. At the point marked t0, a station has finished transmitting its
frame. Any other station having a frame to send may now attempt to do so. If two or more
stations decide to transmit simultaneously, there will be a collision. Collisions can be detected by
looking at the power or pulse width of the received signal and comparing it to the transmitted
signal.
CSMA/CD can be in one of three states: contention, transmission, or idle

After a station detects a collision, it aborts its transmission, waits a random period of time, and
then tries again, assuming that no other station has started transmitting in the meantime.
Therefore, our model for CSMA/CD will consist of alternating contention and transmission
periods, with idle periods occurring when all stations are quiet (e.g., for lack of work).
Now let us look closely at the details of the contention algorithm. Suppose that two stations both
begin transmitting at exactly time t0. How long will it take them to realize that there has been a
collision? The answer to this question is vital to determining the length of the contention period
and hence what the delay and throughput will be. The minimum time to detect the collision is
then just the time it takes the signal to propagate from one station to the other.

Based on this reasoning, you might think that a station not hearing a collision for a time equal to
the full cable propagation time after starting its transmission could be sure it had seized the
cable. By ''seized,'' we mean that all other stations knew it was transmitting and would not
interfere. This conclusion is wrong. Consider the following worst-case scenario. Let the time for
a signal to propagate between the two farthest stations be τ. At t0, one station begins transmitting.
At τ − ε, an instant before the signal arrives at the most distant station, that station also begins
transmitting. Of course, it detects the collision almost instantly and stops, but the little noise
burst caused by the collision does not get back to the original station until time 2τ. In other words,
in the worst case a station cannot be sure that it has seized the channel until it has transmitted for
2τ without hearing a collision. For this reason we will model the contention interval as a slotted
ALOHA system with slot width 2τ. On a 1-km long coaxial cable, τ ≈ 5 μs. For simplicity we will
assume that each slot contains just 1 bit. Once the channel has been seized, a station can transmit
at any rate it wants to, of course, not just at 1 bit per sec.
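
The random wait after a collision is commonly realized with binary exponential backoff; a
minimal sketch is shown below. The Ethernet-style details (doubling the slot range per collision,
capping the exponent at 10) are assumed here, since the text above only requires that the wait
be random.

import random

def backoff_slots(collisions):
    # After the n-th collision, wait a random number of 2*tau contention
    # slots drawn from 0 .. 2^n - 1, with the exponent capped at 10.
    k = min(collisions, 10)
    return random.randint(0, 2**k - 1)

for attempt in range(1, 5):
    print("collision", attempt, "-> wait", backoff_slots(attempt), "slots")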

Collision-Free Protocols
Although collisions do not occur with CSMA/CD once a station has unambiguously captured the
channel, they can still occur during the contention period. These collisions adversely affect the
system performance, especially when the cable is long and the frames are short. And CSMA/CD
is not universally applicable. In this section, we will examine some protocols that resolve the
contention for the channel without any collisions at all, not even during the contention period.
Most of these are not currently used in major systems, but in a rapidly changing field, having
some protocols with excellent properties available for future systems is often a good thing. In the
protocols to be described, we assume that there are exactly N stations, each with a unique
address from 0 to N - 1 ''wired'' into it. It does not matter that some stations may be inactive part
of the time. We also assume that propagation delay is negligible.

A Bit-Map Protocol
In this collision-free protocol, the basic bit-map method, each contention period consists of
exactly N slots. If station 0 has a frame to send, it transmits a 1 bit during the zeroth slot. No
other station is allowed to transmit during this slot. Regardless of what station 0 does, station 1
gets the opportunity to transmit a 1 during slot 1, but only if it has a frame queued. In general,
station j may announce that it has a frame to send by inserting a 1 bit into slot j. After all N slots
have passed by, each station has complete knowledge of which stations wish to transmit. At that
point, they begin transmitting in numerical order.

Transmission of frames in Bit-Map Protocol

Since everyone agrees on who goes next, there will never be any collisions. After the last ready
station has transmitted its frame, an event all stations can easily monitor, another N bit
contention period is begun. If a station becomes ready just after its bit slot has passed by, it is out
of luck and must remain silent until every station has had a chance and the bit map has come
around again. Protocols like this in which the desire to transmit is broadcast before the actual
transmission are called reservation protocols.
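
One contention period of the basic bit-map protocol can be sketched as follows; the station count
and queue states are illustrative.

def bitmap_round(ready):
    # ready[j] is True if station j has a frame queued; station j may
    # only assert a 1 during its own contention slot j.
    reservations = [1 if r else 0 for r in ready]
    # After all N slots, everyone knows who transmits, in numerical order.
    order = [j for j, bit in enumerate(reservations) if bit]
    return reservations, order

bits, order = bitmap_round([False, True, False, True, True])
print(bits)   # -> [0, 1, 0, 1, 1], the N contention slots
print(order)  # -> [1, 3, 4], collision-free transmission order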

Binary Countdown
A problem with the basic bit-map protocol is that the overhead is 1 bit per station, so it does not
scale well to networks with thousands of stations. We can do better than that by using binary
station addresses. A station wanting to use the channel now broadcasts its address as a binary bit
string, starting with the high-order bit. All addresses are assumed to be the same length. The bits
in each address position from different stations are BOOLEAN ORed together. We will call this
protocol binary countdown. It implicitly assumes that the transmission delays are negligible so
that all stations see asserted bits essentially instantaneously. To avoid conflicts, an arbitration
rule must be applied: as soon as a station sees that a high-order bit position that is 0 in its address
has been overwritten with a 1, it gives up.
Stations giving up in Binary Countdown
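
A minimal sketch of the arbitration round, with the channel modeled as a bitwise OR of all
asserted address bits; the four station addresses are illustrative.

def binary_countdown(addresses, width=4):
    # Stations broadcast address bits high-order first; the channel ORs
    # them; a station drops out on seeing a 1 where its own bit is 0.
    contenders = set(addresses)
    for bit in range(width - 1, -1, -1):
        wire = max((a >> bit) & 1 for a in contenders)  # OR on the wire
        if wire:
            contenders = {a for a in contenders if (a >> bit) & 1}
    return contenders

print(binary_countdown([2, 4, 9, 10]))  # -> {10}: the highest address wins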

Error Detection and Correction


Environmental interference and physical defects in the communication medium can cause
random bit errors during data transmission. Error coding is a method of detecting and correcting
these errors to ensure information is transferred intact from its source to its destination. Error
coding is used for fault tolerant computing in computer memory, magnetic and optical data
storage media, satellite and deep space communications, network communications, cellular
telephone networks, and almost any other form of digital data communication. Error coding uses
mathematical formulas to encode data bits at the source into longer bit words for transmission.
The "code word" can then be decoded at the destination to retrieve the information. The extra
bits in the code word provide redundancy that, according to the coding scheme used, will allow
the destination to use the decoding process to determine if the communication medium
introduced errors and in some cases correct them so that the data need not be retransmitted.
Different error coding schemes are chosen depending on the types of errors expected, the
communication medium's expected error rate, and whether or not data retransmission is possible.
Faster processors and better communications technology make more complex coding schemes,
with better error detecting and correcting capabilities, possible for smaller embedded systems,
allowing for more robust communications. However, tradeoffs between bandwidth and coding
overhead, coding complexity and allowable coding delay between transmissions, must be
considered for each application.

Even if we know what type of errors can occur, we cannot simply recognize them. One way to do
so is by comparing the received copy with another copy of the intended transmission. In this
mechanism the source data block is sent twice. The receiver compares them with the help of a
comparator, and if the two blocks differ, a request for re-transmission is made. To achieve
forward error correction, three copies of the same data block are sent and a majority decision
selects the correct block. These methods are very inefficient and increase the traffic two or three times.
Fortunately there are more efficient error detection and correction codes. There are two basic
strategies for dealing with errors. One way is to include enough redundant information (extra bits
are introduced into the data stream at the transmitter on a regular and logical basis) along with
each block of data sent to enable the receiver to deduce what the transmitted character must have
been. The other way is to include only enough redundancy to allow the receiver to deduce that
error has occurred, but not which error has occurred and the receiver asks for retransmission. The
former strategy uses Error-Correcting Codes and latter uses Error-detecting Codes.

Types of errors
These interferences can change the timing and shape of the signal. If the signal is carrying binary
encoded data, such changes can alter the meaning of the data.
These errors can be divided into two types :
1. Single-bit error
2. Burst error.

Single-bit Error
The term single-bit error means that only one bit of a given data unit (such as a byte, character, or
data unit) is changed from 1 to 0 or from 0 to 1.

Single bit error

Single-bit errors are the least likely type of error in serial data transmission. To see why, imagine a
sender sends data at 10 Mbps. This means that each bit lasts only for 0.1 μs (micro-second). For
a single bit error to occur noise must have duration of only 0.1 μs (micro-second), which is very
rare. However, a single-bit error can happen if we are having a parallel data transmission. For
example, if 16 wires are used to send all 16 bits of a word at the same time and one of the wires
is noisy, one bit is corrupted in each word.

Burst Error
The term burst error means that two or more bits in the data unit have changed from 0 to 1 or
vice-versa. Note that a burst error doesn't necessarily mean that the errors occur in consecutive
bits. The length of the burst error is measured from the first corrupted bit to the last corrupted bit.
Some bits in between may not be corrupted.

Burst Error

Burst errors are most likely to happen in serial transmission. The duration of the noise is
normally longer than the duration of a single bit, which means that when noise affects data, it
affects a set of bits. The number of bits affected depends on the data rate and the duration of the noise.

Error Detecting Codes


Basic approach used for error detection is the use of redundancy, where additional bits are added
to facilitate detection and correction of errors.

Popular Error Detecting Codes techniques are:


1. Simple Parity check
2. Two-dimensional Parity check
3. Checksum
4. Cyclic redundancy check ( CRC )

Simple Parity Checking or One-dimension Parity Check


The most common and least expensive mechanism for error- detection is the simple parity check.
In this technique, a redundant bit, called a parity bit, is appended to every data unit so that the
number of 1s in the unit (including the parity bit) becomes even.

Blocks of data from the source are passed through a parity bit generator, where a parity bit of 1
is added to the block if it contains an odd number of 1's (ON bits), and 0 is added if it contains
an even number of 1's. At the receiving end the parity bit is computed from the received data bits
and compared with the received parity bit. This scheme makes the total number of 1's even,
which is why it is called even parity checking. Considering a 4-bit word, the different
combinations of the data words and the corresponding code words are shown below.
Even-parity checking scheme

An observation of the table reveals that to move from one code word to another, at least two data
bits should be changed. Hence these set of code words are said to have a minimum distance
(hamming distance) of 2, which means that a receiver that has knowledge of the code word set
can detect all single bit errors in each code word. However, if two errors occur in the code word,
it becomes another valid member of the set and the decoder will see only another valid code
word and know nothing of the error. Thus errors in more than one bit cannot be detected. In fact
it can be shown that a single parity check code can detect only odd number of errors in a code
word.
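
A minimal sketch of even-parity generation and checking, matching the scheme described above:

def add_even_parity(bits):
    # Append a parity bit so the total number of 1s becomes even.
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    return sum(codeword) % 2 == 0

word = add_even_parity([1, 0, 1, 1])  # three 1s, so the parity bit is 1
print(word)                           # -> [1, 0, 1, 1, 1]
print(parity_ok(word))                # -> True
word[2] ^= 1                          # inject a single-bit error
print(parity_ok(word))                # -> False: the error is detected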

Two-dimension Parity Check


Performance can be improved by using two-dimensional parity check, which organizes the block
of bits in the form of a table. Parity check bits are calculated for each row, which is equivalent to
a simple parity check bit. Parity check bits are also calculated for all columns then both are sent
along with the data. At the receiving end these are compared with the parity bits calculated on
the received data.
Two-dimension Parity Checking

Two-dimension parity checking increases the likelihood of detecting burst errors. As shown in
Fig. 3.2.4, a 2-D parity check of n bits can detect a burst error of n bits. A burst error of more
than n bits is also detected by a 2-D parity check with a high probability. There is,
however, one pattern of error that remains elusive. If two bits in one data unit are damaged and
two bits in exactly same position in another data unit are also damaged, the 2-D Parity check
checker will not detect an error.
For example, consider the two data units 11001100 and 10101100. If the first bit and the
second-to-last bit in each of them are changed, making the data units 01001110 and 00101110,
the errors cannot be detected by the 2-D parity check.
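
A minimal sketch of two-dimensional parity generation (even parity over rows, then over
columns); the two 4-bit data units are illustrative:

def two_d_parity(rows):
    # Append a parity bit to each row, then compute a parity row over
    # all columns (including the row-parity column).
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    column_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [column_parity]

for row in two_d_parity([[1, 1, 0, 0], [1, 0, 1, 0]]):
    print(row)
# -> [1, 1, 0, 0, 0]
#    [1, 0, 1, 0, 0]
#    [0, 1, 1, 0, 0]   (the column-parity row)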

Checksum
In the checksum error detection scheme, the data is divided into k segments, each of m bits. At
the sender's end the segments are added using 1's complement arithmetic to get the sum. The sum
is complemented to get the checksum. The checksum segment is sent along with the data
segments. At the receiver's end, all received segments are added using 1's complement
arithmetic to get the sum. The sum is complemented. If the result is zero, the received data is
accepted; otherwise it is discarded.

The checksum detects all errors involving an odd number of bits. It also detects most errors
involving even number of bits.
(a) Sender’s end for the calculation of the checksum, (b) Receiving end for checking the
checksum
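
A minimal sketch of the 1's complement checksum over 8-bit segments (the segment width and
the sample data are assumptions for illustration):

M = 8                      # assumed segment width in bits
MASK = (1 << M) - 1

def ones_complement_sum(segments):
    total = 0
    for s in segments:
        total += s
        total = (total & MASK) + (total >> M)  # wrap the carry around
    return total

def checksum(segments):
    return ones_complement_sum(segments) ^ MASK  # complement of the sum

data = [0b10110011, 0b01101100]
c = checksum(data)
# Receiver: adding data and checksum, then complementing, yields 0 if intact.
print(ones_complement_sum(data + [c]) ^ MASK)  # -> 0, data accepted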

Cyclic Redundancy Checks (CRC)


The cyclic redundancy check is the most powerful and easiest to implement of these techniques.
Unlike the checksum scheme, which is based on addition, CRC is based on binary division. In
CRC, a sequence of redundant bits, called cyclic redundancy check bits, is appended to the end
of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary
number. At the destination, the incoming data unit is divided by the same number. If at this step
there is no remainder, the data unit is assumed to be correct and is therefore accepted. A
remainder indicates that the data unit has been damaged in transit and therefore must be rejected.
The generalized technique can be explained as follows.

If a k-bit message is to be transmitted, the transmitter generates an r-bit sequence, known as
the Frame Check Sequence (FCS), so that (k+r) bits are actually transmitted. This r-bit FCS is
generated by dividing the original number, appended by r zeros, by a predetermined number. This
number, which is (r+1) bits in length, can also be considered as the coefficients of a
polynomial, called the generator polynomial. The remainder of this division process is the r-bit
FCS. On receiving the frame, the receiver divides the (k+r)-bit frame by the same predetermined
number; if this produces no remainder, it can be assumed that no error has occurred during the
transmission. The operations at both the sender and receiver ends are shown below.
Basic scheme for Cyclic Redundancy Checking

This mathematical operation is illustrated below by dividing a sample 4-bit number by the
coefficients of the generator polynomial x^3 + x + 1, which is 1011, using modulo-2 arithmetic.
Modulo-2 arithmetic is a binary addition process without any carry-over, which is just the
Exclusive-OR operation. Consider the case where k = 1101. We have to divide 1101000 (i.e. k
appended by 3 zeros) by 1011, which produces the remainder r = 001, so that the bit frame
(k+r) = 1101001 is actually transmitted through the communication channel. At the receiving end,
if the received number, i.e. 1101001, is divided by the same generator polynomial 1011 and gives
the remainder 000, it can be assumed that the data is free of errors.
Cyclic Redundancy Checks
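The modulo-2 long division described above can be expressed directly in a few lines of Python;
the sketch below reproduces the k = 1101, generator 1011 example (helper names are illustrative):

def mod2_div(dividend, generator):
    # Modulo-2 long division (XOR instead of subtraction). Both arguments
    # are bit strings; the remainder is one bit shorter than the generator.
    rem = list(map(int, dividend))
    gen = list(map(int, generator))
    for i in range(len(rem) - len(gen) + 1):
        if rem[i]:                          # divide only when the leading bit is 1
            for j, g in enumerate(gen):
                rem[i + j] ^= g
    return ''.join(map(str, rem[-(len(gen) - 1):]))

k, gen = '1101', '1011'                     # generator = x^3 + x + 1
fcs = mod2_div(k + '000', gen)              # k appended by r = 3 zeros
frame = k + fcs                             # '1101001' is transmitted
print(fcs)                                  # '001'
print(mod2_div(frame, gen))                 # '000' -> frame accepted as error-free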

The transmitter can generate the CRC by using a feedback shift register circuit. The same circuit
can also be used at the receiving end to check whether any error has occurred. All the values can
be expressed as polynomials of a dummy variable X. For example, for P = 11001 the corresponding
polynomial is X^4 + X^3 + 1. A polynomial is selected that has at least the following
properties:
o It should not be divisible by X.
o It should not be divisible by (X+1).

The first condition guarantees that all burst errors of length less than or equal to the degree
of the polynomial are detected. The second condition guarantees that all burst errors affecting
an odd number of bits are detected.

CRC is a very effective error detection technique. If the divisor is chosen according to the
previously mentioned rules, its performance can be summarized as follows:
→ CRC can detect all single-bit errors.
→ CRC can detect all double-bit errors, provided the generator polynomial has at least three 1's.
→ CRC can detect any odd number of errors, provided the generator polynomial contains the factor (X+1).
→ CRC can detect all burst errors of length less than or equal to the degree of the polynomial.
→ CRC detects most larger burst errors with a high probability. For example, CRC-12 detects
99.97% of burst errors with a length of 12 or more.

Error Correcting Codes


The techniques that we have discussed so far can detect errors, but do not correct them. Error
correction can be handled in two ways.
» One is, when an error is discovered, the receiver can have the sender retransmit the entire
data unit. This is known as backward error correction.
» In the other, the receiver can use an error-correcting code, which automatically corrects
certain errors. This is known as forward error correction.

In theory it is possible to correct any number of errors automatically. Error-correcting codes
are more sophisticated than error-detecting codes and require more redundant bits. The number of
bits required to correct a multiple-bit or burst error is so high that in most cases it is
inefficient to do so. For this reason, most error correction is limited to one-, two- or at most
three-bit errors.

Single-bit error correction


The concept of error correction can be easily understood by examining the simplest case:
single-bit errors. As we have already seen, a single-bit error can be detected by the addition
of a parity bit (VRC) to the data to be sent. A single additional bit can detect an error, but
it is not sufficient to correct it. To correct an error, one has to know the exact position of
the error, i.e. exactly which bit is in error. For example, to correct a single-bit error in an
ASCII character, the error correction must determine which one of the seven bits is in error. To
do this, we have to add more redundant bits.

To calculate the number of redundant bits (r) required to correct d data bits, let us find the
relationship between the two. With (d+r) as the total number of bits to be transmitted, r must
be able to indicate at least d+r+1 different values: one value means no error, and the remaining
d+r values indicate the location of an error in each of the d+r positions. So d+r+1 states must
be distinguishable by r bits, and r bits can indicate 2^r states. Hence, 2^r must be at least
d+r+1:

2^r >= d + r + 1

The value of r is determined by putting the value of d into this relation. For example, if d is
7, then the smallest value of r that satisfies the relation is 4, so the total number of bits to
be transmitted is 11 (d + r = 7 + 4 = 11).
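The smallest such r can be found by a simple search; a trivial Python sketch:

def min_redundant_bits(d):
    # Smallest r satisfying 2^r >= d + r + 1.
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

print(min_redundant_bits(7))    # 4, so 7 + 4 = 11 bits are transmitted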

Now let us examine how we can manipulate these bits to discover which bit is in error. A
technique developed by R. W. Hamming provides a practical solution. The coding scheme he
developed is commonly known as the Hamming code. The Hamming code can be applied to data units
of any length and uses the relationship between the data bits and redundant bits discussed
above.
Positions of redundancy bits in Hamming code

The basic approach for error detection using the Hamming code is as follows:
» To each group of m information bits, k parity bits are added to form an (m+k)-bit code word.
» The location of each of the (m+k) digits is assigned a decimal value.
» The k parity bits are placed in positions 1, 2, 4, …, 2^(k-1). The k parity checks are
performed on selected digits of each code word.
» At the receiving end the parity bits are recalculated. The decimal value of the k parity bits
gives the bit position in error, if any.
Use of Hamming code for error correction for 4-bit data

The Hamming code is used here to correct 4-bit numbers (d4d3d2d1) with the help of three
redundant bits (r4, r2, r1, placed at bit positions 4, 2 and 1). For the example data 1010,
first r1 (= 0) is calculated from the parity of bit positions 1, 3, 5 and 7. Then the parity bit
r2 is calculated from bit positions 2, 3, 6 and 7. Finally, the parity bit r4 is calculated from
bit positions 4, 5, 6 and 7, as shown. If any corruption occurs in the transmitted code word
1010010, the bit position in error can be found by recalculating the parity checks at the
receiving end and reading them as the binary number (r4 r2 r1). For example, if the received
code word is 1110010, the recalculated value of (r4 r2 r1) is 110, which indicates that the bit
position in error is 6, the decimal value of 110.
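The whole procedure for this (7,4) example can be sketched in Python as follows (the bit layout
follows the description above; function names are illustrative):

def hamming74_encode(d4, d3, d2, d1):
    # Data bits sit at positions 3, 5, 6, 7; parity bits at 1, 2, 4.
    r1 = d1 ^ d2 ^ d4            # parity over positions 1, 3, 5, 7
    r2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
    r4 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
    return [r1, r2, d1, r4, d2, d3, d4]   # index i holds bit position i+1

def error_position(c):
    # Recompute the three checks; read them as the binary number (r4 r2 r1).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    return 4 * s4 + 2 * s2 + s1           # 0 means no error

code = hamming74_encode(1, 0, 1, 0)   # data 1010 (position 7 first: 1010010)
code[5] ^= 1                          # corrupt bit position 6
pos = error_position(code)
print(pos)                            # 6
code[pos - 1] ^= 1                    # flip it back: error corrected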

Example:
Let us consider an example with 5-bit data. Here 4 parity bits are required. Assume that during
transmission bit 5 has been changed from 1 to 0. The receiver receives the code word and
recalculates the four new parity bits using the same sets of bits used by the sender, plus the
relevant parity (r) bit for each set. It then assembles the new parity values into a binary
number in order of r positions (r8, r4, r2, r1).

Hamming code Example

Calculations :
Parity recalculated (r8, r4, r2, r1) = 0101 in binary = 5 in decimal.
Hence, bit position 5 is in error.
So, the correct code word which was transmitted is recovered by complementing bit position 5.

Computer Network Security


Network security has become increasingly important with the growth in the number and importance
of networks. Network security issues include protecting data from unauthorized access,
protecting data from damage and destruction, and implementing policies and procedures for
recovery from breaches and data losses. Network security is expensive; it is also very
important. An institution's network would possibly be subject to more stringent security
requirements than a similarly-sized corporate network, because of its likelihood of storing
personal and confidential information of network users, the danger of which can be compounded
if any network users are minors. A great deal of attention must be paid to network services to
ensure all network content is appropriate for the network community it serves.
Requirements of Network Security
To understand the types of threats to security that exist, we need to have a definition of security
requirements. Computer and network security address four requirements:

1. Confidentiality : Requires that data only be accessible by authorized parties. This type of
access includes printing, displaying, and other forms of disclosure, including simply revealing
the existence of an object.
2. Integrity : Requires that only authorized parties can modify data. Modification includes
writing, changing, changing status, deleting, and creating.
3. Availability : Requires that data are available to authorized parties.
4. Authenticity : Requires that a host or service be able to verify the identity of a user.

Network Security Threats


Some of the common network security threats are discussed below :

1. Attacks against IP
A number of attacks against IP are possible. Typically, these exploit the fact that IP does not
provide a robust mechanism for authentication, that is, for proving that a packet came from
where it claims it did. A packet simply claims to originate from a given address, and there
isn't a way to be sure that the host that sent the packet is telling the truth. This isn't
necessarily a weakness, per se, but it is an important point, because it means that the facility
of host authentication has to be provided at a higher layer of the ISO/OSI Reference Model.
Today, applications that require strong host authentication (such as cryptographic applications)
do this at the application layer.

Attacks against IP include :


1. IP Spoofing : This is where one host claims to have the IP address of another. Since many
systems (such as router access control lists) define which packets may and which packets may
not pass based on the sender's IP address, this is a useful technique to an attacker: he can send
packets to a host, perhaps causing it to take some sort of action. Additionally, some applications
allow login based on the IP address of the person making the request (such as the Berkeley r-
commands).
2. IP Session Hijacking : This is a relatively sophisticated attack, first described by Steve
Bellovin. It is very dangerous, however, because there are now toolkits available in the
underground community that allow otherwise unskilled bad-guy-wannabes to perpetrate this attack.
IP session hijacking is an attack whereby a user's session is taken over and placed under the
control of the attacker. If the user was in the middle of email, the attacker is looking at the
email, and can then execute any commands he wishes as the attacked user. The attacked user
simply sees his session dropped, and may simply log in again, perhaps not even noticing that the
attacker is still logged in and doing things.

2. Denial-of-Service
DoS (Denial-of-Service) attacks are probably the nastiest and the most difficult to address.
They are the nastiest because they are very easy to launch, difficult (sometimes impossible) to
track, and it isn't easy to refuse the requests of the attacker without also refusing legitimate
requests for service. The premise of a DoS attack is simple: send more requests to the machine than it can
handle. There are toolkits available in the underground community that make this a simple
matter of running a program and telling it which host to blast with requests. The attacker's
program simply makes a connection on some service port, perhaps forging the packet's header
information that says where the packet came from, and then dropping the connection. If the host
is able to answer 20 requests per second, and the attacker is sending 50 per second, obviously the
host will be unable to service all of the attacker's requests, much less any legitimate requests (hits
on the web site running there, for example). Such attacks were fairly common in late 1996 and
early 1997, but are now becoming less popular. Some things that can be done to reduce the risk
of being stung by a denial of service attack include :
» Not running your visible-to-the-world servers at a level too close to capacity
» Using packet filtering to prevent obviously forged packets from entering into your network
address space. Obviously forged packets would include those that claim to come from your own
hosts, addresses reserved for private networks as defined in RFC 1918 and the loopback network
(127.0.0.0).
» Keeping up-to-date on security-related patches for your hosts' operating systems.

3. Unauthorized Access
Unauthorized access is a very high-level term that can refer to a number of different sorts of
attacks. The goal of these attacks is to access some resource that your machine should not
provide the attacker. For example, a host might be a web server, and should provide anyone with
requested web pages. However, that host should not provide command shell access without
being sure that the person making such a request is someone who should get it, such as a local
administrator.

4. Executing Commands Illicitly


It's obviously undesirable for an unknown and untrusted person to be able to execute commands
on your server machines. There are two main classifications of the severity of this problem:
normal user access, and administrator access. A normal user can do a number of things on a
system (such as read files, mail them to other people, etc.) that an attacker should not be able to
do. This might, then, be all the access that an attacker needs. On the other hand, an attacker
might wish to make configuration changes to a host (perhaps changing its IP address, putting a
start-up script in place to cause the machine to shut down every time it's started, or something
similar). In this case, the attacker will need to gain administrator privileges on the host.

5. Confidentiality Breaches
We need to examine the threat model: what is it that you're trying to protect yourself against?
There is certain information that could be quite damaging if it fell into the hands of a competitor,
an enemy, or the public. In these cases, it's possible that compromise of a normal user's account
on the machine can be enough to cause damage (perhaps in the form of PR, or obtaining
information that can be used against the company, etc.)
While many of the perpetrators of these sorts of break-ins are merely thrill-seekers interested in
nothing more than to see a shell prompt for your computer on their screen, there are those who
are more malicious, as we'll consider next. (Additionally, keep in mind that it's possible that
someone who is normally interested in nothing more than the thrill could be persuaded to do
more: perhaps an unscrupulous competitor is willing to hire such a person to hurt you.)

6. Destructive Behavior
Among the destructive sorts of break-ins and attacks, there are two major categories :
Data Diddling : The data diddler is likely the worst sort, since the fact of a break-in might not be
immediately obvious. Perhaps he's toying with the numbers in your spreadsheets, or changing the
dates in your projections and plans. Maybe he's changing the account numbers for the auto-
deposit of certain paychecks. In any case, rare is the case when you'll come in to work one day,
and simply know that something is wrong. An accounting procedure might turn up a discrepancy
in the books three or four months after the fact. Trying to track the problem down will certainly
be difficult, and once that problem is discovered, how can any of your numbers from that time
period be trusted? How far back do you have to go before you think that your data is safe?
Data Destruction : Some of those who perpetrate attacks are simply twisted jerks who like to
delete things. In these cases, the impact on your computing capability -- and consequently your
business -- can be nothing less than if a fire or other disaster caused your computing equipment
to be completely destroyed.

Network Security Threats Prevention


Encryption Method
The universal technique for providing confidentiality for transmitted data is symmetric
encryption. A symmetric encryption scheme has five components.
a. Plaintext : This is the original message or data that is fed into the algorithm as input.
b. Encryption algorithm : The encryption algorithm performs various substitutions and
transformations on the plaintext.
c. Secret key : The secret key is also input to the encryption algorithm. The exact substitutions
and transformations performed by the algorithm depend on the key.
d. Ciphertext : This is the scrambled message produced as output. It depends on the plaintext
and the secret key. For a given message, two different keys will produce two different
ciphertexts.
e. Decryption algorithm : This is essentially the encryption algorithm run in reverse. It takes the
ciphertext and the secret key and produces the original plaintext.
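As a toy illustration of these five components, the Python sketch below derives a keystream from
the secret key by repeated hashing and XORs it with the text. This is an illustrative
construction only, not a vetted cipher; all names are assumptions:

import hashlib

def keystream(key, n):
    # Expand the secret key into n pseudo-random bytes by chained hashing.
    out, block = b'', key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor_cipher(data, key):
    # Here the encryption and decryption algorithms are the same
    # operation: XOR the text with the key-derived keystream.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b'attack at dawn'                    # the original message
secret_key = b'shared-secret'                    # known to sender and receiver
ciphertext = xor_cipher(plaintext, secret_key)   # the scrambled output
print(xor_cipher(ciphertext, secret_key))        # b'attack at dawn' recovered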

Cryptographic Algorithms
There are several ways of classifying cryptographic algorithms. For the purposes of these notes,
they will be categorized based on the number of keys that are employed for encryption and
decryption, and further defined by their application and use. The three types of algorithms that
will be discussed are :
» Secret Key Cryptography (SKC) : Uses a single key for both encryption and decryption,
» Public Key Cryptography (PKC) : Uses one key for encryption and another for decryption.
» Hash Functions : Uses a mathematical transformation to irreversibly "encrypt" information.

Cryptographic Algorithms

Secret Key Cryptography


With secret key cryptography, a single key is used for both encryption and decryption. As shown
in Figure 33A, the sender uses the key (or some set of rules) to encrypt the plaintext and sends
the ciphertext to the receiver. The receiver applies the same key (or ruleset) to decrypt the
message and recover the plaintext. Because a single key is used for both functions, secret key
cryptography is also called symmetric encryption.
With this form of cryptography, it is obvious that the key must be known to both the sender and
the receiver; that, in fact, is the secret. The biggest difficulty with this approach, of course, is the
distribution of the key.
Secret key cryptography schemes are generally categorized as being either stream ciphers or
block ciphers. Stream ciphers operate on a single bit (byte or computer word) at a time and
implement some form of feedback mechanism so that the key is constantly changing. A block
cipher is so-called because the scheme encrypts one block of data at a time using the same key
on each block. In general, the same plaintext block will always encrypt to the same ciphertext
when using the same key in a block cipher whereas the same plaintext will encrypt to different
ciphertext in a stream cipher.
Stream ciphers come in several flavors but two are worth mentioning here. Self-synchronizing
stream ciphers calculate each bit in the keystream as a function of the previous n bits in the
keystream. It is termed "self-synchronizing" because the decryption process can stay
synchronized with the encryption process merely by knowing how far into the n-bit keystream it
is. One problem is error propagation; a garbled bit in transmission will result in n garbled bits at
the receiving side. Synchronous stream ciphers generate the keystream in a fashion independent
of the message stream but by using the same keystream generation function at sender and
receiver. While stream ciphers do not propagate transmission errors, they are, by their nature,
periodic so that the keystream will eventually repeat.
Block ciphers can operate in one of several modes; the following four are the most important:
» Electronic Codebook (ECB) : This mode is the simplest, most obvious application: the secret
key is used to encrypt the plaintext block to form a ciphertext block. Two identical plaintext
blocks, then, will always generate the same ciphertext block. Although this is the most common
mode of block ciphers, it is susceptible to a variety of brute-force attacks.
» Cipher Block Chaining (CBC) : This mode adds a feedback mechanism to the encryption
scheme. In CBC, the plaintext is exclusively-ORed (XORed) with the previous ciphertext block
prior to encryption. In this mode, two identical blocks of plaintext never encrypt to the same
ciphertext.
» Cipher Feedback (CFB) : This mode is a block cipher implementation as a self-synchronizing
stream cipher. CFB mode allows data to be encrypted in units smaller than the block size, which
might be useful in some applications such as encrypting interactive terminal input. If we were
using 1-byte CFB mode, for example, each incoming character is placed into a shift register the
same size as the block, encrypted, and the block transmitted. At the receiving side, the ciphertext
is decrypted and the extra bits in the block (i.e., everything above and beyond the one byte) are
discarded.
» Output Feedback (OFB) : This mode is a block cipher implementation conceptually similar to
a synchronous stream cipher. OFB prevents the same plaintext block from generating the same
ciphertext block by using an internal feedback mechanism that is independent of both the
plaintext and ciphertext bitstreams.
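The difference between ECB and CBC can be seen with a deliberately trivial "block cipher" (a
plain XOR with the key, for illustration only): identical plaintext blocks encrypt identically
under ECB but not under CBC. The block values and names below are assumptions:

def toy_block_encrypt(block, key):
    # Stand-in for a real block cipher -- illustration only.
    return bytes(a ^ b for a, b in zip(block, key))

def ecb(blocks, key):
    return [toy_block_encrypt(b, key) for b in blocks]

def cbc(blocks, key, iv):
    out, prev = [], iv
    for b in blocks:
        mixed = bytes(p ^ q for p, q in zip(b, prev))  # XOR with previous ciphertext
        c = toy_block_encrypt(mixed, key)
        out.append(c)
        prev = c
    return out

blocks = [b'SAMEBLOK', b'SAMEBLOK']   # two identical 8-byte plaintext blocks
key, iv = b'K8ytes!!', b'initvect'
print(ecb(blocks, key))               # the two ciphertext blocks are identical
print(cbc(blocks, key, iv))           # the two ciphertext blocks differ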

Public-Key Cryptography
PKC depends upon the existence of so-called one-way functions, or mathematical functions that
are easy to compute whereas their inverse functions are relatively difficult to compute. In PKC,
one of the keys is designated the public key and may be advertised as widely as the owner wants.
The other key is designated the private key and is never revealed to another party. It is
straightforward to send messages under this scheme. Suppose Shams wants to send Aadil a message.
Shams encrypts some information using Aadil's public key; Aadil decrypts the ciphertext using
his private key. This method can also be used to prove who sent a message: Shams, for example,
can encrypt some plaintext with his private key; when Aadil decrypts it using Shams's public
key, he knows that Shams sent the message, and Shams cannot deny having sent the message
(non-repudiation).
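The arithmetic behind this can be demonstrated with a textbook-sized RSA example in Python (the
tiny primes are purely illustrative; real keys use primes hundreds of digits long):

# Toy RSA key generation with tiny primes.
p, q = 61, 53
n = p * q                       # 3233, part of both keys
phi = (p - 1) * (q - 1)         # 3120
e = 17                          # public exponent, coprime with phi
d = pow(e, -1, phi)             # private exponent (modular inverse; Python 3.8+)

m = 65                          # a message, encoded as a number < n
c = pow(m, e, n)                # encrypt with the public key (e, n)
print(pow(c, d, n))             # decrypt with the private key (d, n) -> 65

s = pow(m, d, n)                # "sign": encrypt with the private key
print(pow(s, e, n))             # anyone can verify with the public key -> 65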

Hash Functions
Hash functions, also called message digests and one-way encryption, are algorithms that, in some
sense, use no key. Instead, a fixed-length hash value is computed from the plaintext in a way
that makes it impossible for either the contents or the length of the plaintext to be recovered.
Hash algorithms are typically used to provide a digital fingerprint of a file's contents, often
used to ensure that the file has not been altered by an intruder or virus. Hash functions are
also commonly employed by many operating systems to encrypt passwords. Hash functions thus
provide a measure of the integrity of a file.
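For example, using Python's standard hashlib module:

import hashlib

msg = b'The quick brown fox jumps over the lazy dog'
print(hashlib.sha256(msg).hexdigest())        # fixed-length digital fingerprint

# Changing even one character yields a completely different digest:
print(hashlib.sha256(msg + b'.').hexdigest())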

Why Three Encryption Techniques


So, why are there so many different types of cryptographic schemes? Why can't we do
everything we need with just one? The answer is that each scheme is optimized for some specific
application(s).

» Hash functions : Hash functions are well-suited for ensuring data integrity because any change
made to the contents of a message will result in the receiver calculating a different hash value
than the one placed in the transmission by the sender. Since it is highly unlikely that two
different messages will yield the same hash value, data integrity is ensured to a high degree of
confidence.

» Secret key cryptography : Secret key cryptography is ideally suited to encrypting messages,
thus providing privacy and confidentiality. The sender can generate a session key on a per-
message basis to encrypt the message; the receiver, of course, needs the same session key to
decrypt the message.

» Public-key cryptography : Public-key (asymmetric) cryptography schemes can also be used for
non-repudiation and user authentication; if the receiver can obtain the session key encrypted
with the sender's private key, then only this sender could have sent the message. Public-key
cryptography could, theoretically, also be used to encrypt messages, although this is rarely
done, because secret-key cryptography operates about 1000 times faster than public-key
cryptography.

The figure below puts all of this together and shows how a hybrid cryptographic scheme combines
all of these functions to form a secure transmission comprising a digital signature and a
digital envelope. In this example, the sender of the message is Shams and the receiver is Bello.
Combination of Encryption Techniques

A digital envelope comprises an encrypted message and an encrypted session key. Shams uses
secret key cryptography to encrypt his message using the session key, which he generates at
random with each session. Shams then encrypts the session key using Bello's public key. The
encrypted message and encrypted session key together form the digital envelope. Upon receipt,
Bello recovers the session secret key using his private key and then decrypts the encrypted
message.
The digital signature is formed in two steps. First, Shams computes the hash value of his
message; next, he encrypts the hash value with his private key. Upon receipt of the digital
signature, Bello recovers the hash value calculated by Shams by decrypting the digital signature
with Shams's public key. Bello can then apply the hash function to Shams's original message,
which he has already decrypted. If the resultant hash value is not the same as the value supplied
by Shams, then Bello knows that the message has been altered; if the hash values are the same,
Bello should believe that the message he received is identical to the one that Shams sent.
This scheme also provides nonrepudiation since it proves that Shams sent the message; if the
hash value recovered by Bello using Shams's public key proves that the message has not been
altered, then only Shams could have created the digital signature. Bello also has proof that he is
the intended receiver; if he can correctly decrypt the message, then he must have correctly
decrypted the session key, meaning that he holds the correct private key.
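The whole exchange can be sketched end to end in Python by combining the toy pieces shown
earlier (toy RSA for the key pairs, an XOR cipher for the session key, SHA-256 as the hash).
Every name and number here is an illustrative assumption, not a real protocol:

import hashlib, os

def rsa_keys(p, q, e=17):
    # Toy RSA key pair: (public, private). Tiny primes; Python 3.8+ for pow(e, -1, phi).
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)

def xor_bytes(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shams_pub, shams_priv = rsa_keys(61, 53)
bello_pub, bello_priv = rsa_keys(89, 97)

message = b'meet at noon'
session_key = os.urandom(8)              # fresh random key for this session

# Digital envelope: the message under the session key, and the session
# key under Bello's PUBLIC key (byte by byte, for illustration).
envelope = (xor_bytes(message, session_key),
            [pow(b, *bello_pub) for b in session_key])

# Digital signature: hash of the message, encrypted with Shams's PRIVATE key.
digest = hashlib.sha256(message).digest()
signature = [pow(b, *shams_priv) for b in digest]

# Bello opens the envelope with his private key...
key = bytes(pow(c, *bello_priv) for c in envelope[1])
recovered = xor_bytes(envelope[0], key)
# ...then verifies the signature with Shams's public key.
ok = bytes(pow(s, *shams_pub) for s in signature) == hashlib.sha256(recovered).digest()
print(recovered, ok)                     # b'meet at noon' True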

Firewall
A firewall is simply a group of components that collectively form a barrier between two
networks: a hardware or software system (or a combination of both) that prevents unauthorized
access to or from a network. Firewalls are frequently used to prevent unauthorized Internet
users from accessing private networks connected to the Internet. All data entering or leaving
the intranet passes through the firewall, which examines each packet and blocks those that do
not meet the specified security criteria.

Types of Firewall
Firewalls can be divided into five basic types :
1. Packet filters
2. Stateful Inspection
3. Proxies
4. Dynamic
5. Kernel firewall
The divisions above, however, are not well defined, as most modern firewalls have a mix of
abilities that place them in more than one of the categories listed.

To simplify, experts break the most commonly used firewalls down into three categories :
i. Application firewalls
ii. Network layer firewalls
iii. Proxy firewall

Network Layer Firewalls


Network layer firewalls generally make their decisions based on the source address, destination
address and ports in individual IP packets. A simple router is the traditional network layer
firewall, since it is not able to make particularly complicated decisions about what a packet is
actually talking to or where it actually came from. Modern network layer firewalls have become
increasingly more sophisticated, and now maintain internal information about the state of
connections passing through them at any time.

One important difference about many network layer firewalls is that they route traffic directly
through them, which means in order to use one, you either need to have a validly-assigned IP
address block or a private Internet address block. Network layer firewalls tend to be very fast and
almost transparent to their users.

Application Layer Firewalls


Application layer firewalls are hosts running proxy servers, which permit no traffic directly
between networks and which perform elaborate logging and examination of traffic passing through
them. Since proxy applications are simply software running on the firewall, it is
a good place to do lots of logging and access control. Application layer firewalls can be used as
network address translators, since traffic goes in one side and out the other, after having passed
through an application that effectively masks the origin of the initiating connection. However,
run-of-the-mill network firewalls can't properly defend applications. As Michael Cobb explains,
application-layer firewalls offer Layer 7 security on a more granular level, and may even help
organizations get more out of existing network devices.

In some cases, having an application in the way may impact performance and may make the
firewall less transparent. Early application layer firewalls are not particularly transparent to end-
users and may require some training. However, more modern application layer firewalls are
often totally transparent. Application layer firewalls tend to provide more detailed audit reports
and tend to enforce more conservative security models than network layer firewalls.

The future of firewalls sits somewhere between both network layer firewalls and application
layer firewalls. It is likely that network layer firewalls will become increasingly aware of the
information going through them, and application layer firewalls will become more and more
transparent. The end result will be kind of a fast packet-screening system that logs and checks
data as it passes through.

Proxy Firewalls
Proxy firewalls offer more security than other types of firewalls, but this is at the expense of
speed and functionality, as they can limit which applications your network can support. Why are
they more secure? Unlike stateful firewalls, or application layer firewalls, which allow or block
network packets from passing to and from a protected network, traffic does not flow through a
proxy. Instead, computers establish a connection to the proxy, which serves as an intermediary
and initiates a new network connection on behalf of the request. This prevents direct connections
between systems on either side of the firewall and makes it harder for an attacker to discover
where the network is, because they will never receive packets created directly by their target
system. Proxy firewalls also provide comprehensive, protocol-aware security analysis for the
protocols they support. This allows them to make better security decisions than products that
focus purely on packet header information.
