
Computer Networks – Unit IV – Lecture Notes

UNIT – IV
The Network Layer Design Issues – Store and Forward Packet Switching-Services Provided to the
Transport layer- Implementation of Connectionless Service-Implementation of Connection Oriented
Service- Comparison of Virtual Circuit and Datagram Networks, Routing Algorithms-The
Optimality principle-Shortest path, Flooding, Distance vector, Link state, Hierarchical, Congestion
Control algorithms-General principles of congestion control, Congestion prevention policies

4.1 The Network Layer Design Issues


4.1.1 Store and Forward Packet Switching
• Before explaining the network layer in detail, it is important to understand the context in
which network layer protocols operate.
• This context is illustrated in following Fig 1.
• The major components of the network include:
• The ISP's equipment (routers connected by transmission lines), represented inside the shaded
oval.
• The customers' equipment, shown outside the oval.
• Host H1 is directly connected to one of the ISP's routers (Router A), similar to a home
computer connected via a DSL modem.
• Host H2 is part of a LAN (e.g., an office Ethernet) with a customer-owned router (Router F).
• Router F connects to the ISP's equipment via a leased line.
• Router F is depicted outside the oval since it is not owned by the ISP.
• However, for the purposes of this chapter, customer-premises routers are considered part of
the ISP network.
• This is because they run the same algorithms as the ISP's routers, and the focus is on these
algorithms.

Fig. 1. The environment of the network layer protocols


The equipment is used in the following way:
• A host with a packet to send transmits it to the nearest router.
• This transmission occurs either:
o On its own LAN.
o Over a point-to-point link to the ISP.


• The packet is stored at the router until:
o It has fully arrived.
o The link completes processing, including checksum verification.
• The router then forwards the packet to the next router along the path.
• This process continues until the packet reaches the destination host.
• Once the packet reaches the destination host, it is delivered.
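
The store-and-forward behaviour above can be summarised in a short sketch. The Python code below is only an illustration; the router object and its helpers (receive_full_packet, checksum_ok, deliver_locally, forwarding_table, send) are hypothetical names, not part of any real router software.

# Illustrative sketch of store-and-forward packet switching (hypothetical helpers).
def store_and_forward(router):
    packet = router.receive_full_packet()      # wait until the whole packet has arrived
    if not router.checksum_ok(packet):         # verify the checksum before anything else
        return                                 # damaged packets are simply discarded
    if packet.destination == router.address:   # the packet has reached its destination
        router.deliver_locally(packet)
    else:
        next_line = router.forwarding_table[packet.destination]
        router.send(packet, next_line)         # forward to the next router along the path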

4.1.2 Services Provided to the Transport layer


The network layer provides services to the transport layer at the network layer/transport layer
interface. An important question is precisely what kind of services the network layer provides to the
transport layer. The services need to be carefully designed with the following goals in mind:
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the routers
present.
3. The network addresses made available to the transport layer should use a uniform numbering
plan, even across LANs and WANs.
Designers have flexibility in defining the network layer services for the transport layer. This has led
to an ongoing debate between connection-oriented and connectionless service models. The core issue
is whether the network should establish a dedicated connection before communication or simply
forward packets independently.
Connectionless Approach (Internet Community)
• The idea is that network should focus only on forwarding packets, leaving reliability to the
end systems (hosts).
• The network is inherently unreliable, no matter how it is designed.
• Hosts must handle error detection, correction, and flow control on their own.
• The service model provides a connectionless (datagram-based) service.
• Only two basic operations:
o SEND PACKET – Transmit a packet to the network.
o RECEIVE PACKET – Accept a packet at the destination.
• No built-in mechanisms for packet ordering or flow control in the network layer.
• Since hosts already perform reliability functions, adding them at the network level is
redundant. This principle influenced the design of the Internet (Saltzer et al., 1984).
• Each packet is independent of previous ones. Each packet must carry the full destination
address, as no pre-established connection exists. This approach maximizes network flexibility
and scalability.
Connection-Oriented Approach (Telephone Companies)
• The idea is that the network should establish a reliable connection before communication
begins. 100+ years of experience with the telephone system shows that reliability is best
handled in the network. Quality of Service (QoS) is the primary concern, especially for real-
time traffic.


• Requires setting up a connection before data transmission begins.


• Ensures guaranteed delivery, packet ordering, and flow control at the network level.
• Ideal for applications like voice and video, where consistent delivery is critical.
• Connection-oriented models dominated early data networks, such as X.25 (1970s) and Frame
Relay (1980s).
• The ARPANET and early Internet favoured connectionless networking, leading to the
dominance of IP.
• The Internet’s success proved that a connectionless approach could scale globally.
• ATM (Asynchronous Transfer Mode) was a connection-oriented technology developed to
replace IP in the 1980s. However, IP prevailed, while ATM became limited to niche
applications.
• The Internet is evolving to include connection-oriented features to enhance QoS.
• Examples of connection-oriented technologies in today’s networks:
o MPLS (MultiProtocol Label Switching) – Used for efficient packet forwarding and
traffic engineering.
o VLANs (Virtual LANs) – Improve network segmentation and efficiency.
• These technologies show a hybrid approach, where connectionless IP networks incorporate
connection-oriented features for better performance.

4.1.3 Implementation of Connectionless Service


• Two different organizations are possible, depending on the type of service offered.
• If connectionless service is offered, packets are injected into the subnet individually and
routed independently of each other. No advance setup is needed. The packets are frequently
called datagrams and the subnet is called a datagram subnet.
• If connection-oriented service is used, a path from the source router to the destination router
must be established before any data packets can be sent. This connection is called a VC
(Virtual Circuit) and the subnet is called a virtual circuit subnet.
• Let us now see how a datagram subnet works.
o Suppose that the process P1 in the figure has a long message for P2.
o It hands the message to the transport layer with instructions to deliver it to process P2
on the host H2.
o The transport layer runs on H1, typically within the operating system.
o It adds a transport header to the beginning of the message and then passes it to the
network layer, which is likely just another process within the operating system.
• Let us assume that the message is four times longer than the maximum packet size, so the
network layer has to break it into four packets, 1, 2, 3, and 4, and send each of them in turn to
router A using some point-to-point protocol. At this point the carrier takes over.
• Every router has an internal table telling it where to send packets for each possible
destination. Each table entry is a pair consisting of a destination and the outgoing line to use
for that destination. Only directly connected lines can be used.


o For example, in Fig. 2 A has only two outgoing lines to B and C so every incoming
packet must be sent to one of these routers, even if the ultimate destination is some
other router. A’s initial routing table is shown in the figure under the label “initially”.
• As they arrived at A, packets 1, 2 and 3 were stored briefly. Then each was forwarded to C
according to A’s table. Packet 1 was then forwarded to E and then to F. When it got to F, it
was encapsulated in a data link layer frame and sent to H2 over the LAN. Packets 2 and 3
follow the same route.
• Something different happened to packet 4. When it got to A it was sent to router B, even
though it is also destined for F. For some reason, A decided to send packet 4 via a different
route. Perhaps it learned of a traffic jam somewhere along the ACE path and updated its routing table
as shown in the figure under the label “later”.
• The algorithm that manages the tables and makes the routing decisions is called the routing
algorithm.
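
The per-router table described above can be pictured as a simple mapping from destination to outgoing line. The Python sketch below models only router A's entry for destination F from Fig. 2; the dictionary representation is just an illustration, and the other entries are omitted.

# A minimal model of router A's forwarding table from Fig. 2.
table_initially = {"F": "C"}     # "initially": traffic for F leaves on the line to C
table_later     = {"F": "B"}     # "later": A has rerouted traffic for F via B

def forward(destination, table):
    # Look up which directly connected line to use for this destination.
    return table[destination]

assert forward("F", table_initially) == "C"   # packets 1, 2 and 3 follow A-C-E-F
assert forward("F", table_later) == "B"       # packet 4 is sent via B instead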

Fig. 2. Routing within a datagram network

4.1.4 Implementation of Connection Oriented Service


• For connection-oriented service, we need a virtual-circuit subnet.
• The idea behind virtual circuits is to avoid having to choose a new route for every packet sent.
Instead, when a connection is established, a route from the source machine to the destination
machine is chosen as part of the connection setup and stored in tables inside the routers.
• This route is used for all traffic flowing over the connection, exactly the same way that the
telephone system works.
• When the connection is released, the virtual circuit is also terminated.


• With connection-oriented service, each packet carries an identifier telling which virtual circuit
it belongs to.
• For example, consider the following Fig. 3.
o Here host H1 has established connection 1 with host H2. It is remembered as the first
entry in each of the routing tables.
o The first line of A’s table says that if a packet bearing connection identifier 1 comes in
from H1, it is to be sent to router C and given connection identifier 1.
o Similarly, the first entry at C routes the packet to E, also with connection identifier 1.
o Now let us consider what happens if H3 also wants to establish a connection to H2.
o It chooses connection identifier 1 (because it is initiating the connection and this is its
only connection) and tells the subnet to establish the virtual circuit. This leads to the
second row in the tables.
o But now we have a conflict: although A can easily distinguish connection 1 packets from
H1 from connection 1 packets from H3, C cannot.
o For this reason, A assigns a different connection identifier to the outgoing traffic for
the second connection.
• Routers need to be able to change connection identifiers in outgoing packets to prevent
conflicts. In some cases, this process is called label switching.
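
Label switching can be modelled as a mapping from (incoming line, incoming VC number) to (outgoing line, outgoing VC number). The Python sketch below reproduces the situation described for routers A and C in Fig. 3; the table layout itself is only an illustration.

# Virtual-circuit tables for routers A and C (Fig. 3), keyed by (incoming line, VC id).
vc_table_A = {
    ("H1", 1): ("C", 1),   # H1's connection 1 keeps identifier 1 towards C
    ("H3", 1): ("C", 2),   # H3 also chose identifier 1, so A relabels it to 2
}
vc_table_C = {
    ("A", 1): ("E", 1),
    ("A", 2): ("E", 2),
}

def switch(table, in_line, in_vc):
    # Relabel and forward a packet belonging to a virtual circuit.
    return table[(in_line, in_vc)]

assert switch(vc_table_A, "H3", 1) == ("C", 2)   # no clash is visible at C
assert switch(vc_table_C, "A", 2) == ("E", 2)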

Fig. 3. Routing within a virtual-circuit network


4.1.5 Comparison of Virtual Circuit and Datagram Networks
• Circuit setup: Datagram subnet – not needed. Virtual-circuit subnet – required.
• Addressing: Datagram subnet – each packet contains the full source and destination address. Virtual-circuit subnet – each packet contains a short VC number.
• State information: Datagram subnet – routers do not hold state information about connections. Virtual-circuit subnet – each VC requires router table space per connection.
• Routing: Datagram subnet – each packet is routed independently. Virtual-circuit subnet – the route is chosen when the VC is set up; all packets follow it.
• Effect of router failures: Datagram subnet – none, except for packets lost during the crash. Virtual-circuit subnet – all VCs that passed through the failed router are terminated.
• Quality of service: Datagram subnet – difficult. Virtual-circuit subnet – easy if enough resources can be allocated in advance for each VC.
• Congestion control: Datagram subnet – difficult. Virtual-circuit subnet – easy if enough resources can be allocated in advance for each VC.

4.2 Routing Algorithms


The main function of the network layer is routing packets from the source machine to the
destination machine. In most networks, packets will require multiple hops to make the journey. The
only notable exception is broadcast networks, but even here routing is an issue if the source and
destination are not on the same network segment. The algorithms that choose the routes and the data
structures that they use are a major area of network layer design.
The routing algorithm in the network layer decides which output line an incoming packet
should take. In datagram-based networks, this decision is made for every packet since the best route
may change. In virtual circuit networks, routing is decided only when a new circuit is set up, and all
packets follow the same path. This is called session routing because the route stays the same for the
entire session, like when using a VPN.
Routing and forwarding are two different tasks. Routing is the process of deciding which
routes to use, while forwarding happens when a packet arrives and needs to be sent to the next
destination. A router has two main processes: one forwards packets by looking up the routing table,
and the other updates the routing table using a routing algorithm.
A good routing algorithm should have important qualities like correctness, simplicity,
robustness, stability, fairness, and efficiency. While correctness and simplicity are straightforward,
robustness is also crucial. Networks run for years without stopping, but during this time, hardware
and software failures will occur. Routers, hosts, and connections may fail, and the network structure
will change. The routing algorithm must handle these changes smoothly without disrupting ongoing
tasks. Otherwise, the network would have to restart every time a router failed, causing major
problems.
Stability is another key goal of a routing algorithm. Some algorithms keep changing routes
endlessly and never settle on fixed paths. A stable algorithm reaches a balanced state and stays there.
It should also converge quickly because communication may be affected until the algorithm finds a
stable route.
Fairness and efficiency may sound obvious (surely no reasonable person would oppose them), but as
it turns out, they are often contradictory goals. As a simple example of this conflict, look at
Fig. 4. Suppose that there is enough traffic between A and A', between B and B', and between C and C'
to saturate the horizontal links. To maximize the total flow, the X to X' traffic should be shut off
altogether. Unfortunately, X and X' may not see it that way. Evidently, some compromise between
global efficiency and fairness to individual connections is needed.


Fig. 4. Network with a conflict between fairness and efficiency


Before balancing fairness and efficiency, we need to decide what to optimize. One goal is
minimizing packet delay for faster data transfer, while another is maximizing total network
throughput. However, these goals conflict because a network running at full capacity leads to long
delays. A common compromise is reducing the number of hops a packet takes, which improves delay
and saves bandwidth, ultimately boosting overall network performance.
Routing algorithms are divided into two main types: nonadaptive and adaptive. Nonadaptive
(or static) algorithms do not consider current network conditions. Instead, routes are precomputed and
set before the network starts. Since static routing does not adjust to failures, it works best when the
best route is always clear. For example, a router may always send packets to a specific next router, no
matter the final destination.
Adaptive (or dynamic) routing algorithms adjust their routes based on changes in the network,
such as topology or traffic. These algorithms differ in how they gather information (locally, from
nearby routers, or from all routers), when they update routes (only when the topology changes or at
regular intervals), and what factors they use for optimization (such as distance, hop count, or
estimated transit time).
In the next sections, we will explore different routing algorithms. Some focus on sending
packets from a source to a destination, while others handle delivery to multiple or all destinations. All
these algorithms make decisions based on the network’s topology, while traffic-based decisions will
be discussed later.

4.2.1 The Optimality principle


One can make a general statement about optimal routes without regard to network topology or
traffic. This statement is known as the optimality principle (Bellman, 1957).
It states that if router J is on the optimal path from router I to router K, then the optimal path
from J to K also falls along the same route. To see this, call the part of the route from I to J r1 and the
rest of the route r2. If a route better than r2 existed from J to K, it could be concatenated with r1 to
improve the route from I to K, contradicting our statement that r1r2 is optimal.
As a direct consequence of the optimality principle, we can see that the set of optimal routes
from all sources to a given destination form a tree rooted at the destination. Such a tree is called a
sink tree and is illustrated in Fig. 5 (b), where the distance metric is the number of hops. The goal of
all routing algorithms is to discover and use the sink trees for all routers.


Fig. 5. (a) A network (b) A sink tree for router B


A sink tree is not always unique—there can be other trees with the same path lengths. If we
allow multiple possible paths, the tree becomes a Directed Acyclic Graph (DAG), which has no
loops. To keep things simple, we refer to both cases as sink trees. This assumes that paths do not
interfere with each other, meaning a traffic jam on one path won’t affect another.
A sink tree has no loops, ensuring packets reach their destination in a limited number of hops.
However, real networks are more complex—routers and links can fail and recover, leading to
different views of the network. Another challenge is how routers get the information needed to build
their sink tree. Despite these issues, the sink tree and optimality principle serve as benchmarks for
comparing routing algorithms.

4.2.2 Shortest Path Algorithm


We start by looking at a simple method to find the best paths when the entire network is
known. These are the paths a distributed routing algorithm should discover, even if individual routers
don’t have complete network information.
The network is represented as a graph, where:
• Nodes represent routers.
• Edges represent communication links.
To find a route between two routers, the algorithm simply determines the shortest path on this graph.
The shortest path can be measured in different ways:
• Hop count: The number of routers a packet passes through. Using this metric, the paths ABC and
ABE in the figure are equally long.
• Geographic distance: The actual physical distance in kilometres. In this case, ABC is much
longer than ABE if the figure is drawn to scale.


Fig. 6. The first six steps used in computing the shortest path from A to D. The arrows indicate the
working node
Besides hops and physical distance, other metrics can be used to measure the shortest path:
• Delay-based metric: Each link is labelled with the average delay of a test packet, measured
periodically (e.g., hourly).
• Using this method, the shortest path is the fastest one, not necessarily the one with the fewest
hops or shortest physical distance.
In general, edge labels can be based on various factors, such as:
• Distance
• Bandwidth
• Average traffic
• Communication cost
• Measured delay
By adjusting the weighting function, the algorithm can find the "shortest" path based on a single
factor or a combination of these criteria.
Several algorithms exist for finding the shortest path in a network. Dijkstra’s algorithm (1959)
finds the shortest paths from a source node to all destinations.
How It Works:

• Each node is labelled with its distance from the source along the best-known path.
• Distances must be non-negative (e.g., based on bandwidth or delay).
• Initially:
o No paths are known, so all nodes are labelled infinity (∞).
o All labels are tentative.
• As the algorithm runs:
o Labels update when better paths are found.
o Once a label represents the shortest possible path, it becomes permanent and is never
changed again.
We want to find the shortest path from A to D in a weighted, undirected graph, where weights
represent distance.
1. Start with Node A:
o Mark A as permanent (filled-in circle).
o A is now the working node.
2. Update Adjacent Nodes:
o Check all nodes directly connected to A.
o Relabel each with the distance from A.
o Also record which node was used to reach them (to reconstruct the final path later).
3. Handling Multiple Shortest Paths:
o If multiple shortest paths exist, keep track of all possible probe nodes that give the
same shortest distance.
This step-by-step process continues until we determine the shortest path from A to D.
1. Select the Next Working Node:
• Among all tentatively labelled nodes, choose the one with the smallest label.
• Mark it permanent (this node now has the shortest known path).
• This becomes the new working node (Fig. b).
2. Update Neighbouring Nodes:
• For each neighbour of the new working node:
o Compute (current node's label + edge weight).
o If this sum is smaller than the node’s existing label, update it.
3. Repeat the Process:
• After updating all adjacent nodes, scan the entire graph.
• Find the tentative node with the smallest label.
• Mark it permanent and make it the new working node.
4. Continue Until All Nodes Are Processed:
• The algorithm repeats until all nodes have permanent labels, meaning the shortest paths from
the source to all destinations have been found.
• Figure 6 illustrates the first six steps of this process.
Dijkstra’s algorithm works by making nodes permanent in increasing order of distance. At one
stage, suppose node E has just been made permanent, and we consider whether a different path,
AXYZE, could be shorter than ABE. There are two possibilities: if Z (in AXYZE) is already
permanent, then E was already checked after Z, meaning AXYZE was considered but wasn’t shorter.
If Z is still tentative, it must be farther than E, proving AXYZE is a longer path. In both cases, ABE
remains the shortest, confirming that the algorithm correctly finds the shortest path at each step.
If Z is still tentatively labelled, its distance helps determine if AXYZE is shorter than ABE. If Z
has a distance greater than or equal to E, then AXYZE cannot be shorter. If Z has a smaller distance,
it will be made permanent before E, allowing the algorithm to check E from Z first.
In Fig. 7, the algorithm computes the shortest path by starting at the destination (t) instead of the
source (s). Since paths in an undirected graph work the same in both directions, the starting point
does not affect the result. This approach ensures that each node is labelled with its predecessor rather
than its successor. When the final path is stored, the reversal cancels out, producing the correct order.

Fig. 7. Dijkstra’s algorithm to compute the shortest path through a graph
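
A compact Python version of Dijkstra's algorithm is sketched below. It follows the labelling procedure just described (tentative labels that become permanent in increasing order of distance), but uses a priority queue instead of a linear scan; the example graph at the end is hypothetical, not the one in the figure.

import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> {neighbour: non-negative edge weight}
    # Returns (dist, prev): shortest distance and predecessor for every node.
    dist = {node: float("inf") for node in graph}   # all labels start at infinity
    prev = {node: None for node in graph}
    dist[source] = 0
    heap = [(0, source)]                            # tentative labels, smallest first
    permanent = set()
    while heap:
        d, node = heapq.heappop(heap)               # smallest tentative label...
        if node in permanent:
            continue
        permanent.add(node)                         # ...becomes permanent (working node)
        for neighbour, weight in graph[node].items():
            alt = d + weight                        # current label + edge weight
            if alt < dist[neighbour]:               # better path found: relabel
                dist[neighbour] = alt
                prev[neighbour] = node
                heapq.heappush(heap, (alt, neighbour))
    return dist, prev

# Hypothetical weighted, undirected graph.
graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
dist, prev = dijkstra(graph, "A")
print(dist["D"])   # 4, reached via A-B-C-D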



4.2.3 Flooding
Flooding is a non-adaptive routing technique following this simple method: when a data
packet arrives at a router, it is sent to all the outgoing links except the one it has arrived on. Unlike
other routing methods, flooding does not require complex calculations or a global view of the
network. Each router makes decisions based only on its local knowledge.
For example, let us consider the network in the figure, having six routers that are connected
through transmission lines.

Using flooding technique


• An incoming packet to A will be sent to B, C and D.
• B will send the packet to C and E.
• C will send the packet to B, D and F.
• D will send the packet to C and F.
• E will send the packet to F.
• F will send the packet to C and E.
Flooding can create a massive number of duplicate packets, leading to:
• Network congestion: Since every packet is copied multiple times, the number of packets in
the network grows rapidly.
• Infinite looping: If not controlled, flooding can continue indefinitely, overwhelming the
network.
To prevent infinite packet duplication, routers use techniques to "damp" the flooding process:
a) Hop Counter (Time-to-Live - TTL):
• Each packet has a hop counter in its header.
• The counter decreases by 1 at each hop (router).
• When the counter reaches zero, the packet is discarded.
• The counter should ideally be set to the actual path length from the source to the destination.
• If the sender doesn’t know the exact path length, the worst-case value (network diameter) is
used.
b) Sequence Number Tracking:
• Each packet has a unique sequence number assigned by the source router.
• Each router maintains a list of sequence numbers it has already seen.
• If a packet arrives with a sequence number already in the list, it is discarded instead of being
forwarded.
• This prevents routers from sending the same packet more than once.


• To avoid the list growing indefinitely, routers only keep track of the latest sequence numbers
using a counter k, which represents the highest sequence number seen so far.
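
Both damping techniques can be combined in a few lines, as in the simplified Python simulation below. It is not a real protocol: each router simply decrements a hop counter and keeps a set of (source, sequence) pairs it has already forwarded. The topology matches the six-router example above.

# Simplified flooding with a hop counter (TTL) and duplicate suppression.
def flood(seen, links, packet, at, ttl, came_from=None):
    # seen:  router name -> set of (source, seq) pairs already processed
    # links: router name -> list of neighbouring routers
    if ttl == 0 or packet in seen[at]:
        return                                  # TTL exhausted or duplicate: discard
    seen[at].add(packet)                        # remember this packet
    for neighbour in links[at]:
        if neighbour != came_from:              # every outgoing link except the arrival one
            flood(seen, links, packet, neighbour, ttl - 1, came_from=at)

links = {"A": ["B", "C", "D"], "B": ["A", "C", "E"], "C": ["A", "B", "D", "F"],
         "D": ["A", "C", "F"], "E": ["B", "F"], "F": ["C", "D", "E"]}
seen = {name: set() for name in links}
flood(seen, links, packet=("A", 1), at="A", ttl=len(links))   # worst case: network diameter
print(sorted(r for r in seen if seen[r]))   # every router has received and forwarded the packet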
Types of Flooding
• Uncontrolled flooding − Here, each router unconditionally transmits the incoming data
packets to all its neighbours.
• Controlled flooding − They use some methods to control the transmission of packets to the
neighbouring nodes. The two popular algorithms for controlled flooding are Sequence
Number Controlled Flooding (SNCF) and Reverse Path Forwarding (RPF).
• Selective flooding − Here, the routers transmit the incoming packets only along those paths
that are heading approximately in the right direction, instead of along every available path.
Advantages of Flooding
• It is very simple to set up and implement, since a router needs to know only its neighbours.
• It is extremely robust. Even if a large number of routers malfunction, the packets can still
find a way to reach the destination.
• All nodes that are directly or indirectly connected are visited, so no node is left out. This
is a key requirement for broadcast messages.
• Flooding always finds the shortest path, since every possible path is tried in parallel.
Limitations of Flooding
• Flooding tends to create an infinite number of duplicate data packets, unless some measures
are adopted to damp packet generation.
• It is wasteful if a single destination needs the packet, since it delivers the data packet to all
nodes irrespective of the destination.
• The network may be clogged with unwanted and duplicate data packets. This may hamper
delivery of other data packets.

4.2.4 Distance Vector Routing


Computer networks use dynamic routing algorithms, which are more complex than flooding but more
efficient. They find the shortest paths based on the current network structure. The two most popular
types are:
1. Distance Vector Routing – Covered in this section.
2. Link State Routing – Covered in the next section.
A distance vector routing algorithm operates by having each router maintain a table (i.e., a vector)
giving the best-known distance to each destination and which link to use to get there. These tables are
updated by exchanging information with the neighbors. Eventually, every router knows the best link
to reach each destination.
The distance vector routing algorithm is also known as the distributed Bellman-Ford algorithm,
named after its developers (Bellman, 1957; Ford & Fulkerson, 1962).
• It was the first routing algorithm used in ARPANET.
• It was later used in the Internet under the name RIP (Routing Information Protocol).


In distance vector routing, each router keeps a routing table with an entry for every router in the
network. Each entry includes:
1. The best outgoing link to reach the destination.
2. The estimated distance to the destination (measured in hops or another metric).
This helps routers determine the shortest paths efficiently.
Each router knows the distance to its neighbouring routers:
• If the metric is hops, the distance is simply one hop.
• If the metric is propagation delay, the router can measure it using ECHO packets, which the
receiver timestamps and returns immediately.
How Distance Vector Routing Updates Work (Using Delay as a Metric)
1. Routers Share Delay Information
o Every T milliseconds, each router sends its estimated delay to all destinations to its
neighbours.
o It also receives similar delay estimates from its neighbours.
2. How a Router Updates Its Routing Table
o Suppose router J receives a table from neighbour X.
o If X's delay to router i is Xi and J’s delay to X is m, then J can reach i via X in Xi+m
milliseconds.
o J repeats this calculation for all its neighbours and picks the best (shortest) delay.
o The old routing table is not used in this calculation—only fresh data from neighbours.
3. Example Calculation (Router J Choosing Best Path to G)
o J knows its delay to:
▪ A = 8 ms
▪ I = 10 ms
▪ H = 12 ms
▪ K = 6 ms
o J receives delay estimates from these neighbours:
▪ A → G = 18 ms → J’s total delay via A = 26 ms (8+18)
▪ I → G = 31 ms → J’s total delay via I = 41 ms (10+31)
▪ H → G = 6 ms → J’s total delay via H = 18 ms (12+6)
▪ K → G = 31 ms → J’s total delay via K = 37 ms (6+31)
4. Updating the Routing Table
o The shortest delay to G is 18 ms via H, so J updates its table with:
▪ Destination: G
▪ Delay: 18 ms
▪ Next Hop: H
5. Final Step
o The same process is repeated for all destinations, and the updated routing table is
created.
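
The table update J performs can be written down directly from the numbers in the example. The short Python sketch below handles only destination G; a real router would repeat the calculation for every destination and refresh the whole table.

# Router J recomputes its entry for destination G from fresh neighbour reports (delays in ms).
delay_to_neighbour   = {"A": 8,  "I": 10, "H": 12, "K": 6}    # J's measured delays to its neighbours
neighbour_delay_to_G = {"A": 18, "I": 31, "H": 6,  "K": 31}   # delays to G reported by the neighbours

# For each neighbour X: total delay via X = delay(J -> X) + delay reported by X.
candidates = {x: delay_to_neighbour[x] + neighbour_delay_to_G[x] for x in delay_to_neighbour}
# candidates == {"A": 26, "I": 41, "H": 18, "K": 37}

next_hop = min(candidates, key=candidates.get)
print(next_hop, candidates[next_hop])   # H 18 -> new table entry: destination G, delay 18 ms, via H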


Fig. 8. (a) A network (b) Input from A, I, H, K, and the new routing table for J
The Count-to-Infinity Problem
Convergence is when routers settle on the best paths in a network. Distance vector routing
helps routers find the shortest paths but can be slow to update. It reacts quickly to good news but
slowly to bad news. For example, if a router has a long route to X and its neighbour A suddenly
reports a shorter delay, the router immediately switches to A’s route in just one update cycle.
To see how fast good news propagates, consider the five-node (linear) network of Fig. 9,
where the delay metric is the number of hops. Suppose A is down initially and all the other routers
know this. In other words, they have all recorded the delay to A as infinity.

Fig. 9. The count-to-infinity problem


When router A comes back online, other routers learn about it through vector exchanges. To simplify,
imagine a gong that signals all routers to exchange updates at the same time.


• In the first exchange, B learns that A is reachable with zero delay and updates its table to
show A is one hop away. Other routers still think A is down.
• In the next exchange, C sees that B has a 1-hop path to A, so it updates its table to show A is 2
hops away.
• D and E will update in later exchanges.
The good news spreads at one hop per exchange, so in a network with N hops, all routers will learn
about A within N exchanges.
If all routers are up, their distances to A are initially correct (B = 1 hop, C = 2 hops, etc.). But if A
goes down or the link between A and B is cut, problems arise.
1. First Exchange:
o B does not hear from A, but C claims to have a path of length 2 to A.
o B trusts C, unaware that C’s path actually goes through B itself.
o So, B updates its table to say A is now 3 hops away.
2. Second Exchange:
o C sees its neighbours now claim A is 3 hops away, so it updates to 4 hops.
o The same process repeats for D and E, increasing their values in each exchange.
3. Why Does Bad News Spread Slowly?
o Each router only increases its distance by one hop per exchange.
o The network slowly counts up to infinity, causing long delays in detecting failures.
o The number of exchanges depends on the value chosen for infinity (it should be the
longest path + 1).
4. The "Count-to-Infinity" Problem
o This slow update process is called count-to-infinity.
o Solutions like split horizon with poisoned reverse (RFC 1058) try to prevent routers
from sending bad updates back to the source.
o However, these methods do not completely solve the problem because routers cannot
tell if they are part of the failing path.
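
The counting-up behaviour is easy to reproduce with a short simulation of the linear network A-B-C-D-E from Fig. 9, using hops to A as the metric. The sketch below assumes synchronous exchanges and represents infinity by a fixed large value, as RIP-style protocols do.

# Count-to-infinity on the linear network A-B-C-D-E (metric: hops to A).
INF = 16                                      # "infinity": longest possible path + 1

dist = {"B": 1, "C": 2, "D": 3, "E": 4}       # correct distances to A while A is up
neighbours = {"B": ["C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}   # A has just gone down

for exchange in range(1, 7):
    new = {}
    for r in dist:
        # Each router believes the best report from its remaining neighbours, plus one hop.
        new[r] = min(INF, min(dist[n] for n in neighbours[r]) + 1)
    dist = new
    print(exchange, dist)
# The distances climb slowly: 3,2,3,4 -> 3,4,3,4 -> 5,4,5,4 -> 5,6,5,6 -> ... until INF is reached.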

4.2.5 Link State Routing


Distance vector routing was used in ARPANET until 1979, when it was replaced by link state
routing. The main issue was its slow convergence after network changes due to the count-to-infinity
problem. As a result, a new algorithm, link state routing, was introduced. Modern versions, IS-IS
(Intermediate System to Intermediate System) and OSPF (Open Shortest Path First), are now widely
used in large networks and the Internet.
The idea behind link state routing is fairly simple and can be stated as five parts. Each router
must do the following things to make it work:
1. Discover its neighbors and learn their network addresses.
2. Set the distance or cost metric to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to and receive packets from all other routers.
5. Compute the shortest path to every other router.


Each router receives the full network topology and then uses Dijkstra's algorithm to determine the
shortest path to all other routers. Next, we will examine these five steps in detail.
Learning about the Neighbors
When a router is booted, its first task is to learn who its neighbors are. It accomplishes this
goal by sending a special HELLO packet on each point-to-point line. The router on the other end is
expected to send back a reply giving its name. These names must be globally unique because when a
distant router later hears that three routers are all connected to F, it is essential that it can determine
whether all three mean the same F.
When two or more routers are connected by a broadcast link (e.g., a switch, ring, or classic
Ethernet), the situation is slightly more complicated. Fig. 10 (a) illustrates a broadcast LAN to which
three routers, A, C, and F, are directly connected. Each of these routers is connected to one or more
additional routers, as shown.

Fig. 10. (a) Nine routers and a broadcast LAN (b) A graph model of (a)
A broadcast LAN connects all attached routers, but treating it as multiple point-to-point links
increases topology size and message overhead. A better approach is to model the LAN as a single
node, N, as shown in Fig. 10 (b). A designated router on the LAN represents N in the routing
protocol. The connection from A to C through the LAN is shown as the path ANC.
Setting Link Costs
The link state routing algorithm assigns a distance or cost to each link to find the shortest
paths. This cost can be set automatically or by the network operator. A common approach is to make
the cost inversely proportional to bandwidth—for example, a 1-Gbps Ethernet link may have a cost of
1, while a 100-Mbps link has a cost of 10, favouring higher-capacity paths.
For wide-area networks, delay can also be considered, making shorter links preferable. This
delay is measured by sending an ECHO packet, which the receiver immediately returns. By
calculating the round-trip time and dividing it by two, the router estimates the delay.
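
Both cost assignments are one-liners, as in the Python sketch below. It is illustrative only: the link object and its echo helpers are hypothetical, and the bandwidth-based cost simply scales against a 1-Gbps reference as in the example above.

import time

def bandwidth_cost(bandwidth_bps, reference_bps=1e9):
    # Cost inversely proportional to bandwidth: 1 Gbps -> 1, 100 Mbps -> 10.
    return reference_bps / bandwidth_bps

def estimate_link_delay(link):
    # Estimate one-way delay by timing an ECHO packet (hypothetical link API).
    start = time.monotonic()
    link.send_echo()                 # the receiver bounces the ECHO back immediately
    link.wait_for_echo_reply()
    round_trip = time.monotonic() - start
    return round_trip / 2            # one-way delay is roughly half the round-trip time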
Building Link State Packets
After gathering the necessary information, each router creates a packet with its data. The packet
includes the sender's identity, a sequence number, an age value (explained later), and a list of
neighbors with their respective costs. Fig. 11 (a) shows an example network with link costs, while
Fig. 11 (b) presents the link state packets for all six routers.


Fig. 11. (a) A network (b) The link state packets for this network
Creating link state packets is straightforward, but deciding when to generate them is more
challenging. One approach is to build them at regular intervals. Another is to create them when a
significant event occurs, such as a link or neighbour going down, coming back up, or experiencing a
major change.
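
Building such a packet from the gathered information is mechanical, as the small Python sketch below shows. The field names mirror the description (sender, sequence number, age, neighbours with costs), but the representation and the example costs are only illustrative, not necessarily those of Fig. 11.

from dataclasses import dataclass, field

@dataclass
class LinkStatePacket:
    sender: str                     # identity of the router that built the packet
    seq: int                        # sequence number, incremented for every new packet
    age: int                        # decremented once per second; the packet dies at zero
    neighbours: dict = field(default_factory=dict)   # neighbour -> link cost

def build_lsp(router_name, seq, neighbour_costs, max_age=60):
    # Construct this router's link state packet from its measured link costs.
    return LinkStatePacket(router_name, seq, max_age, dict(neighbour_costs))

lsp_b = build_lsp("B", seq=1, neighbour_costs={"A": 4, "C": 2, "F": 6})   # illustrative costs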
Distributing the Link State Packets
The most challenging part of the algorithm is distributing link state packets efficiently and reliably.
All routers must receive the same topology updates to avoid routing issues like loops and unreachable
nodes.
The basic method for distribution is flooding. Each packet has a sequence number that increases with
every new update. Routers track the (source router, sequence) pairs they receive. If a packet is new,
they forward it to all neighbours except the sender. Duplicates are discarded, and outdated packets
with lower sequence numbers are rejected.
This flooding scheme has a few potential problems:
1. Sequence Number Wraparound: If sequence numbers wrap around, old packets can be confused
with new ones. Using a 32-bit sequence number makes wraparound unlikely (it would take 137
years at one packet per second).
2. Router Crashes: If a router restarts at sequence 0, its new packets could be rejected as duplicates.
3. Corrupt Sequence Numbers: A single-bit error could cause thousands of packets to be
mistakenly rejected.
The solution is an Age field in each packet, which decreases every second. When it reaches zero, the
packet is discarded. This prevents outdated data from lingering in the network. During flooding,
routers also decrement the Age field to ensure expired packets are removed.
Additional refinements improve reliability:
• Packets are briefly held before transmission in case more updates arrive.
• If a new packet from the same source arrives before transmission, their sequence numbers are
compared. Duplicates are discarded, and the older version is removed.
• All link state packets are acknowledged to guard against transmission errors.
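
The accept-or-discard decision made for each arriving link state packet can be captured in a few lines. The Python sketch below is a simplification that keeps only the (source, sequence, age) checks described above; the send and acknowledgement flags of Fig. 12 are omitted.

# Decide what to do with an arriving link state packet.
# 'seen' maps each source router to the highest sequence number accepted so far.
def handle_lsp(seen, source, seq, age):
    if age <= 0:
        return "discard"        # the packet has expired
    newest = seen.get(source)
    if newest is None or seq > newest:
        seen[source] = seq
        return "flood"          # new information: forward on all links except the arrival one
    if seq == newest:
        return "ack-only"       # duplicate: acknowledge it but do not forward it again
    return "discard"            # obsolete packet with a lower sequence number

seen = {}
assert handle_lsp(seen, "A", 1, age=60) == "flood"
assert handle_lsp(seen, "A", 1, age=59) == "ack-only"
assert handle_lsp(seen, "A", 0, age=58) == "discard"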
The data structure used by router B for the network in Fig. 11(a) is shown in Fig. 12. Each row
represents a recently received link state packet that has not yet been fully processed. The table
includes the packet's origin, sequence number, age, and data.
Additionally, there are send and acknowledgment flags for each of B’s three links (to A, C, and F).
• Send flags indicate that the packet needs to be forwarded on the specified link.
• Acknowledgment flags indicate that an acknowledgment must be sent for the packet.


Fig. 12. The packet buffer for router B in Fig. 11 (a)


In Fig. 12, router B processes incoming link state packets based on their origin and forwarding
requirements:
• Packet from A: Arrives directly at B, so it must be forwarded to C and F and acknowledged to
A.
• Packet from F: Must be sent to A and C and acknowledged to F.
• Packet from E: Arrives twice (via EAB and EFB). It is forwarded only to C but acknowledged
to both A and F.
If a duplicate packet arrives while the original is still in the buffer, the flags must be updated. For
example, if a copy of C’s state arrives from F before the fourth entry is forwarded, the flag bits
change to 100011, meaning it must be acknowledged to F but not sent there.
Computing the routes
Once a router collects all link state packets, it can build the complete network graph, where
each link is represented twice (one for each direction, possibly with different costs). Using Dijkstra's
algorithm, the router computes the shortest paths to all destinations and updates its routing table
accordingly.
Compared to distance vector routing, link state routing requires more memory and processing
power. In a network with n routers and k neighbours per router, storage needs grow proportionally to
kn, and computation time increases even faster. However, link state routing avoids slow convergence
issues, making it practical for many networks.
Two widely used link state protocols are:
• IS-IS (Intermediate System-Intermediate System): Originally designed for DECnet (Digital
Equipment Corporation Network), later adopted for OSI protocols and extended to support IP.
It can handle multiple network layer protocols.
• OSPF (Open Shortest Path First): Developed by IETF (Internet Engineering Task Force), it
shares many features with IS-IS, such as efficient flooding, designated routers, and multiple
path metrics. Unlike IS-IS, OSPF is limited to IP networks.
Routing algorithms depend on routers working correctly. Issues like misconfigured links, faulty
packet forwarding, or memory shortages can disrupt the network. As networks grow to thousands of
nodes, failures become more likely. The challenge is to minimize the impact of inevitable failures, a
topic explored in depth by Perlman (1988).


4.2.6 Hierarchical Routing


As networks grow in size, the router routing tables grow proportionally. Not only is router
memory consumed by ever-increasing tables, but more CPU time is needed to scan them and more
bandwidth is needed to send status reports about them.
When hierarchical routing is used, the routers are divided into what we will call regions, with
each router knowing all the details about how to route packets to destinations within its own region,
but knowing nothing about the internal structure of other regions.
For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group
the regions into clusters, the clusters into zones, the zones into groups, and so on, until we run out of
names for aggregations.
Fig. 13 (a) gives a quantitative example of routing in a two-level hierarchy with five
regions. The full routing table for router 1A has 17 entries, as shown in Fig. 13 (b). When routing is
done hierarchically, as in Fig. 13 (c), there are entries for all the local routers as before, but all other
regions have been condensed into a single router, so all traffic for region 2 goes via the 1B-2A line,
but the rest of the remote traffic goes via the 1C-3B line. Hierarchical routing has reduced the table
from 17 to 7 entries. As the ratio of the number of regions to the number of routers per region grows,
the savings in table space increase.
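
The table reduction can be seen directly by condensing entries, as in the Python sketch below for router 1A of Fig. 13. The region names come from the figure; the two-level lookup and the destination router names in the example calls are only an illustration of the idea.

# Hierarchical condensation of router 1A's table (Fig. 13): 3 local entries + 4 region entries = 7.
local_entries  = {"1A": "-", "1B": "1B", "1C": "1C"}           # routers inside region 1
region_entries = {"2": "1B", "3": "1C", "4": "1C", "5": "1C"}  # one entry per remote region

def next_hop(dest_region, dest_router):
    if dest_region == "1":
        return local_entries[dest_router]        # full detail for the local region
    return region_entries[dest_region]           # internal structure of other regions is invisible

print(next_hop("2", "2B"))   # 1B - all traffic for region 2 leaves via the 1B-2A line
print(next_hop("5", "5C"))   # 1C - the rest of the remote traffic leaves via the 1C-3B line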

Fig. 13 Hierarchical Routing


4.3 Congestion Control algorithms


• When too many packets are present in (a part of) the subnet, performance degrades. This
situation is called congestion.
• Fig. 14 below depicts the symptom. When the number of packets dumped into the subnet by
the hosts is within its carrying capacity, they are all delivered (except for a few that are
afflicted with transmission errors) and the number delivered is proportional to the number
sent.
• However, as traffic increases too far, the routers are no longer able to cope and they begin
losing packets. This tends to make matters worse. At very high traffic, performance collapses
completely and almost no packets are delivered.

Fig. 14 When too much traffic is offered, congestion sets in and performance degrades sharply
• Congestion can be brought on by several factors. If all of a sudden, streams of packets begin
arriving on three or four input lines and all need the same output line, a queue will build up.
• If there is insufficient memory to hold all of them, packets will be lost.
• Slow processors can also cause congestion. If the routers' CPUs are slow at performing the
bookkeeping tasks required of them (queuing buffers, updating tables, etc.), queues can build
up, even though there is excess line capacity. Similarly, low-bandwidth lines can also cause
congestion.

4.3.1 General principles of congestion control


• Many problems in complex systems, such as computer networks, can be viewed from a
control theory point of view. This approach leads to dividing all solutions into two groups:
open loop and closed loop.
• Open loop solutions attempt to solve the problem by good design.
• Tools for doing open-loop control include deciding when to accept new traffic, deciding when
to discard packets and which ones, and making scheduling decisions at various points in the
network.
• Closed loop solutions are based on the concept of a feedback loop.


• This approach has three parts when applied to congestion control:


1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where action can be taken.
3. Adjust system operation to correct the problem.
• A variety of metrics can be used to monitor the subnet for congestion. Chief among these are
the percentage of all packets discarded for lack of buffer space, the average queue lengths, the
number of packets that time out and are retransmitted, the average packet delay, and the
standard deviation of packet delay. In all cases, rising numbers indicate growing congestion.
• The second step in the feedback loop is to transfer the information about the congestion from
the point where it is detected to the point where something can be done about it.
• In all feedback schemes, the hope is that knowledge of congestion will cause the hosts to take
appropriate action to reduce the congestion.
• The presence of congestion means that the load is (temporarily) greater than the resources (in
part of the system) can handle. Two solutions come to mind: increase the resources or
decrease the load.
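
A toy closed-loop controller makes the three steps concrete: monitor a congestion metric, feed it back, and let the sender adjust its load. Everything in the Python sketch below (the loss threshold, the halve-or-increment adjustment, the numbers) is a hypothetical illustration of the feedback idea, not a specific protocol.

# Toy closed-loop congestion control: monitor, feed back, adjust.
def monitor(stats):
    # Step 1: detect congestion, here from the fraction of packets discarded.
    return stats["dropped"] / max(stats["sent"], 1)

def adjust_rate(rate, loss_fraction, threshold=0.01):
    # Step 3: the host reacts to the fed-back metric by changing its sending rate.
    return rate / 2 if loss_fraction > threshold else rate + 1   # cut the load, or probe upward

rate = 50.0                                    # packets per second, hypothetical starting load
for stats in [{"sent": 1000, "dropped": 0},
              {"sent": 1000, "dropped": 40},   # step 2: this measurement is reported back
              {"sent": 1000, "dropped": 2}]:
    rate = adjust_rate(rate, monitor(stats))
print(rate)   # the rate was halved after the congested interval, then crept back up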

4.3.2 Congestion prevention policies


We now look at methods to control congestion in open-loop systems. These systems are designed to
minimize congestion in the first place, rather than letting it happen and reacting after the fact. They
try to achieve their goal by using appropriate policies at various levels. In Fig. 15 we see different
data link, network, and transport policies that can affect congestion (Jain, 1990).

Fig. 15 Policies that affect congestion


Data link layer policies
• The retransmission policy is concerned with how fast a sender times out and what it transmits
upon timeout. A jumpy sender that times out quickly and retransmits all outstanding packets

using go back n will put a heavier load on the system than will a leisurely sender that uses
selective repeat.
• Closely related to this is the buffering policy. If receivers routinely discard all out-of-order
packets, these packets will have to be transmitted again later, creating extra load. With respect
to congestion control, selective repeat is clearly better than go back n.
• Acknowledgement policy also affects congestion. If each packet is acknowledged
immediately, the acknowledgement packets generate extra traffic. However, if
acknowledgements are saved up to piggyback onto reverse traffic, extra timeouts and
retransmissions may result. A tight flow control scheme (e.g., a small window) reduces the
data rate and thus helps fight congestion.
Network layer policies
• The choice between using virtual circuits and using datagrams affects congestion since many
congestion control algorithms work only with virtual-circuit subnets.
• Packet queueing and service policy relates to whether routers have one queue per input line,
one queue per output line, or both. It also relates to the order in which packets are processed
(e.g., round robin or priority based).
• Discard policy is the rule telling which packet is dropped when there is no space.
• A good routing algorithm can help avoid congestion by spreading the traffic over all the lines,
whereas a bad one can send too much traffic over already congested lines.
• Packet lifetime management deals with how long a packet may live before being discarded. If
it is too long, lost packets may clog up the works for a long time, but if it is too short, packets
may sometimes time out before reaching their destination, thus inducing retransmissions.
Transport layer policies
• The same issues occur as in the data link layer, but in addition, determining the timeout
interval is harder because the transit time across the network is less predictable than the transit
time over a wire between two routers. If the timeout interval is too short, extra packets will be
sent unnecessarily. If it is too long, congestion will be reduced but the response time will
suffer whenever a packet is lost.
