aruntripathi@kiet.edu
Data Traffic
The main focus of congestion control and quality of service is
data traffic.
In congestion control we try to avoid traffic congestion.
In quality of service, we try to create an appropriate
environment for the traffic. So, before talking about
congestion control and quality of service, we discuss the data
traffic itself.
Traffic descriptors
Three traffic profiles
Congestion
Congestion in a network may occur if the load on the network
is higher than it can handle.
The number of packets sent to the network is greater than the
capacity of the network.
Congestion control refers to the mechanisms and techniques
to control the congestion and keep the load below the
capacity.
Queues in a router
Congestion Control
Congestion control refers to techniques and mechanisms that
can either prevent congestion, before it happens, or remove
congestion, after it has happened.
In general, we can divide congestion control mechanisms into
two broad categories:
Open-loop congestion control (prevention) and
Closed-loop congestion control (removal).
Congestion control categories
Retransmission Policy
If the sender feels that a sent packet is lost or corrupted, the
packet needs to be retransmitted. Retransmission in general
may increase congestion in the network. However, a good
retransmission policy can prevent congestion. The
retransmission policy and the retransmission timers must be
designed to optimize efficiency and at the same time prevent
congestion.
Window Policy
The type of window at the sender may also affect congestion.
The Selective Repeat window is better than the Go-Back-N
window for congestion control. In the Go-Back-N window,
when the timer for a packet times out, several packets may be
resent, although some may have arrived safe and sound at the
receiver. This duplication may make the congestion worse. The
Selective Repeat window, on the other hand, tries to send the
specific packets that have been lost or corrupted.
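The difference between the two window policies can be sketched as follows. This is a minimal illustration, not a full ARQ implementation: it only counts which outstanding packets each policy would resend after a single loss, with an illustrative window of sequence numbers.

```python
# Sketch: which packets are resent after a single loss, under each policy.
# The window of outstanding (unacknowledged) sequence numbers is illustrative.

def go_back_n_resend(window, lost_seq):
    """Go-Back-N resends the lost packet and every packet after it."""
    return [seq for seq in window if seq >= lost_seq]

def selective_repeat_resend(window, lost_seq):
    """Selective Repeat resends only the specific lost packet."""
    return [lost_seq]

window = [4, 5, 6, 7]                      # packets sent but not yet acknowledged
print(go_back_n_resend(window, 5))         # [5, 6, 7] -- 6 and 7 are duplicated
print(selective_repeat_resend(window, 5))  # [5]
```

The extra copies of packets 6 and 7 are exactly the duplication that can make congestion worse.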
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also
affect congestion. If the receiver does not acknowledge every
packet it receives, it may slow down the sender and help
prevent congestion. Several approaches are used in this case.
A receiver may send an acknowledgment only if it has a
packet to be sent or a special timer expires. A receiver may
decide to acknowledge only N packets at a time. We should note
that acknowledgments are also part of the load in a
network.
Sending fewer acknowledgments means imposing less load on
the network.
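The "acknowledge only N packets at a time" idea can be sketched with a cumulative ACK: one acknowledgment covers everything received so far. The class name and the batch size of 4 are illustrative; a real receiver would also ACK on a timer.

```python
# Sketch of a receiver that sends one cumulative ACK per N packets,
# reducing the acknowledgment load on the network. N = 4 is illustrative.

class DelayedAckReceiver:
    def __init__(self, ack_every=4):
        self.ack_every = ack_every
        self.unacked = 0       # packets received since the last ACK
        self.acks_sent = []    # sequence numbers acknowledged

    def receive(self, seq):
        self.unacked += 1
        if self.unacked >= self.ack_every:
            self.acks_sent.append(seq)  # one ACK covers all packets up to seq
            self.unacked = 0

recv = DelayedAckReceiver(ack_every=4)
for seq in range(1, 9):    # packets 1..8 arrive in order
    recv.receive(seq)
print(recv.acks_sent)      # [4, 8] -- two ACKs instead of eight
```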
Discarding Policy
A good discarding policy by the routers may prevent congestion
and at the same time may not harm the integrity of the
transmission.
Admission Policy
An admission policy, which is a quality-of-service mechanism,
can also prevent congestion in virtual-circuit networks.
Switches along a flow's path first check the flow's resource
requirements before admitting it to the network.
A router can deny establishing a virtual circuit connection if
there is congestion in the network or if there is a possibility of
future congestion.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to improve
congestion after it happens. Several mechanisms have been
used by different protocols. We describe a few of them here.
Backpressure
The technique of backpressure refers to a congestion control
mechanism in which a congested node stops receiving data
from the immediate upstream node or nodes. This may cause
the upstream node or nodes to become congested, and they, in
turn, reject data from their upstream node or nodes, and so on.
Backpressure is a node-to-node congestion control that starts
with a node and propagates, in the opposite direction of data
flow, to the source.
The backpressure technique can be applied only to virtual
circuit networks, in which each node knows the upstream node
from which a flow of data is coming.
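The propagation of stop signals toward the source can be sketched as a walk along the virtual-circuit path. The queue lengths and threshold are illustrative; the point is that the signal travels in the opposite direction of the data flow until an upstream node can absorb the backlog.

```python
# Sketch of backpressure on a virtual-circuit path. Node 0 is the source;
# higher indices are downstream. Queue lengths and threshold are illustrative.

def propagate_backpressure(queue_len, threshold, congested_node):
    """Walk upstream from the congested node toward the source, issuing
    stop signals; a stopped node's queue backs up, so the pressure keeps
    propagating until an upstream node has room to absorb the backlog."""
    signals = []
    node = congested_node
    while node > 0:
        upstream = node - 1
        signals.append((node, upstream))   # node tells upstream to stop
        if queue_len[upstream] < threshold:
            break                          # upstream can absorb the backlog
        node = upstream
    return signals

# Queue lengths at nodes source..destination; node 3 is congested.
print(propagate_backpressure([1, 7, 9, 12], threshold=5, congested_node=3))
# [(3, 2), (2, 1), (1, 0)] -- the warning reaches the source
```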
Backpressure method for alleviating congestion
Choke Packet
A choke packet is a packet sent by a node to the source to
inform it of congestion.
In backpressure, the warning is from one node to its upstream
node, although the warning may eventually reach the source
station.
In the choke packet method, the warning is from the router,
which has encountered congestion, to the source station
directly.
The intermediate nodes through which the packet has traveled
are not warned.
Choke packet
Implicit Signaling
In implicit signaling, there is no communication between the
congested node or nodes and the source.
The source guesses that there is congestion somewhere in the
network from other symptoms.
For example, when a source sends several packets and there is
no acknowledgment for a while, one assumption is that the
network is congested. The delay in receiving an
acknowledgment is interpreted as congestion in the network;
the source should slow down.
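The "slow down on a missing acknowledgment" reaction can be sketched as a rate adjustment: grow gently when ACKs arrive, halve the rate on a timeout. This is the multiplicative-decrease idea TCP also uses; the function and its step values are illustrative.

```python
# Sketch of implicit signaling: a timeout is read as a congestion
# symptom, so the sender halves its rate. All values are illustrative.

def adjust_rate(rate, ack_received, min_rate=1, step=1):
    if ack_received:
        return rate + step           # no symptom: probe for more bandwidth
    return max(min_rate, rate // 2)  # timeout: assume congestion, slow down

rate = 16
rate = adjust_rate(rate, ack_received=False)  # timeout: 16 -> 8
rate = adjust_rate(rate, ack_received=True)   # ACK arrives: 8 -> 9
print(rate)  # 9
```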
Explicit Signaling
The node that experiences congestion can explicitly send a
signal to the source or destination.
The explicit signaling method, however, is different from the
choke packet method. In the choke packet method, a separate
packet is used for this purpose; in the explicit signaling method,
the signal is included in the packets that carry data.
Explicit signaling, used in Frame Relay congestion control, can
occur in either the forward or the backward direction.
Backward Signaling
A bit can be set in a packet moving in the direction opposite to
the congestion.
This bit can warn the source that there is congestion and that it
needs to slow down to avoid the discarding of packets.
Forward Signaling
A bit can be set in a packet moving in the direction of the
congestion.
This bit can warn the destination that there is congestion.
The receiver in this case can use policies, such as slowing down
the acknowledgments, to alleviate the congestion.
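The two signaling directions can be sketched as bit flags that a congested node sets in frames it forwards, loosely modeled on Frame Relay's FECN/BECN bits. The flag values and the function here are illustrative, not a real frame format.

```python
# Sketch of explicit signaling bits in a frame header.
# Flag positions are illustrative (inspired by Frame Relay FECN/BECN).

FECN = 0b01   # forward: warns the destination of congestion ahead
BECN = 0b10   # backward: warns the source to slow down

def mark(frame_flags, congested, direction):
    """A congested node sets the appropriate warning bit in passing frames."""
    if not congested:
        return frame_flags
    return frame_flags | (FECN if direction == "forward" else BECN)

flags = 0
flags = mark(flags, congested=True, direction="forward")
flags = mark(flags, congested=True, direction="backward")
print(bin(flags))   # 0b11 -- both warnings set
```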
Quality of Service
Quality of Service depends on Flow characteristics.
Flow characteristics
Reliability
Reliability is a characteristic that a flow needs.
Lack of reliability means losing a packet or acknowledgment,
which entails retransmission.
However, the sensitivity of application programs to reliability
is not the same.
For example, it is more important that electronic mail, file
transfer, and Internet access have reliable transmissions than
telephony or audio conferencing.
Delay
Source-to-destination delay is another flow characteristic.
Again, applications can tolerate delay to different degrees.
In this case, telephony, audio conferencing, video
conferencing, and remote log-in need minimum delay, while
delay in file transfer or e-mail is less important.
Jitter
Jitter is defined as the variation in the packet delay. High
jitter means the difference between delays is large; low jitter
means the variation is small.
Jitter is the variation in delay for packets belonging to the
same flow.
For example, if four packets depart at times 0, 1, 2, 3 and
arrive at 20, 21, 22, 23, all have the same delay, 20 units of
time. On the other hand, if the above four packets arrive at
21, 23, 21, and 28, they will have different delays: 21, 22, 19,
and 25.
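The delay arithmetic above can be computed directly; jitter is summarized here as the spread (maximum minus minimum) of the per-packet delays, one simple way to quantify the variation.

```python
# Per-packet delay = arrival time - departure time, for the example above.

departures = [0, 1, 2, 3]

def delays(arrivals):
    return [a - d for d, a in zip(departures, arrivals)]

same = delays([20, 21, 22, 23])
print(same)                       # [20, 20, 20, 20] -> no jitter
varied = delays([21, 23, 21, 28])
print(varied)                     # [21, 22, 19, 25]
print(max(varied) - min(varied))  # 6 units of delay variation
```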
Bandwidth
Different applications need different bandwidths.
In video conferencing we need to send millions of bits per
second to refresh a color screen while the total number of bits
in an e-mail may not reach even a million.
Techniques to Improve QoS
We briefly discuss four common methods: scheduling, traffic
shaping, admission control, and resource reservation.
Scheduling
Packets from different flows arrive at a switch or router for
processing. A good scheduling technique treats the different
flows in a fair and appropriate manner.
Several scheduling techniques are designed to improve the
quality of service.
We discuss three of them here: FIFO queuing, priority
queuing, and weighted fair queuing.
FIFO queue
In first-in, first-out (FIFO) queuing, packets wait in a buffer
(queue) until the node (router or switch) is ready to process
them.
If the average arrival rate is higher than the average processing
rate, the queue will fill up and new packets will be discarded.
A FIFO queue is familiar to those who have had to wait for a
bus at a bus stop.
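A minimal drop-tail FIFO queue makes the discard behavior concrete: once the buffer is full, newly arriving packets are simply lost. The class and its capacity of 3 are illustrative.

```python
from collections import deque

# Sketch of a drop-tail FIFO queue: packets beyond the buffer capacity
# are discarded, as described above. Capacity is illustrative.

class FifoQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            self.dropped += 1            # queue full: new packet is discarded
        else:
            self.buffer.append(packet)

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None

q = FifoQueue(capacity=3)
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    q.enqueue(pkt)
print(q.dequeue(), q.dropped)   # p1 2 -- served in arrival order, two drops
```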
FIFO queue
Priority queuing
In priority queuing, packets are first assigned to a priority
class.
Each priority class has its own queue.
The packets in the highest-priority queue are processed first.
Packets in the lowest-priority queue are processed last.
Note that the system does not stop serving a queue until it is
empty.
Priority queuing
Priority queuing
A priority queue can provide better QoS than the FIFO queue
because higher priority traffic, such as multimedia, can reach
the destination with less delay.
However, there is a potential drawback. If there is a continuous
flow in a high-priority queue, the packets in the lower-priority
queues will never have a chance to be processed. This is a
condition called starvation.
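Strict priority scheduling can be sketched as follows: the scheduler always serves the highest-priority non-empty queue, which is precisely the rule that lets a busy high-priority class starve the lower ones. Queue contents are illustrative.

```python
from collections import deque

# Priority 0 is the highest; each class has its own queue.
queues = {0: deque(["hi-1", "hi-2"]),
          1: deque(["lo-1"])}

def schedule():
    """Always serve the highest-priority non-empty queue first."""
    served = []
    while any(queues.values()):
        level = min(p for p, q in queues.items() if q)
        served.append(queues[level].popleft())
    return served

order = schedule()
print(order)   # ['hi-1', 'hi-2', 'lo-1'] -- low priority waits for all high
```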
Weighted fair queuing
A better scheduling method is weighted fair queuing.
In this technique, the packets are still assigned to different
classes and admitted to different queues.
The queues, however, are weighted based on the priority of the
queues; higher priority means a higher weight.
The system processes packets in each queue in a round-robin
fashion with the number of packets selected from each queue
based on the corresponding weight.
Weighted fair queuing
Weighted fair queuing
For example, if the weights are 3, 2, and 1, three packets are
processed from the first queue, two from the second queue, and
one from the third queue. If the system does not impose
priority on the classes, all weights can be equal. In this way, we
have fair queuing with priority.
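The weighted round-robin service with weights 3, 2, and 1 can be sketched directly: each pass takes up to `weight` packets from each queue in turn. Per-packet counting (rather than per-byte) keeps the sketch simple.

```python
from collections import deque

# Sketch of weighted round-robin service, matching the example above:
# each pass takes up to `weight` packets from each queue in turn.

def weighted_round_robin(queues, weights):
    order = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    order.append(q.popleft())
    return order

queues = [deque(["a1", "a2", "a3"]),   # weight 3
          deque(["b1", "b2"]),         # weight 2
          deque(["c1"])]               # weight 1
print(weighted_round_robin(queues, [3, 2, 1]))
# ['a1', 'a2', 'a3', 'b1', 'b2', 'c1']
```

Unlike strict priority queuing, every queue is guaranteed a share of service on each round, so no class can starve the others.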