Transport Layer
By: Bikash Shrestha
Transport Layer
• The basic function of the transport layer is to accept
data from above, split it up into smaller units if need
be, pass these to the network layer, and ensure that
the pieces all arrive correctly at the other end.
• Furthermore, all this must be done efficiently and in
a way that isolates the upper layers from the
inevitable changes in the hardware technology.
Elements of Transport Protocols
• End-to-end connection
• Addressing (processes are addressed)
• Transport issues between hosts
• Reliable data transfer between hosts
• Establishing connections between hosts
• Flow control and buffering
• Error detection and recovery / crash recovery
• Multiplexing
• Protocol Data Unit (PDU): Segment
Transport Layer
Transport Services
• Services Provided to the Upper Layers
• Transport Service Primitives
• Berkeley Sockets
Services Provided to the Upper Layers
The network, transport, and application layers.
Transport Service Primitives
The primitives for a simple transport service.
The nesting of TPDUs, packets, and frames.
Berkeley Sockets
The socket primitives for TCP.
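The socket primitives above can be sketched in Python, whose socket module exposes the Berkeley socket calls directly. Below is a minimal loopback echo exchange; the OS-chosen ephemeral port and the uppercasing server are illustrative, not part of the primitives themselves.

```python
# A minimal sketch of the Berkeley socket primitives for TCP
# (SOCKET, BIND, LISTEN, ACCEPT, CONNECT, SEND, RECEIVE, CLOSE)
# over the loopback interface.
import socket
import threading

def server(listener):
    conn, _addr = listener.accept()   # ACCEPT: block until a client connects
    data = conn.recv(1024)            # RECEIVE: read bytes from the stream
    conn.sendall(data.upper())        # SEND: echo the data back, uppercased
    conn.close()                      # CLOSE: release the connection
    listener.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET
listener.bind(("127.0.0.1", 0))       # BIND: port 0 asks the OS for a free port
listener.listen(1)                    # LISTEN: willing to accept connections
port = listener.getsockname()[1]

t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))   # CONNECT: triggers the three-way handshake
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
print(reply)  # b'HELLO'
```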
Transport Layer Protocol Services
• TCP implements a reliable data-stream protocol
▫ connection-oriented
• UDP implements an unreliable datagram protocol
▫ connectionless
Connection-oriented Service
• Connection-Oriented means that when devices communicate, they
perform handshaking to set up an end-to-end connection.
• The handshaking process may be as simple as synchronization such
as in the transport layer protocol TCP, or as complex as negotiating
communications parameters as with a modem.
• Connection-Oriented systems can only work in bi-directional
communications environments. To negotiate a connection, both
sides must be able to communicate with each other. This will not
work in a unidirectional environment.
• Since a connection needs to be established, the service also
becomes a reliable one.
• An example of such a connection-oriented service is TCP.
Connection establishment in TCP
• TCP three way handshake for connection Establishment
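The exchange can be sketched as a toy simulation; the initial sequence numbers 100 and 300 below are arbitrary examples, not part of the protocol.

```python
# A toy simulation of the TCP three-way handshake:
# each side picks an initial sequence number (ISN), and each ACK
# acknowledges the other side's ISN + 1 (the SYN consumes one sequence number).

def three_way_handshake(client_isn, server_isn):
    log = []
    # 1. Client -> Server: SYN carrying the client's initial sequence number.
    log.append(("SYN", client_isn, None))
    # 2. Server -> Client: SYN+ACK; acknowledges client_isn + 1.
    log.append(("SYN+ACK", server_isn, client_isn + 1))
    # 3. Client -> Server: ACK; acknowledges server_isn + 1. Connection is open.
    log.append(("ACK", client_isn + 1, server_isn + 1))
    return log

for flags, seq, ack in three_way_handshake(100, 300):
    print(flags, "seq =", seq, "ack =", ack)
```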
Acknowledgement in TCP
• After the three-way handshake, data transfer begins and
every packet will be acknowledged by the receiver.
Connection Tear Down in TCP
TCP Header Format
TCP Header Format
• The Source and Destination port fields are 16 bits each.
• Number of port numbers = 2^16 = 65536 (0 to 65535).
• Ports 0-1023 are well-known port numbers, used by the
system to represent standard services, e.g. port 25 for SMTP,
port 80 for HTTP, 443 for HTTPS, 21 for FTP, 23 for Telnet, etc.
• The Source port and Destination port fields identify the local
end points of the connection.
• The 32 bit Sequence number and 32 bit Acknowledgement
number fields are used for reliable data transfer between
hosts.
TCP Header Format
• The TCP header length tells how many 32-bit words are
contained in the TCP header. This information is needed
because the Options field is of variable length.
• Minimum HLEN is 20 bytes.
• Next comes a 6-bit field that is not used.
• Six one-bit flags: URG, ACK, PSH, RST, SYN and FIN.
• The ACK bit is set to 1 to indicate that the Acknowledgement
number is valid.
TCP Header Format
• SYN, FIN and RST are used for connection establishment and
tear-down.
• The sender can set the PSH flag to instruct TCP to deliver
the data immediately rather than buffering it.
• The sender can set the URG flag to have TCP send the data
immediately and have the receiving TCP signal the receiving
application that there is urgent data to be read.
• Flow control in TCP is handled using a variable-sized sliding
window. The Window size field tells how many bytes may be
sent starting at the byte acknowledged.
• A Checksum is also provided for extra reliability.
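As a sketch, the fixed 20-byte header described above can be unpacked with Python's struct module. The sample segment below is fabricated for illustration (a SYN from ephemeral port 1032 to port 80).

```python
# Unpack the fixed 20-byte TCP header: two 16-bit ports, 32-bit sequence
# and acknowledgement numbers, a 16-bit word holding the 4-bit header
# length plus the flag bits, then window, checksum, and urgent pointer.
import struct

def parse_tcp_header(hdr):
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", hdr[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "hlen_bytes": (offset_flags >> 12) * 4,  # header length in 32-bit words * 4
        "flags": {
            "URG": bool(offset_flags & 0x20),
            "ACK": bool(offset_flags & 0x10),
            "PSH": bool(offset_flags & 0x08),
            "RST": bool(offset_flags & 0x04),
            "SYN": bool(offset_flags & 0x02),
            "FIN": bool(offset_flags & 0x01),
        },
        "window": window,
    }

# Fabricated example: SYN segment, header length 5 words (20 bytes).
sample = struct.pack("!HHIIHHHH", 1032, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
info = parse_tcp_header(sample)
print(info["src_port"], info["dst_port"], info["hlen_bytes"], info["flags"]["SYN"])
# 1032 80 20 True
```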
TCP Reliable
• Error Detection
• Receiver Feedback
• Retransmission
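These three mechanisms can be illustrated with a toy stop-and-wait simulation. Real TCP uses a sliding window and adaptive timers; the loss pattern below is contrived to force exactly one retransmission.

```python
# Toy model of TCP's reliability loop: the receiver verifies a checksum
# (error detection), returns an ACK (receiver feedback), and the sender
# retransmits any segment whose ACK never arrives (retransmission).

def transfer(packets, losses):
    """losses: set of (seq, attempt) pairs lost in transit."""
    delivered, total_sends = [], 0
    for seq, data in enumerate(packets):
        attempt = 1
        while True:
            total_sends += 1
            if (seq, attempt) in losses:
                attempt += 1        # no ACK -> timeout -> retransmit
                continue
            # Receiver verifies the checksum, then ACKs the segment.
            delivered.append(data)
            break
    return delivered, total_sends

# Drop the first copy of segment 1: three segments, four transmissions.
delivered, sends = transfer([b"seg0", b"seg1", b"seg2"], losses={(1, 1)})
print(len(delivered), sends)  # 3 4
```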
Transport Layer Ports
Port numbers are used to keep track of different conversations
that cross the network at the same time. Port numbers identify
which upper-layer service is needed, and are needed when a host
communicates with a server that uses multiple services.
• Both TCP and UDP use port numbers to pass data to the upper layers.
• Port numbers have the following ranges:
▫ 0-255 are used for public applications; 0-1023 are also called well-known
ports, regulated by IANA.
▫ Numbers from 255-1023 are assigned to marketable applications.
▫ 1024 through 49151 are Registered Ports, not regulated.
▫ 49152 through 65535 are Dynamic and/or Private Ports.
L.Krist NVCC
Ports for Clients
Clients and servers both use ports to distinguish which process each
segment is associated with.
Source ports are set by the client and determined dynamically,
usually as a randomly assigned number above 1023.
                                            Source Port   Destination Port
1. Client requests a web page from server      1032              80
2. Server responds to client                     80             1032
Protocols and Port Numbers
[Figure: a Telnet segment descending the stack. Application layer: Telnet.
Transport layer: TCP header with source port 5512 and destination port 23.
Network layer: IP header with protocol number 6 (TCP), source IP address
128.66.12.2, destination IP address 128.66.13.1. Data link layer: Ethernet
frame with preamble, destination address 00 00 1B 09 08 07, source address
00 00 1B 12 23 34, type field, IP header, TCP header, data, and FCS.]
Protocols and Port Numbers
[Figure: a TFTP datagram descending the stack. Application layer: TFTP.
Transport layer: UDP header with source port 5512 and destination port 69.
Network layer: IP header with protocol number 17 (UDP), source IP address
128.66.12.2, destination IP address 128.66.13.1. Data link layer: Ethernet
frame with preamble, destination address 00 00 1B 09 08 07, source address
00 00 1B 12 23 34, type field, IP header, UDP header, data, and FCS.]
Connectionless Service
• A connectionless transport service does not require
connection establishment, and
• since there is no establishment, there is no termination
either.
• This makes it an unreliable service, but much faster than a
connection-oriented one.
• An example of such a connectionless service is UDP.
User Datagram Protocol (UDP)
UDP header Format
• Fixed-length (8-byte) header format.
• The source port is primarily needed when a reply must be sent back to the
source.
• The UDP length field includes the 8-byte header and the data.
• The UDP checksum is optional and stored as 0 if not computed
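A sketch of building and parsing this 8-byte header with Python's struct module; the ports and payload below are illustrative (a made-up TFTP-style exchange).

```python
# The UDP header is four 16-bit fields: source port, destination port,
# length (header + data), and checksum (stored as 0 when not computed).
import struct

def build_udp_header(src_port, dst_port, data, checksum=0):
    length = 8 + len(data)   # the UDP length field covers header and data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = build_udp_header(5512, 69, b"payloadpayload")  # 14 bytes of payload
src, dst, length, csum = struct.unpack("!HHHH", hdr)
print(src, dst, length, csum)  # 5512 69 22 0
```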
Application Layer Protocol
Addressing
TSAPs (Transport Service Access Points) and NSAPs (Network Service Access Points).
Multiplexing and Demultiplexing
• Host receives IP datagrams
▫ Each datagram has a source and destination IP address
▫ Each datagram carries one transport-layer segment
▫ Each segment has a source and destination port number
• Host uses the IP addresses and port numbers to direct the
segment to the appropriate socket

[Figure: TCP/UDP segment format, 32 bits wide — source port # and
dest port #, other header fields, then application data (message).]
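A sketch of the demultiplexing step with a hypothetical socket table: TCP sockets are identified by the full 4-tuple, while UDP sockets are identified by destination port alone. The IP addresses reuse those from the figures above; the socket names are made up.

```python
# Demultiplexing: map the identifiers carried in each arriving segment
# to the socket that should receive it.

# TCP demultiplexing key: (src IP, src port, dst IP, dst port)
tcp_sockets = {
    ("128.66.12.2", 1032, "128.66.13.1", 80): "web server conn A",
    ("128.66.12.9", 2100, "128.66.13.1", 80): "web server conn B",
}
# UDP demultiplexing key: destination port only
udp_sockets = {69: "tftp server socket"}

def demux_tcp(src_ip, src_port, dst_ip, dst_port):
    return tcp_sockets.get((src_ip, src_port, dst_ip, dst_port))

def demux_udp(dst_port):
    return udp_sockets.get(dst_port)

print(demux_tcp("128.66.12.2", 1032, "128.66.13.1", 80))  # web server conn A
print(demux_udp(69))                                      # tftp server socket
```

Note the design consequence: two TCP connections to the same server port go to different sockets, while all UDP datagrams to one port share a socket.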
Flow Control and Buffering
• The Receive Window field is used to control the flow between two hosts.
• A sliding window protocol is used as the flow control mechanism.
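A toy sketch of window-limited sending; all the numbers are illustrative. The sender may never have more unacknowledged bytes in flight than the window the receiver advertised.

```python
# Window-based flow control: at most `rwnd` unacknowledged bytes may be
# outstanding, where rwnd is the receiver's advertised Window size.

def send_with_window(total_bytes, rwnd, ack_per_round):
    """Each round: sender fills the window, receiver ACKs ack_per_round bytes."""
    sent = acked = rounds = 0
    while acked < total_bytes:
        in_flight = sent - acked
        can_send = min(rwnd - in_flight, total_bytes - sent)
        sent += can_send                          # send up to the advertised window
        acked = min(acked + ack_per_round, sent)  # receiver drains and ACKs
        rounds += 1
    return rounds

# 10000 bytes, a 4000-byte window, receiver ACKs 2000 bytes per round:
print(send_with_window(total_bytes=10000, rwnd=4000, ack_per_round=2000))  # 5
```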
Flow Control and Buffering
Flow Control and Buffering
(a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large
circular buffer per connection.
Flow Control and Buffering
• Even if the receiver has agreed to do the buffering, there still remains the
question of the buffer size.
• If most TPDUs are nearly the same size, it is natural to organize the buffers
as a pool of identically-sized buffers, with one TPDU per buffer, as in Fig. (a).
• However, if there is wide variation in TPDU size, from a few characters
typed at a terminal to thousands of characters from file transfers, a pool
of fixed-sized buffers presents problems.
• If the buffer size is chosen equal to the largest possible TPDU, space will
be wasted whenever a short TPDU arrives.
• If the buffer size is chosen less than the maximum TPDU size, multiple
buffers will be needed for long TPDUs, with the attendant complexity.
Flow Control and Buffering
• Another approach to the buffer size problem is to use variable-sized
buffers, as in Fig. (b).
• The advantage here is better memory utilization, at the price of more
complicated buffer management.
• A third possibility is to dedicate a single large circular buffer per
connection, as in Fig. (c).
• This system also makes good use of memory, provided that all connections
are heavily loaded, but is poor if some connections are lightly loaded.
Difference between Flow Control and Congestion Control
• Congestion control has to do with making sure the subnet
is able to carry the offered traffic.
• It is a global issue, involving the behavior of all the hosts,
all the routers, the store-and-forwarding processing within
the routers, and all the other factors that tend to diminish
the carrying capacity of the subnet.
• Flow control, in contrast, relates to the point-to-point
traffic between a given sender and a given receiver.
• Its job is to make sure that a fast sender cannot continually
transmit data faster than the receiver is able to absorb it.
• Flow control frequently involves some direct feedback
from the receiver to the sender to tell the sender how
things are doing at the other end.
Congestion Control Algorithms
• When too many packets are present in (a part of) the
subnet, performance degrades. This situation is
called congestion.
Congestion caused by several factors
• If all of a sudden, streams of packets begin arriving on three
or four input lines and all need the same output line, a queue
will build up. If there is insufficient memory to hold all of
them, packets will be lost.
• Even if routers have an infinite amount of memory, congestion gets
worse, not better, because by the time packets get to the
front of the queue, they have already timed out (repeatedly)
and duplicates have been sent.
• Slow processors can also cause congestion. This problem will
persist until all the components are in balance.
• If a router has no free buffers, it must discard newly arriving
packets. The sender may time out and retransmit again and again,
increasing the traffic.
General Principles of Congestion Control
Two approaches:
• Open loop and
• Closed loop.
Traffic Shaping
• Traffic shaping is about regulating the average rate
(and burstiness) of data transmission.
• In contrast, the sliding window protocols we studied
earlier limit the amount of data in transit at once,
not the rate at which it is sent.
Traffic Shaping
• Leaky Bucket
• Token Bucket
Leaky Bucket
• Imagine a bucket with a small hole in the bottom.
• No matter the rate at which water enters the bucket, the outflow is
at a constant rate when there is any water in the bucket, and zero
when the bucket is empty.
• Also, once the bucket is full, any additional water entering it spills
over the sides and is lost.
• The same idea can be applied to packets, as shown in Fig. (b).
Conceptually, each host is connected to the network by an interface
containing a leaky bucket, that is, a finite internal queue.
• If a packet arrives at the queue when it is full, the packet is
discarded.
• In other words, if one or more processes within the host try to send
a packet when the maximum number is already queued, the new
packet is unceremoniously discarded.
Leaky Bucket
• This arrangement can be built into the hardware
interface or simulated by the host operating system.
• It was first proposed by Turner (1986) and is called the
leaky bucket algorithm.
• The leaky bucket consists of a finite queue. When a
packet arrives, if there is room on the queue it is
appended to the queue; otherwise, it is discarded. At
every clock tick, one packet is transmitted (unless the
queue is empty).
• The leaky bucket algorithm enforces a rigid output
pattern at the average rate, no matter how bursty the
traffic is.
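A minimal sketch of the algorithm as described above: a finite queue, arrivals discarded when the queue is full, and one packet transmitted per clock tick. The burst size and capacity are illustrative.

```python
# Leaky bucket: bursty input, rigid constant-rate output.
from collections import deque

def leaky_bucket(arrivals, capacity):
    """arrivals[t] = number of packets arriving at tick t.
    Returns (packets sent, packets discarded)."""
    queue = deque()
    sent = discarded = 0
    for n in arrivals:
        for _ in range(n):
            if len(queue) < capacity:
                queue.append("pkt")   # room in the bucket: enqueue
            else:
                discarded += 1        # bucket full: packet is lost
        if queue:
            queue.popleft()           # constant outflow: one packet per tick
            sent += 1
    while queue:                      # drain the remainder, one per tick
        queue.popleft()
        sent += 1
    return sent, discarded

# A burst of 5 packets at tick 0 into a bucket that holds 3:
print(leaky_bucket([5, 0, 0, 0], capacity=3))  # (3, 2)
```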
The Token Bucket Algorithm
• For many applications, it is better to allow the output to
speed up somewhat when large bursts arrive, so a more
flexible algorithm is needed, preferably one that never loses
data. One such algorithm is the token bucket algorithm
• Tokens arrive at a constant rate in the token bucket.
• If bucket is full, tokens are discarded.
• A packet from the buffer can be taken out only if a token in
the token bucket can be drawn.
• The token bucket algorithm provides a different kind of traffic
shaping than that of the leaky bucket algorithm. The leaky
bucket algorithm does not allow idle hosts to save up
permission to send large bursts later.
The Token Bucket Algorithm
• The token bucket algorithm does allow saving, up to the
maximum size of the bucket, n. This property means that
bursts of up to n packets can be sent at once, allowing
some burstiness in the output stream and giving faster
response to sudden bursts of input.
• Another difference between the two algorithms is that
the token bucket algorithm throws away tokens (i.e.,
transmission capacity) when the bucket fills up but never
discards packets. In contrast, the leaky bucket algorithm
discards packets when the bucket fills up.
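A minimal sketch of the packet-counting variant described above. The rate and bucket size are illustrative, and for simplicity this toy model drops packets that find no token, whereas a real shaper would queue them.

```python
# Token bucket: tokens accumulate (up to bucket size n) while the host
# is idle, so a saved-up burst of up to n packets can go out at once.

def token_bucket(arrivals, rate, bucket_size, start_tokens=0):
    """arrivals[t] = packets wanting to go out at tick t.
    Returns the number of packets sent at each tick."""
    tokens = start_tokens
    out = []
    for n in arrivals:
        tokens = min(tokens + rate, bucket_size)  # excess tokens are discarded
        send = min(n, tokens)                     # one token per packet sent
        tokens -= send
        out.append(send)
    return out

# Host idle for 3 ticks (saving tokens), then bursts of 6 packets:
print(token_bucket([0, 0, 0, 6, 6], rate=1, bucket_size=4))  # [0, 0, 0, 4, 1]
```

Compare with the leaky bucket: here the saved tokens let the fourth tick emit 4 packets at once instead of a rigid 1 per tick.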
Questions?
Thank you