Mod 3

Transport service provides end-to-end data transport functionalities between systems, abstracting underlying communication complexities. It includes connection-oriented and connectionless services, each with distinct advantages and use cases, and emphasizes Quality of Service (QoS) to meet specific transmission requirements. The transport layer also manages connection establishment, flow control, multiplexing, and security to ensure reliable and efficient data transfer.

What is Transport Service

• Transport service refers to the functionalities offered by a transport protocol to higher-level protocols such as application or session layer protocols.

• It provides end-to-end data transport between two systems, hiding the complexities of the
underlying communication layers from the user.

• A transport entity in a system communicates with a remote transport entity using the
services of a lower layer (like the network layer).

a) Type of Service

There are two basic types of protocol services:

1. Connection-Oriented Service

• Establishes, maintains, and terminates a logical connection between TS (Transport Service) users.

• Most commonly used type of service.

• Generally reliable in nature.

Advantages / Strengths:

• Supports features like:

o Flow control

o Error control

o Sequenced delivery

2. Connectionless Service (Datagram Service)

• No need to establish or maintain a connection.

• Often used at lower layers (like internet/network layers) due to:

o Higher robustness

o Acts as a "least common denominator" service for higher layers.

• Also useful at transport and above in specific cases where connection overhead is:

o Unjustified

o Counterproductive

Examples where Connectionless Service is preferred:

• a. Inward Data Collection:

o Periodic data sampling (e.g., from sensors, network components).

o Real-time monitoring – occasional data loss is tolerable.

• b. Outward Data Dissemination:


o Broadcast messages to users.

o Announcements like:

▪ New node detection

▪ Service address changes

▪ Real-time clock distribution

• c. Request-Response:

o Transactional applications using a single request-response.

o Common server with many distributed users.

o Regulated at application level.

o Lower-level connections are often unnecessary or cumbersome.

• d. Real-Time Applications:

o Voice communication, telemetry, etc.

o May have redundancy or strict real-time requirements.

o Cannot afford retransmissions or delays from connection-oriented mechanisms.

b) Quality of Service (QoS)

• The transport protocol entity should allow the TS user to specify the quality of transmission
service required.

• The transport entity will try to optimize the use of underlying link, network, and internet
resources to provide the requested services.

Examples of Services That Might Be Requested:

• Acceptable error and loss levels

• Desired average and maximum delay

• Desired average and minimum throughput

• Priority levels

Limitations:

• The transport entity is limited by the capabilities of the underlying service.

Example 1: IP (Internet Protocol)

• Provides a QoS parameter.

• Allows specification of:

o Eight levels of precedence/priority

o Binary settings for:

▪ Normal or low delay


▪ Normal or high throughput

▪ Normal or high reliability

• The transport entity may delegate (pass the request) to the internetwork entity.

• However, even IP is limited in capability:

o Routers can schedule items preferentially

o But are still dependent on physical transmission facilities

Example 2: X.25

• Offers throughput class negotiation (optional user facility)

• The network may:

o Adjust flow control parameters

o Allocate more network resources to a virtual circuit

o Achieve the desired throughput

Additional Transport Layer Techniques:

• Splitting one transport connection among multiple virtual circuits to enhance throughput.

User Considerations for QoS:

• Based on transmission facility characteristics, the transport entity’s success in providing QoS
may vary.

• Trade-offs are inevitable among:

o Reliability

o Delay

o Throughput

o Cost of service

Application Scenarios for Specific QoS Requirements:

• File Transfer Protocol (FTP):

o Requires high throughput

o Also high reliability (to avoid retransmissions at file level)

• Transaction Protocol (e.g., Web browser-server):

o Requires low delay

• Electronic Mail Protocol:

o May need multiple priority levels

Approaches to QoS in Protocol Design:


1. Include a QoS facility within the protocol itself

o Seen in IP

o Also adopted by most transport protocols

2. Use different transport protocols for different traffic classes

o Approach taken by the ISO-standard family of transport protocols

3. Data Transfer

• The core purpose of the transport layer is to move data between two devices or processes.

• Transfers include both:

o User Data – the actual content

o Control Data – protocol information

Modes of Data Transfer:

• Full-Duplex: Send and receive simultaneously (like a phone call).

• Half-Duplex: Send or receive, one direction at a time (like a walkie-talkie).

• Simplex: One-way only (like TV broadcast).

4. User Interface

• This is how the TS user accesses transport services.

• No universal standard — it depends on the system and environment.

Possible Implementations:

• Procedure Calls: Function or API-style usage.

• Mailboxes: A message-passing mechanism.

• Direct Memory Access (DMA): Data passed between application memory and the transport entity without CPU involvement.

Interface Considerations:

• Flow Regulation: Prevent users from overwhelming the transport entity.

• Buffering: Prevent transport entity from overflowing the application.

• Acknowledgements:

o Local: Acknowledge immediately after receipt.

o End-to-End: Acknowledge after data reaches remote end.

5. Connection Management

Responsibilities of Transport Layer:

• Connection Establishment – Initiating a session.


• Connection Maintenance – Keeping it alive and synchronized.

• Connection Termination – Closing when done.

Types:

• Symmetric: Both sides can initiate.

• Asymmetric: Only one side initiates (useful for simplex connections).

Termination Modes:

• Graceful: Waits for all data to be delivered. Safe.

• Abrupt: Cuts off immediately. Fast, but risky.

6. Expedited Delivery

• A way to prioritize urgent data that should skip the normal data queue.

Characteristics:

• Delivered as fast as possible.

• Notifies receiver immediately (like an interrupt).

• Used for alarms, control signals, or break characters.

Difference from Priority:

Feature     Expedited Delivery            Priority Service

Urgency     For occasional urgent data    For consistently higher importance

Mechanism   Interrupt-like                Ongoing resource adjustment

Example     Alarm from a fire sensor      Emergency traffic in a network

7. Status Reporting

• Allows TS user to monitor or query transport layer’s current state.

Information Available:

• Connection performance (e.g., current throughput, delay).

• Addresses (both transport and network layer).

• Protocol class/version.

• Timers (e.g., retransmission timers).

• Internal state of protocol machine.

• QoS degradation alerts.

8. Security

• Transport layer may offer built-in security mechanisms.

Features:
• Access Control: Authenticate sender and receiver.

• Encryption/Decryption: Data confidentiality.

• Secure Routing: Prefer secure links/nodes if network supports them.

Example:

Like sending confidential documents through a trusted courier with ID checks and a sealed package.

Reliable Sequencing Network Service — Definition


A Reliable Sequencing Network Service refers to a type of network service that:

• Accepts messages of arbitrary length

• Delivers them with nearly 100 percent reliability

• Ensures that messages are delivered in the correct order (sequenced) to the destination

In such networks, the transport protocol can afford to be simple, as many of the complications like
retransmission, ordering, or loss handling are already taken care of by the underlying network.

Examples of Reliable Sequencing Network Services

• A highly reliable packet-switching network using an X.25 interface

• A frame relay network using the LAPF control protocol

• An IEEE 802.3 LAN (Local Area Network) that uses the connection-oriented LLC (Logical Link Control) service

Transport Protocol Issues in This Scenario

Even with a reliable network service, the transport layer must handle the following four essential issues:

1. Addressing

2. Multiplexing

3. Flow Control

4. Connection Establishment and Termination

1. Addressing

Purpose

To identify the correct destination so that a user of a transport entity can

• Establish a connection

• Send connectionless data to another transport user

Required Addressing Components

To properly identify a target user, the transport protocol requires:

• User Identification

• Transport Entity Identification

• Station Address

• Network Number

How Transport Protocol Derives These

All this information is derived from the TS (Transport Service) user address.
Typically, the address is expressed as a station (host, computer, or device) and a port (specific process
or user on the station).
In the OSI model, this combination is called a TSAP (Transport Service Access Point).

TCP or IP Specifics

In TCP, the combination of port and station is called a socket. A socket uniquely identifies a communication endpoint.
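The (station, port) pairing described above is visible directly in the Berkeley sockets API. A minimal sketch using only Python's standard library (the loopback address is incidental, and port 0 simply asks the OS to pick a free port):

```python
import socket

# A TCP endpoint -- a "socket" in TCP terms -- is the pair (station address, port).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # port 0: let the OS choose any free port
srv.listen(1)

host, port = srv.getsockname()  # the (station, port) pair naming this endpoint
print(host, port > 0)           # e.g. 127.0.0.1 True
srv.close()
```

The same pair appears on both ends of a connection: the four-tuple (local host, local port, remote host, remote port) identifies one transport connection uniquely.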

Role of the Transport Layer in Addressing

The transport layer passes the station address down to the network layer. Routing is not its concern.
The port number is included in the transport header, which is used at the destination by the
receiving transport protocol.

How Does the Initiating TS User Know the Destination Address

There are four strategies to determine or obtain the address of the destination user:

Static Methods

1. Pre-configured Address
The initiating user must know the destination address ahead of time.
Used for processes that are only useful to a few users and are not meant to be well-known or
public.
Example: A statistics collection process that only the central network manager connects to
occasionally.

2. Well-Known Addresses
Common services are assigned well-known port numbers or addresses.
Example: Time-sharing service, word processing service.

Dynamic Methods

3. Name Server Lookup
The user requests a service by a generic or global name.
This name is sent to a name server, which looks up the name and returns the actual address.
This is useful when services change location for load balancing or operational reasons.

4. Spawn-on-Demand (Process Request)
The destination process does not exist initially.
The initiating user sends a request to a privileged system process at a well-known address.
That process then spawns the required target process and returns the newly created
process's address.
Example: A simulation program is executed remotely via a job management request.

2. Multiplexing in Transport Layer

1. Definition:

o Multiplexing refers to the process of combining multiple signals or data streams into one.

o In the transport layer, multiplexing and demultiplexing involve managing multiple communication sessions over a shared transport protocol.

2. Purpose in Transport Layer:

o Multiple higher-level users (applications) can share the same transport protocol.

o Differentiation is achieved using port numbers or Service Access Points (SAPs).

3. Two Types of Multiplexing:

o Upward Multiplexing: Multiple transport connections are mapped onto a single lower-layer (network) connection.

o Downward Multiplexing: A single transport connection is distributed across multiple lower-layer connections.

Example: Transport Entity with X.25 Network

4. Why Use Upward Multiplexing?

o X.25 allows up to 4095 virtual circuits, typically sufficient.

o However, virtual circuits consume buffer resources and may incur usage-based costs
(based on connection time).

o To minimize resource usage and cost, multiple transport sessions can share one
virtual circuit, making upward multiplexing practical and efficient.

5. Why Use Downward Multiplexing?

o Each X.25 virtual circuit supports only a limited sequence number space (3-bit or 7-bit).

o For high-speed, long-delay networks, larger sequence spaces are needed for
efficient data transfer.

o Splitting the connection across multiple circuits (downward multiplexing) can increase throughput.

6. Limitations of Downward Multiplexing:

o Throughput gains are bounded by the capacity of the underlying physical link.

o If all virtual circuits share a single station-node link, the maximum achievable
throughput cannot exceed that link's data rate.
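Upward multiplexing ultimately comes down to demultiplexing on the port number (or SAP) carried in each segment header. A minimal sketch of the receiving side (the segment layout here is a hypothetical dictionary, not a real header format):

```python
# Several transport users share one network connection; the destination
# port in each segment header selects which user's queue receives the data.
def demultiplex(segments, ports):
    delivered = {port: [] for port in ports}
    for seg in segments:
        delivered[seg["dst_port"]].append(seg["data"])
    return delivered

segments = [
    {"dst_port": 80, "data": b"GET /"},
    {"dst_port": 25, "data": b"MAIL FROM"},
    {"dst_port": 80, "data": b"Host: x"},
]
out = demultiplex(segments, ports=[80, 25])
print(out[80])   # both port-80 payloads, in arrival order
```

Note that per-port ordering is preserved only because the underlying network delivers segments in sequence; over an unreliable network this step alone is not enough.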

3. Flow Control

Flow control at the transport layer is a more complex mechanism compared to the link layer due to
the following reasons:

1. Interaction Involved:
o Flow control at the transport layer involves the coordination between TS users,
transport entities, and the network service.

2. Transmission Delay:

o The transmission delay between transport entities is generally long compared to the
actual transmission time and is also variable.

Illustration of Flow Control (Figure 17.3)

• When TS user A sends data to TS user B over a transport connection, four queues are involved in the process:

1. A generates data and queues it.

2. A must wait until:

▪ B gives permission (peer flow control).

▪ Transport entity A gives permission (interface flow control).

3. Transport entity A queues the data until permission is received from B and the network service.

4. The data is handed to the network layer for delivery to B.

5. The network service queues the data until B gives permission.

6. Finally, B must grant permission to deliver the data to its destination.

Flow Control with Delay (Figure 17.4)

1. Sending Data:

o When a TS user sends data to its transport entity (via a Send call), two events occur:

1. The transport entity generates transport-level protocol data units (segments) and passes them to the network service.

2. The transport entity acknowledges to the TS user that it has accepted the data for transmission.

2. Flow Control at Transport Entity:

o The transport entity can exercise flow control across the user-transport interface by withholding acknowledgment, especially if it's being held up by the network service or the receiving transport entity.
3. Receiving Data:

o When the segment reaches the receiving transport entity:

▪ The entity unwraps the data and sends it to the destination TS user (via an
Indication primitive).

▪ The TS user acknowledges receipt (via a Response primitive).

▪ The TS user can withhold its response to exercise flow control.

4. Acknowledgment Options:

o Transport entity acknowledgment can either:

1. Be sent immediately after receiving the segment (indicating the data reached the transport entity).

2. Be sent after confirming that the TS user received the data (the safer approach).

Why Flow Control is Needed

A transport entity might need to control the rate of segment transmission for two primary reasons:

1. Receiving TS User Cannot Keep Up: The destination user cannot handle the incoming flow of
data.

2. Receiving Transport Entity Cannot Keep Up: The receiving transport entity itself is
overwhelmed.

Buffer Overflow Prevention

• Each transport entity has a buffer to hold incoming segments. If this buffer fills up, the
transport entity must slow or stop the flow to avoid overflow.

• Delays between sender and receiver make it difficult to manage flow control effectively.

Ways to Handle Flow Control

The receiving transport entity can handle flow control in four different ways:

1. Do Nothing: The transport entity discards overflowing segments. The sender will retransmit,
but this increases the flow, exacerbating the problem.

2. Refuse to Accept More Segments: The entity uses a backpressure mechanism, refusing
additional data from the network service when its buffer is full. This triggers flow control at
the network service, which in turn limits segments from the sending transport entity. This
approach is coarse-grained and may affect multiple transport connections if multiplexed on a
single network connection.

3. Fixed Sliding-Window Protocol:

o Sliding Window: This technique uses sequence numbers and a fixed-size window.
The sender can only transmit a certain number of segments (based on the window
size).
o When the receiver's buffer nears full capacity, it withholds acknowledgment to slow
down the sender.

o In reliable networks, the sender will not retransmit until it receives acknowledgment
for the data it sent. However, in unreliable networks, lack of acknowledgment could
be due to either flow control or segment loss.

4. Credit Scheme:

o In a credit scheme, acknowledgment and flow control are decoupled.

o Segments are acknowledged without granting new credit, and vice versa.

o A credit message tells the sender how many new segments can be transmitted. For
example, if the receiver has space for 7 segments, it sends a credit message with 7
credits. The sender then advances the window as it transmits segments, and the
receiver grants credit as space frees up.
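The sender-side bookkeeping for such a credit scheme can be sketched in a few lines (a simplified model that ignores sequence-number wraparound; the class and method names are illustrative):

```python
class CreditSender:
    # Tracks the regions described above: acknowledged data,
    # sent-but-unacknowledged data, and the credit still available.
    def __init__(self):
        self.acked = 0      # all segments below this number are acknowledged
        self.next_seq = 0   # sequence number of the next segment to send
        self.credit = 0     # segments we may still transmit

    def on_ack(self, ack_n, credit_m):
        # (ACK N, CREDIT M): credit is granted relative to the ACK point,
        # so segments already in flight consume part of it.
        self.acked = ack_n
        self.credit = credit_m - (self.next_seq - ack_n)

    def can_send(self):
        return self.credit > 0

    def send(self):
        assert self.can_send()
        self.next_seq += 1
        self.credit -= 1
        return self.next_seq - 1

s = CreditSender()
s.on_ack(0, 7)              # receiver has room for 7 segments
sent = [s.send() for _ in range(7)]
print(sent, s.can_send())   # [0, 1, 2, 3, 4, 5, 6] False
```

Once the window is exhausted, the sender must wait for a new (ACK N, CREDIT M) segment before transmitting again — exactly the decoupling of acknowledgment and permission described above.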

Credit Scheme in Action (Figures 17.5 & 17.6)

• Sender's Perspective:

1. Data Sent and Acknowledged: The range of sequence numbers from the start to the
last acknowledged segment.

2. Data Sent but Not Acknowledged: Segments that have been sent, but for which
acknowledgment has not yet been received.

3. Permitted Data Transmission: Segments that can be transmitted based on available credits.

4. Unused and Unusable Numbers: Sequence numbers beyond the window.

• Receiver's Perspective:

o The receiver decides how many segments to allow the sender to transmit,
depending on available buffer space. A more conservative policy may limit
throughput, while a more optimistic policy might grant credits in anticipation of
freeing up buffer space.

Flow Control Strategies


• A conservative flow control scheme may limit throughput in high-delay situations but avoids
buffer overflow.

• An optimistic flow control scheme may increase throughput by granting credits for space the
receiver expects to free up. However, if the receiver is slower than the sender, it might
discard segments, causing retransmissions. Although this is less efficient, it works well in a
reliable network.

4. Connection Establishment and Termination

Even with a reliable network service, there is a need for connection establishment and
termination procedures to support connection-oriented service.

Purposes of Connection Establishment

Connection establishment serves three main purposes:

• It allows each end to assure that the other exists.

• It allows negotiation of optional parameters (e.g., maximum segment size, maximum window size, quality of service).

• It triggers allocation of transport entity resources (e.g., buffer space, entry in connection table).

Connection Establishment Procedure

• Connection establishment is by mutual agreement and can be accomplished by a simple set of user commands and control segments.

• Initial State:
A TS (Transport Service) user is in a CLOSED state (i.e., it has no open transport connection).

• Passive Open Command:

o The TS user can signal that it will passively wait for a request by issuing a Passive
Open command.

o Example: A server program, such as a time-sharing or a file transfer application, might use this.

o The TS user may change its mind by sending a Close command.

o After the Passive Open command is issued, the transport entity creates a connection
object (i.e., a table entry) that is in the LISTEN state.

• Active Open Command:

o From the CLOSED state, the TS user may open a connection by issuing an Active
Open command.

o This instructs the transport entity to attempt connection establishment with a designated user.
o It triggers the transport entity to send a SYN (synchronize) segment.

o The SYN segment is carried to the receiving transport entity and interpreted as a
request for connection to a particular port.

• Receiving Side Actions:

o If the destination transport entity is in the LISTEN state for that port:

▪ It signals the TS user that a connection is open.

▪ Sends a SYN as confirmation to the remote transport entity.

▪ Puts the connection object in an ESTAB (established) state.

• Initiating Side Actions:

o When the responding SYN is received by the initiating transport entity, it also moves
the connection to an ESTAB state.

• Premature Abortion:

o The connection is prematurely aborted if either TS user issues a Close command.
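The opening sequence above can be summarized as a small state table. CLOSED, LISTEN, and ESTAB are the states named in the text; the intermediate "SYN SENT" name for the active opener is an assumption added for the sketch:

```python
# (current state, event) -> (next state, action to perform)
OPEN = {
    ("CLOSED",   "passive_open"): ("LISTEN",   None),
    ("CLOSED",   "active_open"):  ("SYN SENT", "send SYN"),
    ("LISTEN",   "recv SYN"):     ("ESTAB",    "send SYN"),  # confirm to peer
    ("SYN SENT", "recv SYN"):     ("ESTAB",    None),
    ("LISTEN",   "close"):        ("CLOSED",   None),        # user changed its mind
}

def step(state, event):
    return OPEN[(state, event)]

# Active opener's path: issue Active Open, then receive the peer's SYN.
state, action = step("CLOSED", "active_open")
state, action = step(state, "recv SYN")
print(state)   # ESTAB
```

The passive side follows the LISTEN row instead, which is why simultaneous opens cause no confusion: both sides reach ESTAB on receipt of the other's SYN.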

Robustness of the Protocol

• Either side can initiate a connection.

• If both sides initiate the connection at about the same time, it is established without confusion.

• This happens because the SYN segment functions both as a connection request and a connection acknowledgment.

What if SYN Arrives When TS User is Idle?

Three possible actions:

1. The transport entity can reject the request by sending an RST (reset) segment back to the
other transport entity.

2. The request can be queued until a matching Open is issued by the TS user.

3. The transport entity can interrupt or otherwise signal the TS user to notify it of a pending
request.

• If the latter mechanism is used:

o A Passive Open command is not strictly necessary.

o It may be replaced by an Accept command, which is a signal from the user to the
transport entity that it accepts the request for connection.

Connection Termination Procedure

• Either side, or both sides, may initiate a close.

• The connection is closed by mutual agreement.


• This strategy allows for either abrupt or graceful termination.

Graceful Termination Procedure

Side Initiating Termination:

1. In response to a TS user's Close primitive, a FIN segment is sent to the other side of the
connection, requesting termination.

2. After sending the FIN, the transport entity places the connection in the FIN WAIT state.

o In this state, it must continue to accept data from the other side and deliver that
data to its user.

3. When a FIN is received in response, the transport entity informs its user and closes the
connection.

Side Not Initiating Termination:

1. When a FIN segment is received:

o The transport entity informs its user of the termination request.

o Places the connection in the CLOSE WAIT state.

o In this state, it must continue to accept data from its user and transmit it in data
segments to the other side.

2. When the user issues a Close primitive:

o The transport entity sends a responding FIN segment to the other side.

o Closes the connection.
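The graceful-close exchange maps onto the FIN WAIT and CLOSE WAIT states just described. A sketch of the transitions as a table (the event names are illustrative labels, not protocol primitives):

```python
# (current state, event) -> (next state, action)
CLOSE = {
    ("ESTAB",      "user Close"): ("FIN WAIT",   "send FIN"),
    ("ESTAB",      "recv FIN"):   ("CLOSE WAIT", "inform user"),
    ("FIN WAIT",   "recv FIN"):   ("CLOSED",     "inform user"),
    ("CLOSE WAIT", "user Close"): ("CLOSED",     "send FIN"),
}

# Initiating side: user issues Close, then waits for the peer's FIN.
s = "ESTAB"
s, _ = CLOSE[(s, "user Close")]   # FIN WAIT: must still accept peer data
s, _ = CLOSE[(s, "recv FIN")]
print(s)                          # CLOSED
```

The non-initiating side takes the other path (ESTAB, recv FIN) → CLOSE WAIT, continuing to send its own outstanding data until its user also issues Close.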

Key Points

• Both sides must ensure that all outstanding data is received.

• Both sides must agree to connection termination before actual termination.

Unreliable Network Service


The most difficult case for a transport protocol is that of an unreliable network service.

Examples of Unreliable Networks

• An internetwork using IP.

• A frame relay network using only the LAPF core protocol.

• An IEEE 802.3 LAN using the unacknowledged connectionless LLC service.

Problems in Unreliable Network Service

• The issue is not only that segments are occasionally lost.

• Segments may arrive out of sequence due to variable transit delays.

• Elaborate machinery is required to cope with these two interrelated deficiencies.


General Observations

• A discouraging pattern emerges:

o The combination of unreliability and nonsequencing creates problems with every mechanism discussed so far.

• Generally:

o The solution to each problem raises new problems.

o Although there are problems for protocols at all levels, there are more difficulties
with a reliable connection-oriented transport protocol than any other sort of
protocol.

Seven Issues to Address

1. Ordered delivery

2. Retransmission strategy

3. Duplicate detection

4. Flow control

5. Connection establishment

6. Connection termination

7. Crash recovery

1. Ordered Delivery

• In unreliable network services, segments may arrive out of order, even if all are delivered.

• To solve this, segments are numbered sequentially.

• Data link protocols like HDLC and X.25 number each data unit (frame, packet) sequentially, with each sequence number increasing by one.

• TCP uses a different approach where each transmitted data octet is implicitly numbered.

• For example, the first segment might have a sequence number of 0, and if it contains 1000
octets, the next segment will have the sequence number 1000, and so on.
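The octet-numbering rule in the example can be stated directly. Since TCP's sequence field is 32 bits, the arithmetic is modular:

```python
def next_seq(seq, payload_octets, modulus=2**32):
    # TCP numbers octets, not segments: the next segment's sequence
    # number is this one's plus the number of octets it carried.
    return (seq + payload_octets) % modulus

print(next_seq(0, 1000))        # 1000, as in the example above
print(next_seq(2**32 - 1, 1))   # 0 -- the number space wraps around
```

The wraparound in the second call is the same "cycling" that the duplicate-detection discussion later in this section must guard against.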

2. Retransmission Strategy

• Two situations require retransmission of a segment:

1. Damage in transit: A segment may be damaged but still arrive. The receiver can
detect and discard it using a frame check sequence.

2. Loss of segment: A segment may fail to arrive, and the sender will not know this.

• To address this, a positive acknowledgment (ACK) scheme is needed: The receiver must
acknowledge each successfully received segment.
• For efficiency, a cumulative acknowledgment can be used. For example, after receiving
segments 1, 2, and 3, the receiver sends ACK 4, indicating all previous segments have been
received successfully.

• If no acknowledgment is received, the sender will retransmit the segment.

• A timer must be associated with each sent segment; if the timer expires before
acknowledgment, the segment is retransmitted.
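The cumulative acknowledgment rule above can be sketched as "ACK the lowest segment number not yet received" (segment numbering starts at 1 here, to match the example):

```python
def cumulative_ack(received, first=1):
    # Returns N such that every segment below N has arrived;
    # sending ACK N acknowledges all of them at once.
    n = first
    while n in received:
        n += 1
    return n

print(cumulative_ack({1, 2, 3}))   # 4 -- one ACK covers segments 1-3
print(cumulative_ack({1, 3}))      # 2 -- segment 2 is still missing
```

The second case shows why cumulative ACKs alone give the sender limited information: segment 3 arrived, but the ACK cannot say so until segment 2 fills the gap.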

Retransmission Timer

• The timer should be set to slightly longer than the round-trip delay (time to send a segment
and receive the ACK).

• If the timer is too small, unnecessary retransmissions will occur, wasting network capacity.

• If the timer is too large, the protocol will respond slowly to a lost segment.

• The delay varies with network conditions, making it difficult to set the optimal timer value.

Strategies for Timer Setting

1. Fixed Timer Value:

o A fixed timer based on typical network behavior can be used.

o It suffers from not adapting to changing network conditions, leading to sluggish response or excessive retransmissions if set improperly.

2. Adaptive Timer Scheme:

o This approach adjusts the retransmission timer based on observed delays.

o It faces challenges like:

▪ Delayed acknowledgments due to cumulative ACKs.

▪ Difficulty in determining whether an ACK corresponds to the initial transmission or a retransmission.

▪ Sudden changes in network conditions.

o These issues require further adjustments to the transport algorithm.
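A common form of adaptive timer keeps an exponentially weighted average of observed round-trip times and sets the retransmission timer some margin above it. This sketch uses the classic smoothing constant 0.875 (as in TCP's smoothed RTT); the margin factor of 2 is an illustrative choice:

```python
def update_timer(srtt, sample, alpha=0.875, margin=2.0):
    # Fold the newest round-trip sample into the running estimate,
    # then set the retransmission timer comfortably above it.
    srtt = alpha * srtt + (1 - alpha) * sample
    return srtt, margin * srtt

srtt = 100.0                       # ms, initial estimate
for sample in (120.0, 130.0, 90.0):
    srtt, rto = update_timer(srtt, sample)
print(round(srtt, 1), round(rto, 1))   # 103.9 207.9
```

Notice how a single slow sample (130 ms) nudges the estimate up only slightly — the smoothing is what makes the timer resistant to transient delay spikes, at the cost of reacting slowly to sudden, sustained changes in network conditions.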

Other Timers in Transport Protocols

• The retransmission timer is just one of several timers necessary for the proper functioning of
a transport protocol.
3. Duplicate Detection

• If a segment is lost and retransmitted, no confusion arises.

• If an ACK is lost, one or more segments may be retransmitted, and they could be duplicates
of previously received segments.

• The receiver must be able to recognize duplicates.

• The sequence number carried by each segment helps with duplicate detection, but it’s not
always straightforward.

Two cases of duplicate detection:

1. Duplicate received before the connection is closed:

o The receiver assumes the acknowledgment was lost and acknowledges the duplicate
segment.

o The sender must not be confused by receiving multiple ACKs for the same segment.

2. Duplicate received after the connection is closed:

o Similar handling as before, but the connection's state matters in how the duplicate is
processed.

Sequence Number Space Requirement

• The sequence number space must be large enough to avoid "cycling" before the maximum
possible segment lifetime is reached.

• For example, if the sequence space is too small, and the sender cycles back to sequence
number 0 before a segment has finished its journey, old segments may be incorrectly treated
as new ones.

Example Scenario (Figure 17.9)

• Sequence space length: 8

• Sliding-window protocol with window size of 3.

• A sends segments 0, 1, and 2 without receiving any ACKs.

• A times out and retransmits segment 0.

• B has received segments 1 and 2, but segment 0 is delayed.

• B sends an ACK for segments 0, 1, and 2 when segment 0 finally arrives.

• A times out again and retransmits segment 1, which B acknowledges with ACK 3.

• Data transfer resumes.

Issue when sequence space is exhausted:


• A cycles back to sequence number 0 and continues, but an old segment 0 arrives before the
new segment 0.

• This would not have caused an issue if the sequence numbers hadn’t wrapped around.

Size of Sequence Space

• The sequence space must be large enough to accommodate delays in transmission.

• The required size depends on network conditions, including maximum packet lifetime and
segment transmission rate.

• A single bit added to the sequence number field doubles the sequence space, making it easy
to select an adequate size.

• Standard transport protocols allow for large sequence spaces to avoid this issue.
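The "cycling" hazard is at bottom a modular-arithmetic question: a sequence number is unambiguous only if it falls inside the current window of the sequence space. A sketch of the window test, using the 8-number space and window of 3 from the figure:

```python
def in_window(seq, base, window, space=8):
    # True if seq lies in [base, base + window) modulo the sequence space.
    return (seq - base) % space < window

# Window of 3 starting at 0: numbers 0, 1, 2 are acceptable...
print(in_window(2, base=0, window=3))   # True
# ...but after the numbers wrap, an old segment 0 passes the very
# same test as a new segment 0 -- the check alone cannot tell them apart.
print(in_window(0, base=0, window=3))   # True either way
```

This is why the sequence space must be large relative to the maximum segment lifetime: the window must never advance far enough, within one lifetime, for an old number to become acceptable again.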

4. Flow Control

• Credit-allocation flow control mechanism is robust in handling unreliable network services and requires little enhancement.

• The mechanism is tied to acknowledgments in the form of a control segment: (ACK N, CREDIT M).

o ACK N acknowledges all data segments through number N-1.

o CREDIT M allows segments numbered N through N+M-1 to be transmitted.

• This scheme allows flexibility:

o To increase or decrease credit without additional segments: B can send (ACK N, CREDIT X).

o To acknowledge a new segment without increasing credit: B can send (ACK N + 1, CREDIT M - 1).

• If an ACK/CREDIT segment is lost, it doesn't cause much harm because future acknowledgments will resynchronize the protocol.

• If no acknowledgments are received, the sender times out and retransmits, triggering a new
acknowledgment.

• However, deadlock can occur if:

o B sends (ACK N, CREDIT 0), temporarily closing the window.

o The subsequent (ACK N, CREDIT M) is lost, and A waits for permission to send.

o To solve this, a window timer is used.

▪ This timer resets with each outgoing ACK/CREDIT segment.

▪ If the timer expires, a new ACK/CREDIT segment is sent, even if it duplicates a previous one.

▪ This breaks deadlock and ensures that the protocol entity is still active.
• An alternative mechanism is to allow acknowledgments for the ACK/CREDIT segment,
allowing for larger window timer values.
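The (ACK N, CREDIT M) control segment can be decoded with a line of arithmetic — a sketch of what the sender does on receipt (function name is illustrative):

```python
def control_segment(ack_n, credit_m):
    # ACK N: all segments through N-1 are acknowledged.
    # CREDIT M: segments N through N+M-1 may now be transmitted.
    first_allowed = ack_n
    last_allowed = ack_n + credit_m - 1
    return first_allowed, last_allowed

print(control_segment(5, 3))   # (5, 7): segments 5, 6, and 7 may be sent
print(control_segment(5, 0))   # (5, 4): empty range -- the window is closed
```

The second call is the deadlock-prone case from the text: a CREDIT 0 closes the window, and if the later reopening segment is lost, only the window timer gets things moving again.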

5. Connection Establishment

• Connection establishment must account for network unreliability and involves the exchange of SYN segments, also known as a two-way handshake.

• SYN loss or SYN-response loss can occur, and both can be handled by a retransmit-SYN timer.

o After sending a SYN, A will retransmit it when the timer expires.

o This may lead to duplicate SYNs:

▪ If A’s SYN is lost, there are no duplicates.

▪ If B’s response is lost, A may receive two SYNs.

▪ If B’s response is delayed, A might get two SYN responses, but these should
be ignored once the connection is established.

• Delayed data segment or lost acknowledgment can cause duplicate data segments,
interfering with connection establishment.

• To avoid issues:

o Start new connections with different sequence numbers, far removed from the last
sequence number of the previous connection.

o Use the SYN i form, where i is the sequence number of the first data segment to be
sent.

Duplicate SYN Issue After Connection Termination

• A duplicate SYN may survive after the connection is terminated:

o An old SYN i might arrive at B after the connection has been closed, and B assumes
it’s a fresh request.

o B sends SYN j, while A tries to open a new connection with SYN k.

o B discards SYN k as a duplicate, and both sides end up thinking they have established
a valid connection.

• This can be avoided by explicitly acknowledging each other's SYN and sequence numbers in a three-way handshake.

Three-Way Handshake

• The three-way handshake solves the problem of duplicate SYNs and ensures both sides acknowledge the SYN segments before declaring the connection established.
o A new state, SYN RECEIVED, is introduced to ensure SYNs are acknowledged before
the connection is open.

o If a duplicate SYN is detected, an RST (reset) segment is sent to the other side.

Typical Three-Way Handshake (Figure 17.13):

1. A initiates the connection with SYN i (sequence number i).

2. B responds with SYN j, ACK i (acknowledging A’s sequence number).

3. A acknowledges B’s SYN with ACK j, and data transfer can begin.
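The three steps above can be modeled as a sequence of (flags, seq, ack) tuples. This is an illustrative model only (not socket code), and it follows the notes' notation in which ACK i acknowledges sequence number i.

```python
def three_way_handshake(i: int, j: int):
    """Model the typical three-way handshake as (flags, seq, ack) tuples."""
    a_syn = ("SYN", i, None)       # 1. A initiates the connection with SYN i
    b_syn_ack = ("SYN,ACK", j, i)  # 2. B responds with SYN j, ACK i
    a_ack = ("ACK", None, j)       # 3. A acknowledges B's SYN with ACK j
    return [a_syn, b_syn_ack, a_ack]

exchange = three_way_handshake(100, 300)
```

After the third segment, both sides have seen their own SYN acknowledged, which is exactly what the two-way handshake fails to guarantee.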

Handling Old SYN After Connection Close (Figure 17.11):

• If an old SYN i arrives at B after the connection closes, B responds with SYN j, ACK i.

• A then sends an RST, ACK j to discard the old connection request.

Handling Duplicate SYN During New Connection (Figure 17.13):

• An old SYN, ACK arriving during a new connection establishment causes no issue because of
the use of sequence numbers.

• If an invalid ACK is received, an RST is sent if the connection state is not yet OPEN.

6. Connection Termination

• A two-way handshake for connection termination is insufficient over an unreliable network service, just as it is for connection establishment.

• A problem arises when:

o A transport entity in the CLOSE WAIT state sends its last data segment followed by a
FIN segment.

o If the FIN segment arrives before the last data segment, the receiving entity may
close the connection prematurely and lose the last data segment.

• To prevent this, a sequence number is associated with the FIN segment, assigned after the
last octet of transmitted data.

o The receiving entity waits for any late-arriving data before closing the connection,
preventing data loss.

• Potential issues with lost or obsolete segments:

o The termination process adopts a solution similar to connection establishment, requiring each side to explicitly acknowledge the other side's FIN.

o An ACK with the sequence number of the FIN is sent in response to acknowledge
termination properly.

• For a graceful close, the transport entity must:


1. Send a FIN i and receive an ACK i.

2. Receive a FIN j and send an ACK j.

3. Wait for an interval equal to twice the maximum-expected segment lifetime before
declaring the connection closed.
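The three graceful-close steps can be sketched as an ordered event list plus the final wait interval (a simple model with illustrative names, not a real implementation):

```python
def graceful_close(i: int, j: int, max_segment_lifetime: float):
    """Return the ordered events of a graceful close and the interval
    to wait (twice the maximum-expected segment lifetime) before
    declaring the connection closed."""
    events = [
        f"send FIN {i}",   # 1. send FIN i ...
        f"recv ACK {i}",   #    ... and receive ACK i
        f"recv FIN {j}",   # 2. receive FIN j ...
        f"send ACK {j}",   #    ... and send ACK j
    ]
    return events, 2 * max_segment_lifetime  # 3. wait before closing
```

The final wait ensures that any delayed duplicate of either FIN or ACK dies in the network before the sequence-number space can be reused.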

7. Crash Recovery

• When a transport entity fails and restarts:

o The state information of active connections is lost, and the affected connections
become half-open.

o The side that did not fail remains unaware of the failure, potentially resulting in an
incomplete or lost connection.

• The still active side can close the connection using a give-up timer:

o The timer tracks how long the transport entity will wait for an acknowledgment (or
an appropriate response) of a retransmitted segment after the maximum number of
retransmissions.

o When the timer expires, the transport entity assumes the other side or the network
has failed.

o The timer triggers the connection closure, signaling an abnormal close to the TS
user.

• If the transport entity that failed restarts quickly, half-open connections can be terminated
more swiftly by sending an RST (reset) segment.

o The failed side returns RST i for each received segment i.

o When the RST i reaches the other side, it must be checked for validity using the
sequence number i, as it could correspond to an old segment.

o If the reset is valid, the transport entity performs an abnormal termination.

• These recovery measures clean up the transport-level state but do not automatically reopen
the connection.

o The decision to reopen the connection depends on the TS users.

o The problem arises from synchronization issues:

▪ The side that did not fail knows how much data it has received, but the failed
side may have lost state information, meaning it doesn't know which
segments were received.

▪ This can result in the loss or duplication of user data.
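The give-up timer decision on the surviving side can be modeled with a small predicate (a sketch; the parameter names and thresholds are illustrative assumptions, not from the source):

```python
def should_give_up(retransmissions: int, max_retransmissions: int,
                   seconds_since_last_ack: float, give_up_after: float) -> bool:
    """After the maximum number of retransmissions, wait out the give-up
    timer; if it expires with no acknowledgment, assume the other side
    (or the network) has failed and close the connection abnormally."""
    return (retransmissions >= max_retransmissions
            and seconds_since_last_ack >= give_up_after)
```

When the predicate becomes true, the transport entity closes the connection and signals an abnormal close to the TS user, as described above.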

TCP
• The TCP/IP protocol suite includes two transport-level protocols:
o Transmission Control Protocol (TCP):

▪ TCP is connection-oriented.

o User Datagram Protocol (UDP):

▪ UDP is connectionless.

• In this section:

o Focus is on TCP.

o TCP is specified in RFC 793.

o The discussion is divided into two parts:

▪ First, the service TCP provides to the TS (Transport Service) user.

▪ Second, the internal protocol details of TCP.

TCP Services:
o TCP is designed to provide reliable communication.

o Communication occurs between pairs of processes.

o These processes are referred to as TCP users.

o TCP ensures reliable communication across:

▪ A variety of reliable networks.

▪ A variety of unreliable networks.

▪ A variety of internets (interconnected networks).

• Functional equivalence:

o TCP is functionally equivalent to Class 4 ISO Transport.

• Difference between TCP and ISO model:

o TCP is stream-oriented.

▪ TCP users exchange streams of data.

▪ Data is placed into allocated buffers.

▪ TCP transmits data in the form of segments.

• Support provided by TCP:

o Security labeling is supported.

o Precedence labeling is supported.

• TCP provides two useful facilities for labeling data:

o Data stream push:

▪ Under normal operation:

▪ TCP decides when sufficient data have accumulated, and only then forms a segment for transmission.

▪ TCP user can override this behavior by using the push flag.

▪ The TCP user can require TCP to:

▪ Transmit all outstanding data.

▪ Include all data up to and including those labeled with a push flag.

▪ On the receiving side:

▪ TCP will deliver these pushed data to the user.

▪ The delivery occurs in the same manner as the push transmission.

▪ Use case:

▪ A user might request a push if there is a logical break in the data.

o Urgent data signaling:

▪ Provides a means to inform the destination TCP user.

▪ Indicates that significant ("urgent") data is present in the upcoming data stream.

▪ At the destination side:

▪ It is up to the destination user to determine the appropriate action based on this urgency.

• Service definition method:

o As with IP, TCP services are defined in terms of:

▪ Primitives (basic operations).

▪ Parameters (details associated with operations).

• Comparison with IP services:

o TCP services are considerably richer than IP services.

o Therefore, the set of:

▪ Primitives is more complex.

▪ Parameters is more complex.

• Table 17.2:

o Lists the TCP service request primitives.

o These primitives are issued by a TCP user to TCP.

• Table 17.3:

o Lists the TCP service response primitives.


o These primitives are issued by TCP to a local TCP user.

• Table 17.4:

o Provides a brief definition of the parameters involved in TCP services.

• Additional comments:

o There are two Passive Open commands:

▪ Their purpose is to signal the TCP user's willingness to accept a connection request.

o The Active Open with Data command:

▪ Allows the TCP user to:

▪ Begin transmitting data immediately.

▪ Data transmission starts with the opening of the connection.

TCP header format

• TCP uses only a single type of protocol data unit.

o This unit is called a TCP segment.

• The TCP header format:

o It is shown in Figure 17.14.

o One header must serve to perform all protocol mechanisms.

o Therefore, it is rather large.

o It has a minimum length of 20 octets.

• Fields in TCP header:

o Source port (16 bits):

▪ Acts as the source service access point.

o Destination port (16 bits):

▪ Acts as the destination service access point.

o Sequence number (32 bits):

▪ It is the sequence number of the first data octet in this segment.

▪ Exception: when SYN flag is set, it is the initial sequence number (ISN).

▪ When SYN is set, the first data octet will be ISN + 1.

o Acknowledgment number (32 bits):

▪ It is a piggybacked acknowledgment.
▪ It contains the sequence number of the next data octet that the TCP entity
expects to receive.

o Data offset (4 bits):

▪ It indicates the number of 32-bit words in the header.

o Reserved (6 bits):

▪ Reserved for future use.

o Flags (6 bits):

▪ Flag settings, in combination with other header fields, support protocol actions such as:

▪ Listen for connection attempt at specified security and precedence from any remote destination.

▪ Listen for connection attempt at specified security and precedence from a specified destination.

▪ Request connection at a particular security and precedence to a specified destination.

▪ Request connection at a particular security and precedence to a specified destination and transmit data with the request.

▪ Transfer data across a named connection.

▪ Issue incremental allocation for receive data to TCP.

▪ Close connection gracefully.

▪ Close connection abruptly.

▪ Query connection status.

▪ The six important control flags:

▪ URG: Urgent pointer field significant.

▪ ACK: Acknowledgment field significant.

▪ PSH: Push function.

▪ RST: Reset the connection.

▪ SYN: Synchronize the sequence numbers.

▪ FIN: No more data from sender.

o Window (16 bits):

▪ Provides flow control credit allocation, in octets.

▪ Contains the number of data octets beginning with the one indicated in the
acknowledgment field that the sender is willing to accept.

o Checksum (16 bits):


▪ The one's complement of the sum modulo 2^16-1 of all 16-bit words in the
segment.

▪ It includes a pseudo-header for the calculation.

▪ The pseudo-header situation is described separately below.

o Urgent Pointer (16 bits):

▪ Points to the last octet in a sequence of urgent data.

▪ This allows the receiver to know how much urgent data is coming.

o Options (Variable length):

▪ At present, only one option is defined.

▪ This option specifies the maximum segment size that will be accepted.

• Additional explanations for TCP header fields:

o Source port and Destination port:

▪ They specify the sending and receiving users of TCP.

▪ As with IP, a number of common users of TCP have been assigned specific
numbers.

▪ These assigned numbers should be reserved for that purpose in any TCP
implementation.

▪ Other port numbers must be arranged by mutual agreement between the communicating parties.

• Details about sequence number and acknowledgment number:

o Both are bound to octets rather than entire segments.

o Example:

▪ A segment has sequence number 1000.

▪ It includes 600 octets of data.

▪ The sequence number refers to the first octet in the data field (i.e., 1000).

▪ The next segment in logical order will have sequence number 1600.

o TCP is logically stream-oriented:

▪ It accepts a stream of octets from the user.

▪ It groups the octets into segments as it sees fit.

▪ It numbers each octet in the stream.
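The octet-numbering example above can be checked with a one-line helper (sequence arithmetic is modulo 2³²):

```python
def next_seq(seq: int, data_octets: int) -> int:
    """Sequence number of the next segment in logical order:
    octets are numbered individually, modulo 2**32."""
    return (seq + data_octets) % 2**32

# A segment with sequence number 1000 carrying 600 octets of data:
# the next segment in logical order has sequence number 1600.
nxt = next_seq(1000, 600)
```

The modulo operation also captures the wraparound when the 32-bit sequence space is exhausted.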

• Details about the checksum:

o The checksum field covers the entire TCP segment.


o It includes a pseudo-header which is prefixed at the time of calculation.

o Pseudo-header is added at both transmission and reception.

o The pseudo-header includes:

▪ Source internet address.

▪ Destination internet address.

▪ Protocol field.

▪ Segment length field.

o Purpose of pseudo-header:

▪ To protect TCP from misdelivery by IP.

▪ Even if IP delivers a segment without bit errors to the wrong host, the
receiving TCP entity will detect the error.

o Special case for IPv6:

▪ If TCP is used over IPv6, then the pseudo-header is different.

▪ It is depicted in Figure 16.21.
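The checksum computation over the pseudo-header plus segment can be sketched as follows. This is a simplified one's-complement sum in the RFC 1071 style, with the IPv4 pseudo-header laid out from the fields listed above; helper names are illustrative.

```python
import struct

def ones_complement_checksum(data: bytes) -> int:
    """One's complement of the one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                               # pad to a 16-bit boundary
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)      # end-around carry
    return ~total & 0xFFFF

def tcp_pseudo_header(src_ip: bytes, dst_ip: bytes, tcp_length: int) -> bytes:
    """IPv4 pseudo-header: source address, destination address,
    a zero octet, the protocol number (6 for TCP), and segment length."""
    return src_ip + dst_ip + struct.pack("!BBH", 0, 6, tcp_length)

# The checksum is computed over pseudo_header + tcp_segment; a receiver
# summing the same data (checksum field included) obtains zero.
```

Because the pseudo-header folds the IP addresses into the sum, a segment misdelivered to the wrong host fails the check even if it arrived bit-error free.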

• Missing items from TCP header:

o TCP is designed specifically to work with IP.

o Therefore, some user parameters are passed by TCP down to IP for inclusion in the IP
header.

• Parameters passed by TCP to IP:

o Precedence:

▪ A 3-bit field.

o Normal-delay / Low-delay:

▪ Determines the delay sensitivity of the segment.

o Normal-throughput / High-throughput:

▪ Determines the throughput level desired.

o Normal-reliability / High-reliability:

▪ Determines reliability preferences.

o Security:

▪ An 11-bit field.

• Overhead consideration:

o The required minimum overhead for every TCP data unit is actually 40 octets.

o This includes 20 octets of TCP header + 20 octets of IP header.
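The fixed 20-octet header layout described above maps directly onto a struct unpack. This is a parsing sketch only; any options beyond the fixed header are ignored.

```python
import struct

TCP_FLAGS = ("FIN", "SYN", "RST", "PSH", "ACK", "URG")  # bit 0 .. bit 5

def parse_tcp_header(segment: bytes) -> dict:
    """Decode the fixed 20-octet portion of a TCP header."""
    (src_port, dst_port, seq, ack, off_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # data offset: 32-bit words x 4
        "flags": [f for i, f in enumerate(TCP_FLAGS) if off_flags >> i & 1],
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }
```

A data offset of 5 (no options) yields the minimum header length of 20 octets noted above.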


TCP Mechanisms

Connection Establishment

• TCP connection establishment always uses a three-way handshake method.

• When the SYN flag is set:

o The TCP segment acts as a Request for Connection (RFC).

o Functions as explained earlier in Section 17.2.

• To initiate a TCP connection:

o The initiating TCP entity sends an RFC segment.

o This RFC segment includes an initial sequence number (X).

• After receiving the RFC:

o The receiving TCP entity responds by sending:

▪ An RFC segment with a different initial sequence number (Y).

▪ An Acknowledgment (ACK) for X.

▪ Both SYN and ACK flags are set in the segment.

• Finally, the initiator of the connection:

o Responds back by sending an ACK for Y.

• Special Case - Crossing RFCs:

o If both sides send RFCs at the same time:

▪ No conflict or error occurs.

▪ Both sides simply respond with ACKs to each other’s RFCs.

• Connection Identification:

o A connection is uniquely determined by:

▪ A combination of source port and destination port.

• Connection Constraints:

o Only one TCP connection can exist between a unique pair of ports at any one time.

• Multiple Connections:

o A given single port can support multiple simultaneous connections.

o Each connection must be with a different partner port.

Data Transfer
• Although TCP transfers data in segments, logically it treats the data as a continuous stream
of octets.

• Octet Numbering:

o Each octet in the stream is assigned a sequence number.

o Sequence numbers are calculated modulo 2³² (i.e., they wrap around after reaching
2³²-1).

• Sequence Numbers in Segments:

o Each segment includes:

▪ The sequence number of the first octet present in the segment's data field.

• Flow Control:

o TCP uses a credit allocation scheme for flow control.

o Credit is measured in terms of:

▪ Number of octets, not the number of segments.

• Data Buffering:

o Data are buffered by TCP at both:

▪ The sending side (transmission buffer).

▪ The receiving side (reception buffer).

• Data Transmission Decision:

o Normally, TCP decides on its own:

▪ When to create a segment for transmission.

▪ When to release the received data to the user application.

• PUSH Flag:

o If the PUSH flag is set:

▪ It forces TCP to:

▪ Immediately send all buffered data collected so far.

▪ Immediately deliver the received data to the receiving user.

o It behaves like an "end-of-letter" signal in a message.

• Urgent Data Signaling:

o A user can mark a specific block of data as urgent.

o TCP will:

▪ Use the urgent pointer to indicate the end of urgent data.

▪ Transmit urgent data inside the normal data stream.


• Handling of Unexpected Segments:

o If a segment arrives incorrectly (not matching any active connection):

▪ TCP sends back a segment with the RST (Reset) flag set.

• Examples of Unexpected Segments:

o Arrival of delayed duplicate SYN segments.

o Arrival of acknowledgment for data that has not yet been sent.

Connection Termination

• Normal Connection Termination:

o Achieved via a graceful close process.

o Each TCP user must:

▪ Issue a CLOSE primitive to signal closure.

o The TCP entity handling the connection:

▪ Sets the FIN flag on the last outgoing segment.

▪ The segment will also contain:

▪ The final portion of data to be sent.

• Abrupt Connection Termination:

o Occurs when:

▪ A TCP user issues an ABORT primitive.

o On receiving an ABORT command:

▪ The TCP entity:

▪ Abandons all pending send/receive operations.

▪ Discards any data in its transmission buffer.

▪ Discards any data in its reception buffer.

▪ TCP sends an RST (Reset) segment to the other side to notify abrupt closure.

TCP Implementation Policy Options

• The TCP standard provides:

o A strict protocol specification for interaction between TCP entities.

• However:

o Certain features allow multiple implementation choices.


• Important Note:

o Interoperability is always maintained between implementations.

o But performance differences may occur based on chosen policies.

• The five major design-area policy options are:

o Send Policy.

o Deliver Policy.

o Accept Policy.

o Retransmit Policy.

o Acknowledge Policy.

Send Policy

• When there is:

o No PUSH request from the user, and

o No closed transmission window,

• The sending TCP entity:

o Is free to decide when to transmit buffered data.

• Buffering of User Data:

o As the user application sends data:

▪ It gets stored in the TCP transmit buffer.

• Deciding When to Send:

o TCP may either:

▪ Construct and send a segment immediately for each batch of user data.

▪ Wait for more data to accumulate before constructing and sending a segment.

• Performance Considerations:

o If transmissions are:

▪ Infrequent and large:

▪ Leads to low overhead (less segment creation and processing load).

o If transmissions are:

▪ Frequent and small:

▪ Leads to fast system responsiveness and quick delivery.


• Silly Window Syndrome:

o Frequent small transmissions can cause inefficiency known as:

▪ The Silly Window Syndrome.

o This issue is:

▪ Discussed specifically in Problem 17.19.

Deliver Policy

• When no PUSH is indicated by the sender:

o The receiving TCP entity is free to deliver data to the user at its own convenience.

• The receiving TCP entity has two options for delivering data:

o It can deliver data as each in-order segment arrives.

o It can buffer multiple segments and deliver them together later.

• The choice of delivery policy depends on performance considerations:

o If deliveries are infrequent and large:

▪ The user may not receive data promptly, leading to undesirable delays.

o If deliveries are frequent and small:

▪ This can cause:

▪ Unnecessary processing overhead in the TCP layer.

▪ Unnecessary processing in the user application software.

▪ Increased number of operating-system interrupts.

Accept Policy

• When all segments arrive in order over a TCP connection:

o TCP places the received data into a receive buffer for delivery to the user.

• If segments arrive out of order:

o The receiving TCP entity has two options:

▪ In-order policy:

▪ Accept only segments that arrive in order.

▪ Discard any out-of-order segments.

▪ In-window policy:

▪ Accept all segments that are within the receive window.

▪ See Figure 17.6b for reference to the receive window.

• Characteristics of In-order policy:


o Leads to a simpler TCP implementation.

o Puts a burden on the networking facility because:

▪ Sending TCP must timeout and retransmit segments that were successfully
received but discarded due to misordering.

▪ If a single segment is lost, all subsequent segments must be retransmitted after the missing segment's timer expires.

• Characteristics of In-window policy:

o May reduce retransmissions by accepting out-of-order segments.

o Requires:

▪ A more complex acceptance test.

▪ A more sophisticated data storage scheme to buffer and manage out-of-order data.

• Special Note:

o For Class 4 ISO Transport Protocol (TP4):

▪ The in-window policy is mandatory.
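The two accept policies can be contrasted with a small decision function (an illustrative model; `rcv_next` is the next expected in-order sequence number, and arithmetic is modulo 2³² as in TCP):

```python
def accept_segment(seq: int, rcv_next: int, rcv_window: int, policy: str) -> bool:
    """In-order: accept only the exactly-next segment.
    In-window: accept any segment whose first octet lies in the receive window."""
    if policy == "in-order":
        return seq == rcv_next
    if policy == "in-window":
        return (seq - rcv_next) % 2**32 < rcv_window
    raise ValueError(f"unknown policy: {policy}")
```

An out-of-order segment rejected by the in-order policy must later be retransmitted, which is the extra network burden described above.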

Retransmit Policy

• TCP maintains a queue of sent segments that have not yet been acknowledged.

• TCP retransmits a segment if it fails to receive an acknowledgment within a specified time.

• Three possible retransmission strategies:

o First-only strategy:

▪ Maintain one retransmission timer for the entire queue.

▪ When an acknowledgment is received:

▪ Remove the acknowledged segment(s) from the queue.

▪ Reset the retransmission timer.

▪ If the timer expires:

▪ Retransmit the segment at the front of the queue.

▪ Reset the timer.

o Batch strategy:

▪ Maintain one retransmission timer for the entire queue.

▪ When an acknowledgment is received:

▪ Remove the acknowledged segment(s) from the queue.

▪ Reset the retransmission timer.


▪ If the timer expires:

▪ Retransmit all segments currently in the queue.

▪ Reset the timer.

o Individual strategy:

▪ Maintain one retransmission timer for each segment in the queue.

▪ When an acknowledgment is received:

▪ Remove the acknowledged segment(s) from the queue.

▪ Destroy the corresponding timer(s).

▪ If any timer expires:

▪ Retransmit the corresponding segment individually.

▪ Reset the timer for that segment.
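The timer-expiry behavior of the three strategies can be summarized in one sketch (an illustrative function, not part of any TCP implementation; for the individual strategy, the caller passes the segment whose timer fired):

```python
def on_timer_expiry(queue: list, strategy: str, expired=None) -> list:
    """Which unacknowledged segments are retransmitted when a timer fires."""
    if strategy == "first-only":
        return queue[:1]        # only the segment at the front of the queue
    if strategy == "batch":
        return list(queue)      # every segment still outstanding
    if strategy == "individual":
        return [expired]        # just the segment whose own timer expired
    raise ValueError(f"unknown strategy: {strategy}")
```

The trade-offs discussed below (traffic generated vs. retransmission delay vs. timer bookkeeping) follow directly from these three return values.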

• Characteristics of First-only policy:

o Efficient in terms of generated traffic:

▪ Only lost segments (or segments whose ACK was lost) are retransmitted.

o Drawback:

▪ The timer for the second segment in the queue is not started until the first
segment is acknowledged.

▪ This can cause considerable delays in retransmissions.

• Characteristics of Individual policy:

o Solves the delay problem of the first-only policy.

o Requires a more complex implementation due to maintaining multiple timers.

• Characteristics of Batch policy:

o Reduces the chance of long delays.

o May cause unnecessary retransmissions by retransmitting segments that were not lost.

• Interaction of retransmit and accept policies:

o If the receiver uses an in-order accept policy:

▪ It discards out-of-order segments.

▪ Batch retransmission fits better with this behavior.

o If the receiver uses an in-window accept policy:

▪ It can accept out-of-order segments.

▪ First-only or Individual retransmission fits better with this behavior.


• Special Note:

o The ISO TP4 specification outlines similar retransmission policy options.

o No mandatory retransmission policy is enforced by ISO TP4.

Acknowledge Policy

• When a data segment arrives in sequence at the receiver:

o The receiving TCP entity has two acknowledgment timing options:

▪ Immediate acknowledgment:

▪ Immediately transmit an empty segment containing the appropriate acknowledgment number.

▪ Cumulative acknowledgment:

▪ Record the need for acknowledgment.

▪ Wait until:

▪ An outbound data segment is ready to piggyback the acknowledgment.

▪ To avoid excessive delay:

▪ Set a window timer (refer Table 17.1).

▪ If the timer expires before an acknowledgment is sent:

▪ Send an empty acknowledgment segment.

• Characteristics of Immediate acknowledgment:

o Simple implementation.

o Keeps the sending TCP entity fully informed about received data.

o Reduces unnecessary retransmissions by the sender.

o Drawbacks:

▪ Extra segment transmissions (empty ACK-only segments).

▪ Increases network load.

▪ Sequence of operations causing more overhead:

▪ TCP receives a segment.

▪ TCP sends an immediate ACK.

▪ The received data is delivered to the application.

▪ The receive window expands.

▪ Another TCP segment is sent to inform the sender about the window-size change.
• Characteristics of Cumulative acknowledgment:

o Reduces the number of ACK segments.

o Reduces network overhead.

o Drawbacks:

▪ Requires more processing at the receiving TCP entity.

▪ Makes round-trip time estimation harder for the sending TCP entity.

• Special Note:

o The ISO TP4 specification offers similar options for acknowledgment policy.

o It does not mandate the use of any specific acknowledgment policy.

UDP
1. Connectionless Service:

o UDP provides a connectionless service, meaning it does not establish a connection before
transmitting data.

o It is unreliable, as it does not guarantee delivery, order of delivery, or protection against duplicates.

2. Low Overhead:

o UDP has minimal overhead compared to TCP, making it a preferred choice for applications
where low latency and minimal protocol complexity are desired.

o It is suitable for scenarios where reliability is not a strict requirement, and speed is
prioritized.

3. Use Case in Network Management:

o UDP is often used in network management, as it allows for fast and efficient communication
without the need for connection setup and teardown.

o One example of UDP’s use is in Simple Network Management Protocol (SNMP).

4. Position in the TCP/IP Stack:

o UDP operates on top of the IP layer, providing a mechanism for addressing and delivering
data to specific application ports.

5. UDP Header Structure:

o Source Port: Identifies the sending port.

o Destination Port: Identifies the receiving port.

o Length Field: Specifies the total length of the UDP segment, including both header and data.

o Checksum: A mechanism for error checking. It covers the entire UDP segment, including the
UDP header, data, and a pseudo-header from the IP layer (same algorithm used in TCP and IP
checksums).

6. Checksum Functionality:
o The checksum helps detect errors in the UDP segment.

o If an error is detected, the segment is discarded, but no further action is taken (such as
retransmission or acknowledgment).

o The checksum is optional and, if not used, is set to zero.

o Unlike the IP checksum, which only checks the IP header, the UDP checksum checks the
entire UDP segment (header and data), including a pseudo-header that is added for
checksum calculation.

7. Error Handling:

o If a checksum is used and an error is detected in the UDP segment, the segment is discarded,
but there is no retransmission or error recovery mechanism in UDP. This is in contrast to TCP,
which includes error recovery features like retransmission.
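The 8-octet UDP header described above can be built and decoded with a short sketch using Python's struct module (helper names are illustrative; a checksum of zero means the checksum is unused):

```python
import struct

def udp_header(src_port: int, dst_port: int, payload: bytes,
               checksum: int = 0) -> bytes:
    """8-octet UDP header: source port, destination port,
    length (header + data), and checksum (0 = not used)."""
    return struct.pack("!HHHH", src_port, dst_port, 8 + len(payload), checksum)

def parse_udp_header(datagram: bytes):
    """Return (src_port, dst_port, length, checksum)."""
    return struct.unpack("!HHHH", datagram[:8])
```

Note that the length field counts the header as well as the data, so an empty datagram still reports a length of 8.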
