
CIT– Computer Networks

UNIT – III: TRANSPORT LAYER

Flow Control and Congestion Control: Basics of flow control - Congestion control mechanisms,
Transmission Control Protocol (TCP): TCP features and functionalities, TCP connection
establishment, maintenance, and termination, User Datagram Protocol (UDP): Characteristics and
usage scenarios - Comparison with TCP, Sockets: Overview of sockets in network programming.

INTRODUCTION
 The transport layer is the fourth layer of the OSI model and is the core of the Internet
model.
 It responds to service requests from the session layer and issues service requests to
the network layer.
 The transport layer provides transparent transfer of data between hosts.
 It provides end-to-end control and information transfer with the quality of service
needed by the application program.
 It is the first true end-to-end layer, implemented in all End Systems (ES).

TRANSPORT LAYER FUNCTIONS / SERVICES


 The transport layer is located between the network layer and the application layer.
 The transport layer is responsible for providing services to the application layer; it
receives services from the network layer.
 The services that can be provided by the transport layer are
1. Process-to-Process Communication
2. Addressing: Port Numbers
3. Encapsulation and Decapsulation
4. Multiplexing and Demultiplexing
5. Flow Control
6. Error Control
7. Congestion Control

Process-to-Process Communication

 The Transport Layer is responsible for delivering data to the appropriate application
process on the host computers.
 This involves multiplexing of data from different application processes, i.e. forming
data packets, and adding source and destination port numbers in the header of each
Transport Layer data packet.
 Together with the source and destination IP addresses, the port numbers constitute a
network socket, i.e. an identification address of the process-to-process
communication.

Addressing: Port Numbers


 Ports are the essential ways to address multiple entities in the same location.
 Using port addressing it is possible to use more than one network-based application
at the same time.
 Three types of port numbers are used :
 Well-known ports - These are permanent port numbers. They range from
0 to 1023. These port numbers are used by server processes.
 Registered ports - The ports ranging from 1024 to 49,151 are not assigned or
controlled.
 Ephemeral ports (Dynamic Ports) – These are temporary port numbers. They
range from 49,152 to 65,535. These port numbers are used by client processes.

Encapsulation and Decapsulation


 To send a message from one process to another, the transport-layer protocol
encapsulates and decapsulates messages.
 Encapsulation happens at the sender site. The transport layer receives the data and
adds the transport-layer header.
 Decapsulation happens at the receiver site. When the message arrives at the
destination transport layer, the header is dropped and the transport layer delivers the
message to the process running at the application layer.

Multiplexing and Demultiplexing
 Whenever an entity accepts items from more than one source, this is referred to as
multiplexing (many to one).
 Whenever an entity delivers items to more than one destination, this is referred to as
demultiplexing (one to many).
 The transport layer at the source performs multiplexing.
 The transport layer at the destination performs demultiplexing.
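As a toy illustration, demultiplexing at the destination can be pictured as a lookup from destination port number to the receiving process. The port-to-process table and segment values below are hypothetical, invented only to make the one-to-many delivery concrete:

```python
# Toy demultiplexer: deliver each arriving segment to the process
# registered on its destination port (process names are hypothetical).
handlers = {13: "daytime_server", 53: "dns_server"}

def demultiplex(segments):
    delivered = []
    for dst_port, payload in segments:
        process = handlers.get(dst_port)
        if process is not None:          # drop segments for unknown ports
            delivered.append((process, payload))
    return delivered

print(demultiplex([(13, b"time?"), (53, b"query"), (9999, b"junk")]))
# [('daytime_server', b'time?'), ('dns_server', b'query')]
```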

Flow Control

 Flow Control is the process of managing the rate of data transmission between two
nodes to prevent a fast sender from overwhelming a slow receiver.
 It provides a mechanism for the receiver to control the transmission speed, so that the
receiving node is not overwhelmed with data from transmitting node.

Error Control

 Error control at the transport layer is responsible for


1. Detecting and discarding corrupted packets.
2. Keeping track of lost and discarded packets and resending them.
3. Recognizing duplicate packets and discarding them.
4. Buffering out-of-order packets until the missing packets arrive.
 Error Control involves Error Detection and Error Correction

Congestion Control

 Congestion in a network may occur if the load on the network (the number of
packets sent to the network) is greater than the capacity of the network (the number
of packets a network can handle).
 Congestion control refers to the mechanisms and techniques that keep the load
below the capacity, by either preventing congestion before it happens or removing
congestion after it has happened.
 Congestion control mechanisms are divided into two categories,
1. Open loop - prevent the congestion before it happens.
2. Closed loop - remove the congestion after it happens.

PORT NUMBERS
 A transport-layer protocol usually has several responsibilities.
 One is to create a process-to-process communication.
 Processes are programs that run on hosts; a process could be either a server or a client.
 A process on the local host, called a client, needs services from a process usually
on the remote host, called a server.
 Processes are assigned a unique 16-bit port number on that host.
 Port numbers provide end-to-end addresses at the transport layer
 They also provide multiplexing and demultiplexing at this layer.

 The port numbers are integers between 0 and 65,535.


ICANN (Internet Corporation for Assigned Names and Numbers) has divided the port
numbers into three ranges:
 Well-known ports
 Registered
 Ephemeral ports (Dynamic Ports)



WELL-KNOWN PORTS
 These are permanent port numbers used by servers.
 They range from 0 to 1023.
 These port numbers cannot be chosen randomly.
 These port numbers are universal port numbers for servers.
 Every client process knows the well-known port number of the corresponding server
process.
 For example, while the daytime client process, a well-known client program, can
use an ephemeral (temporary) port number, 52,000, to identify itself, the daytime
server process must use the well-known (permanent) port number 13.

EPHEMERAL PORTS (DYNAMIC PORTS)

 The client program defines itself with a port number, called the ephemeral port
number.
 The word ephemeral means “short-lived” and is used because the life of a client is
normally short.
 An ephemeral port number is recommended to be greater than 1023.
 These port numbers range from 49,152 to 65,535.
 They are neither controlled nor registered. They can be used as temporary or private
port numbers.

REGISTERED PORTS
 The ports ranging from 1024 to 49,151 are not assigned or controlled.
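The three ranges above can be captured in a small helper function. This is only an illustrative sketch of the boundaries listed in this section, not part of any standard API:

```python
# Classify a 16-bit port number into the three ranges described above.
def port_category(port):
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit integers")
    if port <= 1023:
        return "well-known"      # permanent, used by server processes
    if port <= 49151:
        return "registered"      # not assigned or controlled
    return "ephemeral"           # temporary, used by client processes

print(port_category(13))     # well-known (e.g. the daytime server)
print(port_category(52000))  # ephemeral (a typical client port)
```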

NETWORK FLOW CONTROL

Flow control is a technique used to regulate data transfer between computers or other nodes
in a network. Flow control ensures that the transmitting device does not send more data to
the receiving device than it can handle. If a device receives more data than it can process or
store in memory at any given time, the excess data is lost and must be retransmitted.

Flow control aims to throttle the amount of data transmitted to avoid overwhelming the
receiver's resources. This is accomplished through a series of messages the receiver sends to
the sender to acknowledge if frames have been received. The sender uses these messages to
determine when to transmit more data. If the sender does not receive an acknowledgment
(ACK), it concludes that there has been a problem with the transmission and retransmits
the data.

Flow control is implemented differently, depending on how the sender and receiver handle
messages and track data frames. There are two basic approaches to flow control: stop-and-
wait and sliding window. The stop-and-wait approach is the simplest to implement, but it is
not as efficient as sliding window, which delivers better network performance and utilizes
network resources more effectively.

Stop-and-wait flow control

In the stop-and-wait approach, the sender segments the data into frames and then transmits
one frame at a time to the receiver, which responds to each frame with an ACK message.
This process occurs through the following steps:

1. The sender transmits a data frame to the receiver.


2. The sender waits for the receiver to respond.
3. Upon receiving the frame, the receiver transmits an ACK to the sender.
4. Upon receiving the ACK, the sender sends the next frame to the receiver and
waits for the next ACK. If the sender does not receive an ACK within a
defined time limit, known as a timeout, the sender retransmits the same frame.

5. The process continues until the sender has finished transmitting all the data to
the receiver.

How stop-and-wait flow control works

Above Figure illustrates how this exchange works. In this case, the sender starts by
transmitting Frame 0 and then waiting for the ACK. When Frame 0 reaches its
destination, the receiver sends ACK 0 to the sender.

After receiving ACK 0, the sender transmits Frame 1 and waits for ACK 1. When that
arrives, the sender transmits Frame 2 and waits again. This time, however, the sender
does not receive ACK 2 before the timeout occurs, so it retransmits Frame 2. The frame
arrives at its destination, so the receiver sends ACK 2. When the sender receives ACK 2,
it transmits Frame 3, which is also acknowledged by the receiver.

Stop and wait belongs to a category of error control mechanisms called automatic repeat
requests (ARQs). These ARQs rely on ACKs to determine whether a data
transmission was successful or if retransmission is needed. Other ARQs include Go-
Back-N ARQ and Selective Repeat ARQ, which use the sliding window protocol.

Stop-and-wait is simpler to implement than sliding windows. It is also


fairly reliable because the sender receives an ACK for each frame successfully
transmitted to the receiver. These qualities, however, also make data communications
much slower, which can be exacerbated by long distances and heavy traffic. The stop-
and-wait approach also tends to underutilize network resources.
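The retransmit-on-timeout behaviour described above can be sketched as a toy simulation. The lossy channel, the loss probability, and the frame representation here are all invented for illustration; a real implementation would use timers and a network socket:

```python
import random

# Toy stop-and-wait ARQ over a lossy channel. A frame (or its ACK) is
# "lost" with probability loss_rate, which stands in for a timeout; the
# sender then retransmits the same frame before moving on.
def send_stop_and_wait(frames, loss_rate=0.3):
    delivered = []
    for frame in frames:
        while True:
            if random.random() < loss_rate:
                continue              # timeout: retransmit the same frame
            delivered.append(frame)   # receiver got the frame and ACKed it
            break                     # sender proceeds to the next frame
    return delivered

print(send_stop_and_wait([0, 1, 2, 3]))  # [0, 1, 2, 3], in order, despite losses
```

Note that the receiver always ends up with every frame, in order: reliability comes from retransmission, while the cost of each loss is one extra round trip.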

Sliding window flow control

The sliding window approach addresses many of the issues with stop and wait because
the sender can transmit multiple frames simultaneously without waiting for an ACK for
each frame. However, this approach also comes with additional complexity.

When first connecting, the sender and receiver establish a window determining the
maximum number of frames the sender can transmit. During the transmission, the sender
and receiver must carefully track which frames have been sent and received to ensure that
all the data reaches its destination and is reassembled in the correct order.

Sliding window flow control can be implemented using one of two approaches: Go-
Back-N and Selective Repeat. With the Go-Back-N approach, the sender can send one or
more frames but never more than the window allows. As the receiver acknowledges the
frames, the sender moves to the next batch, or window, of frames that can now be sent. If
there is a problem with a transmitted frame, the sender retransmits all the frames in the
current window.

Figure 2 shows an example of how Go-Back-N works. In this case, the window consists
of only three frames—initially, Frames 0 through 2. The sender begins by transmitting
Frame 0 to the receiver. Upon receiving Frame 0, the receiver sends an ACK that
specifies the next frame to send (Frame 1) rather than the frame that has just been
received.

How the Go-Back-N type of sliding window flow control works

When the sender receives ACK 1, it moves the window by one position, dropping Frame
0 and adding Frame 3. The sender then transmits Frames 1, 2, and 3, which represent the
window's entire contents. (The sender did not have to transmit Frame 0 on its own first;
it could have sent the full window at once.) Figure 2 illustrates how the process works.

Upon receiving the three frames, the receiver sends a cumulative ACK that specifies the
next frame to send: Frame 4. The ACK indicates the receiver has all the preceding frames
(0 through 3).

When the sender receives ACK 4, it adjusts the window to include Frames 4 through 6
and then transmits those frames. This time, however, Frame 4 gets lost in the
transmission, while Frames 5 and 6 reach their destination. Upon receiving Frame 5, the
receiver detects that Frame 4 is missing and sends a negative acknowledgment (NAK)
that specifies Frame 4. At the same time, the receiver discards Frames 5 and 6.
When the sender receives the NAK, it retransmits Frames 4 through 6 and waits for the
ACK. The frames arrive with no errors the second time, so the receiver returns an ACK
indicating that the sender can now transmit Frame 7. The sender adjusts the window
accordingly and transmits the next set of frames, starting with Frame 7.

The Selective Repeat approach is similar to Go-Back-N. The primary difference is that
Selective Repeat does not retransmit the entire window if there is an error, only the
individual frame in dispute. Selective Repeat does not support cumulative ACK messages
like Go-Back-N, so each ACK is specific to the frame just received, enabling the sender
to identify the precise frame that needs to be retransmitted.

Figure 3 illustrates an example of the Selective Repeat process. After transmitting Frame
0, the sender receives an ACK, transmits Frames 1 through 3 and receives an ACK for
each one. The sender then transmits Frames 4 through 6. When Frames 5 and 6 arrive at
the receiver but not Frame 4, the receiver sends ACK 5 and ACK 6, along with NAK 4.
The sender responds to the NAK by retransmitting Frame 4. Upon receiving Frame 4, the
receiver sends an ACK. The sender then adjusts the window and transmits the next three
frames, starting with Frame 7.

How Selective Repeat Sliding Window Flow Control Works
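For contrast, a Selective Repeat sketch under the same invented loss model retransmits only the frame in dispute:

```python
# Selective Repeat sketch: ACKs are per-frame, so only the individual
# lost frame is retransmitted; frames received out of order are buffered
# by the receiver rather than discarded.
def selective_repeat(num_frames, lost_once=frozenset()):
    sent_log, nak = [], set()
    for seq in range(num_frames):
        sent_log.append(seq)
        if seq in lost_once:
            nak.add(seq)              # receiver NAKs just this frame
    for seq in sorted(nak):
        sent_log.append(seq)          # retransmit only the frame in dispute
    return sent_log

# Frame 4 is lost, but Frames 5 and 6 are buffered, so only Frame 4 is resent:
print(selective_repeat(7, frozenset({4})))  # [0, 1, 2, 3, 4, 5, 6, 4]
```

Comparing the two transmission logs makes the bandwidth trade-off concrete: for the same single loss, Go-Back-N sends ten frames where Selective Repeat sends eight.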



Both Selective Repeat and Go-Back-N are more efficient than the stop-and-wait
approach, but there are important differences between the two sliding window
approaches. The Go-Back-N approach can consume more bandwidth because all the
frames in a window are retransmitted if an error occurs. However, it is not as complex to
implement as Selective Repeat and does not require the same amount of system
resources. Selective Repeat has greater management overhead because the frames must
be tracked and sorted throughout the data transmission.

NETWORK CONGESTION CONTROL

Congestion control is a crucial concept in computer networks. It refers to methods that
prevent network overload and ensure smooth data flow. If too much data is sent through
the network at once, it can cause delays and data loss. Congestion control techniques help
manage the traffic so all users can enjoy a stable and efficient network connection. These
techniques are essential for maintaining the performance and reliability of modern
networks.
What is Congestion?
Congestion in a computer network happens when too much data is sent simultaneously,
causing the network to slow down. Like traffic congestion on a busy road, network
congestion leads to delays and sometimes data loss. When the network can’t handle all
the incoming data, it gets “clogged,” making it difficult for information to travel
smoothly from one place to another.
Benefits of Congestion Control on a Computer Network
 Improved Network Stability: Congestion control helps keep the network stable by
preventing overload. It manages the data flow so the network doesn’t crash or fail
due to too much traffic.
 Reduced Latency and Packet Loss: Without congestion control, data
transmission can slow down, causing delays and data loss. Congestion
control helps manage traffic better, reducing these delays and ensuring fewer data
packets are lost, making data transfer faster and the network more responsive.
 Enhanced Throughput: The network can use its resources more effectively by
avoiding congestion. This means more data can be sent in a shorter time, which is
important for handling large amounts of data and supporting high-speed
applications.
 Fairness in Resource Allocation: Congestion control ensures that users share
network resources fairly. No single user or application can take up all
the bandwidth, allowing everyone a fair share.
 Better User Experience: When data flows smoothly and quickly, users have a
better experience. Websites, online services, and applications work more reliably
and without annoying delays.
 Mitigation of Network Congestion Collapse: Without congestion control, a sudden
spike in data traffic can overwhelm the network, causing severe congestion and
making it almost unusable. Congestion control helps prevent this by managing
traffic efficiently and avoiding such critical breakdowns.
Congestion Control Algorithm
 Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
 Congestion-avoidance algorithms (CAA) are implemented at the TCP layer to
avoid congestive collapse in a network.
 There are two congestion control algorithms, which are as follows:
Leaky Bucket Algorithm
 The leaky bucket algorithm finds its use in network traffic shaping or rate-
limiting.
 A leaky bucket implementation and a token bucket implementation are
predominantly used for traffic-shaping algorithms.
 This algorithm controls the rate at which traffic is sent to the network and shapes
the burst traffic into a steady traffic stream.
 The disadvantage of the leaky-bucket algorithm is the inefficient use of available
network resources: large network resources, such as bandwidth, are not being used
effectively.
For example, Imagine a bucket with a small hole in the bottom. No matter at what rate
water enters the bucket, the outflow is at a constant rate. When the bucket is full of water,
additional water spills over the sides and is lost.


Similarly, each network interface contains a leaky bucket, and the following steps are
involved in the leaky bucket algorithm:
 When a host wants to send a packet, the packet is thrown into the bucket.
 The bucket leaks at a constant rate, meaning the network interface transmits
packets constantly.
 Bursty traffic is converted to uniform traffic by the leaky bucket.
 In practice, the bucket is a finite queue that outputs at a finite rate.
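The steps above can be sketched as a bounded queue drained at a constant rate. The tick-based timing, bucket capacity, and packet counts below are invented for illustration:

```python
from collections import deque

# Leaky-bucket sketch: bursty arrivals fill a bounded queue (the bucket);
# the interface drains it at a constant leak_rate per tick, and packets
# arriving at a full bucket spill over and are lost.
def leaky_bucket(arrivals_per_tick, capacity=4, leak_rate=1):
    bucket = deque()
    transmitted, dropped = [], 0
    for burst in arrivals_per_tick:
        for pkt in burst:
            if len(bucket) < capacity:
                bucket.append(pkt)        # packet fits in the bucket
            else:
                dropped += 1              # bucket full: packet is lost
        for _ in range(min(leak_rate, len(bucket))):
            transmitted.append(bucket.popleft())  # constant outflow
    return transmitted, dropped

# A burst of six packets in one tick becomes a steady one-per-tick stream:
print(leaky_bucket([[1, 2, 3, 4, 5, 6], [], [], []]))  # ([1, 2, 3, 4], 2)
```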
Token Bucket Algorithm
 The leaky bucket algorithm has a rigid output design at an average rate,
independent of the bursty traffic.
 In some applications, when large bursts arrive, the output should be allowed to
speed up. This calls for a more flexible algorithm that never loses information.
Therefore, the token bucket algorithm finds its use in network traffic shaping or
rate-limiting.
 It is a control algorithm that indicates when traffic should be sent, based on the
presence of tokens in the bucket.
 The bucket contains tokens. Each token permits the transmission of a packet of a
predetermined size; a token is removed from the bucket for every packet sent.
 As long as tokens are present in the bucket, a flow is allowed to transmit traffic.
 No token means no flow can send its packets. Hence, a flow can transfer traffic up
to its peak burst rate only if adequate tokens are present in the bucket.
Need for Token Bucket Algorithm

The leaky bucket algorithm enforces the output pattern at the average rate, no matter how
bursty the traffic is. So, to deal with bursty traffic, we need a flexible algorithm that does
not lose data. One such algorithm is the token bucket algorithm.
The steps of this algorithm can be described as follows:
 At regular intervals, tokens are thrown into the bucket.
 The bucket has a maximum capacity.
 If there is a ready packet, a token is removed from the bucket, and the packet is
sent.
 If there is no token in the bucket, the packet cannot be sent.
Let’s understand with an example. In figure (A), we see a bucket holding three tokens,
with five packets waiting to be transmitted. For a packet to be transmitted, it must capture
and destroy one token. In figure (B), we see that three of the five packets have gotten
through, but the other two are stuck waiting for more tokens to be generated.
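The two figures can be mimicked in a few lines; the packet names below are hypothetical:

```python
# One step of a token bucket: each departing packet captures and destroys
# one token, so with three tokens only three of five waiting packets pass.
def token_bucket_step(tokens, waiting):
    sent = min(tokens, len(waiting))
    return waiting[:sent], waiting[sent:], tokens - sent

passed, stuck, tokens_left = token_bucket_step(3, ["p1", "p2", "p3", "p4", "p5"])
print(passed)  # ['p1', 'p2', 'p3'] -- figure (B): three packets get through
print(stuck)   # ['p4', 'p5']       -- still waiting for new tokens
```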
Token Bucket vs Leaky Bucket
The leaky bucket algorithm controls the rate at which packets are introduced into the
network, but it is very conservative. The token bucket algorithm introduces some
flexibility: it generates tokens at each tick (up to a certain limit), and an incoming packet
must capture and destroy a token before it can be transmitted. Hence, bursty packets can
be transmitted back-to-back as long as tokens are available, which introduces flexibility
into the system.
Formula: M × S = C + ρ × S, which gives S = C / (M − ρ)
where
S – burst time (seconds)
M – maximum output rate
ρ – token arrival rate
C – capacity of the token bucket in bytes
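Plugging hypothetical numbers into the burst-time relation S = C / (M − ρ) (all values below are invented for illustration):

```python
# Burst time S = C / (M - rho) for a token bucket.
C = 500_000        # bucket capacity: 500 KB
rho = 2_000_000    # token arrival rate: 2 MB/s
M = 12_000_000     # maximum output rate: 12 MB/s
S = C / (M - rho)  # how long the host can transmit at the full rate M
print(S)  # 0.05 -> the burst can last 50 ms
```

Intuitively, during the burst the bucket supplies C bytes of saved-up credit plus ρ·S bytes of freshly arriving tokens, which together sustain the output rate M for S seconds.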




Advantages
 Stable Network Operation: Congestion control ensures that networks remain
stable and operational by preventing them from being overloaded with too much data traffic.
 Reduced Delays: It minimizes delays in data transmission by managing traffic
flow effectively, ensuring that data packets reach their destinations promptly.
 Less Data Loss: By regulating the amount of data in the network at any given
time, congestion control reduces the likelihood of data packets being lost or
discarded.
 Optimal Resource Utilization: It helps networks use their resources efficiently,
allowing for better throughput and ensuring users can access data and services
without interruptions.
 Scalability: Congestion control mechanisms are scalable, allowing networks to
handle increasing data traffic volumes as they grow without compromising
performance.
 Adaptability: Modern congestion control algorithms can adapt to changing
network conditions, ensuring optimal performance even in dynamic and
unpredictable environments.
Disadvantages
 Complexity: Implementing congestion control algorithms can add complexity to
network management, requiring sophisticated systems and configurations.
 Overhead: Some congestion control techniques introduce additional overhead,
which can consume network resources and affect overall performance.

 Algorithm Sensitivity: The effectiveness of congestion control algorithms can be
sensitive to network conditions and configurations, requiring fine-tuning for
optimal performance.
 Resource Allocation Issues: Fairness in resource allocation, while a benefit, can
also pose challenges when trying to prioritize critical applications over less
essential ones.
 Dependency on Network Infrastructure: Congestion control relies on the
underlying network infrastructure and may be less effective in environments with
outdated or unreliable equipment.


Difference between flow control and congestion control:

1. Flow control: Traffic from sender to receiver is controlled to avoid overwhelming the
slow receiver.
   Congestion control: Traffic entering the network from a sender is controlled by
reducing the rate of packets; here, the sender has to control/modulate his rate to achieve
optimal network utilization.

2. Flow control: Typically used in the data link layer.
   Congestion control: Applied in the network and transport layers.

3. Flow control: The receiver is prevented from being overwhelmed.
   Congestion control: The network is prevented from congestion.

4. Flow control: The sender needs to take measures to avoid overwhelming the receiver,
depending on feedback from the receiver and also in the absence of any input.
   Congestion control: Many algorithms designed for the transport layer/network layer
define how endpoints should behave to avoid congestion.

5. Flow control: Types of flow control are
      i. Stop and Wait – the sender expects an ACK from the receiver for every frame
transmitted.
      ii. Sliding Window – an ACK is needed only after the sender has transmitted data
until the window, which is allocated initially by the receiver, is full.
   Congestion control: Mechanisms designed to prevent network congestion are
      i. Network queue management
      ii. Explicit Congestion Notification (ECN)
      iii. TCP congestion control

TRANSPORT LAYER PROTOCOLS

 Three protocols are associated with the Transport layer.


 They are
(1) UDP - User Datagram Protocol
(2) TCP - Transmission Control Protocol
(3) SCTP - Stream Control Transmission Protocol

 Each protocol provides a different type of service and should be used appropriately.

UDP - UDP is an unreliable connectionless transport-layer protocol used for its simplicity
and efficiency in applications where error control can be provided by the application-layer
process.

TCP - TCP is a reliable connection-oriented protocol that can be used in any application
where reliability is important.

SCTP - SCTP is a new transport-layer protocol designed to combine some features of UDP
and TCP in an effort to create a better protocol for multimedia communication.

USER DATAGRAM PROTOCOL (UDP)


 User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol.
 UDP adds process-to-process communication to best-effort service provided by IP.
 UDP is a very simple protocol using a minimum of overhead.
 UDP is a simple demultiplexer, which allows multiple processes on each host to
communicate.
 UDP does not provide flow control, reliable delivery, or ordered delivery.
 UDP can be used to send small messages where reliability is not expected.
 Sending a small message using UDP takes much less interaction between the sender
and receiver.
 UDP allows processes to indirectly identify each other using an abstract locator called
a port or mailbox.

UDP PORTS
 Processes (server/client) are identified by an abstract locator known as a port.
 A server accepts messages at a well-known port.
 Some well-known UDP ports are 7–Echo, 53–DNS, 111–RPC, 161–SNMP, etc.
 The <port, host> pair is used as the key for demultiplexing.
 Each port is implemented as a message queue.
 When a message arrives, UDP appends it to the end of the queue.
 When the queue is full, the message is discarded.
 When a message is read, it is removed from the queue.
 When an application process wants to receive a message, one is removed from the
front of the queue.
 If the queue is empty, the process blocks until a message becomes available.


UDP DATAGRAM (PACKET) FORMAT


 UDP packets are known as user datagrams.
 These user datagrams have a fixed-size header of 8 bytes made of four fields, each
of 2 bytes (16 bits).

Source Port Number


 The port number used by the process on the source host; it is 16 bits long.
 If the source host is the client (sending a request), then the port number is a temporary
one requested by the process and chosen by UDP.
 If the source is the server (sending a response), then it is a well-known port number.

Destination Port Number


 The port number used by the process on the destination host; it is 16 bits long.
 If the destination host is the server (a client sending a request), then the
port number is a well-known port number.
 If the destination host is the client (a server sending a response), then the port number
is a temporary one copied by the server from the request packet.

18
CIT– Computer Networks
Length
 This field denotes the total length of the UDP packet (header plus data).
 The total length of any UDP datagram can be up to 65,535 bytes; since the header
alone occupies 8 bytes, the minimum length is 8 bytes.

Checksum
 UDP computes its checksum over the UDP header, the contents of the message
body, and something called the pseudoheader.
 The pseudoheader consists of three fields from the IP header—protocol number,
source IP address, destination IP address plus the UDP length field.
Data
 The Data field defines the actual payload to be transmitted.
 Its size is variable.
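The four fixed 16-bit fields can be packed with Python's struct module. The port numbers below reuse the earlier daytime example (ephemeral 52,000 to well-known 13), and the checksum is left as 0 here, i.e. not computed:

```python
import struct

# Build an 8-byte UDP header: source port, destination port, total
# length (header + data), checksum -- four 16-bit big-endian fields.
payload = b"hello"
src_port, dst_port = 52000, 13        # ephemeral client -> daytime server
length = 8 + len(payload)             # header is always 8 bytes
header = struct.pack("!HHHH", src_port, dst_port, length, 0)
datagram = header + payload
print(len(datagram))  # 13 -- matches the Length field
```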

UDP SERVICES
Process-to-Process Communication
 UDP provides process-to-process communication using socket addresses, a
combination of IP addresses and port numbers.

Connectionless Services
 UDP provides a connectionless service.
 There is no connection establishment and no connection termination .
 Each user datagram sent by UDP is an independent datagram.
 There is no relationship between the different user datagrams even if they are
coming from the same source process and going to the same destination program.
 The user datagrams are not numbered.
 Each user datagram can travel on a different path.

Flow Control
 UDP is a very simple protocol.
 There is no flow control, and hence no window mechanism.
 The receiver may overflow with incoming messages.
 The lack of flow control means that the process using UDP should provide for this
service, if needed.

Error Control

 There is no error control mechanism in UDP except for the checksum.


 This means that the sender does not know if a message has been lost or duplicated.
 When the receiver detects an error through the checksum, the user datagram is
silently discarded.

 The lack of error control means that the process using UDP should provide for this
service, if needed.

Checksum
 UDP checksum calculation includes three sections: a pseudoheader, the UDP header,
and the data coming from the application layer.
 The pseudoheader is the part of the header in which the user datagram is to be
encapsulated with some fields filled with 0s.

Optional Inclusion of Checksum


 The sender of a UDP packet can choose not to calculate the checksum.
 In this case, the checksum field is filled with all 0s before being sent.
 In the situation where the sender decides to calculate the checksum,
but it happens that the result is all 0s, the checksum is changed to all 1s
before the packet is sent.
 In other words, the sender complements the sum two times.
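The pseudoheader-plus-complement procedure can be sketched as follows. The IP addresses and ports are invented, and this is a simplified illustration of the one's-complement arithmetic rather than a full implementation:

```python
import struct

# 16-bit one's-complement sum with end-around carry, over data padded
# to an even number of bytes.
def ones_complement_sum16(data):
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return total

# UDP checksum: complement of the sum over the pseudoheader (source IP,
# destination IP, zero byte, protocol number 17, UDP length), the UDP
# header (with its checksum field set to 0), and the data.
def udp_checksum(src_ip, dst_ip, udp_segment):
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return (~ones_complement_sum16(pseudo + udp_segment)) & 0xFFFF

seg = struct.pack("!HHHH", 52000, 13, 8, 0)         # header-only datagram
csum = udp_checksum(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", seg)
filled = seg[:6] + struct.pack("!H", csum)          # insert the checksum
# Receivers verify by re-checksumming; a valid datagram yields 0:
print(udp_checksum(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", filled))  # 0
```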

Congestion Control
 Since UDP is a connectionless protocol, it does not provide congestion control.
 UDP assumes that the packets sent are small and sporadic (occurring occasionally or
at irregular intervals) and cannot create congestion in the network.
 This assumption may not hold when UDP is used for interactive real-time transfer of
audio and video.

Encapsulation and Decapsulation


 To send a message from one process to another, the UDP protocol encapsulates and
decapsulates messages.

Queuing
 In UDP, queues are associated with ports.
 At the client site, when a process starts, it requests a port number from the operating
system.
 Some implementations create both an incoming and an outgoing queue associated
with each process.
 Other implementations create only an incoming queue associated with each process.

Multiplexing and Demultiplexing


 In a host running a transport protocol suite, there is only one UDP but possibly
several processes that may want to use the services of UDP.
 To handle this situation, UDP multiplexes and demultiplexes.


APPLICATIONS OF UDP
 UDP is used for management processes such as SNMP.
 UDP is used for route updating protocols such as RIP.
 UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software
 UDP is suitable for a process with internal flow and error control mechanisms such
as Trivial File Transfer Protocol (TFTP).
 UDP is suitable for a process that requires simple request-response communication
with little concern for flow and error control.
 UDP is normally used for interactive real-time applications that cannot tolerate
uneven delay between sections of a received message.

TRANSMISSION CONTROL PROTOCOL (TCP)

 TCP is a reliable, connection-oriented, byte-stream protocol.


 TCP guarantees the reliable, in-order delivery of a stream of bytes. It is a full-duplex
protocol, meaning that each TCP connection supports a pair of byte streams, one
flowing in each direction.
 TCP includes a flow-control mechanism for each of these byte streams that allows
the receiver to limit how much data the sender can transmit at a given time.
 TCP supports a demultiplexing mechanism that allows multiple application programs
on any given host to simultaneously carry on a conversation with their peers.
 TCP also implements a congestion-control mechanism, whose idea is to prevent the
sender from overloading the network.
 Flow control is an end-to-end issue, whereas congestion control is concerned with
how hosts and the network interact.

TCP SERVICES
Process-to-Process Communication
 TCP provides process-to-process communication using port numbers.
Stream Delivery Service
 TCP is a stream-oriented protocol.
 TCP allows the sending process to deliver data as a stream of bytes and allows the
receiving process to obtain data as a stream of bytes.
 TCP creates an environment in which the two processes seem to be connected by an
imaginary “tube” that carries their bytes across the Internet.
 The sending process produces (writes to) the stream and the receiving process
consumes (reads from) it.


Full-Duplex Communication
 TCP offers full-duplex service, where data can flow in both directions at the same
time.
 Each TCP endpoint then has its own sending and receiving buffer, and segments
move in both directions.

Multiplexing and Demultiplexing


TCP performs multiplexing at the sender and demultiplexing at the receiver.

Connection-Oriented Service
 TCP is a connection-oriented protocol.
 A connection needs to be established for each pair of processes.
 When a process at site A wants to send to and receive data from another
process at site B, the following three phases occur:
1. The two TCP’s establish a logical connection between them.
2. Data are exchanged in both directions.
3. The connection is terminated.

Reliable Service
 TCP is a reliable transport protocol.
 It uses an acknowledgment mechanism to check the safe and sound arrival of data.

TCP SEGMENT
 A packet in TCP is called a segment.
 Data unit exchanged between TCP peers are called segments.
 A TCP segment encapsulates the data received from the application layer.
 The TCP segment is encapsulated in an IP datagram, which in turn is encapsulated in
a frame at the data-link layer.


 TCP is a byte-oriented protocol, which means that the sender writes bytes into a TCP
connection and the receiver reads bytes out of the TCP connection.
 TCP does not, itself, transmit individual bytes over the Internet.
 TCP on the source host buffers enough bytes from the sending process to fill a
reasonably sized packet and then sends this packet to its peer on the destination host.
 TCP on the destination host then empties the contents of the packet into a receive
buffer, and the receiving process reads from this buffer at its leisure.
 TCP connection supports byte streams flowing in both directions.
 The packets exchanged between TCP peers are called segments, since each one
carries a segment of the byte stream.

TCP PACKET FORMAT


 Each TCP segment contains the header plus the data.
 The segment consists of a header of 20 to 60 bytes, followed by data from the
application program.
 The header is 20 bytes if there are no options and up to 60 bytes if it contains
options.

SrcPort and DstPort ― port numbers of the source and destination processes.
SequenceNum ― sequence number of the first byte of data carried in the segment.
Acknowledgment ― sequence number of the next byte the receiver expects.
HdrLen ― length of the TCP header in 4-byte words.
Flags ― contains six control bits known as flags.
o URG — segment contains urgent data.
o ACK — value of acknowledgment field is valid.
o PUSH — sender has invoked the push operation.
o RESET — receiver wants to abort the connection.
o SYN — synchronize sequence numbers during connection establishment.
o FIN — terminates the TCP connection.


Advertised Window―defines receiver’s window size and acts as flow control.


Checksum―It is computed over TCP header, Data, and pseudo header containing IP fields
(Length, SourceAddr & DestinationAddr).
UrgPtr ― used when the segment contains urgent data. It defines a value that must be
added to the sequence number to locate the urgent data in the segment.
Options - There can be up to 40 bytes of optional information in the TCP header.
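The layout of the fixed 20-byte header can be illustrated by decoding it with Python's struct module. This is a sketch: option parsing is omitted, and the field names follow the descriptions above.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Decode the fixed 20-byte TCP header (options, if present, follow it)."""
    (src_port, dst_port, seq, ack,
     off_flags, window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    hdr_len = (off_flags >> 12) * 4          # HdrLen counts 4-byte words
    flags = off_flags & 0x3F                 # low 6 bits: URG|ACK|PSH|RST|SYN|FIN
    return {
        "SrcPort": src_port, "DstPort": dst_port,
        "SequenceNum": seq, "Acknowledgment": ack,
        "HdrLen": hdr_len,
        "SYN": bool(flags & 0x02), "ACK": bool(flags & 0x10), "FIN": bool(flags & 0x01),
        "AdvertisedWindow": window, "Checksum": checksum, "UrgPtr": urg_ptr,
    }
```

Because HdrLen is stored in 4-byte words in a 4-bit field, the header can be at most 15 × 4 = 60 bytes, which matches the 20-to-60-byte range above.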

TCP CONNECTION MANAGEMENT


 TCP is connection-oriented.
 A connection-oriented transport protocol establishes a logical path between the
source and destination.
 All of the segments belonging to a message are then sent over this logical path.
 In TCP, connection-oriented transmission requires three phases:
Connection Establishment, Data Transfer and Connection Termination.

Connection Establishment
 While opening a TCP connection, the two nodes (client and server) want to agree on
a set of parameters.
 The parameters are the starting sequence numbers to be used for their respective
byte streams.
 Connection establishment in TCP is a three-way handshaking.

1. Client sends a SYN segment to the server containing its initial sequence number (Flags
= SYN, Sequence Num = x)
2. Server responds with a segment that acknowledges client’s segment and specifies its
initial sequence number (Flags = SYN + ACK, ACK = x + 1 Sequence Num = y).
3. Finally, client responds with a segment that acknowledges server’s sequence number
(Flags = ACK, ACK = y + 1).

 The reason each side acknowledges a sequence number one larger than the one
sent is that the Acknowledgment field actually identifies the "next sequence number
expected."
 A timer is scheduled for each of the first two segments, and if the expected
response is not received, the segment is retransmitted.
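The exchange of sequence and acknowledgment numbers in the three steps above can be sketched as follows. Here x and y stand for the arbitrary initial sequence numbers, and segments are modeled as plain (flags, seq, ack) tuples; this is illustrative, not an implementation.

```python
def three_way_handshake(x: int, y: int):
    """Return the three handshake segments as (flags, seq, ack) tuples."""
    syn     = ("SYN", x, None)          # client -> server: no ACK yet
    syn_ack = ("SYN+ACK", y, x + 1)     # server -> client: ack = next byte expected
    ack     = ("ACK", x + 1, y + 1)     # client -> server: acknowledges server's SYN
    return [syn, syn_ack, ack]
```

Note how each ACK value is one larger than the sequence number it acknowledges, because the Acknowledgment field names the next sequence number expected.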

Data Transfer
 After connection is established, bidirectional data transfer can take place.
 The client and server can send data and acknowledgments in both directions.
 The data traveling in the same direction as an acknowledgment are carried on the
same segment.
 The acknowledgment is piggybacked with the data.

Connection Termination
 Connection termination or teardown can be done in two ways :
Three-way Close and Half-Close

Three-way Close—Both client and server close simultaneously.

 Client sends a FIN segment.


 The FIN segment can include last
chunk of data.
 Server responds with FIN + ACK
segment to inform its closing.
 Finally, client sends an ACK
segment

Half-Close—Client stops sending but receives data.


 Client half-closes the
connection by sending a FIN
segment.
 Server sends an ACK segment.
 Data transfer from client to the
server stops.
 After sending all data, server sends
FIN segment to client, which is
acknowledged by the client.


STATE TRANSITION DIAGRAM


 To keep track of all the different events happening during connection establishment,
connection termination, and data transfer, TCP is specified as a finite state machine
(FSM).
 The transition from one state to another is shown using directed lines.
 States involved in opening and closing a connection are shown above and below the
ESTABLISHED state, respectively.
 States Involved in TCP :


Opening a TCP Connection


1. Server invokes a passive open on TCP, which causes TCP to move to LISTEN state
2. Client does an active open, which causes its TCP to send a SYN segment to the server
and move to SYN_SENT state.
3. When SYN segment arrives at the server, it moves to SYN_RCVD state and responds
with a SYN + ACK segment.
4. Arrival of SYN + ACK segment causes the client to move to ESTABLISHED state
and sends an ACK to the server.
5. When ACK arrives, the server finally moves to ESTABLISHED state.

Closing a TCP Connection


 Client and server can close their halves of the connection independently or
simultaneously.
Transitions from ESTABLISHED to CLOSED state are:
One side closes:
ESTABLISHED → FIN_WAIT_1 → FIN_WAIT_2 → TIME_WAIT → CLOSED
Other side closes:
ESTABLISHED → CLOSE_WAIT → LAST_ACK → CLOSED
Simultaneous close:
ESTABLISHED → FIN_WAIT_1 → CLOSING → TIME_WAIT → CLOSED
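The three closing paths listed above can be modeled as a simple lookup table. This is a sketch: the event names are illustrative labels, not TCP's actual input alphabet, and ancillary details (ACKs sent on each transition) are omitted.

```python
# One side's transitions from ESTABLISHED to CLOSED, keyed by (state, event)
CLOSE_TRANSITIONS = {
    ("ESTABLISHED", "app close / send FIN"): "FIN_WAIT_1",
    ("FIN_WAIT_1", "ACK received"): "FIN_WAIT_2",
    ("FIN_WAIT_1", "FIN received"): "CLOSING",      # simultaneous close
    ("FIN_WAIT_2", "FIN received"): "TIME_WAIT",
    ("CLOSING", "ACK received"): "TIME_WAIT",
    ("TIME_WAIT", "2*MSL timeout"): "CLOSED",
    ("ESTABLISHED", "FIN received"): "CLOSE_WAIT",  # other side closed first
    ("CLOSE_WAIT", "app close / send FIN"): "LAST_ACK",
    ("LAST_ACK", "ACK received"): "CLOSED",
}

def run(state, events):
    """Feed a sequence of events through the FSM and return the final state."""
    for e in events:
        state = CLOSE_TRANSITIONS[(state, e)]
    return state
```

Walking the active-close path (FIN_WAIT_1, FIN_WAIT_2, TIME_WAIT) and the passive-close path (CLOSE_WAIT, LAST_ACK) through this table reproduces the two transition chains listed above.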

TCP FLOW CONTROL


 TCP uses a variant of sliding window known as adaptive flow control that:
o guarantees reliable delivery of data
o ensures ordered delivery of data
o enforces flow control at the sender
 Receiver advertises its window size to the sender using AdvertisedWindow field.
 Sender thus cannot have unacknowledged data greater than AdvertisedWindow.


Send Buffer
 Sending TCP maintains a send buffer which holds three kinds of data:
(1) acknowledged data, (2) sent but unacknowledged data, and
(3) data written by the application but not yet transmitted.
 Send buffer maintains three pointers
(1) LastByteAcked, (2) LastByteSent, and (3) LastByteWritten
such that:
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
 A byte can be sent only after being written and only a sent byte can be
acknowledged.
 Bytes to the left of LastByteAcked need not be kept, as they have already been acknowledged.

Receive Buffer
 Receiving TCP maintains receive buffer to hold data even if it arrives out-of-order.
 Receive buffer maintains three pointers namely
(1) LastByteRead, (2) NextByteExpected, and (3) LastByteRcvd
such that:
LastByteRead ≤ NextByteExpected ≤ LastByteRcvd + 1
 A byte cannot be read until that byte and all preceding bytes have been received.
 If data is received in order, then NextByteExpected = LastByteRcvd + 1
 Bytes to the left of LastByteRead are not buffered, since they have already been read by the application.

Flow Control in TCP


 Size of send and receive buffer is MaxSendBuffer and MaxRcvBuffer respectively.
 Sending TCP prevents overflowing of send buffer by maintaining
LastByteWritten − LastByteAcked ≤ MaxSendBuffer
 Receiving TCP avoids overflowing its receive buffer by maintaining
LastByteRcvd − LastByteRead ≤ MaxRcvBuffer
 Receiver throttles the sender by having AdvertisedWindow based on the free space
available for buffering.


AdvertisedWindow = MaxRcvBuffer − ((NextByteExpected − 1) − LastByteRead)
 Sending TCP adheres to AdvertisedWindow by computing EffectiveWindow that
limits how much data it should send.
EffectiveWindow = AdvertisedWindow − (LastByteSent − LastByteAcked)
 When data arrives, LastByteRcvd moves to its right and AdvertisedWindow shrinks.
 Receiver acknowledges only, if preceding bytes have arrived.
 AdvertisedWindow expands when data is read by the application.
o If data is read as fast as it arrives then
AdvertisedWindow = MaxRcvBuffer
o If data is read slowly, it eventually leads to a AdvertisedWindow of size 0.
 AdvertisedWindow field is designed to allow sender to keep the pipe full.
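The two window formulas above can be illustrated directly; the numeric values in the example are made up.

```python
def advertised_window(max_rcv_buffer, next_byte_expected, last_byte_read):
    """Free space left in the receive buffer (what the receiver advertises)."""
    return max_rcv_buffer - ((next_byte_expected - 1) - last_byte_read)

def effective_window(adv_window, last_byte_sent, last_byte_acked):
    """How much NEW data the sender may still put in flight."""
    return adv_window - (last_byte_sent - last_byte_acked)
```

For instance, with MaxRcvBuffer = 100, NextByteExpected = 41, and LastByteRead = 20, the receiver is holding 20 unread bytes and advertises a window of 80; if the sender then has 20 unacknowledged bytes in flight, its effective window is 60.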

TCP TRANSMISSION
 TCP has two mechanisms to trigger the transmission of a segment.
 They are
o Maximum Segment Size (MSS) - Silly Window Syndrome
o Timeout - Nagle’s Algorithm

Silly Window Syndrome


 When either the sending application program creates data slowly or the receiving
application program consumes data slowly, or both, problems arise.
 Any of these situations results in the sending of data in very small segments, which
reduces the efficiency of the operation.
 This problem is called the silly window syndrome.
 The sending TCP may create a silly window syndrome if it is serving an application
program that creates data slowly, for example, 1 byte at a time.
 The application program writes 1 byte at a time into the buffer of the sending TCP.
 The result is a lot of 1-byte segments that are traveling through an internet.
 The solution is to prevent the sending TCP from sending the data byte by byte.
 The sending TCP must be forced to wait and collect data to send in a larger block.


Nagle’s Algorithm
 If there is data to send but less than MSS, then the sender may want to wait some
amount of time before sending the available data.
 If it waits too long, it may delay the process.
 If it does not wait long enough, it may end up sending small segments, resulting in
the silly window syndrome.
 The solution is to introduce a timer and to transmit when the timer expires.
 Nagle introduced an algorithm for solving this problem.

TCP CONGESTION CONTROL


 Congestion occurs if load (number of packets sent) is greater than capacity of the
network (number of packets a network can handle).
 When load is less than network capacity, throughput increases proportionally.
 When load exceeds capacity, queues become full and the routers discard some
packets and throughput declines sharply.
 When too many packets are contending for the same link
o The queue overflows
o Packets get dropped
o Network is congested
 Network should provide a congestion control mechanism to deal with such a
situation.
 TCP maintains a variable called CongestionWindow for each connection.
 TCP Congestion Control mechanisms are:


1. Additive Increase / Multiplicative Decrease (AIMD)


2. Slow Start
3. Fast Retransmit and Fast Recovery

Additive Increase / Multiplicative Decrease (AIMD)


 TCP source initializes CongestionWindow based on congestion level in the network.
 Source increases CongestionWindow when level of congestion goes down and
decreases the same when level of congestion goes up.
 TCP interprets timeouts as a sign of congestion and reduces the rate of transmission.
 On timeout, source reduces its CongestionWindow by half, i.e., multiplicative
decrease. For example, if CongestionWindow = 16 packets, after timeout it is 8.
 Value of CongestionWindow is never less than maximum segment size (MSS).
 When ACK arrives CongestionWindow is incremented marginally, i.e., additive
increase.
Increment = MSS × (MSS/CongestionWindow)
CongestionWindow += Increment
 For example, if CongestionWindow holds 2 packets, then after ACKs for both
packets arrive the window grows to 3 packets: roughly one packet is added per RTT.
 CongestionWindow increases and decreases throughout lifetime of the connection.

 When CongestionWindow is plotted as a function of time, a saw-tooth pattern
results.
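The AIMD updates above can be sketched as a pair of functions; the MSS value is an assumption for illustration.

```python
MSS = 1000  # bytes (assumed value for the example)

def on_ack(cwnd):
    """Additive increase: each ACK adds MSS*(MSS/cwnd) bytes, so a full
    window's worth of ACKs grows the window by about one MSS per RTT."""
    return cwnd + MSS * (MSS / cwnd)

def on_timeout(cwnd):
    """Multiplicative decrease: halve the window, but never below one MSS."""
    return max(cwnd / 2, MSS)
```

Repeatedly applying on_ack until a timeout, then on_timeout, produces the saw-tooth pattern described above.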

Slow Start
 Slow start is used to increase CongestionWindow exponentially from a cold start.
 Source TCP initializes CongestionWindow to one packet.
 TCP doubles the number of packets sent every RTT on successful transmission.
 When ACK arrives for first packet TCP adds 1 packet to CongestionWindow and
sends two packets.
 When two ACKs arrive, TCP increments CongestionWindow by 2 packets and sends
four packets and so on.
 Instead of sending entire permissible packets at once (bursty traffic), packets are sent
in a phased manner, i.e., slow start.
 Initially TCP has no idea about the level of congestion, hence it increases
CongestionWindow rapidly until there is a timeout. On timeout:
CongestionThreshold = CongestionWindow / 2
CongestionWindow = 1
 Slow start is repeated until CongestionWindow reaches CongestionThreshold;
thereafter the window grows additively, by 1 packet per RTT.
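The growth pattern can be sketched as follows, with the window measured in segments; the trace function is illustrative.

```python
def slow_start_trace(threshold_segments, rtts):
    """CongestionWindow (in segments) per RTT: exponential growth up to the
    threshold, then linear growth of one segment per RTT."""
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < threshold_segments else cwnd + 1
    return trace
```

With a threshold of 8 segments, the window over six RTTs is 1, 2, 4, 8, 9, 10: doubling each RTT while below the threshold, then growing by one segment per RTT.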


 The congestion window trace will look like

Fast Retransmit And Fast Recovery


 TCP timeouts led to long periods of time during which the connection went dead
while waiting for a timer to expire.
 Fast retransmit is a heuristic approach that triggers retransmission of a dropped
packet sooner than the regular timeout mechanism. It does not replace regular
timeouts.
 When a packet arrives out of order, receiving TCP resends the same
acknowledgment (duplicate ACK) it sent last time.
 When three duplicate ACKs arrive at the sender, it infers that the corresponding
packet may be lost due to congestion and retransmits that packet before the regular
timeout. This is called fast retransmit.
 When packet loss is detected using fast retransmit, the slow start phase is replaced by
additive increase, multiplicative decrease method. This is known as fast recovery.
 Instead of setting Congestion Window to one packet, this method uses the ACKs that
are still in pipe to clock the sending of packets.
 Slow start is only used at the beginning of a connection and after regular timeout. At
other times, it follows a pure AIMD pattern.



 For example, packets 1 and 2 are received whereas packet 3 gets lost.
o Receiver sends a duplicate ACK for packet 2 when packet 4 arrives.
o After sending packet 6, the sender has received 3 duplicate ACKs and retransmits packet 3.
o When packet 3 is received, receiver sends cumulative ACK up to packet 6.
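The duplicate-ACK detection in the example above can be sketched as follows. This is a simplification that models cumulative ACKs as packet numbers; the function name and threshold parameter are illustrative.

```python
def fast_retransmit_check(acks, dup_threshold=3):
    """Return the packet number to retransmit once `dup_threshold` duplicate
    ACKs for the same packet are seen; return None otherwise."""
    dup_count, last_ack = 0, None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1              # a repeat of the previous ACK is a duplicate
            if dup_count == dup_threshold:
                return ack + 1          # retransmit the next expected packet
        else:
            last_ack, dup_count = ack, 0
    return None
```

For the scenario above, the ACK stream is 1, 2, 2, 2, 2 (the arrivals of packets 4, 5, and 6 each trigger a duplicate ACK for packet 2), and the third duplicate triggers retransmission of packet 3.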
 The congestion window trace will look like

TCP CONGESTION AVOIDANCE


 Congestion avoidance mechanisms prevent congestion before it actually occurs.
 These mechanisms predict when congestion is about to happen and then to reduce the
rate at which hosts send data just before packets start being discarded.
 TCP creates packet loss in order to determine the available bandwidth of the connection.
 Routers help the end nodes by intimating when congestion is likely to occur.
 Congestion-avoidance mechanisms are:
o DEC bit - Destination Experiencing Congestion Bit
o RED - Random Early Detection

Dec Bit - Destination Experiencing Congestion Bit


 The first mechanism developed for use on the Digital Network Architecture (DNA).

 The idea is to evenly split the responsibility for congestion control between the
routers and the end nodes.

 Each router monitors the load it is experiencing and explicitly notifies the end nodes
when congestion is about to occur.

 This notification is implemented by setting a binary congestion bit in the packets that
flow through the router; hence the name DECbit.


 The destination host then copies this congestion bit into the ACK it sends back to the
source.

 The source checks how many ACKs of the previous window's packets have the DECbit set.

 If less than 50% of the ACKs have the DECbit set, then the source increases its
congestion window by 1 packet.

 Otherwise, the source decreases the congestion window to 87.5% of its previous
value, i.e., multiplies it by 0.875.

 Finally, the source adjusts its sending rate so as to avoid congestion.

 The "increase by 1, multiply by 0.875" rule is an instance of AIMD, chosen for stabilization.

 A single congestion bit is added to the packet header.

 Using a queue length of 1 as the trigger for setting the congestion bit.

 A router sets this bit in a packet if its average queue length is greater than or equal to
1 at the time the packet arrives.

Computing average queue length at a router using DEC bit

 Average queue length is measured over a time interval that spans the
last busy cycle + last idle cycle + current busy cycle.
 The average is computed by dividing the area under the queue-length curve by that
time interval.
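The source's window-adjustment rule described above can be sketched as follows; the function name is illustrative.

```python
def decbit_adjust(cwnd, acks_with_bit, total_acks):
    """Adjust the congestion window from the fraction of DECbit-marked ACKs
    observed for the previous window's packets."""
    if acks_with_bit / total_acks < 0.5:
        return cwnd + 1          # additive increase
    return cwnd * 0.875          # multiplicative decrease to 87.5%
```

For a window of 8 packets, 3 marked ACKs (under 50%) grows the window from 16 to 17, while 4 marked ACKs (50%) shrinks it to 14.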

Red - Random Early Detection


 The second mechanism of congestion avoidance is called Random Early
Detection (RED).


 Each router is programmed to monitor its own queue length, and when it detects
that congestion is imminent, it notifies the source to adjust its congestion window.
 RED differs from the DEC bit scheme by two ways:
a. In DECbit, explicit notification about congestion is sent to source, whereas
RED implicitly notifies the source by dropping a few packets.

b. DECbit may lead to tail drop policy, whereas RED drops packet based on
drop probability in a random manner. Drop each arriving packet with some
drop probability whenever the queue length exceeds some drop level. This
idea is called early random drop.

Computation of average queue length using RED

 AvgLen = (1 − Weight) × AvgLen + Weight × SampleLen
where 0 < Weight < 1, and SampleLen is the length of the queue when a sample
measurement is made.
 The queue length is measured every time a new packet arrives at the gateway.

 RED has two queue length thresholds that trigger certain activity: MinThreshold and
MaxThreshold
 When a packet arrives at the gateway, it compares AvgLen with these two
thresholds according to the following rules.
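The standard RED rules can be sketched as follows: queue the packet if AvgLen is below MinThreshold, always drop it if AvgLen is at or above MaxThreshold, and drop it with a probability that grows between the two thresholds. The MaxP value is an assumed parameter, and the refinement that spaces drops out by counting packets since the last drop is omitted for brevity.

```python
import random

def avg_len_update(avg_len, sample_len, weight=0.002):
    """Exponentially weighted moving average of the queue length."""
    return (1 - weight) * avg_len + weight * sample_len

def red_enqueue(avg_len, min_th, max_th, max_p=0.02, rng=random.random):
    """RED decision for one arriving packet."""
    if avg_len < min_th:
        return "queue"                  # no sign of congestion
    if avg_len >= max_th:
        return "drop"                   # persistent congestion: always drop
    # Between the thresholds: drop probability rises linearly from 0 to MaxP
    temp_p = max_p * (avg_len - min_th) / (max_th - min_th)
    return "drop" if rng() < temp_p else "queue"
```

Dropping a few packets early, at random, implicitly signals different TCP sources to slow down at different times, avoiding the synchronized tail drops that occur when the queue actually fills.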


Differences between TCP and UDP

Type of Service
 TCP is a connection-oriented protocol: the communicating devices must establish a
connection before transmitting data and close the connection after the transfer.
 UDP is a datagram-oriented protocol: there is no overhead for opening, maintaining,
or terminating a connection. UDP is efficient for broadcast and multicast
transmission.

Reliability
 TCP is reliable, as it guarantees the delivery of data to the destination.
 UDP cannot guarantee the delivery of data to the destination.

Error Checking
 TCP provides extensive error-checking mechanisms, because it provides flow
control and acknowledgment of data.
 UDP has only a basic error-checking mechanism using checksums.

Acknowledgment
 TCP: an acknowledgment segment is present.
 UDP: no acknowledgment segment.

Sequencing
 TCP sequences data, so packets arrive in order at the receiver.
 UDP has no sequencing of data; if ordering is required, it must be managed by the
application layer.

Speed
 TCP is comparatively slower than UDP.
 UDP is faster, simpler, and more efficient than TCP.

Retransmission
 TCP retransmits lost packets.
 UDP does not retransmit lost packets.

Header Length
 TCP has a variable-length header of 20 to 60 bytes.
 UDP has a fixed-length header of 8 bytes.

Weight
 TCP is heavy-weight.
 UDP is lightweight.

Handshaking
 TCP uses handshakes such as SYN, SYN-ACK, and ACK.
 UDP is a connectionless protocol: no handshake.

Broadcasting
 TCP does not support broadcasting.
 UDP supports broadcasting.

Protocols
 TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet.
 UDP is used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.

Stream Type
 A TCP connection is a byte stream.
 A UDP exchange is a message stream.

Overhead
 TCP: low, but higher than UDP.
 UDP: very low.

Applications
 TCP is primarily used where a safe and trustworthy communication procedure is
necessary, such as e-mail, web browsing, and military services.
 UDP is used where quick communication is necessary but dependability is not a
concern, such as VoIP, game streaming, and video and music streaming.
Socket in Computer Network

A socket is one endpoint of a two-way communication link between two programs running on
the network. The socket mechanism provides a means of inter-process communication (IPC)
by establishing named contact points between which the communication takes place.

Just as the ‘pipe’ system call is used to create pipes, a socket is created using the ‘socket’
system call. The socket provides a bidirectional FIFO communication facility over the
network. A socket connecting to the network is created at each end of the communication.
Each socket has a specific address, composed of an IP address and a port number.

Sockets are generally employed in client-server applications. The server creates a socket,
binds it to a network port address, and then waits for the client to contact it. The client
creates a socket and then attempts to connect to the server socket. When the connection is
established, transfer of data takes place.

Socket programming shows how to use socket APIs to establish communication links between
remote and local processes. The processes that use a socket can reside on the same system or
different systems on different networks. Sockets are useful for both stand-alone and network
applications.

A 'Socket Function' is defined as a function used in computer programming to create a new socket
descriptor by specifying the address family, type of socket, and protocol to be used. It is a crucial
element in network communication for establishing connections between different devices.



Types of Sockets

Stream Sockets (TCP):

These sockets use the Transmission Control Protocol (TCP), providing reliable, connection-oriented
communication. Data sent over a stream socket arrives in the same order it was sent, with
error-checking and retransmission mechanisms in place.
Datagram Sockets (UDP):
These sockets use the User Datagram Protocol (UDP), offering connectionless, unreliable
communication. Data sent over a datagram socket may arrive out of order, be duplicated, or even be
lost, but it is faster and requires less overhead than stream sockets.
Raw Sockets:
Raw sockets operate at the network layer, allowing direct access to lower-level protocols like IP.
They are typically used for tasks like custom protocol development, packet sniffing, or
implementing ping and traceroute utilities.

Socket Programming Workflow


Server-Side Workflow:
Socket Creation:
Create a socket using the socket() function, specifying the domain (e.g., IPv4 or IPv6), type (e.g.,
stream or datagram), and protocol (e.g., TCP or UDP).
Binding:
Bind the socket to a specific IP address and port using the bind() function. This tells the operating
system where to listen for incoming connections or data.
Listening:
For TCP sockets, use the listen() function to make the socket a passive listener, waiting for
connection requests from clients.
Accepting Connections:
Use the accept() function to accept an incoming connection from a client. This creates a new socket
for the connection, allowing the server to continue listening for other clients.
Communication:
Send and receive data using send() and recv() functions for TCP, or sendto() and recvfrom() for
UDP.
Closing the Socket:
Close the socket using the close() function when the communication is complete.
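The server-side workflow above can be sketched with Python's socket module, which mirrors the socket(), bind(), listen(), and accept() calls. This is a minimal localhost echo example; binding to port 0 lets the OS pick a free port, which the server reports back through a shared dictionary.

```python
import socket
import threading

def echo_server(result, ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # 1. socket creation
    srv.bind(("127.0.0.1", 0))          # 2. binding (port 0 = any free port)
    srv.listen(1)                       # 3. listening for one pending connection
    result["port"] = srv.getsockname()[1]
    ready.set()                         # tell the client which port to contact
    conn, _addr = srv.accept()          # 4. accepting: returns a NEW socket
    data = conn.recv(1024)              # 5. communication: receive ...
    conn.sendall(data)                  #    ... and echo it back
    conn.close()                        # 6. closing the connection socket
    srv.close()                         #    and the listening socket

def echo_client(port, message):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))    # triggers TCP's three-way handshake
    cli.sendall(message)
    reply = cli.recv(1024)
    cli.close()
    return reply
```

Running the server in a thread and connecting a client to it returns the echoed message; note that accept() hands back a new socket for the connection, so the original socket could keep listening for further clients.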

Overview of Socket Functions

Sockets in Computer Networks

Function Call Description

Socket() To create a socket

Bind() It’s a socket identification like a telephone number to contact

Listen() Ready to receive a connection

Connect() Ready to act as a sender

Accept() Confirmation, it is like accepting to receive a call from a sender

Write() To send data

Read() To receive data

Close() To close a connection

Common Use Cases

Client-Server Applications:

The most common use of sockets, where a server provides services (e.g., web hosting, file sharing),
and clients connect to the server to use these services.

Peer-to-Peer (P2P) Networks:

Sockets enable direct communication between peers without relying on a central server (e.g., file-
sharing networks, decentralized applications).

Real-Time Communication:

Sockets are used in real-time applications like chat services, online gaming, and VoIP, where low-
latency communication is critical.

Challenges and Considerations

Blocking vs. Non-Blocking Sockets:

 Blocking sockets pause execution until an operation (like recv()) is complete. Non-blocking
sockets allow the program to continue execution while waiting for an operation to complete.

 Non-blocking sockets are crucial in applications requiring high responsiveness.

Security:

Implementing encryption (e.g., SSL/TLS) over sockets is essential for secure communication,
especially in applications handling sensitive data.

Error Handling:

Proper error handling is vital in socket programming, as network conditions can lead to various
errors, such as timeouts, connection resets, or data loss.

Summary

 Sockets are essential for network programming, providing a versatile and powerful means of
communication between processes, both locally and across networks.

 Understanding sockets and their behavior in different scenarios is key to developing robust
and efficient networked applications

