CN Unit III - Notes
Flow Control and Congestion Control: Basics of flow control - Congestion control mechanisms,
Transmission Control Protocol (TCP): TCP features and functionalities, TCP connection
establishment, maintenance, and termination, User Datagram Protocol (UDP): Characteristics and
usage scenarios - Comparison with TCP, Sockets: Overview of sockets in network programming.
INTRODUCTION
The transport layer is the fourth layer of the OSI model and is the core of the Internet
model.
It responds to service requests from the session layer and issues service requests to
the network Layer.
The transport layer provides transparent transfer of data between hosts.
It provides end-to-end control and information transfer with the quality of service
needed by the application program.
It is the first true end-to-end layer, implemented in all End Systems (ES).
Process-to-Process Communication
The Transport Layer is responsible for delivering data to the appropriate application
process on the host computers.
This involves multiplexing of data from different application processes, i.e. forming
data packets, and adding source and destination port numbers in the header of each
Transport Layer data packet.
Together with the source and destination IP addresses, the port numbers constitute a
network socket, i.e. an identification address for the process-to-process
communication.
CIT– Computer Networks
Multiplexing and Demultiplexing
Whenever an entity accepts items from more than one source, this is referred to as
multiplexing (many to one).
Whenever an entity delivers items to more than one destination, this is referred to as
demultiplexing (one to many).
The transport layer at the source performs multiplexing
The transport layer at the destination performs demultiplexing
Flow Control
Flow Control is the process of managing the rate of data transmission between two
nodes to prevent a fast sender from overwhelming a slow receiver.
It provides a mechanism for the receiver to control the transmission speed, so that the
receiving node is not overwhelmed with data from transmitting node.
Error Control
Error control ensures that data is delivered to the destination process without error;
lost or corrupted segments are detected and retransmitted.
Congestion Control
Congestion in a network may occur if the load on the network (the number of
packets sent to the network) is greater than the capacity of the network (the number
of packets a network can handle).
Congestion control refers to the mechanisms and techniques that keep the load below
the capacity, either by preventing congestion before it happens or by removing
congestion after it has happened.
Congestion control mechanisms are divided into two categories,
1. Open loop - prevent the congestion before it happens.
2. Closed loop - remove the congestion after it happens.
PORT NUMBERS
A transport-layer protocol usually has several responsibilities.
One is to create a process-to-process communication.
Processes are programs that run on hosts; a process could be either a server or a client.
A process on the local host, called a client, needs services from a process usually
on the remote host, called a server.
Processes are assigned a unique 16-bit port number on that host.
Port numbers provide end-to-end addresses at the transport layer
They also provide multiplexing and demultiplexing at this layer.
ICANN (Internet Corporation for Assigned Names and Numbers) has divided the port
numbers into three ranges:
Well-known ports
Registered
Ephemeral ports (Dynamic Ports)
WELL-KNOWN PORTS
These are permanent port numbers used by the servers.
They range from 0 to 1023.
These port numbers cannot be chosen randomly.
These port numbers are universal port numbers for servers.
Every client process knows the well-known port number of the corresponding server
process.
For example, while the daytime client process, a well-known client program, can
use an ephemeral (temporary) port number, 52,000, to identify itself, the daytime
server process must use the well-known (permanent) port number 13.
The client program defines itself with a port number, called the ephemeral port
number.
The word ephemeral means “short-lived” and is used because the life of a client is
normally short.
An ephemeral port number is recommended to be greater than 1023.
These port numbers range from 49,152 to 65,535.
They are neither controlled nor registered. They can be used as temporary or private
port numbers.
REGISTERED PORTS
The ports ranging from 1024 to 49,151 are not assigned or controlled by ICANN; they
can only be registered with ICANN to prevent duplication.
FLOW CONTROL MECHANISMS
Flow control is a technique used to regulate data transfer between computers or other nodes
in a network. Flow control ensures that the transmitting device does not send more data to
the receiving device than it can handle. If a device receives more data than it can process or
store in memory at any given time, the excess data is lost and must be retransmitted.
Flow control aims to throttle the amount of data transmitted to avoid overwhelming the
receiver's resources. This is accomplished through a series of messages the receiver sends to
the sender to acknowledge if frames have been received. The sender uses these messages to
determine when to transmit more data. If the sender does not receive an acknowledgment
(ACK), it concludes that there has been a problem with the transmission and retransmits
the data.
Flow control is implemented differently, depending on how the sender and receiver handle
messages and track data frames. There are two basic approaches to flow control: stop-and-
wait and sliding window. The stop-and-wait approach is the simplest to implement, but it is
not as efficient as sliding window, which delivers better network performance and utilizes
network resources more effectively.
In the stop-and-wait approach, the sender segments the data into frames and then transmits
one frame at a time to the receiver, which responds to each frame with an ACK message.
This process occurs through the following steps:
1. The sender transmits a single frame to the receiver.
2. The sender waits for an ACK before transmitting the next frame.
3. The receiver sends an ACK to confirm that the frame arrived.
4. If the ACK does not arrive before a timeout, the sender retransmits the frame.
5. The process continues until the sender has finished transmitting all the data to
the receiver.
Above Figure illustrates how this exchange works. In this case, the sender starts by
transmitting Frame 0 and then waiting for the ACK. When Frame 0 reaches its
destination, the receiver sends ACK 0 to the sender.
After receiving ACK 0, the sender transmits Frame 1 and waits for ACK 1. When that
arrives, the sender transmits Frame 2 and waits again. This time, however, the sender
does not receive ACK 2 before the timeout occurs, so it retransmits Frame 2. The frame
arrives at its destination, so the receiver sends ACK 2. When the sender receives ACK 2,
it transmits Frame 3, which is also acknowledged by the receiver.
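The exchange described above can be sketched as a small simulation (the frame numbering follows the figure; the lossy-channel model and function names here are illustrative assumptions, not part of any real protocol stack):

```python
# Stop-and-wait ARQ sketch: send one frame, wait for its ACK, retransmit on timeout.
def stop_and_wait(frames, lost_acks):
    """Simulate the exchange; lost_acks holds (frame, attempt) pairs whose ACK never arrives."""
    trace = []
    for frame in frames:
        attempt = 0
        while True:
            attempt += 1
            trace.append(f"send Frame {frame} (attempt {attempt})")
            if (frame, attempt) in lost_acks:
                trace.append(f"timeout, retransmit Frame {frame}")
                continue                      # no ACK before the timer fired
            trace.append(f"recv ACK {frame}")
            break
    return trace

# As in the figure: the ACK for Frame 2's first transmission is lost.
trace = stop_and_wait([0, 1, 2, 3], lost_acks={(2, 1)})
```

Only one frame is ever outstanding, which is exactly why stop-and-wait utilizes the link so poorly compared with sliding window.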
Stop and wait belongs to a category of error control mechanisms called automatic repeat
requests (ARQs). These ARQs rely on ACKs to determine whether a data
transmission was successful or if retransmission is needed. Other ARQs include Go-
Back-N ARQ and Selective Repeat ARQ, which use the sliding window protocol.
The sliding window approach addresses many of the issues with stop and wait because
the sender can transmit multiple frames simultaneously without waiting for an ACK for
each frame. However, this approach also comes with additional complexity.
When first connecting, the sender and receiver establish a window determining the
maximum number of frames the sender can transmit. During the transmission, the sender
and receiver must carefully track which frames have been sent and received to ensure that
all the data reaches its destination and is reassembled in the correct order.
Sliding window flow control can be implemented using one of two approaches: Go-
Back-N and Selective Repeat. With the Go-Back-N approach, the sender can send one or
more frames but never more than the window allows. As the receiver acknowledges the
frames, the sender moves to the next batch, or window, of frames that can now be sent. If
there is a problem with a transmitted frame, the sender retransmits all the frames in the
current window.
Figure 2 shows an example of how Go-Back-N works. In this case, the window consists
of only three frames—initially, Frames 0 through 2. The sender begins by transmitting
Frame 0 to the receiver. Upon receiving Frame 0, the receiver sends an ACK that
specifies the next frame to send (Frame 1) rather than the frame that has just been
received.
When the sender receives ACK 1, it moves the window by one position, dropping Frame
0 and adding Frame 3. The sender then transmits Frames 1, 2, and 3, which represent the
window's entire contents; it does not need to wait for an individual ACK after each frame
before sending the next. Figure 2 illustrates how the process works.
Upon receiving the three frames, the receiver sends a cumulative ACK that specifies the
next frame to send: Frame 4. The ACK indicates the receiver has all the preceding frames
(0 through 3).
When the sender receives ACK 4, it adjusts the window to include Frames 4 through 6
and then transmits those frames. This time, however, Frame 4 gets lost in the
transmission, while Frames 5 and 6 reach their destination. Upon receiving Frame 5, the
receiver detects that Frame 4 is missing and sends a negative acknowledgment (NAK)
that specifies Frame 4. At the same time, the receiver discards Frames 5 and 6.
When the sender receives the NAK, it retransmits Frames 4 through 6 and waits for the
ACK. The frames arrive with no errors the second time, so the receiver returns an ACK
indicating that the sender can now transmit Frame 7. The sender adjusts the window
accordingly and transmits the next set of frames, starting with Frame 7.
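The Go-Back-N behaviour can be sketched as follows (an illustrative simplification: the channel drops the frames listed in `lost`, and for brevity the sender restarts the window at the lost frame instead of modelling the receiver discarding the later frames):

```python
def go_back_n(num_frames, window, lost):
    """Go-Back-N sketch: frames from a lost frame onward are all resent.
    `lost` is a set of (frame, attempt) pairs dropped by the channel."""
    attempts = {}                    # per-frame transmission count
    base, trace = 0, []
    while base < num_frames:
        window_ok = True
        for f in range(base, min(base + window, num_frames)):
            attempts[f] = attempts.get(f, 0) + 1
            trace.append(f"send {f}")
            if (f, attempts[f]) in lost:
                trace.append(f"frame {f} lost, NAK {f}")
                base = f             # go back: resend from the lost frame
                window_ok = False
                break
        if window_ok:
            base = min(base + window, num_frames)
            trace.append(f"ACK {base}")   # cumulative ACK names the next frame
    return trace

# As in the text: window of 3, frame 4 lost on its first transmission.
trace = go_back_n(7, window=3, lost={(4, 1)})
```

Note how frame 4 is sent twice while the frames that arrived intact the first time are still resent as part of the window, which is the bandwidth cost of Go-Back-N.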
The Selective Repeat approach is similar to Go-Back-N. The primary difference is that
Selective Repeat does not retransmit the entire window if there is an error, only the
individual frame in dispute. Selective Repeat does not support cumulative ACK messages
like Go-Back-N, so each ACK is specific to the frame just received, enabling the sender
to identify the precise frame that needs to be retransmitted.
Figure 3 illustrates an example of the Selective Repeat process. After transmitting Frame
0, the sender receives an ACK, transmits Frames 1 through 3 and receives an ACK for
each one. The sender then transmits Frames 4 through 6. When Frames 5 and 6 arrive at
the receiver but not Frame 4, the receiver sends ACK 5 and ACK 6, along with NAK 4.
The sender responds to the NAK by retransmitting Frame 4. Upon receiving Frame 4, the
receiver sends an ACK. The sender then adjusts the window and transmits the next three
frames, starting with Frame 7.
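Selective Repeat can be contrasted using the same channel model (again an illustrative sketch; real implementations pipeline all the frames in a window rather than sending strictly one at a time):

```python
def selective_repeat(num_frames, lost):
    """Selective Repeat sketch: only the lost frame itself is retransmitted;
    frames that arrived are acknowledged individually and kept by the receiver."""
    trace = []
    for f in range(num_frames):
        attempt = 0
        while True:
            attempt += 1
            trace.append(f"send {f}")
            if (f, attempt) in lost:
                trace.append(f"NAK {f}")   # receiver buffers later frames meanwhile
                continue
            trace.append(f"ACK {f}")       # per-frame ACK, not cumulative
            break
    return trace

# As in Figure 3: frame 4 is lost once; frames 5 and 6 are not resent.
trace = selective_repeat(8, lost={(4, 1)})
```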
Both Selective Repeat and Go-Back-N are more efficient than the stop-and-wait
approach, but there are important differences between the two sliding window
approaches. The Go-Back-N approach can consume more bandwidth because all the
frames in a window are retransmitted if an error occurs. However, it is not as complex to
implement as Selective Repeat and does not require the same amount of system
resources. Selective Repeat has greater management overhead because the frames must
be tracked and sorted throughout the data transmission.
Leaky Bucket Algorithm
Similarly, each network interface contains a leaky bucket, and the following steps are
involved in the leaky bucket algorithm:
When a host wants to send a packet, the packet is thrown into the bucket.
The bucket leaks at a constant rate, meaning the network interface transmits
packets constantly.
Bursty traffic is converted to uniform traffic by the leaky bucket.
In practice, the bucket is a finite queue that outputs at a finite rate.
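The steps above can be sketched as a discrete-time simulation (the tick length, bucket capacity, and leak rate are illustrative assumptions):

```python
from collections import deque

def leaky_bucket(arrivals, capacity, rate):
    """Leaky-bucket sketch: bursty arrivals, constant-rate output, finite queue.
    arrivals[t] = packets arriving at tick t; `rate` packets leave per tick."""
    queue, out, dropped = deque(), [], 0
    for t, burst in enumerate(arrivals):
        for _ in range(burst):
            if len(queue) < capacity:
                queue.append(t)
            else:
                dropped += 1               # bucket overflows: packet lost
        sent = 0
        while queue and sent < rate:       # leak at a constant rate
            queue.popleft()
            sent += 1
        out.append(sent)
    return out, dropped

# A burst of 5 packets is smoothed to 1 packet per tick; the bucket holds 3.
out, dropped = leaky_bucket([5, 0, 0, 0], capacity=3, rate=1)
```

The output list shows the burst flattened into a uniform stream, with the overflow packets discarded just as the finite queue in the text describes.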
Token Bucket Algorithm
The leaky bucket algorithm has a rigid output design at an average rate
independent of the bursty traffic.
In some applications, when large bursts arrive, the output is allowed to speed up.
This calls for a more flexible algorithm that never loses information. Therefore, a
token bucket algorithm finds its uses in network traffic shaping or rate-limiting.
It is a control algorithm that dictates when traffic can be sent, based on the
presence of tokens in the bucket.
The bucket contains tokens. Each token permits the transmission of a packet of a
predetermined size, and a token is removed from the bucket when a packet is sent.
When tokens are present in the bucket, a flow is allowed to transmit traffic. If there
are no tokens, a flow cannot send its packets. Hence, a flow can transmit traffic up to
its peak burst rate only if enough tokens are present in the bucket.
Need for Token Bucket Algorithm
The leaky bucket algorithm enforces the output pattern at the average rate, no matter how
bursty the traffic is. So, to deal with bursty traffic, we need a flexible algorithm that does
not lose data. One such algorithm is the token bucket algorithm.
The steps of this algorithm can be described as follows:
In regular intervals, tokens are thrown into the bucket.
The bucket has a maximum capacity.
If there is a ready packet, a token is removed from the bucket, and the packet is
sent.
If there is no token in the bucket, the packet cannot be sent.
Let’s understand with an example. In figure (A), we see a bucket holding three tokens,
with five packets waiting to be transmitted. For a packet to be transmitted, it must capture
and destroy one token. In figure (B), we see that three of the five packets have gotten
through, but the other two are stuck waiting for more tokens to be generated.
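The same discrete-time style gives a token-bucket sketch (the rate and capacity values are illustrative assumptions):

```python
def token_bucket(arrivals, rate, capacity, tokens=0):
    """Token-bucket sketch: tokens accrue `rate` per tick (capped at `capacity`);
    each departing packet consumes one token, so idle time buys later bursts."""
    out = []
    backlog = 0
    for burst in arrivals:
        tokens = min(capacity, tokens + rate)   # tokens added at regular intervals
        backlog += burst
        sent = min(backlog, tokens)             # a packet needs a token to leave
        tokens -= sent
        backlog -= sent
        out.append(sent)
    return out

# Mirrors the figure: 3 tokens saved up, 5 packets waiting -> 3 go through at once.
out = token_bucket([0, 0, 5, 0, 0], rate=1, capacity=3)
```

Unlike the leaky bucket, the saved-up tokens let three packets depart in a single tick, which is exactly the burst flexibility described above.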
Token Bucket vs Leaky Bucket
The leaky bucket algorithm controls the rate at which packets are introduced into the
network, but it is very conservative. The token bucket algorithm introduces some
flexibility: tokens are generated at each tick (up to a certain limit), and an incoming
packet can be transmitted only if it captures a token. Hence, bursts of packets can be
transmitted back-to-back as long as tokens are available, which introduces flexibility
into the system.
Formula: M × s = C + ρ × s, where
s = burst length (time taken)
M = maximum output rate
ρ = token arrival rate
C = capacity of the token bucket in bytes
Let’s understand with an example.
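A common worked example (the specific numbers are illustrative, not from these notes): with capacity C = 250 KB, token arrival rate ρ = 2 MB/s and maximum output rate M = 25 MB/s, solving M·s = C + ρ·s for the burst length gives s = C / (M − ρ):

```python
# Maximum burst length from M*s = C + rho*s  =>  s = C / (M - rho).
C = 250_000          # bucket capacity in bytes (250 KB, assumed)
M = 25_000_000       # maximum output rate in bytes/s (25 MB/s, assumed)
rho = 2_000_000      # token arrival rate in bytes/s (2 MB/s, assumed)

s = C / (M - rho)    # time for which the full rate M can be sustained
print(f"burst length s = {s * 1000:.1f} ms")   # about 11 ms
```

So a full bucket lets the host burst at 25 MB/s for roughly 11 ms before it is throttled back to the token arrival rate.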
Disadvantages of Congestion Control
Algorithm Sensitivity: The effectiveness of congestion control algorithms can be
sensitive to network conditions and configurations, requiring fine-tuning for
optimal performance.
Resource Allocation Issues: Fairness in resource allocation, while a benefit, can
also pose challenges when trying to prioritize critical applications over less
essential ones.
Dependency on Network Infrastructure: Congestion control relies on the
underlying network infrastructure and may be less effective in environments with
outdated or unreliable equipment.
Flow Control vs Congestion Control
Flow control is typically used in the data link layer, whereas congestion control is
applied in the network and transport layers.
In flow control, the receiver is prevented from being overwhelmed with data; in
congestion control, the network is prevented from congestion.
Each protocol provides a different type of service and should be used appropriately.
UDP - UDP is an unreliable connectionless transport-layer protocol used for its simplicity
and efficiency in applications where error control can be provided by the application-layer
process.
TCP - TCP is a reliable connection-oriented protocol that can be used in any application
where reliability is important.
SCTP - SCTP is a new transport-layer protocol designed to combine some features of UDP
and TCP in an effort to create a better protocol for multimedia communication.
UDP PORTS
Processes (server/client) are identified by an abstract locator known as port.
Server accepts message at well known port.
Some well-known UDP ports are 7–Echo, 53–DNS, 111–RPC, 161–SNMP, etc.
< port, host > pair is used as key for demultiplexing.
Ports are implemented as a message queue.
When a message arrives, UDP appends it to end of the queue.
When queue is full, the message is discarded.
When a message is read, it is removed from the queue.
When an application process wants to receive a message, one is removed from the
front of the queue.
If the queue is empty, the process blocks until a message becomes available.
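The queueing behaviour described above can be sketched with a dictionary of bounded queues (the queue limit and keys are illustrative; the real queues live inside the operating system):

```python
from collections import deque

# Sketch of UDP demultiplexing: each <port, host> key selects one bounded
# message queue; a full queue silently discards the datagram, as described.
QUEUE_LIMIT = 2
queues = {}

def udp_deliver(port, host, msg):
    """Append an arriving datagram to the queue for <port, host>; drop if full."""
    q = queues.setdefault((port, host), deque())
    if len(q) >= QUEUE_LIMIT:
        return False               # queue full: datagram discarded
    q.append(msg)
    return True

def udp_read(port, host):
    """Remove one message from the front of the queue (a real socket would block)."""
    q = queues.get((port, host))
    return q.popleft() if q else None

udp_deliver(53, "10.0.0.1", b"q1")
udp_deliver(53, "10.0.0.1", b"q2")
dropped = not udp_deliver(53, "10.0.0.1", b"q3")   # third message discarded
first = udp_read(53, "10.0.0.1")                    # messages come out in order
```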
Length
This field denotes the total length of the UDP packet (header plus data).
The total length of a UDP datagram can be from 8 to 65,535 bytes; the minimum of
8 bytes corresponds to a datagram carrying only the header and no data.
Checksum
UDP computes its checksum over the UDP header, the contents of the message
body, and something called the pseudoheader.
The pseudoheader consists of three fields from the IP header—protocol number,
source IP address, destination IP address plus the UDP length field.
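The checksum computation can be sketched directly from this description (a simplified sketch of the standard Internet checksum; the sample addresses and ports are made-up values):

```python
import socket, struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of one's-complement words, as UDP uses."""
    if len(data) % 2:
        data += b"\x00"                           # pad to a 16-bit boundary
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
    return ~total & 0xFFFF

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    """Checksum over the pseudoheader (source IP, destination IP, a zero byte,
    protocol number 17, UDP length) plus the UDP header and data."""
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, len(udp_segment)))
    return internet_checksum(pseudo + udp_segment)

# UDP header: source port, destination port, length, checksum field set to 0.
segment = struct.pack("!HHHH", 52000, 13, 8 + 4, 0) + b"data"
csum = udp_checksum("192.0.2.1", "192.0.2.2", segment)
```

A receiver repeats the same computation over the segment with the transmitted checksum in place; a result of 0 means the datagram passed the check.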
Data
The data field defines the actual payload to be transmitted.
Its size is variable.
UDP SERVICES
Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a
combination of IP addresses and port numbers.
Connectionless Services
UDP provides a connectionless service.
There is no connection establishment and no connection termination.
Each user datagram sent by UDP is an independent datagram.
There is no relationship between the different user datagrams even if they are
coming from the same source process and going to the same destination program.
The user datagrams are not numbered.
Each user datagram can travel on a different path.
Flow Control
UDP is a very simple protocol.
There is no flow control, and hence no window mechanism.
The receiver may overflow with incoming messages.
The lack of flow control means that the process using UDP should provide for this
service, if needed.
Error Control
The lack of error control means that the process using UDP should provide for this
service, if needed.
Checksum
UDP checksum calculation includes three sections: a pseudoheader, the UDP header,
and the data coming from the application layer.
The pseudoheader is the part of the header of the IP packet in which the user datagram
is to be encapsulated, with some fields filled with 0s.
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control.
UDP assumes that the packets sent are small and sporadic (sent occasionally or at
irregular intervals) and cannot create congestion in the network.
This assumption may or may not be true, when UDP is used for interactive real-time
transfer of audio and video.
Queuing
In UDP, queues are associated with ports.
At the client site, when a process starts, it requests a port number from the operating
system.
Some implementations create both an incoming and an outgoing queue associated
with each process.
Other implementations create only an incoming queue associated with each process.
APPLICATIONS OF UDP
UDP is used for management processes such as SNMP.
UDP is used for route updating protocols such as RIP.
UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software
UDP is suitable for a process with internal flow and error control mechanisms such
as Trivial File Transfer Protocol (TFTP).
UDP is suitable for a process that requires simple request-response communication
with little concern for flow and error control.
UDP is normally used for interactive real-time applications that cannot tolerate
uneven delay between sections of a received message.
TCP SERVICES
Process-to-Process Communication
TCP provides process-to-process communication using port numbers.
Stream Delivery Service
TCP is a stream-oriented protocol.
TCP allows the sending process to deliver data as a stream of bytes and allows the
receiving process to obtain data as a stream of bytes.
TCP creates an environment in which the two processes seem to be connected by an
imaginary “tube” that carries their bytes across the Internet.
The sending process produces (writes to) the stream and the receiving process
consumes (reads from) it.
Full-Duplex Communication
TCP offers full-duplex service, where data can flow in both directions at the same
time.
Each TCP endpoint has its own sending and receiving buffers, and segments move
in both directions.
Connection-Oriented Service
TCP is a connection-oriented protocol.
A connection needs to be established for each pair of processes.
When a process at site A wants to send to and receive data from another
process at site B, the following three phases occur:
1. The two TCP’s establish a logical connection between them.
2. Data are exchanged in both directions.
3. The connection is terminated.
Reliable Service
TCP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the safe and sound arrival of data.
TCP SEGMENT
A packet in TCP is called a segment; it is the data unit exchanged between TCP peers.
A TCP segment encapsulates the data received from the application layer.
The TCP segment is encapsulated in an IP datagram, which in turn is encapsulated in
a frame at the data-link layer.
TCP is a byte-oriented protocol, which means that the sender writes bytes into a TCP
connection and the receiver reads bytes out of the TCP connection.
TCP does not, itself, transmit individual bytes over the Internet.
TCP on the source host buffers enough bytes from the sending process to fill a
reasonably sized packet and then sends this packet to its peer on the destination host.
TCP on the destination host then empties the contents of the packet into a receive
buffer, and the receiving process reads from this buffer at its leisure.
TCP connection supports byte streams flowing in both directions.
The packets exchanged between TCP peers are called segments, since each one
carries a segment of the byte stream.
Connection Establishment
While opening a TCP connection, the two nodes (client and server) must agree on a
set of parameters.
The parameters are the starting sequence numbers to be used for their respective byte
streams.
Connection establishment in TCP is a three-way handshaking.
1. Client sends a SYN segment to the server containing its initial sequence number (Flags
= SYN, Sequence Num = x)
2. Server responds with a segment that acknowledges client’s segment and specifies its
initial sequence number (Flags = SYN + ACK, ACK = x + 1, Sequence Num = y).
3. Finally, client responds with a segment that acknowledges server’s sequence number
(Flags = ACK, ACK = y + 1).
The reason that each side acknowledges a sequence number that is one larger
than the one sent is that the Acknowledgment field actually identifies the “next
sequence number expected.”
A timer is scheduled for each of the first two segments, and if the expected
response is not received, the segment is retransmitted.
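The handshake itself is carried out by the operating system; a loopback sketch like the following triggers it through the Berkeley sockets API (the host, port choice, and message are illustrative assumptions):

```python
import socket, threading

# connect() below makes the kernel perform the SYN / SYN+ACK / ACK exchange
# described above; accept() returns once the handshake has completed.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: OS picks an ephemeral port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _addr = server.accept()        # handshake done: connection established
    conn.sendall(b"hello")               # data transfer phase
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))      # blocks until the SYN+ACK is ACKed
buf = b""
while len(buf) < 5:                      # read the full 5-byte message
    chunk = client.recv(5 - len(buf))
    if not chunk:
        break
    buf += chunk
client.close()                           # begins connection termination
t.join()
server.close()
```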
Data Transfer
After connection is established, bidirectional data transfer can take place.
The client and server can send data and acknowledgments in both directions.
The data traveling in the same direction as an acknowledgment are carried on the
same segment.
The acknowledgment is piggybacked with the data.
Connection Termination
Connection termination or teardown can be done in two ways:
Three-way Close and Half-Close
Send Buffer
Sending TCP maintains a send buffer which is divided into three parts:
(1) acknowledged data
(2) unacknowledged data
(3) data to be transmitted.
Send buffer maintains three pointers
(1) LastByteAcked, (2) LastByteSent, and (3) LastByteWritten
such that:
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
A byte can be sent only after being written and only a sent byte can be
acknowledged.
Bytes to the left of LastByteAcked are not kept, as they have already been acknowledged.
Receive Buffer
Receiving TCP maintains receive buffer to hold data even if it arrives out-of-order.
Receive buffer maintains three pointers namely
(1) LastByteRead, (2) NextByteExpected, and (3) LastByteRcvd
such that:
LastByteRead ≤ NextByteExpected ≤ LastByteRcvd + 1
A byte cannot be read until that byte and all preceding bytes have been received.
If data is received in order, then NextByteExpected = LastByteRcvd + 1
Bytes to the left of LastByteRead are not buffered, since they have already been read by
the application.
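The two invariants can be written down and checked directly (the byte numbers below are made up for illustration):

```python
# Send-buffer invariant: a byte must be written before it is sent,
# and sent before it can be acknowledged.
def send_buffer_ok(last_byte_acked, last_byte_sent, last_byte_written):
    return last_byte_acked <= last_byte_sent <= last_byte_written

# Receive-buffer invariant: NextByteExpected can be at most one past LastByteRcvd.
def recv_buffer_ok(last_byte_read, next_byte_expected, last_byte_rcvd):
    return last_byte_read <= next_byte_expected <= last_byte_rcvd + 1

assert send_buffer_ok(100, 150, 200)        # acked <= sent <= written
assert not send_buffer_ok(150, 100, 200)    # cannot ack an unsent byte
assert recv_buffer_ok(10, 21, 20)           # in-order arrival: expected = rcvd + 1
assert not recv_buffer_ok(10, 22, 20)       # cannot expect beyond rcvd + 1
```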
TCP TRANSMISSION
TCP has two mechanisms to trigger the transmission of a segment.
They are
o Maximum Segment Size (MSS) - Silly Window Syndrome
o Timeout - Nagle’s Algorithm
Nagle’s Algorithm
If there is data to send but it amounts to less than the MSS, the sender may want to
wait some amount of time before sending the available data.
If we wait too long, then it may delay the process.
If we don’t wait long enough, it may end up sending small segments resulting in
Silly Window Syndrome.
The solution is to introduce a timer and to transmit when the timer expires
Nagle introduced an algorithm for solving this problem
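The decision rule can be sketched as a single function (a simplified reading of Nagle's algorithm; the function and variable names are assumptions):

```python
def nagle_should_send(queued_bytes, mss, unacked_in_flight):
    """Nagle's rule sketch: send a full-size segment immediately; send a small
    segment only when nothing is in flight, otherwise hold it and coalesce."""
    if queued_bytes >= mss:
        return True              # a full segment is always worth sending
    if not unacked_in_flight:
        return True              # idle connection: small segment goes out now
    return False                 # wait for the ACK; small writes coalesce

assert nagle_should_send(1460, 1460, True)    # full MSS: send
assert nagle_should_send(1, 1460, False)      # nothing in flight: send
assert not nagle_should_send(1, 1460, True)   # buffer the tinygram
```

The returning ACK acts as the timer: however fast or slow the network is, the next small segment goes out at most one round-trip after the previous one.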
Slow Start
Slow start is used to increase CongestionWindow exponentially from a cold start.
Source TCP initializes CongestionWindow to one packet.
TCP doubles the number of packets sent every RTT on successful transmission.
When ACK arrives for first packet TCP adds 1 packet to CongestionWindow and
sends two packets.
When two ACKs arrive, TCP increments CongestionWindow by 2 packets and sends
four packets and so on.
Instead of sending entire permissible packets at once (bursty traffic), packets are sent
in a phased manner, i.e., slow start.
Initially TCP has no idea about congestion, so it increases
CongestionWindow rapidly until there is a timeout. On timeout:
CongestionThreshold = CongestionWindow/ 2
CongestionWindow = 1
Slow start is repeated until CongestionWindow reaches CongestionThreshold;
thereafter CongestionWindow increases by 1 packet per RTT.
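The window growth can be sketched as follows (a simplified model in packets per RTT that ignores losses and timeouts; the threshold value is an illustrative assumption):

```python
def slow_start(threshold, rtts):
    """Congestion window sketch: double per RTT (slow start) until the window
    reaches the threshold, then add one packet per RTT (additive increase)."""
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < threshold else cwnd + 1
    return trace

trace = slow_start(threshold=8, rtts=6)   # -> [1, 2, 4, 8, 9, 10]
```

The exponential phase reaches the threshold in log2(threshold) round trips, after which growth becomes linear.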
Fast Retransmit
For example, packets 1 and 2 are received whereas packet 3 gets lost.
o Receiver sends a duplicate ACK for packet 2 when packet 4 arrives.
o Sender receives 3 duplicate ACKs after sending packet 6 and retransmits packet 3.
o When packet 3 is received, receiver sends cumulative ACK up to packet 6.
The congestion window trace will then show the characteristic sawtooth pattern.
DECbit
The idea is to evenly split the responsibility for congestion control between the
routers and the end nodes.
Each router monitors the load it is experiencing and explicitly notifies the end nodes
when congestion is about to occur.
This notification is implemented by setting a binary congestion bit in the packets that
flow through the router; hence the name DECbit.
The destination host then copies this congestion bit into the ACK it sends back to the
source.
The source checks how many ACKs have the DECbit set for the previous window’s packets.
If less than 50% of the ACKs have the DECbit set, the source increases its congestion
window by 1 packet; otherwise, it decreases the congestion window to 87.5% of its
current size.
A queue length of 1 is used as the trigger for setting the congestion bit: a router sets
the bit in a packet if its average queue length is greater than or equal to 1 at the time
the packet arrives.
The average queue length is measured over a time interval that spans the last
busy + idle cycle plus the current busy cycle.
It is calculated by dividing the area under the queue-length curve by the length of
the time interval.
Random Early Detection (RED)
Each router is programmed to monitor its own queue length, and when it detects that
there is congestion, it notifies the source to adjust its congestion window.
RED differs from the DECbit scheme in two ways:
a. In DECbit, explicit notification about congestion is sent to source, whereas
RED implicitly notifies the source by dropping a few packets.
b. DECbit drops arriving packets only when the queue is full (a tail drop policy),
whereas RED drops packets based on a drop probability in a random manner:
each arriving packet is dropped with some drop probability whenever the queue
length exceeds some drop level. This idea is called early random drop.
RED has two queue length thresholds that trigger certain activity: MinThreshold and
MaxThreshold
When a packet arrives at the gateway, the gateway compares AvgLen with these two
values according to the following rules.
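The per-packet decision can be sketched as follows (the maximum drop probability `max_p` and the linear ramp between the thresholds are standard RED details not spelled out in the notes, so treat them as assumptions):

```python
import random

def red_decision(avg_len, min_th, max_th, max_p=0.02, rng=random.random):
    """RED sketch: enqueue below MinThreshold, drop above MaxThreshold, and in
    between drop with a probability that grows linearly toward max_p."""
    if avg_len < min_th:
        return "enqueue"
    if avg_len >= max_th:
        return "drop"
    p = max_p * (avg_len - min_th) / (max_th - min_th)   # early random drop
    return "drop" if rng() < p else "enqueue"

assert red_decision(2, min_th=5, max_th=15) == "enqueue"   # below MinThreshold
assert red_decision(20, min_th=5, max_th=15) == "drop"     # above MaxThreshold
```

Passing `rng` explicitly makes the probabilistic middle region testable; a real router would also maintain AvgLen as a weighted running average of the instantaneous queue length.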
TCP vs UDP
Header Length: TCP has a variable-length (20-60 byte) header, whereas UDP has a
fixed-length 8-byte header.
Handshaking Techniques: TCP uses handshakes such as SYN, SYN-ACK, and ACK;
UDP is a connectionless protocol, i.e. it has no handshake.
Stream Type: The TCP connection is a byte stream; the UDP connection is a message
stream.
SOCKETS
A socket is one endpoint of a two-way communication link between two programs running on
the network. The socket mechanism provides a means of inter-process communication (IPC)
by establishing named contact points between which the communication takes place.
Just as the ‘pipe’ system call is used to create pipes, sockets are created using the ‘socket’
system call. A socket provides a bidirectional FIFO communication facility over the network.
A socket connecting to the network is created at each end of the communication. Each socket
has a specific address, composed of an IP address and a port number.
Sockets are generally employed in client-server applications. The server creates a socket,
attaches it to a network port address, and then waits for the client to contact it. The client
creates a socket and then attempts to connect to the server socket. When the connection is
established, transfer of data takes place.
Socket programming shows how to use socket APIs to establish communication links between
remote and local processes. The processes that use a socket can reside on the same system or
different systems on different networks. Sockets are useful for both stand-alone and network
applications.
A socket function is a function used in network programming to create a new socket
descriptor by specifying the address family, type of socket, and protocol to be used. It is a
crucial element in network communication for establishing connections between different
devices.
There are three types of sockets: stream sockets, datagram sockets, and raw sockets.
Types of Sockets
Stream Sockets (TCP):
These sockets use the Transmission Control Protocol (TCP), providing reliable, connection-oriented
communication. Data sent over a stream socket arrives in the same order it was sent, with error-
checking and retransmission mechanisms in place.
Datagram Sockets (UDP):
These sockets use the User Datagram Protocol (UDP), offering connectionless, unreliable
communication. Data sent over a datagram socket may arrive out of order, be duplicated, or even be
lost, but it is faster and requires less overhead than stream sockets.
Raw Sockets:
Raw sockets operate at the network layer, allowing direct access to lower-level protocols like IP.
They are typically used for tasks like custom protocol development, packet sniffing, or
implementing ping and traceroute utilities.
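The first two types can be created directly with the socket function described earlier (a minimal sketch; raw sockets are omitted because creating them requires administrator privileges):

```python
import socket

# Creating the two everyday socket types named above.
stream_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP
dgram_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)    # UDP

assert stream_sock.type == socket.SOCK_STREAM
assert dgram_sock.type == socket.SOCK_DGRAM
stream_sock.close()
dgram_sock.close()
```

The three arguments mirror the definition above: address family (AF_INET for IPv4), socket type, and protocol (left to the default for the chosen type).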
Common Use Cases
Client-Server Applications:
The most common use of sockets, where a server provides services (e.g., web hosting, file sharing),
and clients connect to the server to use these services.
Peer-to-Peer Communication:
Sockets enable direct communication between peers without relying on a central server (e.g., file-
sharing networks, decentralized applications).
Real-Time Communication:
Sockets are used in real-time applications like chat services, online gaming, and VoIP, where low-
latency communication is critical.
Blocking and Non-Blocking Sockets:
Blocking sockets pause execution until an operation (like recv()) is complete. Non-blocking
sockets allow the program to continue execution while waiting for an operation to complete.
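The difference can be seen with a pair of connected sockets (a minimal sketch using Python's socketpair; the one-byte message is illustrative):

```python
import socket

# The same recv() call blocks on a blocking socket but returns immediately,
# raising BlockingIOError, on a non-blocking one when no data is waiting.
a, b = socket.socketpair()
b.setblocking(False)              # non-blocking mode: never waits

try:
    b.recv(1)                     # nothing has been sent yet
    result = "data"
except BlockingIOError:
    result = "would block"        # the program is free to do other work

a.sendall(b"x")
b.setblocking(True)               # back to blocking mode
data = b.recv(1)                  # data is waiting, so this returns it
a.close()
b.close()
```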
Security:
Implementing encryption (e.g., SSL/TLS) over sockets is essential for secure communication,
especially in applications handling sensitive data.
Error Handling:
Proper error handling is vital in socket programming, as network conditions can lead to various
errors, such as timeouts, connection resets, or data loss
Summary
Sockets are essential for network programming, providing a versatile and powerful means of
communication between processes, both locally and across networks.
Understanding sockets and their behavior in different scenarios is key to developing robust
and efficient networked applications