CSC 501 Note - Part 2
Course Content
CSC 501 (2 Units) – Computer Networks II
Architecture of computer networks and network protocols, protocol layering, reliable transmission, congestion control, flow control, naming and addressing, unicast and multicast routing, network security, network performance, widely used protocols such as Ethernet, wireless LANs, IP, and HTTP. Bus structures and loop systems in computer networks. Examples and design considerations, data switching principles, broadcast techniques, and network structures for packet switching; protocols and descriptions of networks, e.g. ARPANET, DSC, etc. Prerequisite: CSC 206. 30h (L), 0h (T), 45h (P).
Many games and other means of entertainment are easily available on the internet.
Furthermore, Local Area Networks (LANs) offer and facilitate other forms of entertainment; for example, many players can connect through a LAN and play a particular game with each other from remote locations.
Inexpensive system
Shared resources mean a reduction in hardware costs. Shared files mean a reduction in memory requirements, which indirectly means a reduction in file storage expenses. A particular piece of software can be installed only once on the server and made available across all connected computers at once. This saves the expense of buying and installing the same software as many times as there are users.
Flexible access
A user can log on to a computer anywhere on the network and access their files. This gives the user flexibility in where they can be during the course of their routine.
Instant and multiple access
Computer networks allow multiple simultaneous accesses: many users can access the same information at the same time. Immediate commands, such as print commands, can be issued with the help of computer networks.
Resource and data sharing
Users can share the resources of their computers with each other through a network. For example, a person in one department of an organization can share or access the electronic data of another department through the network.
Email services
A computer network provides the facility to send and receive e-mail across the globe in a few seconds.
Mobile applications
By using mobile applications, such as cellular or wireless phones, you can communicate (exchange views and ideas) with one another.
Directory services
It provides the facility to store files in a centralized location to increase the speed of search operations worldwide.
Teleconferencing
This includes voice conferencing and video conferencing, which are based on networking. In teleconferencing, the participants need not be present at the same location.
Bus Topology
Bus topology is a network type in which every computer and network device is connected to a single cable.
Features:
It transmits data only in one direction.
Every device is connected to a single cable.
Advantages:
It is cost-effective (cheaper).
The cable required is the least compared to other network topologies.
Used in small networks.
It is easy to understand.
Easy to expand by joining two cables together.
Disadvantages:
If the cable fails, the whole network fails.
If network traffic is heavy or there are many nodes, the performance of the network decreases.
The cable has a limited length.
Ring Topology
It is called ring topology because it forms a ring as each computer is connected to another computer,
with the last one connected to the first. Each device has exactly two neighbours.
Features:
A number of repeaters are used and the transmission is unidirectional.
Data is transferred in a sequential manner, that is, bit by bit.
Advantages:
Transmission is not affected by high traffic or by adding more nodes, as only the node holding the
token can transmit data.
Cheap to install and expand.
Disadvantages:
Troubleshooting is difficult in ring topology.
Adding or deleting computers disturbs network activity.
Failure of one computer disturbs the whole network.
Star Topology
In this type of topology, all the computers are connected to a single hub through a cable. This hub is the central node and all other nodes are connected to it.
Features:
Every node has its own dedicated connection to the hub.
The hub acts as a repeater for data flow.
Can be used with twisted pair, Optical Fibre or coaxial cable.
Advantages:
Fast performance with few nodes and low network traffic.
Hub can be upgraded easily.
Easy to troubleshoot.
Easy to setup and modify.
Only the node that has failed is affected; the rest of the nodes can work smoothly.
Disadvantages:
Cost of installation is high.
Expensive to use.
If the hub fails then the whole network stops, because all the nodes depend on the hub.
Performance depends on the hub, that is, on its capacity.
Mesh Topology
It uses point-to-point connections to other nodes or devices.
Traffic is carried only between two devices or nodes to which it is connected.
Features:
Fully connected.
Robust.
Not flexible.
Advantages:
Each connection can carry its own data load.
It is robust.
Fault is diagnosed easily.
Provides security and privacy.
Disadvantages:
Installation and configuration is difficult.
Cabling cost is higher.
Bulk wiring is required.
Tree Topology
It has a root node and all other nodes are connected to it forming a hierarchy.
It is also called hierarchical topology.
It should at least have three levels to the hierarchy.
Hybrid Topology
A network structure whose design contains more than one topology is said to be a hybrid topology.
For example, if ring topology is used in one department of an office and star topology in another, connecting these topologies will result in a hybrid topology (ring topology plus star topology).
Features:
It is a combination of two or more topologies.
Inherits the advantages and disadvantages of the topologies included.
Advantages:
Reliable, as error detection and troubleshooting are easy.
Scalable as size can be increased easily.
Flexible.
Disadvantages:
Complex in design.
Costly.
Figure 2: Local Area Network
Internet
The internet is a type of world-wide computer network.
The internet is a collection of a vast number of connected computers that are spread across the world.
We can also say that, the Internet is a computer network that interconnects hundreds of millions of
computing devices throughout the world.
It is established as the largest network and sometimes called network of networks that consists of
numerous academic, business and government networks, which together carry various information.
Internet is a global computer network providing a variety of information and communication facilities,
consisting of interconnected networks using standardized communication protocols.
When two computers are connected over the Internet, they can send and receive all kinds of
information such as text, graphics, voice, video, and computer programs.
Figure 5: Some pieces of the Internet
PART 2: NETWORK ARCHITECTURE, PROTOCOLS, &
PROTOCOL LAYERING
Servers:
Refer to the computer systems that receive requests from the clients and process them. After the
processing is complete, the servers send a reply to the clients who sent the request.
The concept of clients and servers is essential in network design, i.e. network architecture.
Network Architecture
1. It is basically the physical and logical design which refers to the software, hardware, protocols and the
media of transmission of data. It refers to how computers are organized and how tasks are allocated
among these computers.
2. It is the design of a communication network. It is a framework for the specification of a network's physical
components and their functional organization and configuration, its operational principles and procedures,
as well as data formats used.
3. Network Architecture is the complete framework of an organization's computer network. The diagram of
the network architecture provides a full picture of the established network with detailed view of all the
resources accessible. It includes hardware components used for communication, cabling and device types,
network layout and topologies, physical and wireless connections, implemented areas and future plans. In
addition, the software rules and protocols also constitute the network architecture. This architecture is
always designed by a network manager/administrator in coordination with network engineers and
other design engineers.
Figure 7: Network Edge-Client/Server Network and Peer to Peer
PEER-TO-PEER ARCHITECTURE
In Peer-to-Peer network (architecture), a group of computers is connected together so that users can
share resources and information.
There is no central location (server) for authenticating users, storing files, or accessing resources.
This means that users must remember which computers in the workgroup have the shared resource or
information that they want to access.
In peer-to-peer architecture, tasks are allocated to every device on the network.
All of them are considered equal as there is no hierarchy in this network.
All have the same ability to use the resources available on this network.
It is mostly used for file sharing.
Each computer runs software that allows communication with all the other computers.
The first P2P network was Napster.
An example of a peer-to-peer architecture is a system of intelligent agents that collaborate to collect,
filter, and correlate information.
CLIENT/SERVER ARCHITECTURE
A client/server network is a system where one or more computers, called clients, connect to a central
computer, called the server, to share or use resources.
The client requests a service from server, which may include running an application, querying database,
printing a document, performing a backup or recovery procedure. The request made by the client is
handled by server.
A client/server network is one in which the files and resources are centralized. This means that the server
holds them and other computers (clients) can access them.
In Figure 7 above, we have several clients that request access from the server.
The server is a central device that provides services and data to the clients requesting them.
Servers are basically computers with fast processors and large memory.
Clients provide services to the end user by taking or consuming services from the server.
Advantages of a Network client/server Architecture
i. Resources and data security are controlled through the server.
ii. The client/server network is not restricted to a small number of computers, i.e. it can be scaled up to
accommodate hundreds and thousands of computers.
iii. Server can be accessed anywhere and across multiple platforms.
iv. The server system holds the shared files.
v. The server system can be scheduled to take the file backups automatically.
vi. Network access is provided only to authorized users through user security at the server.
vii. The server system is a kind of central repository for sharing a printer with clients.
viii. Internet access, e-mail routing and such other networking tasks are quite easily managed by the
server.
ix. The software applications shared by the server are accessible to the clients.
Sometimes layered (“tiered”) and peer-to-peer architectures are combined, where the nodes in particular
layers are in peer-to-peer relationships. For example, a multi-tiered architecture might include an enterprise
management layer, consisting of peer nodes for such things as network management, event management,
database management, Web server management, and workload balancing.
Client-Server architecture
In client-server architecture, there is an always-on host, called the server, which provides services
requested by many other hosts, called clients. There is usually a one-to-many (1:M) relationship between a
server and its clients; it is also possible for a client to use more than one server. In
this model, a system is decomposed into client and server processors or processes.
A classic example is the Web application for which an always-on Web server services requests from
browsers running on client hosts. When a Web server receives a request for an object from a client
host, it responds by sending the requested object to the client host.
Note that with the client-server architecture, clients do not directly communicate with each other; for
example, in the Web application, two browsers do not directly communicate with each other.
Another characteristic of the client-server architecture is that the server has a fixed, well-known
address, called an IP address. Because the server is always on, a client can always contact the server by
sending a packet to the server’s IP address.
Some of the better-known applications with client-server architecture include the Web, FTP, Telnet, and e-
mail.
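As a minimal sketch of the client-server pattern just described (this example is not part of the original notes; the port number, host name, and messages are hypothetical), here is how an always-on server and a requesting client might look in Python:

```python
# Minimal client-server sketch (illustrative only; port and host name are assumed).
import socket

SERVER_PORT = 12000  # assumed well-known port on which the server listens

def run_server():
    """Always-on server: waits for client requests and replies to each one."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", SERVER_PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()                 # a client has connected
            with conn:
                request = conn.recv(1024)             # read the client's request
                conn.sendall(b"reply to " + request)  # send the response

def run_client(server_host="server.example.com"):
    """Client: contacts the server's well-known address and requests a service."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((server_host, SERVER_PORT))
        cli.sendall(b"request for service")
        print(cli.recv(1024).decode())
```

Note how the clients only talk to the server, never directly to each other, matching the description above.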
Client-server architectures are commonly organized into layers referred to as “tiers”.
Tiered Architectures
Two-tier architecture. The system architecture consists of a data server layer and an application client
layer. Data access computation is associated with the data server layer, and the user interface is
associated with the client application layer. If most of the application logic is associated with the client
application layer, it is sometimes referred to as a "fat client." If it is associated with the data access server,
the application client layer is sometimes referred to as a “thin client.”
Three-tier architecture. The system architecture consists of data server layer, an application server layer
and a client application layer. The application server layer facilitates the separation of application logic
from presentation, and promotes distributed processing.
Multi-tier (n-tier) architecture. The system architecture is a superset of a three-tier architecture, and
includes additional layers for data and/or application servers.
P2P architecture
In P2P architecture, there is no dedicated server.
Pairs of hosts, called peers, communicate directly with each other.
Because the peers communicate without passing through a dedicated server, the architecture is called
peer-to-peer.
Many of today’s most popular and traffic-intensive applications are based on P2P architectures.
PROTOCOLS AND PROTOCOL LAYERING
Protocol
A protocol is a set of rules that governs (manages) data communications.
Protocols defines methods of communication, how to communicate, when to communicate etc.
A protocol is an agreement between the communicating parties on how communication is to proceed.
Important elements of protocols are
1. Syntax 2. Semantics 3. Timing
Syntax: Syntax means the format of the data, or the structure of how it is presented, e.g. the first eight bits are the sender
address, the next eight bits are the receiver address, and the rest of the bits are the message data.
Semantics: Semantics is the meaning of each section of bits, e.g. an address field indicates the route of
transmission or the final destination of the message.
Timing: Timing refers to when data can be sent and how fast it can be sent.
Some protocols also support message acknowledgement and data compression designed for reliable
and/or high-performance network communication.
Example: HTTP, IP, FTP etc…
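As a small illustration of protocol syntax and semantics, the sketch below packs and unpacks a toy header following the eight-bit sender/receiver layout used as the example above (the layout itself is only illustrative, not a real protocol):

```python
# Toy protocol "syntax": 1 byte sender address, 1 byte receiver address, then payload.
import struct

def make_message(sender: int, receiver: int, payload: bytes) -> bytes:
    # '!BB' = network byte order, two unsigned bytes (the header), followed by the data.
    return struct.pack("!BB", sender, receiver) + payload

def parse_message(message: bytes):
    # Semantics: the first byte means "sender address", the second "receiver address".
    sender, receiver = struct.unpack("!BB", message[:2])
    return sender, receiver, message[2:]

msg = make_message(10, 87, b"hello")
print(parse_message(msg))   # (10, 87, b'hello')
```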
Each layer has a "peer-to-peer" protocol that seems to represent (and carry out) the rest of
the network task, yet it performs only a specific part and delegates the rest to the layer beneath it,
via its interface, which defines the services that are provided to the layer above (and
beneath) it.
"Encapsulation": Each layer has its own PDU (Protocol Data Unit) that is passed (as a
parameter) to the layer beneath, which in turn adds a "header" (at layer 2 it also adds a "trailer")
before passing it to the next layer (except at the physical layer).
Why a "header" and "trailer"?
The physical movement of a PDU is "vertical" (down and up the stack), yet at each peer-to-peer
layer the user thinks of the information as moving "horizontally" (as if through a pipe).
Figure 2.4 An exchange using the OSI model
Physical Layer
The physical layer, the lowest layer of the OSI model, is concerned with the transmission and reception
of the unstructured raw bit stream over a physical medium/communication channel.
It describes the electrical/optical, mechanical, and functional interfaces to the physical medium, and
carries the signals for all of the higher layers. The design issues have to do with making sure that when
one side sends a 1 bit, it is received by the other side as a 1 bit, not as a 0 bit.
Typical questions here are
o how many volts should be used to represent a 1 and how many for a 0,
o how many nanoseconds a bit lasts,
o whether transmission may proceed simultaneously in both directions,
o how the initial connection is established and how it is torn down when both sides are finished,
o How many pins the network connector has and what each pin is used for.
The design issues here largely deal with mechanical, electrical, and timing interfaces, and the physical
transmission medium, which lies below the physical layer.
To summarize, the physical layer provides the following services or takes into account the following
factors:
Line coding / data (signal) encoding: How are the bits 0 and 1 to be represented? Data
encoding modifies the simple digital signal pattern (1s and 0s) used by the PC to better
accommodate the characteristics of the physical medium, and to aid in bit and frame
synchronization. It allows data to be sent by hardware devices that are optimized for digital
communications that may have discrete timing on the transmission link (a line-coding sketch follows this list).
Transmission technique (signal type): determines whether the encoded bits will be
transmitted by baseband (digital) or broadband (analog) signaling.
Physical medium transmission: transmits bits as electrical or optical signals appropriate for the
physical medium. What is the medium used, and what are its properties?
Modulation: the process of converting a signal from one form to another so that it can be physically
transmitted over a communication channel.
Start-stop signaling and flow control in asynchronous serial communication
Circuit switching and multiplexing: hardware control of multiplexed digital signals.
Carrier sensing and collision detection, whereby the physical layer detects carrier availability and
avoids the congestion problems caused by undeliverable packets
Signal equalization to ensure reliable connections and facilitate multiplexing
Forward error correction/channel coding such as error correction code
Bit interleaving to improve error correction
Bit synchronization: Is the transmission asynchronous or synchronous serial
communication?
Transmission type: Is the transmission serial or parallel?
Transmission mode control: Is the transmission simplex, half-duplex or full duplex?
Topology: What is the topology used (mesh, star, ring, bus or hybrid)?
Multiplexing: Is multiplexing used, and if so, what is its type (FDM, TDM)?
Interface: How are the two closely linked devices connected?
Bandwidth: Which of the two, baseband or broadband communication, is being used?
Auto-negotiation
Bit-by-bit delivery
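As an illustration of line coding, here is a minimal Manchester-style encoder (one of several possible schemes; the high/low convention chosen here is an assumption for the sketch, not a requirement of the notes):

```python
# Manchester-style line coding sketch: each bit becomes two signal levels,
# guaranteeing a transition in every bit period, which aids bit synchronization.
def manchester_encode(bits):
    signal = []
    for bit in bits:
        # Convention assumed here: 1 -> high-then-low, 0 -> low-then-high.
        signal.extend([1, 0] if bit == 1 else [0, 1])
    return signal

def manchester_decode(signal):
    bits = []
    for i in range(0, len(signal), 2):
        bits.append(1 if (signal[i], signal[i + 1]) == (1, 0) else 0)
    return bits

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data
```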
Data Link Layer
The data link layer provides error-free transfer of data frames from one node to another over the
physical layer, allowing layers above it to assume virtually error-free transmission over the link. That is,
The data link layer is responsible for moving frames from one hop (node) to the next.
To do this, the data link layer provides:
1. Media Access Control or management: Control the access to the physical medium among all
connected devices. It determines when the node "has the right" to use the physical medium.
2. Link establishment and termination: establishes and terminates the logical link between two
nodes.
3. Framing (frame delimiting): creates and recognizes frame boundaries (a byte-stuffing sketch follows this list).
4. Frame sequencing: transmits/receives frames sequentially.
5. Frame acknowledgment: provides/expects frame acknowledgments. Detects and recovers from
errors that occur in the physical layer by retransmitting non-acknowledged frames and handling
duplicate frame receipt.
6. Physical Addressing: incorporate sender/receiver addresses in the frame header.
7. Flow Control (frame traffic control): prevents a fast sender from flooding a slower receiver with
frames. Tells the transmitting node to "back off" (stop) when no frame buffers are available.
8. Error Control (frame error checking): increases physical layer reliability by adding mechanisms to
detect and retransmit damaged or lost frames using the trailer information. Checks received frames for
integrity.
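A minimal byte-stuffing sketch of framing (one common frame-delimiting technique; the FLAG and ESC byte values are illustrative, not prescribed by these notes):

```python
# Byte-stuffing framing sketch: a FLAG byte marks frame boundaries; any FLAG or
# ESC byte that appears in the payload is "stuffed" with an ESC byte in front of it.
FLAG, ESC = 0x7E, 0x7D

def frame(payload: bytes) -> bytes:
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)
        stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def deframe(data: bytes) -> bytes:
    payload, escaped = bytearray(), False
    for b in data[1:-1]:              # strip the two FLAG delimiters
        if escaped:
            payload.append(b)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            payload.append(b)
    return bytes(payload)

msg = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert deframe(frame(msg)) == msg
```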
Network Layer
Responsible for the source to destination delivery of individual packets, possibly across multiple
networks.
The network layer controls the operation of the subnet, deciding which physical path the data should
take based on network conditions, priority of service, and other factors.
To do this, the Network Layer provides:
Routing: routes packets over the subnet (the cloud of routers and switches), making optimal routing
decisions from source to destination.
Packet delivery: source host system to destination host system delivery, utilizing the data link layer
for node-to-node (hop-to-hop) delivery.
Logical-to-physical address mapping: translates logical addresses, or names, into physical addresses.
Physical addresses at the data link (device) level are not enough; we need to add the logical addresses
of the sender and receiver in the packet header.
Internetworking: resolves any network protocol conflicts as packets move across the subnet.
Subnet traffic control: routers (network layer intermediate systems) can instruct a sending station
to "throttle back" its frame transmission when the router's buffer fills up.
Packet fragmentation: if it determines that a downstream router's maximum transmission unit
(MTU) size is less than the packet size, a router can fragment a frame for transmission and
reassembly at the destination station.
Subnet usage accounting: has accounting functions to keep track of frames forwarded by subnet
intermediate systems, to produce billing information.
Transport Layer
The most important layer since it abstracts the complete details of the subnet to the user.
The transport layer is responsible for the delivery of a message from one process to another.
The transport layer ensures that messages are delivered error-free, in sequence, and with no losses or
duplications. It relieves the higher-layer protocols from any concern with the transfer of data
between them and their peers.
The size and complexity of a transport protocol depends on the type of service it can get from the
network layer. For a reliable network layer with virtual circuit capability, a minimal transport layer is
required. If the network layer is unreliable and/or only supports datagrams, the transport protocol
should include extensive error detection and recovery.
The transport layer provides:
Service Access Point (SAP) addressing: the network logical address (i.e. the IP address) identifies the
source and destination systems, not the source and destination user processes; hence we need another
addressing mechanism, SAP addresses within the system (e.g. ports), for user message delivery.
Message segmentation and reassembly (messages => segments):
Accepts a process (user) message from the (session) layer above it and splits the message into smaller
units called segments (if it is not already small enough), each with a sequence number
to aid in reassembling the related segments (which may arrive in incorrect order) into the original message at the destination's
transport layer. It then passes the smaller units down to the network layer. Note: the Maximum
Transmission Unit (MTU) is typically 1500 bytes (for Ethernet), i.e. the maximum size a data packet can have on that link.
Typically, the transport layer can accept relatively large messages, but there are strict message
size limits imposed by the network (or lower) layer. Consequently, the transport layer must
break up the messages into smaller units, or packets, prepending a header to each packet.
The transport layer header information must then include control information, such as
message start and message end flags, to enable the transport layer on the other end to
recognize message boundaries.
In addition, if the lower layers do not maintain sequence, the transport header must contain
sequence information to enable the transport layer on the receiving end to get the pieces back
together in the right order before handing the received message up to the layer above.
Message acknowledgment (Connection control): provides reliable end-to-end message delivery
with acknowledgments.
1. Connection-oriented reliable service: e.g. connection-oriented TCP, which
guarantees in-order delivery with ACKs of segments.
2. Connectionless unreliable service: no ACK, no guarantee.
Message traffic control (flow control): tells the transmitting station to "back off" when no
message buffers are available. This works as in the data link layer, but at the message level, between the
end users (processes).
Error control: like the data link layer, but for process-to-process delivery of messages. Errors (damaged, lost or
duplicate messages) cause retransmission of messages.
Session Layer
The session layer is responsible for dialog control and synchronization. It serves as the network
dialog controller.
It provides:
Session establishment, maintenance and termination: allows two application processes on
different machines (or stations) to establish, use and terminate a connection, called a session.
Session support: performs the functions that allow these processes to communicate over the
network, performing security, name recognition, logging, and so on.
It establishes, maintains and synchronizes the interaction among communicating processes on
different systems.
o Dialog control: half-duplex or full-duplex.
o Synchronization: checkpoints are added to the data stream, dividing it into units that are
acknowledged independently; this makes communication robust in case of crashes.
Presentation Layer
The presentation layer is responsible for translation, compression, and encryption.
Viewed as the translator for the network, the presentation layer formats the data to be presented to
the application layer. It may translate data from a format used by the application layer into a common
format at the sending station, and then translate the common format to a format known to the
application layer at the receiving station.
The presentation layer provides:
Translation (Character code translation): for example, ASCII to EBCDIC. Abstract syntax notation
(ASN). ASN is a standard interface description language (IDL) for defining data structures that can
be serialized and deserialized in a cross-platform way.
Data conversion: bit order, CR-CR/LF, integer-floating point, and so on.
Data compression: reduces the number of bits that need to be transmitted on the network, so as
to ensure efficient utilization of bandwidth
Data encryption: encrypt data so as to secure information transmitted for privacy. For example,
password encryption.
Application Layer
The application layer is responsible for providing services to the user.
The application layer serves as the window for users and application processes to access network
services.
Examples of application layer protocols include SMTP, FTP, HTTP, DNS, SNMP, and TELNET.
When compared to the OSI model, the TCP/IP model is shown to have five layers; see Figure __.
As we can see from the above figure, presentation and session layers are not there in TCP/IP model.
Also note that the Network Access Layer in TCP/IP model combines the functions of Datalink Layer and
Physical Layer.
combining the best of UDP and TCP
TCP/IP Network Interface Layer (Network Access Layer)- Datalink + Physical Layer
Network Access Layer defines details of how data is physically sent through the network, including
how bits are electrically or optically signalled by hardware devices that interface directly with a
network medium, such as coaxial cable, optical fiber, or twisted pair copper wire.
The protocols included in Network Access Layer are Ethernet, Token Ring, FDDI, X.25, Frame Relay etc.
Physical layer: deliberately left open; it can be a LAN, MAN or WAN technology.
A mapping of the protocols and services to both TCP/IP and the OSI model is given in Figure __ and Figure__.
OSI and TCP/IP – Comparing and Contrasting
TCP/IP ADDRESSING
Four levels of addresses are used in an internet employing the TCP/IP protocols. These are (1) physical
addresses, (2) logical addresses, (3) port addresses, and (4) specific addresses.
Example 2.1
In Figure 2.19 a node with physical address 10 sends a frame to a node with physical address 87. The two nodes
are connected by a link (bus topology LAN). As the figure shows, the computer with physical address 10 is the
sender, and the computer with physical address 87 is the receiver.
Figure 2.18 Relationship of layers and addresses in TCP/IP
Example 2.2
As we will see in Chapter 13, most local-area networks use a 48-bit (6-byte) physical address written as 12
hexadecimal digits; every byte (2 hexadecimal digits) is separated by a colon, as shown below:
Example 2.3
Figure 2.20 shows a part of an internet with two routers connecting three LANs. Each device (computer or
router) has a pair of addresses (logical and physical) for each connection. In this case, each computer is
connected to only one link and therefore has only one pair of addresses. Each router, however, is connected
to three networks (only two are shown in the figure). So each router has three pairs of addresses, one for
each connection.
Figure 2.20 IP addresses
Example 2.4
Figure 2.21 shows two computers communicating via the Internet. The sending computer is running three
processes at this time with port addresses a, b, and c. The receiving computer is running two processes at
this time with port addresses j and k. Process a in the sending computer needs to communicate with
process j in the receiving computer. Note that although physical addresses change from hop to hop,
logical and port addresses remain the same from the source to destination.
Figure 2.21 Port addresses
NOTE:
The physical addresses will change from hop to hop, but the logical addresses usually remain the same.
Example 2.5
As we will see in Chapter 23, a port address is a 16-bit address represented by one decimal number, as
shown.
NOTE:
The physical addresses change from hop to hop, but the logical and port addresses usually remain the same.
Processing Delay
The time required to examine the packet’s header and determine where to direct the packet is part
of the processing delay.
The processing delay can also include other factors, such as the time needed to check for bit-level
errors in the packet that occurred in transmitting the packet’s bits from the upstream node to
the router.
It is typically on the order of microseconds or less.
Queuing Delay
At the queue, the packet experiences a queuing delay as it waits to be transmitted onto the link.
The length of the queuing delay of a specific packet will depend on the number of earlier arriving
packets that are queued and waiting for transmission onto the link.
If the queue is empty and no other packet is currently being transmitted, then our packet’s queuing
delay will be zero.
On the other hand, if the traffic is heavy and many other packets are also waiting to be transmitted,
the queuing delay will be long.
Queuing delays can be on the order of microseconds to milliseconds in practice.
Transmission Delay
Assume that packets are transmitted in a first-come-first-served manner, as is common in packet-switched
networks.
A packet can be transmitted only after all the packets that arrived before it have been
transmitted.
Denote the length of the packet by L bits, and denote the transmission rate of the link from router to
router by R bits/sec.
The transmission delay is L/R.
Transmission delays are typically on the order of microseconds to milliseconds in practice.
Propagation Delay
Once a bit is pushed into the link, it needs to propagate to router B. The time required to propagate
from the beginning of the link to router B is the propagation delay.
The bit propagates at the propagation speed of the link.
The propagation speed depends on the physical medium of the link.
Propagation delays are on the order of milliseconds.
Propagation delay = d / s, where d is the length of the physical link and s is the propagation speed in the medium.
Packet Loss
Packet loss is the failure of one or more transmitted packets to arrive at their destination.
This event can cause noticeable effects in all types of digital communications.
The loss of data packets depends on the switch queue; packet loss increases as the
traffic intensity increases.
It affects the performance of the network.
Throughput
Throughput or Network Throughput is the rate of successful message delivery over a communication
channel.
The data these messages belong to may be delivered over a physical or logical link or it can pass
through a certain network node.
Throughput is usually measured in bits per second (bit/s or bps), and sometimes in data packets per
second (p/s or pps) or data packets per time slot.
Example
In this problem, we consider sending real-time voice from Host A to Host B over a packet-switched
network (VoIP). Host A converts analog voice to a digital 64 kbps bit stream on the fly. Host A then
groups the bits into 56-byte packets. There is one link between Hosts A and B; its transmission rate is
2 Mbps and its propagation delay is 10 msec. As soon as Host A gathers a packet, it sends it to Host
B. As soon as Host B receives an entire packet, it converts the packet’s bits to an analog signal. How
much time elapses from the time a bit is created (from the original analog signal at Host A) until the
bit is decoded (as part of the analog signal at Host B)?
Since this is a packet switched network, the data will be transmitted packet by packet.
A packet is 56 bytes (448 bits) and the analog-to-digital conversion rate is 64 kbps, thus the preparation time
PT for a packet is 448/(64*1000) = 0.007 sec = 7 msec.
The transmission delay TD for a packet is (size or length of packet)/(speed or transmission rate), so TD =
(56*8)/(2*1000*1000) = 0.000224 s = 0.224 msec.
Propagation delay PD = 10 msec (given in the problem).
Finally, the total time that elapses from the time a bit is created until the bit is decoded is PT+TD+PD =
7 + 0.224 + 10 = 17.224 msec.
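The same calculation can be checked with a few lines of Python (the numbers are exactly those given in the example above):

```python
# Total end-to-end delay for the VoIP example: packetization + transmission + propagation.
PACKET_BITS   = 56 * 8        # 56-byte packet
ENCODING_RATE = 64_000        # 64 kbps analog-to-digital conversion
LINK_RATE     = 2_000_000     # 2 Mbps transmission rate
PROP_DELAY    = 0.010         # 10 msec propagation delay

packetization_delay = PACKET_BITS / ENCODING_RATE   # 0.007 s   (7 msec)
transmission_delay  = PACKET_BITS / LINK_RATE        # 0.000224 s (0.224 msec)
total = packetization_delay + transmission_delay + PROP_DELAY
print(f"{total * 1000:.3f} msec")                    # 17.224 msec
```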
Fig.7: Reliable data transfer commands
The sending side of the data transfer protocol will be invoked from above by a call to rdt_send().
On the receiving side, rdt_rcv() will be called when a packet arrives from the receiving side of the
channel.
When the rdt protocol wants to deliver data to the upper layer, it will do so by calling
deliver_data().
Both the send and receive sides of rdt send packets to the other side by a call to udt_send().
The sender and receiver FSMs (Finite State Machines) have only one state.
The arrows in the FSM description indicate the transition of the protocol from one state to
another. (Since each FSM has just one state, a transition is necessarily from the one state
back to itself).
The event causing the transition is shown above the horizontal line labeling the transition,
and the action(s) taken when the event occurs are shown below the horizontal line.
The sending side of rdt simply accepts data from the upper-layer via the rdt_send(data)
event, puts the data into a packet (via the action make_pkt(packet,data)) and sends the
packet into the channel.
On the receiving side, rdt receives a packet from the underlying channel via the
rdt_rcv(packet) event, removes the data from the packet using the action
extract(packet,data) and passes the data up to the upper-layer.
Also, all packet flow is from the sender to the receiver; with a perfectly reliable channel there is
no need for the receiver side to provide any feedback to the sender since nothing can go
wrong.
Reliable Data Transfer over a Channel with Bit Errors: rdt2.0
A more realistic model of the underlying channel is one in which bits in a packet may be
corrupted.
Such bit errors typically occur in the physical components of a network as a packet is
transmitted, propagates, or is buffered.
We'll continue to assume for the moment that all transmitted packets are received (although
their bits may be corrupted) in the order in which they were sent.
Before developing a protocol for reliably communicating over such a channel, first consider
how people might deal with such a situation.
Consider how you yourself might dictate a long message over the phone. In a typical scenario,
the message taker might say ``OK'' after each sentence has been heard, understood, and
recorded. If the message taker hears a garbled sentence, you're asked to repeat the garbled
sentence. This message dictation protocol uses both positive acknowledgements (``OK'') and
negative acknowledgements (``Please repeat that'').
These control messages allow the receiver to let the sender know what has been received
correctly, and what has been received in error and thus requires repeating. In a computer
network setting, reliable data transfer protocols based on such retransmission are known as ARQ
(Automatic Repeat reQuest) protocols.
Fundamentally, two additional protocol capabilities are required in ARQ protocols to handle
the presence of bit errors:
Error detection: First, a mechanism is needed to allow the receiver to detect when bit
errors have occurred. UDP transport protocol uses the Internet checksum field for exactly
this purpose. Error detection and correction techniques allow the receiver to detect, and
possibly correct packet bit errors.
Receiver feedback: Since the sender and receiver are typically executing on different
end systems, possibly separated by thousands of miles, the only way for the sender to
learn of the receiver's view of the world (in this case, whether or not a packet was received
correctly) is for the receiver to provide explicit feedback to the sender. The positive (ACK)
and negative acknowledgement (NAK) replies in the message dictation scenario are an
example of such feedback. Our rdt2.0 protocol will similarly send ACK and NAK packets
back from the receiver to the sender
If an ACK packet is received (the notation rdt_rcv(rcvpkt) && isACK(rcvpkt)), the sender
knows the most recently transmitted packet has been received correctly and thus the
protocol returns to the state of waiting for data from the upper layer.
If a NAK is received, the protocol retransmits the last packet and waits for an ACK or NAK to
be returned by the receiver in response to the retransmitted data packet.
It is important to note that when the receiver is in the wait-for-ACK-or-NAK state, it cannot
get more data from the upper layer; that will only happen after the sender receives an ACK
and leaves this state.
Thus, the sender will not send a new piece of data until it is sure that the receiver has
correctly received the current packet.
Because of this behaviour, protocols such as rdt2.0 are known as stop-and-wait protocols.
The receiver-side FSM for rdt2.0 still has a single state.
On packet arrival, the receiver replies with either an ACK or a NAK, depending on whether
or not the received packet is corrupted.
In the above figure the notation rdt_rcv(rcvpkt) && corrupt(rcvpkt) corresponds to the
event where a packet is received and is found to be in error.
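A minimal Python sketch of the stop-and-wait idea in rdt2.0 follows. It is an illustrative simulation, not the textbook's FSM code: the checksum and the lossy channel are toy helpers assumed for the sketch.

```python
# Stop-and-wait sketch in the spirit of rdt2.0: the sender transmits one packet,
# then waits for an ACK or NAK before accepting the next piece of data.
import random

def checksum(data: bytes) -> int:
    return sum(data) % 256                     # toy checksum for illustration

def noisy_channel(pkt):
    data, chk = pkt
    if random.random() < 0.2:                  # corrupt roughly 20% of packets
        data = bytes([data[0] ^ 0xFF]) + data[1:]
    return data, chk

def rdt2_send(data: bytes, receiver):
    while True:                                # stop and wait
        reply = receiver(noisy_channel((data, checksum(data))))
        if reply == "ACK":
            return                             # move on to the next message
        # on NAK, loop and retransmit the same packet

def rdt2_receive(pkt):
    data, chk = pkt
    if checksum(data) == chk:
        print("delivered:", data)
        return "ACK"
    return "NAK"

rdt2_send(b"hello", rdt2_receive)
```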
Fig 10 rdt 2.1 Sender
The rdt2.1 sender and receiver FSM's each now have twice as many states as rdt2.0.
This is because the protocol state must now reflect whether the packet currently being sent
(by the sender) or expected (at the receiver) should have a sequence number of 0 or 1
Note that the actions in those states where a 0-numbered packet is being sent or expected are mirror
images of those where a 1-numbered packet is being sent or expected; the only differences have to
do with the handling of the sequence number
Protocol rdt2.1 uses both positive and negative acknowledgements from the receiver to the
sender
A negative acknowledgement is sent whenever a corrupted packet, or an out of order packet,
is received
We can accomplish the same effect as a NAK if instead of sending a NAK, we instead send an
ACK for the last correctly received packet
A sender that receives two ACKs for the same packet (i.e., receives duplicate ACKs) knows that
the receiver did not correctly receive the packet following the packet that is being ACKed
twice. Our NAK-free reliable data transfer protocol for a channel with bit errors is rdt2.2.
Reliable Data Transfer over a Lossy Channel with Bit Errors: rdt3.0
Suppose now that in addition to corrupting bits, the underlying channel can lose packets as
well.
Two additional concerns must now be addressed by the protocol: how to detect packet loss
and what to do when this occurs.
The use of checksumming, sequence numbers, ACK packets, and retransmissions – the
techniques already developed in rdt 2.2-will allow us to answer the latter concern. Handling
the first concern will require adding a new protocol mechanism.
There are many possible approaches towards dealing with packet loss.
Suppose that the sender transmits a data packet and either that packet, or the
receiver's ACK of that packet, gets lost.
In either case, no reply is forthcoming at the sender from the receiver.
If the sender is willing to wait long enough so that it is certain that a packet has been lost, it
can simply retransmit the data packet.
But how long must the sender wait to be certain that something has been lost? It must clearly
wait at least as long as a round trip delay between the sender and receiver (which may include
buffering at intermediate routers or gateways) plus whatever amount of time is needed to
process a packet at the receiver.
If an ACK is not received within this time, the packet is retransmitted.
Note that if a packet experiences a particularly large delay, the sender may retransmit the
packet even though neither the data packet nor its ACK have been lost.
This introduces the possibility of duplicate data packets in the sender-to-receiver
channel.
Happily, protocol rdt2.2 already has enough functionality (i.e., sequence numbers) to handle
the case of duplicate packets.
From the sender's viewpoint, retransmission is a solution. The sender does not know
whether a data packet was lost, an ACK was lost, or if the packet or ACK was simply overly
delayed.
In all cases, the action is the same: retransmit. In order to implement a time-based
retransmission mechanism, a countdown timer will be needed that can interrupt the
sender after a given amount of time has expired.
The sender will thus need to be able to (i) start the timer each time a packet (either a first
time packet, or a retransmission) is sent, (ii) respond to a timer interrupt (taking
appropriate actions), and (iii) stop the timer.
The existence of sender-generated duplicate packets and packet (data, ACK) loss
also complicates the sender's processing of any ACK packet it receives.
If an ACK is received, how is the sender to know if it was sent by the receiver in response to
its (sender's) own most recently transmitted packet, or is a delayed ACK sent in response to
an earlier transmission of a different data packet? The solution to this dilemma is to
augment the ACK packet with an acknowledgement field. When the receiver generates an
ACK, it will copy the sequence number of the data packet being ACKed into this
acknowledgement field. By examining the contents of the acknowledgment field, the
sender can determine the sequence number of the packet being positively acknowledged.
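A minimal sketch of the rdt3.0 timeout-and-retransmit idea with the alternating-bit (0/1) sequence number is shown below. It is an illustrative simulation over a UDP socket with assumed packet layout, not the textbook's FSM; a matching receiver is assumed to send back an ACK carrying the sequence number it acknowledges.

```python
# rdt3.0-style sender sketch: send a packet with the current sequence number,
# start a timer, and retransmit if no matching ACK arrives before the timeout.
import socket

TIMEOUT = 1.0          # seconds; assumed to exceed RTT plus receiver processing time

def rdt3_send(data: bytes, seq: int, sock: socket.socket, dest):
    pkt = bytes([seq]) + data                  # 1-byte sequence number + payload
    sock.settimeout(TIMEOUT)                   # simplified timer: applies to each receive attempt
    while True:
        sock.sendto(pkt, dest)                 # (re)transmit and (re)start the timer
        try:
            while True:
                ack, _ = sock.recvfrom(16)
                if ack and ack[0] == seq:      # ACK carries the sequence number it acknowledges
                    return 1 - seq             # alternate the bit for the next packet
                # a duplicate or old ACK is ignored; keep waiting
        except socket.timeout:
            pass                               # timer expired: loop and retransmit the packet
```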
Fig 12 rdt 3.0 Sender
Fig. 13 Operation of rdt3.0, the alternating-bit protocol
Circuit Switching
Circuit switching is used in public telephone networks and is the basis for private networks built on
leased-lines.
Circuit switching was developed to handle voice traffic, but it can also carry digital data (although inefficiently).
With circuit switching a dedicated path is established between two stations for communication.
Switching and transmission resources within the network are reserved for the exclusive use of the
circuit for the duration of the connection.
The connection is transparent once it is established; it appears to the attached devices as if there were a
direct connection.
Communication via circuit switching involves three phases:
1. Circuit Establishment
2. Data Transfer
3. Circuit Disconnect
A connection path must be established before data transmission begins. Nodes must have switching
capacity and channel capacity to establish the connection.
Circuit switching is inefficient
1. Channel capacity dedicated for duration of connection
2. If no data, capacity wasted
Set up (connection) takes time
Once connected, transfer is transparent to the users
1. Data is transmitted at a fixed data rate with no delay (except for the propagation delay)
Developed for voice traffic (phone)
1. May also be used for data traffic via modem
Interconnection of telephones within a building or office.
In circuit switching, a direct physical connection between two devices is created by space division
switches, time-division switches, or both.
Figure14: Time Division Switching
Packet Switching
Packet switching was designed to provide a more efficient facility than circuit-switching for bursty data
traffic.
With packet switching, a station transmits data in small blocks, called packets.
At each node packets are received, stored briefly (buffered) and passed on to the next node.
Store-and-forward mechanism.
Each packet contains some portion of the user data plus control info needed for proper functioning of
the network.
A key element of packet-switching networks is whether the internal operation is datagram or virtual
circuit (VC).
1. With internal VCs, a route is defined between two endpoints and all packets for that VC
follow the same route.
2. Without internal VCs, each packet is treated independently, and packets intended for the
same destination may follow different routes.
Examples of packet switching networks are X.25, Frame Relay, ATM and IP.
Station breaks long message into packets. Packets sent one at a time to the network.
Packets handled in two ways:
1. Datagram
Each packet treated independently
Packets can take any practical route
Packets may arrive out of order
Packets may go missing
Up to receiver to re-order packets and recover from missing packets
2. Virtual Circuit
Preplanned route established before any packets sent.
Once route is established, all the packets between the two communicating parties
follow the same route through the network
Call request and call accept packets establish connection (handshake)
Each packet contains a Virtual Circuit Identifier (VCI) instead of destination address
No routing decisions required for each packet
Clear request to drop circuit
Not a dedicated path.
Message Switching
This technique lies somewhere in the middle between circuit switching and packet switching.
In message switching, the whole message is treated as a data unit and is transferred in its entirety.
A switch working on message switching, first receives the whole message and buffers it until there are
resources available to transfer it to the next hop.
If the next hop does not have enough resources to accommodate a large message, the message is
stored and the switch waits.
Connection-oriented method
Connection-oriented communication includes the steps of setting up a call from one computer to
another, transmitting/receiving data, and then releasing the call, just like a voice phone call.
However, the network connecting the computers is a packet switched network, unlike the phone
system's circuit switched network.
Connection-oriented communication is done in one of two ways over a packet switched network:
1. Without virtual circuits. 2. With virtual circuits.
Connectionless method
Connectionless communication is just packet switching where no call establishment and release occur.
A message is broken into packets, and each packet is transferred separately. Moreover, the packets
can travel different routes to the destination since there is no connection.
Connectionless service is typically provided by the UDP (User Datagram Protocol). The packets
transferred using UDP are also called datagrams.
PROCESSES COMMUNICATING
Process
A process is an instance of a program running on a computer; in other words, a process is a program
under execution.
When processes are running on the same end system, they can communicate with each other with
interprocess communication, using rules that are governed by the end system’s operating system.
Processes on two different end systems communicate with each other by exchanging messages across
the computer network.
A sending process creates and sends messages into the network; a receiving process receives these
messages and possibly responds by sending messages back.
In the context of a communication session between a pair of processes, the process that initiates the
communication is called the client. The process that waits to be contacted to begin the session is the
server.
The client is a process (program) that sends a message to a server process (program), requesting that
the server perform a task (service).
A server is a process (program) that fulfils the client's request by performing the task requested by the
client. Server programs generally receive requests from client programs, execute them and dispatch
responses to client requests.
A process sends messages into, and receives messages from, the network through a software interface called
a socket.
A process is similar to a house and its socket is similar to its door. When a process wants to send a message
to another process on another host, it shoves (passes) the message out its door (socket). This sending
process assumes that there is a transportation infrastructure on the other side of its door that will transport
the message to the door of the destination process. Once the message arrives at the destination host, the
message passes through the receiving process’s door (socket), and the receiving process then acts on the
message.
Addressing Processes
To identify the receiving process, two pieces of information need to be specified:
1. the address of the host and
2. an identifier that specifies the receiving process in the destination host.
In the Internet, the host is identified by its IP address.
An IP address is a 32-bit address that uniquely identifies the host.
In addition to knowing the address of the host to which a message is destined, the sending process must
also identify the receiving process (more specifically, the receiving socket) running in the host.
This information is needed because, in general, a host could be running many network applications.
Throughput
Throughput is the rate at which the sending process can deliver bits to the receiving process.
With a guaranteed-throughput service, the transport protocol ensures that the available throughput is always at least r bits/sec, for some specified rate r.
Timing
A transport-layer protocol can also provide timing guarantees.
An example guarantee might be that every bit that the sender pumps into the socket arrives at the
receiver's socket no more than 100 msec later. Such a service would be appealing to interactive real-
time applications, such as Internet telephony, virtual environments, teleconferencing and multiplayer
games, all of which require tight timing constraints on data delivery in order to be effective.
Security
Finally, a transport protocol can provide an application with one or more security services.
For example, in the sending host, a transport protocol can encrypt all data transmitted by the sending
process, and in the receiving host, the transport-layer protocol can decrypt the data before delivering
the data to the receiving process.
Transport-Layer services
A transport-layer protocol provides logical communication between application processes
running on different hosts.
By logical communication, we mean that application processes communicate with each other by using the
service provided by the transport layer to send messages to each other, free
from the worry of the details of the physical infrastructure used to carry these messages.
Transport-layer protocols are implemented in the end systems but not in network routers.
On the sending side, the transport layer converts the application-layer messages it receives
from a sending application process into transport-layer packets, known as transport-layer
segments.
On the receiving side, the transport layer reassembles segments into messages, passes to
application layer.
A network-layer protocol provides logical communication between hosts.
The job of gathering data chunks at the source host from different sockets, encapsulating each data chunk
with header information (that will later be used in demultiplexing) to create segments, and passing the
segments to the network layer is called multiplexing.
At the receiving end, the transport layer examines these fields to identify the receiving socket and
then directs the segment to that socket. This job of delivering the data in a transport-layer
segment to the correct socket is called demultiplexing.
The transport layer in the middle host in Figure 1 must demultiplex segments arriving from the
network layer below to either process P1 or P2 above; this is done by directing the arriving
segment's data to the corresponding process's socket.
The transport layer in the middle host must also gather outgoing data from these sockets, form
transport-layer segments, and pass these segments down to the network layer.
Endpoint Identification
Sockets must have unique identifiers.
Each segment must include header fields identifying the socket, these header fields are the
source port number field and the destination port number field.
Each port number is a 16-bit number:0 to 65535.
The transport layer in Host A creates a segment containing the source port, the destination port, and data, and
then passes it to the network layer in Host A.
The transport layer in Host B examines the destination port number and delivers the segment to the socket
identified by port 46428.
Note: a UDP socket is fully identified by a two-tuple consisting of
1. a destination IP address
2. a destination port number
Source port number from Host A is used at Host B as "return address".
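A small Python illustration of this connectionless demultiplexing: the receiving host directs each datagram to the socket bound to the destination port, and the source address serves as the "return address". The loopback address and the port used here are arbitrary examples, not values from the notes.

```python
# UDP demultiplexing sketch: the OS delivers each datagram to the socket bound
# to the destination port, regardless of who sent it.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 46428))          # the two-tuple (IP, port) identifies this UDP socket

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 46428))

data, src_addr = receiver.recvfrom(1024)     # src_addr = (source IP, source port): the "return address"
receiver.sendto(b"reply", src_addr)          # the reply goes back using the source port
print(sender.recvfrom(1024)[0])              # b'reply'
```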
Connection-Oriented Multiplexing and Demultiplexing
Each TCP connection has exactly two end-points.
This means that two arriving TCP segments with different source IP addresses or source port
numbers will be directed to two different sockets, even if they have the same destination port
number.
So a TCP socket is identified by a four-tuple:
1. source IP address
2. source port #
3. destination IP address
4. destination port #
Whereas a UDP socket is identified by only a two-tuple:
1. destination IP address
2. destination port #
Fig. 4: Two clients, using the same destination port number (80) to communicate with the same
Web server application
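A brief Python sketch of why the four-tuple matters: two client connections to the same server port are handed to different connection sockets, distinguished by their source IP address and source port. The loopback address and port 8080 are illustrative stand-ins for the Web server's port 80.

```python
# TCP demultiplexing sketch: the listening socket yields a distinct connected
# socket for each (source IP, source port) pair, even for the same destination port.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))
server.listen()

client1 = socket.create_connection(("127.0.0.1", 8080))
client2 = socket.create_connection(("127.0.0.1", 8080))

conn1, addr1 = server.accept()   # addr1 = (source IP, source port) of client1
conn2, addr2 = server.accept()   # a different source port, hence a different socket
print(addr1, addr2)              # same destination port, different four-tuples
```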
The port numbers allow the destination host to pass the application data to the correct
process running on the destination end system (that is, they are used to perform the demultiplexing
function).
The length field specifies the number of bytes in the UDP segment (header plus data).
The checksum is used by the receiving host to check whether errors have been introduced
into the segment.
How to calculate (find) checksum:
The UDP checksum is calculated on the sending side by summing all the 16-bit words in the
segment, with any overflow being wrapped around; the 1's complement of this sum is then
taken and the result is placed in the checksum field inside the segment.
At the receiver side, all 16-bit words in the segment, including the checksum, are added; if
the result is 1111 1111 1111 1111, the segment is valid, otherwise the segment contains an
error.
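A minimal Python sketch of this 16-bit one's-complement checksum arithmetic (it illustrates only the arithmetic described above and omits the pseudo-header that the real UDP checksum also covers; the sample words are arbitrary):

```python
# 16-bit one's-complement sum with end-around carry, as used by the Internet/UDP checksum.
def ones_complement_sum(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap any overflow back into the sum
    return total

def udp_checksum(words):
    return ~ones_complement_sum(words) & 0xFFFF    # sender: 1's complement of the sum

# Sender computes the checksum over the 16-bit words of the segment:
segment = [0x4500, 0x0073, 0x0000]
chk = udp_checksum(segment)

# Receiver adds all words including the checksum; a valid segment gives 0xFFFF (all 1s).
assert ones_complement_sum(segment + [chk]) == 0xFFFF
```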
A URL has two components: the hostname of the server that houses the object and the object's
path name.
For example, the URL http://www.someSchool.edu/someDepartment/picture.gif has www.someSchool.edu as the
hostname and /someDepartment/picture.gif as the path name.
Request-response behaviour
When a user requests a Web page (for example, clicks on a hyperlink), the browser sends
HTTP request messages for the objects in the page to the server.
The server receives the requests and responds with HTTP response messages that contain
the objects.
HTTP uses TCP as its underlying transport protocol.
The HTTP client first initiates a TCP connection with the server.
Once the connection is established, the browser and the server processes access TCP
through their socket interfaces.
HTTP follows client/server model
client: browser that requests, receives, (using HTTP protocol) and “displays” Web objects
server: Web server sends (using HTTP protocol) objects in response to requests
HTTP connection types
1. Non-persistent HTTP
2. Persistent HTTP
Non-persistent HTTP
A non-persistent connection is one that is closed after the server sends the requested object to
the client. In other words, the connection is used for exactly one request and one response.
Downloading multiple objects therefore requires multiple connections.
Non-persistent connections are the default mode for HTTP/1.0.
suppose user enters URL: "www.someSchool.edu/someDepartment/home.index"
Above link contains text and references to 10 jpeg images.
1. The HTTP client process initiates a TCP connection to the server www.someSchool.edu on port number 80,
which is the default port number for HTTP. Associated with the TCP connection, there will be a socket at
the client and a socket at the server.
2. The HTTP client sends an HTTP request message to the server via its socket. The request message includes
the path name /someDepartment/home.index.
3. The HTTP server process receives the request message via its socket, retrieves the
object /someDepartment/home.index from its storage (RAM or disk), encapsulates the object in an HTTP
response message, and sends the response message to the client via its socket.
4. The HTTP server process tells TCP to close the TCP connection. (But TCP doesn't actually terminate the
connection until it knows for sure that the client has received the response message intact.)
5. The HTTP client receives the response message. The TCP connection terminates. The message indicates
that the encapsulated object is an HTML file. The client extracts the file from the response message,
examines the HTML file, and finds references to the 10 JPEG objects.
6. The first four steps are then repeated for each of the referenced JPEG objects.
RTT (Round Trip Time) is the time it takes for a small packet to travel from the client to the server and then back to the client. See Figure 4 below.
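As a rough sketch of the timing shown in Figure 4 (a simplification that ignores the transmission time of the request itself): with a non-persistent connection, fetching one object costs about one RTT to set up the TCP connection plus one RTT for the HTTP request and response, so

    total response time ≈ 2 × RTT + transmission time of the file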
Fig. 4: Calculation of the time needed to request and receive an HTML file
Persistent HTTP
With persistent connections, the server leaves the TCP connection open after sending responses, so subsequent requests and responses between the same client and server can be sent over the same connection.
The server closes the connection only when it has not been used for a certain configurable amount of time.
It requires as little as one RTT of connection-setup overhead for all the referenced objects.
With persistent connections, the performance is improved by about 20%.
Persistent connections are the default mode for HTTP/1.1.
HTTP message format
There are two types of HTTP messages:
1. Request
2. Response
An HTTP request message consists of three parts:
1. Request line
2. Header lines
3. Carriage return (blank line)
The message is written in ordinary ASCII text, so that an ordinary computer-literate human being can read it.
Each line is followed by a carriage return and a line feed.
The last line is followed by an additional carriage return and line feed.
The first line of an HTTP request message is called the request line; the subsequent lines are called the header lines.
The request line has three fields: the method field, the URL field, and the HTTP version field.
The method field can take on several different values, including GET, POST, HEAD, PUT, and DELETE.
In this example, the browser is requesting the object /somedir/page.html. The version field is self-explanatory; in this example, the browser implements version HTTP/1.1.
The header line Host: www.someschool.edu specifies the host on which the object resides.
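A request message of the kind being described might look as follows (the first two lines use the values from the example above; the remaining header lines are common examples, not required ones):

    GET /somedir/page.html HTTP/1.1
    Host: www.someschool.edu
    Connection: close
    User-agent: Mozilla/5.0
    Accept-language: en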
Use of cookies
authorization
recommendations
shopping carts
user session state (Web e-mail)
Fig. 8: Clients requesting objects through a Web cache
A Web cache, or proxy server, is a network entity that satisfies HTTP requests on behalf of an origin Web server.
The Web cache has its own disk storage and keeps copies of recently requested objects in this storage.
A user's browser can be configured so that all of the user's HTTP requests are first directed to the Web cache.
As an example, suppose a browser is requesting the object http://www.someschool.edu/campus.gif.
Here is what happens:
1. The browser establishes a TCP connection to the Web cache and sends an HTTP request for the object to the Web cache.
2. The Web cache checks to see if it has a copy of the object stored locally. If it does, the Web cache returns the object within an HTTP response message to the client browser.
3. If the Web cache does not have the object, the Web cache opens a TCP connection to the origin server, that is, to www.someschool.edu. The Web cache then sends an HTTP request for the object into the cache-to-server TCP connection. After receiving this request, the origin server sends the object within an HTTP response to the Web cache.
4. When the Web cache receives the object, it stores a copy in its local storage and sends a copy, within an HTTP response message, to the client browser.
Note that a cache is both a server and a client at the same time.
When it receives requests from and sends responses to a browser, it is a server.
When it sends requests to and receives responses from an origin server, it is a client.
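A highly simplified Python sketch of the cache decision in steps 1 to 4 (it uses a dictionary in place of the disk storage and the standard urllib module to reach the origin server; a real proxy would also handle expiry, headers, and errors):

    from urllib.request import urlopen

    cache = {}   # stands in for the Web cache's local disk storage

    def handle_request(url: str) -> bytes:
        if url in cache:                   # step 2: a local copy exists, return it to the browser
            return cache[url]
        with urlopen(url) as response:     # step 3: act as a client toward the origin server
            obj = response.read()
        cache[url] = obj                   # step 4: keep a copy locally...
        return obj                         #         ...and send a copy to the browser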
Fig. 12: Alice sends a message to Bob
In the figure above, when Alice is finished composing her message, her user agent sends the message to her mail server, where the message is placed in the mail server's outgoing message queue.
When Bob wants to read a message, his user agent retrieves the message from his mailbox in his mail server.
Mail servers form the core of the e-mail infrastructure.
Each recipient, such as Bob, has a mailbox located in one of the mail servers.
Bob's mailbox manages and maintains the messages that have been sent to him.
A typical message starts its journey in the sender's user agent, travels to the sender's mail server, and then travels to the recipient's mail server, where it is deposited in the recipient's mailbox.
When Bob wants to access the messages in his mailbox, the mail server containing his mailbox authenticates Bob (with usernames and passwords).
Alice's mail server must also deal with failures in Bob's mail server.
If Alice's server cannot deliver mail to Bob's server, Alice's server holds the message in a message queue and attempts to transfer the message later.
Re-attempts are often done every 30 minutes or so; if there is no success after several days, the server removes the message and notifies the sender (Alice) with an e-mail message.
SMTP is the principal application-layer protocol for Internet electronic mail. It uses the reliable data transfer service of TCP to transfer mail from the sender's mail server to the recipient's mail server.
SMTP has two sides: a client side, which executes on the sender's mail server, and a server side, which executes on the recipient's mail server.
Both the client and server sides of SMTP run on every mail server.
When a mail server sends mail to other mail servers, it acts as an SMTP client. When a mail server receives mail from other mail servers, it acts as an SMTP server.
Fig. 13: Alice sends a message to Bob
Alice invokes her user agent for e-mail, provides Bob's e-mail address (for example, [email protected]), composes a message, and instructs the user agent to send the message.
Alice's user agent sends the message to her mail server, where it is placed in a message queue.
The client side of SMTP, running on Alice's mail server, sees the message in the message queue. It opens a TCP connection to an SMTP server, running on Bob's mail server.
After some initial SMTP handshaking, the SMTP client sends Alice's message into the TCP connection.
At Bob's mail server, the server side of SMTP receives the message. Bob's mail server then places the message in Bob's mailbox.
Bob invokes his user agent to read the message at his convenience.
SMTP does not normally use intermediate mail servers for sending mail, even when the two mail servers are located at opposite ends of the world.
If Bob's mail server is down, the message remains in Alice's mail server and waits for a new attempt; the message does not get placed in some intermediate mail server.
How SMTP transfers a message from a sending mail server to a receiving mail
server
First, the client SMTP (running on the sending mail server host) has TCP establish a connection
to port 25 at the server SMTP (running on the receiving mail server host).
If the server is down, the client tries again later.
Once this connection is established, the server and client perform some application-layer
handshaking, just as humans often introduce themselves before transferring information from
one to another.
During this SMTP handshaking phase, the SMTP client indicates the e-mail address of the sender (the person who generated the message) and the e-mail address of the recipient.
Once the SMTP client and server have introduced themselves to each other, the client sends the
message.
SMTP can count on the reliable data transfer service of TCP to get the message to the server
without errors.
The client then repeats this process over the same TCP connection if it has other messages to
send to the server; otherwise, it instructs TCP to close the connection.
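For illustration, a minimal Python sketch of the client side of such a transfer, using the standard smtplib module (the server name mail.recipient.example and the addresses are placeholders, not values from the notes):

    import smtplib

    message = (
        "From: [email protected]\r\n"
        "To: [email protected]\r\n"
        "Subject: Hello\r\n"
        "\r\n"
        "Hi Bob, this is a test message.\r\n"
    )

    client = smtplib.SMTP("mail.recipient.example", 25)   # TCP connection to port 25
    client.helo()                                          # application-layer handshaking
    client.sendmail("[email protected]",              # MAIL FROM / RCPT TO / DATA
                    ["[email protected]"], message)
    client.quit()                                          # close the connection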
Every time you use a domain name, therefore, a DNS service must translate the domain name into the corresponding IP address. For example, the domain name www.google.com might translate to 198.105.232.4.
The DNS system is, in fact, its own network. If one DNS server doesn't know how to translate a particular domain name, it asks another one, and so on, until the correct IP address is returned.
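A one-line Python illustration of this translation step, using the resolver built into the standard library (the address actually returned will vary over time and by location):

    import socket

    ip_address = socket.gethostbyname("www.google.com")   # ask DNS to resolve the name
    print(ip_address)                                      # prints one IP address for the name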
A centralized design with a single DNS server would suffer from several problems:
1. A single point of failure: If the single DNS server crashes, the entire Internet stops working with it.
2. Traffic volume: Today the Internet is huge, with millions of devices and users accessing its services from all over the globe at the same time. A single DNS server cannot handle the huge global DNS traffic, but with a distributed system this traffic is spread out, reducing the load on any one server.
3. Distant centralized database: A single DNS server cannot be "close to" all the querying clients. If we put the single DNS server in New York City, then all queries from Australia must travel to the other side of the globe, perhaps over slow and congested links. This can lead to significant delays.
4. Maintenance: The single DNS server would have to keep records for all Internet hosts. Not only would this centralized database be huge, but it would have to be updated frequently to account for every new host.
Protocol pipelining
Protocol pipelining is a technique in which multiple requests are written out to a single socket without waiting for the corresponding responses (acknowledgements).
Pipelining can be used in various application-layer network protocols, like HTTP/1.1, SMTP and FTP.
For pipelining to work, the range of sequence numbers must be increased, and data or packets must be buffered at the sender and/or receiver.
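A rough Python sketch of the idea (example.com and the two paths are placeholders; in practice many HTTP servers and intermediaries handle pipelining poorly, so this is illustrative only):

    import socket

    sock = socket.create_connection(("example.com", 80))
    requests = (
        "GET /page1.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
        "GET /page2.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
    )
    sock.sendall(requests.encode())   # both requests written before any response arrives
    reply = sock.recv(65535)          # the responses come back in the same order
    sock.close()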
This approach can speed up the retransmission but is not strictly needed.
TCP segment structure
The unit of transmission in TCP is called a segment.
The header includes source and destination port numbers, which are used for multiplexing/demultiplexing data from/to upper-layer applications.
The 32-bit sequence number field and the 32-bit acknowledgment number field are used by the TCP sender and receiver in implementing a reliable data transfer service.
The sequence number for a segment is the byte-stream number of the first byte in the segment.
The acknowledgment number is the sequence number of the next byte a host is expecting from the other host.
The 4-bit header length field specifies the length of the TCP header in 32-bit words. The TCP header can be of variable length due to the TCP options field.
The 16-bit receive window field is used for flow control. It indicates the number of bytes that the receiver is willing to accept.
The 16-bit checksum field is used for error checking of the header and data.
The unused 6 bits are reserved for future use and should be set to zero.
The urgent pointer is used in combination with the URG control bit for priority data transfer. This field contains the sequence number of the last byte of urgent data.
Data: the bytes of data being sent in the segment.
URG (1 bit): indicates that the urgent pointer field is significant.
ACK (1 bit): indicates that the acknowledgment field is significant.
PSH (1 bit): push function. Asks to push the buffered data to the receiving application.
RST (1 bit): reset the connection.
SYN (1 bit): synchronize sequence numbers. Only the first packet sent from each end should have this flag set. Some other flags and fields change meaning based on this flag; some are only valid when it is set, and others when it is clear.
FIN (1 bit): no more data from sender.
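A hedged Python sketch that unpacks the fixed 20-byte part of this header with the standard struct module (options and data are ignored; this illustrates the field layout, it is not a TCP implementation):

    import struct

    def parse_tcp_header(segment: bytes) -> dict:
        (src_port, dst_port, seq, ack, offset_flags,
         window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
        return {
            "src_port": src_port,                    # 16-bit source port
            "dst_port": dst_port,                    # 16-bit destination port
            "seq": seq,                              # 32-bit sequence number
            "ack": ack,                              # 32-bit acknowledgment number
            "header_len": (offset_flags >> 12) * 4,  # 4-bit length field, in 32-bit words
            "flags": offset_flags & 0x3F,            # URG, ACK, PSH, RST, SYN, FIN bits
            "window": window,                        # 16-bit receive window
            "checksum": checksum,                    # 16-bit checksum
            "urgent_ptr": urgent,                    # 16-bit urgent pointer
        }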
Congestion Control
When a connection is established, a suitable window size has to be chosen.
The receiver can specify a window based on its buffer size.
If the sender sticks to this window size, problems will not occur due to buffer overflow at the receiving end, but they may still occur due to internal congestion within the network.
In Figure 17(a), we see a thick pipe leading to a small-capacity receiver.
As long as the sender does not send more water than the bucket can contain, no water will be lost.
In Figure 17(b), the limiting factor is not the bucket capacity, but the internal carrying capacity of the network.
Fig. 17: (a) A thick pipe (fast network) leading to a small-capacity receiver. (b) The internal carrying capacity of the network as the limiting factor.
If this segment is acknowledged before the timer goes off, the sender adds one segment's worth of bytes to the congestion window, to make it two maximum-size segments, and sends two segments.
As each of these segments is acknowledged, the congestion window is increased by one maximum segment size.
When the congestion window is n segments, if all n are acknowledged on time, the congestion window is increased by the byte count corresponding to n segments.
to reduce the offered load on the network.
Once ssthresh is reached, TCP changes from the slow-start algorithm to the linear growth (congestion avoidance) algorithm. At this point, the window is increased by 1 segment for each RTT.
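A toy Python sketch of this growth pattern (one step per RTT, window measured in segments, ssthresh = 16 chosen arbitrarily; it assumes every segment is acknowledged on time and ignores losses and timeouts):

    def next_cwnd(cwnd: int, ssthresh: int) -> int:
        if cwnd < ssthresh:
            return cwnd * 2   # slow start: each acknowledged segment adds one, so cwnd doubles per RTT
        return cwnd + 1       # congestion avoidance: linear growth, +1 segment per RTT

    cwnd, ssthresh = 1, 16
    for rtt in range(8):
        print("RTT", rtt, "congestion window =", cwnd, "segments")
        cwnd = next_cwnd(cwnd, ssthresh)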