
MODULE-2

Data Link Layer: Error Detection and Correction: Introduction, Block Coding, Cyclic Codes.
Data Link Control: DLC Services: Framing, Flow Control, Error Control, Connectionless and Connection Oriented, Data Link Layer Protocols, High Level Data Link Control.
Media Access Control: Random Access, Controlled Access. Checksum and Point-to-Point Protocol.
ERROR-DETECTION AND CORRECTION
• Types of Errors
• Whenever bits flow from one point to another, they are subject to
unpredictable changes because of interference.
• The interference can change the shape of the signal.
• Two types of errors:

• 1) Single-bit error
• 2) Burst-error.
ERROR-DETECTION AND CORRECTION
• Single-Bit Error
• Only 1 bit of a given data is changed → from 1 to 0 or from 0 to 1 (Figure 10.1a).
ERROR-DETECTION AND CORRECTION
2) Burst Error
• Two or more bits in the data have changed → from 1 to 0 or from 0 to 1 (Figure 10.1b).
• A burst error is more likely to occur than a single-bit error because the
duration of the noise signal is normally longer than the duration of 1 bit,
which means that when noise affects data, it affects a set of bits.
Redundancy
• The central concept in detecting/correcting errors is redundancy.
• To be able to detect or correct errors, we need to send some extra bits with our data. These extra bits are called redundant-bits.
• The redundant-bits are added by the sender and removed by the receiver.
Detection versus Correction
• Error-correction is more difficult than error-detection.
1) Error Detection
• Here, we are checking whether any error has occurred or not. The answer is a simple YES or NO.
• We are not interested in the number of corrupted-bits.
Detection versus Correction
2) Error Correction
• We need to know the exact number of bits that are corrupted and, more importantly, their location in the message.
• Two important factors to be considered:
• 1) Number of errors and
• 2)Message-size.
• If we need to correct a single error in an 8-bit data unit, we need to
consider eight possible error locations;
• If we need to correct two errors in a data unit of the same size, we need to consider 28 (8 choose 2) possibilities.
Coding
• Redundancy is achieved through various coding-schemes.
• 1) Sender adds redundant-bits to the data-bits. This process creates a
relationship between redundant-bits and data-bits.
• 2) Receiver checks the relationship between redundant-bits & data-bits
to detect/correct errors.
• Two important factors to be considered:
1) Ratio of redundant-bits to the data-bits and
2) Robustness of the process.
BLOCK CODING
• Two broad categories of coding schemes:
1) Block-coding and 2) Convolution coding.
Block-coding
• The message is divided into k-bit blocks. These blocks are called data-
words.

• Here, r-redundant-bits are added to each block to make the length n=k+r.

• The resulting n-bit blocks are called code-words.

• Since n>k, the number of possible code-words is larger than the number of
possible data-words.
BLOCK CODING
• Block-coding process is one-to-one; the same data-word is always
encoded as the same code-word.

• Thus, we have 2^n-2^k code-words that are not used. These code-words
are invalid or illegal.
Error Detection
• How can errors be detected by using block coding? If the following
two conditions are met, the receiver can detect a change in the original
codeword.

1) The receiver has a list of valid code-words.

2) The original code-word has changed to an invalid code-word.


Error Detection
Here is how it works (Figure 10.2):
1) At Sender
i) The sender creates code-words out of data-words by using a generator. The generator applies the rules and procedures of encoding.
ii) During transmission, each code-word sent to the receiver may change.
Error Detection
2) At Receiver
i) a) If the received code-word is the same as
one of the valid code-words, the codeword is
accepted; the corresponding data-word is
extracted for use.
b) If the received code-word is invalid, the
code-word is discarded.
ii) However, if the code-word is corrupted but
the received code-word still matches a valid
codeword, the error remains undetected.
Error-detection
• An error-detecting code can detect only the types of errors for which it
is designed; other types of errors may remain undetected.
Example 3.1
• Assume the sender encodes the data word 01 as 011 and sends it to the
receiver. Consider the following cases:
1. The receiver receives 011. It is a valid codeword. The receiver
extracts the data word 01 from it.
2. The codeword is corrupted during transmission, and 111 is received
(the leftmost bit is corrupted). This is not a valid codeword and is
discarded.
3. The codeword is corrupted during transmission, and 000 is received
(the right two bits are corrupted). This is a valid codeword. The receiver
incorrectly extracts the data word 00. Two corrupted bits have made the
error undetectable.
Hamming Distance
• One of the central concepts in coding for error control is the idea of
the Hamming distance.

• The Hamming distance between two words (of the same size) is the
number of differences between the corresponding bits.

• Let d(x, y) = Hamming distance between two words x and y.
• The Hamming distance can be found by applying the XOR operation on the two words and counting the number of 1s in the result.
Hamming Distance
• For example:
• 1) The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 = 011 (two 1s).
• 2) The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 = 01011 (three 1s).
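As a quick illustration of the XOR-and-count rule, here is a minimal Python sketch (the function name is my own, not from the text) that reproduces the two examples above:

```python
def hamming_distance(x: str, y: str) -> int:
    """Number of bit positions in which two equal-length words differ."""
    assert len(x) == len(y), "words must be the same size"
    return sum(bx != by for bx, by in zip(x, y))

print(hamming_distance("000", "011"))      # 2
print(hamming_distance("10101", "11110"))  # 3
```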
Hamming Distance and Error
• Hamming distance between the received word and the sent code-word
is the number of bits that are corrupted during transmission.

• For example: Let Sent code-word = 00000

• Received word = 01101

• Hamming distance = d(00000, 01101) =3. Thus, 3 bits are in error.


Minimum Hamming Distance for Error Detection
• Minimum Hamming distance is the smallest Hamming distance b/w
all possible pairs of code-words.
• Let dmin = minimum Hamming distance.
• To find dmin value, we find the Hamming distances between all words
and select the smallest one.
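A sketch of this "all pairs" search in Python, using the parity-check codewords of Table 10.1 (datawords 00, 01, 10, 11 mapped to 000, 011, 101, 110); the function name is illustrative:

```python
from itertools import combinations

def minimum_distance(codewords):
    """Smallest Hamming distance over all pairs of valid codewords."""
    return min(sum(a != b for a, b in zip(x, y))
               for x, y in combinations(codewords, 2))

# Codewords of the first code (Table 10.1)
print(minimum_distance(["000", "011", "101", "110"]))   # dmin = 2
```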
Minimum Hamming Distance for Error Detection
• Let us assume that the sent code-word x is at the center of a circle with
radius s.

• All received code-words that are created by 0 to s errors are points inside the
circle or on the perimeter of the circle.

• All other valid code-words must be outside the circle

• For example: A code scheme has a minimum Hamming distance dmin = 4.
This code guarantees the detection of up to 3 errors (dmin = s + 1, so s = 3).
Minimum Hamming Distance for Error Detection
Linear Block Code
• Almost all block codes used today belong to a subset called linear
block codes.
• A linear block code is a code in which the exclusive OR (addition
modulo-2) of two valid code words creates another valid codeword.
Linear Block Code
Example
• Let us see if the two codes defined in Tables above, belong to the class
of linear block codes.
1. The scheme in the above Table is a linear block code because the
result of XORing any codeword with any other codeword is a valid
codeword.

Minimum Distance for Linear Block Codes


It is simple to find the minimum Hamming distance for a linear block
code.
Linear Block Code
• The minimum Hamming distance is the number of 1s in the nonzero
valid codeword with the smallest number of 1s.

• Example 10.6
• In our first code (Table 10.1), the numbers of 1s in the nonzero codewords are 2, 2, and 2. So the minimum Hamming distance is dmin = 2.


Simple Parity check code
• This code is a linear block code. In this code, a k-bit dataword is
changed to an n-bit codeword where n = k + 1.

• The extra bit, called the parity bit, is selected to make the total number
of 1s in the codeword even.

• The minimum Hamming distance for this category is dmin = 2, which


means that the code is a single-bit error-detecting code.
Simple Parity check code
• First code (Table 10.1) is a parity-check code (k = 2 and n = 3).
• The code in Table 10.2 is also a parity-check code with k = 4 and n = 5.
Encoder and decoder for simple parity-check code
The parity bit that is added makes the number of 1s in the codeword even, i.e. r0 = a0 + a1 + a2 + a3 (modulo-2).
• If the number of 1s is even, the result is 0.
• If the number of 1s is odd, the result is 1.
• The sender sends the codeword, which may be corrupted during transmission.
• The receiver receives the 5-bit codeword.
• The checker performs a modulo-2 addition of all the bits received; the result is called the syndrome.
Encoder and decoder for simple parity-check code
• If the number of 1s in the codeword is even, the syndrome is 0.
• If the number of 1s in the codeword is odd, the syndrome is 1.
• The syndrome is passed to the decision logic analyzer. If the syndrome is 0, there is no error in the received codeword and the data portion is extracted.
• If the syndrome is 1, the received codeword is in error and is discarded.
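A minimal Python sketch of this even-parity encoder/checker (function names are my own):

```python
def parity_encode(dataword: str) -> str:
    """Append a parity bit so the total number of 1s in the codeword is even."""
    return dataword + str(dataword.count("1") % 2)

def parity_decode(codeword: str):
    """Return the dataword if the syndrome is 0; otherwise discard (return None)."""
    syndrome = codeword.count("1") % 2
    return codeword[:-1] if syndrome == 0 else None

print(parity_encode("1011"))   # '10111', as in the transmission scenarios below
print(parity_decode("10111"))  # '1011'  (syndrome 0, accepted)
print(parity_decode("10011"))  # None    (syndrome 1, discarded)
```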


Encoder and decoder for simple parity-check code
• Let us look at some transmission scenarios. Assume the sender
sends the data word 1011. The codeword created from this data
word is 10111, which is sent to the receiver.

Five cases can be examined:

1. No error occurs; the received codeword is 10111. The


syndrome is 0. The dataword 1011 is created.
Encoder and decoder for simple parity-check code
2. One single-bit error changes a1. The received codeword is
10011. The syndrome is 1. No dataword is created.

3. One single-bit error changes r0. The received codeword is


10110. The syndrome is 1. No dataword is created. Note that
although none of the dataword bits are corrupted, no dataword is
created because the code is not sophisticated enough to show the
position of the corrupted bit.
Encoder and decoder for simple parity-check code
4. An error changes r0 and a second error changes a3. The received
codeword is 00110. The syndrome is 0. The dataword 0011 is
created at the receiver. Note that here the dataword is wrongly
created due to the syndrome value. The simple parity-check decoder
cannot detect an even number of errors. The errors cancel each other
out and give the syndrome a value of 0.

5. Three bits—a3, a2, and a1—are changed by errors. The received


codeword is 01011. The syndrome is 1. The dataword is not created.
This shows that the simple parity check, guaranteed to detect one
single error, can also find any odd number of errors.
CYCLIC CODES
• Cyclic codes are special linear block codes with one extra property.

• In a cyclic code, if a codeword is cyclically shifted (rotated), the


result is another codeword.

• Eg:1011000 is a codeword and if it is cyclically left-shifted, then


0110001 is also a codeword.

• We can shift the bits using
• b1 = a0, b2 = a1, b3 = a2, b4 = a3, b5 = a4, b6 = a5, b0 = a6
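A one-line sketch of the cyclic (left) shift described above:

```python
def cyclic_left_shift(codeword: str) -> str:
    """Rotate the bits one position to the left: b1 = a0, ..., b0 = a6."""
    return codeword[1:] + codeword[0]

print(cyclic_left_shift("1011000"))   # '0110001', also a codeword in a cyclic code
```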


Cyclic Redundancy Check
• Cyclic codes are created to correct errors.
• Here we discuss a subset of cyclic codes called the cyclic redundancy check (CRC), which is used in networks such as LANs and WANs.
Cyclic Redundancy Check
Figure 10.5 shows one possible design for the encoder and decoder
Cyclic Redundancy Check
In the encoder, the dataword has k bits (4 here) and the codeword has n bits (7 here).
• The dataword is augmented by adding n − k 0s to the right-hand side of the word.
• The n-bit result is fed to the generator.
• The generator uses a divisor of size n − k + 1, which is predefined and agreed upon.
• The generator divides the augmented dataword by the divisor (modulo-2 division).
Cyclic Redundancy Check
The quotient of the division is discarded.
• The remainder r2r1r0 is appended to the dataword to create the codeword.
• The decoder receives the possibly corrupted codeword.
• A copy of all n bits is fed to the checker.
• The remainder produced by the checker is a syndrome of n − k bits, which is fed to the decision logic analyzer.
• If the syndrome bits are all 0s, the 4 leftmost bits of the codeword are accepted as the dataword.
Cyclic Redundancy Check
• CRC Encoder
• The encoder takes the dataword and augments it with n − k 0s.
• It then divides the augmented dataword by the divisor.
• Modulo-2 binary division is used; for addition and subtraction, the XOR operation is used.
• In each step, a copy of the divisor is XORed with 4 bits of the dividend. The result of the XOR operation is 3 bits.
Cyclic Redundancy Check
CRC Decoder
• The codeword can change during transmission.
• The decoder does the same thing as the encoder; the remainder of the division is the syndrome.
• If the syndrome is all 0s, there is no error; the dataword is separated from the codeword and accepted.
• Otherwise, everything is discarded.
• The question is how the divisor 1011 is chosen.
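Here is a hedged Python sketch of the CRC encoder/checker just described, using the dataword 1001 and divisor 1011 from the example; the modulo-2 division is done with XOR, and all function names are illustrative:

```python
def crc_remainder(bits: str, divisor: str) -> str:
    """Modulo-2 (XOR) division; returns a remainder of len(divisor) - 1 bits."""
    work = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if work[i] == "1":                                    # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                work[i + j] = str(int(work[i + j]) ^ int(d))  # bitwise XOR
    return "".join(work[-(len(divisor) - 1):])

def crc_encode(dataword: str, divisor: str = "1011") -> str:
    augmented = dataword + "0" * (len(divisor) - 1)           # append n - k zeros
    return dataword + crc_remainder(augmented, divisor)       # dataword + remainder

def crc_check(codeword: str, divisor: str = "1011") -> bool:
    return set(crc_remainder(codeword, divisor)) == {"0"}     # syndrome all zeros?

codeword = crc_encode("1001")          # '1001110'
print(codeword, crc_check(codeword))   # syndrome 000 -> True (accepted)
print(crc_check("1000110"))            # corrupted codeword, syndrome 011 -> False
```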
Cyclic Redundancy Check
Cyclic Redundancy Check (CRC)

CRC Decoding Steps:


1.Uncorrupted Codeword (1001110):
•Perform the same division using the divisor.
•Syndrome = 000 (no error detected), so the dataword 1001 is accepted.
2.Corrupted Codeword (1000110):
•Perform division with the divisor.
•Syndrome = 011 (error detected), so the dataword is discarded.

•Encoding: Dataword + remainder = codeword.


•Decoding: Recalculate the syndrome. If it's all zeros, accept the data; otherwise, discard it.
Cyclic Redundancy Check (CRC)
Polynomials
Cyclic codes can be represented as polynomials.
A binary pattern of 0s and 1s can be expressed as a polynomial, where each term's power
indicates the bit's position, and the coefficient shows its value.
For example, we see how to convert a binary pattern to a polynomial and simplify it
by removing terms with zero coefficients.
Polynomials
This method can significantly reduce the representation size: a 7-bit pattern can become
just three terms in a polynomial form, making analysis simpler.

Degree of a Polynomial
The degree of a polynomial is the highest power in the polynomial.
For example, the degree of the polynomial x^6 + x + 1 is 6. Note that the degree of a polynomial is 1 less than the number of bits in the pattern. The bit pattern in this case has 7 bits.
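A small sketch that converts a bit pattern to its polynomial form, reproducing the 7-bit example above (the helper name is my own):

```python
def to_polynomial(bits: str) -> str:
    """Write a bit pattern as a polynomial, dropping terms with zero coefficients."""
    n = len(bits)
    terms = [f"x^{n - 1 - i}" for i, b in enumerate(bits) if b == "1"]
    return " + ".join(t.replace("x^1", "x").replace("x^0", "1") for t in terms)

print(to_polynomial("1000011"))   # 'x^6 + x + 1'  (degree 6, 7-bit pattern)
```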
Polynomials
Multiplying or Dividing Terms

In this arithmetic, multiplying a term by another term is very simple; we just add the powers. For example, x^3 × x^4 = x^7. For dividing, we just subtract the power of the second term from the power of the first. For example, x^5 / x^2 = x^3.

Multiplying Two Polynomials: Multiplying a polynomial by another is done term by


term. Each term of the first polynomial must be multiplied by all terms of the second.
Polynomials
Dividing One Polynomial by Another.
Polynomial division is similar to binary division used in encoding. We divide the first
term of the dividend by the first term of the divisor to get the first term of the quotient,
then multiply and subtract from the dividend.
Polynomials
Shifting
Left Shifting:
•Used in cyclic codes to add extra zeros to the dataword. This creates space for check bits
(remainder) after division, allowing error detection in the final codeword.
Right Shifting:
•Mostly used to remove extra bits or adjust data alignment in general binary operations, but it’s
not commonly used in the main cyclic encoding process.
Cyclic Code Encoder Using Polynomials

A Cyclic Code Encoder Using Polynomials is a method to encode data for error
detection in digital communication. In this method, both the data and a special
"generator" code are represented as polynomials.

The divisor in a cyclic code is normally called the generator polynomial or simply the
generator.

Example:
 The dataword 1001 is represented as x^3 + 1.
 The divisor 1011 is represented as x^3 + x + 1.
Cyclic Code Encoder Using Polynomials

 To find the augmented dataword, we have left-shifted the dataword 3 bits (multiplying by x^3), giving 1001000.
 The result is x^6 + x^3.
 Divide the first term of the dividend, x^6, by the first term of the divisor, x^3.
 The first term of the quotient is then x^6 / x^3, or x^3.
 Then we multiply x^3 by the divisor and subtract (according to our previous definition of subtraction) the result from the dividend.
 The result is x^4, whose degree is not less than the divisor's degree; we continue to divide until the degree of the remainder is less than the degree of the divisor.
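Carrying the division through step by step (a worked version of the description above; subtraction is XOR, so terms that appear twice cancel):
x^6 + x^3 minus x^3 · (x^3 + x + 1) = x^6 + x^3 minus (x^6 + x^4 + x^3) = x^4
x^4 minus x · (x^3 + x + 1) = x^4 minus (x^4 + x^2 + x) = x^2 + x
The quotient is x^3 + x (discarded); the remainder is x^2 + x = 110, so the codeword is x^6 + x^3 + x^2 + x, i.e. 1001110.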
Cyclic Code Encoder Using Polynomials
At the Receiver's End:
1) Receiver's Division: The receiver divides the received codeword by the same divisor used by the sender.
2) Check for Remainder:
• If the remainder is zero, no error is detected in the codeword, and the data is assumed to be correct.
• If the remainder is non-zero, it indicates an error in transmission, and the codeword was corrupted.

Cyclic Code Analysis

Analyze cyclic codes to detect or correct transmission errors using polynomial operations.

•Dataword (d(x)): Original data.


•Codeword (c(x)): Encoded data with check bits.
•Generator (g(x)): Polynomial used for encoding and error detection.
•Syndrome (s(x)): Remainder after dividing the received codeword by g(x).
•Error (e(x)): Represents any errors in the transmitted data.
•Error Detection:
•If s(x) ≠ 0, an error is detected.
•If s(x) = 0, either no error occurred or the error is undetectable.
Cyclic Code Analysis
checksum
• A checksum is a technique used to detect errors
in data. When data (like a message) is sent,
the sender adds extra information, called the
checksum, to the data. This checksum is created
based on the contents of the data.

• At the receiving end, the receiver recalculates


the checksum from the received data and
compares it with the sent checksum. If they
match, it means the data is error-free. If not,
it indicates that an error occurred during
transmission.
checksum
• In this example, the traditional checksum method
is explained in a simple way:
• You have a list of five 4-bit numbers to send.
Along with these numbers, you also send the sum of
those numbers as a checksum.
• For instance, if your numbers are:
• 7, 11, 12, 0, and 6
• The sum of these numbers is:
• 7 + 11 + 12 + 0 + 6 = 36
• So, you send the message as:
• (7, 11, 12, 0, 6, 36)
• At the receiving end, the receiver adds up the
five numbers (7, 11, 12, 0, 6). If the result
matches the sum sent with the message (36), the
receiver assumes the message is error-free and
accepts it. If the sums don't match, the receiver rejects the message.
checksum
• In this example, the limitation of the previous
checksum method is addressed by using one's
complement arithmetic to handle sums larger
than the bit size.
• In the previous method, the sum of the numbers
(36 in decimal) exceeded the 4-bit limit (which
can only represent numbers up to 15). To
overcome this, one’s complement arithmetic is
used, which allows handling numbers using a
fixed bit size by "wrapping" any extra bits.
• Here’s how it works:
1.The sum of the original numbers was 36, which
in binary is 100100₂.
checksum
1. To fit this into 4 bits, you wrap the leftmost bits by adding them to the rightmost 4 bits:
• 100100₂ can be split into the rightmost 4-bit part (0100₂) and the extra leftmost bits (10₂).
• Adding the extra bits (10₂) to the 4-bit part (0100₂) gives 0110₂, which is 6 in decimal.
• So instead of sending 36, you send 6 as the
checksum.
• At the receiver’s end, the numbers (7, 11, 12,
0, 6) are added together using one’s complement
arithmetic. If the result is 6, the message is
considered error-free and accepted; otherwise,
it's rejected. This method ensures that the sum
stays within the 4-bit limit.
checksum
• In this example, one's complement arithmetic is
further utilized to generate a checksum that
ensures data integrity during transmission.
• Here's how it works:
1. The sender adds the five numbers (7, 11, 12, 0, 6) using one's complement arithmetic; the sum of these numbers is 6.
2. The sender then complements the result to calculate the checksum: subtract the sum (6) from 15 (the maximum 4-bit value), so checksum = 15 − 6 = 9 (in binary: 1001₂).
checksum
• Note that 6 and 9 are complements of each other
in binary:6 = 0110₂,9 = 1001₂.
• The sender sends the data numbers along with
the checksum:
• The message sent is: (7, 11, 12, 0, 6, 9).
• At the receiver’s end:
• The receiver adds the numbers (7, 11, 12, 0, 6,
9) using one’s complement arithmetic.
• The sum is 15 (which is the maximum possible
sum in 4-bit one's complement arithmetic).
• The receiver then complements this result:
• Complement of 15 is 0, indicating that there
was no corruption in transmission.
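A minimal Python sketch of this 4-bit one's-complement checksum (wrap-around addition followed by complementing); the numbers follow the example above, and the function names are my own:

```python
def ones_complement_sum(values, bits=4):
    """Add the numbers, wrapping any overflow bits back into the low-order bits."""
    mask = (1 << bits) - 1
    total = sum(values)
    while total >> bits:
        total = (total & mask) + (total >> bits)
    return total

def checksum(values, bits=4):
    """Complement of the wrapped sum."""
    return ones_complement_sum(values, bits) ^ ((1 << bits) - 1)

data = [7, 11, 12, 0, 6]
print(checksum(data))                      # 9, as in the example
print(checksum(data + [checksum(data)]))   # 0 -> receiver detects no corruption
```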
checksum
Internet checksum
• The traditional 16-bit checksum is a method
used to detect errors in data transmission,
commonly used in the Internet. The process
follows these steps for both the sender and
receiver:
• Sender Side:
1. Divide the message into 16-bit words.
2. Initialize the checksum value to zero.
3. Add all the 16-bit words, including the checksum word, using one's complement addition (carries from the most significant bit are added back to the least significant bit).
4. Complement the sum; the result becomes the checksum.
5. Send the checksum along with the data.
Internet checksum
• Receiver Side:
1.Receive the message and the checksum.
2.Divide the message into 16-bit words
(including the checksum).
3.Add all the words using one's complement
addition.
4.Complement the sum to create the new checksum.
5.If the new checksum is zero, the message is
accepted. Otherwise, it is rejected, indicating
an error.
Internet checksum
• Algorithm Details:
• The calculation of the checksum involves a two-
step process:
1.Sum Calculation: The sender computes the sum
of all data words using two's complement
addition. This is because most modern computers
use two's complement arithmetic.
2.Wrapping Extra Bits: If the sum exceeds 16
bits, the extra bits are wrapped by adding the
overflow bits (left bits) to the rightmost 16
bits to simulate one's complement behavior.
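A hedged Python sketch of the 16-bit procedure described above, treating the message as big-endian 16-bit words; the message bytes below are arbitrary example data:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, wrapped and then complemented."""
    if len(data) % 2:
        data += b"\x00"                           # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # big-endian 16-bit word
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry (one's complement)
    return ~total & 0xFFFF

message = b"hello world!"
chk = internet_checksum(message)
# Receiver: the checksum over message + received checksum should be 0 if error-free
print(internet_checksum(message + chk.to_bytes(2, "big")))   # 0
```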
Internet checksum
• Performance:
• While the traditional 16-bit checksum provides
basic error detection, it has limitations:
• If one word is incremented and another is
decremented by the same amount, the error cannot
be detected because the sum and checksum remain
unchanged.
• If several words are modified but their sum
remains the same, the error won't be caught.
• Because of these shortcomings, stronger error
detection methods like Cyclic Redundancy Check
(CRC) are preferred in modern Internet protocols.
The Fletcher and Adler checksums have also been
proposed to address some of these issues by
introducing weighted sums, but the general trend
is moving toward CRC for its superior error-
DATA LINK CONTROL
DLC SERVICES
• The data link layer is divided into two sublayers: data link control (DLC) and media access control (MAC).

• The data link control (DLC) deals with procedures for communication
between two adjacent nodes i.e. node-to-node communication.

• Data link control functions include

1) Framing

2) Flow control and

3) Error control.
Framing
• A frame is a group of bits. Framing means organizing the bits into a
frame that are carried by the physical layer.

• The data-link-layer needs to form frames, so that each frame is


distinguishable from another.

• Framing separates a message from other messages by adding sender-


address & destination-address.

• The destination-address defines where the packet is to go.

• The sender-address helps the recipient acknowledge the receipt.


Framing
• Q: Why the whole message is not packed in one frame?

• Ans: Large frame makes flow and error-control very inefficient.

Even a single-bit error requires the re-transmission of the whole


message.

• When a message is divided into smaller frames, a single-bit error


affects only that small frame.
Framing
• Our postal system practices a type of framing. The simple act of inserting a
letter into an envelope separates one piece of information from another;
the envelope serves as the delimiter.

• In addition, each envelope defines the sender and receiver addresses since
the postal system is a many-to-many carrier facility
Framing
Frame Size

• Two types of frames:

1) Fixed Size Framing

• There is no need for defining boundaries of frames; the size itself can be used as
a delimiter. For example: ATM WAN uses frames of fixed size called cells.

2) Variable Size Framing

• We need to define the end of the frame and the beginning of the next frame.
Framing
Two approaches are used in Variable Size Framing:
1) Character-oriented approach
2) Bit-oriented approach

Character Oriented Framing

• Data to be carried are 8-bit characters from a coding system such as


ASCII.

• The header and the trailer are also multiples of 8 bits.


Character Oriented Framing
• Header carries the source and destination-addresses and other control
information.
• Trailer carries error-detection or error-correction redundant bits.
• To separate one frame from the next frame, an 8-bit (1-byte) flag is added at the beginning and the end of a frame.
• The flag is composed of protocol-dependent special characters. The flag
signals the start or end of a frame.
Character Oriented Framing
• Problem:

• Character-oriented framing is suitable when only text is exchanged by the


data-link-layers. However, if we send another type of information (say
audio/video), then any pattern used for the flag can also be part of the
information.

• If the flag-pattern appears in the data-section, the receiver might think that
it has reached the end of the frame.

• Solution: A byte-stuffing is used


Character Oriented Framing
byte-stuffing or character stuffing :

• In byte stuffing, a special byte is added to the data-section of the frame


when there is a character with the same pattern as the flag.

• The data-section is stuffed with an extra byte. This byte is called the
escape character (ESC), which has a predefined bit pattern.

• When a receiver encounters the ESC character, the receiver removes ESC
character from the data-section and treats the next character as data, not a
delimiting flag.
Character Oriented Framing
byte-stuffing or character stuffing :

• What happens if the text contains one or more escape characters followed
by a flag?

• The receiver removes the escape character, but keeps the flag, which is
incorrectly interpreted as the end of the frame.

Solution:

• Escape characters that are part of the text must also be marked by another escape character.
Character Oriented Framing
byte-stuffing or character stuffing :

• In short, byte stuffing is the process of adding one extra byte whenever
there is a flag or escape character in the text.
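A sketch of byte stuffing/unstuffing in Python; the flag and escape values here (0x7E and 0x7D) are only illustrative choices — they happen to be the PPP values given later in this module, while the text above only says they are protocol-dependent:

```python
FLAG, ESC = 0x7E, 0x7D   # illustrative values (PPP uses these); protocol-dependent in general

def byte_stuff(payload: bytes) -> bytes:
    """Insert an ESC byte before every flag or ESC byte that appears in the data."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove each ESC and keep the byte that follows it as plain data."""
    out, i = bytearray(), 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1                       # skip the ESC; the next byte is data
        out.append(stuffed[i])
        i += 1
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert byte_unstuff(byte_stuff(data)) == data
```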
Bit Oriented Framing
• The data-section of a frame is a sequence of bits to be interpreted by the
upper layer as text, audio, video, and so on.

• However, in addition to headers and trailers, we need a delimiter to separate


one frame from the other.

• Most protocols use a special 8-bit pattern flag 01111110 as the delimiter to
define the beginning and the end of the frame
Bit Oriented Framing
• Problem:

• If the flag-pattern appears in the data-section, the receiver might think that it
has reached the end of the frame.

• Solution: A bit-stuffing is used.

• In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is


added. This extra stuffed bit is eventually removed from the data by the
receiver.
• bit-stuffing
• This guarantees that the flag field sequence does not inadvertently appear
in the frame.
• In short, bit stuffing is the process of adding one extra 0 whenever five
consecutive 1s follow a 0 in the data, so that the receiver does not mistake
the pattern 0111110 for a flag.
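A sketch of bit stuffing/unstuffing on a string of bits, following the common rule of inserting a 0 after every run of five consecutive 1s (function names are my own):

```python
def bit_stuff(bits: str) -> str:
    """Insert an extra 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")              # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                         # this is the stuffed 0; discard it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip, run = True, 0
    return "".join(out)

frame = "0111111"
print(bit_stuff(frame))                  # '01111101' -> no flag pattern in the data
assert bit_unstuff(bit_stuff(frame)) == frame
```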
Flow Control and Error Control
• One of the responsibilities of the DLC sublayer is flow and error
control at the data-link layer.
• Flow Control
• Whenever an entity produces items and another entity consumes them,
there should be a balance between production and consumption rates.
• If the items are produced faster than they can be consumed, the
consumer can be overwhelmed and may need to discard some items.
• We need to prevent losing the data items at the consumer site.
Flow Control
• At the sending node, the data-link layer tries to push frames toward the
data-link layer at the receiving node
• If the receiving node cannot process and deliver the packet to its
network at the same rate that the frames arrive, it becomes
overwhelmed with frames.
• Here, flow control can be feedback from the receiving node to the
sending node to stop or slow down pushing frames.
Buffers
Flow control can be implemented by using buffer.
• A buffer is a set of memory locations that can hold packets at the
sender and receiver.
• Normally, two buffers can be used.
1) First buffer at the sender.
2) Second buffer at the receiver.
• The flow control communication can occur by sending signals from
the consumer to the producer.
• When the buffer of the receiver is full, it informs the sender to stop
pushing frames.
Error Control
• Error-control includes both error-detection and error-correction.

• Error-control allows the receiver to inform the sender of any frames


lost/damaged in transmission.

• A CRC is added to the frame header by the sender and → checked

by the receiver
Error Control
• At the data-link layer, error control is normally implemented using one
of the following two methods.

• 1) First method: If the frame is corrupted, it is discarded;

• If the frame is not corrupted, the packet is delivered to the network


layer. This method is used mostly in wired LANs such as Ethernet.
Error Control
• 2) Second method: If the frame is corrupted, it is discarded;

• If the frame is not corrupted, an acknowledgment is sent to the sender.

• Acknowledgment is used for the purpose of both flow and error


control.
Combination of Flow and Error Control
• Flow and error control can be combined.

• The acknowledgment that is sent for flow control can also be used for
error control to tell the sender the packet has arrived uncorrupted.

• The lack of acknowledgment means that there is a problem in the sent


frame.

• A frame that carries an acknowledgment is normally called an ACK to


distinguish it from the data frame.
Connectionless and Connection-Oriented
1) Connectionless Protocol
• Frames are sent from one node to the next without any relationship
between the frames; each frame is independent.
• The term connectionless does not mean that there is no physical
connection (transmission medium) between the nodes; it means that
there is no connection between frames.
• The frames are not numbered and there is no sense of ordering.
• Most of the data-link protocols for LANs are connectionless protocols.
Connectionless and Connection-Oriented
2. Connection Oriented Protocol
• A logical connection should first be established between the two nodes
(setup phase).
• After all frames that are somehow related to each other are transmitted
(transfer phase), the logical connection is terminated (teardown phase).
• The frames are numbered and sent in order.
• If the frames are not received in order, the receiver needs to wait until all
frames belonging to the same set are received and then deliver them in order
to the network layer.
• Connection oriented protocols are rare in wired LANs, but we can see them
in some point-to- point protocols, some wireless LANs, and some WANs.
DATA LINK LAYER PROTOCOLS
• Traditionally 2 protocols have been defined for the data-link layer to
deal with flow and error control:
1)Simple Protocol and
2) Stop-and-Wait Protocol.
• The behavior of a data-link-layer protocol can be better shown as a
finite state machine (FSM). An FSM is a machine with a finite number
of states.
• The machine is always in one of the states until an event occurs.
• Each event is associated with 2 reactions:
DATA LINK LAYER PROTOCOLS
• Each event is associated with 2 reactions:
1) Defining the list (possibly empty) of actions to be performed.
2) Determining the next state (which can be the same as the current
state).
• One of the states must be defined as the initial state, the state in which
the machine starts when it turns on.
Simple Protocol
• Assumptions:
• The protocol has no flow-control or error-control.
• The protocol is a unidirectional protocol (in which frames are traveling
in only one direction). The receiver can immediately handle any frame
it receives.
Simple Protocol
Here is how it works

1) At Sender

The data-link-layer gets data from its network-layer makes a frame out
of the data and sends the frame.

2) At Receiver

The data-link-layer receives a frame from its physical layer extracts


data from the frame and delivers the data to its network-layer.
Simple Protocol
• Data-link-layers of sender & receiver provide transmission services for
their network-layers.

• Data-link-layers use the services provided by their physical layers for


the physical transmission of bits.
Simple Protocol
FSM (Finite state machine )s:

• Two main requirements:

1) The sender-site cannot send a frame until its network-layer has a data
packet to send.

2) The receiver-site cannot deliver a data packet to its network-layer


until a frame arrives.

• These 2 requirements are shown using two FSMs.


Simple Protocol
FSM (Finite state machine )s:

• Each FSM has only one state, the ready state.


Simple Protocol
Here is how it works
1) At Sending Machine
• The sending machine remains in the ready state until a request comes from the
process in the network layer.
• When this event occurs, the sending machine encapsulates the message in a frame
and sends it to the receiving machine.
2) At Receiving Machine
• The receiving machine remains in the ready state until a frame arrives from the
sending machine.
• When this event occurs, the receiving machine decapsulates the message out of the
frame and delivers it to the process at the network layer.
An example of communication using this protocol is very simple: the sender sends frames one after another without even thinking about the receiver.
Stop-and-Wait Protocol
• It is the simplest flow control method. In this, the sender will transmit
one frame at a time to the receiver. The sender will stop and wait for
the acknowledgement from the receiver.

• When the sender gets the acknowledgement (ACK), it will send the next data packet to the receiver and wait for the acknowledgement again, and this process will continue as long as the sender has data to send.

• While sending the data from the sender to the receiver, the data flow
needs to be controlled. If the sender is transmitting the data at a rate
higher than the receiver can receive and process it, the data will be
lost.
Stop-and-Wait Protocol
• Each data frame is assigned a Cyclic Redundancy Check (CRC) code
to detect corruption. The receiver checks the frame using CRC.
• If the CRC indicates corruption, the frame is silently discarded by the
receiver (no acknowledgment sent).
• When the sender transmits a frame, it starts a timer to wait for
acknowledgment (ACK).
• If the acknowledgment arrives before the timer expires, the sender
stops the timer and sends the next frame.
• If the timer expires (due to missing ACK), the sender assumes the
frame was lost or corrupted and retransmits it.
Stop-and-Wait Protocol
• The sender keeps a copy of the transmitted frame until it receives the
corresponding acknowledgment.
• The protocol ensures that only one frame and one acknowledgment are
in transit at any given moment (no concurrent frames).
Stop-and-Wait Protocol
Stop-and-Wait protocol
• Sender States
The sender is initially in the ready state, but it can move between the
ready and blocking state.
Ready State.
 When the sender is in this state, it is only waiting for a packet from
the network layer.
If a packet comes from the network layer, the sender creates a frame,
saves a copy of the frame, starts the only timer and sends the frame.
The sender then moves to the blocking state.
Stop-and-Wait protocol
Blocking State. When the sender is in this state, three events can occur:

a. If a time-out occurs, the sender resends the saved copy of the frame and
restarts the timer.

b. If a corrupted ACK arrives, it is discarded.

c. If an error-free ACK arrives, the sender stops the timer and discards the
saved copy of the frame. It then moves to the ready state.
Stop-and-Wait protocol
Receiver

The receiver is always in the ready state. Two events may occur:

a. If an error-free frame arrives, the message in the frame is delivered to


the network layer and an ACK is sent.

b. If a corrupted frame arrives, the frame is discarded.


Stop-and-Wait protocol
• Sequence and Acknowledgment Numbers

If a packet is duplicated, the receiver might treat it as a new packet,


leading to issues like placing duplicate orders (as in the online order
example). Duplicate packets need to be avoided just like corrupted ones.

Adding Sequence Numbers: To avoid duplicates, sequence numbers are


added to each data frame. These numbers alternate between 0 and 1. So,
the sequence would be: 0, 1, 0, 1, 0, 1...
Stop-and-Wait protocol
• Sequence and Acknowledgment Numbers
Similarly, acknowledgment (ACK) frames also use alternating numbers,
but they start with 1. The acknowledgment number in an ACK frame
indicates the sequence number of the next frame expected from the
sender. So, the sequence would be: 1, 0, 1, 0, 1...
The sequence number in the data frame helps the receiver identify
whether the frame is new or a retransmission.
The acknowledgment number sent back to the sender indicates which
frame should come next.
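The following toy simulation is only a sketch of the behaviour described above (lossy frames and ACKs, alternating 0/1 sequence numbers, duplicates discarded at the receiver); none of it is taken from the text:

```python
import random

def stop_and_wait(packets, loss_rate=0.3, seed=1):
    """Toy stop-and-wait run: retransmit on timeout, drop duplicates by sequence number."""
    random.seed(seed)
    delivered, expected, seq = [], 0, 0     # receiver expects 0 first; sender starts at 0
    for data in packets:
        while True:                         # keep resending until an ACK gets through
            if random.random() < loss_rate:
                continue                    # frame lost -> timeout -> resend
            if seq == expected:             # new frame: deliver and flip expectation
                delivered.append(data)
                expected ^= 1
            # a duplicate frame is re-acknowledged but not delivered again
            if random.random() < loss_rate:
                continue                    # ACK lost -> timeout -> resend
            break                           # error-free ACK received
        seq ^= 1                            # next frame uses the other sequence number
    return delivered

print(stop_and_wait(["p1", "p2", "p3"]))    # ['p1', 'p2', 'p3'] despite losses
```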
Stop-and-Wait protocol
• Figure 11.12 shows an example. The
first frame is sent and acknowledged.
The second frame is sent, but lost.
After time-out, it is resent. The third
frame is sent and acknowledged, but
the acknowledgment is lost. The
frame is resent. However, there is a
problem with this scheme. The
network layer at the receiver site
receives two copies of the third
packet, which is not right.
Stop-and-Wait protocol
• Figure 11.13 shows how adding
sequence numbers and
acknowledgment numbers can
prevent duplicates. The first frame is
sent and acknowledged. The second
frame is sent, but lost. After time-out,
it is resent. The third frame is sent
and acknowledged, but the
acknowledgment is lost. The frame is
resent.
Piggybacking
A technique called piggybacking is used to improve the efficiency of the
bidirectional protocols.
• The data in one direction is piggybacked with the acknowledgment in
the other direction.
• In other words, when node A is sending data to node B, Node A also
acknowledges the data received from node B.
High-level Data Link Control (HDLC)
• HDLC is a bit-oriented protocol for communication over point-to-
point and multipoint links.
• HDLC implements the ARQ (Automatic Repeat Request)
mechanisms.
Configurations and Transfer Modes
HDLC provides 2 common transfer modes that can be used in different
configurations:
1) Normal response mode (NRM)
2) Asynchronous balanced mode (ABM).
High-level Data Link Control (HDLC)
• NRM
• The station configuration is unbalanced (Figure 11.14).
• We have one primary station and multiple secondary stations.
• A primary station can send commands, a secondary station can only
respond.
• The NRM is used for both point-to-point and multiple-point links.
High-level Data Link Control (HDLC)
• ABM
• The configuration is balanced (Figure 11.15).
• Link is point-to-point, and each station can function as a primary and a
secondary (acting as peers).
• This is the common mode today.
HDLC Framing
• To provide the flexibility necessary to support all the options possible in
the modes and configurations,
HDLC defines three types of frames:
1) Information frames (I-frames): are used to transport user data and
control information relating to user data (piggybacking).
2) Supervisory frames (S-frames): are used only to transport control
information.
3) Unnumbered frames (U-frames): are reserved for system management.
Information carried by U-frames is intended for managing the link itself.
• Each type of frame serves as an envelope for the transmission of a
different type of message.
HDLC Framing
HDLC Framing
• Various fields of HDLC frame are:
1) Flag Field
• This field has a synchronization pattern 01111110.
• This field identifies both the beginning and the end of a frame.
2) Address Field
• This field contains the address of the secondary station.
• If a primary station created the frame, it contains a to-address.
• If a secondary creates the frame, it contains a from-address.
• This field can be 1 byte or several bytes long, depending on the needs of
the network.
HDLC Framing
3) Control Field
• This field is one or two bytes used for flow and error control.
4) Information Field
• This field contains the user's data from the network-layer or
management information. Its length can vary from one network to
another.
5) FCS Field
• This field is the error-detection field. (FCS Frame Check Sequence).
• This field can contain either a 2- or 4-byte standard CRC
Control Fields of HDLC Frames
The control field determines the type of frame and defines its
functionality.
Control Fields of HDLC Frames
1) Control Field for I-Frames
• I-frames are designed to carry user data from the network-layer.
• In addition, they can include flow and error-control information
(piggybacking).
The subfields in the control field are:
1) The first bit defines the type. If the first bit of the control field is 0, this means the frame is an I-frame.
2) The next 3 bits N(S) define the sequence-number of the frame. With 3 bits,
we can define a sequence-number between 0 and 7
3) The last 3 bits N(R) correspond to the acknowledgment-number when
piggybacking is used.
4) The single bit between N(S) and N(R) is called the P/F bit.
Control Fields of HDLC Frames
The P/F field is a single bit with a dual purpose. It can mean poll or final.
i) It means poll when the frame is sent by a primary station to a secondary
(when the address field contains the address of the receiver).
ii) It means final when the frame is sent by a secondary to a primary (when
the address field contains the address of the sender).
Control Fields of HDLC Frames
2) Control Field for S-Frames
• Supervisory frames are used for flow and error-control whenever
piggybacking is either impossible or inappropriate (e.g., when the station
either has no data of its own to send or needs to send a command or response
other than an acknowledgment).
• S-frames do not have information fields.
• The subfields in the control field are:
1) If the first 2 bits of the control field are 10, this means the frame is an S-frame.
2) The last 3 bits, N(R), correspond to the acknowledgment-number (ACK) or negative-acknowledgment-number (NAK).
3) The 2 bits called code are used to define the type of S-frame itself.
Control Fields of HDLC Frames
With 2 bits, we can have four types of S-frames:
1) Receive Ready (RR) = 00
• This acknowledges the receipt of frame or group of frames.
• The value of N(R) is the acknowledgment-number.

2) Receive Not Ready (RNR) = 10
This is an RR frame with 1 additional function:
• It announces that the receiver is busy and cannot receive more frames.
• It acts as a congestion-control mechanism by asking the sender to slow down.
• The value of N(R) is the acknowledgment-number.
Control Fields of HDLC Frames
3) ReJect (REJ) = 01
• It is a NAK frame used in Go-Back-N ARQ to improve the efficiency of
the process.
• It informs the sender, before the sender's timer expires, that the last frame is lost or damaged.
• The value of N(R) is the negative acknowledgment-number.

4) Selective REJect (SREJ) = 11


• This is a NAK frame used in Selective Repeat ARQ.
• The value of N(R) is the negative acknowledgment-number.
Control Fields of HDLC Frames
3) Control Field for U-Frames
• Unnumbered frames are used to exchange session management and control
information between connected devices.
• U-frames contain an information field used for system management
information, but not user data.
• Much of the information carried by U-frames is contained in codes included
in the control field.
• U-frame codes are divided into 2 sections:
i) A 2-bit prefix before the P/F bit
ii) A 3-bit suffix after the P/F bit.
• Together, these two segments (5 bits) can be used to create up to 32
different types of U-frames.
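Putting the three layouts together, here is a hedged sketch that parses an 8-bit control field given as a bit string in the order described above (first bit(s) = type, then N(S) or code, then P/F, then N(R)); the dictionary keys are my own:

```python
def parse_control(bits: str) -> dict:
    """Interpret an 8-bit HDLC control field written as a bit string."""
    if bits[0] == "0":                       # I-frame: 0 | N(S) | P/F | N(R)
        return {"type": "I", "N(S)": int(bits[1:4], 2),
                "P/F": int(bits[4]), "N(R)": int(bits[5:8], 2)}
    if bits[:2] == "10":                     # S-frame: 1 0 | code | P/F | N(R)
        code = {"00": "RR", "10": "RNR", "01": "REJ", "11": "SREJ"}[bits[2:4]]
        return {"type": "S", "code": code,
                "P/F": int(bits[4]), "N(R)": int(bits[5:8], 2)}
    return {"type": "U",                     # U-frame: 1 1 | prefix | P/F | suffix
            "code": bits[2:4] + bits[5:8], "P/F": int(bits[4])}

print(parse_control("01010011"))   # I-frame, N(S)=5, P/F=0, N(R)=3
```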
POINT-TO-POINT PROTOCOL (PPP)
• PPP is one of the most common protocols for point-to-point access.
• Today, millions of Internet users who connect their home computers to the
server of an ISP use PPP.
Framing
• PPP uses a character-oriented (or byte-oriented) frame.
POINT-TO-POINT PROTOCOL (PPP)
Various fields of PPP frame are:
1) Flag
• This field has a synchronization pattern 01111110.
• This field identifies both the beginning and the end of a frame.
2) Address
• This field is set to the constant value 11111111 (broadcast address).
3) Control
• This field is set to the constant value 00000011 (imitating unnumbered
frames in HDLC).
• PPP does not provide any flow control.
• Error control is also limited to error detection.
POINT-TO-POINT PROTOCOL (PPP)
4) Protocol
• This field defines what is being carried in the payload field.
• Payload field carries either i) user data or ii) other control information. By
default, size of this field = 2 bytes.
5) Payload field
• This field carries either i) user data or ii) other control information.
• By default, maximum size of this field = 1500 bytes.
POINT-TO-POINT PROTOCOL (PPP)
Byte Stuffing
• Since PPP is a byte-oriented protocol, the flag in PPP is a byte that needs to
be escaped whenever it appears in the data section of the frame.
• The escape byte is 01111101, which means that every time the flag-like pattern appears in the data, this extra byte is stuffed to tell the receiver that the next byte is not a flag.
• Obviously, the escape byte itself should be stuffed with another escape byte.
POINT-TO-POINT PROTOCOL (PPP)
Transition Phases
• The transition diagram starts with the dead state.
1) Dead State
In dead state, there is no active carrier and the line is quiet.
2) Establish State
When 1 of the 2 nodes starts communication, the connection goes into the
establish state. In establish state, options are negotiated between the two
parties.
3) Authenticate State
If the 2 parties agree that they need authentication, Then the system needs to
do authentication; Otherwise, the parties can simply start communication.
POINT-TO-POINT PROTOCOL (PPP)
Transition Phases
4) Open State
Data transfer takes place in the open state.
5) Terminate State
When 1 of the endpoints wants to terminate connection, the system goes to
terminate state.
Media Access Control
• When nodes are connected via a common link, called
a multipoint or broadcast link, a multiple-access
protocol is needed to coordinate access.
• This coordination is similar to the rules of
speaking in an assembly, ensuring no two people
speak simultaneously, interrupt each other, or
monopolize the discussion.
• Various protocols have been developed to manage
access to the shared link, all of which fall under
the Media Access Control (MAC) sublayer of the
data-link layer.
• These protocols are categorized into three main
groups.
Media Access Control
• Many protocols have been designed to handle access to a shared-link
Random-access
• In random-access methods, no station has control over the others, and all
stations are equal.
• When a station wants to send data, it checks whether the medium is free
(idle) or busy. If the medium is free, the station can transmit its data
following specific rules.
• These methods are called random access because there is no set schedule for
when each station can send data, and contention methods because stations
compete with each other to access the medium.
• However, if multiple stations try to send data at the same time, a collision
occurs, leading to lost or altered frames.
ALOHA
• ALOHA is the earliest random access method, created at the University of
Hawaii in the early 1970s.
• Originally designed for a wireless LAN, it can be used on any shared
communication medium.
• Since the medium is shared, collisions can occur when multiple stations try
to send data simultaneously, resulting in garbled messages.
Pure ALOHA
• The original version of ALOHA is known as pure ALOHA. In this simple
protocol, each station sends data whenever it has something to send.
• However, with only one channel, there’s a chance that frames from different
stations will collide (as shown in Figure 12.2), leading to data loss.
ALOHA
• In this example, four stations are trying to access the same shared channel,
which is a simplified scenario.
• Each station sends two frames, resulting in a total of eight frames on the
channel. However, some frames collide because multiple stations are trying
to send their data simultaneously.
• As shown in Figure 12.2, only two frames successfully transmit: one from
station 1 and one from station 3. It's important to note that if even a single bit
of one frame overlaps with a bit from another frame on the channel, a
collision occurs, and both frames are destroyed.
• This means that any frames lost during transmission must be resent.
ALOHA
• Acknowledgment in Pure ALOHA
• The pure ALOHA protocol depends on acknowledgments
(ACKs) from the receiver.
• When a station sends a frame, it expects to receive
an acknowledgment back.
• If the acknowledgment doesn't arrive within a set
time (time-out period), the station assumes the
frame (or its acknowledgment) was lost and resends
the frame.
ALOHA
• Handling Collisions
• When multiple stations send frames simultaneously,
a collision occurs.
• If these stations attempt to resend their frames
after the time-out, they may collide again.
• To reduce the chance of repeated collisions, each
station waits a random amount of time (called
backoff time, TB) before resending its frame.
ALOHA
• Maximum Retransmission Attempts
• Pure ALOHA has a second method to prevent channel
congestion:
• After reaching a maximum number of retransmission attempts (Kmax), a station must stop trying to resend and wait before attempting again.
ALOHA
• Vulnerable Time
• The vulnerable time is defined as the time during which there is a possibility of collision.
• For pure ALOHA, the vulnerable time = 2 × Tfr, where Tfr = frame transmission time.
ALOHA
• In Figure 12.4, If station B sends a frame between t-Tfr and t, this leads to a
collision between the frames from station A and station B.
• If station C sends a frame between t and t+Tfr, this leads to a collision
between the frames from station A and station C.
ALOHA
• Throughput
• The average number of successful transmissions for pure ALOHA is S = G × e^(−2G),
• where G = average number of frames generated during one frame transmission time (Tfr).
• For G = 1/2, the maximum throughput Smax =
0.184.
• In other words, out of 100 frames, 18 frames
reach their destination successfully.
ALOHA
Slotted ALOHA
Slotted ALOHA was invented to improve the
efficiency of pure ALOHA.
• The time is divided into time-slots of Tfr
seconds (Figure 12.5).
• The stations are allowed to send only at the
beginning of the time-slot.
Slotted ALOHA
If a station misses the time-slot, the station
must wait until the beginning of the next time-
slot.
If 2 stations try to resend at beginning of the
same time-slot, the frames will collide again
The vulnerable time is given by: vulnerable time = Tfr.
Slotted ALOHA
Throughput
• The average number of successful transmissions for slotted ALOHA is S = G × e^(−G).
• For G = 1, the maximum throughput Smax = 0.368.
• In other words, out of 100 frames, 36 frames
reach their destination successfully.
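A quick numerical check of the two throughput formulas (assuming S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, which reproduce the 0.184 and 0.368 maxima quoted above):

```python
import math

def throughput_pure(G):      # S = G * e^(-2G)
    return G * math.exp(-2 * G)

def throughput_slotted(G):   # S = G * e^(-G)
    return G * math.exp(-G)

print(round(throughput_pure(0.5), 3))     # 0.184, the maximum for pure ALOHA
print(round(throughput_slotted(1.0), 3))  # 0.368, the maximum for slotted ALOHA
```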
Carrier sense multiple access
(CSMA)
• CSMA was developed to minimize the chance of
collision and, therefore, increase the
performance.
• CSMA is based on the principle “sense before
transmit” or “listen before talk.”
• Here is how it works:
1) Each station checks the state of the medium: idle or busy.
2) If the medium is idle, the station sends the data; if the medium is busy, the station defers sending.
Carrier sense multiple access
(CSMA)
• CSMA can reduce the possibility of collision,
but it cannot eliminate it. The possibility of
collision still exists.
• For example:
• When a station sends a frame, it still takes
time for the first bit to reach every station
and for every station to sense it.
Carrier sense multiple access
(CSMA)
• For example: In Figure 12.7,
• At time t1, station B senses & finds the medium
idle, so sends a frame.
• At time t2, station C senses & finds the medium
idle, so sends a frame.
• The 2 signals from both stations B & C collide
and both frames are destroyed.
Carrier sense multiple access
(CSMA)
Vulnerable Time
• The vulnerable time is the propagation time Tp
(Figure 12.8).
Carrier sense multiple access
(CSMA)
Vulnerable Time
• The vulnerable time is the propagation time Tp
.
• The propagation time is the time needed for a
signal to propagate from one end of the medium
to the other.
• Collision occurs when
→ a station sends a frame, and
→ other station also sends a frame during
propagation time
• If the first bit of the frame reaches the end
of the medium, every station will refrain from
sending.
Carrier sense multiple access
(CSMA)
Persistence Methods
• Q: What should a station do if the channel is
busy or idle?
Three methods can be used to answer this
question:
1) 1-persistent method
2) Non-persistent method and
3) p-persistent method
Carrier sense multiple access
(CSMA)
1) 1-persistent method
• Before sending a frame, a station senses the
line.
i) If the line is idle, the station sends
immediately (with probability = 1).
ii) If the line is busy, the station continues
sensing the line.
• This method has the highest chance of collision
because 2 or more stations:
→ may find the line idle and
→ send the frames immediately.
Carrier sense multiple access
(CSMA)
2) Non-Persistent
• Before sending a frame, a station senses the
line (Figure 12.10b).
i) If the line is idle, the station sends
immediately.
ii) If the line is busy, the station waits a
random amount of time and then senses the line
again.
• This method reduces the chance of collision
because 2 or more stations:
→ will not wait for the same amount of time and
→ will not retry to send simultaneously.
Carrier sense multiple access
(CSMA)
3) P-Persistent
• This method is used if the channel has time-
slots with a slot-duration equal to or greater
than the maximum propagation time (Figure 12.10c).
• Advantages:
i) It combines the advantages of the other 2
methods.
ii) It reduces the chance of collision and
improves efficiency.
Carrier sense multiple access
(CSMA)
3) P-Persistent
• After the station finds the line idle, it
follows these steps:
1) With probability p, the station sends the
frame.
2) With probability q=1-p, the station waits for
the beginning of the next time-slot and checks the
line again.
i) If line is idle, it goes to step 1.
ii) If line is busy, it assumes that collision has
occurred and uses the back off procedure.
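A sketch of the p-persistent procedure just listed; the four callbacks (channel_idle, send_frame, wait_slot, backoff) are assumed helpers standing in for the real medium and timers:

```python
import random

def p_persistent_send(channel_idle, send_frame, wait_slot, backoff, p=0.1):
    """p-persistent CSMA sketch: sense, then transmit with probability p per slot."""
    while not channel_idle():        # keep sensing until the line becomes idle
        wait_slot()
    while True:
        if random.random() < p:      # with probability p: transmit
            send_frame()
            return
        wait_slot()                  # with probability 1 - p: wait for the next slot
        if not channel_idle():       # line busy again: assume a collision occurred
            backoff()
            return

# Tiny demo with stub callbacks: an always-idle channel, so the frame is sent at once
p_persistent_send(lambda: True, lambda: print("frame sent"),
                  lambda: None, lambda: None, p=1.0)
```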
CSMA/CD (collision detection)
Disadvantage of CSMA: CSMA does not specify the
procedure after a collision has occurred.
Solution: CSMA/CD enhances the CSMA to handle the
collision.
• Here is how it works (Figure 12.12):
1) A station
→ sends the frame & then monitors the medium to
see if the transmission was successful or not.
2) If the transmission was unsuccessful (i.e.
there is a collision), the frame is sent again.
CSMA/CD (collision detection)
• At time t1, station A has executed its procedure and starts sending the bits of its frame. At time t2, station C has executed its procedure and starts sending the bits of its frame. The collision occurs sometime after time t2.
• Station C detects the collision at time t3 when it receives the first bit of A's frame. Station C immediately aborts transmission.
• Station A detects the collision at time t4 when it receives the first bit of C's frame. Station A also immediately aborts transmission.
• Station A transmits for the duration t4 − t1; station C transmits for the duration t3 − t2.
• For the protocol to work: the length of any frame divided by the bit rate (the frame transmission time) must be more than either of these durations.
CSMA/CD (collision detection)
Minimum Frame Size
• For CSMA/CD to work, we need to restrict the
frame-size.
• Before sending the last bit of the frame, the
sender must
→ detect a collision and
→ abort the transmission.
• This is so because, once the entire frame is
sent, the sender
→ does not keep a copy of the frame and
→ does not monitor the line for collision-detection.
• Therefore, the frame transmission time Tfr must
be at least twice the maximum propagation time Tp:
Tfr ≥ 2 × Tp
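• For example, with an assumed bandwidth of 10 Mbps and an assumed maximum propagation time of 25.6 μs (the classic Ethernet figures), the minimum frame size works out to 512 bits (64 bytes); a quick check:

```python
# Minimum frame size check: Tfr must be at least 2 * Tp.
bandwidth_bps = 10_000_000    # assumed bandwidth: 10 Mbps
tp_seconds = 25.6e-6          # assumed maximum propagation time: 25.6 microseconds

min_tfr = 2 * tp_seconds                  # 51.2 microseconds
min_frame_bits = bandwidth_bps * min_tfr  # 512 bits
print(f"Minimum Tfr = {min_tfr * 1e6:.1f} us, "
      f"minimum frame = {min_frame_bits:.0f} bits "
      f"({min_frame_bits / 8:.0f} bytes)")
```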
CSMA/CD (collision detection)
Procedure
• CSMA/CD is similar to ALOHA with 2 differences
(Figure 12.13):
1) Addition of the persistence process.
We need to sense the channel before sending the
frame by using non-persistent, 1-persistent or p-
persistent.
2) Frame transmission.
i) In ALOHA, the entire frame is transmitted first
and then the station waits for an acknowledgment.
ii) In CSMA/CD, transmission and collision-detection
is a continuous process.
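• The control flow of CSMA/CD can be sketched as follows (a simplified illustration only; the channel object and its methods are hypothetical stand-ins for the sensing, transmitting and jamming actions in Figure 12.13).

```python
import random
import time

def csma_cd_send(frame, channel, max_attempts=15, slot_time=51.2e-6):
    """Sketch of CSMA/CD: persist until idle, transmit while monitoring
    for a collision, and back off exponentially after each collision."""
    for attempt in range(max_attempts):
        channel.wait_until_idle()        # persistence step (1-persistent here)
        collided = channel.transmit_and_detect_collision(frame)
        if not collided:
            return True                  # transmission finished without collision
        channel.send_jam_signal()        # tell the other stations about the collision
        # Binary exponential backoff: wait r slot times, r in [0, 2^k - 1].
        k = min(attempt + 1, 10)
        r = random.randint(0, 2 ** k - 1)
        time.sleep(r * slot_time)
    return False                         # abort after too many attempts
```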
CSMA/CD (collision detection)
Energy Level
• In a channel, the energy-level can have 3 values:
1) Zero 2) Normal and 3) Abnormal.
1) At zero level, the channel is idle (Figure
12.14).
2) At normal level, a station has successfully
captured the channel and is sending its frame.
3) At abnormal level, there is a collision and the
level of the energy is twice the normal level.
• A sender needs to monitor the energy-level to
determine if the channel is
→ idle
→ busy or
→ in collision mode
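• A tiny sketch of how a sender might map the sensed energy level to a channel state; the thresholds are illustrative assumptions, not values from the text.

```python
def channel_state(energy, normal_level):
    # Zero energy -> idle; about one transmission's worth -> busy;
    # roughly twice the normal level -> a collision is in progress.
    if energy < 0.1 * normal_level:
        return "idle"
    elif energy < 1.5 * normal_level:
        return "busy"
    else:
        return "collision"
```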
CSMA/CD (collision detection)
Throughput
• The throughput of CSMA/CD is greater than that
of pure or slotted ALOHA.
• The maximum throughput depends on
→ the value of G,
→ the persistence method used (non-persistent,
1-persistent, or p-persistent) and
→ the value of p in the p-persistent method.
• For the 1-persistent method, the maximum
throughput is about 50% when G = 1.
• For the non-persistent method, the maximum
throughput can reach 90% when G is between 3 and 8.
CSMA/CA (collision avoidance)
Carrier sense multiple access with collision
avoidance (CSMA/CA) was invented for wireless
networks.
Here is how it works (Figure 12.15):
1) A station needs to be able to receive while
transmitting to detect a collision.
i) When there is no collision, the station
receives one signal: its own signal. ii) When
there is a collision, the station receives 2
signals:
a) Its own signal and
b) Signal transmitted by a second station.
2) To distinguish between these 2 cases, the signal
from the second station needs to add a significant
amount of energy to the first.
• In a wireless network, much of the sent energy is
lost in transmission, so a collision adds only a
small amount of extra energy, which is not enough
for effective collision detection.
• Hence, collisions must be avoided rather than
detected.
CSMA/CA (collision avoidance)
• Three methods to avoid collisions (Figure
12.16):
• 1) Interframe space
• 2) Contention window and
• 3) Acknowledgments
CSMA/CA (collision avoidance)
• 1) Interframe Space (IFS)
• Collisions are avoided by deferring
transmission even if the channel is found idle.
• When the channel is idle, the station does not
send immediately.
• Rather, the station waits for a period of time
called the inter-frame space or IFS.
• After the IFS time, if the channel is still
idle, the station waits for the contention time
and finally sends the frame.
• IFS variable can also be used to prioritize
stations or frame types.
CSMA/CA (collision avoidance)
2) Contention Window
• The contention-window is an amount of time
divided into time-slots.
• A ready-station chooses a random-number of
slots as its wait time.
• In the window, the number of slots changes
according to the binary exponential back-off
strategy.
• For example:
At first time, number of slots is set to one
slot and
Then, number of slots is doubled each time if
the station cannot detect an idle channel.
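• The contention-window growth described above can be sketched as follows (a minimal illustration; the maximum window size is an assumption).

```python
import random

def contention_wait_slots(failures, max_window=64):
    # The window starts at one slot and doubles each time the station
    # cannot detect an idle channel (binary exponential back-off).
    window = min(2 ** failures, max_window)   # 1, 2, 4, 8, ... slots
    return random.randint(1, window)          # random number of slots to wait
```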
CSMA/CA (collision avoidance)
3) Acknowledgment
• There may be a collision resulting in
destroyed-data.
• In addition, the data may be corrupted during
the transmission.
• To help guarantee that the receiver has
received the frame, we can use
i) Positive acknowledgment and
ii) Time-out timer
CSMA/CA (collision avoidance)
Frame Exchange Time Line
• Two control frames are used:
1) Request to send (RTS)
2) Clear to send (CTS)
• The procedure for exchange of data and control
frames in time (Figure 12.17):
1) The source senses the medium by checking the
energy level at the carrier frequency.
If the medium is idle, the source waits for a
period of time called the DCF interframe space
(DIFS); then, the source sends an RTS frame.
CSMA/CA (collision avoidance)
2) The destination
→ receives the RTS
→ waits a period of time called the short
interframe space (SIFS)
→ sends a CTS control frame to the source.
The CTS indicates that the destination station is
ready to receive data.
3) The source
→ receives the CTS
→ waits a period of time SIFS
→ sends the data frame to the destination
CSMA/CA (collision avoidance)
4) The destination
→ receives the data
→ waits a period of time SIFS
→ sends an acknowledgment (ACK) to the source.
The ACK indicates that the destination has
received the frame.
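• The whole exchange can be summarized in the following sketch; the station objects, their methods and the frame names are illustrative stand-ins for the timeline of Figure 12.17, not part of any real 802.11 API.

```python
def csma_ca_exchange(source, destination, data, DIFS, SIFS):
    """Sketch of the CSMA/CA frame-exchange timeline (RTS/CTS/data/ACK)."""
    source.sense_channel()                 # 1) check energy level at the carrier frequency
    source.wait(DIFS)                      #    idle -> wait the DCF interframe space
    source.send("RTS", to=destination)

    destination.wait(SIFS)                 # 2) destination answers after a short IFS
    destination.send("CTS", to=source)     #    ready to receive

    source.wait(SIFS)                      # 3) source sends the data frame
    source.send(data, to=destination)

    destination.wait(SIFS)                 # 4) destination confirms reception
    destination.send("ACK", to=source)
```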
CSMA/CA (collision avoidance)
Network Allocation Vector
• When a source-station sends an RTS, it
includes the duration of time that it needs to
occupy the
channel.
• The remaining stations create a timer called a
network allocation vector (NAV).
• The NAV shows how much time must pass before a
station is allowed to check the channel for
idleness.
• Each time a station accesses the system and
sends an RTS frame, other stations start their
NAV.
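• A small sketch of how a bystander station might maintain its NAV from the duration carried in an overheard RTS (the attribute and method names are hypothetical).

```python
class BystanderStation:
    def __init__(self):
        self.nav_expires_at = 0.0   # time until which the channel is assumed occupied

    def on_overheard_rts(self, rts_duration, now):
        # Start (or extend) the NAV timer using the duration advertised in the RTS.
        self.nav_expires_at = max(self.nav_expires_at, now + rts_duration)

    def may_check_channel(self, now):
        # The station does not even sense the channel until its NAV has expired.
        return now >= self.nav_expires_at
```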
CSMA/CA (collision avoidance)
Collision during Handshaking
• Two or more stations may try to send RTS at
the same time.
• These RTS may collide.
• The source assumes there has been a collision
if it has not received CTS from the destination.
• The backoff strategy is employed, and the
source tries again.
CSMA/CA (collision avoidance)
Hidden Station Problem
• Figure 12.17 also shows that the RTS from B
reaches A, but not C.
• However, because both B and C are within the
range of A, the CTS reaches C.
• Station C knows that some hidden station is
using the channel and refrains from transmitting
until that duration is over.
CSMA/CA and Wireless Networks
• CSMA/CA was mostly intended for use in
wireless networks.
• However, it is not sophisticated enough to
handle some particular issues related to wireless
networks, such as hidden or exposed terminals;
these are addressed in the IEEE 802.11 MAC sublayer.
CONTROLLED ACCESS PROTOCOLS
Here, the stations consult one another to find
which station has the right to send.
• A station cannot send unless it has been
authorized by other stations.
• Three popular controlled-access methods are:
1) Reservation
2) Polling
3) Token Passing
CONTROLLED ACCESS PROTOCOLS
1 Reservation
• Before sending data, each station needs to
make a reservation of the medium.
• Time is divided into intervals.
• In each interval, a reservation-frame precedes
the data-frames.
• If no. of stations = N, then there are N
reservation mini-slots in the reservation-frame.
• Each mini-slot belongs to a station.
• When a station wants to send a data-frame, it
makes a reservation in its own mini slot.
• The stations that have made reservations can
send their data-frames.
CONTROLLED ACCESS PROTOCOLS
1 Reservation
For example (Figure 12.18):
• 5 stations have a 5-minislot reservation-frame.
• In the first interval, only stations 1, 3, and
4 have made reservations.
• In the second interval, only station-1 has made
a reservation.
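• The two intervals of this example can be modelled with a simple bit map, one mini-slot per station; the sketch below is only an illustration of the idea.

```python
def reservation_interval(requests, n_stations=5):
    """Build the reservation mini-slot frame and return the send order.
    `requests` is the set of station numbers (1..N) that want to send."""
    minislots = [1 if s in requests else 0 for s in range(1, n_stations + 1)]
    send_order = [s for s in range(1, n_stations + 1) if s in requests]
    return minislots, send_order

# First interval: stations 1, 3 and 4 make reservations.
print(reservation_interval({1, 3, 4}))   # ([1, 0, 1, 1, 0], [1, 3, 4])
# Second interval: only station 1 makes a reservation.
print(reservation_interval({1}))         # ([1, 0, 0, 0, 0], [1])
```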
CONTROLLED ACCESS PROTOCOLS
2 Polling
• In a network,
one device is designated as the primary station
and the other devices are designated as secondary
stations.
• Functions of primary-device:
1) The primary-device controls the link.
2) The primary-device is always the initiator of
a session.
3) The primary-device determines which device is
allowed to use the channel at a given time.
4) All data exchanges must be made through the
primary-device, even when the ultimate destination
is a secondary-device.
CONTROLLED ACCESS PROTOCOLS
2 Polling
• The secondary-devices follow the instructions of
the primary-device.
• Disadvantage: If the primary station fails,
the system goes down.
• Poll and select functions are used to prevent
collisions (Figure 12.19).
CONTROLLED ACCESS PROTOCOLS
1) Select
• If the primary wants to send data, it tells the
secondary to get ready to receive; this is called
the select function.
• The primary alerts the secondary about the
upcoming transmission by sending a select (SEL)
frame, then waits for an acknowledgment (ACK)
from the secondary.
• The primary then sends the data frame and finally
waits for an ACK from the secondary.
CONTROLLED ACCESS PROTOCOLS
2) Poll
If the primary wants to receive data, it asks
the secondaries if they have anything to send;
this is called the poll function.
When the first secondary is approached, it
responds either
→ with a NAK frame if it has no data to send or
→ with data-frame if it has data to send.
i) If the response is negative (NAK frame), then
the primary polls the next secondary in the same
manner.
ii) When the response is positive (a data-frame),
the primary reads the frame and returns an
acknowledgment (ACK) to the secondary.
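• A compact sketch of the primary's select and poll behaviour; the station objects, their methods and the SEL/POLL/ACK/NAK names are hypothetical stand-ins for the frames described above.

```python
def select(primary, secondary, frame):
    # Primary wants to send: alert the secondary, wait for its ACK, then send data.
    primary.send("SEL", to=secondary)
    if primary.wait_for("ACK", sender=secondary):
        primary.send(frame, to=secondary)
        primary.wait_for("ACK", sender=secondary)

def poll_round(primary, secondaries):
    # Primary wants to receive: ask each secondary in turn whether it has data.
    for secondary in secondaries:
        primary.send("POLL", to=secondary)
        reply = primary.wait_for_reply(sender=secondary)
        if reply == "NAK":
            continue                        # nothing to send; poll the next secondary
        primary.accept(reply)               # a data frame arrived
        primary.send("ACK", to=secondary)   # acknowledge it
```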
CONTROLLED ACCESS PROTOCOLS
3 Token Passing
• In a network, the stations are organized in a
ring fashion, i.e., for each station there is a
predecessor and a successor.
1) The predecessor is the station which is
logically before the station in the ring.
2) The successor is the station which is after
the station in the ring.
• The current station is the one that is
accessing the channel now.
• A token is a special packet that circulates
through the ring.
CONTROLLED ACCESS PROTOCOLS
3 Token Passing
Here is how it works:
• A station can send the data only if it has the
token.
• When a station wants to send the data, it waits
until it receives the token from its
predecessor.
• Then, the station holds the token and sends
its data.
• When the station finishes sending the data, the
station
→ releases the token
→ passes the token to the successor.
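• One station's behaviour can be sketched as follows (a simplified illustration; the helper methods and the holding-time limit, which anticipates the management rules listed next, are assumptions).

```python
def token_station_loop(station, successor, max_holding_time):
    """Simplified sketch of one station in a token-passing ring."""
    while True:
        station.wait_for_token()                 # token arrives from the predecessor
        if station.has_data():
            # Hold the token and send, but only up to the allowed holding time.
            station.send_queued_frames(limit=max_holding_time)
        station.pass_token_to(successor)         # release the token to the successor
```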
CONTROLLED ACCESS PROTOCOLS
3 Token Passing
Main functions of token management:
1) Stations must be limited in the time they can
hold the token.
2) The token must be monitored to ensure it has
not been lost or destroyed.
For example, if a station holding the token fails,
the token will disappear from the network.
3) Assign priorities to the stations and to the
types of data being transmitted.
4) Make low-priority stations release the token
to high priority stations.
CONTROLLED ACCESS PROTOCOLS
Logical Ring
• In a token-passing network, stations do not
have to be physically connected in a ring; the
ring can be a logical one.
• Four physical topologies to create a logical
ring (Figure 12.20):
1) Physical ring
2) Dual ring
3) Bus ring
4) Star ring
CONTROLLED ACCESS PROTOCOLS
1) Physical Ring Topology
• When a station sends token to its successor,
token cannot be seen by other stations. (Figure
12.20a)
• This means that the token does not have the
address of the next successor.
• Disadvantage: If one of the links fails, the
whole system fails.
CONTROLLED ACCESS PROTOCOLS
2) Dual Ring Topology
• A second (auxiliary) ring is used along with
the main ring (Figure 12.20b).
→ operates in the reverse direction compared
with the main ring.
→ is used for emergencies only (such as a spare
tire for a car).
CONTROLLED ACCESS PROTOCOLS
2) Dual Ring Topology
• If the main ring fails, the system
automatically combines the 2 rings to form a
temporary ring.
• After the failed link is restored, the second
ring becomes idle again.
• Each station needs to have 2 transmitter-ports
and 2 receiver-ports.
• This topology is used in
i) FDDI (Fiber Distributed Data Interface) and
ii) CDDI (Copper Distributed Data Interface).
CONTROLLED ACCESS PROTOCOLS
3) Bus Ring Topology
• The stations are connected to a single cable
called a bus (Figure 12.20c). This makes a
logical ring, because each station knows the
address of its successor and predecessor.
• When a station has finished sending its data, it
→ releases the token and
→ inserts the address of its successor in the token.
• Only the station whose address matches the
destination address in the token gets the token
and can access the shared medium.
• This topology is used in the Token Bus LAN.
CONTROLLED ACCESS PROTOCOLS
4) Star Ring Topology
• The physical topology is a star (Figure
12.20d).
• There is a hub that acts as the connector.
• The wiring inside the hub makes the ring i.e.
the stations are connected to the ring through
the 2 wire connections.
CONTROLLED ACCESS PROTOCOLS
4) Star Ring Topology
• Advantages:
1) This topology is less prone to failure because,
if a link goes down, it will be bypassed by the
hub and the rest of the stations can continue to
operate.
2) Adding and removing stations from the ring is
easier.
• This topology is used in the Token Ring LAN.