CONNECTION RELEASE

➢ Releasing a connection is easier than establishing one.

➢ Nevertheless, there are more pitfalls than one might expect here.

➢ There are two styles of terminating a connection: asymmetric release and symmetric release.

➢ Asymmetric release is the way the telephone system works: when one party hangs up, the connection is broken.

➢ Symmetric release treats the connection as two separate unidirectional connections and requires each one to be released separately.

➢ Asymmetric release is abrupt and may result in data loss.

➢ Consider the scenario of Fig. 6-12.

DR V RAMA KRISHNA TOTTEMPUDI 1


➢ After the connection is established, host 1 sends a segment that arrives properly at host 2.

➢ Then host 1 sends another segment.

➢ Unfortunately, host 2 issues a DISCONNECT before the second segment arrives.

➢ The result is that the connection is released and data are lost.

➢ Clearly, a more sophisticated release protocol is needed to avoid data loss.

➢ One way is to use symmetric release, in which each direction is released independently of the other one. Here, a host can continue to receive data even after it has sent a DISCONNECT segment.

➢ Symmetric release does the job when each process has a fixed amount of data to send and clearly knows when it has sent it.

➢ In other situations, determining that all the work has been done and the connection should be terminated is not so obvious.

➢ One can envision a protocol in which host 1 says ‘‘I am done. Are you done too?’’ If host 2 responds ‘‘I am done too. Goodbye,’’ the connection can be safely released.

➢ Unfortunately, this protocol does not always work.

➢ There is a famous problem that illustrates this issue.

➢ It is called the two-army problem. Imagine that a white army is encamped in a valley, as shown in Fig. 6-13.

➢ On both of the surrounding hillsides are blue armies.

➢ The white army is larger than either of the blue armies alone, but together the blue armies are larger than the white army.

➢ If either blue army attacks by itself, it will be defeated, but if the two blue armies attack simultaneously, they will be victorious.

➢ The blue armies want to synchronize their attacks.

➢ However, their only communication medium is to send messengers on foot down into the valley, where they might be captured and the message lost (i.e., they have to use an unreliable communication channel).

➢ The question is: does a protocol exist that allows the blue armies to win? Suppose that the commander of blue army #1 sends a message reading: ‘‘I propose we attack at dawn on March 29. How about it?’’

➢ Now suppose that the message arrives, the commander of blue army #2 agrees, and his reply gets safely back to blue army #1.

➢ Will the attack happen? Probably not, because commander #2 does not know if his reply got through.

➢ If it did not, blue army #1 will not attack, so it would be foolish for him to charge into battle.

➢ Now let us improve the protocol by making it a three-way handshake.

➢ The initiator of the original proposal must acknowledge the response.

➢ Assuming no messages are lost, blue army #2 will get the acknowledgement, but the commander of blue army #1 will now hesitate.

➢ After all, he does not know if his acknowledgement got through, and if it did not, he knows that blue army #2 will not attack.

➢ We could now make a four-way handshake protocol, but that does not help either.

➢ In fact, it can be proven that no protocol exists that works.

➢ Suppose that some protocol did exist.

➢ Either the last message of the protocol is essential, or it is not.

➢ If it is not, we can remove it (and any other unessential messages) until we are left with a protocol in which every message is essential.

➢ What happens if the final message does not get through? We just said that it was essential, so if it is lost, the attack does not take place.

➢ Since the sender of the final message can never be sure of its arrival, he will not risk attacking.

➢ Worse yet, the other blue army knows this, so it will not attack either.

➢ To see the relevance of the two-army problem to releasing connections, rather than to military affairs, just substitute ‘‘disconnect’’ for ‘‘attack.’’

➢ If neither side is prepared to disconnect until it is convinced that the other side is prepared to disconnect too, the disconnection will never happen.

➢ In practice, we can avoid this quandary by foregoing the need for agreement and pushing the problem up to the transport user, letting each side independently decide when it is done.

➢ This is an easier problem to solve. Figure 6-14 illustrates four scenarios of releasing using a three-way handshake.

➢ While this protocol is not infallible, it is usually adequate.

➢ In Fig. 6-14(a), we see the normal case in which one of the users sends a DR (DISCONNECTION REQUEST) segment to initiate the connection release.

➢ When it arrives, the recipient sends back a DR segment and starts a timer, just in case its DR is lost.

➢ When this DR arrives, the original sender sends back an ACK segment and releases the connection.

➢ Finally, when the ACK segment arrives, the receiver also releases the connection. Releasing a connection means that the transport entity removes the information about the connection from its table of currently open connections and signals the connection’s owner (the transport user) somehow.

➢ This action is different from a transport user issuing a DISCONNECT primitive.

➢ If the final ACK segment is lost, as shown in Fig. 6-14(b), the situation is saved by the timer.

➢ When the timer expires, the connection is released anyway.

➢ Now consider the case of the second DR being lost.

➢ The user initiating the disconnection will not receive the expected response, will time out, and will start all over again.

➢ In Fig. 6-14(c), we see how this works, assuming that the second time no segments are lost and all segments are delivered correctly and on time.

➢ Our last scenario, Fig. 6-14(d), is the same as Fig. 6-14(c) except that now we assume all the repeated attempts to retransmit the DR also fail due to lost segments.

➢ After N retries, the sender just gives up and releases the connection.

➢ Meanwhile, the receiver times out and also exits.

➢ While this protocol usually suffices, in theory it can fail if the initial DR and N retransmissions are all lost.

➢ The sender will give up and release the connection, while the other side knows nothing at all about the attempts to disconnect and is still fully active.
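
The initiator's side of this DR/DR/ACK exchange can be sketched as a small simulation; the retry limit and the lossy-channel model below are illustrative, not part of any real protocol.

```python
import random

N_RETRIES = 3   # illustrative retry limit: give up after N timeouts

def release(loss_rate, rng):
    """Initiator's side of the three-way release handshake.

    Each attempt sends a DR and waits for the peer's DR; a lost segment
    in either direction shows up as a timeout. Returns 'released' on
    success, or 'gave up' after the initial DR plus N retransmissions
    all fail (the failure case of Fig. 6-14(d)).
    """
    for attempt in range(1 + N_RETRIES):
        if rng.random() < loss_rate:
            continue                 # timeout: retransmit the DR
        return "released"            # peer's DR arrived: send ACK, release
    return "gave up"                 # peer may be left half-open

print(release(0.0, random.Random(1)))   # lossless channel → released
print(release(1.0, random.Random(1)))   # every segment lost → gave up
```

Note that "gave up" corresponds exactly to the half-open situation described next: the initiator releases while the other side has seen nothing.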

➢ This situation results in a half-open connection.

➢ We could have avoided this problem by not allowing the sender to give up after N retries and forcing it to go on forever until it gets a response.

➢ However, if the other side is allowed to time out, the sender will indeed go on forever, because no response will ever be forthcoming.

➢ If we do not allow the receiving side to time out, the protocol hangs in Fig. 6-14(d).

➢ One way to kill off half-open connections is to have a rule saying that if no segments have arrived for a certain number of seconds, the connection is automatically disconnected.

➢ That way, if one side ever disconnects, the other side will detect the lack of activity and also disconnect.

➢ This rule also takes care of the case where the connection is broken (because the network can no longer deliver packets between the hosts) without either end disconnecting first.

➢ Of course, if this rule is introduced, it is necessary for each transport entity to have a timer that is stopped and then restarted whenever a segment is sent.

➢ If this timer expires, a dummy segment is transmitted, just to keep the other side from disconnecting.

➢ On the other hand, if the automatic disconnect rule is used and too many dummy segments in a row are lost on an otherwise idle connection, first one side, then the other will automatically disconnect.
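
The idle-timer rule and the keepalive dummies can be sketched as a per-interval tick routine; the interval thresholds and the Endpoint class are illustrative.

```python
KEEPALIVE_AFTER = 1   # illustrative: send a dummy after 1 silent interval
IDLE_LIMIT = 3        # illustrative: disconnect after 3 silent intervals

class Endpoint:
    """One transport entity applying the automatic-disconnect rule."""
    def __init__(self):
        self.idle = 0              # intervals since a segment arrived
        self.connected = True

    def tick(self, segment_arrived, outgoing):
        """Run once per timer interval."""
        if not self.connected:
            return
        self.idle = 0 if segment_arrived else self.idle + 1
        if self.idle >= IDLE_LIMIT:
            self.connected = False     # assume the other side is gone
        elif self.idle >= KEEPALIVE_AFTER:
            # transmit a dummy segment just to keep the peer from
            # disconnecting an otherwise idle connection
            outgoing.append("dummy")

e, out = Endpoint(), []
for arrived in (True, False, False, False):    # some traffic, then silence
    e.tick(arrived, out)
print(e.connected, out)   # → False ['dummy', 'dummy']
```

If the dummies in `out` are lost, the peer's own idle counter keeps climbing, which is exactly the mutual-disconnect case the text describes.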

➢ We will not belabor this point any more, but by now it should be clear that releasing a connection without data loss is not nearly as simple as it first appears.

➢ The lesson here is that the transport user must be involved in deciding when to disconnect—the problem cannot be cleanly solved by the transport entities themselves.

➢ To see the importance of the application, consider that while TCP normally does a symmetric close (with each side independently closing its half of the connection with a FIN packet when it has sent its data), many Web servers send the client a RST packet that causes an abrupt close of the connection that is more like an asymmetric close.

➢ This works only because the Web server knows the pattern of data exchange.

➢ First it receives a request from the client, which is all the data the client will send, and then it sends a response to the client.

➢ When the Web server is finished with its response, all of the data has been sent in either direction.

➢ The server can send the client a warning and abruptly shut the connection.

➢ If the client gets this warning, it will release its connection state then and there.

➢ If the client does not get the warning, it will eventually realize that the server is no longer talking to it and release the connection state.

➢ The data has been successfully transferred in either case.
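
The abrupt, RST-style close can be reproduced with ordinary sockets: setting SO_LINGER with a zero timeout makes close() abort the connection instead of performing the normal FIN exchange. The loopback server below is a toy stand-in for a Web server; whether the client then observes a reset or merely an end-of-stream is platform-dependent.

```python
import socket
import struct
import threading

def toy_server(srv):
    conn, _ = srv.accept()
    conn.recv(1024)                  # the request: all the client will send
    conn.sendall(b"response")        # the complete response
    conn.recv(1024)                  # wait until the client has read it
    # SO_LINGER with l_onoff=1, l_linger=0: close() aborts with RST
    # (asymmetric-style) rather than the usual FIN-based symmetric close.
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()
    srv.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=toy_server, args=(srv,)).start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"request")
reply = cli.recv(1024)               # the data itself arrives intact
cli.sendall(b"done")                 # tell the server we are finished
try:
    outcome = "eof" if cli.recv(1024) == b"" else "data"
except ConnectionResetError:
    outcome = "reset"                # the abrupt close, as seen by the client
cli.close()
print(reply, outcome)
```

Either way, the response was fully delivered before the abort, which is why the server can safely skip the symmetric close.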

ERROR CONTROL AND FLOW CONTROL

➢ Having examined connection establishment and release in some detail, let us now look at how connections are managed while they are in use.

➢ The key issues are error control and flow control.

➢ Error control is ensuring that the data is delivered with the desired level of reliability, usually that all of the data is delivered without any errors.

➢ Flow control is keeping a fast transmitter from overrunning a slow receiver.

➢ Both of these issues have come up before, when we studied the data link layer.

➢ Given that these mechanisms are used on frames at the link layer, it is natural to wonder why they would be used on segments at the transport layer as well.

➢ However, there is little duplication between the link and transport layers in practice.

➢ Even though the same mechanisms are used, there are differences in function and degree.

➢ For a difference in function, consider error detection.

➢ The link layer checksum protects a frame while it crosses a single link.

➢ The transport layer checksum protects a segment while it crosses an entire network path.

➢ It is an end-to-end check, which is not the same as having a check on every link.

➢ The link layer checksums protect a packet only while it travels across a link, not while it sits inside a router, where it may be corrupted in memory.

➢ Thus, a packet can be delivered incorrectly even though it was correct according to the checks on every link.

➢ According to this argument, the transport layer check that runs end-to-end is essential for correctness, and the link layer checks are not essential but nonetheless valuable for improving performance.
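
The end-to-end check TCP actually carries is the 16-bit one's-complement Internet checksum; a minimal version is sketched below. Its key property is that a segment that arrives intact, checksum included, sums to zero at the receiver, no matter where along the path corruption could have occurred.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement Internet checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"              # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

segment = b"end-to-end check"        # even length, so padding is not a factor
cksum = internet_checksum(segment)
# Receiver-side verification: data plus its checksum must sum to zero.
print(internet_checksum(segment + cksum.to_bytes(2, "big")))   # → 0
```

A link-layer CRC performs the same kind of verification, but only hop by hop; this check is recomputed nowhere along the path, which is what makes it end-to-end.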

➢ As a difference in degree, consider retransmissions and the sliding window protocol.

➢ Most wireless links, other than satellite links, can have only a single frame outstanding from the sender at a time.

➢ That is, the bandwidth-delay product for the link is small enough that not even a whole frame can be stored inside the link.

➢ In this case, a small window size is sufficient for good performance.

➢ For example, 802.11 uses a stop-and-wait protocol, transmitting or retransmitting each frame and waiting for it to be acknowledged before moving on to the next frame.

➢ Having a window size larger than one frame would add complexity without improving performance.

➢ For wired and optical fiber links, such as (switched) Ethernet or ISP backbones, the error rate is low enough that link-layer retransmissions can be omitted because the end-to-end retransmissions will repair the residual frame loss.

➢ On the other hand, many TCP connections have a bandwidth-delay product that is much larger than a single segment.
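
The difference in degree comes down to the bandwidth-delay product; a quick back-of-the-envelope calculation, with illustrative link parameters, shows the contrast.

```python
def bdp_bytes(bandwidth_bps: float, delay_s: float) -> float:
    """Bandwidth-delay product: bytes 'stored inside' the path."""
    return bandwidth_bps * delay_s / 8

# A short 802.11-style hop: 54 Mbit/s with ~10 microseconds of
# propagation delay -- far less than one maximum-size frame.
wifi = bdp_bytes(54e6, 10e-6)
# A long TCP path: 1 Gbit/s with a 50 ms round-trip time.
wan = bdp_bytes(1e9, 50e-3)
print(wifi, wan, wan / 1460)   # WAN BDP also expressed in 1460-byte segments
```

The wireless hop holds well under one frame, so stop-and-wait loses nothing, while the WAN path holds thousands of segments, so a large sliding window is essential.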

➢ Given that transport protocols generally use larger sliding windows, we will look at the issue of buffering data more carefully.

➢ Since a host may have many connections, each of which is treated separately, it may need a substantial amount of buffering for the sliding windows.

➢ The buffers are needed at both the sender and the receiver.

➢ Certainly they are needed at the sender to hold all transmitted but as yet unacknowledged segments.

➢ They are needed there because these segments may be lost and need to be retransmitted.

➢ However, since the sender is buffering, the receiver may or may not dedicate specific buffers to specific connections, as it sees fit.

➢ The receiver may, for example, maintain a single buffer pool shared by all connections.

➢ When a segment comes in, an attempt is made to dynamically acquire a new buffer.

➢ If one is available, the segment is accepted; otherwise, it is discarded.

➢ Since the sender is prepared to retransmit segments lost by the network, no permanent harm is done by having the receiver drop segments, although some resources are wasted.

➢ The sender just keeps trying until it gets an acknowledgement.
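
The shared-pool policy of accepting a segment only when a buffer can be acquired, and otherwise dropping it and relying on the sender's retransmission, can be sketched as follows (the class and names are illustrative):

```python
class SharedBufferPool:
    """Receiver-side buffer pool shared by all connections."""
    def __init__(self, n_buffers: int):
        self.free = n_buffers
        self.held = []                 # segments awaiting the transport user

    def on_segment(self, segment) -> bool:
        """Try to dynamically acquire a buffer; drop the segment if none."""
        if self.free == 0:
            return False               # dropped: the sender will retransmit
        self.free -= 1
        self.held.append(segment)
        return True

    def deliver(self):
        """Transport user consumes a segment, returning its buffer."""
        self.free += 1
        return self.held.pop(0)

pool = SharedBufferPool(2)
print([pool.on_segment(s) for s in ("a", "b", "c")])  # → [True, True, False]
pool.deliver()                         # the user consumes 'a', freeing a buffer
print(pool.on_segment("c"))            # the retransmission now fits → True
```

The drop in the middle costs only a retransmission, which is exactly the trade-off the text describes.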

➢ The best trade-off between source buffering and destination buffering depends on the type of traffic carried by the connection.

➢ For low-bandwidth bursty traffic, such as that produced by an interactive terminal, it is reasonable not to dedicate any buffers, but rather to acquire them dynamically at both ends, relying on buffering at the sender if segments must occasionally be discarded.

➢ On the other hand, for file transfer and other high-bandwidth traffic, it is better if the receiver does dedicate a full window of buffers, to allow the data to flow at maximum speed.

➢ This is the strategy that TCP uses.

➢ There still remains the question of how to organize the buffer pool.

➢ If most segments are nearly the same size, it is natural to organize the buffers as a pool of identically sized buffers, with one segment per buffer, as in Fig. 6-15(a).

➢ However, if there is wide variation in segment size, from short requests for Web pages to large packets in peer-to-peer file transfers, a pool of fixed-sized buffers presents problems.

➢ If the buffer size is chosen to be equal to the largest possible segment, space will be wasted whenever a short segment arrives.

➢ If the buffer size is chosen to be less than the maximum segment size, multiple buffers will be needed for long segments, with the attendant complexity.

➢ Another approach to the buffer size problem is to use variable-sized buffers, as in Fig. 6-15(b).

➢ The advantage here is better memory utilization, at the price of more complicated buffer management.

➢ A third possibility is to dedicate a single large circular buffer per connection, as in Fig. 6-15(c).

➢ This system is simple and elegant and does not depend on segment sizes, but makes good use of memory only when the connections are heavily loaded.

➢ As connections are opened and closed and as the traffic pattern changes, the sender and receiver need to dynamically adjust their buffer allocations.

➢ Consequently, the transport protocol should allow a sending host to request buffer space at the other end.

➢ Buffers could be allocated per connection, or collectively, for all the connections running between the two hosts.

➢ Alternatively, the receiver, knowing its buffer situation (but not knowing the offered traffic) could tell the sender ‘‘I have reserved X buffers for you.’’

➢ If the number of open connections should increase, it may be necessary for an allocation to be reduced, so the protocol should provide for this possibility.

➢ Dynamic buffer management means, in effect, a variable-sized window.

➢ Initially, the sender requests a certain number of buffers, based on its expected needs.

➢ The receiver then grants as many of these as it can afford.

➢ Every time the sender transmits a segment, it must decrement its allocation, stopping altogether when the allocation reaches zero.

➢ The receiver separately piggybacks both acknowledgements and buffer allocations onto the reverse traffic.

➢ TCP uses this scheme, carrying buffer allocations in a header field called Window size.
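
The sender's side of this credit bookkeeping can be sketched in a few lines; the class and names are illustrative, though in TCP the grant travels in exactly this role as the Window size field.

```python
class CreditSender:
    """Sender-side bookkeeping for dynamic buffer allocation: every
    segment sent consumes one credit; fresh credits arrive piggybacked
    on acknowledgements from the receiver."""
    def __init__(self):
        self.credits = 0

    def on_grant(self, n: int):
        self.credits = n               # receiver: 'I have reserved n buffers'

    def try_send(self, segment, network) -> bool:
        if self.credits == 0:
            return False               # allocation exhausted: must wait
        self.credits -= 1
        network.append(segment)
        return True

net = []
sender = CreditSender()
sender.on_grant(2)                     # receiver grants two buffers
print([sender.try_send(s, net) for s in ("s1", "s2", "s3")])
print(net)       # → ['s1', 's2']; the third send stalls until a new grant
```

The variable-sized window falls out naturally: the receiver shrinks or grows the allocation simply by changing what it grants on the next acknowledgement.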
