Data Communication & Cloud Computing

The document covers the fundamentals of data communication, including key concepts such as data transmission, transmission media, and the characteristics of analog and digital data. It discusses the components involved in data communication systems, the types of transmission media (both physical and wireless), and the challenges of transmission impairments like noise and attenuation. Additionally, it highlights the importance of encoding and the differences between digital and analog data in the context of communication systems.

B.Sc.–III SEMESTER–VI Paper-11.2 (Elective II)
DATA COMMUNICATION WITH CLOUD COMPUTING

UNIT I: Data Communication


Data Transmission: Concept and Terminology
Data communication means the exchange of data between devices through some transmission medium.
For data communication to take place, the communicating devices must be part of a
communication system, that is, a combination of hardware and software. It may also be defined
as the process of transferring a message from one location to another by means of an electrical or
optical system.
Transmission System: The success of a data communication system depends upon four
characteristics:
1. Delivery: The system must deliver data to the exact destination. Data must be received by the
intended device or user, and only by that device or user.
2. Accuracy: The system must deliver the data correctly.
3. Timeliness: The system must deliver data on time. Data that is delivered late is useless.
4. Jitter: Jitter is the variation in the arrival time of packets. Suppose video packets are sent
every 30 ms; if some packets arrive with a 30 ms delay and others with a 40 ms delay, the result
is uneven video quality.

Transmission of Data: The successful transmission of data depends principally on three factors:
1. Quality of the signal being transmitted
2. Characteristics of the transmission media
3. Size of the data being transferred.

Transmission Components: The basic components involved in data communication systems
are:
1. Message: The message is the information (data) to be communicated between devices.
2. Sender: The sender is the device that sends the message. It can be a computer, workstation,
telephone handset, video camera etc.
3. Receiver: The receiver is the device that receives the message. It can be a computer,
workstation, telephone handset, television etc.
4. Transmission Medium: The transmission medium is the path along which the message travels
from the sender to the receiver.
5. Protocol: A protocol is a set of rules agreed upon by the devices involved in the
communication.

Fig. Five Components of Communication

Analog and Digital Data Transmission
While transmitting the data from source to destination, one must be concerned with the nature of
data, the physical means that is used to send the data, and the processing that is needed to ensure
that the received data is meaningful.

Analog Data Transmission

Analog transmission uses a continuous form to represent data such as sound waves or
microwaves. Analog signals are well suited to carrying data such as voice or sound. These
signals are prone to errors or noise caused by outside sources.
When data is transmitted over long distances, digital transmission is hampered by the
introduction of spurious signals. To overcome this problem, digital data signals are converted to
analog ones. The most common medium for analog data transmission is the telephone line,
although microwaves and coaxial cables are alternatives.
There are basically three characteristics that define an analog signal:
1. Amplitude: The amplitude is the value of the analog signal at any point on the wave. It
may be measured in voltage, current, etc.
2. Frequency: Frequency refers to the number of cycles a signal completes in one second;
it is measured in Hertz (Hz).
3. Phase: Phase refers to the position of the waveform relative to a reference point in its
cycle and is measured in degrees. (The related term wavelength refers to the distance
between successive similar points of a wave.)
An analog signal is shown below.

Fig. Analog Signals
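The three characteristics above can be traced with a short sketch. The formula below is the standard sine-wave model A·sin(2πft + φ); the 5 V / 50 Hz values are illustrative, not taken from the text:

```python
import math

def analog_signal(t, amplitude, frequency_hz, phase_deg):
    """Instantaneous value of a sine-wave analog signal at time t (seconds)."""
    phase_rad = math.radians(phase_deg)          # phase is given in degrees
    return amplitude * math.sin(2 * math.pi * frequency_hz * t + phase_rad)

# A 5 V, 50 Hz signal with no phase shift: zero at t = 0,
# peak amplitude one quarter-period later (t = 1/200 s).
print(analog_signal(0.0, 5, 50, 0))     # 0.0
print(analog_signal(0.005, 5, 50, 0))   # 5.0
```

A 90-degree phase shift moves the peak to t = 0, which is exactly what "position of the waveform relative to a reference point" means.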

Digital Data Transmission


Most computers transfer data internally using digital transmission. Digital signaling is
accomplished by inducing small discrete pulses of electricity, typically about 5 volts, through a
wire. A digital signal is a sequence of voltage pulses represented in binary form, i.e. 0s and 1s.
It works well within a computer, requires little power, and internal data transfer rates can be very
high. The following figure shows a digital signal.

Fig. Digital Signals

In a computer system, data is generated and received in digital form. In the real world, however,
data often needs to be represented as an analog signal. Devices such as A/D and D/A converters
are used to convert digital data to analog and vice versa.
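The A/D and D/A conversion mentioned above can be sketched as simple quantization. The 8-bit resolution and 5 V reference voltage are illustrative assumptions, not values from the text:

```python
def adc(voltage, v_ref=5.0, bits=8):
    """Quantize an analog voltage in [0, v_ref) to an n-bit integer code (A/D)."""
    levels = 2 ** bits
    code = int(voltage / v_ref * levels)
    return max(0, min(levels - 1, code))   # clamp to the representable range

def dac(code, v_ref=5.0, bits=8):
    """Reconstruct an approximate analog voltage from an integer code (D/A)."""
    return code / (2 ** bits) * v_ref

# Round-tripping loses at most one quantization step (v_ref / 2**bits).
```

This also shows why digital representation is "discrete": the converter can only output one of 256 distinct codes, however smoothly the input voltage varies.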

Transmission Impairment
Signals travel through transmission media, which are not perfect. These imperfections cause
signal impairment, meaning that the signal received is not the same as the signal transmitted.
For analog signals, these impairments introduce various random modifications that degrade the
signal quality.
For digital signals, bit errors are introduced: a binary 1 is transformed into a binary 0 and vice
versa.
The most common impairments are
1. Attenuation
2. Delay distortion
3. Noise

Attenuation
The strength of a signal falls off with distance over any transmission medium. Attenuation
means loss of energy. The amount of energy lost depends on the frequency. When a signal
travels through a medium, it loses some of its energy due to the resistance of the medium.
This is the reason why a wire carrying an electric signal gets warm: some of the electrical energy
in the signal is converted to heat. To compensate for this loss, amplifiers are used to amplify the
signal.

Fig. Attenuation
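Attenuation is conventionally measured in decibels, using the standard engineering formula 10·log10(P_in/P_out) (the formula itself is not stated in the text):

```python
import math

def attenuation_db(p_in, p_out):
    """Signal power loss in decibels between transmitted and received power."""
    return 10 * math.log10(p_in / p_out)

# Halving the power is roughly a 3 dB loss; an amplifier must supply
# a matching gain in dB to restore the original signal strength.
print(round(attenuation_db(10, 5), 2))   # 3.01
```

Because decibels add along a path, the total loss over several cable segments is just the sum of the per-segment losses, which is why the unit is used in practice.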

Delay Distortion
Distortion means a change in the form or shape of the signal. Distortion can occur in a composite
signal made up of different frequencies.
Each signal component has its own propagation speed through a medium, and therefore its own
delay in arriving at the final destination.
Differences in delay may create a difference in phase if the delay is not exactly the same as the
period duration. This effect is referred to as delay distortion, since the received signal is distorted
due to the variable delay of its components.
Delay distortion is particularly critical for digital data.

Fig. Distortion

Noise
For any data transmission event, the received signal may be modified by the various distortions
imposed by the transmission system, plus additional unwanted signals that are inserted
somewhere between transmission and reception. The undesired signals are referred to as noise. It
is the major limiting factor in communications system performance.
Noise may be divided into four categories
a. Thermal Noise
b. Intermodulation Noise
c. Crosstalk
d. Impulse Noise

a. Thermal Noise: Thermal noise is due to the random motion of electrons in a conductor. It is
present in all electronic devices and transmission media and is a function of temperature.
Thermal noise is difficult to eliminate and therefore limits communication system
performance.

Fig. Noise

b. Intermodulation Noise: Intermodulation noise is produced when there is some nonlinearity in
the transmitter, receiver, or intervening transmission system.
When signals of different frequencies share the same transmission medium, the result
may be intermodulation noise, which produces signals at frequencies that are sums or
differences of the original frequencies. For example, the mixing of signals at frequencies
f1 and f2 might produce energy at the frequency f1+f2. This derived signal could
interfere with an intended signal at the frequency f1+f2.

c. Crosstalk: It is the unwanted coupling between signal paths. Crosstalk can occur by
electrical coupling between nearby twisted pair or coax cable lines carrying multiple
signals. Crosstalk can also occur when unwanted signals are picked up by microwave
antennas.

d. Impulse Noise: Impulse noise consists of non-continuous, irregular pulses or noise spikes of
short duration and relatively high amplitude. It can be generated by a variety of causes,
including external electromagnetic disturbances, such as lightning, and faults in the
communication system.

Fig. Effect of noise on a digital signal

The figure above shows the effect of noise on a digital signal. Here the noise consists of a
relatively modest level of thermal noise plus occasional spikes of impulse noise. The noise is
occasionally sufficient to change a 1 to a 0 or a 0 to a 1.

Transmission Media
A communication medium or channel represents the type of channel over which messages are
transmitted. Many communication systems use several different media. Like telephone lines,
there are several types of physical channels through which data can be transmitted from one
location to another.
Internal data transmission refers to the transfer of data within a computer, while external data
transmission refers to the transfer of data to either local peripheral equipment or remote
computers. The choice of a transmission medium depends on factors such as:
a. Cost of the media
b. Bandwidth of the cable
c. Coverage of the network
d. Effect of the atmosphere
e. Data security
f. Maintenance charges and effort, etc.

There are two types of communication channels


1. Physical connection
2. Wireless connection

Fig. Communication Channel

1. Physical Connection: Physical connections use a solid medium to connect sending and
receiving devices. These connections include:
 Twisted Pair
 Coaxial Cable
 Fiber-optic Cable

TWISTED PAIR

 It is the oldest and still the most common transmission medium.
 It consists of two insulated copper wires, twisted together to reduce interference from
adjacent wires.
 It can be used for either analog or digital transmission.
 It can carry signals for some distance without amplification.
 If the thickness of the twisted pair is increased, it can carry more data.
 This transmission method is popular because of its simplicity, good performance, and low
cost.
 Nowadays it is designed to handle data communications at speeds of up to 100 megabits
per second.
 It is used in local telephone communications and in short-distance digital data
transmission.
 Advantages of twisted-pair cabling: it is cheap, reliable, and flexible, has low error rates, is
easy to maintain, and is best suited for small networks.
 Disadvantages of twisted-pair cabling: it is susceptible to noise and should not be routed
near any device with a strong electromagnetic field, such as a radio or television. As the
electrical signals are broadcast on the network, they may be intercepted by stations that are
not actually connected to the network, which may harm the security of the system.

COAXIAL CABLE

 It is a relatively new medium used for data communication.
 It is designed to handle data communications over longer distances at higher speeds.
 It consists of a central copper wire surrounded by PVC insulation, over which there is an
outer shield of braided metal enclosed in a sleeve of thick PVC material. The signal is
transmitted by the inner copper wire and is electrically shielded by the outer metal sleeve.
 It is widely used for cable television transmission.
 Coaxial cable is used in implementing the local area network called Ethernet.
 Advantages of coaxial cable: it has good noise immunity and can offer cleaner, crisper
data transmission without distortion or loss of signal.
 Disadvantages of coaxial cable: higher implementation cost, and, like twisted pair, it is
still somewhat susceptible to noise.

OPTICAL FIBER/ FIBER OPTICS


 It is the modern approach to data transmission and uses advanced optical techniques based
on laser technology.
 It is capable of carrying bulk amounts of data over very long distances.
 This medium consists of an ultra-thin fiber of glass. The data/signals are indicated with the
help of a light source: if the light source turns on, it is treated as a 1, and if the light source
turns off, it is treated as a 0.
 This medium is considered very fast, safe, and secure.
 In areas where copper wire cannot be used (e.g. rainy areas), it provides a very suitable
substitute.
 Data transmitted through this medium cannot easily be intercepted or diverted by
unauthorized sources, so data transmission through this medium is considered very safe.
 It provides high-quality (low error rate) transmission of signals at very high speeds. Digital
transmission speeds of 1 gigabit per second have been achieved with fewer than 1 error in
10^9 bits.
 Fiber-optic transmissions are not affected by electromagnetic interference; hence noise
and distortion are also reduced.
 A disadvantage of fiber optics is that it is extremely difficult and expensive to tap a
fiber-optic cable at various points. This same property, however, provides security against
unauthorized tampering with information.

Comparison of Physical Transmission Media

Sr. No.  Parameter                  Twisted Pair  Coaxial Cable  Fiber Optics
1        Geographical Area          3 km          10-50 km       Above 10 km
2        Bandwidth                  Low           Moderate       Very High
3        Data Transfer Reliability  Low           High           Very High
4        Transmission Security      Low           Low            High
5        Noise Susceptibility       High          Moderate       None
6        Installation               Easiest       Hard           Moderate

WIRELESS CONNECTION
 A wireless connection does not use a solid medium to connect sending and receiving
devices.
 This communication medium does not require any point-to-point wired connection between
sender and receiver.
 It is very advantageous for mobile machines, but can also be used by stationary machines.
 It provides very fast transmission of data.
 In this transmission, the sender generates data waves with the help of a dish antenna. These
waves are transmitted to the receiver, via a satellite, and the receiver receives them with the
help of a compatible dish antenna.
 Wireless communication media may use radio waves, microwaves, infrared waves, light
waves, etc., to transmit the data.
 This medium is a natural choice where very long distances are to be covered, or in areas
where the senders and receivers cannot be connected by wires.
 Wireless media use the air itself and employ the following techniques:
 Microwave
 Satellite

Microwave Communication
 This communication medium is a popular way of transmitting data since it does not incur
the expense of laying cables. It is very advantageous for mobile machines, but can also be
used by stationary machines.
 It provides very fast transmission of data, about 16 giga (1 giga = 10^9) bits per second.
 In this transmission, the data is made to propagate in the form of waves.
 A sender generates data waves with the help of a dish antenna. These waves are transmitted
to the receiver, via a satellite, and the receiver receives them with the help of a compatible
dish antenna.
 Wireless communication media may use radio waves, microwaves, infrared waves, light
waves, etc., to transmit the data.
 This medium is a natural choice where very long distances are to be covered, or in areas
where the sender and receiver cannot be connected by wires.
 The disadvantage of this communication medium is that the transmitter and receiver of a
microwave system, which are mounted on very high towers, must be in line of sight. For
this reason it may not be usable for very long-distance transmission. Moreover, the signals
become weaker after traveling a certain distance and require power amplification.
Consequently, several repeater stations are normally required for long-distance
transmission, which increases the cost of data transmission between two points.

Satellite Communication
Satellite communication is a newer and more promising data transmission medium that
overcomes the problems faced in microwave communication.
A communication satellite is basically a microwave relay station placed in outer space.
These satellites are launched by rockets or space shuttles and are precisely positioned in a
geosynchronous orbit; a satellite in such an orbit is stationary relative to the earth and always
stays over the same point on the ground.
In satellite communication, a microwave signal at 6 GHz is transmitted from a transmitter on
earth to the satellite positioned in space.

An advantage of satellite communication is that a single microwave relay station is visible from
any point of a very large area. The Indian satellite INSAT-1B, for example, is positioned in such
a way that it is accessible from any place in India.
The major drawbacks of satellite communication are:
i. The high cost of placing the satellite into its orbit.
ii. As a signal sent to a satellite is broadcast to all receivers within the satellite's range,
proper security measures must be taken to prevent unauthorized tampering with
information.

Data Encoding:
Encoding is the process of converting data, or a given sequence of characters, symbols,
alphabets, etc., into a specified format for the secure transmission of data. Decoding is the
reverse process of encoding: extracting the information from the converted format.
On a transmission link, encoding is the process of using various patterns of voltage or current
levels to represent the 1s and 0s of a digital signal.
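As a minimal sketch, NRZ-L (mentioned later in this unit) maps each bit directly to a voltage level held for the whole bit time. The convention used here (0 high, 1 low) is one common textbook convention, and the ±5 V levels are illustrative:

```python
def nrz_l(bits, high=5.0, low=-5.0):
    """NRZ-L encoding: binary 0 -> high level, binary 1 -> low level,
    held constant for the whole bit duration."""
    return [high if b == 0 else low for b in bits]

print(nrz_l([0, 1, 1, 0]))   # [5.0, -5.0, -5.0, 5.0]
```

Note that a long run of identical bits produces a constant level with no transitions, which is exactly the clock-recovery problem that later encodings such as Manchester address.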

Digital and Analog Data


Digital Data

Digital data, in information theory and information systems, is a discrete, discontinuous
representation of information. Numbers and letters are commonly used representations.
Digital data can be contrasted with analog data, which behaves in a continuous manner and
corresponds to continuous functions such as sounds, images, and other measurements.
The word digital comes from the same source as the word digit. The term is most commonly
used in computing and electronics, especially where real-world information is converted to
binary numeric form, as in digital audio and digital photography.

Digital data is a binary language. When you press a key on the keyboard, an electrical circuit is
closed. The circuit acts like a switch and has only two possible options: open or closed. If you
know Morse code, the idea is the same. A string of dashes and dots represents one letter or
number. This is binary. There is no halfway or in-between. The status of the switch as open or
closed is interpreted by the computer as a 0 or 1. Each digit is known as a bit.

Analog data
Analog data is data that is represented in a physical way. Where digital data is a set of individual
symbols, analog data is stored in physical media, whether that's the surface grooves on a vinyl
record, the magnetic tape of a VCR cassette, or other non-digital media.

One of the big ideas behind today's quickly developing tech world is that much of the world's
natural phenomena can be translated into digital text, image, video, sound, etc. For example,
physical movements of objects can be modeled in a spatial simulation, and real-time audio and
video can be captured using a range of systems and devices.
Analog data may also be known as organic data or real-world data.

Signals:
Digital and Analog Signal.

Digital Signal
A digital signal uses discrete on/off states representing a binary format: off is 0, on is 1.
A digital signal uses square waves.
Digital recording first transforms the analog waves into a limited set of numbers and then
records them as digital square waves.
Digital transmission is easy and can be made noise-proof, with no loss at all.
Digital signal hardware can be easily modified as per requirements.
Digital transmission needs more bandwidth to carry the same information.
Digital data is stored in the form of bits.
A digital signal needs less power compared to its analog counterpart.
Digital signals are good for computing and digital electronics.
Digital signal systems are costly.
Examples of digital signals: computers, CDs, DVDs.

Analog Signal
An analog signal uses continuous signals with varying magnitude.
An analog signal uses sine waves.
Analog recording captures the physical waveforms as they are originally generated.
Analog signals are affected badly by noise during transmission.
Analog signal hardware is not flexible.
Analog transmission requires less bandwidth.
Analog data is stored in the form of waveform signals.
Analog signal systems consume more power than digital systems.
Analog signals are good for audio/video recordings.
Analog signal systems are cheap.
Examples of analog signals: analog electronics, voice radio using AM.

Digital Data Communication: Asynchronous and Synchronous Transmission

When data is exchanged between two devices linked by a transmission medium, a high degree of
cooperation is required.
Data is transmitted one bit at a time over the medium. The timing (rate, duration, spacing) of these bits
must be the same for transmitter and receiver. Two common techniques for controlling this timing are
asynchronous and synchronous transmission.

Asynchronous Transmission
The approach with this scheme is to avoid the timing problem by not sending long, uninterrupted streams
of bits. Data is transmitted one character at a time, where each character is five to eight bits in length.
Timing or synchronization must be maintained only within each character; the receiver has the
opportunity to resynchronize at the beginning of each new character. This technique is shown in the figure.

Fig. Asynchronous Transmission
The line between transmitter and receiver is in an idle state when no character is being transmitted. The
term idle is equivalent to the signaling element for binary 1. Thus, for NRZ-L signaling, which is common
for asynchronous transmission, idle is the presence of a negative voltage on the line.

A start bit with a value of binary 0 is present at the beginning of a character. It is followed by the 5 to 8
bits that actually make up the character. The bits of the character are transmitted beginning with the least
significant bit (LSB).

For example, for IRA (International Reference Alphabet) characters, the data bits are usually followed by
a parity bit, which therefore is in the most significant bit (MSB) position.

The parity bit is set by the transmitter such that the total number of ones in the character, including the
parity bit, is even (even parity) or odd (odd parity), depending on the convention being used.

The receiver uses this bit for error detection. The final element is a stop element, which is a binary 1. The
minimum length of the stop element is usually 1, 1.5, or 2 times the duration of an ordinary bit.

No maximum value is specified. Because the stop element is the same as the idle state, the transmitter
simply continues to transmit the stop element until it is ready to send the next character. The timing
requirements for this scheme are modest.
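The character framing described above (start bit, data bits LSB first, parity, stop element) can be sketched as follows, using the ASCII G example that appears later in this unit:

```python
def async_frame(char_code, n_bits=7, parity='odd'):
    """Build an asynchronous character frame: start bit (0), data bits
    transmitted LSB first, a parity bit, and a stop element (1)."""
    data = [(char_code >> i) & 1 for i in range(n_bits)]   # LSB first
    ones = sum(data)
    p = ones % 2 if parity == 'even' else (ones + 1) % 2   # make total even/odd
    return [0] + data + [p] + [1]

# ASCII G = 71: the data bits in transmission order are 1110001;
# odd parity appends a 1, giving 11100011 between start and stop bits.
frame = async_frame(71)
```

The stop element here is a single bit; a real transmitter may hold the line at 1 (idle) for longer, which is why only a minimum stop length is specified.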

Synchronous Transmission
With synchronous transmission, a block of bits is transmitted in a steady stream without start and stop
codes.
The bit stream is combined into longer “frames”, which may contain multiple bytes. To prevent timing
drift between transmitter and receiver, their clocks must somehow be synchronized.

One possibility is to provide a separate clock line between transmitter and receiver: one side pulses the
line regularly with one short pulse per bit time, and the other side uses these pulses as a clock. This
technique works well over short distances, but over longer distances the clock pulses are subject to
impairments and timing errors can occur.

The other alternative is to embed the clocking information in the data signal. For digital signals, this can
be accomplished with Manchester or differential Manchester encoding.
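A minimal sketch of Manchester encoding, which embeds the clock by placing a transition in the middle of every bit period (the 1 → low-to-high convention assumed here is one common choice; some standards use the opposite):

```python
def manchester(bits):
    """Manchester encoding: each bit becomes two half-bit levels with a
    guaranteed mid-bit transition, so the receiver can recover the clock."""
    out = []
    for b in bits:
        out += [0, 1] if b == 1 else [1, 0]   # 1: low->high, 0: high->low
    return out

encoded = manchester([1, 0, 1, 1])
# Every bit produces a transition, regardless of the data pattern.
```

Because a transition occurs in every bit period even for long runs of identical bits, the receiver's clock can never drift far from the transmitter's.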

For analog signals, a number of techniques can be used; for example, the carrier frequency itself can be
used to synchronize the receiver based on the phase of the carrier.

With synchronous transmission, there is another level of synchronization required, to allow the receiver to
determine the beginning and end of a block of data. Each block begins with a preamble bit pattern and
ends with a postamble bit pattern.

In addition, other bits are added to the block that convey control information used in the data link control.
The data plus preamble, postamble, and control information are called a frame. The exact format of the
frame depends on which data link control procedure is being used.

Fig. Synchronous transmission

The figure above shows a frame format for synchronous transmission. The frame starts with a preamble
called a flag, which is 8 bits long. The same flag is used as a postamble. The receiver looks for the
occurrence of the flag pattern to signal the start of a frame. This is followed by some number of control
fields, then a data field, and finally the flag is repeated.

For sizable blocks of data, synchronous transmission is far more efficient than asynchronous.
Asynchronous transmission requires 20% or more overhead.
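The overhead claim can be checked with simple arithmetic. For asynchronous transmission with one start and one stop bit around 8 data bits, 2 of every 10 bits are overhead; the 48-bit control figure for the synchronous frame below is an illustrative assumption, not a value from the text:

```python
def async_overhead(data_bits=8, start_bits=1, stop_bits=1):
    """Fraction of transmitted bits that are framing overhead, per character."""
    total = data_bits + start_bits + stop_bits
    return (start_bits + stop_bits) / total

def sync_overhead(data_bits, control_bits=48):
    """Overhead fraction for one synchronous frame (flags + control fields)."""
    return control_bits / (data_bits + control_bits)

print(async_overhead())      # 0.2  (20%, matching the text)
print(sync_overhead(8000))   # well under 1% for a 1000-byte data field
```

The comparison shows why synchronous transmission wins for sizable blocks: its fixed per-frame overhead is amortized over the whole data field, while asynchronous overhead is paid on every character.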

Error Detection Technique


During transmission errors can occur, resulting in the change of one or more bits in a
transmitted frame. Data is transmitted as one or more contiguous sequences of bits, called
frames. We define the following probabilities with respect to errors in transmitted frames:
Pb: Probability that a bit is received in error; also known as the bit error rate (BER)
P1: Probability that a frame arrives with no bit errors
P2: Probability that, with an error-detecting algorithm in use, a frame arrives with one or more
undetected errors
P3: Probability that, with an error-detecting algorithm in use, a frame arrives with one or more
detected bit errors but no undetected bit errors

First consider the case in which no means are taken to detect errors. Then the probability of
detected errors (P3) is zero. To express the remaining probabilities, assume that the probability
that any bit is in error (Pb) is constant and independent for each bit. Then we have

P1 = (1 - Pb)^F
P2 = 1 - P1

where F is the number of bits per frame. In words, the probability that a frame arrives with no
bit errors decreases as the probability of a single bit error increases, as you would expect. Also,
the probability that a frame arrives with no bit errors decreases with increasing frame length: the
longer the frame, the more bits it has and hence the higher the probability that one of them is in
error.
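These probabilities are easy to compute for the no-error-detection case. A sketch, using an illustrative bit error rate of 10^-6:

```python
def frame_probabilities(bit_error_rate, frame_bits):
    """With no error detection in use: P1 = probability a frame arrives with
    no bit errors, P2 = probability it arrives with at least one error."""
    p1 = (1 - bit_error_rate) ** frame_bits
    p2 = 1 - p1
    return p1, p2

# Longer frames are more likely to contain at least one bit error.
p1_short, _ = frame_probabilities(1e-6, 1000)    # 1000-bit frame
p1_long, _ = frame_probabilities(1e-6, 10000)    # 10000-bit frame: lower P1
```

Running this confirms the statement in the text: p1_long is smaller than p1_short, because every additional bit is another chance for an error.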

There are two error-detecting techniques:

1. Parity Bit
2. Cyclic Redundancy Check

1. Parity Bit: One of the simplest error-detection schemes is to add a parity bit to the end of a
block of data. A common example is ASCII transmission, in which a parity bit is attached to
each 7-bit ASCII character. The value of this bit is selected so that the character has an even
number of 1s (even parity) or an odd number of 1s (odd parity). So, for example, if the
transmitter is transmitting an ASCII G (1110001) and using odd parity, it will append a 1
and transmit 11100011.
The receiver examines the received character and, if the total number of 1s is odd, assumes
that no error has occurred. If one bit (or any odd number of bits) is incorrectly inverted
during transmission (for example, 11000011), then the receiver will detect an error. Note
that if two bits are inverted due to error, an undetected error occurs. Typically, even parity
is used for synchronous transmission and odd parity for asynchronous transmission.
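The ASCII G example can be traced in code. The sketch below computes an odd-parity bit and shows that a single-bit error is detected while a double-bit error slips through:

```python
def parity_bit(data_bits, parity='odd'):
    """Choose the parity bit so the total number of 1s is even or odd."""
    ones = sum(data_bits)
    return ones % 2 if parity == 'even' else (ones + 1) % 2

def parity_ok(received_bits, parity='odd'):
    """True if the received character (data + parity) passes the check."""
    ones = sum(received_bits)
    return ones % 2 == (0 if parity == 'even' else 1)

g = [1, 1, 1, 0, 0, 0, 1]                # ASCII G, bits in transmission order
sent = g + [parity_bit(g)]               # 11100011, as in the text
one_flip = [1 - sent[0]] + sent[1:]      # single-bit error: detected
two_flips = [1 - sent[0], 1 - sent[1]] + sent[2:]   # double error: undetected
```

Any odd number of inversions changes the count of 1s from odd to even and is caught; any even number restores the expected parity and goes unnoticed, which is why parity alone is a weak check.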

2. Cyclic Redundancy Check (CRC): One of the most powerful error-detecting codes is the
cyclic redundancy check (CRC), which can be described as follows. Given a k-bit block of
data, or message, the transmitter generates an (n-k)-bit sequence, known as a frame check
sequence (FCS), such that the resulting frame, consisting of n bits, is exactly divisible by
some predetermined number.
The receiver then divides the incoming frame by that number and, if there is no remainder,
assumes that there was no error.
The CRC process can be described using three equivalent approaches:
a) Modulo-2 arithmetic
b) Polynomials
c) Digital logic

a) Modulo-2 Arithmetic: Modulo-2 arithmetic uses binary addition with no carries, which is
just the exclusive-OR (XOR) operation. Binary subtraction with no carries is also
interpreted as the XOR operation. For example:

      1111        1111        11001
    + 1010      - 0101        x  11
    ------      ------        -----
      0101        1010        11001
                             11001
                            ------
                            101011

Now define
T = n-bit frame to be transmitted
D = k-bit block of data, or message; the first k bits of T
F = (n-k)-bit FCS; the last (n-k) bits of T
P = pattern of n-k+1 bits; this is the predetermined divisor
We would like T/P to have no remainder. It should be clear that
T = 2^(n-k) D + F
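The modulo-2 division that produces the FCS can be sketched directly. The data block D = 1010001101 and divisor P = 110101 below are illustrative values (a common textbook example), not drawn from this text:

```python
def crc_fcs(data_bits, divisor_bits):
    """Compute the (n-k)-bit FCS: the modulo-2 remainder of
    2^(n-k) * D divided by the pattern P."""
    n_fcs = len(divisor_bits) - 1
    reg = data_bits + [0] * n_fcs            # append n-k zeros: 2^(n-k) * D
    for i in range(len(data_bits)):
        if reg[i] == 1:                      # mod-2 subtraction is just XOR
            for j, d in enumerate(divisor_bits):
                reg[i + j] ^= d
    return reg[-n_fcs:]

D = [1, 0, 1, 0, 0, 0, 1, 1, 0, 1]   # k = 10 data bits
P = [1, 1, 0, 1, 0, 1]               # n-k+1 = 6-bit divisor
fcs = crc_fcs(D, P)                  # transmitted frame T is D followed by fcs
```

The transmitter sends D followed by the computed FCS; because T = 2^(n-k)·D + F was constructed to be divisible by P, the receiver repeats the same division and accepts the frame only when the remainder is zero.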

b) Polynomials: A second way of viewing the CRC process is to express all values as
polynomials in a dummy variable X, with binary coefficients. The coefficients
correspond to the bits in the binary number.
c) Digital Logic: The CRC process can also be represented by digital logic circuits. These
circuits consist of XOR gates and a shift register. The shift register is a string of 1-bit
storage devices; each device has an output line, which indicates the value currently
stored, and an input line.
At discrete time instants, known as clock times, the value in the storage device is
replaced by the value indicated by its input line. The entire register is clocked
simultaneously, causing a 1-bit shift along the entire register.

Interfacing
Most digital data processing devices have limited data transmission capability. They generate a
simple digital signal, such as NRZ-L, and the distance across which they can transmit data is
limited. It is rare for such a device to attach directly to a transmission or networking facility.
Such devices, which include terminals and computers, are generically referred to as data
terminal equipment (DTE).
A DTE makes use of the transmission system through the mediation of data circuit-terminating
equipment (DCE). A common example of a DCE is a modem.

An interface has four important characteristics:

i. Mechanical
ii. Electrical
iii. Functional
iv. Procedural

i. Mechanical characteristics: These characteristics are related to the actual physical connection
of the DTE to the DCE. The signal and control interchange circuits are bundled into a cable with
a terminator plug, male or female, at each end.

ii. Electrical characteristics: Electrical characteristics are related to the voltage levels and
timing of voltage changes. Both DTE and DCE must use the same code, must use the same
voltage levels to mean the same things, and must use the same duration of signal elements. These
characteristics determine the data rates and distances that can be achieved.

iii. Functional characteristics: These characteristics specify the functions that are performed by
assigning meanings to each of the interchange circuits. The functions can be classified into the
broad categories of data, control, timing, and electrical ground.

iv. Procedural characteristics: These characteristics specify the sequence of events for
transmitting data, based on the functional characteristics of the interface. A variety of standards
for interfacing exist. Two of the most important are EIA-232-D and the ISDN physical
interface.

ISDN Physical Interface


The X.21 standard for interfacing to public circuit-switched networks specifies a 15-pin connector. This trend has been carried further with the specification of an 8-pin physical connector for the Integrated Services Digital Network (ISDN), an all-digital replacement for existing public telephone and analog telecommunications networks.

i. Physical connection: In ISDN terminology, a physical connection is made between terminal equipment (TE) and network-terminating equipment (NT). These terms correspond closely to DTE and DCE, respectively.

Fig. ISDN Interface
The physical connection, defined in ISO (International Organization for Standardization) 8877, specifies that the NT and TE cables shall terminate in matching plugs that provide for 8 contacts. The figure shows the contact assignments for each of the 8 lines on both the NT and TE sides. Two pins are used to provide data transmission in each direction. These contact points are used to connect twisted-pair leads coming from the NT and TE devices. Because there are no specific functional circuits, the transmit and receive circuits are used to carry both data and control signals. The control information is transmitted in the form of messages.

The specification provides for the capability to transfer power across the interface. This power
transfer can be accomplished by using the same leads used for digital signal transmission
(c,d,e,f) or on additional wires, using access leads g-h. The remaining two leads are not used in
the ISDN configuration but may be useful in other configurations.

ii. Electrical Specification: The ISDN electrical specification dictates the use of balanced
transmission. With balanced transmission, signals are carried on a line, such as twisted pair,
consisting of two conductors. Signals are transmitted as a current that travels down one
conductor and returns on the other, the two conductors forming a complete circuit.

For digital signals, this technique is known as differential signaling as the binary value depends
on the direction of the voltage difference between the two conductors. Unbalanced transmission,
which is used on older interfaces such as EIA-232, uses a single conductor to carry the signal,
with ground providing the return path.

Data Link Controls:


To achieve the necessary control, a layer of logic is added above the physical layer. This logic is
referred to as data link control or a data link control protocol. When a data link control protocol
is used, the transmission medium between systems is referred to as a data link.

Line Configuration
Two characteristics that distinguish various data link configurations are topology and duplexity, which describes whether the link is half duplex or full duplex.

Topology and Duplicity
The topology of a data link refers to the physical arrangement of stations on a transmission
medium. If there are only two stations (e.g. a terminal and a computer or two computers), the
link is point to point. If there are more than two stations, then it is a multipoint topology.

Traditionally, a multipoint link has been used in the case of a computer (primary station) and a
set of terminals (secondary stations). The multipoint topology is found in local area networks.

Fig. Traditional Computer/Terminal Configurations

Traditional multipoint topologies are made possible when the terminals are only transmitting a
fraction of the time.
The above figure shows the advantage of the multipoint configuration. If each terminal has a point-to-point link to its computer, then the computer must have one I/O port for each terminal, and there must be a separate transmission line from the computer to each terminal.
In a multipoint configuration, the computer needs only a single I/O port and a single transmission
line which saves costs.

Full Duplex and Half Duplex


Data exchange over a transmission line can be classified as full duplex or half duplex. With half-
duplex transmission, only one of two stations on a point-to-point link may transmit at a time.
This mode is also referred to as two-way alternate, suggestive of the fact that two stations must
alternate in transmitting. This can be compared to a one-lane, two-way bridge. This form of
transmission is often used for terminal-to-computer interaction.

For full duplex transmission, two stations can simultaneously send and receive data from each
other. Thus, this mode is known as two-way simultaneous and may be compared to a two-lane,
two-way bridge. For computer-to-computer data exchange, this form of transmission is more
efficient than half-duplex transmission.

Flow Control
Flow control is a technique for assuring that a transmitting entity does not overwhelm a receiving
entity with data. The receiving entity typically allocates a data buffer of some maximum length
for a transfer. When data are received, the receiver must do a certain amount of processing

before passing the data to the higher-level software. In the absence of flow control, the receiver's buffer may fill up and overflow while it is processing old data.

The mechanism for flow control in the absence of errors is examined with the model shown in the figure, a vertical-time sequence diagram. It has the advantage of showing time dependencies and the correct send-receive relationship.

Each arrow represents a single frame transiting a data link between two stations. The data are
sent in a sequence of frames, with each frame containing a portion of the data and some control
information. The time it takes for a station to emit all of the bits of a frame onto the medium is
the transmission time; this is proportional to the length of the frame.

The propagation time is the time it takes for a bit to traverse the link between source and
destination. All frames that are transmitted are successfully received; no frames are lost and none
arrive with errors. Furthermore, frames arrive in the same order in which they are sent. However,
each transmitted frame suffers an arbitrary and variable amount of delay before reception.

Fig. Model of Frame Transmission

Stop-and-wait flow control


The simplest form of flow control is known as stop-and wait flow control. A source entity
transmits a frame. When the destination entity receives the frame, it indicates its willingness to
accept another frame by sending back an acknowledgment to the frame just received. The source
must wait until it receives the acknowledgement before sending the next frame. The destination
can thus stop the flow of data simply by withholding acknowledgement. This procedure works well when a message is sent in a few large frames. Often, however, a source breaks a large block of data into many smaller frames, for the following reasons:
1. The buffer size of the receiver may be limited
2. The longer the transmission, the more likely that there will be an error, necessitating
retransmission of the entire frame. With smaller frames, errors are detected sooner and a
smaller amount of data needs to be retransmitted.
3. On a shared medium, such as a LAN, it is usually desirable not to permit one station to occupy the medium for an extended period, as this causes long delays at the other stations.

With the use of multiple frames for a single message, the stop-and-wait procedure may be
insufficient. The essence of the problem is that only one frame at a time can be in transit.

Fig. Stop-and-wait link utilization (transmission time=1, propagation time=a)

In situations where the bit length of the link is greater than the frame length, serious inefficiencies result. In the figure, the transmission time is normalized to one, and the propagation delay is expressed as the variable a. When a is less than 1, the propagation time is less than the
transmission time. In this case, the frame is sufficiently long that the first bits of the frame have
arrived at the destination before the source has completed the transmission of the frame.

When a is greater than 1, the propagation time is greater than the transmission time. In this case,
the sender completes transmission of the entire frame before the leading bits of that frame arrive
at the receiver.
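This inefficiency can be quantified. Under the normalization used in the figure (transmission time = 1, propagation time = a, ACK transmission time neglected), the link utilization of stop-and-wait is 1/(1 + 2a): each cycle costs one unit of transmission plus a for the frame to propagate and a for the acknowledgement to return. A minimal illustrative sketch (the function name is invented for the example):

```python
def stop_and_wait_utilization(a):
    """Fraction of time the link carries new data under stop-and-wait,
    with transmission time normalized to 1 and propagation time a.
    Each cycle costs 1 (send frame) + a (frame propagates) + a (ACK
    returns), of which only 1 unit is useful transmission."""
    return 1 / (1 + 2 * a)

print(stop_and_wait_utilization(0.5))  # 0.5   (a < 1: link reasonably used)
print(stop_and_wait_utilization(10))   # ~0.048 (a > 1: link mostly idle)
```

Note how quickly utilization collapses once the propagation delay exceeds the frame transmission time, which motivates the sliding-window approach described next in the text.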

Sliding Window Protocol


The problem with stop-and-wait is that only one frame can be in transit at a time. In situations where the bit length of the link is greater than the frame length (a > 1), serious inefficiencies result. Efficiency can be improved by allowing multiple frames to be in
transit at the same time.
The working of this for two stations, A and B, connected via a full-duplex link is described as
follows: Station B allocates buffer space for W frames. Thus, B can accept W frames without
waiting for any acknowledgments. To keep track of which frames have been acknowledged, each
is labeled with a sequence number. B acknowledges a frame by sending an acknowledgement
that includes the sequence number of the next frame expected. This acknowledgement also
implicitly announces that B is prepared to receive the next W frames, beginning with the number
specified.
This scheme can also be used to acknowledge multiple frames. For example, B could receive
frames 2, 3 and 4 but withhold acknowledgement until frame 4 has arrived. By then returning an
acknowledgment with sequence number 5, B acknowledges frames 2, 3, and 4 at one time.

A maintains a list of sequence numbers that it is allowed to send, and B maintains a list of
sequence numbers that it is prepared to receive. Each of these lists can be thought of as a window
of frames. The operation is referred to as sliding-window flow control.
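The cumulative-acknowledgement idea described above can be sketched in a few lines. This is an illustrative sketch, not part of any real protocol implementation; the function name is invented here:

```python
def cumulative_ack(received, window_start):
    """Return the acknowledgement number B sends: the sequence number of
    the next frame expected. Acknowledging with number n implicitly
    acknowledges every outstanding frame below n at once."""
    seq = window_start
    while seq in received:
        seq += 1
    return seq

# B has received frames 2, 3 and 4 but withheld acknowledgement until now.
# Returning ACK 5 acknowledges frames 2, 3 and 4 in a single message.
print(cumulative_ack({2, 3, 4}, window_start=2))  # 5
```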

Fig. Sliding-Window Depiction

Error Controls
When data-frame is transmitted there are following two probabilities,
1. Data-frame may be lost in the transit or
2. It is received in corrupted form
In both cases, the receiver does not receive the correct data-frame and the sender knows nothing about the loss. To handle these cases, sender and receiver are equipped with protocols that help them detect transit errors such as frame loss. Requirements for an error control mechanism:
a. Error detection: The sender and receiver, either both or one, must ascertain that there has been an error in transit.
b. Positive ACK: When the receiver receives a correct frame, it should acknowledge it.
c. Negative ACK: When the receiver receives a damaged or duplicate frame, it sends a NACK back to the sender, and the sender must retransmit the correct frame.
d. Retransmission: The sender maintains a clock and sets a timeout period. If an acknowledgement of a previously transmitted data-frame does not arrive within the timeout period, the sender retransmits the frame, assuming that the frame or its acknowledgement was lost in transit.

Acknowledgement Techniques
The data link layer may deploy three techniques to control errors through Automatic Repeat Request (ARQ):
1. Stop-and-Wait ARQ

Fig. Stop and wait ARQ
The following transitions may occur in stop-and-wait ARQ:
 The sender maintains a timeout counter.
 When a frame is sent the sender starts the timeout counter.
 If acknowledgment of frame comes in time, the sender transmits the next frame in
queue.
 If acknowledgement does not come in time, the sender assumes that either the
frame or its acknowledgement is lost in transit. Sender retransmits the frame and
starts the timeout counter.
 If a negative acknowledgment is received, the sender retransmits the frame.
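The transitions listed above can be expressed as a small sender-side simulation. This is an illustrative sketch only; the event labels ("ACK", "NAK", "LOST") and the function name are assumptions made for the example:

```python
def stop_and_wait_sender(frames, events):
    """Simulate the stop-and-wait ARQ sender. events[i] describes the
    outcome of transmission attempt i, in order: 'ACK' (acknowledged in
    time), 'NAK' (negative acknowledgement), or 'LOST' (timeout fires
    because the frame or its acknowledgement was lost)."""
    log, i, ev = [], 0, iter(events)
    while i < len(frames):
        log.append(("send", frames[i]))   # (re)start the timeout counter
        outcome = next(ev)
        if outcome == "ACK":
            i += 1                        # send the next frame in queue
        # on 'NAK' or 'LOST', the loop retransmits the same frame

    return log

# F0 is lost once and retransmitted; F1 succeeds on the first attempt.
print(stop_and_wait_sender(["F0", "F1"], ["LOST", "ACK", "ACK"]))
```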

2. Go-Back-N ARQ
The stop-and-wait ARQ mechanism does not utilize resources at their best: while waiting for the acknowledgment, the sender sits idle and does nothing. In the Go-Back-N ARQ method, both sender and receiver maintain a window.
The sending-window size enables the sender to send multiple frames without receiving the acknowledgement of the previous ones. The receiving window enables the receiver to receive multiple frames and acknowledge them. The receiver keeps track of the incoming frames' sequence numbers.

Fig. Go-Back-N ARQ

When the sender has sent all the frames in the window, it checks up to what sequence number it has received positive ACKs. If all frames are positively acknowledged, the sender sends the next set of frames. If the sender finds that it has received a NACK, or has not received any ACK for a particular frame, it retransmits all frames starting from the one that was not positively acknowledged.
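The go-back behaviour can be sketched as a simulation of the sender's transmissions. This is an illustrative sketch under simplifying assumptions (a retransmitted frame is always delivered, and ACKs themselves are never lost):

```python
def go_back_n(n_frames, lost_once, window=4):
    """Simulate Go-Back-N: frames 0..n_frames-1 are sent a window at a
    time; lost_once is the set of frames whose first transmission is
    lost. The receiver discards everything after the first lost frame,
    so the sender goes back and resends from that frame onward."""
    transmissions, base = [], 0
    while base < n_frames:
        end = min(base + window, n_frames)
        batch = list(range(base, end))
        transmissions.extend(batch)        # send the whole window
        new_base = end
        for seq in batch:                  # receiver accepts in-order only
            if seq in lost_once:
                lost_once.discard(seq)     # its retransmission will succeed
                new_base = seq             # go back: resend from here
                break
        base = new_base
    return transmissions

# Frame 1 is lost, so frames 1, 2 and 3 are all retransmitted.
print(go_back_n(4, {1}))  # [0, 1, 2, 3, 1, 2, 3]
```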

3. Selective Repeat ARQ


In Go-Back-N ARQ, it is assumed that the receiver does not have any buffer space for its window and has to process each frame as it comes. This forces the sender to retransmit all the frames which are not acknowledged.
In Selective Repeat ARQ, the receiver, while keeping track of sequence numbers, buffers the frames in memory and sends a NACK only for the frame which is missing or damaged.
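Contrasting with Go-Back-N, here is a minimal sketch of the Selective Repeat retransmission pattern, under the same simplifying assumptions (a NACK'd frame's retransmission always succeeds; the function name is invented for the example):

```python
def selective_repeat(n_frames, lost_once):
    """Simulate the Selective Repeat transmission pattern: the receiver
    buffers out-of-order frames and NACKs only the missing ones, so the
    sender retransmits just those frames, not everything after them."""
    transmissions = list(range(n_frames))   # first pass: all frames sent
    retransmissions = sorted(lost_once)     # only NACK'd frames are resent
    return transmissions + retransmissions

# Frame 1 is lost: only frame 1 is resent (compare Go-Back-N, which
# would also resend frames 2 and 3).
print(selective_repeat(4, {1}))  # [0, 1, 2, 3, 1]
```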

Fig. Selective Repeat ARQ

Data Link Control Protocols


The most important Data link control protocol is HDLC (High Level Data Link Control). HDLC
is a bit oriented protocol for communication over point-to-point and multipoint links. It
implements the ARQ mechanisms. HDLC provides two common transfer modes:
1. Point-to-Point: When two devices are connected by a pair of wires or using a microwave
or satellite link.
2. Multipoint: When two or more devices share the same link.

Basic Characteristics
1. Three types of stations
2. Two link configurations
3. Three data transfer modes of operation

1. Three types of stations


 Primary station: Responsible for controlling the operation of the link. Frames
issued by the primary are called commands.
 Secondary station: operates under the control of the primary station. Frames
issued by a secondary are called responses. The primary maintains a separate
logical link with each secondary station on the line.
 Combined station: combines the features of primary and secondary. A combined
station may issue both commands and responses.
2. The two link configurations are
 Unbalanced configuration: Consists of one primary and one or more secondary
stations and supports both full-duplex and half duplex transmission.
 Balanced configuration: consists of two combined stations and supports both full-
duplex and half-duplex transmission.

3. The three data transfer modes are


 Normal response mode (NRM): NRM is used on multidrop lines, in which a
number of terminals are connected to a host computer. The computer polls each
terminal for input. NRM is also sometimes used on point-to-point links,
particularly if the link connects a terminal or other peripheral to a computer.
 Asynchronous balanced mode (ABM): Used with a balanced configuration. Either
combined station may initiate transmission without receiving permission from the
other combined station.
 Asynchronous response mode (ARM): ARM is rarely used; it is applicable to
some special situations in which a secondary may need to initiate transmission.
Frame Structure
HDLC uses synchronous transmission. All transmissions are in the form of frames, and a single
frame format suffices for all types of data and control exchanges. Figure explains the structure of
the HDLC frame.

Fig. HDLC frame structure


The flag, address, and control fields that precede the information field are known as a header.
The FCS and flag fields following the data field are referred to as a trailer.

1. Flag Fields : Flag fields delimit the frame at both ends with the unique pattern 0111 1110. A
single flag may be used as the closing flag for one frame and the opening flag for the next. On
both sides of the user-network interface, receivers are continuously looking for the flag sequence
to synchronize on the start of a frame. While receiving a frame, a station continues to look for
that sequence to determine the end of the frame.

To avoid this synchronization problem, a procedure known as bit stuffing is used. For all bits between the starting and ending flags, the transmitter inserts an extra 0 bit after each occurrence of five 1s in the frame.
With the use of bit stuffing, arbitrary bit patterns can be inserted into the data field of the frame. This property is known as data transparency.
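The bit-stuffing rule described above is easy to express in code. A minimal sketch, with the bit stream represented as a string of '0'/'1' characters for readability:

```python
def bit_stuff(bits):
    """Transmitter side: insert a 0 after every run of five consecutive
    1s, so user data can never be mistaken for the 01111110 flag."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits):
    """Receiver side: remove the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed 0 - drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

print(bit_stuff("0111111"))  # 01111101 - a 0 is stuffed after five 1s
```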

2. Address Field: The address field identifies the secondary station that is to receive the frame.
This field is not needed for point-to-point links, but is always included for the sake of uniformity.
The address field is usually eight bits long but, by prior agreement, an extended format may be
used in which the actual address length is a multiple of seven bits.
3. Control Field: HDLC defines three types of frames, each having a different control field format. Information frames carry the data to be transmitted for the user. Additionally, flow and error control data, using the ARQ mechanism, are piggybacked on an information frame. Supervisory frames provide the ARQ mechanism when piggybacking is not used. Unnumbered frames provide supplemental link control functions. The first one or two bits of the control field serve to identify the frame type. The remaining bit positions are organized into subfields as indicated in the figure.

Multiplexing:
Multiplexing is the process of combining the transmissions from several devices, character by character, into a single stream that can be sent over a single communication channel. A multiplexer is a device that performs multiplexing. It is also used at the receiving end to separate the transmissions and deliver them in their original order for processing.
Multiplexers are efficient and inexpensive, and allow a communication channel to carry much more data at any one time than a single device could send. The equipment that multiplexes and demultiplexes is sometimes called a multiplexer or mux.

Fig. Multiplexing

In the above figure, there are n inputs to a multiplexer. The multiplexer is connected by a single
data link to a de-multiplexer. The link is to carry n separate channels of data.
The multiplexer combines(multiplexes) data from the n input lines and transmits over a higher
capacity data link. The de-multiplexer accepts the multiplexed data stream separates
(demultiplexes) the data according to channel, and delivers them to the appropriate output lines.

Frequency Division Multiplexes


Frequency Division Multiplexing (FDM) uses separate frequencies to establish multiple channels
within a broadband medium. FDM is possible when the useful bandwidth of the transmission
medium exceeds the required bandwidth of signals to be transmitted.

A number of signals can be carried simultaneously if each signal is modulated onto a different
carrier frequency and the carrier frequencies are sufficiently separated such that the bandwidths
of the signals do not overlap. A general case of FDM is shown in figure.

Fig. Frequency-division multiplexing

Six signal sources are fed into a multiplexer, which modulates each signal onto a different
frequency (f1,…..f6). Each modulated signal requires a certain bandwidth centered around its
carrier frequency, referred to as a channel. To prevent interference, the channels are separated by
guard bands, which are unused portions of the spectrum.

The composite signal transmitted across the medium is analog. In the case of digital input, the
input signals must be passed through modems to be converted to analog. In either case, each
input analog signal must then be modulated to move it to the appropriate frequency band. An
example of FDM is broadcast and cable television. The television signal fits comfortably into a
6-MHz bandwidth.
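The channel layout described above (signal bandwidth plus guard bands) can be sketched numerically. The figures below (6-MHz channels, 0.25-MHz guard bands, a 54-MHz base frequency) are illustrative values in the spirit of the television example, not a real frequency plan:

```python
def fdm_carriers(n_channels, signal_bw, guard_bw, base_freq):
    """Centre (carrier) frequency of each FDM channel: channels of width
    signal_bw are placed side by side, separated by unused guard bands
    of width guard_bw to prevent interference. Frequencies in MHz."""
    carriers, f = [], base_freq
    for _ in range(n_channels):
        carriers.append(f + signal_bw / 2)   # carrier sits mid-channel
        f += signal_bw + guard_bw            # skip over the guard band
    return carriers

print(fdm_carriers(6, 6.0, 0.25, 54.0))
# [57.0, 63.25, 69.5, 75.75, 82.0, 88.25]
```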

Synchronous Time Division Multiplexing


Synchronous Time-Division Multiplexing (TDM) divides the capacity of the link into time slots that are pre-assigned to the attached devices, whether or not they have data to send.

Synchronous time-division multiplexing is possible when the achievable data rate (sometimes called bandwidth) of the medium exceeds the combined data rate of the digital signals to be transmitted. A generic description of a synchronous TDM system is provided in the following figure.

Fig. Synchronous time-division multiplexing

The transmitted data may have a format something like that shown in the figure. The data are organized into
frames. Each frame contains a cycle of time slots. In each frame, one or more slots are dedicated
to each data source. The sequence of slots dedicated to one source, from frame to frame, is called
a channel. At the receiver, the interleaved data are de-multiplexed and routed to the appropriate
destination buffer.

Synchronous TDM is called synchronous not because synchronous transmission is used, but because the time slots are pre-assigned to the sources and fixed. The time slots for each source are transmitted whether or not the source has data to send. Hence, capacity is wasted to achieve

simplicity of implementation. Even when fixed assignment is used, however, it is possible for a
synchronous TDM device to handle sources of different data rates. For example, the slowest
input device could be assigned one slot per cycle, while faster devices are assigned multiple slots
per cycle.
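The fixed slot assignment, including multiple slots per cycle for faster sources, can be sketched as follows. This is an illustrative model only: empty pre-assigned slots are shown as None to make the wasted capacity visible.

```python
def tdm_frame(sources, slots_per_source):
    """Build one synchronous TDM frame. Source i contributes
    slots_per_source[i] pre-assigned slots whether or not it has data;
    an unused slot is still transmitted (None here - wasted capacity)."""
    frame = []
    for src, n_slots in zip(sources, slots_per_source):
        for k in range(n_slots):
            frame.append(src[k] if k < len(src) else None)
    return frame

# A fast source gets two slots per cycle, slower sources one each; the
# third source is idle but its slot is transmitted anyway.
print(tdm_frame([["a1", "a2"], ["b1"], []], [2, 1, 1]))
# ['a1', 'a2', 'b1', None]
```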

UNIT II: Data Communication Network Communication Network :
Circuit Switching
Circuit switching is a connection-oriented network switching technique. Here, a dedicated route
is established between the source and the destination and the entire message is transferred
through it.
Phases of Circuit Switch Connection
 Circuit Establishment : In this phase, a dedicated circuit is established from the source to the destination through a number of intermediate switching centres. The sender and receiver transmit communication signals to request and acknowledge the establishment of the circuit.
 Data Transfer : Once the circuit has been established, data and voice are transferred from the
source to the destination. The dedicated connection remains as long as the end parties
communicate.
 Circuit Disconnection : When data transfer is complete, the connection is relinquished. The disconnection may be initiated by either user. Disconnection involves removal of all intermediate links from the sender to the receiver.
Diagrammatic Representation of Circuit Switching in Telephone
The following diagram represents circuit established between two telephones connected by
circuit switched connection. The blue boxes represent the switching offices and their connection
with other switching offices. The black lines connecting the switching offices represents the
permanent link between the offices. When a connection is requested, links are established within
the switching offices as denoted by white dotted lines, in a manner so that a dedicated circuit is
established between the communicating parties. The links remain as long as the communication continues.

Advantages
 It is suitable for long continuous transmission, since a continuous transmission route is
established, that remains throughout the conversation.
 The dedicated path ensures a steady data rate of communication.
 No intermediate delays are found once the circuit is established. So, they are suitable for real
time communication of both voice and data transmission.

Disadvantages
 Circuit switching establishes a dedicated connection between the end parties. This dedicated
connection cannot be used for transmitting any other data, even if the data load is very low.
 Bandwidth requirement is high even in cases of low data volume.
 There is underutilization of system resources. Once resources are allocated to a particular
connection, they cannot be used for other connections.
 Time required to establish connection may be high.

Packet Switching -
Packet switching is a connectionless network switching technique. Here, the message is divided
and grouped into a number of units called packets that are individually routed from the source to
the destination. There is no need to establish a dedicated circuit for communication.
Process
Each packet in a packet switching technique has two parts: a header and a payload. The header
contains the addressing information of the packet and is used by the intermediate routers to direct
it towards its destination. The payload carries the actual data.
A packet is transmitted as soon as it is available in a node, based upon its header information.
The packets of a message are not necessarily routed via the same path, so they may arrive at the destination out of order. It is the responsibility of the destination to reorder the packets in
order to retrieve the original message.
The process is diagrammatically represented in the following figure. Here the message comprises four packets, A, B, C and D, which may follow different routes from the sender to the
receiver.
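The split-and-reorder process described above can be sketched in a few lines. This is a minimal illustration in which the header is reduced to a sequence number; real packet headers carry addressing and other control information:

```python
def packetize(message, payload_size):
    """Split a message into packets, each a (header, payload) pair where
    the header is simply a sequence number for this sketch."""
    return [(seq, message[i:i + payload_size])
            for seq, i in enumerate(range(0, len(message), payload_size))]

def reassemble(packets):
    """Packets may arrive out of order over different routes; the
    destination sorts by the sequence number in the header."""
    return "".join(payload for _, payload in sorted(packets))

pkts = packetize("HELLO WORLD!", 3)              # packets A, B, C, D
arrived = [pkts[2], pkts[0], pkts[3], pkts[1]]   # out-of-order arrival
print(reassemble(arrived))  # HELLO WORLD!
```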

Advantages
 Delay in delivery of packets is less, since packets are sent as soon as they are available.
 Switching devices don't require massive storage, since they don't have to store entire messages before forwarding them to the next node.
 Data delivery can continue even if some parts of the network face link failure. Packets can be routed via other paths.
 It allows simultaneous usage of the same channel by multiple users.
 It ensures better bandwidth usage as a number of packets from multiple sources can be
transferred via the same link.
Disadvantages
 They are unsuitable for applications that cannot afford delays in communication like high
quality voice calls.
 Packet switching has high installation costs.
 They require complex protocols for delivery.
 Network problems may introduce errors in packets, delay in delivery of packets or loss of
packets. If not properly handled, this may lead to loss of critical information.

Packet Switching Principal


1. Packet switching network can perform data rate conversion. Two nodes can exchange packets
with different data rates because each device on the network is connected at its proper data
rate.
2. When data traffic is heavy on a circuit-switching network, some calls are blocked; the network refuses to accept further connection requests until its load decreases. On a packet-switching network, packets are still accepted, but delivery delay increases.
3. Line efficiency is greater, because a single node-to-node link can be dynamically shared by many packets over time. The packets are queued and transmitted over the link as rapidly as possible.
4. Priorities can be used: if a node has a number of packets queued for transmission, the higher-priority packets are transmitted first. These packets therefore experience less delay than lower-priority packets.
Virtual Circuit and Datagram
There are a number of differences between Virtual circuits and Datagram networks. Virtual
Circuits are computer networks that provide connection-oriented services, while those providing
connection-less services are called as Datagram networks.
Examples: The Internet is based on the datagram model, while ATM (Asynchronous Transfer Mode) and frame relay are virtual-circuit networks and therefore use connections at the network layer.

Virtual Circuits

1. Virtual circuits are connection-oriented, which means that there is a reservation of resources
like buffers, bandwidth, etc. for the time during which the newly setup VC is going to be used
by a data transfer session.
2. A virtual circuit network uses a fixed path for a particular session, after which the connection is broken and another path has to be set up for the next session.
3. All the packets follow the same path and hence a global header is required only for the first
packet of connection and other packets will not require it.
4. Packets reach in order to the destination as data follows the same path.
5. Virtual Circuits are highly reliable.
6. Implementation of virtual circuits is costly as each time a new connection has to be set up
with reservation of resources and extra information handling at routers.
Datagram Networks

1. It is connectionless service. There is no need for reservation of resources as there is no


dedicated path for a connection session.
2. A Datagram based network is a true packet switched network. There is no fixed path for
transmitting data.
3. Every packet is free to choose any path, and hence all the packets must be associated with a
header containing information about the source and the upper layer data.
4. Data packets reach the destination in random order, which means they need not reach in the
order in which they were sent out.
5. Datagram networks are not as reliable as Virtual Circuits.
6. But it is always easy and cost-efficient to implement datagram networks as there is no need
of reserving resources and making a dedicated path each time an application has to
communicate.
Message Switching
Message switching is a connectionless network switching technique where the entire message is
routed from the source node to the destination node, one hop at a time. It was a precursor of
packet switching.
Process
Message switching treats each message as an individual unit. Before sending the message, the
sender node adds the destination address to the message. It is then delivered entirely to the next
intermediate switching node. The intermediate node stores the message in its entirety, checks for
transmission errors, inspects the destination address and then delivers it to the next node. The
process continues till the message reaches the destination.
In the switching node, the incoming message is not discarded if the required outgoing circuit is
busy. Instead, it is stored in a queue for that route and retransmitted when the required route is
available. This is called a store-and-forward network.
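The store-and-forward behaviour can be sketched as a hop-by-hop simulation. This is an illustrative sketch; the log format and the assumption that a busy link eventually frees up are inventions of the example:

```python
def store_and_forward(message, route, busy_links):
    """Hop-by-hop delivery sketch: each node receives and stores
    `message` in its entirety before forwarding. If the outgoing link
    is busy, the message waits in that node's queue rather than being
    discarded."""
    log = []
    for hop, node in enumerate(route[:-1]):
        link = (node, route[hop + 1])
        if link in busy_links:
            log.append(("queued at", node))  # held until the route frees
            busy_links.discard(link)         # link eventually becomes free
        log.append(("forwarded", node, route[hop + 1]))
    return log

# Link B->C is initially busy, so the message waits in B's queue.
print(store_and_forward("full message", ["A", "B", "C"], {("B", "C")}))
```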
The following diagram represents routing of two separate messages from the same source to
same destination via different routes, using message switching –

Advantages
 Sharing of communication channels ensures better bandwidth usage.
 It reduces network congestion due to store and forward method. Any switching node can
store the messages till the network is available.
 Broadcasting messages requires much less bandwidth than circuit switching.
 Messages of unlimited sizes can be sent.
 It does not have to deal with out of order packets or lost packets as in packet switching.
Disadvantages
 In order to store many messages of unlimited sizes, each intermediate switching node
requires large storage capacity.
 Store and forward method introduces delay at each switching node. This renders it
unsuitable for real time applications.

Single Node Network


In data communication, a node is any active, physical, electronic device attached to a network.
These devices are capable of either sending, receiving, or forwarding information; sometimes a
combination of the three. Examples of nodes include bridges, switches, hubs, and modems to
other computers, printers, and servers. One of the most common forms of a node is a host
computer; often referred to as an Internet node.
A single record in the member node is related to only one record in the owner node. Additionally, a record in the member node cannot exist without being related to an existing record in the owner node. For example, a client must be assigned to an agent, but an agent with no clients can still be listed in the database.

The above diagram shows a diagram of a basic set structure. One or more sets (connections) can
be defined between a specific pair of nodes, and a single node can also be involved in other sets
with other nodes in the database.
The data can be easily accessed inside a network model with the help of an appropriate set structure. Since there are no restrictions on choosing the root node, the data can be accessed via any node, running backward or forward with the help of the related sets.
Advantages
 Fast data access.
 It also allows users to create queries that are more complex than those they created using
a hierarchical database. So, a variety of queries can be run over this model.

Digital Network Concept


Connecting a home computer to the Internet Service Provider used to take a lot of effort. The
modulator-demodulator unit, simply called the MODEM, was essential to establish a connection.
The following figure shows how the model worked in the past.

The above figure shows that digital signals had to be converted into analog, and analog signals
back to digital, by modems along the whole path. What if the digital information at one end
could reach the other end in the same mode, without all these conversions? It is this basic idea
that led to the development of ISDN.
As the system has to use the telephone cable through the telephone exchange for using the
Internet, the usage of telephone for voice calls was not permitted. The introduction of ISDN has
resolved this problem allowing the transmission of both voice and data simultaneously. This has
many advanced features over the traditional PSTN, Public Switched Telephone Network.
ISDN was first defined in the CCITT red book in 1988. Integrated Services Digital
Network, in short ISDN, is a telephone-network-based infrastructure that allows the
transmission of voice and data simultaneously at high speed with greater efficiency. This is a
circuit switched telephone network system, which also provides access to Packet switched
networks.

The model of a practical ISDN is as shown below.

ISDN supports a variety of services. A few of them are listed below −


 Voice calls
 Videotext
 Electronic Mail
 Database access
 Data transmission and voice
 Connection to internet
 Electronic Fund transfer
 Image and graphics exchange
 Document storage and transfer
 Audio and Video Conferencing
 Automatic alarm services to fire stations, police, medical etc.

Routing
When a device has multiple paths to reach a destination, it always selects one path, preferring
it over the others. This selection process is termed routing. Routing is done by special network
devices called routers, or it can be done by means of software processes. Software-based
routers have limited functionality and limited scope.

A router is always configured with some default route. A default route tells the router where to
forward a packet if no route is found for a specific destination. In case multiple paths exist
to reach the same destination, the router can make its decision based on the following
information:

 Hop Count
 Bandwidth
 Metric
 Prefix-length
 Delay

Routes can be statically configured or dynamically learnt. One route can be configured to be
preferred over others.
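As a sketch of this decision logic, the following Python fragment (an illustrative assumption, not any real router's implementation) prefers the route with the longest prefix match and, among equal prefixes, the lowest metric:

```python
# Illustrative route selection: longest prefix match wins; among equal
# prefixes, the route with the lowest metric (which may combine hop count,
# bandwidth and delay) is preferred. The route format is an assumption.

def select_route(routes):
    """routes: list of dicts with 'prefix_length' and 'metric' keys."""
    return max(routes, key=lambda r: (r["prefix_length"], -r["metric"]))

routes = [
    {"via": "10.0.0.1", "prefix_length": 16, "metric": 10},
    {"via": "10.0.1.1", "prefix_length": 24, "metric": 20},  # more specific
    {"via": "10.0.2.1", "prefix_length": 24, "metric": 5},   # same prefix, cheaper
]
best = select_route(routes)
```

Here the two /24 routes beat the /16, and the /24 with metric 5 is selected.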

Unicast routing
Most of the traffic on the internet and intranets, known as unicast data or unicast traffic, is sent
with a specified destination. Routing unicast data over the internet is called unicast routing. It is
the simplest form of routing because the destination is already known. Hence the router just has
to look up the routing table and forward the packet to the next hop.

Broadcast routing

By default, broadcast packets are not routed and forwarded by the routers on any network.
Routers create broadcast domains. However, a router can be configured to forward broadcasts in
some special cases. A broadcast message is destined to all network devices.
One approach is for the router to create a data packet and send it to each host one by one. In this
case, the router creates multiple copies of a single data packet with different destination
addresses. All packets are sent as unicast, but because they are sent to every host, the effect is as
if the router were broadcasting.
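The copy-per-host behaviour described above can be sketched as follows (a toy illustration; the addresses and packet format are assumptions):

```python
# Toy sketch of simulating a broadcast with unicast copies: the "router"
# produces one copy of the payload per destination host.

def broadcast_as_unicast(payload, hosts):
    """Return one unicast packet per destination host."""
    return [{"dst": h, "data": payload} for h in hosts]

copies = broadcast_as_unicast("hello", ["192.168.1.2", "192.168.1.3", "192.168.1.4"])
```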

Multicast Routing
Multicast routing is a special case of broadcast routing, with significant differences and
challenges. In broadcast routing, packets are sent to all nodes even if they do not want them. But
in multicast routing, the data is sent only to nodes that want to receive the packets.

Anycast Routing
Anycast packet forwarding is a mechanism where multiple hosts can have the same logical
address. When a packet destined to this logical address is received, it is sent to the host which is
nearest in the routing topology.

Anycast routing is done with the help of a DNS server. Whenever an anycast packet is received,
DNS is queried to determine where to send it. DNS provides the IP address that is the nearest
one configured on it.
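The nearest-host selection can be sketched as follows (the hop counts and addresses are illustrative assumptions):

```python
# Toy anycast selection: several hosts share one logical address; the
# packet goes to whichever host is nearest in the routing topology,
# measured here simply in hops.

def anycast_target(candidates):
    """candidates: list of (host_ip, distance_in_hops); pick the nearest."""
    return min(candidates, key=lambda c: c[1])[0]

nearest = anycast_target([("10.1.1.1", 5), ("10.2.2.2", 2), ("10.3.3.3", 7)])
```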

X.25.
X.25 is a protocol for packet switched communications over WAN (Wide Area Network). It was
originally designed for use in the 1970s and became very popular in the 1980s. Presently, it is used
for networks for ATMs and credit card verification. It allows multiple logical channels to use the
same physical line. It also permits data exchange between terminals with different
communication speeds.
Fig. X.25 interface

X.25 has three protocol layers


 Physical Layer: It lays out the physical, electrical and functional characteristics of the
interface between the computer terminal and the link to the packet-switched node. The X.21
physical interface is commonly used for the link.
 Data Link Layer: It comprises the link access procedures for exchanging data over the link.
Here, control information for transmission over the link is attached to the packets from the
packet layer to form the LAPB frame (Link Access Procedure Balanced). This service
ensures a bit-oriented, error-free, and ordered delivery of frames.
 Packet Layer: This layer defines the format of data packets and the procedures for control
and transmission of the data packets. It provides external virtual circuit service. Virtual
circuits may be of two types: virtual call and permanent virtual circuit. The virtual call is
established dynamically when needed through call set up procedure, and the circuit is
relinquished through call clearing procedure. Permanent virtual circuit, on the other hand, is
fixed and network assigned.
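The layering described above can be illustrated with a toy sketch: packet-layer data identifying a virtual circuit is wrapped with data-link control information to form a LAPB-style frame. The field names and structure here are simplified assumptions, not the real LAPB bit layout.

```python
# Toy sketch of X.25 layering (field names are illustrative, not the
# actual LAPB/X.25 encoding).

def make_packet(vc_number, data):
    # Packet layer: identify the logical channel (virtual circuit).
    return {"vc": vc_number, "data": data}

def make_lapb_frame(packet, send_seq, recv_seq):
    # Data link layer: attach control information (sequence numbers) used
    # for ordered, error-controlled delivery over the link.
    return {"ns": send_seq, "nr": recv_seq, "info": packet}

frame = make_lapb_frame(make_packet(1, "hello"), send_seq=0, recv_seq=0)
```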
Types of Network (Extra)
11 Types of Networks in Use Today
1. Personal Area Network (PAN)
The smallest and most basic type of network, a PAN is made up of a wireless modem, a
computer or two, phones, printers, tablets, etc., and revolves around one person in one building.
These types of networks are typically found in small offices or residences, and are managed by
one person or organization from a single device.
2. Local Area Network (LAN)
We're confident that you've heard of these types of networks before – LANs are the most
frequently discussed networks, one of the most common, one of the most original and one of the
simplest types of networks. LANs connect groups of computers and low-voltage devices together
across short distances (within a building or between a group of two or three buildings in close
proximity to each other) to share information and resources. Enterprises typically manage and
maintain LANs.
3. Wireless Local Area Network (WLAN)
Functioning like a LAN, WLANs make use of wireless network technology, such as WiFi.
Typically seen in the same types of applications as LANs, these types of networks don't require
that devices rely on physical cables to connect to the network.
4. Campus Area Network (CAN)
Larger than LANs, but smaller than metropolitan area networks (MANs, explained below), these
types of networks are typically seen in universities, large K-12 school districts or small

businesses. They can be spread across several buildings that are fairly close to each other so
users can share resources.
5. Metropolitan Area Network (MAN)
These types of networks are larger than LANs but smaller than WANs – and incorporate
elements from both types of networks. MANs span an entire geographic area (typically a town or
city, but sometimes a campus). Ownership and maintenance is handled by either a single person
or company (a local council, a large company, etc.).
6. Wide Area Network (WAN)
Slightly more complex than a LAN, a WAN connects computers together across longer physical
distances. This allows computers and low-voltage devices to be remotely connected to each other
over one large network to communicate even when they're miles apart.
7. Storage-Area Network (SAN)
As a dedicated high-speed network that connects shared pools of storage devices to several
servers, these types of networks don't rely on a LAN or WAN. Instead, they move storage
resources away from the network and place them into their own high-performance network.
SANs can be accessed in the same fashion as a drive attached to a server. Types of storage-area
networks include converged, virtual and unified SANs.
8. System-Area Network (also known as SAN)
This term is fairly new within the past two decades. It is used to explain a relatively local
network that is designed to provide high-speed connection in server-to-server applications
(cluster environments), storage area networks (called “SANs” as well) and processor-to-processor
applications. The computers connected on a SAN operate as a single system at very
high speeds.
9. Passive Optical Local Area Network (POLAN)
As an alternative to traditional switch-based Ethernet LANs, POLAN technology can be
integrated into structured cabling to overcome concerns about supporting traditional Ethernet
protocols and network applications such as PoE (Power over Ethernet). A point-to-multipoint
LAN architecture, POLAN uses optical splitters to split an optical signal from one strand of
single mode optical fiber into multiple signals to serve users and devices.
10. Enterprise Private Network (EPN)
These types of networks are built and owned by businesses that want to securely connect their
various locations to share computer resources.
11. Virtual Private Network (VPN)
By extending a private network across the Internet, a VPN lets its users send and receive data as
if their devices were connected to the private network – even if they're not. Through a virtual
point-to-point connection, users can access a private network remotely.

LAN and MAN :


LAN
A Local Area Network (LAN) is a private network that connects computers and devices within a
limited area like a residence, an office, a building or a campus. On a small scale, LANs are used
to connect personal computers to printers. However, LANs can also extend to a few kilometers
when used by companies, where a large number of computers share a variety of resources like
hardware (e.g. printers, scanners, audiovisual devices etc), software (e.g. application programs)
and data.
The distinguishing features of LAN are
 Network size is limited to a small geographical area, presently to a few kilometers.
 Data transfer rate is generally high. They range from 100 Mbps to 1000 Mbps.

 In general, a LAN uses only one type of transmission medium, commonly category 5
twisted-pair cables.
 A LAN is distinguished from other networks by their topologies. The common topologies
are bus, ring, mesh, and star.
 The number of computers connected to a LAN is usually restricted. In other words, LANs
are limitedly scalable.
 IEEE 802.3 or Ethernet is the most common LAN. It uses a wired medium in
conjunction with a switch or a hub. Originally, coaxial cables were used for
communications, but now twisted pair cables and fiber optic cables are also used.

Types of LAN
Wireless LANs (WLAN)
Wireless LANs use high-frequency radio waves instead of cables for communications. They
provide clutter free homes, offices and other networked places. They have an Access Point or a
wireless router or a base station for transferring packets to and from the wireless computers and
the internet. Most WLANs are based on the standard IEEE 802.11 or WiFi.
Virtual LANs (VLAN)
Virtual LANs are a logical group of computers that appear to be on the same LAN irrespective of
the configuration of the underlying physical network. Network administrators partition the
networks to match the functional requirements of the VLANs so that each VLAN comprises a
subset of ports on a single or multiple switches. This allows computers and devices on a VLAN
to communicate in the simulated environment as if it is a separate LAN.

MAN Technology
A metropolitan area network (MAN) is a network with a size greater than LAN but smaller than
a WAN. It normally comprises networked interconnections within a city that also offers a
connection to the Internet.
The distinguishing features of MAN are
 Network size generally ranges from 5 to 50 km. It may range from a group of
buildings on a campus to the whole city.
 Data rates are moderate to high.
 In general, a MAN is either owned by a user group or by a network provider who sells
service to users, rather than a single organization as in LAN.
 It facilitates sharing of regional resources.
 They provide uplinks for connecting LANs to WANs and Internet.

Example of MAN
 Cable TV network
 Telephone networks providing high-speed DSL lines
 IEEE 802.16 or WiMAX, that provides high-speed broadband access with Internet
connectivity to customer premise

Topologies:
The way in which devices are interconnected to form a network is called network topology.
Some of the factors that affect choice of topology for a network are −
 Cost − Installation cost is a very important factor in overall cost of setting up an
infrastructure. So cable lengths, distance between nodes, location of servers, etc. have to
be considered when designing a network.
 Flexibility − Topology of a network should be flexible enough to allow reconfiguration
of office set up, addition of new nodes and relocation of existing nodes.
 Reliability − Network should be designed in such a way that it has minimum down time.
Failure of one node or a segment of cabling should not render the whole network useless.
 Scalability − Network topology should be scalable, i.e. it can accommodate load of new
devices and nodes without perceptible drop in performance.
 Ease of installation − Network should be easy to install in terms of hardware, software
and technical personnel requirements.
 Ease of maintenance − Troubleshooting and maintenance of network should be easy.

Bus topology
Data network with bus topology has a linear transmission cable, usually coaxial, to which
many network devices and workstations are attached along the length. Server is at one end of
the bus. When a workstation has to send data, it transmits packets with destination address in
its header along the bus.

The data travels in both the directions along the bus. When the destination terminal sees the data,
it copies it to the local disk.
Advantages of Bus Topology
 Easy to install and maintain
 Can be extended easily
 Very reliable because of single transmission line
Disadvantages of Bus Topology
 Troubleshooting is difficult as there is no single point of control
 One faulty node can bring the whole network down
 Dumb terminals cannot be connected to the bus

Tree
Tree topology has a group of star networks connected to a linear bus backbone cable. It
incorporates features of both star and bus topologies. Tree topology is also called hierarchical
topology.

Advantages of Tree Topology


 Existing network can be easily expanded
 Point-to-point wiring for individual segments means easier installation and maintenance
 Well suited for temporary networks
Disadvantages of Tree Topology
 Technical expertise required to configure and wire tree topology
 Failure of backbone cable brings down entire network
 Insecure network

 Maintenance difficult for large networks

Star
In star topology, server is connected to each node individually. Server is also called the central
node. Any exchange of data between two nodes must take place through the server. It is the most
popular topology for information and voice networks as central node can process data received
from source node before sending it to the destination node.

Advantages of Star Topology


 Failure of one node does not affect the network
 Troubleshooting is easy as faulty node can be detected from central node immediately
 Simple access protocols required as one of the communicating nodes is always the central
node
Disadvantages of Star Topology
 Long cables may be required to connect each node to the server
 Failure of central node brings down the whole network

Ring
In ring topology each terminal is connected to exactly two nodes, giving the network a circular
shape. Data travels in only one pre-determined direction.

When a terminal has to send data, it transmits it to the neighboring node, which transmits it to the
next one. Before further transmission, data may be amplified. In this way, data traverses the
network and reaches the destination node, which removes it from the network. If the data returns
to the sender, the sender removes it and may resend it later.

Advantages of Ring Topology


 Small cable segments are needed to connect two nodes
 Ideal for optical fibres as data travels in only one direction
 Very high transmission speeds possible
Disadvantages of Ring Topology
 Failure of single node brings down the whole network
 Troubleshooting is difficult as many nodes may have to be inspected before faulty one is
identified
 Difficult to remove one or more nodes while keeping the rest of the network intact

Medium Access Control Protocols


The medium access control (MAC) is a sublayer of the data link layer of the open system
interconnections (OSI) reference model for data transmission. It is responsible for flow control
and multiplexing for transmission medium. It controls the transmission of data packets via
remotely shared channels. It sends data over the network interface card.
The Open System Interconnections (OSI) model is a layered networking framework that
conceptualizes how communications should be done between heterogeneous systems. The data
link layer is the second lowest layer. It is divided into two sublayers −
 The logical link control (LLC) sublayer
 The medium access control (MAC) sublayer
The following diagram depicts the position of the MAC layer

Functions of MAC Layer


 It provides an abstraction of the physical layer to the LLC and upper layers of the OSI network.

 It is responsible for encapsulating frames so that they are suitable for transmission via the
physical medium.
 It resolves the addressing of source station as well as the destination station, or groups of
destination stations.
 It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
 It also performs collision resolution and initiates retransmission in case of collisions.
 It generates the frame check sequences and thus contributes to protection against
transmission errors.
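The frame check sequence mentioned above is typically a cyclic redundancy check (CRC). The following is a minimal sketch using the CRC-32 available in Python's standard library; real MAC layers compute the FCS over specific frame fields, usually in hardware, and the byte layout here is an assumption for illustration.

```python
import binascii

# Append a CRC-32 frame check sequence to a frame, and verify it on receipt.
def append_fcs(frame: bytes) -> bytes:
    fcs = binascii.crc32(frame)               # 32-bit check value over the frame
    return frame + fcs.to_bytes(4, "big")

def check_fcs(frame_with_fcs: bytes) -> bool:
    frame, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return binascii.crc32(frame) == int.from_bytes(fcs, "big")

sent = append_fcs(b"payload")                 # transmitted frame with FCS
```

An intact frame passes `check_fcs`, while a frame corrupted in transit fails it, triggering error handling such as discarding the frame.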

LAN/MAN Standards.
All LANs and MANs consist of collections of devices that share the network's transmission
capacity. Some means is needed to control access to the transmission medium, to
provide orderly and efficient use of the transmission capacity. This is the function of a Medium
Access Control (MAC) protocol.

The key parameters in any Medium Access Control technique are 'where' and 'how'. 'Where' refers
to whether the control is exercised in a centralized or distributed fashion.
In a centralized scheme, a controller is designated that has the authority to grant access to the
network i.e. if a station wishes to transmit it must wait until it receives permission from the
controller.
In a decentralized network, the stations collectively perform a Medium Access Control function
and dynamically determine the order in which stations should transmit.

The second parameter, 'how', is constrained by the topology and is a trade-off among competing
factors, including cost, performance, and complexity. In general, we can categorize access
control techniques as being either synchronous or asynchronous.

With a synchronous technique, a specific capacity is dedicated to a connection, the same as in
frequency-division multiplexing (FDM) and synchronous time-division multiplexing (TDM).
But these techniques are not optimal in LANs and MANs because the needs of the stations are
unpredictable. It is preferable to allocate capacity in an asynchronous (dynamic) fashion, more or
less in response to immediate demand.

The asynchronous approach is divided into three types


1. Round robin 2. Reservation 3. Contention

1. Round Robin
With round robin, each station in turn is given the opportunity to transmit. During that
opportunity, the station may decline to transmit or may transmit subject to a specified upper
bound, i.e. a maximum amount of data transmitted.

In any case, when the station finishes its transmission, it relinquishes its turn, and the right to
transmit passes to the next station in logical sequence. Control of the sequence may be
centralized or distributed.
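The round-robin discipline can be sketched as follows; the station queues and the upper bound are illustrative assumptions:

```python
# Toy round-robin scheduler: each station in turn transmits up to `bound`
# frames (its upper bound), then the turn passes to the next station in
# logical sequence. A station with an empty queue effectively declines.

def round_robin(stations, bound):
    """stations: {name: list_of_frames}; returns the transmission order."""
    sent = []
    while any(stations.values()):
        for name in stations:                      # logical sequence of turns
            burst = stations[name][:bound]         # respect the upper bound
            del stations[name][:bound]
            sent.extend((name, f) for f in burst)
    return sent

order = round_robin({"A": ["a1", "a2", "a3"], "B": ["b1"]}, bound=2)
```

With a bound of 2, station A sends two frames, B sends its one frame, then A gets another turn for its remaining frame.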

2. Reservation
For stream traffic, reservation techniques are well suited. For these techniques, time on the
medium is divided into slots, much as with synchronous TDM. A station wishing to transmit
reserves future slots for an extended or even an indefinite period. Again, reservations may be
made in a centralized or distributed fashion.

3. Contention
For bursty traffic, contention techniques are usually appropriate. With these techniques, no
control is exercised to determine whose turn it is; all stations contend for time on the medium.
These techniques are distributed by nature. Their principal advantage is that they are simple
to implement and, under light to moderate load, efficient.

Table: Standardized medium access control techniques


Type          Bus Topology                Ring Topology            Switched Topology
Round robin   Token Bus (IEEE 802.4),     Token Ring               Request/priority
              Polling (IEEE 802.11)       (IEEE 802.5, FDDI)       (IEEE 802.12)
Reservation   DQDB (IEEE 802.6)           –                        –
Contention    CSMA/CD (IEEE 802.3),       –                        CSMA/CD (IEEE 802.3)
              CSMA (IEEE 802.11)

UNIT III: Communication Architecture Protocols and Architecture:
Protocol
A protocol is a standard set of rules that allow electronic devices to communicate with each
other. These rules include what type of data may be transmitted, what commands are used to
send and receive data, and how data transfers are confirmed.

HDLC (High Level Data Link Control) is an example of a protocol. The data to be exchanged
must be sent in frames in specific format (syntax). The control field provides a variety of
regulatory functions, such as setting a mode and establishing a connection (semantics).
Provisions are also included for flow control (timing).

Characteristics of Protocol
1. Direct/Indirect
2. Monolithic/structured
3. Symmetric/asymmetric
4. Standard/ nonstandard

1. Direct/Indirect protocol: Communication between two entities can be direct or indirect. If


two systems share a point-to-point link, the entities in these systems can communicate directly.
That is, data and control information is passed directly between entities with no intervening
active agent.

Fig. Means of connection of communicating systems


If systems are connected through a switched communication network, a direct protocol is no
longer possible. The two entities must depend on the functioning of other entities to exchange
data.

2. Monolithic/Structured
Consider an electronic mail package running on two computers connected by a synchronous
HDLC link. To be monolithic, the package needs to include all the HDLC logic, logic for
breaking up the mail into packet-sized chunks, and logic for requesting a virtual circuit.

Mail should only be sent when the destination system and entity are active and ready to receive.
Logic is needed for coordination. An alternative is to use structured design and implementation
techniques.

3. Symmetric/ asymmetric protocol


A protocol may be either symmetric or asymmetric. Most protocols are symmetric, i.e. they
involve communication between peer entities.
Asymmetry may be dictated by the logic of an exchange (e.g. a client and a server process) or by
the desire to keep one of the entities or systems as simple as possible.

4. Standard or nonstandard protocol


A protocol may be either standard or nonstandard. A nonstandard protocol is one that is built for
a specific communication situation or a particular model of a computer.
Protocol Functions
Not all protocols have the same functions. There are many instances of the same type of
function being present in protocols at different levels. Protocol functions are grouped into the
following types:
1. Segmentation and Reassembly
A protocol is concerned with exchanging streams of data between two entities. The transfer of
data consists of a sequence of blocks of bounded size. At the application level, we refer to a logical
unit of data transfer as a message.
When the application entity sends data in messages or in a continuous stream, lower level
protocols may need to break the data into blocks of smaller bounded size. This process is called
segmentation. A block of data that is exchanged between two entities via a protocol is referred to
as a protocol data unit (PDU).
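Segmentation and reassembly can be sketched in a few lines; the PDU size here is an arbitrary assumption:

```python
# Break a message into PDUs of bounded size, and reassemble them.

def segment(message: bytes, pdu_size: int):
    """Split a message into blocks of at most pdu_size bytes."""
    return [message[i:i + pdu_size] for i in range(0, len(message), pdu_size)]

def reassemble(pdus):
    """Inverse of segment: join the blocks back into the original message."""
    return b"".join(pdus)

pdus = segment(b"hello world", 4)
```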

2. Encapsulation
Each PDU contains not only data but also control information; some PDUs contain only control
information and no data. Control information includes items such as the address, an
error-detection code, and protocol control data. The addition of control information to data is
referred to as encapsulation.

3. Connection Control
An entity may transmit data to another entity in such a way that each PDU is treated
independently of all previous PDUs. This process is known as connectionless data transfer.
Connection-oriented data transfer is preferred if the stations expect a lengthy exchange of data.
A logical association or connection is established between the entities, which goes through
three phases: connection establishment, data transfer, and connection termination.
The following figure shows connection-oriented data transfer.

Fig. The phases of a connection-oriented data transfer

4. Ordered Delivery
If two communicating entities are in different hosts connected by a network, there is a risk that
PDUs will not arrive in order, because they may follow different paths through the network.
In connection-oriented protocols, it is generally required that PDU order be maintained. If
each PDU is given a unique number, and the numbers are assigned sequentially, then it is easy
for the receiving entity to reorder the received PDUs.
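Reordering by sequence number can be sketched as:

```python
# PDUs carry sequentially assigned sequence numbers, so the receiver can
# restore the original order even when PDUs arrive out of order.

def reorder(pdus):
    """pdus: list of (sequence_number, data) tuples, possibly out of order."""
    return [data for _, data in sorted(pdus)]

arrived = [(2, "C"), (0, "A"), (1, "B")]   # out-of-order arrival
in_order = reorder(arrived)
```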

5. Flow control
Flow control is a function performed by a receiving entity to limit the amount or rate of data that
is sent by a transmitting entity. The simplest form of flow control is a stop-and-wait procedure,
in which each PDU must be acknowledged before the next can be sent.
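The stop-and-wait procedure can be sketched with an in-memory "channel"; the receiver callback and acknowledgement format are illustrative assumptions:

```python
# Toy stop-and-wait flow control: the sender transmits one PDU and waits
# for its acknowledgement before sending the next.

def stop_and_wait(pdus, receive):
    log = []
    for pdu in pdus:
        log.append(("send", pdu))
        ack = receive(pdu)        # blocks until the receiver acknowledges
        log.append(("ack", ack))
    return log

received = []
def receiver(pdu):
    received.append(pdu)
    return len(received) - 1      # ack carries the sequence number

log = stop_and_wait(["P0", "P1"], receiver)
```

The log shows the strict alternation send, ack, send, ack that limits the sender to the receiver's pace.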

6. Error Control
Techniques are needed to guard the data and control information transfer against loss or damage.
Most techniques involve error detection, based on a frame check sequence, and PDU
retransmission.
Retransmission is often activated by a timer. If a sending entity fails to receive an acknowledgment
for a PDU within a specified period of time, it will retransmit. Error control is a function that must
be performed at various levels of protocol.

7. Addressing
For two entities to communicate with each other over a point-to-point link, they must be able to
identify each other. On a switched network, the network needs to know the identity of the
destination station in order to properly route the data blocks or set up a connection. A distinction is
generally made among names, addresses and routes.

8. Multiplexing
One form of multiplexing is supported by means of multiple connections into a single system.
For example, with X.25, there can be multiple virtual circuits terminating in a single end
system. The virtual circuits are multiplexed over the single physical interface between the end
system and the network.
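Multiplexing of several virtual circuits over one interface can be sketched by tagging each unit with its circuit number so the far end can demultiplex; the circuit numbers and data are illustrative:

```python
# Toy multiplexing: several virtual circuits share one physical interface.
# Each outgoing unit is tagged with its circuit number; the receiver uses
# that tag to demultiplex the stream back into per-circuit queues.

def multiplex(circuits):
    """circuits: {vc_number: list_of_packets} -> interleaved (vc, packet) stream."""
    stream = []
    queues = {vc: list(pkts) for vc, pkts in circuits.items()}
    while any(queues.values()):
        for vc, q in queues.items():
            if q:
                stream.append((vc, q.pop(0)))
    return stream

def demultiplex(stream):
    out = {}
    for vc, pkt in stream:
        out.setdefault(vc, []).append(pkt)
    return out

stream = multiplex({1: ["a", "b"], 2: ["x"]})
```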

9. Transmission services
Additional services are provided to the entities that use the particular protocol.

The Layered Approach


The basic reason for using a layered networking approach is that a layered model takes a task,
such as data communications, and breaks it into a series of tasks, activities, or components, each
of which is defined and developed independently.

The Reasons for a Layered Model:

Change: When changes are made to one layer, the impact on the other layers is minimized. If
the model consists of a single, all-encompassing layer, any change affects the entire model.

Design: A layered model defines each layer separately. As long as the interconnections between
layers remain constant, protocol designers can specialize in one area (layer) without worrying
about how any new implementations affect other layers.

Learning: The layered approach reduces a very complex set of topics, activities, and actions into
several smaller, interrelated groupings. This makes learning and understanding the actions of
each layer and the model generally much easier.

Troubleshooting: The protocols, actions, and data contained in each layer of the model relate
only to the purpose of that layer. This enables troubleshooting efforts to be pinpointed on the
layer that carries out the suspected cause of the problem.

Standards: Probably the most important reason for using a layered model is that it establishes a
prescribed guideline for interoperability between the various vendors developing products that
perform different data communications tasks. Remember that layered models, including the OSI
model, provide only a guideline and framework, not a rigid standard, for manufacturers to use
when creating their products.

OSI Model
There are a large number of computer network users located all over the world. To ensure
national and worldwide data communication, systems must be developed that are compatible
enough to communicate with each other. For this purpose, the ISO has developed a standard.
ISO stands for International Organization for Standardization. The standard is called the model
for Open System Interconnection (OSI) and is commonly known as the OSI model.
The ISO-OSI model is a seven layer architecture. It defines seven layers or levels in a complete
communication system. From the top down, they are:
7. Application Layer
6. Presentation Layer
5. Session Layer
4. Transport Layer
3. Network Layer
2. Data Link Layer
1. Physical Layer
Below we have the complete representation of the OSI model, showcasing all the layers and how
they communicate with each other.

In the table below, we have specified the protocols used and the data unit exchanged by each
layer of the OSI Model.

Principles of OSI Reference Model


The OSI reference model has 7 layers. The principles that were applied to arrive at the seven
layers can be briefly summarized as follows:
1. A layer should be created where a different abstraction is needed.
2. Each layer should perform a well-defined function.

3. The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
4. The layer boundaries should be chosen to minimize the information flow across the
interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown
together in the same layer out of necessity, and small enough that the architecture does not
become unwieldy.

Functions of Different Layers


Following are the functions performed by each layer of the OSI model. This is just an
introduction; we will cover each layer in detail in the coming tutorials.

Layer 1: The Physical Layer


1. Physical Layer is the lowest layer of the OSI Model.
2. It activates, maintains and deactivates the physical connection.
3. It is responsible for transmission and reception of unstructured raw data over the network.
4. Voltages and data rates needed for transmission are defined in the physical layer.
5. It converts digital/analog bits into electrical or optical signals.
6. Data encoding is also done in this layer.

Layer 2: Data Link Layer


1. Data link layer synchronizes the information which is to be transmitted over the physical
layer.
2. The main function of this layer is to make sure data transfer is error free from one node to
another, over the physical layer.
3. Transmitting and receiving data frames sequentially is managed by this layer.
4. This layer sends and expects acknowledgements for frames received and sent
   respectively. Resending of frames for which no acknowledgement is received is also
   handled by this layer.
5. This layer establishes a logical link between two nodes and also manages frame
   traffic control over the network. It signals the transmitting node to stop when the frame
   buffers are full.
Layer 3: The Network Layer
1. Network Layer routes the signal through different channels from one node to other.
2. It acts as a network controller. It manages the Subnet traffic.
3. It decides which route the data should take.
4. It divides the outgoing messages into packets and assembles the incoming packets into
messages for higher levels.
Layer 4: Transport Layer
1. Transport Layer decides if data transmission should be on parallel paths or a single path.
2. Functions such as multiplexing, segmenting or splitting of the data are done by this
   layer.
3. It receives messages from the Session layer above it, converts the messages into smaller
   units and passes them on to the Network layer.
4. Transport layer can be very complex, depending upon the network requirements.
Transport layer breaks the message (data) into small units so that they are handled more
efficiently by the network layer.
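The segmentation described in points 3 and 4 can be sketched in a few lines of Python. This is a toy illustration only: the segment size and the (sequence, chunk) representation are our own choices, not part of any real protocol header.

```python
def segment(message: bytes, size: int):
    """Break a message into numbered (sequence, chunk) units."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(units):
    """Rebuild the original message; sorting tolerates out-of-order arrival."""
    return b"".join(chunk for _, chunk in sorted(units))

units = segment(b"hello transport layer", 8)
# Even if the units arrive in reverse order, the sequence numbers restore them.
assert reassemble(list(reversed(units))) == b"hello transport layer"
```

Real transport protocols carry the sequence number inside a header field, but the principle of numbering the units so the receiver can restore their order is the same.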
Layer 5: The Session Layer
1. Session Layer manages and synchronizes the conversation between two different
   applications.
2. During transfer of data from source to destination, the session layer marks and
   resynchronizes the streams of data properly, so that the ends of messages are not cut
   prematurely and data loss is avoided.
Layer 6: The Presentation Layer
1. Presentation Layer takes care that the data is sent in such a way that the receiver will
understand the information (data) and will be able to use the data.
2. While receiving the data, presentation layer transforms the data to be ready for the
application layer.
3. The languages (syntax) of the two communicating systems can be different. Under this
   condition, the presentation layer plays the role of a translator.
4. It performs data compression, data encryption, data conversion etc.
Layer 7: Application Layer
1. Application Layer is the topmost layer.
2. Transferring files and distributing the results to the user is also done in this layer. Mail
   services, directory services, network resources etc. are services provided by the application
   layer.
3. This layer mainly holds application programs that act upon received data and data to be sent.
TCP/IP Protocol Suite
TCP/IP stands for Transmission Control Protocol and Internet Protocol. It is the network model used
in the current Internet architecture as well. Protocols are sets of rules which govern every
possible communication over a network. These protocols describe the movement of data between
the source and destination, over the internet. They also offer simple naming and addressing
schemes.
Protocols and networks in the TCP/IP model:
TCP/IP, that is, Transmission Control Protocol and Internet Protocol, was developed by the
Department of Defense's Advanced Research Projects Agency (ARPA, later DARPA) as part of a
research project on network interconnection, to connect remote machines.
The features that stood out during the research, which led to making the TCP/IP reference model
were:
• Support for a flexible architecture. Adding more machines to a network was easy.
• The network was robust, and connections remained intact as long as the source and
  destination machines were functioning.
The overall idea was to allow one application on one computer to talk to (send data packets to)
another application running on a different computer.
The 4 layers that form the TCP/IP reference model:
Layer 1: Host-to-network / Network Access Layer
1. Lowest layer of them all.
2. Protocol is used to connect to the host, so that the packets can be sent over it.
3. Varies from host to host and network to network.
Layer 2: Internet Layer
1. Selection of a packet-switching network which is based on a connectionless internetwork
   layer is called an internet layer.
2. It is the layer which holds the whole architecture together.
3. It helps the packet to travel independently to the destination.
4. The order in which packets are received may be different from the order in which they are sent.
5. IP (Internet Protocol) is used in this layer.
6. The various functions performed by the Internet Layer are:
o Delivering IP packets
o Performing routing
o Avoiding congestion
Layer 3: Transport Layer
1. It decides if data transmission should be on parallel paths or a single path.
2. Functions such as multiplexing, segmenting or splitting of the data are done by the transport
   layer.
3. The applications can read and write to the transport layer.
4. Transport layer adds header information to the data.
5. Transport layer breaks the message (data) into small units so that they are handled more
   efficiently by the network layer.
6. Transport layer also arranges the packets to be sent, in sequence.
Layer 4: Application Layer
The TCP/IP specifications described a lot of applications that were at the top of the protocol
stack. Some of them were TELNET, FTP, SMTP, DNS etc.
1. TELNET is a two-way communication protocol which allows connecting to a remote
   machine and running applications on it.
2. FTP(File Transfer Protocol) is a protocol, that allows File transfer amongst computer
users connected over a network. It is reliable, simple and efficient.
3. SMTP(Simple Mail Transport Protocol) is a protocol, which is used to transport
electronic mail between a source and destination, directed via a route.
4. DNS (Domain Name System) resolves the textual address (hostname) of a host
   connected over a network into its IP address.
5. It allows peer entities to carry on a conversation.
6. It defines two end-to-end protocols: TCP and UDP
o TCP(Transmission Control Protocol): It is a reliable connection-oriented
protocol which handles byte-stream from source to destination without error and
flow control.
o UDP (User Datagram Protocol): It is an unreliable connectionless protocol for
  applications that do not want TCP's sequencing and flow control, e.g. one-shot
  request-reply kinds of service.
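The contrast between the two can be illustrated with a toy stop-and-wait simulation in Python. This only illustrates the acknowledge-and-resend idea behind TCP's reliability (an unacknowledged frame is retransmitted); it is not the real TCP state machine, and the loss rate, retry limit, and random seed are arbitrary choices of ours.

```python
import random

def unreliable_send(frame, loss_rate=0.3, rng=None):
    """Deliver the frame unless the simulated channel drops it."""
    rng = rng or random
    return frame if rng.random() > loss_rate else None

def reliable_transfer(frames, max_tries=20):
    """Stop-and-wait: resend each frame until it is acknowledged."""
    rng = random.Random(42)              # fixed seed so the run is repeatable
    received = []
    for seq, frame in enumerate(frames):
        for _ in range(max_tries):
            delivered = unreliable_send((seq, frame), rng=rng)
            if delivered is not None:    # receiver acknowledges delivery
                received.append(delivered[1])
                break
    return received

data = ["a", "b", "c"]
# Despite simulated losses, every frame eventually arrives, in order.
assert reliable_transfer(data) == data
```

A UDP-style sender would simply call `unreliable_send` once per frame and accept that some frames may never arrive.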
Systems Network Architecture
Short for Systems Network Architecture, SNA was introduced in 1974. It is a communications
format developed by IBM for LANs (local area networks) to allow multiple systems to access
centralized data.
SNA consists of seven layers:
7 Transaction Services
6 Presentation services
5 Data flow control
4 Transmission control
3 Path control
2 Data link control
1 Physical control
1. Physical Control: The physical control layer of SNA corresponds to layer 1 of OSI. It specifies
the physical interface between nodes.
Two types of interface are specified, i.e. serial communication links and parallel links. An SNA
network uses high-speed parallel links between a mainframe and a front-end communication
processor.
2. Data link control: The data link control layer corresponds to OSI layer 2. This layer provides
for the reliable transfer of data across a physical link. The protocol specified for serial
communications links is SDLC. SDLC is basically a subset of HDLC.
3. Path Control: The path control layer creates logical channels between endpoints referred as
network addressable unit (NAUs). NAU is an application-level entity capable of being addressed
and of exchanging data with other entities. Thus the main functions of the path control layer are
routing and flow control. Path control is based on the concepts of transmission group, explicit
route and virtual route.
4. Transmission Control: The transmission control layer of SNA corresponds roughly to layer 4 of the
OSI model. The transmission layer is responsible for establishing, maintaining and terminating
SNA sessions. The transmission control can establish a session when a request is made from the
next higher layer.
5. Data Flow Control: The data flow control layer is end-user oriented and corresponds to OSI's
layer 5. This layer is responsible for providing session-related services that are visible to end-
user processes and terminals.
6. Presentation Services: The top two layers of SNA were until recently considered a single
layer, called the Function Management Data (FMD) services layer. This layer corresponds to OSI
layers 6 and 7.
7. Transaction Services: This layer provides network management services. These services are
directly used by the user and hence best fit OSI layer 7. It includes configuration services, i.e.
activating and deactivating links; network operator services, i.e. communication of data from
users and processes to the network operator; session services, i.e. support for the activation of
sessions on behalf of end users and applications; and maintenance and management services,
i.e. facilities for testing network facilities that enable fault isolation and identification.
WAN Architecture and Transmission Mechanism
One way to categorize networks is to divide them into local-area networks (LAN) and wide-area
networks (WAN). LANs typically are connected workstations, printers, and other devices within
a limited geographic area such as a building. All the devices in the LAN are under the common
administration of the owner of that LAN, such as a company or an educational institution. Most
LANs today are Ethernet LANs. WANs are networks that span a larger geographic area and
usually require the services of a common carrier. Examples of WAN technologies and protocols
include Frame Relay, ATM, and DSL.
A WAN is a data communications network that operates beyond the geographic scope of a
LAN. Figure shows the relative location of a LAN and WAN.
Fig. a. WAN Location
Fig. b. OSI and WAN Services
WANs differ from LANs in several ways. Whereas a LAN connects computers, peripherals, and
other devices in a single building or other small geographic area, a WAN allows the transmission
of data across greater geographic distances. In addition, an enterprise must subscribe to a WAN
service provider to use WAN carrier network services. LANs typically are owned by the
company or organization that uses them.
WANs use facilities provided by a service provider, or carrier, such as a telephone or cable
company, to connect the locations of an organization to each other, to locations of other
organizations, to external services, and to remote users. WANs provide network capabilities to
support a variety of mission-critical traffic such as voice, video, and data.
Here are the three major characteristics of WANs:
■ WANs generally connect devices that are separated by a broader geographic area than can be
served by a LAN.
■ WANs use the services of carriers, such as telephone companies, cable companies, satellite
systems, and network providers.
■ WANs use serial connections of various types to provide access to bandwidth over large
geographic areas.
As highlighted in Figure b, the physical layer (OSI Layer 1) protocols describe how to provide
electrical, mechanical, operational, and functional connections to the services of a
communications service provider.
The data link layer (OSI Layer 2) protocols define how data is encapsulated for transmission
toward a remote location and the mechanisms for transferring the resulting frames. A variety of
technologies are used, such as Frame Relay and Asynchronous Transfer Mode (ATM). Some of
these protocols use the same basic framing mechanism, High-Level Data Link Control (HDLC),
an ISO standard, or one of its subsets or variants.
WAN Physical Layer
One primary difference between a WAN and a LAN is that for a company or organization to use
WAN carrier network services, it must subscribe to an outside WAN service provider. A WAN
uses data links provided by carrier services to access or connect the locations of an organization
to each other, to locations of other organizations, to external services, and to remote users. The
WAN access physical layer describes the physical connection between the company network and
the service provider network.
WAN Data Link Layer
In addition to physical layer devices, WANs require data link layer protocols to establish the link
across the communication line from the sending to the receiving device. Data link layer protocols
define how data is encapsulated for transmission to remote sites and the mechanisms for
transferring the resulting frames. A variety of technologies are used, such as ISDN, Frame Relay,
or ATM, as shown in Figure
Internetworking:
Internetworking is the process or technique of connecting different networks by using
intermediary devices such as routers or gateway devices. Alternatively, any interconnection
among or between public, private, commercial, industrial, or governmental computer networks
may also be defined as an internetwork, or "internetworking".
Internetworking ensures data communication among networks owned and operated by different
entities using a common data communication and Internet routing protocol. The Internet is
the largest pool of networks geographically located throughout the world, but these networks are
interconnected using the same protocol stack, TCP/IP. Internetworking is only possible when
all the connected networks use the same protocol stack or communication methodologies.
In modern practice, the interconnected computer networks or Internetworking use the Internet
Protocol. Two architectural models are commonly used to describe the protocols and methods
used in internetworking. The standard reference model for internetworking is Open Systems
Interconnection (OSI).
There are three variants of internetwork or Internetworking:
• Extranet
• Intranet
• Internet
Extranet: An extranet is a network of internetwork or Internetworking that is limited in scope to
a single organisation or entity, but which also has limited connections to the networks of one or
more other, usually but not necessarily trusted, organizations or entities. Technically, an extranet
may also be categorized as a MAN, WAN, or other type of network, although, by definition, an
extranet cannot consist of a single LAN; it must have at least one connection with an external
network.
Intranet: An intranet is a set of interconnected networks or Internetworking, using the Internet
Protocol and IP-based tools such as web browsers and FTP tools, that is under the control of a
single administrative entity. That administrative entity closes the intranet to the rest of the world
and allows only specific users. Most commonly, an intranet is the internal network of a company
or other enterprise. A large intranet will typically have its own web server to provide users with
browsable information.
Internet: A specific Internetworking, consisting of a worldwide interconnection of
governmental, academic, public, and private networks based upon the Advanced Research
Projects Agency Network (ARPANET) developed by ARPA of the U.S. Department of Defense.
It is also home to the World Wide Web (WWW) and is referred to as the 'Internet' with a capital
'I' to distinguish it from other generic internetworks. Participants in the Internet, or their service
providers, use IP addresses obtained from address registries that control assignments.
Principles of Internetworking
Following are the principles of internetworking:
1. Different addressing schemes: The networks may use different endpoint names and addresses
and directory maintenance schemes. Hence, to maintain uniformity, some form of global
network addressing must be used.
2. Different packet sizes: Packets transferred from one network may have to be broken up into
smaller pieces for transmission on another network. This process is called segmentation or
fragmentation.
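The fragmentation principle can be sketched in Python. The `offset` and `more_fragments` fields are loosely modelled on the corresponding IP header fields, but the dictionary layout is ours, for illustration only.

```python
def fragment(payload: bytes, mtu: int):
    """Break a payload into fragments that fit the next network's maximum size."""
    frags = []
    for offset in range(0, len(payload), mtu):
        frags.append({
            "offset": offset,                               # position in original payload
            "more_fragments": offset + mtu < len(payload),  # False only on the last piece
            "data": payload[offset:offset + mtu],
        })
    return frags

def reassemble_fragments(frags):
    """Rebuild the payload once all fragments have arrived, in any order."""
    return b"".join(f["data"] for f in sorted(frags, key=lambda f: f["offset"]))

frags = fragment(b"0123456789", mtu=4)
assert [f["more_fragments"] for f in frags] == [True, True, False]
assert reassemble_fragments(frags) == b"0123456789"
```

In real IP, reassembly is performed only at the final destination, not at intermediate routers.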
3. Different network-access mechanisms: The network-access mechanism may be different for
stations on different networks.
4. Different timeouts: A connection-oriented transport service will wait for an acknowledgement
until the timeout expires, after which the block of data is retransmitted. Internetwork timing
procedures must allow for successful transmission of data so as to avoid unnecessary
retransmissions.
5. Error recovery: Intra-network procedures may provide anything from no error recovery up to
reliable end-to-end (within the network) service. The internetwork service should not depend
upon the nature of the individual network's error recovery capability.
6. Status reporting: Different networks report status and performance differently, yet it must be
possible for the internetworking facility to provide such information to users of the internetwork
service.
7. Routing techniques: Intra-network routing depends on fault detection and congestion control
techniques peculiar to each network. The internetworking facility must be able to coordinate
these to adaptively route data between stations on different networks.
8. User-access control: Each network will have its own user-access control technique that must
be invoked by the internetwork facility as needed. Further, a separate internetwork access control
technique may be required.
9. Connection-oriented vs. connectionless: Individual networks may provide connection-oriented
or connectionless service. It may be desirable for the internetwork service not to depend on the
nature of the connection service of the individual networks.
The Bridge
A bridge is a device that connects two LANs (local area networks), or two segments of the same
LAN. Unlike a router, bridges are protocol independent: they forward packets without analyzing
and re-routing messages.
Bridge devices work at the data link layer of the Open System Interconnect (OSI) model,
connecting two different networks together and providing communication between them.
Bridges are similar to repeaters and hubs in that they broadcast data to every node. However,
bridges maintain a media access control (MAC) address table as they discover new
segments, so subsequent transmissions are sent only to the desired recipient.
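That learning behaviour can be sketched as follows. The port numbers and MAC addresses here are hypothetical, and real bridges also age out table entries, which this sketch omits.

```python
class LearningBridge:
    """Learns which port each MAC address lives on; floods unknown destinations."""

    def __init__(self, ports):
        self.ports = ports
        self.table = {}                      # MAC address -> port

    def receive(self, frame, in_port):
        src, dst = frame["src"], frame["dst"]
        self.table[src] = in_port            # learn the sender's location
        if dst in self.table:
            return [self.table[dst]]         # forward to the known port only
        return [p for p in self.ports if p != in_port]  # flood everywhere else

bridge = LearningBridge(ports=[1, 2, 3])
# First frame from A: destination B is unknown, so the frame is flooded.
assert bridge.receive({"src": "A", "dst": "B"}, in_port=1) == [2, 3]
# B replies: A's port was learned from the first frame, so no flooding is needed.
assert bridge.receive({"src": "B", "dst": "A"}, in_port=2) == [1]
```

This is why, after an initial exchange, traffic between two stations stops disturbing the other segments.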
Bridge Operation
1. Reliability: If all data processing devices in an organization are connected to a single network,
then any fault on the network may disable communication for all devices. Hence bridges can be
used so that the network can be partitioned into self-contained units.
2. Performance: Performance on a LAN or MAN decreases with an increase in the number of
devices or with the length of the medium. A number of smaller LANs can improve the
performance if the devices are grouped, so that intra-network traffic exceeds inter-network
traffic.
3. Security: The use of multiple LANs can improve the security of communications. For security
purposes we can keep different types of traffic that have different security needs on separate
media. Also, it is advisable to have different levels of security for different types of users, which
enables monitored and controlled communication.
4. Geography: To support devices clustered in two geographically distant locations, two
separate LANs can be used. Even in the case of two buildings separated by a highway, a
microwave bridge link can be used instead of stringing coaxial cable between the two buildings.
Functions of Bridge
1. Reads all the frames transmitted on A, and accepts those which are addressed to stations on B.
2. Using the medium access control protocol for B, retransmits the frames onto B.
3. Follows the same procedure for B-to-A traffic.
Routing With Bridge
Consider the configuration of the following figure.

Fig. Configuration of bridges and LANs
Suppose that station 1 transmits a frame on LAN A intended for station 5. The frame will be read
by both bridge 101 and bridge 102. For each bridge, the addressed station is not on a LAN to
which the bridge is attached. Therefore, each bridge must make a decision of whether or not to
retransmit the frame on its other LAN, in order to move it closer to its intended destination.
In this case, bridge 101 should repeat the frame on LAN B, whereas bridge 102 should refrain from
retransmitting the frame. Once the frame has been transmitted on LAN B, it will be picked up by
both bridges 103 and 104.
Again, each must decide whether or not to forward the frame. In this case, bridge 104 should
retransmit the frame on LAN E, where it will be received by the destination, station5. Thus, we
see that in the general case, the bridge must be equipped with a routing capability. When a bridge
receives a frame, it must decide whether or not to forward it.
If the bridge is attached to two or more networks, then it has to decide whether or not to forward
the frame and on which LAN the frame should be transmitted. The routing decision may not
always be an easy task.
Connectionless Internetworking
The Internet Protocol (IP) and the Connectionless Network Protocol (CLNP) are very similar. They
differ in the format that is used and in some functional aspects.
IP provides a connectionless or datagram service between end systems. There are a
number of advantages to this approach:
• A connectionless internet facility is flexible. It can deal with a variety of networks, some
  of which are themselves connectionless.
• A connectionless internet service can be highly robust.
• A connectionless internet service is best for connectionless transport protocols.
Fig. Internet protocol operation
The figure shows IP in a configuration in which two LANs are interconnected by an X.25
packet-switched WAN. It shows the operation of the internet protocol for data exchange between
host A on one LAN (subnetwork 1) and host B on another departmental LAN (subnetwork 2)
through the WAN.
The IP at A receives blocks of data to be sent to B from the higher layers of software in A. IP
attaches a header specifying the global internet address of B. This address is logically in two
parts.
1. Network identifier
2. End system identifier
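Python's standard `ipaddress` module can make this two-part structure concrete. The address and prefix length below are arbitrary examples, not values taken from the figure.

```python
import ipaddress

iface = ipaddress.ip_interface("192.168.10.37/24")
network_id = iface.network.network_address   # identifies the subnetwork
host_bits = int(iface.ip) - int(network_id)  # identifies the end system within it

assert str(network_id) == "192.168.10.0"
assert host_bits == 37
```

Routers only need the network-identifier part to move the datagram toward subnetwork 2; the end-system part matters only for final delivery.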
The result is called an Internet Protocol data unit, or a datagram. The datagram is encapsulated
with the LAN protocol and sent to the router, which strips off the LAN fields to read the IP header.
The router then encapsulates the datagram with the X.25 protocol fields and transmits it across the
WAN to another router. This router strips off the X.25 fields and recovers the datagram, which it
then wraps in LAN fields appropriate to LAN 2 and sends it to B.
Consider the sequence of steps involved in sending a datagram between two stations on different
networks. The process starts in the sending station, which wants to send an IP datagram to a
station on another network. The IP module constructs the datagram with a global network address
and recognizes that the destination is on another network. So the first step is to send the datagram
to a router.
To do this, the IP module appends to the IP datagram a header, appropriate to the network, that
contains the address of the router. Next the packet travels through the network to the router,
which receives it via the DCE-DTE protocol.
The router unpacks the packet to recover the original datagram. The router analyzes the IP
header to determine whether the datagram contains control information intended for the router or
data intended for a station. If it contains data for a station, it must make a routing decision. There
are four possibilities:
1. The destination station is attached directly to one of the networks to which the router is
attached; this is referred to as directly connected.

2. The destination station is on a network that has a router that directly connects to this router.

3. To reach the destination station, more than one additional router must be traversed; this is
known as the multiple-hop case.

4. The router does not know the destination address.
In case 4, the router returns an error message to the source that generated the datagram. For cases
1 to 3, the router must select the appropriate route for the data and insert the data into the
appropriate network with the proper address. Before sending the data, the router may need to
segment the datagram to accommodate a smaller maximum packet-size limitation on the outgoing
network.
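The four possibilities can be sketched as a table lookup in Python. The network names and next hops are invented for illustration, and the multiple-hop case (3) is folded into a default route here, which is one common way real routers handle it.

```python
def route(dst_network, directly_connected, next_hops, default=None):
    """Classify a destination according to the four cases above."""
    if dst_network in directly_connected:
        return ("direct", dst_network)           # case 1: deliver directly
    if dst_network in next_hops:
        return ("next-hop", next_hops[dst_network])  # case 2: one router away
    if default is not None:
        return ("multi-hop", default)            # case 3: hand off toward another router
    return ("error", None)                       # case 4: destination unknown

connected = {"LAN-1", "LAN-2"}
hops = {"LAN-3": "router-B"}
assert route("LAN-1", connected, hops) == ("direct", "LAN-1")
assert route("LAN-3", connected, hops) == ("next-hop", "router-B")
assert route("LAN-9", connected, hops) == ("error", None)
```

With a default route supplied, the last lookup would instead return a multi-hop hand-off rather than an error.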
Router-Level Protocol
The routers in an internet are responsible for receiving and forwarding the packets through the
interconnected set of subnetworks. Each router makes routing decisions based on knowledge of
the topology and conditions of the internet.

In a simple internet, a fixed routing scheme is advisable. In more complex internets, dynamic
cooperation among the routers is needed. The router must avoid the portions of the network that
have already failed and also the portions of the network that are congested. In order to make such
dynamic routing decisions, routers exchange routing information using a special routing
protocol.
The routers consider the following concepts while calculating the routing function:
• Routing information: Information about the topology and delays of the internet.
• Routing algorithm: The algorithm used to make a routing decision for a particular
  datagram, based on current routing information.
There is another way to partition the problem that is useful from two points of view:
• Routing between end systems (ESs) and routers
• Routing between routers
The reason for the partition is that there are basic differences between what an ES must know to
route a packet and what a router must know. In the case of an ES, it must first know whether
the destination ES is on the same subnetwork. If the answer is yes, then the data can be
delivered directly using the subnetwork access protocol; otherwise the ES must forward the data
to a router attached to the same subnetwork, and if there is more than one such router, then one is
simply chosen.
Connection-Oriented Internetworking
Connection-oriented service is modeled after the telephone system. To talk to someone, the user
picks up the phone, dials the number, talks, and then hangs up. Similarly, to use a connection-oriented
network service, the service user first establishes a connection, uses the connection, and then
releases the connection. The essential aspect of a connection is that it acts like a tube: the sender
pushes objects (bits) in at one end, and the receiver takes them out at the other end.
The following figure shows the protocol activity for transferring data from a DTE attached to a
packet-switching network, through a router, to a DTE attached to a LAN which has been enhanced
with the use of X.25. The lower portion of the figure shows the format of the data unit being
processed at various points during the transfer.
Data is presented by the transport layer to the network layer (t1). This is in the form of a data
unit consisting of the transport protocol header and data from the transport user. This block
of data is received by the packet-level protocol of X.25, which appends a packet header (t2) to
form an X.25 packet. The header will include the virtual circuit that connects host A to the
router.
The packet is then passed down to the data link layer protocol, which appends a link header and
trailer (t3) and transmits the resulting frame to the DCE (t4). At the DCE, the link header and
trailer are removed and the result is passed up to the packet level (t5). The packet is transferred
through the network, via DCE Y, to the router, which appears as another DTE to the network.
UNIT IV: Cloud Computing Basics

Cloud Computing Overview
Cloud Computing provides us a means of accessing applications as utilities over the Internet. It
allows us to create, configure, and customize applications online. The term Cloud refers to a
Network or the Internet. In other words, we can say that Cloud is something which is present at a
remote location. Cloud can provide services over public and private networks, i.e., WAN, LAN
or VPN.
Applications such as e-mail, web conferencing, and customer relationship management (CRM)
execute on the cloud.
Cloud Computing refers to manipulating, configuring, and accessing hardware and
software resources remotely. It offers online data storage, infrastructure, and applications.
Cloud computing offers platform independence, as the software is not required to be installed
locally on the PC. Hence, Cloud Computing is making our business applications mobile and
collaborative.
The idea of cloud computing is based on a very fundamental principle: reusability of IT
capabilities. Promoters claim that cloud computing allows companies to avoid upfront
infrastructure costs, and focus on projects that differentiate their business instead of on
infrastructure.
Cloud computing provides several tools and technologies by using which we can build data-
intensive parallel applications at much lower cost than traditional parallel computing techniques.

Benefits
Cloud Computing has numerous advantages. Some of them are listed below:
• One can access applications as utilities, over the Internet.
• One can manipulate and configure applications online at any time.
• It does not require any software to be installed locally to access or manipulate cloud
  applications.
• Cloud Computing offers online development and deployment tools, and a programming
  runtime environment, through the PaaS model.
• Cloud resources are available over the network in a manner that provides platform-
  independent access to any type of client.
• Cloud Computing offers on-demand self-service. The resources can be used without
  interaction with the cloud service provider.
• Cloud Computing is highly cost effective because it operates at high efficiency with
  optimum utilization. It just requires an Internet connection.
• Cloud Computing offers load balancing that makes it more reliable.
History
The origin of the term “Cloud computing” is not clear. The expression cloud is commonly used
in science to describe a large collection of objects that visually appear from a distance as a cloud
and describes any set of things which have not been described in detail.
Cloud computing has evolved through a number of phases which include Grid computing,
Utility computing, Application Service Provision (ASP), and Software as a Service (SaaS), but its
evolution started in the 1950s with mainframe computing.

The 1950s
Multiple users were capable of accessing a central computer through dumb terminals, whose
only function was to provide access to the mainframe. Because of the costs to buy and maintain
mainframe computers, it was not practical for an organization to buy and maintain one for every
employee.
The typical user doesn't need the large storage capacity and processing power which a
mainframe provides. Hence, providing shared access to a single mainframe resource was the
solution for this sophisticated technology.
The 1990s
In the 1990s, telecommunications companies, which had basically offered dedicated point-to-point
data circuits, began offering virtual private network (VPN) services with comparable quality of
service, but at a lower cost. The network bandwidth could be used more effectively by switching
the traffic.
Cloud computing extends its boundaries to cover all servers as well as the network infrastructure.
As computers became more prevalent, scientists and technologists explored ways to make large-
scale computing power available to more users through time-sharing.
Since 2000
In early 2008, Eucalyptus became the first open-source AWS API-compatible platform for
deploying private clouds.
By mid-2008, cloud computing was used “to shape the relationship among consumers of IT
services, those who use IT services and those who sell them” and organizations switched from
company-owned hardware and software assets to per-use services-based models.
In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software
initiative known as OpenStack. The OpenStack project is intended to help organizations offer
cloud-computing services running on standard hardware.
On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet.
Among the various components of the Smarter Computing foundation, cloud computing is also a
major component.

On June 7, 2012, Oracle announced the Oracle Cloud. This cloud offers accesses to an
integrated set of IT solutions including the Applications (Saas), Platform(PaaS) and
Infrastructure (IaaS) layers.

Characteristics/Capabilities of Clouds
Characteristics/capabilities associated with clouds that are considered essential and relevant and
distinguished as non-functional, economic and technological capabilities.

Non-Functional
1. Elasticity: It is a core feature of cloud systems and reflects the capability of the underlying
infrastructure to adapt to changing, non-functional requirements, for example the amount and size
of data supported by an application, the number of concurrent users, etc.

2. Reliability: It is essential for all cloud systems in order to support today's data-centric
applications. Reliability denotes the capability to ensure constant operation of the system without
disruption, i.e. no loss of data.

3. Quality of Service (QoS) support: It is a relevant capability that is essential in many use cases
where specific requirements have to be met by the outsourced services and resources. In business
cases, basic QoS metrics like response time and throughput must be guaranteed, so as to
ensure that the quality expectations of the cloud user are met.

4. Agility and adaptability: These are essential features of cloud systems, related to the elastic
capabilities. They include timely reaction to changes in the number of requests and size of
resources, but also adaptation to changes in environmental conditions, e.g. requiring different
types of resources, different quality, or different routes.

5. Availability of services and data: It is an essential capability of cloud systems. It lies in the
ability to introduce redundancy for services and data so that the failures can be masked
transparently.
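Elasticity (point 1 above) is typically realized by an autoscaler that adjusts the number of service replicas to the observed demand. The sketch below is a minimal illustration; the capacity figure and replica bounds are invented values, not taken from any real cloud provider:

```python
def desired_replicas(total_requests: int, capacity_per_replica: int,
                     min_r: int = 1, max_r: int = 10) -> int:
    """Return how many replicas are needed for the current demand.

    The bounds min_r/max_r are illustrative defaults; a real autoscaler
    would read its limits and metrics from the provider's API.
    """
    # Ceiling division: enough replicas to cover every request.
    needed = -(-total_requests // capacity_per_replica)
    return max(min_r, min(max_r, needed))

# Heavy load -> scale out; light load -> scale back in; demand beyond
# the cap is clamped to the maximum the infrastructure allows.
print(desired_replicas(950, 200))    # -> 5
print(desired_replicas(50, 200))     # -> 1
print(desired_replicas(10000, 200))  # -> 10 (capped at max_r)
```

The key property this models is that capacity follows demand in both directions, which is what distinguishes elasticity from plain (one-way) scalability.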

Economical
1. Cost reduction: It is one of the factors used to build up a cloud system that can adapt to
consumer behaviour and reduce the cost of infrastructure maintenance. Scalability and pay-per-use
are essential aspects of this issue, though setting up a cloud system does entail additional costs.

2. Pay per use: The capability to bill according to the consumption of resources on the
cloud is a relevant feature of cloud systems. Pay-per-use relates to the quality of services
supported, where certain requirements must be met by the system.

3. Improved time to market: It is essential for small to medium enterprises that want to sell their
services quickly and easily with little delays caused by acquiring and setting up the
infrastructure. Cloud can support larger enterprises by providing infrastructures dedicated to
specific use and thus reduce time to market.

4. Return on Investment (ROI): It is also essential for all investors and cannot always be
guaranteed; in fact, some cloud systems currently fail to achieve this aspect. Employing a cloud
system must ensure that the cost and effort put into it are balanced by its benefits to be
commercially viable; this may involve direct (e.g. more customers) and indirect (e.g. benefits from
advertisements) ROI.

5. Going Green: It is relevant not only to reduce additional costs of energy consumption, but also
to reduce the carbon footprint.
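The pay-per-use model (point 2 above) simply means the bill is proportional to metered consumption. A minimal sketch; the unit prices are hypothetical, and real providers meter many more dimensions:

```python
# Hypothetical unit prices per metered dimension (illustrative only).
RATES = {"cpu_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.09}

def monthly_bill(usage: dict) -> float:
    """Charge only for what was consumed: the pay-per-use principle."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

print(monthly_bill({"cpu_hours": 100, "gb_stored": 50, "gb_transferred": 10}))  # -> 6.9
print(monthly_bill({}))  # an idle month costs nothing -> 0
```

Contrast this with owning hardware, where the cost is fixed regardless of whether the resources are used at all.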
Technological

1. Virtualization: It is an essential technological characteristic of cloud computing, which hides
the technological complexity from the user and enables flexibility. Virtualization supports the
following features:
 Ease of use: By hiding the complexity of the infrastructure virtualization makes it easier
for the user to develop new applications.
 Infrastructure independency: Virtualization used in cloud computing allows for higher
interoperability by making the code platform independent.
 Flexibility and adaptability: By exposing a virtual execution environment, the underlying
infrastructure can be changed flexibly according to different conditions and
requirements.
 Location independence: Services can be accessed independent of the physical location of
the user and the resources.

2. Multi-tenancy: It is a highly essential issue in cloud systems, where the location of code or data
is unknown and the same resource may be assigned to multiple users. This affects infrastructure
resources as well as the data, applications and services that are hosted.

3. Security, Privacy and Compliance: It is obviously essential in all systems dealing with
sensitive data and code.

4. Data Management: It is for storage cloud where data is flexibly distributed across multiple
resources. Data consistency needs to be maintained over a wide distribution of replicated data
sources. At the same time, the system always needs to be aware of the data location.

5. APIs or Programming Enhancements: These are essential to exploit the cloud features.
Common programming models require the developer to take care of scalability and automatic
management himself, whereas a cloud environment provides features through which the user can
leave such management to the system.
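Multi-tenancy (point 2 above) is often implemented by namespacing one shared resource per tenant, so co-located data stays isolated. A toy sketch; the store class and tenant names are invented for illustration:

```python
class SharedStore:
    """One physical data pool shared by many tenants."""

    def __init__(self):
        self._data = {}  # a single dict stands in for shared storage

    def put(self, tenant: str, key: str, value):
        # Every key is namespaced by the owning tenant.
        self._data[(tenant, key)] = value

    def get(self, tenant: str, key: str):
        # A tenant can only ever read inside its own namespace.
        return self._data.get((tenant, key))

store = SharedStore()
store.put("acme", "config", "blue")
store.put("globex", "config", "red")
print(store.get("acme", "config"))    # -> blue
print(store.get("globex", "config"))  # -> red
print(store.get("acme", "secret"))    # -> None (no cross-tenant leakage)
```

Real clouds enforce the same separation with authentication and hypervisor-level isolation rather than a key prefix, but the principle is identical: one resource, many logically isolated tenants.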

Cloud Components
A cloud computing solution is made up of several elements, such as clients, the datacenter and
distributed servers, as shown in the following figure. Each element has a purpose and plays a
specific role in delivering a functional cloud-based application.

Fig. Components of cloud computing
1. Clients: Clients are the computers that sit on your desk, as well as laptops, tablet computers,
mobile phones, and PDAs (personal digital assistants), all big drivers for cloud computing because
of their mobility. Clients are the devices that the end users interact with to manage their
information on the cloud.

Clients generally fall into three categories:

a. Mobile devices
b. Thin clients
c. Thick clients

a. Mobile devices: Mobile devices include PDAs or smartphones, like a BlackBerry, a Windows
Mobile smartphone, or an iPhone.

b. Thin Clients: These are computers that do not have internal hard drives; they let the
server do all the work and then display the information.

c. Thick Clients: This type of client is a regular computer, using a web browser like Firefox or
Internet Explorer to connect to the cloud.

Thin clients have some benefits, such as:

 Lower hardware costs: Thin clients are cheaper than thick clients because they do not
contain as much hardware. They also last longer before they need to be upgraded or become
obsolete.
 Lower IT costs: Thin clients are managed at the server and there are fewer points of failure.
 Security: Since the processing takes place on the server and there is no hard drive, there's
less chance of malware invading the device. Also, since thin clients don't work without a
server, there's less chance of them being physically stolen.
 Data security: Since data is stored on the server, there's less chance for data to be lost if the
client computer crashes or is stolen.
 Less power consumption: Thin clients consume less power than thick clients. This means
you will pay less to power them.

2. Datacenter: The datacenter is the collection of servers where the application to which you
subscribe is housed. It could be a large room in the basement of your building or a room full of
servers on the other side of the world that you access via the Internet. A growing trend in the IT
world is virtualizing servers. That is, software can be installed allowing multiple instances of
virtual servers to be used. In this way, you can have half a dozen virtual servers running on one
physical server.

3. Distributed Servers: The servers don't all have to be housed in the same location. Often,
servers are in geographically disparate locations, but to you, the cloud subscriber, these servers
act as if they're humming away right next to each other.

This gives the service provider more flexibility in options and security. For instance, Amazon
has its cloud solution on servers all over the world. If something were to happen at one site,
causing a failure, the service could still be accessed through another site.

First stakeholders of the cloud market

There are many vendors who offer cloud services. What they offer varies from vendor to vendor,
and their pricing models differ. Some of the companies that provide cloud computing services are:
1. Amazon 2. Google 3. Microsoft

1. Amazon: Amazon was one of the first companies to offer cloud services to the public.
Amazon offers a number of cloud services, including:

 Elastic Compute Cloud (EC2) offers virtual machines and extra CPU cycles for your
organization.
 Simple Storage Service (S3) allows you to store items up to 5GB in size in Amazon's
virtual storage service.
 Simple Queue Service (SQS) allows your machines to talk to each other using this
message-passing API.
 SimpleDB, a web service for running queries on structured data in real time.
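The message-passing model behind SQS can be illustrated with Python's standard `queue` module. This is only a stand-in for the real SQS API (which is accessed over the network through AWS libraries); no actual AWS calls are involved:

```python
import queue

# A stand-in for an SQS queue: producers enqueue, consumers dequeue,
# and the two sides never talk to each other directly.
q = queue.Queue()

def producer(messages):
    for m in messages:
        q.put(m)                  # analogous to SQS SendMessage

def consumer():
    received = []
    while not q.empty():
        received.append(q.get())  # analogous to SQS ReceiveMessage
    return received

producer(["job-1", "job-2", "job-3"])
print(consumer())  # -> ['job-1', 'job-2', 'job-3']
```

Decoupling sender and receiver through a queue is what lets cloud machines cooperate without knowing each other's addresses or uptime.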

2. Google: In contrast to Amazon's offerings is Google's App Engine. On Amazon you get root
privileges, but on App Engine you can't even write a file in your own directory. Google removed the
file-write feature from Python as a security measure, and to store data you must use Google's
database.
Google offers online documents and spreadsheets, and encourages developers to build features
for those and other online software, using its Google App Engine. Google also offers handy
debugging features.

3. Microsoft: Microsoft's cloud computing solution is called Windows Azure, an operating
system that allows organizations to run Windows applications and store files and data using
Microsoft's datacenters. It also offers the Azure Services Platform, a set of services that allow
developers to establish user identities, manage workflows, synchronize data, and perform other
functions as they build software programs on Microsoft's online computing platform. Following
are the components of the Azure Services Platform:

 Windows Azure provides service hosting and management and low-level scalable storage,
computation, and networking
 Microsoft SQL Services provides database services and reporting.
 Microsoft .NET Services provides service-based implementations of .NET Framework
concepts such as workflow.
 Live Services are used to share, store, and synchronize documents, photos, and files across PCs,
phones, PC applications and web sites.
 Microsoft SharePoint Services and Microsoft Dynamics CRM Services are used for business
content, collaboration, and solution development in the cloud.

Virtualization

Fig. Virtualization

Virtualization is one of the core technologies used by the cloud computing. Virtualization has
been around for more than 40 years, but its application has always been limited. Virtualization is
a technology which creates different computing environments. These environments are termed as
virtual machines. Virtualization is a key technology used in datacenters to optimize resources. As
IT needs continue to evolve, virtualization can no longer be regarded as an isolated technology to
solve a single problem.

The most common example of virtualization is hardware virtualization. Hardware virtualization
allows the existence of different software stacks on top of the same hardware.

Virtualization technologies are also used for replicating runtime environments for programs. In
this case the programs are not executed by the OS itself but by the virtual machine.

Cloud Computing Architecture


Cloud computing architecture comprises many cloud components, which are loosely coupled.
We can broadly divide the cloud architecture into two parts:
 Front End
 Back End
Each end is connected to the other through a network, usually the Internet. The following diagram
shows a graphical view of the cloud computing architecture:

Front End
The front end is used by the client. It contains the client-side interfaces and applications that are
required to access the cloud computing platforms. The front end includes web browsers (such as
Chrome, Firefox, Internet Explorer, etc.), thin & fat clients, tablets, and mobile devices.
Back End
The back end is used by the service provider. It manages all the resources that are required to
provide cloud computing services. It includes a huge amount of data storage, security
mechanism, virtual machines, deploying models, servers, traffic control mechanisms, etc.
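The front end / back end split can be sketched as a pair of functions, where an ordinary function call stands in for the Internet hop between the two sides. All names and data here are invented for illustration:

```python
# Back-end state: storage managed entirely by the service provider.
BACKEND_STORAGE = {"report.txt": "quarterly numbers"}

def backend(request: dict) -> dict:
    """Back end: resolves a request against the managed resources."""
    doc = BACKEND_STORAGE.get(request["path"])
    return {"status": 200 if doc else 404, "body": doc}

def front_end(path: str) -> str:
    """Front end (e.g. a browser): builds the request, renders the reply."""
    reply = backend({"path": path})  # in reality this travels over the Internet
    return reply["body"] if reply["status"] == 200 else "Not Found"

print(front_end("report.txt"))   # -> quarterly numbers
print(front_end("missing.txt"))  # -> Not Found
```

The point of the split is that the front end holds no data of its own: everything it displays is fetched from resources the back end manages.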

Components of Cloud Computing Architecture


There are the following components of cloud computing architecture -

1. Client Infrastructure
Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to
interact with the cloud.

2. Application
The application may be any software or platform that a client wants to access.

3. Service
A cloud service manages which type of service you access, according to the client's
requirement.
Cloud computing offers the following three type of services:
i. Software as a Service (SaaS) – It is also known as cloud application services. Mostly, SaaS
applications run directly through the web browser, which means we do not need to download and
install these applications. Some important examples of SaaS are given below:
Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.
ii. Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite
similar to SaaS, but the difference is that PaaS provides a platform for software creation, whereas
with SaaS we can access software over the internet without the need for any platform.
Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.
iii. Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. It is
responsible for managing applications data, middleware, and runtime environments.
Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod.

4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.

5. Storage
Storage is one of the most important components of cloud computing. It provides a huge amount
of storage capacity in the cloud to store and manage data.

6. Infrastructure
It provides services on the host level, application level, and network level. Cloud infrastructure
includes hardware and software components such as servers, storage, network devices,
virtualization software, and other storage resources that are needed to support the cloud
computing model.

7. Management
Management is used to manage components such as application, service, runtime cloud, storage,
infrastructure, and other security issues in the backend and establish coordination between them.

8. Security
Security is an in-built back end component of cloud computing. It implements a security
mechanism in the back end.

9. Internet
The Internet is medium through which front end and back end can interact and communicate
with each other.

Cloud Computing Services:


SaaS (Software as a Service)
 In this model, a complete application is offered to the customer as a service on demand.
 A single instance of the service runs on the cloud and multiple end users are serviced.
 On the customer's side, there is no need for upfront investment in servers or software
licenses, while for the provider, the costs are lowered, since only a single application needs
to be hosted and maintained.
 SaaS is offered by many companies such as Google, Salesforce, Microsoft, Zoho, etc.
PaaS (Platform as a Service)
 A layer of software or a development environment is encapsulated and offered as a
service to the customer on demand, upon which other higher levels of service can be built.
 The customer has the freedom to build his own application, which can then run on the
provider's infrastructure.
 To meet manageability and scalability requirements of the application, PaaS providers offer
a predefined combination of OS and application servers, such as the LAMP platform (Linux,
Apache, MySQL and PHP), etc.
 Google App Engine, Force.com, etc. are some of the popular PaaS examples.
IaaS (Infrastructure as a Service)
IaaS provides basic storage and computing capabilities over the network. Servers, storage
systems, networking equipment, datacenter space, etc. are pooled and made available to handle
the workload. The customer can deploy his own software on the infrastructure. Some common
examples of companies which provide IaaS are Amazon, GoGrid, 3Tera, etc.
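The difference between the three service models comes down to who manages which layer of the stack. The layer list below is a simplified summary of the SaaS/PaaS/IaaS descriptions above, not an official taxonomy:

```python
# Stack layers from top to bottom (simplified, for illustration).
LAYERS = ["application", "runtime", "os", "virtualization", "hardware"]

# Which layers the provider manages under each service model.
MANAGED_BY_PROVIDER = {
    "SaaS": set(LAYERS),                                   # everything
    "PaaS": {"runtime", "os", "virtualization", "hardware"},
    "IaaS": {"virtualization", "hardware"},
}

def customer_manages(model: str) -> list:
    """Whatever the provider does not manage falls to the customer."""
    return [layer for layer in LAYERS if layer not in MANAGED_BY_PROVIDER[model]]

print(customer_manages("SaaS"))  # -> []
print(customer_manages("PaaS"))  # -> ['application']
print(customer_manages("IaaS"))  # -> ['application', 'runtime', 'os']
```

Read top to bottom: moving from SaaS to IaaS, the customer takes over progressively more of the stack, in exchange for progressively more control.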

Cloud Computing Deployment Models –


Enterprises can choose to deploy applications on a public, private or hybrid cloud. Cloud
integrators can play a vital part in determining the right cloud path for an organization.

Public Cloud
1) Public clouds are owned and operated by third parties.
2) The public cloud gives each individual client an attractive low-cost, "pay-as-you-go" model.
3) All the customers share the same infrastructure pool with limited configuration, security
protection, and availability variances. These are managed and supported by the cloud
provider.
4) The public cloud (also known as the external cloud) is the traditional model where services
are provided by a third party via the Internet and are visible to everybody.
5) One of the advantages of a public cloud is that it is larger in size and hence provides
the ability to scale seamlessly on demand.

Private Cloud
1) Private clouds are built exclusively for a single enterprise.
2) They aim to address concerns about data security and offer greater control, which is
lacking in a public cloud.
3) There are two variations of a private cloud:

A) On-premise Private Cloud:

On-premise private clouds, also known as internal clouds, are hosted within one's own
datacenter. This model provides a more standardized process and protection, but is limited in
size and scalability. The IT department would also need to bear the capital and operational costs
for the physical resources. This is best suited for applications which require complete control and
configurability of the infrastructure and security.

B) Externally hosted Private Cloud:

This type of private cloud is hosted externally with a cloud provider, where the provider
facilitates an exclusive cloud environment with a full guarantee of privacy. This is the best option
for enterprises that don't want to use a public cloud.

Hybrid Cloud and others

1) Hybrid clouds combine both public and private cloud models.
2) With a hybrid cloud, service providers can utilize third-party cloud providers in a full or
partial manner, which increases the flexibility of computing.
3) The hybrid cloud environment is capable of providing on-demand, externally provisioned
scale.
4) The ability to augment a private cloud with the resources of a public cloud can be used to
manage any unexpected increase in workload.

Community Cloud: It allows systems and services to be accessible by a group of organizations.
It shares the infrastructure between several organizations from a specific community. It may be
managed internally by the organizations or by a third party. The community cloud model is
shown in the diagram below.
Benefits
There are many benefits of deploying cloud as community cloud model.
1. Cost Effective: A community cloud offers the same advantages as a private cloud, at lower
cost.
2. Sharing Among Organizations: Community cloud provides an infrastructure to share cloud
resources and capabilities among several organizations.
3. Security: The community cloud is comparatively more secure than the public cloud but less
secured than the private cloud.
Issues
 Since all data is located at one place, one must be careful in storing data in community
cloud because it might be accessible to others.
 It is also challenging to allocate responsibilities of governance, security and cost among
organizations.

Cloud Benefits and Limitations


Advantages:
 Accessibility: Access your data anywhere, anytime. Productivity and efficiency of an
enterprise are increased by ensuring that an application is always accessible, at any time, from
anywhere.
 No Hardware Required: Since everything is hosted in the cloud, a physical storage center is
not needed. However, a backup of important documents and sensitive data is still mandatory.
 Cost per Head: Overhead technology costs are kept to a minimum with cloud hosting services,
enabling businesses to use the extra time and resources to improve the company
infrastructure.
 Flexibility for Growth: The cloud is easily scalable, so companies can add or subtract resources
based on their needs. As companies grow, their systems will grow with them.
 Efficient Recovery: Cloud computing provides faster and more accurate retrieval of
applications and data.
 Increased storage: Applications grow when we add storage, RAM and CPU capacity as
needed. With the massive infrastructure offered by cloud providers today, storage and
maintenance of large volumes of data is possible. Sudden workload spikes are also managed
effectively and efficiently.
Disadvantages:
 No longer in control: When we use the cloud, we are handing over our data and
information to someone on the cloud. Hence there are chances of the data being
misused.
 No Redundancy: A cloud server is not redundant, nor is it backed up. Whatever data is on the
cloud, no backup is maintained. It is therefore recommended not to rely 100% on the
cloud and to maintain the redundant data yourself.
 Compatibility: Not every existing tool, software and computer is compatible with the web
based service, platform or infrastructure.
 Unpredicted Costs: The cloud can reduce staff and hardware costs, but the price could end
up being more than you bargained for.

Security concerns & benefits

Security concerns
1. Data Loss: Data loss is the most common security risk of cloud computing. It is also
known as data leakage. Data loss is the process in which data is deleted, corrupted, or made
unreadable by a user, software, or application. In a cloud computing environment, data loss
occurs when our sensitive data is in somebody else's hands, when one or more data elements
cannot be utilized by the data owner, when the hard disk is not working properly, or when
software is not updated.

2. Hacked Interfaces and Insecure APIs: As we all know, cloud computing depends completely
on the Internet, so it is compulsory to protect the interfaces and APIs (Application Programming
Interfaces) that are used by external users. APIs are the easiest way to communicate with
most cloud services. In cloud computing, a few services are available in the public domain.
These services can be accessed by third parties, so there is a chance that these services may be
easily harmed and hacked by hackers.

3. Data Breach: A data breach is the process in which confidential data is viewed, accessed,
or stolen by a third party without any authorization; in effect, the organization's data is hacked
by the hackers.

4. Vendor lock-in: Vendor lock-in is one of the biggest security risks in cloud computing.
Organizations may face problems when transferring their services from one vendor to another.
As different vendors provide different platforms, this can cause difficulty in moving from one
cloud to another.

5. Increased complexity strains IT staff: Migrating, integrating, and operating the cloud
services is complex for the IT staff. IT staff must require the extra capability and skills to
manage, integrate, and maintain the data to the cloud.

6. Spectre & Meltdown: Spectre and Meltdown allow programs to view and steal data which is
currently being processed on a computer. They can run on personal computers, mobile devices,
and in the cloud. They can expose passwords and personal information such as images, emails,
and business documents held in the memory of other running programs.

7. Denial of Service (DoS) attacks: Denial of service (DoS) attacks occur when a system
receives more traffic than the server can buffer. Mostly, DoS attackers target the web servers of
large organizations such as banks, media companies, and government organizations. Recovering
from such an attack costs a great deal of time and money.
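A common first line of defence against the flooding behaviour described above is rate limiting. The sketch below is a toy fixed-window rate limiter; the limit of three requests per window is an arbitrary illustration, and real deployments combine this with network-level filtering:

```python
class RateLimiter:
    """Allow at most `limit` requests per client within one window."""

    def __init__(self, limit: int = 3):
        self.limit = limit
        self.counts = {}  # requests seen per client in the current window

    def allow(self, client: str) -> bool:
        self.counts[client] = self.counts.get(client, 0) + 1
        return self.counts[client] <= self.limit

limiter = RateLimiter(limit=3)
# A flooding client is cut off after the limit is reached...
print([limiter.allow("attacker") for _ in range(5)])  # -> [True, True, True, False, False]
# ...while ordinary clients are unaffected.
print(limiter.allow("normal"))  # -> True
```

A production limiter would also reset the counts when each time window expires and track clients by network address rather than a plain string.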

8. Account hijacking: Account hijacking is a serious security risk in cloud computing. It is the
process in which individual user's or organization's cloud account (bank account, e-mail account,
and social media account) is stolen by hackers. The hackers use the stolen account to perform
unauthorized activities.

Benefits
1. Reduced IT costs: Moving to cloud computing may reduce the cost of managing and
maintaining your IT systems. Rather than purchasing expensive systems and equipment for your
business, you can reduce your costs by using the resources of your cloud computing service
provider. You may be able to reduce your operating costs because:
 the cost of system upgrades, new hardware and software may be included in your
contract
 you no longer need to pay wages for expert staff
 your energy consumption costs may be reduced
 there are fewer time delays.

2. Scalability: Your business can scale up or scale down your operation and storage needs
quickly to suit your situation, allowing flexibility as your needs change. Rather than purchasing
and installing expensive upgrades yourself, your cloud computer service provider can handle this
for you. Using the cloud frees up your time so you can get on with running your business.

3. Business continuity: Protecting your data and systems is an important part of business
continuity planning. Whether you experience a natural disaster, power failure or other crisis,
having your data stored in the cloud ensures it is backed up and protected in a secure and safe
location. Being able to access your data again quickly allows you to conduct business as usual,
minimising any downtime and loss of productivity.

4. Collaboration efficiency: Collaboration in a cloud environment gives your business the


ability to communicate and share more easily outside of the traditional methods. If you are
working on a project across different locations, you could use cloud computing to give
employees, contractors and third parties access to the same files. You could also choose a cloud
computing model that makes it easy for you to share your records with your advisers (e.g. a
quick and secure way to share accounting records with your accountant or financial adviser).

5. Flexibility of work practices: Cloud computing allows employees to be more flexible in their
work practices. For example, you have the ability to access data from home, on holiday, or via
the commute to and from work (providing you have an internet connection). If you need access
to your data while you are off-site, you can connect to your virtual office, quickly and easily.
6. Access to automatic updates: Access to automatic updates for your IT requirements may be
included in your service fee. Depending on your cloud computing service provider, your system
will regularly be updated with the latest technology. This could include up-to-date versions of
software, as well as upgrades to servers and computer processing power.

Cloud Environment Roles

An all-cloud environment describes a company, organization or individual that uses a web-based
application for every task rather than installing software or storing data on a computer. All-cloud
environments are not common, but a move toward them is a long-term goal for cloud computing
enthusiasts and cloud capitalists.
Cloud providers use different types of service models, and some service models stand to benefit
more from standardization than others.

These service models (SaaS, PaaS and IaaS) are described above under Cloud Computing Services.

Cloud vs. Distributed Computing

Definition:
 Cloud computing defines a new class of computing that is based on network technology.
Cloud computing takes place over the internet. It comprises a collection of integrated and
networked hardware, software and internet infrastructure.
 Distributed computing comprises multiple software components that belong to multiple
computers, while the system works or runs as a single system. Cloud computing can be
seen as a form that originated from distributed computing and virtualization.

Goals:
 Cloud computing: reduced investments and proportional costs; increased scalability;
increased availability and reliability.
 Distributed computing: resource sharing; openness; transparency; scalability.

Types:
 Cloud computing: public clouds, private clouds, community clouds, hybrid clouds.
 Distributed computing: distributed computing systems, distributed information systems,
distributed pervasive systems.

Characteristics:
 Cloud computing: provides a shared pool of configurable computing resources; uses an
on-demand network model to provide access; the clouds are provisioned by the service
providers; provides broad network access.
 Distributed computing: a task is distributed amongst different machines for the
computation job at the same time; technologies such as remote procedure calls and remote
method invocation are used to construct distributed computations.

Disadvantages:
 Cloud computing: more elasticity means less control, especially in the case of public
clouds; restrictions on available services may be faced, as this depends upon the cloud
provider.
 Distributed computing: a higher level of failure of nodes than a dedicated parallel
machine; a few algorithms cannot cope with slow networks; the nature of the computing
job may present too much overhead.

Regulatory issues with cloud


Following are the issues in cloud computing
 Location (where is your data; what law governs?)
 Operational ( including service levels and security)
 Security (with respect to data storage and processing)
 Investigative/Litigation (e-discovery)

In determining strategic approach, suppliers and business customers should carefully consider
the following key data protection legal issues.

1. Liability: Cloud providers can be held liable for the illegal data they may be hosting. As an
escape route, providers bear no liability for services that "consist of" the storage of electronic
information, under the condition that the provider has no knowledge or awareness of its illegal
nature and removes or blocks the illegal data when it does gain knowledge or become aware of
its illegal nature.

2. Law: Laws or regulations typically specify who within an enterprise should be held
responsible and accountable for data accuracy and security.

3. Compliance: the intermediary should follow the below mentioned duties:
(a) The intermediary shall publish the rules and regulations, privacy policy and user agreement
for access to or usage of the intermediary's computer resources by any person.

(b) Such rules and regulations, terms and conditions or user agreement shall inform the users of
the computer resource not to host, display, upload, modify, publish, transmit, update or share any
prohibited information; if such hosting is reported, action is to be taken within 36 hours.

4. Data Portability: Data portability can be loosely described as the free flow of people's
personal information across the Internet, within their control. It has now become a standard term
in the internet industry in the context of cloud computing, open standards and privacy. Examples
could include being able to import all your social network connections, or the ability to reuse
your health records while visiting different doctors.
