Data Communication & Cloud Computing
(Elective II)
DATA COMMUNICATION WITH CLOUD COMPUTING
Transmission of Data: The successful transmission of data depends principally on three factors:
1. Quality of the signal being transmitted
2. Characteristics of the transmission media
3. Size of the data being transferred.
Analog and Digital Data Transmission
While transmitting data from source to destination, one must be concerned with the nature of the
data, the physical means used to send the data, and the processing needed to ensure that the
received data is meaningful.
In a computer system, data are generated and received in digital form. In the real world,
however, data often need to be represented as analog signals. Devices such as A/D and D/A
converters are used to convert digital data to analog form and vice versa.
Transmission Impairment
Signals travel through transmission media, which are not perfect. These imperfections cause
signal impairment, which means that the signal that is received is not the same as the signal that
was transmitted. For digital data, this difference can cause errors in the transmitted data.
For analog signals, these impairments introduce various random modifications that degrade the
signal quality.
For digital signals, bit errors are introduced: a binary 1 is transformed into a binary 0 and vice
versa.
The most common impairments are
1. Attenuation
2. Delay distortion
3. Noise
Attenuation
The strength of a signal falls off with distance over any transmission medium. Attenuation
means loss of energy; the amount of loss depends on the frequency. When a signal travels
through a medium, it loses some of its energy due to the resistance of the medium.
This is the reason why a wire carrying electric signal gets warm. Some of the electrical energy
in the signal is converted to heat. To balance this loss, amplifiers are used to amplify the signal.
Fig. Attenuation
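Attenuation is commonly measured in decibels. As a supplementary illustration (the decibel formula is standard but not given in the text above, and the power values are made up), the loss between transmitted and received power can be computed as:

```python
import math

def attenuation_db(p_in: float, p_out: float) -> float:
    """Attenuation in decibels between transmitted and received signal power."""
    return 10 * math.log10(p_in / p_out)

# A signal transmitted at 100 mW that arrives at 10 mW has lost
# 10 * log10(100 / 10) = 10 dB of strength.
loss = attenuation_db(100.0, 10.0)

# An amplifier restoring the signal must supply the same gain in dB.
gain_needed = loss
```

Each factor-of-ten drop in power adds another 10 dB of attenuation, which is why repeated amplification is needed over long distances.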
Delay Distortion
Distortion means a change in the form or shape of the signal. Distortion can occur in a
composite signal made of different frequencies.
Each signal component has its own propagation speed through a medium and therefore its own
delay in arriving at the final destination.
Differences in delay may create a difference in phase if the delay is not exactly the same as the
period duration. This effect is referred to as delay distortion, since the received signal is distorted
due to the variable delay of its components.
Delay distortion is particularly critical for digital data.
Fig. Distortion
Noise
For any data transmission event, the received signal may be modified by the various distortions
imposed by the transmission system, plus additional unwanted signals that are inserted
somewhere between transmission and reception. The undesired signals are referred to as noise. It
is the major limiting factor in communications system performance.
Noise may be divided into four categories:
a. Thermal Noise
b. Intermodulation Noise
c. Crosstalk
d. Impulse Noise
Fig. Noise
a. Thermal Noise: It is due to the thermal agitation of electrons in a conductor and is
present in all electronic devices and transmission media. It is a function of temperature
and is also referred to as white noise.
b. Intermodulation Noise: When signals at different frequencies share the same
transmission medium, intermodulation noise may be produced, appearing as signals at
frequencies that are the sum or difference of the original frequencies.
c. Crosstalk: It is the unwanted coupling between signal paths. Crosstalk can occur by
electrical coupling between nearby twisted-pair or coaxial cable lines carrying multiple
signals. Crosstalk can also occur when unwanted signals are picked up by microwave
antennas.
d. Impulse Noise: It consists of non-continuous, irregular pulses or noise spikes of short
duration and/or high amplitude. It can be generated by a variety of causes, including
external electromagnetic disturbances, such as lightning, and faults in the
communications system.
Fig. Effect of noise on a digital signal
The figure above shows an example of the effect of noise on a digital signal. Here the noise
consists of a relatively modest level of thermal noise plus spikes of impulse noise. The noise is
sufficient to change a 1 to a 0 or a 0 to a 1.
Transmission Media
Communication media or channel represents the type of channel over which messages are
transmitted. Many communication systems use several different media. Like telephone lines,
there are several types of physical channels through which data can be transmitted from one
location to another.
Internal data transmission refers to the transfer of data within a computer, while external data
transmission refers to the transfer of data either to local peripheral equipment or to remote
computers. The choice of a transmission medium depends on the following factors:
a. Cost of Media
b. Bandwidth of a Cable
c. Coverage of Network
d. Effect of atmosphere
e. Data security
f. Maintenance charges and efforts etc.
1. Physical Connection: Physical connections use a solid medium to connect sending and
receiving devices. These connections include:
Twisted Pair
Coaxial Cable
Fiber-optic cable
TWISTED PAIR
It consists of two insulated copper wires twisted around each other in a spiral pattern; the
twisting reduces electrical interference between adjacent pairs. It is the oldest and most widely
used medium, for example in telephone networks. It is inexpensive and easy to install, but is
limited in distance, bandwidth, and data rate, and is susceptible to noise.
COAXIAL CABLE
It consists of a central copper wire surrounded by PVC insulation, over which runs a braided
metal shield, covered in turn by an outer sleeve of thick PVC material. The signal is transmitted
by the inner copper wire and is electrically shielded by the outer metal sleeve.
It is widely used for cable television transmission.
Coaxial cable is used in implementing a local area network called the Ethernet.
An advantage of coaxial cable is its noise immunity: it can offer cleaner and crisper data
transmission without distortion or loss of signal.
Disadvantages of coaxial cable are its implementation cost and, like twisted pair, some
susceptibility to noise.
WIRELESS CONNECTION
A wireless connection does not use a solid medium to connect sending and receiving devices.
This communication medium does not require any point-to-point connection between sender
and receiver through wires.
It proves very advantageous for mobile machines, but can also be used by stationary machines,
and it provides very fast transmission of data.
In this transmission, a sender generates data waves with the help of a dish antenna, and the
receiver receives these waves with the help of a compatible dish antenna.
Wireless communication media may use radio waves, microwaves, infrared waves, light
waves etc. to transmit the data.
This medium is a natural choice where very long distances are to be covered, or in those areas
where the senders and the receivers cannot be connected through wires.
They use the air itself and employ the following techniques:
Microwave
Satellite
Microwave Communication
This communication medium is a popular way of transmitting data, since it does not incur the
expense of laying cables. It proves very advantageous for mobile machines, but can also be
used by stationary machines.
It provides very fast transmission of data, about 16 giga (1 giga = 10^9) bits per second.
In this transmission, the data is made to propagate in the form of waves.
A sender generates data waves with the help of a dish antenna. These waves travel through the
air to the receiver, which receives them with the help of a compatible dish antenna.
A disadvantage of this communication medium is that the transmitter and receiver of a
microwave system, which are mounted on very high towers, must be in a line of sight. Due to
this, it may not be usable for very long distance transmission. Moreover, the signals become
weaker after traveling a certain distance and require power amplification. For this reason,
several repeater stations are normally required for long distance transmission, which increases
the cost of data transmission between two points.
Satellite Communication
It is a relatively newer and more promising data transmission medium, and it overcomes the
problems faced in microwave communication.
A communication satellite is basically a microwave relay station placed in outer space.
These satellites are launched by rockets or space shuttles and are precisely positioned in a
geosynchronous orbit; such a satellite is stationary relative to the earth and always stays over
the same point on the ground.
In satellite communication, a microwave signal at about 6 GHz is transmitted from a transmitter
on earth to the satellite positioned in space.
An advantage of satellite communication is that a single microwave relay station is visible from
any point of a very large area.
The Indian satellite INSAT-1B is positioned in such a way that it is accessible from any place in
India.
Major drawbacks of satellite communication are:
i. The high cost of placing the satellite into its orbit.
ii. As a signal sent to a satellite is broadcast to all receivers within the satellite's range,
proper security measures must be taken to prevent unauthorized tampering with the
information.
Data Encoding:
Encoding is the process of converting the data or a given sequence of characters, symbols,
alphabets etc., into a specified format, for the secured transmission of data. Decoding is the
reverse process of encoding which is to extract the information from the converted format.
Encoding is the process of using various patterns of voltage or current levels to represent 1s and
0s of the digital signals on the transmission link.
Digital data, in information theory and information systems, is the discrete, discontinuous
representation of information or works. Numbers and letters are commonly used representations.
Digital data can be contrasted with analog signals which behave in a continuous manner, and
with continuous functions such as sounds, images, and other measurements.
The word digital comes from the same source as the word digit. The term is most commonly
used in computing and electronics, especially where real-world information is converted to
binary numeric form, as in digital audio and digital photography.
Digital data is a binary language. When you press a key on the keyboard, an electrical circuit is
closed. The circuit acts like a switch and has only two possible options: open or closed. If you
know Morse code, the idea is the same. A string of dashes and dots represents one letter or
number. This is binary. There is no halfway or in-between. The status of the switch as open or
closed is interpreted by the computer as a 0 or 1. Each digit is known as a bit.
Analog data
Analog data is data that is represented in a physical way. Where digital data is a set of individual
symbols, analog data is stored in physical media, whether that's the surface grooves on a vinyl
record, the magnetic tape of a VCR cassette, or other non-digital media.
One of the big ideas behind today's quickly developing tech world is that much of the world's
natural phenomena can be translated into digital text, image, video, sound, etc. For example,
physical movements of objects can be modeled in a spatial simulation, and real-time audio and
video can be captured using a range of systems and devices.
Analog data may also be known as organic data or real-world data.
Signals:
Digital and Analog Signal.
Digital Signal
A digital signal uses discrete on/off levels representing binary format: off is 0, on is 1.
Digital signals use square waves.
Digital recording first transforms the analog waves into a limited set of numbers and then
records them as digital square waves.
Digital transmission is easy and can be made noise-proof with no loss at all.
Digital signal hardware can be easily modulated as per the requirements.
Digital transmission needs more bandwidth to carry the same information.
Digital data is stored in the form of bits.
A digital signal needs low power as compared to its analog counterpart.
Digital signals are good for computing and digital electronics.
Digital signal systems are costly.
Examples of digital signals: computer, CD, DVD.
Analog Signal
An analog signal uses continuous signals with varying magnitude.
Analog signals use sine waves.
Analog recording captures the physical waveforms as they are originally generated.
Analog signals are badly affected by noise during transmission.
Analog signal hardware is not flexible.
Analog transmission requires less bandwidth.
Analog data is stored in the form of waveform signals.
Analog signal systems consume more power than digital systems.
Analog signals are good for audio/video recordings.
Analog signal systems are cheap.
Examples of analog signals: analog electronics, voice radio using AM.
Asynchronous Transmission
The approach with this scheme is to avoid the timing problem by not sending long, uninterrupted streams
of bits. Data are transmitted one character at a time, where each character is five to eight bits in length.
Timing or synchronization must only be maintained within each character; the receiver has the
opportunity to resynchronize at the beginning of each new character. This technique is shown in the
figure.
Fig. Asynchronous Transmission
The line between transmitter and receiver is in idle state, when no character is being transmitted. The term
idle is equivalent to the signaling element for binary 1. Thus, for NRZ-L signaling, which is common for
asynchronous transmission, idle would be the presence of a negative voltage on the line.
A start bit with a value of binary 0 is present at the beginning of a character. This is followed by the 5
to 8 bits that actually make up the character. The bits of the character are transmitted beginning with
the least significant bit (LSB).
For example, for IRA (International Reference Alphabet) characters, the data bits are usually followed by
a parity bit, which therefore is in the most significant bit (MSB) position.
The parity bit is set by the transmitter such that the total number of ones in the character, including the
parity bit, is even (even parity) or odd (odd parity), depending on the convention being used.
The receiver uses this bit for error detection. The final element is a stop element which is a binary 1. A
minimum length for the stop element is usually 1, 1.5 or 2 times the duration of an ordinary bit.
No maximum value is specified. Because the stop element is the same as the idle state, the transmitter will
continue to transmit the stop element until it is ready to send the next character. The timing requirements
for this scheme are modest.
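The framing just described can be sketched in a few lines of Python (illustrative only; the function name and the choice of a single stop bit are assumptions, not from the text):

```python
def frame_character(char_bits: str, parity: str = "even") -> str:
    """Frame one 5- to 8-bit character for asynchronous transmission:
    start bit (0), data bits LSB first, parity bit, stop element (1)."""
    data_lsb_first = char_bits[::-1]           # transmit least significant bit first
    ones = data_lsb_first.count("1")
    if parity == "even":
        parity_bit = "1" if ones % 2 else "0"  # make total number of 1s even
    else:
        parity_bit = "0" if ones % 2 else "1"  # make total number of 1s odd
    return "0" + data_lsb_first + parity_bit + "1"

# ASCII 'G' (1110001) with odd parity: four data 1s, so a 1 is appended.
framed = frame_character("1110001", parity="odd")
```

Between characters the line simply stays at the stop/idle level (binary 1) until the next start bit arrives.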
Synchronous Transmission
With synchronous transmission, a block of bits is transmitted in a steady stream without start and stop
codes.
The bit stream is combined into longer “frames”, which may contain multiple bytes. To prevent timing
drift between transmitter and receiver, their clocks must somehow be synchronized.
One possibility is to provide a separate clock line between transmitter and receiver, one side pulses the
line regularly with one short pulse per bit time. The other side uses these regular pulses as a clock. This
technique works well over short distances, but over longer distances the clock pulses are subject to
impairments, and timing errors can occur.
The other alternative is to embed the clocking information in the data signal. For digital signals, this
can be accomplished with Manchester or differential Manchester encoding.
For analog signals, a number of techniques can be used; for example, the carrier frequency itself can be
used to synchronize the receiver based on the phase of the carrier.
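A minimal sketch of Manchester encoding (illustrative; the transition convention varies between standards, so the mapping below is an assumption): each bit period is split into two half-periods, and the guaranteed mid-bit transition is what gives the receiver its clocking information.

```python
def manchester(bits: str) -> list:
    """Manchester-encode a bit string. Every bit period contains a mid-bit
    level transition that the receiver can use as a clock.
    Convention assumed here: 1 = low-to-high, 0 = high-to-low."""
    half_bits = []
    for b in bits:
        # two half-periods per bit, always differing -> guaranteed transition
        half_bits += [0, 1] if b == "1" else [1, 0]
    return half_bits
```

Because the two halves of every bit always differ, a long run of identical data bits still produces a transition every bit time, so the receiver's clock never drifts.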
With synchronous transmission, there is another level of synchronization required, to allow the receiver to
determine the beginning and end of a block of data. Each block begins with a preamble bit pattern and
ends with a postamble bit pattern.
In addition, other bits are added to the block that convey control information used in the data link control.
The data plus preamble, postamble, and control information are called a frame. The exact format of the
frame depends on which data link control procedure is being used.
The figure shows a frame format for synchronous transmission. The frame starts with a preamble
called a flag, which is 8 bits long. The same flag is used as a postamble. The receiver looks for the
occurrence of the flag pattern to signal the start of a frame. This is followed by some number of
control fields, then a data field, and finally the flag is repeated.
For sizable blocks of data, synchronous transmission is far more efficient than asynchronous.
Asynchronous transmission requires 20% or more overhead.
First consider the case in which no means are taken to detect errors. Then the probability of
detected errors (P3) is zero. To express the remaining probabilities, assume that the probability
that any bit is in error (Pb) is constant and independent for each bit. Then we have

P1 = (1 - Pb)^f
P2 = 1 - P1

where f is the number of bits per frame and P1 is the probability that a frame arrives with no bit
errors. In words, the probability that a frame arrives with no bit errors decreases as the
probability of a single bit error increases, as you would expect. Also, the probability that a
frame arrives with no bit errors decreases with increasing frame length: the longer the frame, the
more bits it has and hence the higher the probability that one of them is in error.
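A quick numeric check of these formulas (the bit-error probability and frame lengths chosen below are illustrative, not from the text):

```python
def prob_no_bit_errors(pb: float, f: int) -> float:
    """P1: probability that an f-bit frame arrives with no bit errors,
    assuming each bit is independently in error with probability pb."""
    return (1 - pb) ** f

pb = 1e-6                                   # one bit error per million bits (assumed)
p1_short = prob_no_bit_errors(pb, 1_000)    # short frame: P1 close to 1
p1_long = prob_no_bit_errors(pb, 100_000)   # long frame: noticeably smaller P1

# P2, the probability that the frame contains at least one (undetected) error:
p2_long = 1 - p1_long
```

As the formulas predict, the longer frame has a lower probability of arriving error-free.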
1. Parity Bit: One of the simplest error-detection schemes is to add a parity bit to the end of a
block of data. A common example is ASCII transmission, in which a parity bit is attached to
each 7-bit ASCII character. The value of this bit is selected so that the character has an even
number of 1s (even parity) or an odd number of 1s (odd parity). So, for example, if the
transmitter is transmitting an ASCII G (1110001) and using odd parity, it will append a 1
and transmit 11100011.
The receiver examines the received character and, if the total number of 1s is odd, assumes
that no error has occurred. If one bit (or any odd number of bits) is incorrectly inverted
during transmission (for example, 11000011), then the receiver will detect an error. Note
that if two bits are inverted due to error, an undetected error occurs. Typically, even parity
is used for synchronous transmission and odd parity for asynchronous transmission.
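The parity mechanism above can be sketched as follows (illustrative; the helper names are made up, and the parity bit is appended at the end as in the ASCII G example):

```python
def add_parity(bits7: str, odd: bool = True) -> str:
    """Transmitter side: append a parity bit to a 7-bit character."""
    ones = bits7.count("1")
    if odd:
        bit = "0" if ones % 2 else "1"   # make the total count of 1s odd
    else:
        bit = "1" if ones % 2 else "0"   # make the total count of 1s even
    return bits7 + bit

def parity_ok(bits8: str, odd: bool = True) -> bool:
    """Receiver side: the total number of 1s must match the chosen parity."""
    ones = bits8.count("1")
    return (ones % 2 == 1) if odd else (ones % 2 == 0)

# ASCII G (1110001) with odd parity -> 11100011, as in the text.
sent = add_parity("1110001", odd=True)
```

A single inverted bit changes the count of 1s and is caught; inverting any two bits restores the parity, which is exactly the undetected-error case noted above.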
2. Cyclic Redundancy Check (CRC): One of the most powerful error-detecting codes is the
cyclic redundancy check (CRC), which can be described as follows. Given a k-bit block of
data, or message, the transmitter generates an (n-k)-bit sequence known as a frame check
sequence (FCS), such that the resulting frame consists of n bits and is exactly divisible by
some predetermined number.
The receiver then divides the incoming frame by that number and, if there is no remainder,
assumes that there was no error.
The CRC process can be described using the following approaches:
a) Modulo 2 Arithmetic
b) Polynomials
c) Digital Logic
a) Modulo 2 Arithmetic: Modulo 2 arithmetic uses binary addition with no carries, which is
just the exclusive-OR (XOR) operation. Binary subtraction with no carries is also
interpreted as the XOR operation: for example, 1111 + 1010 = 0101 and 1111 - 0101 = 1010.
Now define
T = n-bit frame to be transmitted
D = k-bit block of data, or message; the first k bits of T
F = (n-k)-bit FCS; the last (n-k) bits of T
P = pattern of n-k+1 bits; this is the predetermined divisor
We would like T/P to have no remainder. It should be clear that
T = 2^(n-k) D + F
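Modulo-2 division can be sketched directly (a minimal illustration; the values of D and P below are a standard textbook example, not taken from this text):

```python
def crc_remainder(data_bits: str, divisor_bits: str) -> str:
    """Modulo-2 (XOR) long division: append n-k zero bits to the data
    (i.e. form 2^(n-k) * D), divide by the (n-k+1)-bit pattern P, and
    return the (n-k)-bit remainder, which becomes the FCS."""
    n_k = len(divisor_bits) - 1
    dividend = [int(b) for b in data_bits + "0" * n_k]
    divisor = [int(b) for b in divisor_bits]
    for i in range(len(data_bits)):
        if dividend[i]:                      # leading 1: "subtract" (XOR in) P
            for j in range(len(divisor)):
                dividend[i + j] ^= divisor[j]
    return "".join(str(b) for b in dividend[-n_k:])

# The transmitted frame T is D followed by the FCS; dividing T by P
# must then leave no remainder at the receiver.
D, P = "1010001101", "110101"
fcs = crc_remainder(D, P)
T = D + fcs
```

Dividing T by P at the receiver yields an all-zero remainder, which is exactly the no-error check described above.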
b) Polynomials: A second way of viewing the CRC process is to express all values as
polynomials in a dummy variable X, with binary coefficients. The coefficients
correspond to the bits in the binary number.
c) Digital Logic: The CRC process can also be represented by, and implemented as, a
dividing circuit. These circuits consist of XOR gates and a shift register. The shift
register is a string of 1-bit storage devices. Each device has an output line, which
indicates the value currently stored, and an input line.
At discrete time instants, known as clock times, the value in the storage device is
replaced by the value indicated by its input line. The entire register is clocked
simultaneously, causing a 1-bit shift along the entire register.
Interfacing
Most of the digital data processing devices have limited data transmission capability. They
generate a simple digital signal, such as NRZ-L, and the distance across which they can transmit
data is limited. It is rare for such a device to attach directly to a transmission or networking
facility. Such devices, which include terminals and computers, are generically referred to as
data terminal equipment (DTE).
A DTE makes use of the transmission system through the mediation of data circuit-terminating
equipment (DCE). One of the most common examples is a modem. The interface between a
DTE and a DCE has four important characteristics:
i. Mechanical characteristics: These characteristics are related to the actual physical connection
of the DTE to the DCE. The signal and control interchange circuits are bundled into a cable with
a terminator plug, male or female, at each end.
ii. Electrical characteristics: Electrical characteristics are related to the voltage levels and the
timing of voltage changes. Both DTE and DCE must use the same code, must use the same
voltage levels to mean the same things, and must use the same duration of signal elements. These
characteristics determine the data rates and distances that can be achieved.
iii. Functional characteristics: These characteristics specify the functions that are performed by
assigning meanings to each of the interchange circuits. The functions can be classified into the
broad categories of data, control, timing, and electrical ground.
iv. Procedural characteristics: These characteristics specify the sequence of events for
transmitting data, based on the functional characteristics of the interface. A variety of standards
for interfacing exist. Two of the most important standards are EIA-232-D and the ISDN physical
interface.
Fig. ISDN Interface
The physical connection, which is defined in ISO (International Organization for
Standardization) 8877, specifies that the NT and TE cables shall terminate in matching plugs
that provide for 8 contacts.
The figure shows the contact assignments for each of the 8 lines on both the NT and TE sides.
Two pins are used to provide data transmission in each direction. These contact points are used
to connect the twisted-pair leads coming from the NT and TE devices. Because there are no
specific functional circuits, the transmit/receive circuits are used to carry both data and control
signals. The control information is transmitted in the form of messages.
The specification provides for the capability to transfer power across the interface. This power
transfer can be accomplished by using the same leads used for digital signal transmission
(c,d,e,f) or on additional wires, using access leads g-h. The remaining two leads are not used in
the ISDN configuration but may be useful in other configurations.
ii. Electrical Specification: The ISDN electrical specification dictates the use of balanced
transmission. With balanced transmission, signals are carried on a line, such as twisted pair,
consisting of two conductors. Signals are transmitted as a current that travels down one
conductor and returns on the other, the two conductors forming a complete circuit.
For digital signals, this technique is known as differential signaling as the binary value depends
on the direction of the voltage difference between the two conductors. Unbalanced transmission,
which is used on older interfaces such as EIA-232, uses a single conductor to carry the signal,
with ground providing the return path.
Line Configuration
Two characteristics that distinguish various data link configurations are topology and duplexity,
which describes whether the link is half duplex or full duplex.
Topology and Duplexity
The topology of a data link refers to the physical arrangement of stations on a transmission
medium. If there are only two stations (e.g. a terminal and a computer or two computers), the
link is point to point. If there are more than two stations, then it is a multipoint topology.
Traditionally, a multipoint link has been used in the case of a computer (primary station) and a
set of terminals (secondary stations). The multipoint topology is found in local area networks.
Traditional multipoint topologies are made possible when the terminals are only transmitting a
fraction of the time.
The figure above shows the advantages of the multipoint configuration. If each terminal has a
point-to-point link to its computer, then the computer must have one I/O port for each terminal,
and there is a separate transmission line from the computer to each terminal.
In a multipoint configuration, the computer needs only a single I/O port and a single transmission
line which saves costs.
For full duplex transmission, two stations can simultaneously send and receive data from each
other. Thus, this mode is known as two-way simultaneous and may be compared to a two-lane,
two-way bridge. For computer-to-computer data exchange, this form of transmission is more
efficient than half-duplex transmission.
Flow Control
Flow control is a technique for assuring that a transmitting entity does not overwhelm a receiving
entity with data. The receiving entity typically allocates a data buffer of some maximum length
for a transfer. When data are received, the receiver must do a certain amount of processing
before passing the data to the higher-level software. In the absence of flow control, the receiver's
buffer may fill up and overflow while it is processing old data.
The mechanism for flow control in the absence of errors is examined with the model shown in
the figure, which is a vertical time-sequence diagram. It has the advantage of showing time
dependencies and the correct send-receive relationship.
Each arrow represents a single frame transiting a data link between two stations. The data are
sent in a sequence of frames, with each frame containing a portion of the data and some control
information. The time it takes for a station to emit all of the bits of a frame onto the medium is
the transmission time; this is proportional to the length of the frame.
The propagation time is the time it takes for a bit to traverse the link between source and
destination. All frames that are transmitted are successfully received; no frames are lost and none
arrive with errors. Furthermore, frames arrive in the same order in which they are sent. However,
each transmitted frame suffers an arbitrary and variable amount of delay before reception.
With the use of multiple frames for a single message, the stop-and-wait procedure may be
inadequate. The essence of the problem is that only one frame at a time can be in transit.
In situations where the bit length of the link is greater than the frame length, serious
inefficiencies occur. In the figure, the transmission time is normalized to one, and the
propagation delay is expressed as the variable a. When a is less than 1, the propagation time is
less than the transmission time. In this case, the frame is sufficiently long that the first bits of
the frame have arrived at the destination before the source has completed the transmission of
the frame.
When a is greater than 1, the propagation time is greater than the transmission time. In this case,
the sender completes transmission of the entire frame before the leading bits of that frame arrive
at the receiver.
A maintains a list of sequence numbers that it is allowed to send, and B maintains a list of
sequence numbers that it is prepared to receive. Each of these lists can be thought of as a window
of frames. The operation is referred to as sliding-window flow control.
Fig. Sliding-Window Depiction
Error Controls
When a data frame is transmitted, there are two possibilities:
1. The data frame may be lost in transit, or
2. It is received in corrupted form.
In both of the above cases, the receiver does not receive the correct data frame and the sender
does not know anything about the loss. In these cases, both sender and receiver are equipped
with protocols that help them detect transit errors such as the loss of a data frame. Requirements
for an error control mechanism:
a. Error detection: The sender and receiver, either both or one of them, must ascertain that
there has been some error in transit.
b. Positive ACK: When the receiver receives a correct frame, it should acknowledge it.
c. Negative ACK: When the receiver receives a damaged frame or a duplicate frame, it
sends a NACK back to the sender, and the sender must retransmit the correct frame.
d. Retransmission: The sender maintains a clock and sets a timeout period. If an
acknowledgement of a previously transmitted data frame does not arrive within the
timeout period, the sender retransmits the frame, assuming that the frame or its
acknowledgement was lost in transit.
Acknowledgement Techniques
There are three techniques that the data link layer may deploy to control errors by Automatic
Repeat Request (ARQ):
1. Stop-and-Wait ARQ
Fig. Stop and wait ARQ
The following transitions may occur in stop-and-wait ARQ:
The sender maintains a timeout counter.
When a frame is sent the sender starts the timeout counter.
If acknowledgment of frame comes in time, the sender transmits the next frame in
queue.
If acknowledgement does not come in time, the sender assumes that either the
frame or its acknowledgement is lost in transit. Sender retransmits the frame and
starts the timeout counter.
If a negative acknowledgment is received, the sender retransmits the frame.
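The stop-and-wait transitions above can be sketched as a small simulation (illustrative only; the channel model, loss rate, and function names are assumptions):

```python
import random

def stop_and_wait_send(frames, channel, max_tries=10):
    """Stop-and-wait ARQ sender sketch: send one frame, wait for its ACK,
    retransmit on timeout, and only then move on to the next frame."""
    delivered = []
    for seq, frame in enumerate(frames):
        for attempt in range(max_tries):
            ack = channel(seq, frame)      # returns seq on success, None on loss
            if ack == seq:                 # positive ACK arrived in time
                delivered.append(frame)
                break
            # else: timeout fired; loop retransmits the same frame
        else:
            raise TimeoutError("frame never acknowledged")
    return delivered

def lossy_channel(loss_rate=0.3, seed=42):
    """Channel that loses a frame (or its ACK) with the given probability."""
    rng = random.Random(seed)              # seeded for a repeatable run
    received = []
    def channel(seq, frame):
        if rng.random() < loss_rate:       # frame or ACK lost in transit
            return None                    # sender's timeout will fire
        received.append(frame)
        return seq
    channel.received = received
    return channel

ch = lossy_channel()
out = stop_and_wait_send(["f0", "f1", "f2"], ch)
```

Despite the losses, every frame eventually gets through, because the sender keeps retransmitting until the positive ACK arrives.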
2. Go-Back-N ARQ
The stop-and-wait ARQ mechanism does not utilize the resources at their best: until the
acknowledgement is received, the sender sits idle and does nothing. In the Go-Back-N ARQ
method, both sender and receiver maintain a window.
The sending-window size enables the sender to send multiple frames without receiving the
acknowledgement of the previous ones. The receiving window enables the receiver to
receive multiple frames and acknowledge them. The receiver keeps track of incoming
frames' sequence numbers.
When the sender has sent all the frames in the window, it checks up to what sequence number
it has received positive ACKs. If all frames are positively acknowledged, the sender sends the
next set of frames. If the sender finds that it has received a NACK, or has not received any
ACK for a particular frame, it retransmits that frame and all the frames sent after it.
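A minimal sketch of the Go-Back-N behaviour described above (illustrative; ACK handling is simplified to a single per-window check, and a lost frame is assumed to succeed on retransmission):

```python
def go_back_n(frames, window_size, lost):
    """Go-Back-N sender sketch: keep up to window_size frames outstanding;
    when a frame in the window goes unacknowledged, go back to it and
    retransmit it together with every frame sent after it."""
    base = 0                       # oldest unacknowledged sequence number
    sent_log = []                  # every (re)transmission, in order
    while base < len(frames):
        window = range(base, min(base + window_size, len(frames)))
        error_at = None
        for seq in window:         # send the whole window without waiting
            sent_log.append(seq)
            if seq in lost and error_at is None:
                lost.discard(seq)  # assume the retransmission will succeed
                error_at = seq     # first frame that got no positive ACK
        base = error_at if error_at is not None else window.stop
    return sent_log

# Five frames, window of 3, frame 1 lost once:
log = go_back_n(list(range(5)), 3, lost={1})
```

Notice in the log that frame 2 is transmitted twice even though its first copy arrived intact: that wasted retransmission of everything after the lost frame is the cost Go-Back-N pays for its simple receiver.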
Basic Characteristics of HDLC
1. Three types of stations
2. Two link configurations
3. Three data transfer modes of operation
1. Flag Fields: Flag fields delimit the frame at both ends with the unique pattern 0111 1110. A
single flag may be used as the closing flag for one frame and the opening flag for the next. On
both sides of the user-network interface, receivers are continuously looking for the flag sequence
to synchronize on the start of a frame. While receiving a frame, a station continues to look for
that sequence to determine the end of the frame. Since the flag pattern might also appear within
the data itself, however, the receiver could be thrown out of synchronization.
To avoid this synchronization problem, a procedure known as bit stuffing is used. For all bits
between the starting and ending flags, the transmitter inserts an extra 0 bit after each occurrence
of five 1s in the frame.
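Bit stuffing can be sketched as follows (illustrative helper names; the flag pattern is the 0111 1110 given above):

```python
FLAG = "01111110"

def bit_stuff(payload: str) -> str:
    """Transmitter: insert a 0 after every run of five consecutive 1s, so the
    flag pattern (six 1s in a row) can never appear inside the frame body."""
    out, run = [], 0
    for b in payload:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")        # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(stuffed: str) -> str:
    """Receiver: delete the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in stuffed:
        if skip:                   # this is the stuffed 0: drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True            # the next bit is a stuffed 0
            run = 0
    return "".join(out)

# Even a payload that looks exactly like a flag is made safe to transmit:
frame = FLAG + bit_stuff("01111110") + FLAG
```

Because the stuffed output never contains six 1s in a row, the receiver can treat every occurrence of the flag as a genuine frame boundary.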
With the use of bit stuffing, arbitrary bit patterns can be inserted into the data field of the frame.
This property is known as data transparency.
2. Address Field: The address field identifies the secondary station that is to receive the frame.
This field is not needed for point-to-point links, but is always included for the sake of uniformity.
The address field is usually eight bits long but, by prior agreement, an extended format may be
used in which the actual address length is a multiple of seven bits.
3. Control Field: Three types of frames are defined by HDLC, each having a different control
field format. Information frames carry the data to be transmitted for the user. Additionally, flow
and error control data, using the ARQ mechanism, are piggybacked on an information frame.
Supervisory frames provide the ARQ mechanism when piggybacking is not used. Unnumbered
frames provide supplemental link control functions. The first one or two bits of the control field
serve to identify the frame type. The remaining bit positions are organized into subfields as
indicated in figures (c) and (d).
Multiplexing:
Multiplexing is the process of combining the transmissions, character by character, from several
devices into a single stream that can be transmitted over a single communication channel. A
multiplexer is the device that performs multiplexing. It is also used at the receiving end to
separate the transmissions and deliver them in their original order for processing.
Multiplexers are more efficient and less expensive, and allow a communication channel to carry
much more data at any one time than a single device could send. The equipment that
multiplexes and demultiplexes is sometimes simply called a multiplexer or mux.
Fig. Multiplexing
In the above figure, there are n inputs to a multiplexer. The multiplexer is connected by a single
data link to a de-multiplexer. The link is to carry n separate channels of data.
The multiplexer combines(multiplexes) data from the n input lines and transmits over a higher
capacity data link. The de-multiplexer accepts the multiplexed data stream, separates
(demultiplexes) the data according to channel, and delivers it to the appropriate output lines.
A number of signals can be carried simultaneously if each signal is modulated onto a different
carrier frequency and the carrier frequencies are sufficiently separated such that the bandwidths
of the signals do not overlap. A general case of FDM is shown in figure.
Fig. Frequency-division multiplexing
Six signal sources are fed into a multiplexer, which modulates each signal onto a different
frequency (f1,…..f6). Each modulated signal requires a certain bandwidth centered around its
carrier frequency, referred to as a channel. To prevent interference, the channels are separated by
guard bands, which are unused portions of the spectrum.
The composite signal transmitted across the medium is analog. In the case of digital input, the
input signals must be passed through modems to be converted to analog. In either case, each
input analog signal must then be modulated to move it to the appropriate frequency band. An
example of FDM is broadcast and cable television. The television signal fits comfortably into a
6-MHz bandwidth.
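The arithmetic of placing FDM carriers can be sketched in Python; the start frequency, channel width and guard-band values below are hypothetical, loosely echoing the 6-MHz television channel example:

```python
def fdm_carriers(f_start, n, bandwidth, guard):
    """Center frequencies for n FDM channels of the given bandwidth,
    each preceded by an unused guard band, starting at f_start."""
    step = bandwidth + guard          # spacing between adjacent carriers
    return [f_start + guard + bandwidth / 2 + i * step for i in range(n)]

# Hypothetical layout: six 6-MHz channels with 0.25-MHz guard bands.
carriers = fdm_carriers(54.0, 6, 6.0, 0.25)
print(carriers[0], carriers[1])   # 57.25 63.5
```

Because the carrier spacing (6.25 MHz) exceeds the signal bandwidth (6 MHz), the channel spectra never overlap.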
Synchronous time-division multiplexing is possible when the achievable data rate (sometimes
called bandwidth) of the medium exceeds the data rate of the digital signals to be transmitted. A
generic description of a synchronous TDM system is provided in the following figure.
The transmitted data may have a format something like figure (b). The data are organized into
frames. Each frame contains a cycle of time slots. In each frame, one or more slots are dedicated
to each data source. The sequence of slots dedicated to one source, from frame to frame, is called
a channel. At the receiver, the interleaved data are de-multiplexed and routed to the appropriate
destination buffer.
Synchronous TDM is called synchronous not because synchronous transmission is used, but
because the time slots are pre-assigned to sources and fixed. The time slots for each source are
transmitted whether or not the source has data to send. Hence, capacity is wasted to achieve
simplicity of implementation. Even when fixed assignment is used, however, it is possible for a
synchronous TDM device to handle sources of different data rates. For example, the slowest
input device could be assigned one slot per cycle, while faster devices are assigned multiple slots
per cycle.
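The fixed slot assignment described above can be sketched as follows; the per-device queues and slot counts are illustrative:

```python
def tdm_multiplex(sources, slots):
    """Synchronous TDM sketch.

    sources -- per-device data queues (lists of items)
    slots   -- slots per frame for each device; faster devices get more
    Slots are pre-assigned and fixed per frame, so an idle source still
    consumes its slot (shown here as None), wasting capacity."""
    queues = [list(s) for s in sources]
    frames = []
    while any(queues):
        frame = []
        for q, k in zip(queues, slots):
            for _ in range(k):
                frame.append(q.pop(0) if q else None)  # wasted slot if idle
        frames.append(frame)
    return frames

# Device A is twice as fast as B, so it is assigned two slots per frame.
print(tdm_multiplex([["a1", "a2", "a3", "a4"], ["b1", "b2"]], [2, 1]))
# [['a1', 'a2', 'b1'], ['a3', 'a4', 'b2']]
```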
UNIT II: Data Communication Network
Communication Network:
Circuit Switching
Circuit switching is a connection-oriented network switching technique. Here, a dedicated route
is established between the source and the destination and the entire message is transferred
through it.
Phases of Circuit Switch Connection
Circuit Establishment: In this phase, a dedicated circuit is established from the source to the
destination through a number of intermediate switching centres. The sender and receiver
transmit signals to request and acknowledge the establishment of the circuit.
Data Transfer : Once the circuit has been established, data and voice are transferred from the
source to the destination. The dedicated connection remains as long as the end parties
communicate.
Circuit Disconnection: When data transfer is complete, the connection is relinquished. The
disconnection is initiated by either of the users. Disconnection involves removal of all
intermediate links from the sender to the receiver.
Diagrammatic Representation of Circuit Switching in Telephone
The following diagram represents a circuit established between two telephones connected by a
circuit-switched connection. The blue boxes represent the switching offices and their connections
with other switching offices. The black lines connecting the switching offices represent the
permanent links between the offices. When a connection is requested, links are established within
the switching offices, as denoted by the white dotted lines, so that a dedicated circuit is
formed between the communicating parties. The links remain as long as communication
continues.
Advantages
It is suitable for long continuous transmissions, since the transmission route is established
once and remains in place throughout the conversation.
The dedicated path ensures a steady data rate of communication.
There are no intermediate delays once the circuit is established, so it is suitable for real-time
communication of both voice and data.
Disadvantages
Circuit switching establishes a dedicated connection between the end parties. This dedicated
connection cannot be used for transmitting any other data, even if the data load is very low.
Bandwidth requirement is high even in cases of low data volume.
There is underutilization of system resources. Once resources are allocated to a particular
connection, they cannot be used for other connections.
Time required to establish connection may be high.
Packet Switching
Packet switching is a connectionless network switching technique. Here, the message is divided
and grouped into a number of units called packets that are individually routed from the source to
the destination. There is no need to establish a dedicated circuit for communication.
Process
Each packet in a packet switching technique has two parts: a header and a payload. The header
contains the addressing information of the packet and is used by the intermediate routers to direct
it towards its destination. The payload carries the actual data.
A packet is transmitted as soon as it is available at a node, based upon its header information.
The packets of a message need not be routed via the same path, so they may arrive at the
destination out of order. It is the responsibility of the destination to reorder the packets in
order to retrieve the original message.
The process is diagrammatically represented in the following figure. Here the message comprises
four packets, A, B, C and D, which may follow different routes from the sender to the
receiver.
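The split, independent routing, and reordering at the destination can be illustrated with a minimal sketch, where a dictionary header holding a sequence number stands in for a real packet header:

```python
import random

def packetize(message, size):
    """Split a message into packets: a header (sequence number) plus payload."""
    n = (len(message) + size - 1) // size
    return [{"seq": i, "payload": message[i * size:(i + 1) * size]}
            for i in range(n)]

def reassemble(packets):
    """Destination reorders packets by sequence number to recover the message."""
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize("HELLO, PACKET SWITCHING", 6)
random.shuffle(pkts)        # packets took different routes, arrive out of order
print(reassemble(pkts))     # HELLO, PACKET SWITCHING
```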
Advantages
Delay in delivery of packets is less, since packets are sent as soon as they are available.
Switching devices don't require massive storage, since they don't have to store entire
messages before forwarding them to the next node.
Data delivery can continue even if some parts of the network face link failures. Packets can
be routed via other paths.
It allows simultaneous usage of the same channel by multiple users.
It ensures better bandwidth usage as a number of packets from multiple sources can be
transferred via the same link.
Disadvantages
They are unsuitable for applications that cannot afford delays in communication, such as
high-quality voice calls.
Packet switching has high installation costs.
They require complex protocols for delivery.
Network problems may introduce errors in packets, delays in delivery, or loss of
packets. If not properly handled, this may lead to loss of critical information.
Virtual Circuits
1. Virtual circuits are connection-oriented, which means that there is a reservation of resources
like buffers, bandwidth, etc. for the time during which the newly setup VC is going to be used
by a data transfer session.
2. A virtual circuit network uses a fixed path for a particular session, after which it breaks the
connection and another path has to be set up for the next session.
3. All the packets follow the same path, and hence a global header is required only for the first
packet of the connection; other packets do not require it.
4. Packets reach the destination in order, since all data follows the same path.
5. Virtual Circuits are highly reliable.
6. Implementation of virtual circuits is costly as each time a new connection has to be set up
with reservation of resources and extra information handling at routers.
Datagram Networks
Advantages
Sharing of communication channels ensures better bandwidth usage.
It reduces network congestion due to store and forward method. Any switching node can
store the messages till the network is available.
Broadcasting messages requires much less bandwidth than circuit switching.
Messages of unlimited sizes can be sent.
It does not have to deal with out of order packets or lost packets as in packet switching.
Disadvantages
In order to store many messages of unlimited sizes, each intermediate switching node
requires large storage capacity.
Store and forward method introduces delay at each switching node. This renders it
unsuitable for real time applications.
node. For example, a client must be assigned to an agent, but an agent with no clients can still be
listed in the database.
The above diagram shows a basic set structure. One or more sets (connections) can
be defined between a specific pair of nodes, and a single node can also be involved in other sets
with other nodes in the database.
The data can be easily accessed inside a network model with the help of an appropriate set
structure. Since there are no restrictions on choosing the root node, the data can be accessed
from any node, moving backward or forward along the related sets.
Advantages
Fast data access.
It also allows users to create queries that are more complex than those they created using
a hierarchical database. So, a variety of queries can be run over this model.
The above figure shows that digital signals have to be converted to analog, and analog
signals back to digital, using modems along the whole path. What if the digital information at one
end could reach the other end in digital form, without all these conversions? It is this basic idea
that led to the development of ISDN.
Since the system had to use the telephone cable through the telephone exchange for Internet
access, the telephone could not be used for voice calls at the same time. The introduction of ISDN
resolved this problem by allowing the transmission of both voice and data simultaneously. It has
many advanced features over the traditional PSTN, the Public Switched Telephone Network.
ISDN was first defined in the CCITT red book in 1988. Integrated Services Digital
Network, in short ISDN, is a telephone-network-based infrastructure that allows the
transmission of voice and data simultaneously at high speed with greater efficiency. It is a
circuit-switched telephone network system, which also provides access to packet-switched
networks.
The model of a practical ISDN is as shown below.
Routing
When a device has multiple paths to reach a destination, it always selects one path by preferring
it over the others. This selection process is termed routing. Routing is done by special network
devices called routers, or it can be done by means of software processes. Software-based
routers have limited functionality and scope.
A router is always configured with some default route. A default route tells the router where to
forward a packet if no route is found for a specific destination. If multiple paths exist to
reach the same destination, the router can make its decision based on the following
information:
Hop Count
Bandwidth
Metric
Prefix-length
Delay
Routes can be statically configured or dynamically learnt. One route can be configured to be
preferred over others.
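A simplified sketch of how a router might choose among routes, using prefix length with a default route as fallback; the table entries and next-hop names below are invented for illustration:

```python
import ipaddress

def next_hop(routing_table, dest):
    """Longest-prefix match, with the default route (0.0.0.0/0) as fallback.

    routing_table -- list of (prefix, next_hop) pairs."""
    dest = ipaddress.ip_address(dest)
    best = None
    for prefix, hop in routing_table:
        net = ipaddress.ip_network(prefix)
        # A matching route with a longer prefix is more specific, so preferred.
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else None

table = [
    ("0.0.0.0/0",   "gateway"),   # default route
    ("10.0.0.0/8",  "r1"),
    ("10.1.0.0/16", "r2"),        # more specific than 10.0.0.0/8
]
print(next_hop(table, "10.1.2.3"))   # r2 (longest prefix wins)
print(next_hop(table, "8.8.8.8"))    # gateway (default route)
```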
Unicast routing
Most of the traffic on the internet and intranets, known as unicast data or unicast traffic, is sent
with a specified destination. Routing unicast data over the internet is called unicast routing. It is
the simplest form of routing because the destination is already known. Hence the router just has
to look up the routing table and forward the packet to the next hop.
Broadcast routing
By default, broadcast packets are not routed and forwarded by the routers on any network.
Routers create broadcast domains. But a router can be configured to forward broadcasts in some
special cases. A broadcast message is destined to all network devices.
A router creates a data packet and then sends it to each host one by one. In this case, the router
creates multiple copies of a single data packet with different destination addresses. All packets are
sent as unicast, but because they are sent to all hosts, it simulates broadcasting.
Multicast Routing
Multicast routing is a special case of broadcast routing, with significant differences and
challenges. In broadcast routing, packets are sent to all nodes even if they do not want them. In
multicast routing, the data is sent only to the nodes that want to receive it.
Anycast Routing
Anycast packet forwarding is a mechanism where multiple hosts can have the same logical
address. When a packet destined to this logical address is received, it is sent to the host that is
nearest in the routing topology.
Anycast routing is done with the help of a DNS server. Whenever an anycast packet is received,
the DNS is queried for where to send it; it provides the nearest IP address configured for that
logical address.
X.25.
X.25 is a protocol for packet switched communications over WAN (Wide Area Network). It was
originally designed in the 1970s and became very popular in the 1980s. Presently, it is used
for networks for ATMs and credit card verification. It allows multiple logical channels to use the
same physical line. It also permits data exchange between terminals with different
communication speeds.
Fig. X.25 interface
businesses. They can be spread across several buildings that are fairly close to each other so
users can share resources.
5. Metropolitan Area Network (MAN)
These types of networks are larger than LANs but smaller than WANs – and incorporate
elements from both types of networks. MANs span an entire geographic area (typically a town or
city, but sometimes a campus). Ownership and maintenance are handled by either a single person
or a company (a local council, a large company, etc.).
6. Wide Area Network (WAN)
Slightly more complex than a LAN, a WAN connects computers together across longer physical
distances. This allows computers and low-voltage devices to be remotely connected to each other
over one large network to communicate even when they're miles apart.
7. Storage-Area Network (SAN)
As a dedicated high-speed network that connects shared pools of storage devices to several
servers, these types of networks don't rely on a LAN or WAN. Instead, they move storage
resources away from the network and place them into their own high-performance network.
SANs can be accessed in the same fashion as a drive attached to a server. Types of storage-area
networks include converged, virtual and unified SANs.
8. System-Area Network (also known as SAN)
This term is fairly new, having emerged within the past two decades. It is used to describe a
relatively local network that is designed to provide high-speed connections in server-to-server
applications
(cluster environments), storage area networks (called “SANs” as well) and processor-to-
processor applications. The computers connected on a SAN operate as a single system at very
high speeds.
9. Passive Optical Local Area Network (POLAN)
As an alternative to traditional switch-based Ethernet LANs, POLAN technology can be
integrated into structured cabling to overcome concerns about supporting traditional Ethernet
protocols and network applications such as PoE (Power over Ethernet). A point-to-multipoint
LAN architecture, POLAN uses optical splitters to split an optical signal from one strand of
single mode optical fiber into multiple signals to serve users and devices.
10. Enterprise Private Network (EPN)
These types of networks are built and owned by businesses that want to securely connect their
various locations to share computer resources.
11. Virtual Private Network (VPN)
By extending a private network across the Internet, a VPN lets its users send and receive data as
if their devices were connected to the private network, even if they're not. Through a virtual
point-to-point connection, users can access a private network remotely.
In general, a LAN uses only one type of transmission medium, commonly category 5
twisted pair cable.
A LAN is distinguished from other networks by its topology. The common topologies
are bus, ring, mesh, and star.
The number of computers connected to a LAN is usually restricted. In other words, LANs
are limitedly scalable.
IEEE 802.3 or Ethernet is the most common LAN. It uses a wired medium in
conjunction with a switch or a hub. Originally, coaxial cables were used for
communications, but now twisted pair cables and fiber optic cables are also used.
Types of LAN
Wireless LANs (WLAN)
Wireless LANs use high-frequency radio waves instead of cables for communications. They
provide clutter-free homes, offices and other networked places. They have an Access Point or a
wireless router or a base station for transferring packets to and from the wireless computers and
the internet. Most WLANs are based on the standard IEEE 802.11 or WiFi.
Virtual LANs (VLAN)
Virtual LANs are a logical group of computers that appear to be on the same LAN irrespective of
the configuration of the underlying physical network. Network administrators partition the
networks to match the functional requirements of the VLANs, so that each VLAN comprises a
subset of ports on a single switch or multiple switches. This allows computers and devices on a
VLAN to communicate as if they were on a separate physical LAN.
MAN Technology
A metropolitan area network (MAN) is a network larger than a LAN but smaller than
a WAN. It normally comprises networked interconnections within a city and also offers a
connection to the Internet.
The distinguishing features of MAN are
Network size generally ranges from 5 to 50 km. It may be as small as a group of
buildings in a campus to as large as covering the whole city.
Data rates are moderate to high.
In general, a MAN is either owned by a user group or by a network provider who sells
service to users, rather than by a single organization as with a LAN.
It facilitates sharing of regional resources.
They provide uplinks for connecting LANs to WANs and Internet.
Example of MAN
Cable TV network
Telephone networks providing high-speed DSL lines
IEEE 802.16 or WiMAX, which provides high-speed broadband access with Internet
connectivity to customer premises
Topologies:
The way in which devices are interconnected to form a network is called network topology.
Some of the factors that affect choice of topology for a network are −
Cost − Installation cost is a very important factor in overall cost of setting up an
infrastructure. So cable lengths, distance between nodes, location of servers, etc. have to
be considered when designing a network.
Flexibility − Topology of a network should be flexible enough to allow reconfiguration
of office set up, addition of new nodes and relocation of existing nodes.
Reliability − Network should be designed in such a way that it has minimum down time.
Failure of one node or a segment of cabling should not render the whole network useless.
Scalability − Network topology should be scalable, i.e. it can accommodate load of new
devices and nodes without perceptible drop in performance.
Ease of installation − Network should be easy to install in terms of hardware, software
and technical personnel requirements.
Ease of maintenance − Troubleshooting and maintenance of network should be easy.
Bus topology
A data network with bus topology has a linear transmission cable, usually coaxial, to which
many network devices and workstations are attached along its length. The server is at one end of
the bus. When a workstation has to send data, it transmits packets with the destination address
in their header along the bus.
The data travels in both the directions along the bus. When the destination terminal sees the data,
it copies it to the local disk.
Advantages of Bus Topology
Easy to install and maintain
Can be extended easily
Very reliable because of single transmission line
Disadvantages of Bus Topology
Troubleshooting is difficult as there is no single point of control
One faulty node can bring the whole network down
Dumb terminals cannot be connected to the bus
Tree
Tree topology has a group of star networks connected to a linear bus backbone cable. It
incorporates features of both star and bus topologies. Tree topology is also called hierarchical
topology.
Maintenance difficult for large networks
Star
In star topology, the server is connected to each node individually. The server is also called the
central node. Any exchange of data between two nodes must take place through the server. It is
the most popular topology for information and voice networks, as the central node can process
data received from a source node before sending it to the destination node.
Ring
In ring topology each terminal is connected to exactly two nodes, giving the network a circular
shape. Data travels in only one pre-determined direction.
When a terminal has to send data, it transmits the data to the neighboring node, which transmits
it to the next one. Before further transmission, the data may be amplified. In this way, the data
traverses the network and reaches the destination node, which removes it from the network. If
the data comes back around to the sender, the sender removes it and resends it later.
It is responsible for encapsulating frames so that they are suitable for transmission via the
physical medium.
It resolves the addressing of source station as well as the destination station, or groups of
destination stations.
It performs multiple access resolution when more than one data frame is to be
transmitted. It determines the channel access method for transmission.
It also performs collision resolution and initiates retransmission in case of collisions.
It generates the frame check sequences and thus contributes to protection against
transmission errors.
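The frame check sequence mentioned above is typically a cyclic redundancy check (CRC). A bitwise CRC-16-CCITT, the family of FCS used by HDLC, can be sketched as follows (the frame contents are illustrative):

```python
def crc16_ccitt(data, poly=0x1021, init=0xFFFF):
    """Bitwise CRC-16-CCITT over a byte string: shift each data bit
    through the register, XORing in the polynomial on overflow."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

frame = b"HDLC frame payload"
fcs = crc16_ccitt(frame)
# The receiver recomputes the FCS; even a single flipped bit changes it.
assert crc16_ccitt(frame) == fcs
assert crc16_ccitt(b"HDLC frame payloae") != fcs
```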
LAN/MAN Standards.
All LANs and MANs consist of collections of devices that share the network's transmission
capacity. Some means is needed to control access to the transmission medium, to provide
orderly and efficient use of the transmission capacity. This is the function of a Medium
Access Control (MAC) protocol.
The key parameters in any medium access control technique are 'where' and 'how'. 'Where'
refers to whether the control is exercised in a centralized or distributed fashion.
In a centralized scheme, a controller is designated that has the authority to grant access to the
network; a station that wishes to transmit must wait until it receives permission from the
controller.
In a decentralized network, the stations collectively perform a Medium Access Control function
and dynamically determine the order in which stations should transmit.
The second parameter, 'how', is constrained by the topology and is a trade-off among competing
factors, including cost, performance, and complexity. In general, we can categorize access
control techniques as being either synchronous or asynchronous.
1. Round Robin
With round robin, each station in turn is given the opportunity to transmit. During that
opportunity, the station may decline to transmit or may transmit subject to a specified upper
bound, i.e. a maximum amount of data transmitted.
In any case, when the station finishes its transmission, it relinquishes its turn, and the right to
transmit passes to the next station in logical sequence. Control of the sequence may be
centralized or distributed.
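The round-robin discipline can be sketched as follows; the stations, their frame queues and the per-turn bound are illustrative:

```python
def round_robin(stations, max_frames):
    """Each station in turn may transmit up to max_frames frames, then
    must relinquish its turn; a station with nothing to send just passes."""
    queues = {s: list(q) for s, q in stations.items()}
    order = []
    while any(queues.values()):
        for s, q in queues.items():
            for _ in range(min(max_frames, len(q))):
                order.append((s, q.pop(0)))
    return order

# A has 3 frames, B has 1, C is idle; upper bound of 2 frames per turn.
print(round_robin({"A": ["a1", "a2", "a3"], "B": ["b1"], "C": []}, 2))
# [('A', 'a1'), ('A', 'a2'), ('B', 'b1'), ('A', 'a3')]
```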
2. Reservation
For stream traffic, reservation techniques are well suited. For these techniques, time on the
medium is divided into slots, much as with synchronous TDM. A station wishing to transmit
reserves future slots for an extended or even an indefinite period. Again, reservations may be
made in a centralized or distributed fashion.
3. Contention
For bursty traffic, contention techniques are usually appropriate. With these techniques, no
control is exercised to determine whose turn it is; all stations contend for time on the medium.
These techniques are distributed by nature. Their principal advantage is that they are simple
to implement and, under light to moderate load, efficient.
UNIT III: Communication Architecture
Protocols and Architecture:
Protocol
A protocol is a standard set of rules that allow electronic devices to communicate with each
other. These rules include what type of data may be transmitted, what commands are used to
send and receive data, and how data transfers are confirmed.
HDLC (High-Level Data Link Control) is an example of a protocol. The data to be exchanged
must be sent in frames with a specific format (syntax). The control field provides a variety of
regulatory functions, such as setting a mode and establishing a connection (semantics).
Provisions are also included for flow control (timing).
Characteristics of Protocol
1. Direct/Indirect
2. Monolithic/structured
3. Symmetric/asymmetric
4. Standard/ nonstandard
2. Monolithic/Structured
Consider an electronic mail package running on two computers connected by a synchronous
HDLC link. To be monolithic, the package would need to include all the HDLC logic, the logic
for breaking the mail into packet-sized chunks, and the logic for requesting a virtual circuit.
Mail should only be sent when the destination system and entity are active and ready to receive,
so logic is needed for this coordination. An alternative is to use structured design and
implementation techniques.
2. Encapsulation
Each PDU contains not only data but also control information; some PDUs contain only control
information and no data. Control information includes addresses, error-detection codes and
protocol control fields. The addition of control information to data is referred to as encapsulation.
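Encapsulation can be illustrated with a toy sketch in which each layer prepends a text header; the layer names and the "|" separator are purely illustrative:

```python
def encapsulate(data, layers):
    """Each layer prepends its control information (header) to the PDU
    it receives from the layer above."""
    pdu = data
    for header in layers:
        pdu = header + "|" + pdu
    return pdu

def decapsulate(pdu, n_layers):
    """The receiving side strips one header per layer, bottom up."""
    for _ in range(n_layers):
        pdu = pdu.split("|", 1)[1]
    return pdu

msg = encapsulate("user data", ["TCP", "IP", "ETH"])
print(msg)                   # ETH|IP|TCP|user data
print(decapsulate(msg, 3))   # user data
```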
3. Connection Control
An entity may transmit data to another entity in such a way that each PDU is treated
independently of all previous PDUs. This process is known as connectionless data transfer.
Connection-oriented data transfer is preferred if the stations expect a lengthy exchange of data.
A logical association, or connection, is established between the entities. The exchange then goes
through three phases: connection establishment, data transfer and connection termination.
The following figure shows a connection-oriented data transfer.
4. Ordered Delivery
If two communicating entities are in different hosts connected by a network, there is a risk that
PDUs will not arrive in order, because they may follow different paths through the network.
In connection-oriented protocols, it is generally required that PDU order be maintained. If
each PDU is given a unique number, and the numbers are assigned sequentially, then it is easy
for the receiving entity to reorder the received PDUs.
5. Flow control
Flow control is a function performed by a receiving entity to limit the amount or rate of data that
is sent by a transmitting entity. The simplest form of flow control is a stop-and-wait procedure,
in which each PDU must be acknowledged before the next can be sent.
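Stop-and-wait can be sketched with a simulated ACK channel; the loss probability and the random model below are illustrative assumptions, not part of any real protocol:

```python
import random

def stop_and_wait(pdus, ack_prob=0.7, seed=1):
    """Sender sends one PDU and waits for its ACK before sending the
    next; a lost ACK (simulated here) means another wait/retransmit
    cycle for the same PDU."""
    rng = random.Random(seed)
    log = []
    for pdu in pdus:
        attempts = 1
        while rng.random() > ack_prob:   # ACK lost: retransmit the PDU
            attempts += 1
        log.append((pdu, attempts))
    return log

for pdu, attempts in stop_and_wait(["P0", "P1", "P2"]):
    print(f"{pdu} delivered after {attempts} transmission(s)")
```

Note that at most one PDU is ever outstanding, which is what makes stop-and-wait simple but slow on long links.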
6. Error Control
Techniques are needed to guard the data and control information transfer against loss or damage.
Most techniques involve error detection, based on a frame check sequence, and PDU
retransmission.
Retransmission is often triggered by a timer: if a sending entity fails to receive an
acknowledgment for a PDU within a specified period of time, it will retransmit. Error control is
a function that must be performed at various levels of a protocol.
7. Addressing
For two entities to communicate with each other over a point-to-point link, they must be able to
identify each other. On a switched network, the network needs to know the identity of the
destination station in order to properly route the data blocks or set up a connection. A distinction
is generally made among names, addresses and routes.
8. Multiplexing
One form of multiplexing is supported by means of multiple connections into a single system.
For example, with X.25, there can be multiple virtual circuits terminating in a single end
system. The virtual circuits are multiplexed over the single physical interface between the end
system and the network.
9. Transmission services
Additional services are provided to the entities that use the particular protocol.
Change: When changes are made to one layer, the impact on the other layers is minimized. If
the model consists of a single, all-encompassing layer, any change affects the entire model.
Design: A layered model defines each layer separately. As long as the interconnections between
layers remain constant, protocol designers can specialize in one area (layer) without worrying
about how any new implementations affect other layers.
Learning: The layered approach reduces a very complex set of topics, activities, and actions into
several smaller, interrelated groupings. This makes learning and understanding the actions of
each layer and the model generally much easier.
Troubleshooting: The protocols, actions, and data contained in each layer of the model relate
only to the purpose of that layer. This enables troubleshooting efforts to be pinpointed on the
layer that carries out the suspected cause of the problem.
Standards: Probably the most important reason for using a layered model is that it establishes a
prescribed guideline for interoperability between the various vendors developing products that
perform different data communications tasks. Remember that layered models, including the OSI
model, provide only a guideline and framework, not a rigid standard, that manufacturers can use
when creating their products.
OSI Model
Computer network users are located all over the world. To ensure national and worldwide data
communication, systems must be developed that are compatible with one another. For this
purpose, ISO has developed a standard. ISO stands for the International Organization for
Standardization. This standard is called the model for Open System Interconnection (OSI) and is
commonly known as the OSI model.
The ISO-OSI model is a seven-layer architecture. It defines seven layers or levels in a complete
communication system. They are:
1. Application Layer
2. Presentation Layer
3. Session Layer
4. Transport Layer
5. Network Layer
6. Datalink Layer
7. Physical Layer
Below we have the complete representation of the OSI model, showcasing all the layers and how
they communicate with each other.
In the table below, we have specified the protocols used and the data unit exchanged by each
layer of the OSI Model.
3. The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
4. The layer boundaries should be chosen to minimize the information flow across the
interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown
together in the same layer out of necessity, and small enough that the architecture does not
become unwieldy.
TCP/IP, that is, Transmission Control Protocol and Internet Protocol, was developed by the
Department of Defense's Advanced Research Projects Agency (ARPA, later DARPA) as part of
a research project on network interconnection to connect remote machines.
The features that stood out during the research, which led to making the TCP/IP reference model
were:
Support for a flexible architecture: adding more machines to a network was easy.
The network was robust, and connections remained intact as long as the source and
destination machines were functioning.
The overall idea was to allow an application on one computer to talk to (send data packets to)
another application running on a different computer.
5. The transport layer breaks the message (data) into small units so that they are handled more
efficiently by the network layer.
6. The transport layer also arranges the packets to be sent in sequence.
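The segmentation and sequencing described in points 5 and 6 can be sketched as follows. This is a simplified illustration; real transport protocols such as TCP number bytes rather than whole segments.

```python
def segment(message, max_size):
    """Break a message into numbered segments no larger than max_size."""
    return [(seq, message[i:i + max_size])
            for seq, i in enumerate(range(0, len(message), max_size))]

def reassemble(segments):
    """Restore the original message even if segments arrive out of order,
    by sorting on the sequence number before joining."""
    return "".join(data for _, data in sorted(segments))
```

The sequence numbers are what let the receiver rebuild the message even when the network delivers the segments out of order.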
2. Data link control: The data link control layer corresponds to OSI layer 2. This layer provides
for the reliable transfer of data across a physical link. The protocol specified for serial
communications links is SDLC. SDLC is basically a subset of HDLC.
3. Path Control: The path control layer creates logical channels between endpoints referred to as
network addressable units (NAUs). An NAU is an application-level entity capable of being
addressed and of exchanging data with other entities. The main functions of the path control layer
are routing and flow control. Path control is based on the concepts of the transmission group, the
explicit route, and the virtual route.
4. Transmission Control: The transmission control layer of SNA corresponds roughly to layer 4 of
the OSI model. It is responsible for establishing, maintaining, and terminating SNA sessions.
Transmission control can establish a session when a request is made from the next higher layer.
5. Data Flow Control: The data flow control layer is end-user oriented and corresponds to OSI's
layer 5. This layer is responsible for providing session-related services that are visible to end-user
processes and terminals.
6. Presentation Services: Until recently, the top two layers of SNA were considered a single
layer, called the Function Management Data (FMD) services layer. This layer corresponds to OSI
layers 6 and 7.
7. Transaction Services: This layer provides network management services. These services are
directly used by the user and hence best fit OSI layer 7. It includes configuration services
(activating and deactivating links), network operator services (communication of data from users
and processes to the network operator), session services (supporting the activation of sessions on
behalf of end users and applications), and maintenance and management services (facilities for
testing network facilities, enabling fault isolation and identification).
LANs and WANs
One way to categorize networks is to divide them into local-area networks (LANs) and wide-area
networks (WANs). LANs typically consist of connected workstations, printers, and other devices
within a limited geographic area such as a building. All the devices in a LAN are under the
common administration of the owner of that LAN, such as a company or an educational
institution. Most LANs today are Ethernet LANs. WANs are networks that span a larger
geographic area and usually require the services of a common carrier. Examples of WAN
technologies and protocols include Frame Relay, ATM, and DSL.
A WAN is a data communications network that operates beyond the geographic scope of a
LAN. Figure shows the relative location of a LAN and WAN.
Fig. a. WAN Location
WANs differ from LANs in several ways. Whereas a LAN connects computers, peripherals, and
other devices in a single building or other small geographic area, a WAN allows the transmission
of data across greater geographic distances. In addition, an enterprise must subscribe to a WAN
service provider to use WAN carrier network services. LANs typically are owned by the
company or organization that uses them.
WANs use facilities provided by a service provider, or carrier, such as a telephone or cable
company, to connect the locations of an organization to each other, to locations of other
organizations, to external services, and to remote users. WANs provide network capabilities to
support a variety of mission-critical traffic such as voice, video, and data.
As highlighted in Figure b, the physical layer (OSI Layer 1) protocols describe how to provide
electrical, mechanical, operational, and functional connections to the services of a
communications service provider.
The data link layer (OSI Layer 2) protocols define how data is encapsulated for transmission
toward a remote location and the mechanisms for transferring the resulting frames. A variety of
technologies are used, such as Frame Relay and Asynchronous Transfer Mode (ATM). Some of
these protocols use the same basic framing mechanism, High-Level Data Link Control (HDLC),
an ISO standard, or one of its subsets or variants.
Internetworking:
Internetworking is the process or technique of connecting different networks by using
intermediary devices such as routers or gateways. Alternatively, any interconnection among or
between public, private, commercial, industrial, or governmental computer networks may be
defined as an internetwork, or "internetworking".
Internetworking ensures data communication among networks owned and operated by different
entities, using a common data communication protocol and Internet routing protocols. The
Internet is the largest pool of networks, geographically distributed throughout the world, but
these networks are interconnected using the same protocol stack, TCP/IP. Internetworking is
possible only when all the connected networks use the same protocol stack or communication
methodologies.
In modern practice, the interconnected computer networks or Internetworking use the Internet
Protocol. Two architectural models are commonly used to describe the protocols and methods
used in internetworking. The standard reference model for internetworking is Open Systems
Interconnection (OSI).
Principles of Internetworking
Following are the principles of internetworking
1. Different addressing schemes: The networks may use different endpoint names, addresses, and
directory maintenance schemes. Hence, to maintain uniformity, some form of global network
addressing must be used.
2. Different packet sizes: Packets transferred from one network may have to be broken up into
smaller pieces for transmission on another network. This process is called segmentation or
fragmentation.
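Fragmentation across networks with different packet-size limits can be sketched like this. It is a toy model of IP-style fragmentation; real IP expresses offsets in 8-byte units and uses header fields such as the more-fragments (MF) bit.

```python
def fragment(packet, mtu):
    """Split a packet into fragments that fit the outgoing network's MTU.
    Each fragment carries its offset and a more-fragments flag."""
    frags = []
    for offset in range(0, len(packet), mtu):
        chunk = packet[offset:offset + mtu]
        more = offset + mtu < len(packet)   # True unless this is the last piece
        frags.append({"offset": offset, "more": more, "data": chunk})
    return frags

def reassemble(frags):
    """Rebuild the packet from fragments, regardless of arrival order."""
    ordered = sorted(frags, key=lambda f: f["offset"])
    assert not ordered[-1]["more"]          # last fragment must be final
    return "".join(f["data"] for f in ordered)
```

The offset and more-fragments flag are what allow the destination to detect the end of the packet and put the pieces back in order.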
5. Error recovery: Intra-network procedures may provide anything from no error recovery up to
reliable end-to-end (within the network) service. The internetwork service should not depend on
the nature of the individual networks' error-recovery capability.
6. Status reporting: Different networks report status and performance differently, yet the
internetworking facility should be able to provide such information about internetwork activity to
each network.
7. Routing techniques: Intra-network routing depends on fault detection and congestion control
techniques peculiar to each network. The internetworking facility must be able to coordinate
these to adaptively route data between stations on different networks.
8. User-access control: Each network will have its own user-access control technique that must
be invoked by the internetwork facility as needed. Further, a separate internetwork access control
technique may be required.
9. Connection, connectionless: Individual networks may provide connection oriented or
connectionless service. It may be desirable for the internetwork service not to depend on the
nature of the connection service of the individual networks.
The Bridge
A bridge is a device that connects two LANs (local area networks), or two segments of the same
LAN. Unlike a router, bridges are protocol independent: they forward frames without analyzing
and re-routing messages.
Bridge devices work at the data link layer of the Open System Interconnect (OSI) model,
connecting two different networks together and providing communication between them.
Bridges are similar to repeaters and hubs in that they initially broadcast data to every node.
However, a bridge builds a media access control (MAC) address table as it discovers new
segments, so subsequent transmissions are sent only to the desired recipient.
Bridge Operation
1. Reliability: If all the data processing devices in an organization are connected to a single
network, then any fault on the network may disable communication for all devices. Bridges can
be used so that the network is partitioned into self-contained units.
2. Performance: Performance on a LAN or MAN decreases with an increase in the number of
devices or with the length of the medium. A number of smaller LANs can improve the
performance if the devices are grouped, so that intra-network traffic exceeds inter-network
traffic.
3. Security: The use of multiple LANs can improve the security of communications. Different
types of traffic with different security needs can be kept on separate media. It is also advisable to
have different levels of security for different types of users, which enables monitored and
controlled communication.
4. Geography: To support devices clustered in two geographically distant locations, two separate
LANs can be used. Even in the case of two buildings separated by a highway, a microwave
bridge link can be used instead of stringing coaxial cable between the two buildings.
Functions of Bridge
1. Reads all frames transmitted on LAN A and accepts those addressed to stations on LAN B.
2. Using the medium access control protocol for LAN B, retransmits each such frame onto B.
3. Follows the same procedure for B-to-A traffic.
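A minimal sketch of this behaviour is shown below: the bridge learns source addresses, filters frames whose destination is on the arriving segment, and floods frames to unknown destinations. This is illustrative only; real bridges also age out table entries and run the spanning tree protocol.

```python
class LearningBridge:
    """Toy two-port bridge: learns which port each MAC address is on,
    filters frames whose destination is on the arriving port, and
    floods frames with unknown destinations to the other port(s)."""
    def __init__(self, ports=("A", "B")):
        self.ports = ports
        self.mac_table = {}            # MAC address -> port

    def receive(self, frame, in_port):
        src, dst = frame["src"], frame["dst"]
        self.mac_table[src] = in_port  # learn the sender's location
        out = self.mac_table.get(dst)
        if out == in_port:
            return []                  # destination is local: filter
        if out is None:
            return [p for p in self.ports if p != in_port]  # unknown: flood
        return [out]                   # known: forward to that port only
```

For example, once a frame from station m1 has been seen on port A, later frames addressed to m1 are forwarded only toward A instead of being flooded.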
Routing With Bridge
Consider the configuration of the following figure.
Fig. Configuration of bridges and LANs
Suppose that station 1 transmits a frame on LAN A intended for station 5. The frame will be read
by both bridge 101 and bridge 102. For each bridge, the addressed station is not on a LAN to
which the bridge is attached. Therefore, each bridge must decide whether or not to retransmit the
frame on its other LAN, in order to move it closer to its intended destination.
In this case, bridge 101 should repeat the frame on LAN B, whereas bridge 102 should refrain
from retransmitting it. Once the frame has been transmitted on LAN B, it will be picked up by
both bridges 103 and 104.
both bridges 103 and 104.
Again, each must decide whether or not to forward the frame. In this case, bridge 104 should
retransmit the frame on LAN E, where it will be received by the destination, station 5. Thus we
see that, in the general case, the bridge must be equipped with a routing capability: when a bridge
receives a frame, it must decide whether or not to forward it.
If the bridge is attached to two or more networks, then it must decide whether to forward the
frame and, if so, on which LAN the frame should be transmitted. The routing decision may not
always be an easy task.
Connectionless Internetworking
The Internet Protocol (IP) and the Connectionless Network Protocol (CLNP) are very similar.
They differ in the formats used and in some functional aspects.
IP provides a connectionless, or datagram, service between end systems. This approach has a
number of advantages:
A connectionless internet facility is flexible. It can deal with a variety of networks, some
of which are themselves connectionless.
A connectionless internet service can be highly robust.
A connectionless internet service is best for connectionless transport protocols.
Fig. Internet protocol operation
The figure shows IP operation, in which two LANs are interconnected by an X.25 packet-switched
WAN. It shows the operation of the internet protocol for data exchange between host A on one
LAN (subnetwork 1) and host B on another departmental LAN (subnetwork 2) through the
WAN.
The IP at A receives blocks of data to be sent to B from the higher layers of software in A. IP
attaches a header specifying the global internet address of B. This address is logically in two
parts.
1. Network identifier
2. End system identifier
The result is called an Internet Protocol data unit, or datagram. The datagram is encapsulated
with the LAN protocol and sent to the router, which strips off the LAN fields to read the IP
header. The router then encapsulates the datagram with the X.25 protocol fields and transmits it
across the WAN to another router. That router strips off the X.25 fields and recovers the
datagram, which it then wraps in LAN fields appropriate to LAN 2 and sends to B.
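The header swapping at each hop can be sketched as follows. This is a toy model: real LAN and X.25 headers carry many more fields, and the labels "LAN1", "X25", and "LAN2" are placeholders for those headers.

```python
def wrap(header, datagram):
    """Encapsulate an IP datagram in a subnetwork-specific header."""
    return {"hdr": header, "payload": datagram}

def unwrap(packet):
    """Strip the subnetwork header, recovering the IP datagram."""
    return packet["payload"]

# Host A: the IP datagram is wrapped in LAN-1 framing and sent to router 1.
datagram = {"src": "A", "dst": "B", "data": "hello"}
on_lan1 = wrap("LAN1", datagram)
# Router 1: strip the LAN fields, re-wrap for the X.25 WAN.
on_wan = wrap("X25", unwrap(on_lan1))
# Router 2: strip the X.25 fields, re-wrap for LAN 2, deliver to host B.
on_lan2 = wrap("LAN2", unwrap(on_wan))
```

Note that the inner IP datagram never changes; only the subnetwork wrapper is replaced at each router.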
Consider the sequence of steps involved in sending a datagram between two stations on different
networks. The process starts at the sending station, which wants to send an IP datagram to a
station on another network. The IP module constructs the datagram with the global network
address and recognizes that the destination is on another network. So the first step is to send the
datagram to a router.
To do this, the IP module appends to the IP datagram a header, appropriate to the network, that
contains the address of the router. The packet then travels through the network to the router,
which receives it via the DTE-DCE protocol.
The router unpacks the packet to recover the original datagram. The router analyzes the IP
header to determine whether the datagram contains control information intended for the router or
data intended for a station. If it contains data for a station, the router has to make a routing
decision. There are four possibilities.
1. The destination station is attached directly to one of the network to which the router is
attached referred as directly connected.
2. The destination station is on a network that has a router that directly connects to this router.
3. To reach the destination station more than one additional router must be traversed and this is
known as multiple-hop.
4. The router does not know the destination address and cannot deliver the datagram.
In case 4, the router returns an error message to the source that generated the datagram. For cases
1 to 3, the router must select the appropriate route for the data and insert the datagram into the
appropriate network with the proper address. Before sending the data, the router may need to
segment the datagram to accommodate a smaller maximum packet size on the outgoing network.
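The four-way decision can be sketched as a table lookup. This is illustrative only; real routers match destination IP addresses with longest-prefix matching, and the network and router names here are invented.

```python
def route(dst_net, my_nets, routing_table):
    """Decide how to handle a datagram addressed to network dst_net.
    my_nets: networks this router is directly attached to.
    routing_table: destination network -> next-hop router (covers both
    single-hop and multiple-hop destinations)."""
    if dst_net in my_nets:
        return ("direct", dst_net)                  # case 1: deliver directly
    if dst_net in routing_table:
        return ("forward", routing_table[dst_net])  # cases 2 and 3: next hop
    return ("error", None)                          # case 4: report to source
```

Cases 2 and 3 collapse into the same action at this router (hand the datagram to the next hop); they differ only in how many further routers the datagram must traverse.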
Router-Level Protocol
The routers in an internet are responsible for receiving and forwarding the packets through the
interconnected set of subnetworks. Each router makes routing decisions based on knowledge of
the topology and conditions of the internet.
In a simple internet, a fixed routing scheme is advisable. In more complex internets, dynamic
cooperation among the routers is needed. The router must avoid the portions of the network that
have already failed and also the portions of the network that are congested. In order to make such
dynamic routing decisions, routers exchange routing information using a special routing
protocol.
Routers consider the following concepts when performing the routing function:
Routing information: Information about the topology and delays of the internet.
Routing algorithm: The algorithm used to make a routing decision for a particular
datagram, based on current routing information.
There is another way to partition the problem that is useful from two points of view.
The reason for this partition is that there are basic differences between what an ES (end system)
must know to route a packet and what a router must know. An ES must first know whether the
destination ES is on the same subnetwork. If it is, the data can be delivered directly using the
subnetwork access protocol; otherwise the ES must forward the data to a router attached to the
same subnetwork, and if there is more than one such router, it simply chooses one.
Network layer: data is presented by the transport layer to the network layer (t1), in the form of a
data unit consisting of a transport protocol header and data from the transport user. This block
of data is received by the packet-level protocol of X.25, which appends a packet header (t2) to
form an X.25 packet. The header will include the virtual circuit that connects host A to the
router.
The packet is then passed down to the data link layer protocol, which appends a link header and
trailer (t3) and transmits the resulting frame to the DCE (t4). At the DCE, the link header and
trailer are removed and the result is passed up to the packet level (t5). The packet is then
transferred through the network via DCE Y to the router, which appears as another DTE to the
network.
UNIT IV: Cloud Computing Basics
Cloud Computing Overview
Cloud computing provides us a means of accessing applications as utilities over the Internet. It
allows us to create, configure, and customize applications online. The term cloud refers to a
network or the Internet; in other words, the cloud is something that is present at a remote
location. The cloud can provide services over public and private networks, i.e., WAN, LAN, or
VPN.
Applications such as e-mail, web conferencing, and customer relationship management (CRM)
execute on the cloud.
Cloud computing refers to manipulating, configuring, and accessing hardware and software
resources remotely. It offers online data storage, infrastructure, and applications.
Cloud computing offers platform independence, as the software is not required to be installed
locally on the PC. Hence, cloud computing makes business applications mobile and
collaborative.
Benefits
Cloud computing has numerous advantages. Some of them are listed below:
One can access applications as utilities over the Internet.
One can manipulate and configure applications online at any time.
It does not require installing any software to access or manipulate cloud applications.
Cloud computing offers online development and deployment tools and a programming
runtime environment through the PaaS model.
Cloud resources are available over the network in a manner that provides platform-independent
access to any type of client.
Cloud computing offers on-demand self-service: resources can be used without
interaction with the cloud service provider.
Cloud computing is highly cost-effective because it operates at high efficiency with
optimum utilization. It just requires an Internet connection.
Cloud computing offers load balancing, which makes it more reliable.
History
The origin of the term “Cloud computing” is not clear. The expression cloud is commonly used
in science to describe a large collection of objects that visually appear from a distance as a cloud
and describes any set of things which have not been described in detail.
Cloud computing has evolved through a number of phases which includes Grid computing and
Utility computing, Application Service Provision (ASP), and software as a service (SaaS), but its
evolution started in 1950s with mainframe computing.
The 1950s
Multiple users were capable of accessing a central computer through dumb terminals, whose
only function was to provide access to the mainframe. Because of the costs to buy and maintain
mainframe computers, it was not practical for an organization to buy and maintain one for every
employee.
The typical user doesn't need the large storage capacity and processing power that a mainframe
provides. Hence, providing shared access to a single resource was the solution that made this
sophisticated technology practical.
The 1990s
In the 1990s, telecommunications companies, which had previously offered dedicated point-to-point
data circuits, began offering virtual private network (VPN) services with comparable quality of
service but at a lower cost. Network bandwidth could be used more effectively by switching
the traffic.
Cloud computing extends its boundaries to cover all servers as well as the network infrastructure.
As computers became more prevalent, scientists and technologists explored ways to make large-
scale computing power available to more users through time-sharing.
Since 2000
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for
deploying private clouds.
By mid-2008, cloud computing was used “to shape the relationship among consumers of IT
services, those who use IT services and those who sell them” and organizations switched from
company-owned hardware and software assets to per-use services-based models.
In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software
initiative known as OpenStack. The OpenStack project is intended to help organizations offer
cloud-computing services running on standard hardware.
On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet.
Among the various components of the Smarter Computing foundation, cloud computing is a
major component.
On June 7, 2012, Oracle announced the Oracle Cloud. This cloud offers access to an
integrated set of IT solutions, including the Applications (SaaS), Platform (PaaS), and
Infrastructure (IaaS) layers.
Characteristics/Capabilities of Clouds
The characteristics/capabilities associated with clouds that are considered essential and relevant
can be distinguished as non-functional, economic, and technological capabilities.
Non-Functional
1. Elasticity: It is a core feature of cloud systems and denotes the capability of the underlying
infrastructure to adapt to changing, non-functional requirements, for example, the amount and
size of data supported by an application, the number of concurrent users, etc.
2. Reliability: It is essential for all cloud systems in order to support today's data-centric
applications. Reliability denotes the capability to ensure constant operation of the system without
disruption, i.e., no loss of data.
3. Quality of Service (QoS) support: It is a relevant capability that is essential in many use cases
where specific requirements have to be met by the outsourced services and resources. In business
cases, basic QoS metrics like response time and throughput must be guaranteed, so as to ensure
that the quality guarantees made to the cloud user are met.
4. Agility and adaptability: These are essential features of cloud systems, related to the elastic
capabilities. They include on-time reaction to changes in the number of requests and size of
resources, and also adaptation to changes in environmental conditions, e.g., requiring different
types of resources, different quality of routes, etc.
5. Availability of services and data: It is an essential capability of cloud systems. It lies in the
ability to introduce redundancy for services and data so that the failures can be masked
transparently.
Economical
1. Cost reduction: It is one of the motivations for building a cloud system that can adapt to
consumer behavior and reduce the cost of infrastructure maintenance. Scalability and pay per
use are essential aspects of this issue. Note that setting up a cloud system incurs additional costs.
2. Pay per use: The capability to build up costs according to the actual consumption of resources
on the cloud is a relevant feature of cloud systems. Pay per use also relates to the quality of
service supported, where certain requirements must be met by the system.
3. Improved time to market: It is essential for small and medium enterprises that want to sell
their services quickly and easily, with little delay caused by acquiring and setting up the
infrastructure. The cloud can also support larger enterprises by providing infrastructure dedicated
to specific uses, thus reducing time to market.
4. Return on Investment (ROI): It is also essential for all investors and cannot always be
guaranteed; in fact, some cloud systems currently fail to achieve this aspect. Employing a cloud
system must ensure that the cost and effort put into it are balanced by its benefits in order to be
commercially viable; this may require direct (e.g., more customers) and indirect (e.g., benefits
from advertisements) ROI.
5. Going Green: It is relevant not only to reduce additional costs of energy consumption, but also
to reduce the carbon footprint.
Technological
2. Multi-tenancy: It is a highly essential issue in cloud systems, where the location of code or
data is unknown and the same resource may be assigned to multiple users. This affects
infrastructure resources as well as the data, applications, and services that are hosted.
3. Security, Privacy and Compliance: It is obviously essential in all systems dealing with
sensitive data and code.
4. Data Management: It is essential for storage clouds, where data is flexibly distributed across
multiple resources. Data consistency needs to be maintained over a wide distribution of
replicated data sources, and at the same time the system always needs to be aware of the data
location.
5. APIs or Programming Enhancements: These are essential to exploit cloud features. Common
programming models require the developer to take care of scalability and autonomic capabilities
himself, whereas a cloud environment provides features through which the user can leave such
management to the system.
Cloud Components
A cloud computing solution is made up of several elements, such as clients, the datacenter, and
distributed servers, as shown in the following figure. Each element has a purpose and plays a
specific role in delivering a functional cloud-based application.
Fig. Components of the cloud computing
1. Clients: Clients are the computers that sit on your desk, as well as laptops, tablet computers,
mobile phones, and PDAs (personal digital assistants), all big drivers for cloud computing
because of their mobility. Clients are the devices that end users interact with to manage their
information on the cloud.
a. Mobile devices: Mobile devices include PDAs and smartphones, like a BlackBerry, a Windows
Mobile smartphone, or an iPhone.
b. Thin Clients: These are computers that do not have internal hard drives, but rather let the
server do all the work and then display the information.
c. Thick Clients: This type of client is a regular computer, using a web browser like Firefox or
Internet Explorer to connect to the cloud.
Benefits of thin clients include:
Lower hardware costs: Thin clients are cheaper than thick clients because they do not
contain as much hardware. They also last longer before they need to be upgraded or become
obsolete.
Lower IT costs: Thin clients are managed at the server, and there are fewer points of failure.
Security: Since the processing takes place on the server and there is no hard drive, there is
less chance of malware invading the device. Also, since thin clients don't work without a
server, there is less chance of them being physically stolen.
Data security: Since data is stored on the server, there is less chance of data being lost if the
client computer crashes or is stolen.
Less power consumption: Thin clients consume less power than thick clients, so you will pay
less to power them.
2. Datacenter: The datacenter is the collection of servers where the application to which you
subscribe is housed. It could be a large room in the basement of your building or a room full of
servers on the other side of the world that you access via the Internet. A growing trend in the IT
world is virtualizing servers: software can be installed that allows multiple instances of
virtual servers to be used. In this way, you can have half a dozen virtual servers running on one
physical server.
3. Distributed Servers: The servers don't all have to be housed in the same location; often,
servers are in geographically disparate locations. But to you, the cloud subscriber, these servers
act as if they're humming away right next to each other.
This gives the service provider more flexibility in options and security. For instance, Amazon
has its cloud solution on servers all over the world. If something were to happen at one site,
causing a failure, the service could still be accessed through another site.
1. Amazon: Amazon was one of the first companies to offer cloud services to the public.
Amazon offers a number of cloud services, including:
Elastic Compute Cloud (EC2), which offers virtual machines and extra CPU cycles for your
organization.
Simple Storage Service (S3), which allows you to store items up to 5 GB in size in Amazon's
virtual storage service.
Simple Queue Service (SQS), which allows your machines to talk to each other using a
message-passing API.
SimpleDB, a web service for running queries on structured data in real time.
2. Google: In contrast to Amazon's offerings is Google's App Engine. On Amazon you get root
privileges, whereas on App Engine you can't even write a file in your own directory. Google
removed the file-write feature from Python as a security measure, and to store data you must use
Google's database.
Google offers online documents and spreadsheets, and encourages developers to build features
for those and other online software, using its Google App Engine. Google also offers handy
debugging features.
3. Microsoft: Microsoft's cloud offerings include:
Windows Azure provides service hosting and management and low-level scalable storage,
computation, and networking.
Microsoft SQL Services provides database services and reporting.
Microsoft .NET Services provides service-based implementations of .NET Framework
concepts such as workflow.
Live Services are used to share, store, and synchronize documents, photos, and files across
PCs, phones, PC applications, and websites.
Microsoft SharePoint Services and Microsoft Dynamics CRM Services are used for business
content, collaboration, and solution development in the cloud.
Virtualization
Fig. Virtualization
Virtualization is one of the core technologies used by cloud computing. Virtualization has
been around for more than 40 years, but its application has always been limited. Virtualization is
a technology that creates distinct computing environments, termed virtual machines. It is a key
technology used in datacenters to optimize resources. As IT needs continue to evolve,
virtualization can no longer be regarded as an isolated technology for solving a single problem.
Virtualization technologies are also used for replicating the runtime environment of programs; in
this case the programs are executed not by the OS itself but by the virtual machine.
Front End
The front end is used by the client. It contains the client-side interfaces and applications that are
required to access cloud computing platforms. The front end includes web browsers (such as
Chrome, Firefox, Internet Explorer, etc.), thin and fat clients, tablets, and mobile devices.
Back End
The back end is used by the service provider. It manages all the resources that are required to
provide cloud computing services. It includes a huge amount of data storage, security
mechanism, virtual machines, deploying models, servers, traffic control mechanisms, etc.
1. Client Infrastructure
Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to
interact with the cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
Cloud services manage which type of service you access according to the client's requirements.
Cloud computing offers the following three types of services:
i. Software as a Service (SaaS): It is also known as cloud application services. Most SaaS
applications run directly in the web browser, meaning we do not need to download and install
them. Some important examples of SaaS are given below.
Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.
ii. Platform as a Service (PaaS): It is also known as cloud platform services. It is quite similar to
SaaS, but the difference is that PaaS provides a platform for software creation, whereas with
SaaS we simply access software over the Internet without the need for any platform.
Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.
iii. Infrastructure as a Service (IaaS): It is also known as cloud infrastructure services. The
provider supplies the underlying infrastructure, while the customer manages the applications,
data, middleware, and runtime environment.
Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod.
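The practical difference between the three service models is who manages each layer of the stack. The sketch below encodes the usual textbook split (the layer names and boundaries are a common simplification, not any vendor's official responsibility matrix):

```python
# Who manages each layer under SaaS, PaaS, and IaaS.
# The split below is a common textbook simplification, not a
# vendor-specific responsibility matrix.
LAYERS = ["application", "data", "runtime", "middleware",
          "os", "virtualization", "servers", "storage", "networking"]

# Index of the first layer the *provider* manages; everything before
# that index is the customer's responsibility.
PROVIDER_MANAGES_FROM = {"saas": 0, "paas": 2, "iaas": 5}

def managed_by_customer(model):
    """Return the layers the customer manages under the given model."""
    start = PROVIDER_MANAGES_FROM[model]
    return LAYERS[:start]

print(managed_by_customer("iaas"))
# Under IaaS the customer manages the application, data, runtime,
# middleware, and OS layers, while the provider manages the rest.
```

Under SaaS the customer manages nothing in this list, which matches the description above: the complete application is delivered as a service.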
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.
5. Storage
Storage is one of the most important components of cloud computing. It provides a huge amount
of storage capacity in the cloud to store and manage data.
6. Infrastructure
It provides services on the host level, application level, and network level. Cloud infrastructure
includes hardware and software components such as servers, storage, network devices,
virtualization software, and other storage resources that are needed to support the cloud
computing model.
7. Management
Management is used to manage components such as application, service, runtime cloud, storage,
infrastructure, and other security issues in the backend and establish coordination between them.
8. Security
Security is an in-built back end component of cloud computing. It implements a security
mechanism in the back end.
9. Internet
The Internet is the medium through which the front end and back end interact and communicate
with each other.
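The front end / back end split can be sketched as a request–response exchange. This is an in-process simulation (the BackEnd and FrontEnd classes and the stored file are invented for illustration); a real deployment would carry these messages as HTTP requests over the Internet:

```python
# In-process sketch of the front-end / back-end interaction.
# A real system would carry these messages as HTTP over the Internet.

class BackEnd:
    """Service-provider side: holds storage and serves requests."""
    def __init__(self):
        self.storage = {"report.txt": "quarterly figures"}

    def handle(self, request):
        action, key = request["action"], request["key"]
        if action == "get":
            return {"status": 200, "body": self.storage.get(key)}
        return {"status": 400, "body": "unsupported action"}

class FrontEnd:
    """Client side: the interface the user interacts with."""
    def __init__(self, backend):
        self.backend = backend  # stands in for the Internet link

    def open_file(self, name):
        response = self.backend.handle({"action": "get", "key": name})
        return response["body"]

cloud = BackEnd()
client = FrontEnd(cloud)
print(client.open_file("report.txt"))
```

The client never touches the storage directly; every access goes through the back end's request handler, which is where the security and traffic-control mechanisms described above would sit.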
Public Cloud
1) Public clouds are owned and operated by third parties.
2) A public cloud gives each individual client an attractive low-cost, "pay-as-you-go" model.
3) All customers share the same infrastructure pool, with limited variance in configuration,
security protection, and availability. These are managed and supported by the cloud
providers.
4) The public cloud (also known as the external cloud) is the traditional model, where services
are provided by a third party via the Internet and are visible to everybody.
5) One advantage of a public cloud is that it is larger in size and hence provides the ability to
scale seamlessly on demand.
Private Cloud
1) Private clouds are built exclusively for a single enterprise.
2) They aim to address concerns about data security and offer greater control, which is lacking
in a public cloud.
3) There are two variations of a private cloud:
maintenance of large volumes of data is possible. Sudden workload spikes are also managed
effectively and efficiently.
Disadvantages:
No longer in control: When you use the cloud, you are handing over your data and
information to someone on the cloud. Hence there are chances of the data being misused.
No redundancy: A cloud server is neither redundant nor backed up. Whatever data is on the
cloud, no backup is maintained. It is therefore recommended not to rely 100% on the
cloud and to maintain redundant copies of the data.
Compatibility: Not every existing tool, software package, and computer is compatible with
the web-based service, platform, or infrastructure.
Unpredicted costs: The cloud can reduce staff and hardware costs, but the price could end
up being more than you bargained for.
Security concerns
1. Data Loss: Data loss is one of the most common security risks of cloud computing. It is also
known as data leakage. Data loss is the process in which data is deleted, corrupted, or rendered
unreadable by a user, software, or application. In a cloud computing environment, data loss
occurs when sensitive data falls into somebody else's hands, when one or more data elements
cannot be utilized by the data owner, when a hard disk is not working properly, or when software
is not updated.
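One standard defence against silent corruption of stored data is keeping a cryptographic digest alongside each object and re-checking it on retrieval. A minimal sketch using Python's standard `hashlib` (the sample records are invented for illustration):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that can be stored alongside the data."""
    return hashlib.sha256(data).hexdigest()

def is_intact(data: bytes, stored_digest: str) -> bool:
    """Re-hash the retrieved data and compare with the stored digest."""
    return fingerprint(data) == stored_digest

original = b"sensitive customer records"
digest = fingerprint(original)            # kept by the data owner
corrupted = b"sensitive customer recorbs" # one flipped byte

print(is_intact(original, digest))   # True
print(is_intact(corrupted, digest))  # False
```

The digest does not prevent loss, but it lets the data owner detect corruption instead of trusting the provider blindly; backups then provide the recovery path.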
2. Hacked Interfaces and Insecure APIs: As we all know, cloud computing depends completely
on the Internet, so it is compulsory to protect the interfaces and APIs (Application Programming
Interfaces) that are used by external users. APIs are the easiest way to communicate with most
cloud services. In cloud computing, a few services are available in the public domain. These
services can be accessed by third parties, so there is a chance that they may be easily harmed or
hacked by hackers.
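A common way to protect such APIs is to sign each request with a shared secret, so the back end can reject requests that were forged or tampered with in transit. A minimal sketch using Python's standard `hmac` module (the secret, path, and message layout are invented for illustration, not any real provider's scheme):

```python
import hmac
import hashlib

SECRET = b"shared-api-key"  # hypothetical key agreed with the provider

def sign(method: str, path: str, body: str) -> str:
    """Sign the request so the server can verify it was not tampered with."""
    message = f"{method}\n{path}\n{body}".encode()
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(method: str, path: str, body: str, signature: str) -> bool:
    # compare_digest avoids leaking information via timing side channels
    return hmac.compare_digest(sign(method, path, body), signature)

sig = sign("POST", "/v1/storage", '{"file": "a.txt"}')
print(verify("POST", "/v1/storage", '{"file": "a.txt"}', sig))  # True
print(verify("POST", "/v1/storage", '{"file": "b.txt"}', sig))  # False
```

Any change to the method, path, or body invalidates the signature, which is why real provider APIs sign these fields (plus a timestamp, to block replay attacks) rather than just the body.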
3. Data Breach: A data breach is the process in which confidential data is viewed, accessed, or
stolen by a third party without any authorization, so the organization's data falls into the hands
of hackers.
4. Vendor lock-in: Vendor lock-in is one of the biggest security risks in cloud computing.
Organizations may face problems when transferring their services from one vendor to another.
As different vendors provide different platforms, this can cause difficulty in moving from one
cloud to another.
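One common mitigation for lock-in is to keep application code behind a provider-neutral interface, so only a thin adapter changes when switching vendors. A sketch of the idea (both "vendor SDKs" below are invented stand-ins, not real libraries):

```python
# Adapter-pattern sketch as a hedge against vendor lock-in.
# VendorAStorage and VendorBStorage are made-up stand-ins for two
# providers' SDKs; application code talks only to the adapters.

class VendorAStorage:                     # hypothetical provider A SDK
    def __init__(self): self._data = {}
    def put(self, key, value): self._data[key] = value
    def get(self, key): return self._data[key]

class VendorBStorage:                     # hypothetical provider B SDK
    def __init__(self): self._blobs = {}
    def upload(self, name, blob): self._blobs[name] = blob
    def download(self, name): return self._blobs[name]

class VendorAAdapter:
    """Wraps vendor A's put/get API behind a common save/load interface."""
    def __init__(self): self.client = VendorAStorage()
    def save(self, key, value): self.client.put(key, value)
    def load(self, key): return self.client.get(key)

class VendorBAdapter:
    """Wraps vendor B's upload/download API behind the same interface."""
    def __init__(self): self.client = VendorBStorage()
    def save(self, key, value): self.client.upload(key, value)
    def load(self, key): return self.client.download(key)

def archive(storage, key, value):
    """Application code: works unchanged with either provider."""
    storage.save(key, value)
    return storage.load(key)

print(archive(VendorAAdapter(), "invoice", "42.00"))
print(archive(VendorBAdapter(), "invoice", "42.00"))
```

The adapters do not remove lock-in (data egress costs and proprietary features remain), but they confine the migration work to one small layer.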
5. Increased complexity strains IT staff: Migrating, integrating, and operating cloud services is
complex for IT staff. The IT staff must have additional capabilities and skills to manage,
integrate, and maintain data in the cloud.
6. Spectre & Meltdown: Spectre and Meltdown are hardware vulnerabilities that allow programs
to view and steal data currently being processed on a computer. They can be exploited on
personal computers, mobile devices, and in the cloud, exposing passwords and personal
information such as images, emails, and business documents held in the memory of other
running programs.
7. Denial of Service (DoS) attacks: Denial of service (DoS) attacks occur when the system
receives more traffic than the server can buffer. DoS attackers mostly target the web servers of
large organizations such as banks, media companies, and government organizations. Recovering
from such an attack costs a great deal of time and money.
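One basic server-side defence against traffic floods is rate limiting. The token-bucket sketch below drops requests once a client exceeds its allowed rate (a teaching sketch, not a complete DoS mitigation; timestamps are injected as arguments so the behaviour is deterministic):

```python
# Minimal token-bucket rate limiter, one common server-side defence
# against traffic floods. A sketch, not a complete DoS mitigation.

class TokenBucket:
    def __init__(self, capacity, refill_per_second):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_second
        self.last = 0.0  # timestamps passed in for determinism

    def allow(self, now):
        # add tokens for the time elapsed, capped at capacity
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request dropped: client is over its rate

bucket = TokenBucket(capacity=3, refill_per_second=1)
burst = [bucket.allow(now=0.0) for _ in range(5)]
print(burst)                   # burst beyond capacity is dropped
print(bucket.allow(now=2.0))   # tokens refill as time passes
```

In production this logic typically runs per client address in a load balancer or reverse proxy, so a flood is absorbed before it reaches the application servers.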
8. Account hijacking: Account hijacking is a serious security risk in cloud computing. It is the
process in which an individual user's or an organization's cloud account (bank account, e-mail
account, or social media account) is stolen by hackers, who then use the stolen account to
perform unauthorized activities.
Benefits
1. Reduced IT costs: Moving to cloud computing may reduce the cost of managing and
maintaining your IT systems. Rather than purchasing expensive systems and equipment for your
business, you can reduce your costs by using the resources of your cloud computing service
provider. You may be able to reduce your operating costs because:
- the cost of system upgrades, new hardware and software may be included in your
contract
- you no longer need to pay wages for expert staff
- your energy consumption costs may be reduced
- there are fewer time delays.
2. Scalability: Your business can scale up or scale down your operation and storage needs
quickly to suit your situation, allowing flexibility as your needs change. Rather than purchasing
and installing expensive upgrades yourself, your cloud computer service provider can handle this
for you. Using the cloud frees up your time so you can get on with running your business.
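Behind the scenes, scaling up and down is usually driven by a simple feedback rule on observed load. A toy sketch of such an autoscaling policy (the thresholds are arbitrary illustrative values, not any provider's defaults):

```python
# Toy autoscaling rule of the kind a provider applies on your behalf:
# add servers when average utilisation is high, remove them when low.
# The 80% / 30% thresholds are arbitrary illustrative values.

def scale(servers, utilisation, high=0.80, low=0.30):
    """Return the new server count for the observed average utilisation."""
    if utilisation > high:
        return servers + 1          # scale up to absorb load
    if utilisation < low and servers > 1:
        return servers - 1          # scale down to cut cost
    return servers                  # load is in the comfortable band

servers = 2
for load in [0.9, 0.95, 0.5, 0.2, 0.1]:
    servers = scale(servers, load)
    print(servers)
```

The fleet grows during the load spike and shrinks back afterwards, which is exactly the pay-for-what-you-use elasticity described above.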
3. Business continuity: Protecting your data and systems is an important part of business
continuity planning. Whether you experience a natural disaster, power failure or other crisis,
having your data stored in the cloud ensures it is backed up and protected in a secure and safe
location. Being able to access your data again quickly allows you to conduct business as usual,
minimising any downtime and loss of productivity.
5. Flexibility of work practices: Cloud computing allows employees to be more flexible in their
work practices. For example, you have the ability to access data from home, on holiday, or via
the commute to and from work (providing you have an internet connection). If you need access
to your data while you are off-site, you can connect to your virtual office, quickly and easily.
6. Access to automatic updates: Access to automatic updates for your IT requirements may be
included in your service fee. Depending on your cloud computing service provider, your system
will regularly be updated with the latest technology. This could include up-to-date versions of
software, as well as upgrades to servers and computer processing power.
Cloud Environment Roles
SaaS(Software as a Service)
In this model, a complete application is offered to the customer as a service on demand.
A single instance of the service runs on the cloud and multiple end users are serviced.
On the customer's side, there is no need for an upfront investment in servers or software
licenses, while for the provider the costs are lowered, since only a single application needs
to be hosted and maintained.
SaaS is offered by many companies such as Google, Salesforce, Microsoft, Zoho, etc.
PaaS(Platform as a Service)
A layer of software or a development environment is encapsulated and offered as a
service on demand, upon which other, higher levels of service can be built.
The customer has the freedom to build his own applications, which then run on the
provider's infrastructure.
To meet the manageability and scalability requirements of the application, PaaS providers
offer a predefined combination of OS and application servers, such as the LAMP platform
(Linux, Apache, MySQL and PHP), etc.
Google App Engine, Force.com, etc. are some popular PaaS examples.
IaaS(Infrastructure as a Service)
IaaS provides basic storage and computing capabilities over the network. Servers, storage
systems, networking equipment, data center space, etc. are pooled and made available to handle
the workload. The customer can deploy his own software on the infrastructure. Some common
examples of companies which provide IaaS are Amazon, GoGrid, 3Tera, etc.
Cloud Computing vs. Distributed Computing

Advantages
Cloud computing: Reduced costs; increased scalability; increased availability and reliability.
Distributed computing: Openness; transparency; scalability.

Types
Cloud computing: Public clouds, private clouds, community clouds, hybrid clouds.
Distributed computing: Distributed computing systems, distributed information systems,
distributed pervasive systems.

Characteristics
Cloud computing: It provides a shared pool of configurable computing resources; an on-demand
network model is used to provide access; the clouds are provisioned by the service providers; it
provides broad network access.
Distributed computing: A task is distributed amongst different machines for the computation job
at the same time; technologies such as remote procedure calls and remote method invocation are
used to construct distributed computations.

Disadvantages
Cloud computing: More elasticity means less control, especially in the case of public clouds;
restrictions on available services may be faced, as this depends upon the cloud provider.
Distributed computing: Higher level of failure of nodes than a dedicated parallel machine; a few
of the algorithms are not able to cope with slow networks; the nature of the computing job may
present too much overhead.
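The remote procedure call mechanism named above can be sketched in-process: the caller names a procedure and its arguments, a stub serialises them, and a dispatcher on the "remote" side looks the procedure up and executes it (a real RPC framework would carry the serialised call over a network):

```python
# In-process sketch of a remote procedure call: the client names a
# procedure and arguments, they are serialised as if crossing a
# network, and a dispatcher on the remote side executes the call.
import json

REGISTRY = {}

def remote(func):
    """Register a function so it can be invoked by name."""
    REGISTRY[func.__name__] = func
    return func

@remote
def add(a, b):
    return a + b

def rpc_call(name, *args):
    # serialise as JSON to mimic crossing a network boundary
    wire = json.dumps({"proc": name, "args": args})
    request = json.loads(wire)          # "received" on the remote side
    return REGISTRY[request["proc"]](*request["args"])

print(rpc_call("add", 2, 3))  # 5
```

Real frameworks (e.g. gRPC, Java RMI) add transports, type-safe stubs, and failure handling on top of this same name-marshal-dispatch pattern.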
In determining a strategic approach, suppliers and business customers should carefully consider
the following key data protection legal issues.
1. Liability: Cloud providers can be held liable for illegal data they may be hosting. An escape
route exists: providers bear no liability for services that "consist of" the storage of electronic
information, on the condition that the provider has no knowledge or awareness of its illegal
nature and removes or blocks the illegal data when it does gain such knowledge or awareness.
2. Law: Laws or regulations typically specify who within an enterprise should be held
responsible and accountable for data accuracy and security.
3. Compliance: The intermediary should follow the duties mentioned below:
(a) The intermediary shall publish the rules and regulations, privacy policy and user agreement
for access or usage of the intermediary's computer resources by any person.
(b) Such rules and regulations, terms and conditions or user agreement shall inform users of the
computer resource not to host, display, upload, modify, publish, transmit, update or share any
prohibited information; if such hosting is reported, action must be taken within 36 hours.
4. Data Portability: Data portability can be loosely described as the free flow of people's
personal information across the Internet, within their control. It has now become a standard term
in the internet industry in the context of cloud computing, open standards and privacy. Examples
could include being able to import all your social network connections, the ability to reuse your
health records while visiting different doctors, etc.
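In practice, data portability usually means being able to export records in an open, structured format that another service can import. A small sketch using JSON (the contact records are invented for illustration):

```python
# Data-portability sketch: export records in an open, structured
# format (JSON) so a different service can import them losslessly.
import json

contacts = [
    {"name": "Asha", "email": "asha@example.com"},
    {"name": "Ravi", "email": "ravi@example.com"},
]

exported = json.dumps(contacts, indent=2)  # what the user takes away
imported = json.loads(exported)            # what the next service reads

print(imported == contacts)  # True: nothing lost in the round trip
```

Open formats like JSON, CSV, or vCard are what make the social-network and health-record examples above feasible: the receiving system can parse the export without the original provider's cooperation.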