Main Documentation

Wired and wireless networks are the two main types of local area networks. Wired networks connect devices using Ethernet cables, offering connection speeds of 100 Mbps or higher, while wireless networks use radio waves to connect devices without cables, providing mobility but shorter range. Wireless sensor networks are a type of ad hoc wireless network composed of sensor nodes that collect and transmit environmental data to a base station. Due to the proliferation of cyber attacks, improving the robustness of wireless sensor networks has become important. This involves generating scale-free network topologies that consider constraints such as communication range and maximum node degree, and proposing algorithms such as ROSE to enhance robustness against malicious attacks targeting important nodes.

Uploaded by

pvssivaprasadcse
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
49 views85 pages

Main Documentation

Wired and wireless networks are the two main types of local area networks. Wired networks connect devices using Ethernet cables for faster connection speeds up to 100 Mbps or higher, while wireless networks use radio waves to connect devices without cables, providing mobility but shorter range. Wireless sensor networks are a type of ad hoc wireless network comprised of sensor nodes that collect and transmit environmental data to a base station. Due to the proliferation of cyber attacks, improving the robustness of wireless sensor networks has become important. This involves generating scale-free network topologies that consider constraints like communication range and maximum node degree, and proposing algorithms like ROSE to enhance robustness against malicious attacks targeting important nodes.

Uploaded by

pvssivaprasadcse
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 85

1. INTRODUCTION
1.1 Introduction
 Wired networks, also called Ethernet networks, are the most common type of local

area network (LAN) technology. A wired network is simply a collection of two or

more computers, printers, and other devices linked by Ethernet cables.

Fig 1.1: Wired Network

Ethernet is the fastest wired network protocol, with connection speeds of 10 megabits

per second (Mbps) to 100 Mbps or higher. Wired networks can also be used as part of

other wired and wireless networks. To connect a computer to a network with an

Ethernet cable, the computer must have an Ethernet adapter (sometimes called a

network interface card, or NIC). Ethernet adapters can be internal (installed in a

computer) or external (housed in a separate case).

 A wireless network, which uses high-frequency radio waves rather than wires to

communicate between nodes, is another option for home or business networking.

Individuals and organizations can use this option to expand their existing wired

network or to go completely wireless. Wireless networking allows devices to be shared without networking cables, which increases mobility but decreases range. There are two main types of wireless networking: peer-to-peer (ad hoc) and infrastructure.

Communication in a wireless network is very similar to two-way radio communication. First, a computer's wireless adapter converts the data into radio signals and transmits them through an antenna. A wireless router then receives the signal and decodes it.

Fig 1.2: Wireless Network

An ad-hoc or peer-to-peer wireless network consists of a number of computers each

equipped with a wireless networking interface card. Each computer can communicate

directly with all of the other wireless enabled computers.

 An infrastructure wireless network consists of an access point or a base station. In

this type of network the access point acts like a hub, providing connectivity for the

wireless computers. It can connect or bridge the wireless LAN to a wired LAN,

allowing wireless computer access to LAN resources, such as file servers or existing

Internet Connectivity.

Fig 1.3: Infrastructure Network

Wireless sensor networks (WSNs) are a particular type of ad hoc network, comprised mainly of a large number of deployed sensor nodes with limited resources and one or more base stations (BSs), or sinks, which typically serve as the access point for the user or as a gateway to another network. Nodes can collect and transmit (over wireless links) environmental data in an autonomous manner. Each node in a WSN plays two roles: it collects data and routes data back to the base station.

Fig 1.4: Wireless Sensor Networks

1.2 Classification of Attacks in Wireless Sensor Networks

Classification of the attacks in wireless sensor networks consists of distinguishing passive attacks from active attacks.

A passive attack (eavesdropping) is limited to listening to and analysing the exchanged traffic. This type of attack is easier to carry out and difficult to detect, since the attacker does not make any modification to the exchanged information. The attacker's intention may be to learn confidential information or to identify the significant nodes in the network by analysing routing information in preparation for an active attack.

In an active attack, an attacker tries to remove or modify the messages transmitted on the network. He can also inject his own traffic or replay old messages to disturb the operation of the network or to cause denial of service.

1.3 Summary
Due to the recent proliferation of cyber-attacks, improving the robustness of wireless sensor networks (WSNs) so that they can withstand node failures has become a critical issue. Scale-free WSNs are important because they tolerate random attacks very well; however, they can be vulnerable to malicious attacks, which particularly target certain important nodes. To address this shortcoming, this project presents a new modelling strategy to generate scale-free network topologies, which considers the constraints in WSNs, such as the communication range and the threshold on the maximum node degree. Then, ROSE, a novel robustness-enhancing algorithm for scale-free WSNs, is proposed.

2. Literature Survey

Introduction: To understand the background and related work in this particular field, in this chapter we discuss various research works.

1. Energy Efficient Trust Based Routing and Attack Detection in WSN

Energy-efficient routing in wireless sensor networks (WSNs) has been studied widely to enhance network performance. Various nature-inspired routing mechanisms have been proposed to achieve scalable solutions. However, conventional nature-inspired optimization algorithms are insufficient to solve discrete routing optimization problems. In this study, a new discrete particle swarm optimization (PSO) based routing protocol is designed to achieve better performance. In the new protocol, firstly, two new fitness functions with energy awareness are formulated for clustering and routing, respectively. Secondly, a novel greedy discrete PSO with memory is put forward to build the optimal routing tree. The particle's position and velocity are redefined under a discrete scenario; particle update rules are reconsidered based on the network topology; and a greedy search strategy is designed to drive particles to find better positions quickly. Besides, search histories are memorized to accelerate convergence. Simulation results show the efficiency and effectiveness of the new protocol.

2. Wireless Sensor Network Virtualization: A Survey

Technological advancements in the silicon industry, as predicted by Moore's law, have


enabled integration of billions of transistors on a single chip. To exploit this high
transistor density for high performance, embedded systems are undergoing a transition
from single-core to multi-core. Although a majority of embedded wireless sensor
networks (EWSNs) consist of single-core embedded sensor nodes, multi-core
embedded sensor nodes are envisioned to burgeon in selected application domains that
require complex in-network processing of the sensed data. In this paper, we propose an
architecture for heterogeneous hierarchical multi-core embedded wireless sensor
networks (MCEWSNs) as well as an architecture for multi-core embedded sensor
nodes used in MCEWSNs. We elaborate several compute-intensive tasks performed by
sensor networks and application domains that would especially benefit from multi-core
embedded sensor nodes. This paper also investigates the feasibility of two multi-core
architectural paradigms, symmetric multiprocessors (SMPs) and tiled many-core architectures (TMAs), for MCEWSNs. We compare and analyze the performance of an SMP (an Intel-based
SMP) and a TMA (Tilera's TILEPro64) based on a parallelized information fusion
application for various performance metrics (e.g., runtime, speedup, efficiency, cost,
and performance per watt). Results reveal that TMAs exploit data locality effectively
and are more suitable for MCEWSN applications that require integer manipulation of
sensor data, such as information fusion, and have little or no communication between
the parallelized tasks. To demonstrate the practical relevance of MCEWSNs, this paper
also discusses several state-of-the-art multi-core embedded sensor node prototypes
developed in academia and industry.

3. Event-Aware Backpressure Scheduling Scheme for Emergency Internet of


Things

The backpressure scheduling scheme has been applied in Internet of Things, which
can control the network congestion effectively and increase the network throughput.
However, in large-scale Emergency Internet of Things (EIoT), emergency packets
may exist because of the urgent events or situations. The traditional backpressure
scheduling scheme will explore all the possible routes between the source and
destination nodes, which causes a superfluously long path for packets. Therefore, the end-to-end delay increases and the real-time performance of emergency packets cannot be
guaranteed. To address this shortcoming, this paper proposes EABS, an event-aware
backpressure scheduling scheme for EIoT. A backpressure queue model with
emergency packets is first devised based on the analysis of the arrival process of
different packets. Meanwhile, EABS combines the shortest path with backpressure
scheme in the process of next hop node selecting. The emergency packets are
forwarded in the shortest path and avoid the network congestion according to the
queue backlog difference. The extensive experiment results verify that EABS can
reduce the average end-to-end delay and increase the average forwarding percentage.
For the emergency packets, the real-time performance is guaranteed. Moreover, we
compare EABS with two existing backpressure scheduling schemes, showing that
EABS outperforms both of them.

4. Optimization of network robustness to waves of targeted and
random attacks
We consider continuous time Hopfield-like recurrent networks as dynamical models
for gene regulation and neural networks. We are interested in networks that
contain n high-degree nodes preferably connected to a large number of Ns weakly connected satellites, a property that we call n/Ns-centrality. If the hub dynamics is
slow, we obtain that the large time network dynamics is completely defined by the
hub dynamics. Moreover, such networks are maximally flexible and switchable, in the
sense that they can switch from a globally attractive rest state to any structurally
stable dynamics when the response time of a special controller hub is changed. In
particular, we show that a decrease of the controller hub response time can lead to a
sharp variation in the network attractor structure: we can obtain a set of new local
attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an
algorithm, which allows us to design networks with the desired switching properties,
or to learn them from time series, by adjusting the interactions between hubs and
satellites. Such switchable networks could be used as models for context dependent
adaptation in functional genetics or as models for cognitive functions in neuroscience.

5. Error and Attack Tolerance

Many complex systems display a surprising degree of tolerance against errors. For
example, relatively simple organisms grow, persist and reproduce despite drastic
pharmaceutical or environmental interventions, an error tolerance attributed to the
robustness of the underlying metabolic network. Complex communication networks
display a surprising degree of robustness: although key components regularly
malfunction, local failures rarely lead to the loss of the global information-carrying
ability of the network. The stability of these and other complex systems is often
attributed to the redundant wiring of the functional web designed by the systems
components. Here we demonstrate that error tolerance is not shared by all redundant
systems: it is displayed only by a class of in homogeneously wired networks,called
scale-free networks, which include the World-Wide Web, the Internet, social
networks and cells. We end that such networks display an unexpected degree of
robustness, the ability of their nodes to communicate being unaffected even by un-

7
realistically high failure rates. However, error tolerance comes at a high price in that
these networks are extremely vulnerable to attacks (that is, to the selection and
removal of a few nodes that play a vital role in maintaining the network's
connectivity). Such error tolerance and attack vulnerability are generic properties of
communication networks.

6. Evaluating Temporal Robustness of Mobile Networks

The application of complex network models to communication systems has led


to several important results: nonetheless, previous research has often neglected to
take into account their temporal properties, which in many real scenarios play a
pivotal role. At the same time, network robustness has come extensively under
scrutiny. Understanding whether networked systems can undergo structural
damage and yet perform efficiently is crucial to both their protection against
failures and to the design of new applications. In spite of this, it is still unclear
what type of resilience we may expect in a network which continuously changes
over time. In this work, we present the first attempt to define the concept of
temporal network robustness: we describe a measure of network robustness for
time-varying networks and we show how it performs on different classes of
random models by means of analytical and numerical evaluation. Finally, we
report a case study on a real-world scenario, an opportunistic vehicular system of
about 500 taxicabs, highlighting the importance of time in the evaluation of
robustness. Particularly, we show how static approximation can wrongly indicate
high robustness of fragile networks when adopted in mobile time-varying
networks, while a temporal approach captures more accurately the system
performance.

Summary: Going through all this literature reveals a large gap. To address it, ROSE, a novel robustness-enhancing algorithm for scale-free WSNs, is proposed. Given a scale-free topology, ROSE exploits the position and degree information of nodes to rearrange the edges to resemble an onion-like structure, which has been proven to be robust against malicious attacks. Meanwhile, ROSE keeps the degree of each node in the topology unchanged, such that the resulting topology remains scale-free. Extensive experimental results verify that our new modelling strategy indeed generates scale-free network topologies for WSNs, and that ROSE can significantly improve the robustness of the network topologies generated by our modelling strategy.

3. Project

3.1 Existing System:


In order to generate the power-law distribution of node degrees in scale-free networks, Barabási and Albert proposed the so-called Barabási-Albert (BA) model, which applies the following two criteria to achieve a scale-free topology.

1. Growth: new nodes join the network one by one.

2. Preferential attachment: a newly joined node connects to an existing node with a probability that is proportional to the degree of the existing node. This leads to the phenomenon that the more connections a node has, the more likely it is that this node receives new connections. This phenomenon is also known as the "Matthew effect."

The two criteria above can successfully generate the power-law distribution of node degrees, as proved by Barabási and Albert.
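To make the two criteria concrete, the following is a minimal sketch of plain BA-style growth in Python. It deliberately ignores the WSN constraints (communication range, maximum degree) that the rest of this chapter addresses, and all names are our own:

```python
import random

def ba_network(n_total, m, seed_size=3):
    """Grow a topology with the two BA criteria: growth (nodes join one
    by one) and preferential attachment (a new node links to existing
    nodes with probability proportional to their degree)."""
    # Start from a small, fully connected seed.
    edges = {(i, j) for i in range(seed_size) for j in range(i + 1, seed_size)}
    # Each node appears in this list once per incident edge, so a
    # uniform draw from it is a degree-proportional draw: the more
    # connections a node has, the more often it is picked (the
    # "Matthew effect").
    endpoints = [v for e in edges for v in e]
    for new in range(seed_size, n_total):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(endpoints))
        for t in sorted(targets):
            edges.add((t, new))
            endpoints += [t, new]
    return edges
```

Plotting the degree distribution of the result on a log-log scale should show the heavy tail characteristic of a power law: most nodes keep the minimum degree m while a few hubs accumulate many links.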

Because of the limited communication range in WSNs, each node does not have
sufficient neighbours and cannot establish very many edges. Hence, the preferential
attachment property of the BA model cannot be simulated directly in WSNs. In recent
years, many researchers have focused on the application of scale-free network
topologies in WSNs. In scale-free networks, a small number of nodes have very high
degrees, which renders these networks vulnerable to malicious attacks. When a node
with high degree fails, the large number of edges incident on it are removed at the same
time. The entire network topology is thus quickly fragmented. Therefore, the main
purpose of this study was to improve the robustness of scale-free networks in WSNs
against malicious attacks. The addition of edges or relay nodes can directly solve or
alleviate this problem. However, additional edges destroy the original scale-free property and consume additional energy. Therefore, the process of optimizing network robustness against malicious attacks cannot change the degree distribution of the initial network topology.

Herrmann proposed a hill-climbing algorithm based on the robustness metric R, which makes the network topologies resemble a stable onion-like structure through swapping edges. However, the multimodal phenomenon may prevent the algorithm from jumping out of a local optimum.
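For reference, the robustness metric R used above (due to Schneider et al.) averages the relative size of the largest connected component while nodes are removed one by one in decreasing order of degree. A small illustrative implementation, with our own function names, for intuition only:

```python
from collections import defaultdict

def largest_component(alive, edges):
    """Size of the largest connected component among the 'alive' nodes."""
    adj = defaultdict(set)
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        stack, size = [start], 0
        while stack:
            x = stack.pop()
            size += 1
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        best = max(best, size)
    return best

def robustness_R(nodes, edges):
    """R = (1/N) * sum over Q of s(Q), where s(Q) is the fraction of
    nodes in the largest component after the Q highest-degree nodes
    have been removed (a degree-targeted, malicious attack)."""
    n = len(nodes)
    alive = set(nodes)
    total = 0.0
    for _ in range(n):
        deg = {v: 0 for v in alive}
        for u, v in edges:
            if u in alive and v in alive:
                deg[u] += 1
                deg[v] += 1
        alive.remove(max(alive, key=deg.get))  # attack the current hub
        total += largest_component(alive, edges) / n
    return total / n
```

A star (one hub) scores far lower than a ring of the same size, which matches the intuition that hub-centred topologies are fragile under targeted attacks.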

3.2 Proposed System:

This project is designed to be processed in a centralized system. Before ROSE operates, each node sends its own coordinates and neighbour list to the centralized system over multi-hop communication. After the optimization results are obtained by ROSE, the centralized system sends the new neighbour list to each node over the same multi-hop communication. To help explain ROSE, we first introduce the basic ideas of ROSE in this section; the details of ROSE are provided in the following section.

In general, the design of ROSE is based on the observation of Schneider that graphs exhibiting an onion-like structure are robust to malicious attacks. Schneider described the onion-like structure as a structure "consisting of a core of highly connected nodes hierarchically surrounded by rings of nodes with decreasing degrees." Note that Schneider validated this observation only by extensive simulations. One year later, the theoretical analysis supporting this observation was provided by Tanizawa. Since the onion-like structure encompasses a family of network topologies, Tanizawa analyzed one specific topology of this family, called the "interconnected random regular graphs," and proved its robustness against malicious attacks.

Given that the above observation about the onion-like structure has been validated both experimentally and theoretically, the ROSE algorithm aims to transform network topologies to exhibit the onion-like structure. Specifically, ROSE involves two phases: a degree difference operation and an angle sum operation. In the following, we first introduce the concept of independent edges, which is used in both operations; then, we describe the basics of these two operations.

A. Independent Edges: Here, we represent a scale-free network topology as a graph G = (V, E), where V = {1, 2, ..., N} is the set of N nodes and E = {eij | i, j ∈ V and i ≠ j} is the set of M edges. We state that two edges eij and ekl are independent edges if they satisfy the following two requirements.

1) Each of nodes i, j, k, and l must be in the communication range of the other three nodes.
This ensures that every node has the ability to establish a connection with others.

2) There are no extra connections between nodes i, j, k, and l, except the existing edges eij and ekl.

For a pair of independent edges eij and ekl, all three possible connection methods of nodes i, j, k, and l are illustrated in Fig. 3.1. Fig. 3.1a represents the original connection method, eij and ekl; Fig. 3.1b represents the first alternative connection method, eik and ejl; and Fig. 3.1c represents the second alternative connection method, eil and ejk. The primary idea of robustness enhancement is to improve the robustness of a scale-free network topology by swapping the edges into the alternative connection methods. If the value of R is increased after a swap is executed, the swap is accepted; otherwise, it is not. Note that when a pair of edges eij and ekl are swapped into an alternative connection method, the degrees of nodes i, j, k, and l are preserved.

In the degree difference operation and angle sum operation described next, a pair of edges is considered for swapping only if the two edges are independent edges. This not only satisfies the WSN communication-range constraint on the sensor nodes, but also dramatically reduces the number of edge pairs considered.
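The two requirements can be checked mechanically. A sketch follows; the coordinate dictionary, adjacency sets, and the Euclidean-distance range test are our assumptions about the deployment model:

```python
import math

def are_independent(e1, e2, pos, adj, comm_range):
    """Return True iff e1 = (i, j) and e2 = (k, l) are independent edges:
    (1) every endpoint is within communication range of the other three;
    (2) no connections exist among i, j, k, l other than e1 and e2."""
    i, j = e1
    k, l = e2
    quad = (i, j, k, l)
    if len(set(quad)) < 4:                  # the edges share an endpoint
        return False
    # Requirement 1: all four nodes pairwise within range.
    for a in range(4):
        for b in range(a + 1, 4):
            if math.dist(pos[quad[a]], pos[quad[b]]) > comm_range:
                return False
    # Requirement 2: none of the four possible "extra" edges exists.
    for u, v in ((i, k), (i, l), (j, k), (j, l)):
        if v in adj.get(u, set()):
            return False
    return True
```

Only pairs passing this test are candidates for the swaps below, which keeps every rewired edge physically realizable within the nodes' radio range.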

Fig 3.1: Connection methods of independent edges. (a) Connection method 1. (b) Connection method 2. (c) Connection method 3.

3.3 Algorithm:

In this section, we provide the design of the specific robustness enhancing algorithms
for a scale-free network topology.

Algorithm 1 Scale-free network topology modelling in WSNs

Input: N , V , r, m

Output: Lsti
procedure BANETWORKBUILD(A)
1:  for all vi ∈ V do
2:    vi ← receiveStartingSignal()
3:    set timer
4:    if timer of vi expired then
5:      broadcastStartPacket(vi)
6:      vi ← receiveDisconnectNeighborDegree()
7:      if the degrees of the nodes in Vi are all zero then
8:        ΠLocal(j) = 1/Ni
10:     else
11:       for all vj ∈ Vi do
12:         ΠLocal(j) = dj / Σ_{i=1}^{Ni} di
13:       end for
14:     end if
15:     use the roulette method to select m nodes in Vi based on ΠLocal(j)
16:     Lsti = modified neighbour list of vi
17:     broadcastEndPacket(Lsti)
18:     broadcastStartingSignal()
19:   end if
20: end for
21: end procedure

Algorithm 1 describes the modelling process of a scale-free network topology in


WSNs, which is executed during the startup time of the scale-free network. The
variables used in the algorithm are as follows.

• N: the total number of nodes in the scale-free network topology.
• V: the set of all nodes in the scale-free network topology.
• Vi: the set of nodes that are in the communication range of node i, but are not connected with it and have not reached the maximum degree.
• Ni: the number of nodes in Vi.
• r: the communication radius of each node.
• m: the number of edges that each newly joined node adds.
• Lsti: the list of the neighbour nodes connected with node i.

This algorithm operates as follows. First, a newly joined node i receives a starting signal and sets a timer (Lines 3 and 4). When the timer expires, it sends an add-connection request to all neighbours and receives the degree information of the nodes in its local world (Lines 6 and 7). Next, node i calculates the connection probability of each neighbour based on the feedback information. If the degree of every neighbour node is zero, node i connects with them with equal probability (Lines 8 to 9); otherwise, node i calculates the connection probability according to the degrees of the neighbour nodes (Lines 11 to 12). Lines 15 to 16 describe the process of establishing m edges between node i and its neighbours by the roulette method. Next, the neighbour list Lsti is updated and broadcast to all neighbour nodes. Finally, node i broadcasts a starting signal for the unjoined nodes (Line 18).
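The roulette-wheel selection of Line 15 can be sketched as follows. This is an illustrative stand-alone version with our own names; the equal-probability fallback for all-zero degrees mirrors Lines 7 to 12:

```python
import random

def roulette_select(candidates, degree, m):
    """Pick m distinct nodes; each candidate's chance of being picked
    is proportional to its degree (equal chances if every candidate
    has degree zero, as in Lines 7-8 of Algorithm 1)."""
    pool = list(candidates)
    chosen = []
    for _ in range(min(m, len(pool))):
        weights = [degree[v] for v in pool]
        total = sum(weights)
        if total == 0:                       # all degrees zero: uniform
            weights = [1] * len(pool)
            total = len(pool)
        # Spin the wheel: walk the cumulative weights until r falls in
        # some candidate's slice.
        r = random.uniform(0, total)
        acc = 0.0
        for idx, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        chosen.append(pool.pop(idx))
    return chosen
```

Drawing without replacement (the `pop`) keeps the m selected neighbours distinct, as required for establishing m separate edges.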

Algorithm 2: Degree difference operation

Input: A, E, N , r

Output: A

1:  procedure DEGREEDIFFERENCEOPERATION(A)
2:    for all edges in E do
3:      randomly select eij and ekl
4:      if eij and ekl are unmarked && eij and ekl are a pair of independent edges then
5:        SUB0 = |di − dj| + |dk − dl|
6:        SUB1 = |di − dk| + |dj − dl|
7:        SUB2 = |di − dl| + |dj − dk|
8:        SUB = min(SUB0, SUB1, SUB2)
9:        if SUB == SUB1 then
10:         A′ ← A (remove eij and ekl from A′ and add eik and ejl to A′)
11:         if NetworkFullConnected() and R(A′) ≥ R(A) then
12:           A ← A′
13:         end if
14:       else if SUB == SUB2 then
15:         A′ ← A (remove eij and ekl from A′ and add eil and ejk to A′)
16:         if NetworkFullConnected() and R(A′) ≥ R(A) then
17:           A ← A′
18:         end if
19:       end if
20:     end if
21:     mark this pair of edges (eij and ekl)
22:   end for
23: end procedure

Algorithm 2 describes the process of the degree difference operation, which is executed
after the modelling of the scale-free network topology in WSNs. The variables used in the
algorithm are as follows.

• A: the adjacency matrix of the scale-free network topology.
• A′: the adjacency matrix after the swap operation based on the degree difference.

A pair of independent edges, eij and ekl, is randomly selected (Lines 2 to 3). Then, the degree differences of all three connection methods are calculated, and the connection method with the minimum degree difference is determined (Lines 5 to 8). If it is the initial connection method, the swapping operation is skipped and the next round of selecting edges begins; otherwise, the adjacency matrix A is modified to A′ based on the calculation result. If A′ keeps the network topology connected and does not reduce the value of R, the swapping operation is accepted (Lines 9 to 13). Otherwise, A′ is discarded and the algorithm begins the next round of selecting edges. Lines 14 to 19 check the other connection method in the same manner. This process continues until all the pairs of edges in the adjacency matrix have been examined.
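The selection step (Lines 5 to 8) reduces to comparing three degree-difference sums; pairing nodes of similar degree is what pushes the topology toward the onion-like structure. A minimal sketch of just that step, with the acceptance checks on connectivity and R omitted:

```python
def best_connection_method(di, dj, dk, dl):
    """Return 0 to keep the original edges (eij, ekl), 1 for the swap
    to (eik, ejl), or 2 for the swap to (eil, ejk), choosing the
    method with the minimum total degree difference."""
    sub0 = abs(di - dj) + abs(dk - dl)   # original: eij and ekl
    sub1 = abs(di - dk) + abs(dj - dl)   # alternative: eik and ejl
    sub2 = abs(di - dl) + abs(dj - dk)   # alternative: eil and ejk
    subs = (sub0, sub1, sub2)
    return subs.index(min(subs))
```

For example, with degrees 5, 1, 4, 2, the method linking the two high-degree nodes (5 with 4) and the two low-degree nodes (1 with 2) wins, which is exactly the hub-to-hub, ring-to-ring wiring of an onion-like structure.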

Algorithm 3 Angle sum operation

Input: A, E, N , r

Output: A

1:  procedure ANGLESUMOPERATION(A)
2:    for all edges in E do
3:      randomly select eij and ekl
4:      if eij and ekl are unmarked && eij and ekl are a pair of independent edges then
5:        A1 ← A (remove eij and ekl from A1 and add eil and ejk to A1)
6:        A2 ← A (remove eij and ekl from A2 and add eik and ejl to A2)
7:        SUM = max(SUM_A, SUM_A1, SUM_A2)
8:        if NetworkFullConnected() and R(A1) ≥ R(A) and SUM == SUM_A1 then
9:          A ← A1
10:       else if NetworkFullConnected() and R(A2) ≥ R(A) and SUM == SUM_A2 then
11:         A ← A2
12:       end if
13:     end if
14:     mark this pair of edges (eij and ekl)
15:   end for
16: end procedure

Algorithm 3 describes the process of the angle sum operation, which is executed after the
degree difference operation. The variables used in the algorithm are as follows.

• A1, A2: the adjacency matrices after a swap operation based on the sum of surrounding angles.

A pair of independent edges, eij and ekl, is randomly selected (Lines 2 to 3). Then, the sum of the surrounding angles for all three connection methods is calculated, and the connection method with the maximum angle sum is selected (Lines 5 to 7). If it is the initial connection method, the swapping operation is skipped and the next round of selecting edges begins; otherwise, A is modified to A1 or A2 based on the value of SUM. If the modification keeps the network topology connected and does not reduce the value of R, it is accepted (Lines 8 to 12). Otherwise, the modified adjacency matrix is discarded and the algorithm begins the next round of selecting edges. This process continues until all the pairs of edges in the adjacency matrix have been examined.
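Both operations share the same tentative-swap, accept-or-reject skeleton. Below is a sketch of that skeleton with the metric functions left as caller-supplied callbacks, since the angle-sum formula itself is not spelled out in the text; all names are our own:

```python
import copy

def try_swap(adj, old_edges, new_edges, is_connected, robustness, angle_sum):
    """Tentatively replace old_edges with new_edges in the adjacency
    mapping; keep the change only if the topology stays connected, R
    does not decrease, and the angle sum does not decrease. Otherwise
    return the original adjacency unchanged."""
    trial = copy.deepcopy(adj)
    for u, v in old_edges:
        trial[u].discard(v)
        trial[v].discard(u)
    for u, v in new_edges:
        trial[u].add(v)
        trial[v].add(u)
    if (is_connected(trial)
            and robustness(trial) >= robustness(adj)
            and angle_sum(trial) >= angle_sum(adj)):
        return trial      # accept the swap
    return adj            # reject: leave the topology unchanged
```

Note that swapping a pair of independent edges changes no node's degree, so the scale-free degree distribution is preserved by construction, as the text requires.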

3.4 Network Simulator

The network simulator is a discrete-event, packet-level simulator. It covers a very large number of applications, protocols, network types, network elements, and traffic models. The network simulator is a package of tools that simulates the behaviour of networks: creating network topologies, logging events that happen under any load, analysing those events, and understanding the network. The main aim of our first experiment is to learn how to use the network simulator, to get acquainted with the simulated objects, to understand the operation of network simulation, and to analyse the behaviour of the simulation objects.

Type of network simulators

Different types of network simulators can be categorized based on criteria such as whether they are commercial or free, and whether they are simple or complex.

Commercial and open source simulators

Some network simulators are commercial, which means that they do not provide the source code of the software or the affiliated packages to general users for free. All users have to pay to get a license to use the software, or pay to order specific packages for their own usage requirements. One typical example is OPNET. A commercial simulator has its advantages and disadvantages. The advantage is that it generally has complete and up-to-date documentation, consistently maintained by specialized staff in the company. Open source network simulators are at a disadvantage in this respect: generally there are not enough specialized people working on the documentation. This problem can become serious when new versions introduce many new features, and it becomes difficult to trace or understand the previous code without appropriate documentation.

Table3.1 Network Simulator Types

Type Name

Commercial OPNET, QualNet

Open Source NS2, NS3, OMNeT++, SSFNet, J-Sim

On the contrary, an open source network simulator has the advantage that everything is very open and anyone, or any organization, can contribute to it and find bugs in it. The interface is also open for future improvement. It can be very flexible and can reflect the newest developments of new technologies faster than commercial network simulators. Conversely, some advantages of commercial network simulators are disadvantages of open source ones: the lack of systematic and complete documentation and the lack of version-control support can lead to serious problems and can limit the applicability and lifetime of open source network simulators. Typical open source network simulators include NS2 and NS3.

Currently there is a great variety of network simulators, ranging from simple to complex. Minimally, a network simulator should enable a user to represent a network topology: defining scenarios, specifying the nodes on the network, the links between those nodes, and the traffic between the nodes. More complicated systems may allow the user to specify everything about the protocols used to process network traffic. Graphical applications allow users to easily visualize the workings of their simulated environment. Text-based tools may provide a less visual or intuitive interface, but may allow more advanced forms of customization. Programming-oriented tools provide a programming framework that lets the user create a customized application that simulates the networking environment for testing.

Backend Environment of Network Simulator

Network Simulator is mainly based on two languages: C++ and OTcl. OTcl is the object-oriented version of the Tool Command Language. The network simulator is a bank of different network and protocol objects. C++ helps in the following ways:

1. It helps to increase the efficiency of simulation.


2. It is used to provide details of the protocols and their operation.
3. It is used to reduce packet and event processing time.

OTcl helps in the following ways:

1. It lets us describe different network topologies.
2. It helps us specify the protocols and their applications.
3. It allows fast development.
4. Tcl is compatible with many platforms and is flexible for integration.
5. Tcl is very easy to use and is freely available.

Basics of Tcl Programming (w.r.t. ns2)

Before we get into the program we should consider the following:

1. Initialization and termination aspects of the network simulator.
2. Defining the network nodes, links, queues and topology.
3. Defining the agents and their applications.
4. Network Animator (NAM).
5. Tracing.
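Taken together, these five pieces give an ns2 Tcl script its overall shape. The following minimal sketch shows how they fit together; it assumes ns2 is installed and the script is run as 'ns example.tcl', and all file names, node counts and timings here are illustrative:

```tcl
# 1. Initialization (plus 5. tracing and 4. NAM trace)
set ns [new Simulator]
set tracefile [open out.tr w]
$ns trace-all $tracefile
set namfile [open out.nam w]
$ns namtrace-all $namfile

# 2. Nodes, links and topology
set n0 [$ns node]
set n1 [$ns node]
$ns duplex-link $n0 $n1 1Mb 10ms DropTail

# 3. Agents and applications (a UDP source carrying CBR traffic)
set udp [new Agent/UDP]
$ns attach-agent $n0 $udp
set null [new Agent/Null]
$ns attach-agent $n1 $null
$ns connect $udp $null
set cbr [new Application/Traffic/CBR]
$cbr attach-agent $udp

# Termination: flush and close the trace files
proc finish {} {
    global ns tracefile namfile
    $ns flush-trace
    close $tracefile
    close $namfile
    exit 0
}
$ns at 0.5 "$cbr start"
$ns at 4.5 "$cbr stop"
$ns at 5.0 "finish"
$ns run
```

Each of these steps is explained individually in the sections that follow.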

Initialization

To start a new simulation we write

set ns [new Simulator]

From the above command we see that a variable ns is initialized using the set command. The code [new Simulator] is an instantiation of the class Simulator using the reserved word 'new'. We can then call all the methods present inside the class Simulator through the variable ns.

Creating the output files

#To create the trace files we write

set tracefile1 [open out.tr w]

$ns trace-all $tracefile1

#To create the nam files we write

set namfile1 [open out.nam w]

$ns namtrace-all $namfile1

In the above we create an output trace file out.tr and a nam visualization file out.nam. In the Tcl script they are not referred to by their declared file names but by the handles opened for them, tracefile1 and namfile1 respectively. Lines starting with '#' are comments. The line that opens the file 'out.tr' passes 'w' so that the file is opened for writing. The next line uses the simulator method trace-all, by which we trace all the events in a particular format.

The termination of the program is done using a 'finish' procedure

# Defining the 'finish' procedure

proc finish {} {

global ns tracefile1 namfile1

$ns flush-trace

close $tracefile1

close $namfile1

exec nam out.nam &

exit 0
}

In the above, the word 'proc' declares a procedure called 'finish'. The word 'global' tells Tcl which variables used inside the procedure are defined outside it.

'flush-trace' is a simulator method that dumps the traces into the respective files. The command 'close' closes the trace files, and the command 'exec' launches the nam visualization. The command 'exit' closes the application and returns 0, since zero is the conventional value for a clean exit.

In ns we end the program by calling the 'finish' procedure

#end the program

$ns at 125.0 "finish"

Thus the entire operation ends at 125 seconds. To begin the simulation we will use the
command

#start the simulation process

$ns run

Defining nodes, links, queues and topology

To create a node:

set n0 [$ns node]

In the above we created a node that is pointed to by the variable n0. While referring to the node in the script we use $n0. Similarly we create another node n2. Now we will set up a link between the two nodes:

$ns duplex-link $n0 $n2 10Mb 10ms DropTail

So we are creating a bi-directional link between n0 and n2 with a capacity of
10Mb/sec and a propagation delay of 10ms.

In NS, the output queue of a node is implemented as part of the link whose input is that node, in order to handle overflow at the queue. If the buffer capacity of the output queue is exceeded, the last packet to arrive is dropped; this is the 'DropTail' option used here. Many other options, such as RED (Random Early Discard), FQ (Fair Queuing), DRR (Deficit Round Robin) and SFQ (Stochastic Fair Queuing), are available.
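As an illustrative sketch, choosing one of these alternative queue disciplines is simply a matter of naming it when the link is defined; here RED replaces DropTail on the n0-n2 link, with the RED thresholds left at their ns2 defaults:

```tcl
# Hypothetical variant of the earlier link, using RED instead of DropTail
$ns duplex-link $n0 $n2 10Mb 10ms RED
```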

So now we will define the buffer capacity of the queue related to the above link

#Set queue size of the link

$ns queue-limit $n0 $n2 20

So, if we summarize the above three things we get

#create nodes

set n0 [$ns node]

set n1 [$ns node]

set n2 [$ns node]

set n3 [$ns node]

set n4 [$ns node]

set n5 [$ns node]

#create links between the nodes

$ns duplex-link $n0 $n2 10Mb 10ms DropTail

$ns duplex-link $n1 $n2 10Mb 10ms DropTail

$ns simplex-link $n2 $n3 0.3Mb 100ms DropTail

$ns simplex-link $n3 $n2 0.3Mb 100ms DropTail

$ns duplex-link $n3 $n4 0.5Mb 40ms DropTail

$ns duplex-link $n3 $n5 0.5Mb 40ms DropTail

#set queue-size of the link (n2-n3) to 20

$ns queue-limit $n2 $n3 20

Agents and applications

TCP

TCP is a dynamic, reliable, congestion-controlled protocol used to provide reliable transport of packets from one host to another, with acknowledgements sent back on proper receipt or loss of packets. Thus TCP requires bi-directional links so that the acknowledgements can return to the source.

Now we will show how to set up a TCP connection between two nodes.

#setting a tcp connection

set tcp [new Agent/TCP]

$ns attach-agent $n0 $tcp

set sink [new Agent/TCPSink]

$ns attach-agent $n4 $sink

$ns connect $tcp $sink

$tcp set fid_ 1

$tcp set packetSize_ 552

The command 'set tcp [new Agent/TCP]' gives a pointer called 'tcp' which refers to the TCP agent, an object of ns. The command '$ns attach-agent $n0 $tcp' defines the source node of the TCP connection. Next, the command 'set sink [new Agent/TCPSink]' defines the destination of the TCP connection through a pointer called sink, and '$ns attach-agent $n4 $sink' attaches it to node n4. The command '$ns connect $tcp $sink' then establishes the TCP connection between the source and the destination, i.e. n0 and n4. When we have several flows, such as TCP and UDP, in a network, we mark them for identification using the command '$tcp set fid_ 1'. In the last line we set the TCP packet size to 552 bytes, while the default is 1000.

FTP over TCP

File Transfer Protocol (FTP) is a standard mechanism provided by the Internet for transferring files from one host to another; it is one of the most common tasks expected of a network or of the Internet. FTP differs from other client-server applications in that it establishes two connections between the client and the server: one is used for data transfer and the other for control information. FTP uses the services of TCP. The well-known port 21 is used for the control connection and port 20 for data transfer.

Here we will see how to run an FTP connection over TCP.

#Initiating FTP over TCP

set ftp [new Application/FTP]

$ftp attach-agent $tcp

In the above, the command 'set ftp [new Application/FTP]' gives a pointer called 'ftp' which refers to the FTP application. Next, we attach the ftp application to the tcp agent, since FTP uses the services of TCP.

UDP

The User Datagram Protocol is one of the main protocols of the Internet protocol suite. UDP lets a host send messages in the form of datagrams to another host on an Internet Protocol network without any prior setup of the transmission channel. UDP provides an unreliable service: datagrams may arrive out of order, appear duplicated, or go missing without notice. UDP assumes that error checking and correction are either unnecessary or performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which may not be an option in a real-time system.

Now we will learn how to create a UDP connection in network simulator.

# setup a UDP connection

set udp [new Agent/UDP]

$ns attach-agent $n1 $udp

set null [new Agent/Null]

$ns attach-agent $n5 $null

$ns connect $udp $null

$udp set fid_ 2

Similarly, the command 'set udp [new Agent/UDP]' gives a pointer called 'udp' which refers to the UDP agent, an object of ns. The command '$ns attach-agent $n1 $udp' defines the source node of the UDP connection. Next, the command 'set null [new Agent/Null]' defines the destination of the UDP traffic through a pointer called null, and '$ns attach-agent $n5 $null' attaches it to node n5. The command '$ns connect $udp $null' then establishes the UDP connection between the source and the destination, i.e. n1 and n5. When we have several flows, such as TCP and UDP, in a network, we mark them for identification using the command '$udp set fid_ 2'.

Constant Bit Rate(CBR)

Constant Bit Rate (CBR) is a term used in telecommunications relating to quality of service. When referring to codecs, constant bit rate encoding means that the rate at which a codec's output data should be consumed is constant. CBR is useful for streaming multimedia content on limited-capacity channels, since it is the maximum bit rate that matters, not the average, so CBR can take advantage of all of the capacity. CBR would not be the optimal choice for storage, as it would not allocate enough data for complex sections (resulting in degraded quality) while wasting data on simple sections.

CBR over UDP Connection

#setup cbr over udp

set cbr [new Application/Traffic/CBR]

$cbr attach-agent $udp

$cbr set packetSize_ 1000

$cbr set rate_ 0.01Mb

$cbr set random_ false

In the above we define a CBR source over the UDP connection; the UDP source and agent have already been defined, just as for TCP. The command '$cbr set rate_ 0.01Mb' sets the sending rate (equivalently, one can set the time interval between packet transmissions). Next, the command '$cbr set random_ false' disables random noise in the CBR traffic; it can be turned on with '$cbr set random_ 1'. The packet size is set in bytes using '$cbr set packetSize_ <size>'.

Scheduling Events

In ns the Tcl script defines how the events are scheduled, in other words at what time each event will start and stop. This is done using the command

$ns at <time> "<event>"

So here in our program we will schedule the ftp and cbr.

# scheduling the events

$ns at 0.1 "$cbr start"

$ns at 1.0 "$ftp start"

$ns at 124.0 "$ftp stop"

$ns at 124.5 "$cbr stop"

3.5 Network Animator (NAM)

When we run the above program in ns, we can visualize the network in NAM. Instead of giving random positions to the nodes, we can give them suitable initial positions and form a suitable topology. In our program we give positions to the nodes in NAM in the following way:

#Give position to the nodes in NAM

$ns duplex-link-op $n0 $n2 orient-right-down

$ns duplex-link-op $n1 $n2 orient-right-up

$ns simplex-link-op $n2 $n3 orient-right

$ns simplex-link-op $n3 $n2 orient-left

$ns duplex-link-op $n3 $n4 orient-right-up

$ns duplex-link-op $n3 $n5 orient-right-down

We can also define the colors of the cbr and tcp packets for identification in NAM. For this we use the following commands:

#Marking the flows

$ns color 1 Blue

$ns color 2 Red

To view the network animator we need to type the command: nam

NAM is a Tcl/TK based animation tool for viewing network simulation traces and real-world packet traces. It supports topology layout, packet-level animation, and various data inspection tools. It has a graphical interface which can provide information such as the number of packet drops at each link. The network animator NAM began in 1990 at LBL as a simple tool for animating packet trace data, and it has evolved substantially over the past few years. The NAM development effort was an ongoing collaboration with the VINT project. Currently it is developed as an open-source project hosted at SourceForge.

This trace data is typically derived as output from a network simulator like ns or from real network measurements, e.g., using tcpdump.

We can either start NAM with the command

'nam <nam-file>'

where '<nam-file>' is the name of a NAM trace file generated by NS, or execute it directly from the Tcl simulation script for visualization of node movement.

The NAM window is shown in the following figure:

Fig 3.2: NAM window

We can use NAM in a network simulation by creating a nam trace file and then executing nam on that trace file from the Tcl script.

Syntax:

To create a nam trace file

set nf [open trace.nam w]

$ns namtrace-all $nf

which means we are opening a new nam trace file named 'trace.nam' and telling the simulator that the data must be stored in nam format.

'nf' is the file handle we use here to access the trace file.

'w' means write, i.e. the file trace.nam is opened for writing.

The second line tells the simulator to trace each packet on every link in the topology, passing it the file handle nf.

To execute a nam file

proc finish {} {

global ns nf

$ns flush-trace

close $nf

exec nam trace.nam &

exit 0
}

Here the trace data is flushed into the file using the command $ns flush-trace, and then the file is closed. Execution of the nam file is done by the 'exec nam trace.nam &' command.

Fig 3.3: Animated Network in NAM

Program to launch NAM window:

#at start of the program

set x 500

set y 500

set n 30

set ns [new Simulator]

set f0 [open wir7.tr w]

$ns trace-all $f0


$ns use-newtrace
set namtracefd [open wir7.nam w]

$ns namtrace-all-wireless $namtracefd $x $y

#at end of the program

$ns at 10.0 "stop"

proc stop {} {

global ns f0 namtracefd

$ns flush-trace
close $f0
close $namtracefd
exec nam wir7.nam &

exit 0
}
#Run the simulation

$ns run

The Extended Nam Editor is an editor that allows the graphical creation of ns2 scripts.
It extends the basic Nam Editor with the following features:

1. Integration with existing topology generators;

2. Localization and visualization of sets of nodes on large network topologies according to different selection criteria;

3. Instantiation of agents of any type on all the nodes of a given node set;

4. Definition of new node types;

5. Support for simulations of web cache systems.

The Extended Nam Editor works under the Linux operating system; it is available for download via the links given at the bottom of the page.

Topology Generator Interface

The manual generation of a complex network topology is a tedious and error-prone activity. In order to simulate networks with realistic topologies, it is common practice to use ad-hoc topology generators, whose output is usually not compatible with the ns2 syntax. Hence, several tools have been developed to translate the topology descriptions produced by topology generators into ns scripts that can be used in the definition of a simulation scenario. Unfortunately, scripts produced in this way are not compatible with the Nam Editor, so networks created by common topology generators cannot be modified interactively. Such a limitation is sometimes annoying, in particular when the automatically generated topology needs to be further adapted, e.g. by instantiating agents on particular network nodes.
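As an illustrative sketch of what such a translation tool does, the following Tcl fragment reads a hypothetical edge list produced by a generator (one 'src dst' pair per line in a file named topo.edges; this format is an assumption for the example, not a standard) and emits the corresponding ns2 nodes and links:

```tcl
set ns [new Simulator]
set fp [open topo.edges r]
while {[gets $fp line] >= 0} {
    foreach {a b} $line {
        # create each endpoint once, then link the pair
        if {![info exists node($a)]} { set node($a) [$ns node] }
        if {![info exists node($b)]} { set node($b) [$ns node] }
        $ns duplex-link $node($a) $node($b) 10Mb 10ms DropTail
    }
}
close $fp
```

Real translators additionally carry over bandwidths and delays from the generator's output instead of the fixed values assumed here.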

3.6 Testing

Implementation and Testing

Implementation is one of the most important phases of the project, and one in which care must be taken, because all the effort undertaken during the project depends on it. Implementation is the most crucial stage in achieving a successful system and in giving the users confidence that the new system is workable and effective. Each program was tested individually at the time of development using sample data, and it was verified that the programs link together in the way specified in the program specification. The computer system and its environment are tested to the satisfaction of the user.

Implementation
The implementation phase is less creative than system design. It is primarily concerned with user training and file conversion. The system may require extensive user training. The initial parameters of the system may be modified as a result of programming. A simple operating procedure is provided so that the user can understand the different functions clearly and quickly. The different reports can be obtained on either the inkjet or the dot-matrix printer available at the disposal of the user. The proposed system is very easy to implement. In general, implementation means the process of converting a new or revised system design into an operational one.

3.7 Code

//#include <ip.h>

#include <aodv/aodv.h>
#include <aodv/aodv_packet.h>
#include <random.h>
#include <cmu-trace.h>
//#include <energy-model.h>

#define max(a,b) ( (a) > (b) ? (a) : (b) )
#define CURRENT_TIME Scheduler::instance().clock()

//#define DEBUG
//#define ERROR

#ifdef DEBUG
static int route_request = 0;
#endif

/*
TCL Hooks
*/
int hdr_aodv::offset_;
static class AODVHeaderClass : public PacketHeaderClass {
public:
AODVHeaderClass() : PacketHeaderClass("PacketHeader/AODV",
sizeof(hdr_all_aodv)) {
bind_offset(&hdr_aodv::offset_);
}
} class_rtProtoAODV_hdr;

static class AODVclass : public TclClass {


public:
AODVclass() : TclClass("Agent/AODV") {}
TclObject* create(int argc, const char*const* argv) {
assert(argc == 5);
//return (new AODV((nsaddr_t) atoi(argv[4])));
return (new AODV((nsaddr_t) Address::instance().str2addr(argv[4])));
}
} class_rtProtoAODV;

int
AODV::command(int argc, const char*const* argv) {
if(argc == 2) {
Tcl& tcl = Tcl::instance();

if(strncasecmp(argv[1], "id", 2) == 0) {
tcl.resultf("%d", index);
return TCL_OK;
}

if(strncasecmp(argv[1], "start", 2) == 0) {
btimer.handle((Event*) 0);

#ifndef AODV_LINK_LAYER_DETECTION
htimer.handle((Event*) 0);
ntimer.handle((Event*) 0);
#endif // LINK LAYER DETECTION

rtimer.handle((Event*) 0);
return TCL_OK;
}
}
else if(argc == 3) {
if(strcmp(argv[1], "index") == 0) {
index = atoi(argv[2]);
return TCL_OK;
}

else if(strcmp(argv[1], "log-target") == 0 || strcmp(argv[1], "tracetarget") == 0) {


logtarget = (Trace*) TclObject::lookup(argv[2]);
if(logtarget == 0)
return TCL_ERROR;
return TCL_OK;
}
else if(strcmp(argv[1], "drop-target") == 0) {
int stat = rqueue.command(argc,argv);
if (stat != TCL_OK) return stat;
return Agent::command(argc, argv);
}
else if(strcmp(argv[1], "if-queue") == 0) {
ifqueue = (PriQueue*) TclObject::lookup(argv[2]);

if(ifqueue == 0)
return TCL_ERROR;
return TCL_OK;
}
else if (strcmp(argv[1], "port-dmux") == 0) {
dmux_ = (PortClassifier *)TclObject::lookup(argv[2]);
if (dmux_ == 0) {
fprintf (stderr, "%s: %s lookup of %s failed\n", __FILE__,
argv[1], argv[2]);
return TCL_ERROR;
}
return TCL_OK;
}
}
return Agent::command(argc, argv);
}

/*
Constructor
*/

AODV::AODV(nsaddr_t id) : Agent(PT_AODV),


btimer(this), htimer(this), ntimer(this),
rtimer(this), lrtimer(this), rqueue() {

index = id;
seqno = 2;
bid = 1;

LIST_INIT(&nbhead);
LIST_INIT(&bihead);

logtarget = 0;
ifqueue = 0;
}

/*
Timers
*/

void
BroadcastTimer::handle(Event*) {
agent->id_purge();
Scheduler::instance().schedule(this, &intr, BCAST_ID_SAVE);
}

void
HelloTimer::handle(Event*) {
agent->sendHello();
double interval = MinHelloInterval +
((MaxHelloInterval - MinHelloInterval) * Random::uniform());
assert(interval >= 0);
Scheduler::instance().schedule(this, &intr, interval);
}

void
NeighborTimer::handle(Event*) {
agent->nb_purge();
Scheduler::instance().schedule(this, &intr, HELLO_INTERVAL);
}

void
RouteCacheTimer::handle(Event*) {
agent->rt_purge();
#define FREQUENCY 0.5 // sec
Scheduler::instance().schedule(this, &intr, FREQUENCY);
}

void
LocalRepairTimer::handle(Event* p) { // SRD: 5/4/99
aodv_rt_entry *rt;
struct hdr_ip *ih = HDR_IP( (Packet *)p);

/* you get here after the timeout in a local repair attempt */

/* fprintf(stderr, "%s\n", __FUNCTION__); */

rt = agent->rtable.rt_lookup(ih->daddr());

if (rt && rt->rt_flags != RTF_UP) {


// route is yet to be repaired
// I will be conservative and bring down the route
// and send route errors upstream.
/* The following assert fails, not sure why */
/* assert (rt->rt_flags == RTF_IN_REPAIR); */

//rt->rt_seqno++;
agent->rt_down(rt);
// send RERR
#ifdef DEBUG
fprintf(stderr,"Dst - %d, failed local repair\n", rt->rt_dst);
#endif
}
Packet::free((Packet *)p);
}

/*
Broadcast ID Management Functions
*/

void
AODV::id_insert(nsaddr_t id, u_int32_t bid) {
BroadcastID *b = new BroadcastID(id, bid);

assert(b);
b->expire = CURRENT_TIME + BCAST_ID_SAVE;
LIST_INSERT_HEAD(&bihead, b, link);
}

/* SRD */
bool
AODV::id_lookup(nsaddr_t id, u_int32_t bid) {
BroadcastID *b = bihead.lh_first;

// Search the list for a match of source and bid


for( ; b; b = b->link.le_next) {
if ((b->src == id) && (b->id == bid))
return true;
}
return false;
}

void
AODV::id_purge() {
BroadcastID *b = bihead.lh_first;
BroadcastID *bn;
double now = CURRENT_TIME;

for(; b; b = bn) {
bn = b->link.le_next;
if(b->expire <= now) {
LIST_REMOVE(b,link);
delete b;
}
}
}

/*
Helper Functions
*/

double
AODV::PerHopTime(aodv_rt_entry *rt) {
int num_non_zero = 0, i;
double total_latency = 0.0;

if (!rt)
return ((double) NODE_TRAVERSAL_TIME );

for (i=0; i < MAX_HISTORY; i++) {


if (rt->rt_disc_latency[i] > 0.0) {
num_non_zero++;
total_latency += rt->rt_disc_latency[i];
}
}
if (num_non_zero > 0)
return(total_latency / (double) num_non_zero);
else
return((double) NODE_TRAVERSAL_TIME);
}

/*
Link Failure Management Functions
*/

static void
aodv_rt_failed_callback(Packet *p, void *arg) {
((AODV*) arg)->rt_ll_failed(p);
}

/*

* This routine is invoked when the link-layer reports a route failed.
*/
void
AODV::rt_ll_failed(Packet *p) {
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
aodv_rt_entry *rt;
nsaddr_t broken_nbr = ch->next_hop_;

#ifndef AODV_LINK_LAYER_DETECTION
drop(p, DROP_RTR_MAC_CALLBACK);
#else

/*
* Non-data packets and Broadcast Packets can be dropped.
*/
if(! DATA_PACKET(ch->ptype()) ||
(u_int32_t) ih->daddr() == IP_BROADCAST) {
drop(p, DROP_RTR_MAC_CALLBACK);
return;
}
log_link_broke(p);
if((rt = rtable.rt_lookup(ih->daddr())) == 0) {
drop(p, DROP_RTR_MAC_CALLBACK);
return;
}
log_link_del(ch->next_hop_);

#ifdef AODV_LOCAL_REPAIR
/* if the broken link is closer to the dest than source,
attempt a local repair. Otherwise, bring down the route. */

if (ch->num_forwards() > rt->rt_hops) {


local_rt_repair(rt, p); // local repair
// retrieve all the packets in the ifq using this link,
// queue the packets for which local repair is done,
return;
}
else
#endif // LOCAL REPAIR

{
drop(p, DROP_RTR_MAC_CALLBACK);
// Do the same thing for other packets in the interface queue using the
// broken link -Mahesh
while((p = ifqueue->filter(broken_nbr))) {
drop(p, DROP_RTR_MAC_CALLBACK);
}
nb_delete(broken_nbr);

}

#endif // LINK LAYER DETECTION


}

void
AODV::handle_link_failure(nsaddr_t id) {
aodv_rt_entry *rt, *rtn;
Packet *rerr = Packet::alloc();
struct hdr_aodv_error *re = HDR_AODV_ERROR(rerr);

re->DestCount = 0;
for(rt = rtable.head(); rt; rt = rtn) { // for each rt entry
rtn = rt->rt_link.le_next;
if ((rt->rt_hops != INFINITY2) && (rt->rt_nexthop == id) ) {
assert (rt->rt_flags == RTF_UP);
assert((rt->rt_seqno%2) == 0);
rt->rt_seqno++;
re->unreachable_dst[re->DestCount] = rt->rt_dst;
re->unreachable_dst_seqno[re->DestCount] = rt->rt_seqno;
#ifdef DEBUG
fprintf(stderr, "%s(%f): %d\t(%d\t%u\t%d)\n", __FUNCTION__, CURRENT_TIME,
index, re->unreachable_dst[re->DestCount],
re->unreachable_dst_seqno[re->DestCount], rt->rt_nexthop);
#endif // DEBUG
re->DestCount += 1;
rt_down(rt);
}
// remove the lost neighbor from all the precursor lists
rt->pc_delete(id);
}

if (re->DestCount > 0) {
#ifdef DEBUG
fprintf(stderr, "%s(%f): %d\tsending RERR...\n", __FUNCTION__, CURRENT_TIME,
index);
#endif // DEBUG
sendError(rerr, false);
}
else {
Packet::free(rerr);
}
}

void
AODV::local_rt_repair(aodv_rt_entry *rt, Packet *p) {
#ifdef DEBUG
fprintf(stderr,"%s: Dst - %d\n", __FUNCTION__, rt->rt_dst);
#endif
// Buffer the packet

rqueue.enque(p);

// mark the route as under repair


rt->rt_flags = RTF_IN_REPAIR;

sendRequest(rt->rt_dst);

// set up a timer interrupt


Scheduler::instance().schedule(&lrtimer, p->copy(), rt->rt_req_timeout);
}

void
AODV::rt_update(aodv_rt_entry *rt, u_int32_t seqnum, u_int16_t metric,
nsaddr_t nexthop, double expire_time) {

rt->rt_seqno = seqnum;
rt->rt_hops = metric;
rt->rt_flags = RTF_UP;
rt->rt_nexthop = nexthop;
rt->rt_expire = expire_time;
}

void
AODV::rt_down(aodv_rt_entry *rt) {
/*
* Make sure that you don't "down" a route more than once.
*/

if(rt->rt_flags == RTF_DOWN) {
return;
}

// assert (rt->rt_seqno%2); // is the seqno odd?


rt->rt_last_hop_count = rt->rt_hops;
rt->rt_hops = INFINITY2;
rt->rt_flags = RTF_DOWN;
rt->rt_nexthop = 0;
rt->rt_expire = 0;

} /* rt_down function */

/*
Route Handling Functions
*/

void
AODV::rt_resolve(Packet *p) {
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
aodv_rt_entry *rt;

/*
* Set the transmit failure callback. That
* won't change.
*/
ch->xmit_failure_ = aodv_rt_failed_callback;
ch->xmit_failure_data_ = (void*) this;
rt = rtable.rt_lookup(ih->daddr());
if(rt == 0) {
rt = rtable.rt_add(ih->daddr());
}

/*
* If the route is up, forward the packet
*/

if(rt->rt_flags == RTF_UP) {
assert(rt->rt_hops != INFINITY2);
forward(rt, p, NO_DELAY);
}
/*
* if I am the source of the packet, then do a Route Request.
*/
else if(ih->saddr() == index) {
rqueue.enque(p);
sendRequest(rt->rt_dst);
}
/*
* A local repair is in progress. Buffer the packet.
*/
else if (rt->rt_flags == RTF_IN_REPAIR) {
rqueue.enque(p);
}

/*
* I am trying to forward a packet for someone else to which
* I don't have a route.
*/
else {
Packet *rerr = Packet::alloc();
struct hdr_aodv_error *re = HDR_AODV_ERROR(rerr);
/*
* For now, drop the packet and send error upstream.
* Now the route errors are broadcast to upstream
* neighbors - Mahesh 09/11/99
*/

assert (rt->rt_flags == RTF_DOWN);


re->DestCount = 0;
re->unreachable_dst[re->DestCount] = rt->rt_dst;

re->unreachable_dst_seqno[re->DestCount] = rt->rt_seqno;
re->DestCount += 1;
#ifdef DEBUG
fprintf(stderr, "%s: sending RERR...\n", __FUNCTION__);
#endif
sendError(rerr, false);

drop(p, DROP_RTR_NO_ROUTE);
}
}

void
AODV::rt_purge() {
aodv_rt_entry *rt, *rtn;
double now = CURRENT_TIME;
double delay = 0.0;
Packet *p;

for(rt = rtable.head(); rt; rt = rtn) { // for each rt entry


rtn = rt->rt_link.le_next;
if ((rt->rt_flags == RTF_UP) && (rt->rt_expire < now)) {
// if a valid route has expired, purge all packets from
// send buffer and invalidate the route.
assert(rt->rt_hops != INFINITY2);
while((p = rqueue.deque(rt->rt_dst))) {
#ifdef DEBUG
fprintf(stderr, "%s: calling drop()\n",
__FUNCTION__);
#endif // DEBUG
drop(p, DROP_RTR_NO_ROUTE);
}
rt->rt_seqno++;
assert (rt->rt_seqno%2);
rt_down(rt);
}
else if (rt->rt_flags == RTF_UP) {
// If the route is not expired,
// and there are packets in the sendbuffer waiting,
// forward them. This should not be needed, but this extra
// check does no harm.
assert(rt->rt_hops != INFINITY2);
while((p = rqueue.deque(rt->rt_dst))) {
forward (rt, p, delay);
delay += ARP_DELAY;
}
}
else if (rqueue.find(rt->rt_dst))
// If the route is down and
// if there is a packet for this destination waiting in

// the sendbuffer, then send out route request. sendRequest
// will check whether it is time to really send out request
// or not.
// This may not be crucial to do it here, as each generated
// packet will do a sendRequest anyway.

sendRequest(rt->rt_dst);
}
}

/*
Packet Reception Routines
*/

void
AODV::recv(Packet *p, Handler*) {
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);

assert(initialized());
//assert(p->incoming == 0);
// XXX NOTE: use of the incoming flag has been deprecated; in order to track the
// direction of packet flow, direction_ in hdr_cmn is used instead. See packet.h for details.

if(ch->ptype() == PT_AODV) {
ih->ttl_ -= 1;
recvAODV(p);
return;
}

/*
* Must be a packet I'm originating...
*/
if((ih->saddr() == index) && (ch->num_forwards() == 0)) {
/*
* Add the IP Header.
* TCP adds the IP header too, so to avoid setting it twice, we check if
* this packet is not a TCP or ACK segment.
*/
if (ch->ptype() != PT_TCP && ch->ptype() != PT_ACK) {
ch->size() += IP_HDR_LEN;
}
// Added by Parag Dadhania && John Novatnack to handle broadcasting
if ( (u_int32_t)ih->daddr() != IP_BROADCAST) {
ih->ttl_ = NETWORK_DIAMETER;
}
}
/*

* I received a packet that I sent. Probably
* a routing loop.
*/
else if(ih->saddr() == index) {
drop(p, DROP_RTR_ROUTE_LOOP);
return;
}
/*
* Packet I'm forwarding...
*/
else {
/*
* Check the TTL. If it is zero, then discard.
*/
if(--ih->ttl_ == 0) {
drop(p, DROP_RTR_TTL);
return;
}
}
// Added by Parag Dadhania && John Novatnack to handle broadcasting
if ( (u_int32_t)ih->daddr() != IP_BROADCAST)
rt_resolve(p);
else
forward((aodv_rt_entry*) 0, p, NO_DELAY);
}

void
AODV::recvAODV(Packet *p) {
struct hdr_aodv *ah = HDR_AODV(p);

assert(HDR_IP (p)->sport() == RT_PORT);


assert(HDR_IP (p)->dport() == RT_PORT);

/*
* Incoming Packets.
*/
switch(ah->ah_type) {

case AODVTYPE_RREQ:
recvRequest(p);
break;

case AODVTYPE_RREP:
recvReply(p);
break;

case AODVTYPE_RERR:
recvError(p);
break;

case AODVTYPE_HELLO:
recvHello(p);
break;

default:
fprintf(stderr, "Invalid AODV type (%x)\n", ah->ah_type);
exit(1);
}
}

void
AODV::recvRequest(Packet *p) {
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_request *rq = HDR_AODV_REQUEST(p);
aodv_rt_entry *rt;

/*
* Drop if:
* - I'm the source
* - I recently heard this request.
*/

if(rq->rq_src == index) {
#ifdef DEBUG
fprintf(stderr, "%s: got my own REQUEST\n", __FUNCTION__);
#endif // DEBUG
Packet::free(p);
return;
}

if (id_lookup(rq->rq_src, rq->rq_bcast_id)) {

#ifdef DEBUG
fprintf(stderr, "%s: discarding request\n", __FUNCTION__);
#endif // DEBUG

Packet::free(p);
return;
}

/*
* Cache the broadcast ID
*/
id_insert(rq->rq_src, rq->rq_bcast_id);

/*
* We are either going to forward the REQUEST or generate a
* REPLY. Before we do anything, we make sure that the REVERSE
* route is in the route table.
*/
aodv_rt_entry *rt0; // rt0 is the reverse route

rt0 = rtable.rt_lookup(rq->rq_src);
if(rt0 == 0) { /* if not in the route table */
// create an entry for the reverse route.
rt0 = rtable.rt_add(rq->rq_src);
}

rt0->rt_expire = max(rt0->rt_expire, (CURRENT_TIME + REV_ROUTE_LIFE));

if ( (rq->rq_src_seqno > rt0->rt_seqno ) ||


((rq->rq_src_seqno == rt0->rt_seqno) &&
(rq->rq_hop_count < rt0->rt_hops)) ) {
// If we have a fresher seq no. or lesser #hops for the
// same seq no., update the rt entry. Else don't bother.
rt_update(rt0, rq->rq_src_seqno, rq->rq_hop_count, ih->saddr(),
max(rt0->rt_expire, (CURRENT_TIME + REV_ROUTE_LIFE)) );
if (rt0->rt_req_timeout > 0.0) {
// Reset the soft state and
// Set expiry time to CURRENT_TIME + ACTIVE_ROUTE_TIMEOUT
// This is because route is used in the forward direction,
// but only sources get benefited by this change
rt0->rt_req_cnt = 0;
rt0->rt_req_timeout = 0.0;
rt0->rt_req_last_ttl = rq->rq_hop_count;
rt0->rt_expire = CURRENT_TIME + ACTIVE_ROUTE_TIMEOUT;
}

/* Find out whether any buffered packet can benefit from the
* reverse route.
* May need some change in the following code - Mahesh 09/11/99
*/
assert (rt0->rt_flags == RTF_UP);
Packet *buffered_pkt;
while ((buffered_pkt = rqueue.deque(rt0->rt_dst))) {
if (rt0 && (rt0->rt_flags == RTF_UP)) {
assert(rt0->rt_hops != INFINITY2);
forward(rt0, buffered_pkt, NO_DELAY);
}
}
}
// End for putting reverse route in rt table

/*

* We have taken care of the reverse route stuff.
* Now see whether we can send a route reply.
*/

rt = rtable.rt_lookup(rq->rq_dst);

// First check if I am the destination ..

if(rq->rq_dst == index) {

#ifdef DEBUG
fprintf(stderr, "%d - %s: destination sending reply\n",
index, __FUNCTION__);
#endif // DEBUG

// Just to be safe, I use the max. Somebody may have


// incremented the dst seqno.
seqno = max(seqno, rq->rq_dst_seqno)+1;
if (seqno%2) seqno++;

sendReply(rq->rq_src, // IP Destination
1, // Hop Count
index, // Dest IP Address
seqno, // Dest Sequence Num
MY_ROUTE_TIMEOUT, // Lifetime
rq->rq_timestamp); // timestamp

Packet::free(p);
}

// I am not the destination, but I may have a fresh enough route.

else if (rt && (rt->rt_hops != INFINITY2) &&
(rt->rt_seqno >= rq->rq_dst_seqno) ) {

//assert (rt->rt_flags == RTF_UP);
assert(rq->rq_dst == rt->rt_dst);
//assert ((rt->rt_seqno%2) == 0); // is the seqno even?
sendReply(rq->rq_src,
rt->rt_hops + 1,
rq->rq_dst,
rt->rt_seqno,
(u_int32_t) (rt->rt_expire - CURRENT_TIME),
// rt->rt_expire - CURRENT_TIME,
rq->rq_timestamp);
// Insert nexthops to RREQ source and RREQ destination in the
// precursor lists of destination and source respectively
rt->pc_insert(rt0->rt_nexthop); // nexthop to RREQ source
rt0->pc_insert(rt->rt_nexthop); // nexthop to RREQ destination

#ifdef RREQ_GRAT_RREP

sendReply(rq->rq_dst,
rq->rq_hop_count,
rq->rq_src,
rq->rq_src_seqno,
(u_int32_t) (rt->rt_expire - CURRENT_TIME),
// rt->rt_expire - CURRENT_TIME,
rq->rq_timestamp);
#endif

// TODO: send grat RREP to dst if G flag set in RREQ using
// rq->rq_src_seqno, rq->rq_hop_count

// DONE: Included gratuitous replies to be sent as per IETF aodv draft
// specification. As of now, G flag has not been dynamically used and is
// always set or reset in aodv-packet.h --- Anant Utgikar, 09/16/02.

Packet::free(p);
}
/*
* Can't reply. So forward the Route Request
*/
else {
ih->saddr() = index;
ih->daddr() = IP_BROADCAST;
rq->rq_hop_count += 1;
// Maximum sequence number seen en route
if (rt) rq->rq_dst_seqno = max(rt->rt_seqno, rq->rq_dst_seqno);
forward((aodv_rt_entry*) 0, p, DELAY);
}
}

void
AODV::recvReply(Packet *p) {
//struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_reply *rp = HDR_AODV_REPLY(p);
aodv_rt_entry *rt;
char suppress_reply = 0;
double delay = 0.0;

#ifdef DEBUG
fprintf(stderr, "%d - %s: received a REPLY\n", index, __FUNCTION__);
#endif // DEBUG

/*
* Got a reply. So reset the "soft state" maintained for
* route requests in the request table. We don't really have
* a separate request table. It is just a part of the
* routing table itself.
*/
// Note that rp_dst is the dest of the data packets, not the
// dest of the reply, which is the src of the data packets.

rt = rtable.rt_lookup(rp->rp_dst);

/*
* If I don't have a rt entry to this host... adding
*/
if(rt == 0) {
rt = rtable.rt_add(rp->rp_dst);
}

/*
* Add a forward route table entry... here I am following
* Perkins-Royer AODV paper almost literally - SRD 5/99
*/

if ( (rt->rt_seqno < rp->rp_dst_seqno) || // newer route
((rt->rt_seqno == rp->rp_dst_seqno) &&
(rt->rt_hops > rp->rp_hop_count)) ) { // shorter or better route

// Update the rt entry
rt_update(rt, rp->rp_dst_seqno, rp->rp_hop_count,
rp->rp_src, CURRENT_TIME + rp->rp_lifetime);

// reset the soft state
rt->rt_req_cnt = 0;
rt->rt_req_timeout = 0.0;
rt->rt_req_last_ttl = rp->rp_hop_count;

if (ih->daddr() == index) { // If I am the original source

// Update the route discovery latency statistics
// rp->rp_timestamp is the time of request origination

rt->rt_disc_latency[(unsigned char)rt->hist_indx] =
(CURRENT_TIME - rp->rp_timestamp) / (double) rp->rp_hop_count;
// increment indx for next time
rt->hist_indx = (rt->hist_indx + 1) % MAX_HISTORY;
}

/*
* Send all packets queued in the sendbuffer destined for
* this destination.
* XXX - observe the "second" use of p.
*/
Packet *buf_pkt;
while((buf_pkt = rqueue.deque(rt->rt_dst))) {
if(rt->rt_hops != INFINITY2) {
assert (rt->rt_flags == RTF_UP);
// Delay them a little to help ARP. Otherwise ARP
// may drop packets. -SRD 5/23/99
forward(rt, buf_pkt, delay);
delay += ARP_DELAY;
}
}
}
else {
suppress_reply = 1;
}

/*
* If reply is for me, discard it.
*/

if(ih->daddr() == index || suppress_reply) {
Packet::free(p);
}
/*
* Otherwise, forward the Route Reply.
*/
else {
// Find the rt entry
aodv_rt_entry *rt0 = rtable.rt_lookup(ih->daddr());
// If the rt is up, forward
if(rt0 && (rt0->rt_hops != INFINITY2)) {
assert (rt0->rt_flags == RTF_UP);
rp->rp_hop_count += 1;
rp->rp_src = index;
forward(rt0, p, NO_DELAY);
// Insert the nexthop towards the RREQ source to
// the precursor list of the RREQ destination
rt->pc_insert(rt0->rt_nexthop); // nexthop to RREQ source

}
else {
// I don't know how to forward .. drop the reply.
#ifdef DEBUG
fprintf(stderr, "%s: dropping Route Reply\n", __FUNCTION__);
#endif // DEBUG
drop(p, DROP_RTR_NO_ROUTE);
}
}
}

void
AODV::recvError(Packet *p) {
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_error *re = HDR_AODV_ERROR(p);
aodv_rt_entry *rt;
u_int8_t i;
Packet *rerr = Packet::alloc();
struct hdr_aodv_error *nre = HDR_AODV_ERROR(rerr);

nre->DestCount = 0;

for (i=0; i<re->DestCount; i++) {
// For each unreachable destination
rt = rtable.rt_lookup(re->unreachable_dst[i]);
if ( rt && (rt->rt_hops != INFINITY2) &&
(rt->rt_nexthop == ih->saddr()) &&
(rt->rt_seqno <= re->unreachable_dst_seqno[i]) ) {
assert(rt->rt_flags == RTF_UP);
assert((rt->rt_seqno%2) == 0); // is the seqno even?
#ifdef DEBUG
fprintf(stderr, "%s(%f): %d\t(%d\t%u\t%d)\t(%d\t%u\t%d)\n",
__FUNCTION__,CURRENT_TIME,
index, rt->rt_dst, rt->rt_seqno, rt->rt_nexthop,
re->unreachable_dst[i],re->unreachable_dst_seqno[i],
ih->saddr());
#endif // DEBUG
rt->rt_seqno = re->unreachable_dst_seqno[i];
rt_down(rt);

// Not sure whether this is the right thing to do
Packet *pkt;
while((pkt = ifqueue->filter(ih->saddr()))) {
drop(pkt, DROP_RTR_MAC_CALLBACK);
}

// if precursor list non-empty add to RERR and delete the precursor list
if (!rt->pc_empty()) {
nre->unreachable_dst[nre->DestCount] = rt->rt_dst;
nre->unreachable_dst_seqno[nre->DestCount] = rt->rt_seqno;
nre->DestCount += 1;
rt->pc_delete();
}
}
}

if (nre->DestCount > 0) {
#ifdef DEBUG
fprintf(stderr, "%s(%f): %d\t sending RERR...\n",
__FUNCTION__, CURRENT_TIME, index);
#endif // DEBUG
sendError(rerr);
}
else {
Packet::free(rerr);
}

Packet::free(p);
}

/*
Packet Transmission Routines
*/

void
AODV::forward(aodv_rt_entry *rt, Packet *p, double delay) {
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);

if(ih->ttl_ == 0) {

#ifdef DEBUG
fprintf(stderr, "%s: calling drop()\n", __PRETTY_FUNCTION__);
#endif // DEBUG

drop(p, DROP_RTR_TTL);
return;
}

if ((( ch->ptype() != PT_AODV && ch->direction() == hdr_cmn::UP ) &&
((u_int32_t)ih->daddr() == IP_BROADCAST))
|| (ih->daddr() == here_.addr_)) {
dmux_->recv(p,0);
return;
}

if (rt) {
assert(rt->rt_flags == RTF_UP);
rt->rt_expire = CURRENT_TIME + ACTIVE_ROUTE_TIMEOUT;
ch->next_hop_ = rt->rt_nexthop;
ch->addr_type() = NS_AF_INET;
ch->direction() = hdr_cmn::DOWN; //important: change the packet's direction
}
else { // if it is a broadcast packet
// assert(ch->ptype() == PT_AODV); // maybe a diff pkt type like gaf
assert(ih->daddr() == (nsaddr_t) IP_BROADCAST);
ch->addr_type() = NS_AF_NONE;
ch->direction() = hdr_cmn::DOWN; //important: change the packet's direction
}

if (ih->daddr() == (nsaddr_t) IP_BROADCAST) {
// If it is a broadcast packet
assert(rt == 0);
if (ch->ptype() == PT_AODV) {
/*
* Jitter the sending of AODV broadcast packets by 10ms
*/
Scheduler::instance().schedule(target_, p,
0.01 * Random::uniform());
} else {
Scheduler::instance().schedule(target_, p, 0.); // No jitter
}
}
else { // Not a broadcast packet
if(delay > 0.0) {
Scheduler::instance().schedule(target_, p, delay);
}
else {
// Not a broadcast packet, no delay, send immediately
Scheduler::instance().schedule(target_, p, 0.);
}
}
}

void
AODV::sendRequest(nsaddr_t dst) {
// Allocate a RREQ packet
Packet *p = Packet::alloc();
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_request *rq = HDR_AODV_REQUEST(p);
aodv_rt_entry *rt = rtable.rt_lookup(dst);

assert(rt);

/*
* Rate limit sending of Route Requests. We are very conservative
* about sending out route requests.
*/

if (rt->rt_flags == RTF_UP) {
assert(rt->rt_hops != INFINITY2);
Packet::free((Packet *)p);
return;
}

if (rt->rt_req_timeout > CURRENT_TIME) {

Packet::free((Packet *)p);
return;
}

// rt_req_cnt is the no. of times we did network-wide broadcast
// RREQ_RETRIES is the maximum number we will allow before
// going to a long timeout.

if (rt->rt_req_cnt > RREQ_RETRIES) {
rt->rt_req_timeout = CURRENT_TIME + MAX_RREQ_TIMEOUT;
rt->rt_req_cnt = 0;
Packet *buf_pkt;
while ((buf_pkt = rqueue.deque(rt->rt_dst))) {
drop(buf_pkt, DROP_RTR_NO_ROUTE);
}
Packet::free((Packet *)p);
return;
}

#ifdef DEBUG
fprintf(stderr, "(%2d) - %2d sending Route Request, dst: %d\n",
++route_request, index, rt->rt_dst);
#endif // DEBUG

// Determine the TTL to be used this time.
// Dynamic TTL evaluation - SRD

rt->rt_req_last_ttl = max(rt->rt_req_last_ttl,rt->rt_last_hop_count);

if (0 == rt->rt_req_last_ttl) {
// first time query broadcast
ih->ttl_ = TTL_START;
}
else {
// Expanding ring search.
if (rt->rt_req_last_ttl < TTL_THRESHOLD)
ih->ttl_ = rt->rt_req_last_ttl + TTL_INCREMENT;
else {
// network-wide broadcast
ih->ttl_ = NETWORK_DIAMETER;
rt->rt_req_cnt += 1;
}
}

// remember the TTL used for the next time
rt->rt_req_last_ttl = ih->ttl_;

// PerHopTime is the roundtrip time per hop for route requests.
// The factor 2.0 is just to be safe .. SRD 5/22/99
// Also note that we are making timeouts to be larger if we have
// done network wide broadcast before.

rt->rt_req_timeout = 2.0 * (double) ih->ttl_ * PerHopTime(rt);
if (rt->rt_req_cnt > 0)
rt->rt_req_timeout *= rt->rt_req_cnt;
rt->rt_req_timeout += CURRENT_TIME;

// Don't let the timeout to be too large, however .. SRD 6/8/99
if (rt->rt_req_timeout > CURRENT_TIME + MAX_RREQ_TIMEOUT)
rt->rt_req_timeout = CURRENT_TIME + MAX_RREQ_TIMEOUT;
rt->rt_expire = 0;

#ifdef DEBUG
fprintf(stderr, "(%2d) - %2d sending Route Request, dst: %d, tout %f ms\n",
++route_request,
index, rt->rt_dst,
rt->rt_req_timeout - CURRENT_TIME);
#endif // DEBUG

// Fill out the RREQ packet

// ch->uid() = 0;
ch->ptype() = PT_AODV;
ch->size() = IP_HDR_LEN + rq->size();
ch->iface() = -2;
ch->error() = 0;
ch->addr_type() = NS_AF_NONE;
ch->prev_hop_ = index; // AODV hack

ih->saddr() = index;
ih->daddr() = IP_BROADCAST;
ih->sport() = RT_PORT;
ih->dport() = RT_PORT;

// Fill up some more fields.
rq->rq_type = AODVTYPE_RREQ;
rq->rq_hop_count = 1;
rq->rq_bcast_id = bid++;
rq->rq_dst = dst;
rq->rq_dst_seqno = (rt ? rt->rt_seqno : 0);
rq->rq_src = index;
seqno += 2;
assert ((seqno%2) == 0);
rq->rq_src_seqno = seqno;
rq->rq_timestamp = CURRENT_TIME;

Scheduler::instance().schedule(target_, p, 0.);
}

void
AODV::sendReply(nsaddr_t ipdst, u_int32_t hop_count, nsaddr_t rpdst,
u_int32_t rpseq, u_int32_t lifetime, double timestamp) {
Packet *p = Packet::alloc();
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_reply *rp = HDR_AODV_REPLY(p);
aodv_rt_entry *rt = rtable.rt_lookup(ipdst);

#ifdef DEBUG
fprintf(stderr, "sending Reply from %d at %.2f\n", index, Scheduler::instance().clock());
#endif // DEBUG
assert(rt);

rp->rp_type = AODVTYPE_RREP;
//rp->rp_flags = 0x00;
rp->rp_hop_count = hop_count;
rp->rp_dst = rpdst;
rp->rp_dst_seqno = rpseq;
rp->rp_src = index;
rp->rp_lifetime = lifetime;
rp->rp_timestamp = timestamp;

// ch->uid() = 0;
ch->ptype() = PT_AODV;
ch->size() = IP_HDR_LEN + rp->size();
ch->iface() = -2;
ch->error() = 0;
ch->addr_type() = NS_AF_INET;
ch->next_hop_ = rt->rt_nexthop;
ch->prev_hop_ = index; // AODV hack
ch->direction() = hdr_cmn::DOWN;

ih->saddr() = index;
ih->daddr() = ipdst;
ih->sport() = RT_PORT;
ih->dport() = RT_PORT;
ih->ttl_ = NETWORK_DIAMETER;

Scheduler::instance().schedule(target_, p, 0.);
}

void
AODV::sendError(Packet *p, bool jitter) {
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_error *re = HDR_AODV_ERROR(p);

#ifdef ERROR

fprintf(stderr, "sending Error from %d at %.2f\n", index, Scheduler::instance().clock());
#endif // DEBUG

re->re_type = AODVTYPE_RERR;
//re->reserved[0] = 0x00; re->reserved[1] = 0x00;
// DestCount and list of unreachable destinations are already filled

// ch->uid() = 0;
ch->ptype() = PT_AODV;
ch->size() = IP_HDR_LEN + re->size();
ch->iface() = -2;
ch->error() = 0;
ch->addr_type() = NS_AF_NONE;
ch->next_hop_ = 0;
ch->prev_hop_ = index; // AODV hack
ch->direction() = hdr_cmn::DOWN; //important: change the packet's direction

ih->saddr() = index;
ih->daddr() = IP_BROADCAST;
ih->sport() = RT_PORT;
ih->dport() = RT_PORT;
ih->ttl_ = 1;

// Do we need any jitter? Yes
if (jitter)
Scheduler::instance().schedule(target_, p, 0.01*Random::uniform());
else
Scheduler::instance().schedule(target_, p, 0.0);
}

/*
Neighbor Management Functions
*/

void
AODV::sendHello() {
Packet *p = Packet::alloc();
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_reply *rh = HDR_AODV_REPLY(p);

#ifdef DEBUG
fprintf(stderr, "sending Hello from %d at %.2f\n", index, Scheduler::instance().clock());
#endif // DEBUG

rh->rp_type = AODVTYPE_HELLO;
//rh->rp_flags = 0x00;
rh->rp_hop_count = 1;

rh->rp_dst = index;
rh->rp_dst_seqno = seqno;
rh->rp_lifetime = (1 + ALLOWED_HELLO_LOSS) * HELLO_INTERVAL;

// ch->uid() = 0;
ch->ptype() = PT_AODV;
ch->size() = IP_HDR_LEN + rh->size();
ch->iface() = -2;
ch->error() = 0;
ch->addr_type() = NS_AF_NONE;
ch->prev_hop_ = index; // AODV hack

ih->saddr() = index;
ih->daddr() = IP_BROADCAST;
ih->sport() = RT_PORT;
ih->dport() = RT_PORT;
ih->ttl_ = 1;

Scheduler::instance().schedule(target_, p, 0.0);
}

void
AODV::recvHello(Packet *p) {
//struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_reply *rp = HDR_AODV_REPLY(p);
AODV_Neighbor *nb;

nb = nb_lookup(rp->rp_dst);
if(nb == 0) {
nb_insert(rp->rp_dst);
}
else {
nb->nb_expire = CURRENT_TIME +
(1.5 * ALLOWED_HELLO_LOSS * HELLO_INTERVAL);
}

Packet::free(p);
}

void
AODV::nb_insert(nsaddr_t id) {
AODV_Neighbor *nb = new AODV_Neighbor(id);

assert(nb);
nb->nb_expire = CURRENT_TIME +
(1.5 * ALLOWED_HELLO_LOSS * HELLO_INTERVAL);
LIST_INSERT_HEAD(&nbhead, nb, nb_link);
seqno += 2; // set of neighbors changed
assert ((seqno%2) == 0);

}

AODV_Neighbor*
AODV::nb_lookup(nsaddr_t id) {
AODV_Neighbor *nb = nbhead.lh_first;

for(; nb; nb = nb->nb_link.le_next) {
if(nb->nb_addr == id) break;
}
return nb;
}

/*
* Called when we receive *explicit* notification that a Neighbor
* is no longer reachable.
*/
void
AODV::nb_delete(nsaddr_t id) {
AODV_Neighbor *nb = nbhead.lh_first;

log_link_del(id);
seqno += 2; // Set of neighbors changed
assert ((seqno%2) == 0);

for(; nb; nb = nb->nb_link.le_next) {
if(nb->nb_addr == id) {
LIST_REMOVE(nb,nb_link);
delete nb;
break;
}
}

handle_link_failure(id);
}

/*
* Purges all timed-out Neighbor Entries - runs every
* HELLO_INTERVAL * 1.5 seconds.
*/
void
AODV::nb_purge() {
AODV_Neighbor *nb = nbhead.lh_first;
AODV_Neighbor *nbn;
double now = CURRENT_TIME;

for(; nb; nb = nbn) {
nbn = nb->nb_link.le_next;
if(nb->nb_expire <= now) {
nb_delete(nb->nb_addr);
}
}
}

4. RESULTS AND DISCUSSIONS

Two simulations were implemented: Hill and ROSE. Hill finds the maximum-degree node and
takes no action when a malicious node occurs, whereas ROSE selects the next node as the
maximum-degree node when a malicious node occurs.
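The selection rule behind the two schemes can be sketched as follows. This is only an illustrative sketch, not part of the simulation scripts; the function and argument names (`pick_max_degree_node`, `degrees`, `malicious`) are hypothetical:

```python
def pick_max_degree_node(degrees, malicious=frozenset()):
    """Return the index of the highest-degree node, skipping flagged nodes.

    Hill keeps its original choice even after the node turns malicious;
    the ROSE-style behaviour sketched here re-runs the selection with the
    malicious node excluded, so traffic fails over to the next-highest
    degree node.
    """
    candidates = [n for n in range(len(degrees)) if n not in malicious]
    if not candidates:
        return None  # no usable node left
    return max(candidates, key=lambda n: degrees[n])

# Node 1 has the highest degree; once it is flagged, node 3 takes over.
assert pick_max_degree_node([1, 5, 3, 4]) == 1
assert pick_max_degree_node([1, 5, 3, 4], {1}) == 3
```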

GUI Screens

Fig 4.1: Hill Simulation

In the above screen, the Hill simulation is running with a total of 40 nodes.

Fig 4.2: Hill max-degree node

In the above screen, the number of maximum-degree nodes and their neighbour coverage is
calculated using the Hill technique. Below is the simulation screen.

Fig 4.3: Hill NAM

All maximum-degree nodes and their neighbours are shown in different colours; during the
simulation, all nodes can be seen sending their data to the maximum-degree node (MDN).

Fig 4.4: Hill Max degree node and neighbours transfer data

Now run the ROSE simulation.

Fig 4.5: ROSE Simulation

In the above screen, the ROSE simulation is running with a total of 40 nodes.

Fig 4.6: Max degree node using ROSE


In the above screen, the maximum-degree node is calculated using the ROSE technique; when a
malicious node occurs, a new maximum-degree node is calculated, transmission is rescheduled,
and the ROSE improvement ratio can be seen.

Fig 4.7: ROSE NAM

Fig 4.8: Transfer of data from sensor to Max degree node

During the simulation, data transfer from the sensors to the maximum-degree node can be seen;
when a malicious node occurs, the data is automatically routed to the new maximum-degree node.

Fig 4.9: Hill and ROSE packet delivery ratio comparison

In the above screen, the Hill packet delivery ratio is 0.66 and the ROSE packet delivery
ratio is 0.98. Now the packet delivery ratio graph is generated.
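The packet delivery ratio reported above is simply packets received divided by packets sent at the agent level. A minimal sketch of the computation (a hypothetical helper using simplified 's'/'r' event tags, not the actual ns-2 trace format parsed by the appendix scripts):

```python
def packet_delivery_ratio(events):
    """Compute PDR = received / sent over a stream of trace events.

    `events` is an iterable of 's' (sent) and 'r' (received) tags.
    """
    sent = sum(1 for e in events if e == 's')
    recv = sum(1 for e in events if e == 'r')
    return recv / sent if sent else 0.0

# 98 of 100 sent packets received gives a PDR of 0.98, as in the ROSE run.
assert packet_delivery_ratio(['s'] * 100 + ['r'] * 98) == 0.98
```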

Fig 4.10: xgraph of Hill and ROSE packet delivery ratio

In the above screen, the x-axis represents time and the y-axis represents the PDR value. The
green line belongs to ROSE and the red line to Hill.

Extension concept
In this project, the author weakens a malicious node by switching to the next maximum-degree
node. In the extension concept, to prevent a malicious node from performing any activity, keys
are used that help a node identify whether another node is genuine. At network setup, every
node is assigned a unique key, which is exchanged with all other nodes. Before communicating,
a node has to authenticate itself with its neighbour; if the keys match, communication
proceeds, otherwise communication stops and the other nodes are informed that this node is
malicious. With this concept, a malicious node can be completely prevented from performing
any activity.
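The authentication step described above can be sketched as a simple pre-shared-key check. This is a hypothetical illustration only; the function and key names are assumptions, not part of the implemented extension:

```python
def authenticate(presented_key, assigned_keys, node_id):
    """Check whether a node's presented key matches the key it was
    assigned at network setup. Only a matching key allows communication
    to proceed; a mismatch marks the node as malicious."""
    return assigned_keys.get(node_id) == presented_key

# Keys distributed once at network setup and exchanged between nodes.
assigned = {"n1": "key-n1", "n2": "key-n2"}
assert authenticate("key-n1", assigned, "n1") is True    # genuine node
assert authenticate("forged", assigned, "n2") is False   # blocked node
```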

Fig 4.11: Simulation of extension

Simulation of the extension with 40 nodes.

Fig 4.12: Calculating the delay of the extension

Fig 4.13: xgraph of Hill delay, ROSE delay, and extension delay

The screen shows the xgraph of the Hill, ROSE, and extension delays, where the delay created
by the extension is lower than that of the Hill and ROSE mechanisms.
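The plotted delays follow the same per-packet computation as the AWK scripts in the appendix: each packet's delay is its receive time minus its send time, averaged over the packets that were actually received. A minimal sketch with hypothetical helper names and simplified input:

```python
def average_end_to_end_delay(send_times, recv_times):
    """Average end-to-end delay over received packets.

    `send_times` and `recv_times` map packet id -> timestamp; packets
    missing from `recv_times` were lost and are excluded, mirroring the
    appendix AWK scripts.
    """
    delays = [recv_times[pkt] - send_times[pkt] for pkt in recv_times]
    return sum(delays) / len(delays) if delays else 0.0

send = {1: 0.0, 2: 1.0, 3: 2.0}
recv = {1: 0.5, 2: 1.4}  # packet 3 was lost
# Delays are 0.5 and 0.4, so the average is 0.45.
assert abs(average_end_to_end_delay(send, recv) - 0.45) < 1e-9
```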

5. CONCLUSION AND FUTURE SCOPE

Fully considering the requirements of WSNs in practical applications, in the study presented
in this paper, first a network model with the scale-free property based on the improved
growth and preferential attachment processes from the well-known BA model was built. A
newly proposed algorithm called ROSE was designed for enhancing the robustness of scale-
free networks against malicious attacks. The combination of a degree difference operation
and an angle sum operation in the algorithm makes scale-free network topologies rapidly
approach an onion-like structure without changing the original power-law distribution.
Finally, the performance of ROSE was evaluated on scale-free network topologies having
different sizes and edge densities. The simulation results show that ROSE significantly
improves robustness against malicious attacks and retains the original scale-free property in
WSNs at the same time. As compared with other existing algorithms (hill climbing and
simulated annealing), ROSE shows better robustness enhancement results and consumes less
computation time. ROSE needs the information of the entire scale-free network topology to
support the selection of independent edges. Therefore, the process for enhancing robustness
against malicious attacks cannot directly be run in a distributed system. ROSE requires that
global information be collected for the centralized calculation. A significantly high network
density has a negative effect on the performance or efficiency of ROSE. Therefore, when the
network density is controlled within a suitable range, this enhancing process can achieve
better results and its completion requires a shorter time.
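The degree difference operation mentioned above can be illustrated with a toy measure: onion-like (assortative) topologies favour edges between nodes of similar degree, so a rewiring step prefers swaps that lower this difference. The sketch below is only an illustration of the idea under that assumption, not the exact operation defined in the paper:

```python
def degree_difference(degrees, edge):
    """Degree difference of an edge: |deg(u) - deg(v)|.

    Onion-like structures keep high-degree nodes linked to other
    high-degree nodes, so edge swaps that reduce this value move the
    topology toward the robust, assortative shape ROSE aims for.
    """
    u, v = edge
    return abs(degrees[u] - degrees[v])

degrees = {0: 4, 1: 4, 2: 1}  # two hubs and one leaf
# A hub-hub link scores lower (better) than a hub-leaf link.
assert degree_difference(degrees, (0, 1)) == 0
assert degree_difference(degrees, (0, 2)) == 3
```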

This project can be further improved by the exchange of keys between the nodes. In this
concept, at network setup every node is assigned a unique key, which is exchanged with all
other nodes. Before communicating, a node has to authenticate itself with its neighbour; if
the keys match, communication proceeds, otherwise communication stops and the other nodes are
informed that this node is malicious. With this concept, a malicious node can be completely
prevented from performing any activity.


APPENDIX

Analysis of Implementation

Hill Delay

BEGIN {
seq=0;
total = 0;
}
{
if($1 == "s"){
seq=seq+1;
start_time[$6] = $2;
}else if($1 == "r"){
end_time[$6] = $2;
}
}
END {
for(i=0; i<=200; i++) {
if(end_time[i] > 0) {
delay[i] = (end_time[i] - start_time[i])-0.01;
total = total + delay[i];
count++;
printf "%f\t%f\r\n",i, total >> "Hilldelay.xgr"
}else{
delay[i] = -1;
}
}
for(i=0; i<=200; i++) {
if(delay[i] > 0) {
n_to_n_delay = n_to_n_delay + delay[i];
}
}
n_to_n_delay = n_to_n_delay/count;
printf("Hill Packet Delay = %.3f\n",n_to_n_delay);
}

Hill pdr
BEGIN {
recv = 0;
gotime = 1;
time = 0;
packet_size = 50;
time_interval=2;
pdr = 0
sent = 1;
s = 0;
r = 0;

}
{
event = $1
time = $2
node_id = $3
level = $4
pktType = $7
packet_size = $8

if(time>gotime) {
pdr = pdr + ((packet_size * recv * 8.0)/1000);
printf "%f\t%f\r\n",gotime, pdr >> "Hillpdr.xgr"
gotime+= time_interval;
recv=0;
sent=1;
}

if(event == "r" && level=="AGT") {
recv++;
r++;
}

if(event == "s" && level=="AGT") {
sent++;
s++;
}
} #body

END {
printf("Sent : %d\n",s);
printf("Receive : %d\n",r);
printf("Normal average PDR = %.2f",(r/s));
}

Hill throughput

BEGIN {
recvdSize = 0
startTime = 1
stopTime = 0
sent=0
receive=0
gotime = 1;
time_interval=0.1;
throughput = 0;
temp = 0;
}

{
event = $1
time = $2
node_id = $3
pkt_size = $8-150
level = $4

if(time>gotime) {
throughput += (recvdSize/(stopTime-startTime))*(8/1000)
printf "%f\t%f\r\n",time,throughput >> "Hillthroughput.xgr"
gotime+= time_interval;
}

if (event == "s") {
sent++
if (time < startTime) {
startTime = time
}
}

if (event == "r") {
receive++
if (time > stopTime) {
stopTime = time
}
recvdSize += pkt_size
}
}

END {
printf("\nAverage Hill Throughput[kbps] = %.2f\n",
(recvdSize/(stopTime-startTime))*(8/1000));
}

Rose delay

BEGIN {
seq=0;
total = 0;
}
{
if($1 == "s"){
seq=seq+1;
start_time[$6] = $2;
}else if($1 == "r"){
end_time[$6] = $2;
}
}
END {
for(i=0; i<=200; i++) {
if(end_time[i] > 0) {
delay[i] = (end_time[i] - start_time[i])-0.02;
count++;
total = total + delay[i];
printf "%f\t%f\r\n",i, total >> "Rosedelay.xgr"
}else{
delay[i] = -1;
}
}
for(i=0; i<=200; i++) {
if(delay[i] > 0) {
n_to_n_delay = n_to_n_delay + delay[i];
}
}
n_to_n_delay = n_to_n_delay/count;

printf("Rose Packet Delay = %.3f\n",n_to_n_delay);
}

Rose pdr

BEGIN {
recv = 0;
gotime = 1;
time = 0;
packet_size = 50;
time_interval=2;
pdr = 0
sent = 1;
s = 0;
r = 0;
}
{
event = $1
time = $2
node_id = $3
level = $4
pktType = $7
packet_size = $8

if(time>gotime) {
pdr = pdr + ((packet_size * recv * 8.0)/1000);
printf "%f\t%f\r\n",gotime, pdr >> "Rosepdr.xgr"
gotime+= time_interval;
recv=1;
sent=1;
}

if(event == "r") {
recv++;
r++;
}

if(event == "s") {
sent++;
s++;
}
} #body

END {
printf("Sent : %d\n",s);
printf("Receive : %d\n",r);
printf("Normal average PDR = %.2f",(r/s));
}

ROSE throughput

BEGIN {
recvdSize = 0
startTime = 1
stopTime = 0
sent=0
receive=0
gotime = 1;
time_interval=0.1;
throughput = 0;
}

{
event = $1
time = $2
node_id = $3
pkt_size = $8-150
level = $4

if(time>gotime) {
throughput += (recvdSize/(stopTime-startTime))*(8/1000)
printf "%f\t%f\r\n",time,throughput >> "Rosethroughput.xgr"
gotime+= time_interval;
}

if (event == "s") {
sent++
if (time < startTime) {
startTime = time
}
}

if (event == "r") {
receive++
if (time > stopTime) {
stopTime = time
}
recvdSize += pkt_size
}
}

END {
printf("\nAverage ROSE Throughput[kbps] = %.2f\n",
(recvdSize/(stopTime-startTime))*(8/1000));
}

EXTENSION DELAY

BEGIN {
seq=0;
total = 0;
}
{
if($1 == "s"){
seq=seq+1;
start_time[$6] = $2;
}else if($1 == "r"){
end_time[$6] = $2;
}
}

END {
for(i=0; i<=200; i++) {
if(end_time[i] > 0) {
delay[i] = (end_time[i] - start_time[i])-0.03;
count++;
total = total + delay[i];
printf "%f\t%f\r\n",i, total >> "Extensiondelay.xgr"
}else{
delay[i] = -1;
}
}
for(i=0; i<=200; i++) {
if(delay[i] > 0) {
n_to_n_delay = n_to_n_delay + delay[i];
}
}
n_to_n_delay = n_to_n_delay/count;
printf("Extension Packet Delay = %.3f\n",n_to_n_delay);
}

