Main Documentation
1. INTRODUCTION
1.1 Introduction
Wired networks, also called Ethernet networks, are the most common type of local area network (LAN). Ethernet is the fastest wired network protocol, with connection speeds of 10 megabits per second (Mbps) to 100 Mbps or higher. Wired networks can also be used as part of other wired and wireless networks. To connect a computer to a network with an Ethernet cable, the computer must have an Ethernet adapter (also called a network interface card, or NIC).
A wireless network uses high-frequency radio waves rather than wires to communicate between nodes. Individuals and organizations can use this option to expand their existing wired networks or to operate without networking cable, which increases mobility but decreases range. There are two main types of wireless networking: peer-to-peer (ad-hoc) and infrastructure. The basic communication principle is the same in both: a computer's wireless adapter first changes the data into radio signals and then transmits these signals through an antenna, and a receiving device equipped with a wireless networking interface card decodes them. In an ad-hoc network, each computer can communicate directly with the other wireless computers. In an infrastructure network, the access point acts like a hub, providing connectivity for the wireless computers. It can connect (or bridge) the wireless LAN to a wired LAN, allowing the wireless computers to access LAN resources such as file servers or existing Internet connectivity.
Wireless sensor networks (WSNs) are a particular type of ad-hoc network, comprised mainly of a large number of deployed sensor nodes with limited resources and one or more base stations (BS), or sinks, which typically serve as the access point for the user or as a gateway to another network. Nodes collect environmental data and transmit it over wireless links in an autonomous manner. Each node in a WSN therefore plays two roles: it collects data and it routes data back to the base station.
1.3 Summary
Due to the recent proliferation of cyber-attacks, improving the robustness of wireless sensor networks (WSNs) so that they can withstand node failures has become a critical issue. Scale-free WSNs are important because they tolerate random attacks very well; however, they can be vulnerable to malicious attacks, which particularly target certain important nodes. To address this shortcoming, this project presents a new modelling strategy to generate scale-free network topologies that considers the constraints in WSNs, such as the communication range and the threshold on the maximum node degree. Then, ROSE, a novel robustness-enhancing algorithm for scale-free WSNs, is proposed.
2. Literature Survey
Introduction: To understand the background work in this particular field, this chapter discusses various related research works.
Energy-efficient routing in wireless sensor networks (WSNs) has been studied widely to enhance network performance. Various nature-inspired routing mechanisms have been proposed to achieve scalable solutions. However, conventional nature-inspired optimization algorithms are insufficient for solving discrete routing optimization problems. In this study, a new discrete particle swarm optimization (PSO) based routing protocol is designed to achieve better performance. In the new protocol, firstly, two new fitness functions with energy awareness are formulated for clustering and routing, respectively. Secondly, a novel greedy discrete PSO with memory is put forward to build an optimal routing tree. In this protocol, a particle's position and velocity are redefined under a discrete scenario; particle update rules are reconsidered based on the network topology; and a greedy search strategy is designed to drive particles to find better positions quickly. Besides, search histories are memorized to accelerate convergence. Simulation results show the efficiency and effectiveness of the new protocol.
Another surveyed work considers multi-core embedded wireless sensor networks (MCEWSNs). It compares and analyzes the performance of an SMP (an Intel-based SMP) and a TMA (Tilera's TILEPro64) based on a parallelized information fusion application for various performance metrics (e.g., runtime, speedup, efficiency, cost, and performance per watt). Results reveal that TMAs exploit data locality effectively and are more suitable for MCEWSN applications that require integer manipulation of sensor data, such as information fusion, and have little or no communication between the parallelized tasks. To demonstrate the practical relevance of MCEWSNs, the paper also discusses several state-of-the-art multi-core embedded sensor node prototypes developed in academia and industry.
The backpressure scheduling scheme has been applied in the Internet of Things, where it can control network congestion effectively and increase network throughput. However, in a large-scale Emergency Internet of Things (EIoT), emergency packets may be generated by urgent events or situations. The traditional backpressure scheduling scheme explores all possible routes between the source and destination nodes, which can result in superfluously long paths for packets. Therefore, the end-to-end delay increases and the real-time performance of emergency packets cannot be guaranteed. To address this shortcoming, this paper proposes EABS, an event-aware backpressure scheduling scheme for EIoT. A backpressure queue model with emergency packets is first devised based on the analysis of the arrival process of different packets. Meanwhile, EABS combines the shortest path with the backpressure scheme when selecting the next-hop node. Emergency packets are forwarded along the shortest path while avoiding network congestion according to the queue backlog difference. Extensive experimental results verify that EABS can reduce the average end-to-end delay and increase the average forwarding percentage. For the emergency packets, real-time performance is guaranteed. Moreover, we compare EABS with two existing backpressure scheduling schemes, showing that EABS outperforms both of them.
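To make the EABS next-hop rule concrete, the following C++ sketch illustrates one way the selection could look. It is only an illustration, not the authors' implementation: the Neighbor structure and the hop_count and backlog fields are assumed for the example, and the sketch simply restricts an emergency packet to shortest-path neighbours while a normal packet follows the plain backpressure rule (largest queue-backlog difference).

#include <vector>
#include <climits>

// Hypothetical view of a neighbour as seen by the scheduler.
struct Neighbor {
    int id;
    int hop_count;   // hops from this neighbour to the destination
    int backlog;     // current queue backlog of this neighbour
};

// Choose a next hop for a packet. my_backlog is the backlog of the
// forwarding node itself. Returns -1 if no neighbour is available.
int selectNextHop(const std::vector<Neighbor>& nbrs, int my_backlog, bool emergency) {
    int min_hops = INT_MAX;
    if (emergency) {                               // emergency packets: consider only
        for (const Neighbor& n : nbrs)             // neighbours on a shortest path
            if (n.hop_count < min_hops) min_hops = n.hop_count;
    }
    int best = -1;
    int best_diff = INT_MIN;
    for (const Neighbor& n : nbrs) {
        if (emergency && n.hop_count != min_hops) continue;
        int diff = my_backlog - n.backlog;         // backpressure weight
        if (diff > best_diff) { best_diff = diff; best = n.id; }
    }
    return best;
}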
4. Optimization of network robustness to waves of targeted and
random attacks
We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of Ns weakly connected satellites, a property that we call n/Ns-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web designed by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
Summary: Going through all this literature reveals a gap, and to address it ROSE, a novel robustness-enhancing algorithm for scale-free WSNs, is proposed. Given a scale-free topology, ROSE exploits the position and degree information of nodes to rearrange the edges to resemble an onion-like structure, which has been proven to be robust against malicious attacks. Meanwhile, ROSE keeps the degree of each node in the topology unchanged such that the resulting topology remains scale-free. Extensive experimental results verify that our new modelling strategy indeed generates scale-free network topologies for WSNs, and that ROSE can significantly improve the robustness of the network topologies generated by our modelling strategy.
3. Project
Because of the limited communication range in WSNs, each node does not have sufficient neighbours and cannot establish very many edges. Hence, the preferential attachment property of the BA model cannot be simulated directly in WSNs. In recent years, many researchers have focused on the application of scale-free network topologies in WSNs. In scale-free networks, a small number of nodes have very high degrees, which renders these networks vulnerable to malicious attacks. When a node with high degree fails, the large number of edges incident on it are removed at the same time. The entire network topology is thus quickly fragmented. Therefore, the main purpose of this study was to improve the robustness of scale-free networks in WSNs against malicious attacks. The addition of edges or relay nodes can directly solve or alleviate this problem. However, additional edges destroy the original scale-free property and consume additional energy. Therefore, the process of optimizing network robustness against malicious attacks cannot change the degree distribution of the initial network topology.
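Before describing ROSE, it helps to fix what "robustness" means in the remainder of this chapter. The sketch below evaluates a topology under a malicious, highest-degree-first attack. It assumes the commonly used Schneider robustness measure R = (1/N) Σ s(Q), where s(Q) is the fraction of nodes left in the largest connected component after removing Q nodes; the Graph type and function names are illustrative and are not taken from the project code.

#include <vector>
#include <queue>
#include <algorithm>

using Graph = std::vector<std::vector<int>>;   // adjacency lists

// Size of the largest connected component, ignoring removed nodes.
static int largestComponent(const Graph& g, const std::vector<bool>& removed) {
    int n = g.size(), best = 0;
    std::vector<bool> seen(n, false);
    for (int s = 0; s < n; ++s) {
        if (removed[s] || seen[s]) continue;
        int size = 0;
        std::queue<int> q;
        q.push(s); seen[s] = true;
        while (!q.empty()) {
            int u = q.front(); q.pop(); ++size;
            for (int v : g[u])
                if (!removed[v] && !seen[v]) { seen[v] = true; q.push(v); }
        }
        best = std::max(best, size);
    }
    return best;
}

// Robustness R under a malicious attack that repeatedly removes the
// currently highest-degree surviving node.
double robustnessR(Graph g) {
    int n = g.size();
    std::vector<bool> removed(n, false);
    std::vector<int> degree(n);
    for (int u = 0; u < n; ++u) degree[u] = g[u].size();
    double sum = 0.0;
    for (int q = 0; q < n; ++q) {
        int target = -1;
        for (int u = 0; u < n; ++u)                        // pick the max-degree survivor
            if (!removed[u] && (target < 0 || degree[u] > degree[target])) target = u;
        removed[target] = true;
        for (int v : g[target]) if (!removed[v]) --degree[v];
        sum += double(largestComponent(g, removed)) / n;   // s(Q) after Q removals
    }
    return sum / n;                                        // R = (1/N) * sum of s(Q)
}

A topology whose R stays high under this attack keeps a large connected fraction even as its hubs are removed; this is exactly the quantity the edge swaps described next try to increase.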
3.2 Proposed System:
In general, the design of ROSE is based on the observation of Schneider et al. that graphs exhibiting an onion-like structure are robust to malicious attacks. Schneider et al. described the onion-like structure as a structure "consisting of a core of highly connected nodes hierarchically surrounded by rings of nodes with decreasing degrees." Note that Schneider et al. validated their observation only by extensive simulations. One year later, the theoretical analysis supporting this observation was provided by Tanizawa et al., who analyzed one specific topology of this family, called "interconnected random regular graphs," and proved its robustness against malicious attacks.
Given that the above observation about the onion-like structure has been validated both experimentally and theoretically, the ROSE algorithm aims to transform network topologies so that they exhibit the onion-like structure. Specifically, ROSE involves two phases: a degree difference operation and an angle sum operation. In the following, we first introduce the concept of independent edges, which is used in both operations; then, we describe the basics of these two operations.
1) Each of nodes i, j, k, and l must be in the communication range of the other three nodes. This ensures that every node has the ability to establish a connection with the others.
2) There are no extra connections between nodes i, j, k, and l, except the existing edges eij and ekl.
For a pair of independent edges eij and ekl, all three possible connection methods of nodes i, j, k, and l are illustrated in Fig. 3.1: Fig. 3.1(a) represents the original connection method, eij and ekl; Fig. 3.1(b) represents the first alternative connection method, eik and ejl; and Fig. 3.1(c) represents the second alternative connection method, eil and ejk. The primary idea of robustness enhancement is to improve the robustness of a scale-free network topology by swapping pairs of edges into the alternative connection methods. If the value of R is increased after a swap is executed, the swap is accepted; otherwise, it is not. Note that when a pair of edges eij and ekl is swapped into one of the alternative connection methods, the degrees of nodes i, j, k, and l are preserved.
In the degree difference operation and angle sum operation described next, a pair of edges is considered for swapping only when the two edges are independent; otherwise, it is not. This not only satisfies the communication range constraint on the sensor nodes in WSNs, but also dramatically reduces the number of pairs of edges considered.
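As an illustration of a single swap, the sketch below (with hypothetical names, and reusing a robustnessR() function such as the one sketched earlier in this chapter) checks the independence conditions for a candidate pair of edges and accepts the alternative wiring only when R does not decrease; the node degrees are preserved by the swap itself.

#include <vector>
#include <algorithm>
#include <cmath>

struct Node { double x, y; };
using Graph = std::vector<std::vector<int>>;

double robustnessR(Graph g);   // e.g. the attack-based sketch shown earlier

static bool inRange(const Node& a, const Node& b, double r) {
    return std::hypot(a.x - b.x, a.y - b.y) <= r;
}
static bool connected(const Graph& g, int u, int v) {
    for (int w : g[u]) if (w == v) return true;
    return false;
}
static void removeEdge(Graph& g, int u, int v) {
    g[u].erase(std::remove(g[u].begin(), g[u].end(), v), g[u].end());
    g[v].erase(std::remove(g[v].begin(), g[v].end(), u), g[v].end());
}
static void addEdge(Graph& g, int u, int v) { g[u].push_back(v); g[v].push_back(u); }

// e_ij and e_kl are independent if all four nodes are mutually within the
// communication range r and no edges other than e_ij and e_kl exist among them.
bool independent(const Graph& g, const std::vector<Node>& pos, double r,
                 int i, int j, int k, int l) {
    int v[4] = {i, j, k, l};
    for (int a = 0; a < 4; ++a)
        for (int b = a + 1; b < 4; ++b)
            if (!inRange(pos[v[a]], pos[v[b]], r)) return false;
    return !connected(g, i, k) && !connected(g, i, l) &&
           !connected(g, j, k) && !connected(g, j, l);
}

// Try to rewire {e_ij, e_kl} into the first alternative {e_ik, e_jl};
// keep the change only if the robustness does not decrease.
bool trySwap(Graph& g, const std::vector<Node>& pos, double r,
             int i, int j, int k, int l) {
    if (!independent(g, pos, r, i, j, k, l)) return false;
    double before = robustnessR(g);
    removeEdge(g, i, j); removeEdge(g, k, l);
    addEdge(g, i, k);    addEdge(g, j, l);
    if (robustnessR(g) >= before) return true;   // accept the swap
    removeEdge(g, i, k); removeEdge(g, j, l);    // otherwise revert
    addEdge(g, i, j);    addEdge(g, k, l);
    return false;
}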
3.3 Algorithm:
In this section, we provide the design of the specific robustness enhancing algorithms
for a scale-free network topology.
Input: N, V, r, m
Output: Lsti
procedure BANETWORKBUILD(A)
1: for all vi ∈ V do
2: vi ← receiveStartingSignal()
3: Set a timer
4: if the timer of vi expired then
5: broadcastStartPacket(vi)
6: vi ← receiveDisconnectNeighborDegree()
7: if the degrees of the nodes in Vi are all zero then
8: ΠLocal(j) = 1/Ni
9: Connect to each neighbour in Vi with equal probability
10: else
11: for all vj ∈ Vi do
12: ΠLocal(j) = dj / Σvk∈Vi dk
13: end for
14: end if
15: Select m neighbours by the roulette method according to ΠLocal
16: Establish edges between vi and the selected neighbours
17: Update Lsti and broadcast it to all neighbour nodes
18: Broadcast a starting signal for the unjoined nodes
19: end if
20: end for
This algorithm operates as follows. First, a newly joined node i receives a starting signal
and sets a timer (Lines 3 and 4). When the timer expires, it sends an add connection
request to all neighbours. Then, it receives the degree information of nodes that are in the
local world of node i (Lines 6 and 7). Next, node i calculates the connection probability
of each neighbour based on feedback information. If the degree of each neighbour node
is zero, node i connects with them with equal probability (Lines 8 to 9); otherwise, node i
calculates the connection probability according to the degree of neighbour nodes (Lines
11 to 12). The codes in Lines 15 to 16 describe the process of establishing m edges
between node i and its neighbours by the roulette method. Next, the neighbour list Lsti is
updated and broadcast to all neighbour nodes. Finally, node i broadcasts a starting signal
for the unjoined nodes (Line 18).
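The growth step that Algorithm 1 carries out in a distributed way can be pictured centrally as in the sketch below. This is only an illustration under simplifying assumptions: a joining node treats the already joined nodes within its communication range r as its local world and connects to m of them with probability proportional to their current degree (the roulette method); the function and variable names are not taken from the project code.

#include <vector>
#include <random>
#include <cmath>
#include <cstddef>

struct Node { double x, y; };
using Graph = std::vector<std::vector<int>>;

// Attach node 'joining' to up to m in-range, already joined neighbours,
// chosen with probability proportional to their current degree
// (equal probability if all local degrees are zero).
void preferentialAttach(Graph& g, const std::vector<Node>& pos,
                        int joining, double r, int m, std::mt19937& rng) {
    std::vector<int> local;                        // local world of the new node
    for (int v = 0; v < joining; ++v)              // nodes 0..joining-1 joined earlier
        if (std::hypot(pos[v].x - pos[joining].x, pos[v].y - pos[joining].y) <= r)
            local.push_back(v);
    for (int e = 0; e < m && !local.empty(); ++e) {
        std::vector<double> w(local.size());
        double total = 0.0;
        for (std::size_t i = 0; i < local.size(); ++i) {
            w[i] = g[local[i]].size();             // weight = current degree
            total += w[i];
        }
        std::size_t pick;
        if (total == 0.0) {                        // all degrees zero: uniform choice
            pick = std::uniform_int_distribution<std::size_t>(0, local.size() - 1)(rng);
        } else {                                   // roulette-wheel selection
            double x = std::uniform_real_distribution<double>(0.0, total)(rng);
            pick = 0;
            while (pick + 1 < local.size() && x > w[pick]) { x -= w[pick]; ++pick; }
        }
        g[joining].push_back(local[pick]);         // establish the edge
        g[local[pick]].push_back(joining);
        local.erase(local.begin() + pick);         // avoid duplicate edges
    }
}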
Input: A, E, N, r
Output: A
1: procedure DEGREEDIFFERENCEOPERATION(A)
2: for each unmarked pair of independent edges do
3: Randomly select a pair of independent edges eij and ekl
4: Obtain the degrees di, dj, dk and dl
5: SUB0 = |di − dj| + |dk − dl|
6: SUB1 = |di − dk| + |dj − dl|
7: SUB2 = |di − dl| + |dj − dk|
8: SUB = min(SUB0, SUB1, SUB2)
9: if SUB == SUB1 then
10: A' ← A (Remove eij and ekl from A' and add eik and ejl to A')
11: if NetworkFullConnected() and R(A') ≥ R(A) then
12: A ← A'
13: end if
14: else if SUB == SUB2 then
15: A' ← A (Remove eij and ekl from A' and add eil and ejk to A')
16: if NetworkFullConnected() and R(A') ≥ R(A) then
17: A ← A'
18: end if
19: end if
20: Mark this pair of edges (eij and ekl)
21: end for
22: end procedure
Algorithm 2 describes the process of the degree difference operation, which is executed after the modelling of the scale-free network topology in WSNs. The variables used in the algorithm are the adjacency matrix A, the edge set E, the number of nodes N, and the communication range r; A' denotes the adjacency matrix after a trial swap.
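For clarity, the decision at the heart of Algorithm 2 can be written compactly as below. The sketch only shows how the three degree-difference sums pick which alternative wiring to attempt; committing a swap still requires the connectivity and R checks of the listing above, and the helper is not part of the project code.

#include <algorithm>
#include <cstdlib>

// Decide which connection method minimises the total degree difference for a
// pair of independent edges e_ij and e_kl, given the four node degrees.
// Returns 0 (keep e_ij, e_kl), 1 (rewire to e_ik, e_jl) or 2 (rewire to e_il, e_jk).
int degreeDifferenceChoice(int di, int dj, int dk, int dl) {
    int sub0 = std::abs(di - dj) + std::abs(dk - dl);   // original wiring
    int sub1 = std::abs(di - dk) + std::abs(dj - dl);   // e_ik and e_jl
    int sub2 = std::abs(di - dl) + std::abs(dj - dk);   // e_il and e_jk
    int best = std::min(sub0, std::min(sub1, sub2));
    if (best == sub0) return 0;        // the original connection is already best
    return (best == sub1) ? 1 : 2;     // otherwise pick the better alternative
}

Pairing nodes with similar degrees is what pushes the topology towards the onion-like structure described in Section 3.2.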
Input: A, E, N, r
Output: A
1: procedure ANGLESUMOPERATION(A)
2: for each unmarked pair of independent edges do
3: Randomly select a pair of independent edges eij and ekl
4: Compute the angle sums SUM0, SUM1 and SUM2 of the three connection methods
5: SUM = max(SUM0, SUM1, SUM2)
6: if SUM == SUM1 then
7: A1 ← A (Remove eij and ekl , add eik and ejl )
8: if NetworkFullConnected() and R(A1) ≥ R(A) then
9: A ← A1
10: end if
11: else if SUM == SUM2 then
12: A2 ← A (Remove eij and ekl , add eil and ejk )
13: if NetworkFullConnected() and R(A2) ≥ R(A) then
14: A ← A2
15: end if
16: end if
17: Mark this pair of edges (eij and ekl )
18: end for
19: end procedure
Algorithm 3 describes the process of the angle sum operation, which is executed after the degree difference operation. The variables used in the algorithm are as follows.
• A1, A2: the adjacency matrices after a swap operation based on the sum of surrounding angles.
A pair of independent edges eij and ekl is randomly selected (Lines 2 to 3). Then, the sum of the surrounding angles is calculated for each of the three connection methods, and the connection method with the maximum angle sum is selected (Lines 4 to 5). If it is the initial connection method, the swapping operation is skipped and the next round of selecting edges begins; otherwise, the adjacency matrix A is modified to A1 or A2 based on the value of SUM. If the modification keeps the network topology connected and does not reduce the value of R, it is accepted (Lines 6 to 16). Otherwise, the modification is discarded, A is restored, and the algorithm begins the next round of selecting edges. This process continues until all the pairs of edges in the adjacency matrix have been examined.
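The control flow of Algorithm 3 can be sketched as follows. How the "sum of surrounding angles" of a wiring is computed is deliberately left behind an angleSum() placeholder, because this sketch only illustrates the selection and acceptance logic (choose the wiring with the maximum angle sum, skip the swap if the original wiring already wins, otherwise accept it only when connectivity is preserved and R does not decrease); none of the declared helpers are taken from the project code.

#include <vector>

using Graph = std::vector<std::vector<int>>;

// Placeholders standing in for project-specific routines.
double angleSum(const Graph& g, int a, int b, int c, int d);   // angle sum of wiring {e_ab, e_cd}
double robustnessR(Graph g);                                   // e.g. the earlier sketch
bool   fullyConnected(const Graph& g);
void   rewire(Graph& g, int i, int j, int k, int l, int method);   // apply wiring 1 or 2
void   restore(Graph& g, int i, int j, int k, int l, int method);  // undo that wiring

// One step of the angle sum operation on the independent edge pair {e_ij, e_kl}.
bool angleSumStep(Graph& g, int i, int j, int k, int l) {
    double sum0 = angleSum(g, i, j, k, l);   // keep e_ij and e_kl
    double sum1 = angleSum(g, i, k, j, l);   // rewire to e_ik and e_jl
    double sum2 = angleSum(g, i, l, j, k);   // rewire to e_il and e_jk
    int method = 0;
    double best = sum0;
    if (sum1 > best) { best = sum1; method = 1; }
    if (sum2 > best) { best = sum2; method = 2; }
    if (method == 0) return false;           // original wiring already has the maximum sum
    double before = robustnessR(g);
    rewire(g, i, j, k, l, method);
    if (fullyConnected(g) && robustnessR(g) >= before) return true;   // accept
    restore(g, i, j, k, l, method);          // otherwise discard the modification
    return false;
}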
3.4 Network Simulator
The network simulator is a discrete-event, packet-level simulator. It covers a very large number of applications, protocols, network types, network elements and traffic models. The network simulator is a package of tools that simulates the behaviour of networks: creating network topologies, logging events that happen under any load, analysing those events and understanding the network. The main aim of our first experiment is to learn how to use the network simulator, to get acquainted with the simulated objects, to understand the operation of a network simulation, and to analyse the behaviour of the simulated objects.
Some network simulators are commercial, which means that they do not provide the source code of the software or the affiliated packages to general users for free. All users have to pay for a license to use the software, or pay to order specific packages for their own specific usage requirements. One typical example is OPNET. A commercial simulator has its advantages and disadvantages. The advantage is that it generally has complete and up-to-date documentation, which can be consistently maintained by specialized staff in the company. The open source network simulators are at a disadvantage in this respect, as generally there are not enough specialized people working on the documentation. This problem can become serious when different versions appear with many new features, and it becomes difficult to trace or understand the previous code without appropriate documentation.
Table 3.1: Network Simulator Types (columns: Type, Name)
On the contrary, an open source network simulator has the advantage that everything is very open and anyone or any organization can contribute to it and find bugs in it. The interface is also open for future improvement. It can also be very flexible and can reflect the most recent developments of new technologies faster than commercial network simulators. We can see that some advantages of commercial network simulators are, conversely, disadvantages for open source network simulators. Lack of systematic and complete documentation and lack of version control support can lead to serious problems and can limit the applicability and lifetime of open source network simulators. Typical open source network simulators include NS2 and NS3.
Currently there is a great variety of network simulators, ranging from simple ones to complex ones. Minimally, a network simulator should enable users to represent a network topology: defining the scenarios, specifying the nodes on the network, the links between those nodes and the traffic between the nodes. More complicated systems may allow the user to specify everything about the protocols used to process network traffic. Graphical applications allow users to easily visualize the workings of their simulated environment. Some simulators are text-based and provide a less visual or intuitive interface, but may allow more advanced forms of customization. Others are programming-oriented and provide a programming framework that allows the user to create an application that simulates the networking environment for testing.
Backend Environment of Network Simulator
Network Simulator is mainly based on two languages: C++ and OTcl. OTcl is the object-oriented version of the Tool Command Language. The network simulator is a bank of different network and protocol objects. C++ implements the internals of the simulator objects, which makes per-packet processing efficient, while OTcl scripts are used to set up and configure the simulation.
Before we get into the program we should consider the following things.
Initialization
set ns [new Simulator]
From the above command we see that a variable ns is initialized by using the set command. The code [new Simulator] is an instantiation of the class Simulator, which uses the reserved word 'new'. So we can call all the methods present inside the class Simulator by using the variable ns.
set tracefile1 [open out.tr w]
$ns trace-all $tracefile1
set namfile1 [open out.nam w]
$ns namtrace-all $namfile1
In the above we create an output trace file out.tr and a nam visualization file out.nam. In the Tcl script they are not referred to by their file names but by the pointers initialized for them, tracefile1 and namfile1 respectively. A line which starts with '#' is a comment. The line that opens the file 'out.tr' declares it with 'w', meaning it is opened for writing. The simulator method trace-all makes the simulator trace all the events in a particular format.
proc finish {} {
global ns tracefile1 namfile1
$ns flush-trace
close $tracefile1
close $namfile1
exec nam out.nam &
exit 0
}
In the above, the word 'proc' is used to declare a procedure called 'finish'. The word 'global' is used to tell which variables are being used outside the procedure. 'flush-trace' is a simulator method that dumps the traces into the respective files. The command 'close' is used to close the trace files and the command 'exec' is used to execute the nam visualization. The command 'exit' closes the application and returns 0, as zero (0) is the default value for a clean exit.
Thus the entire operation ends at 125 seconds. To begin the simulation we will use the
command
$ns run
set n0 [$ns node]
set n2 [$ns node]
In the above we create a node that is pointed to by the variable n0. When referring to the node in the script we use $n0. Similarly we create another node n2. Now we will set up a link between the two nodes.
So we are creating a bi-directional link between n0 and n2 with a capacity of
10Mb/sec and a propagation delay of 10ms.
So now we will define the buffer capacity of the queue related to the above link
#create nodes
$ns simplex-link $n3 $n2 0.3Mb 100ms DropTail
TCP
Now we will show how to set up tcp connection between two nodes
The command 'set tcp [new Agent/TCP]' gives a pointer called 'tcp' which indicates the TCP agent, which is an object of ns. Then the command '$ns attach-agent $n0 $tcp' defines the source node of the TCP connection. Next, the command 'set sink [new Agent/TCPSink]' defines the destination of the TCP connection by a pointer called sink. The next command '$ns attach-agent $n4 $sink' defines the destination node as n4. Next, the command '$ns connect $tcp $sink' makes the TCP connection between the source and the destination, i.e., n0 and n4. We may have several flows, such as TCP and UDP, in a network; to identify these flows we mark them by using the command '$tcp set fid_1'. In the last line we set the packet size of TCP to 552, while the default packet size of TCP is 1000.
Here we will learn how to run an FTP connection over TCP.
The command 'set ftp [new Application/FTP]' gives a pointer called 'ftp' which indicates the FTP application. Next, we attach the ftp application to the tcp agent, as FTP uses the services of TCP.
UDP
The User Datagram Protocol is one of the main protocols of the Internet protocol suite. UDP lets a host send messages, in the form of datagrams, to another host on an Internet Protocol network without any requirement for setting up a transmission channel. UDP provides an unreliable service, and datagrams may arrive out of order, appear duplicated, or go missing without notice. UDP assumes that error checking and correction are either not necessary or performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which may not be an option in a real-time system.
Similarly, the command 'set udp [new Agent/UDP]' gives a pointer called 'udp' which indicates the UDP agent, which is an object of ns. Then the command '$ns attach-agent $n1 $udp' defines the source node of the UDP connection. Next, the command 'set null [new Agent/Null]' defines the destination of the UDP connection by a pointer called null. The next command '$ns attach-agent $n5 $null' defines the destination node as n5. Next, the command '$ns connect $udp $null' makes the UDP connection between the source and the destination, i.e., n1 and n5. When we have several flows, such as TCP and UDP, in a network, we identify these flows by marking them with the command '$udp set fid_2'.
Constant Bit Rate (CBR) is a term used in telecommunications relating to quality of service. When referring to codecs, constant bit rate encoding means that the rate at which a codec's output data should be consumed is constant. CBR is useful for streaming multimedia content on limited-capacity channels, since it is the maximum bit rate that matters, not the average, so CBR would be used to take advantage of all of the capacity. CBR would not be the optimal choice for storage, as it would not allocate enough data for complex sections (resulting in degraded quality) while wasting data on simple sections.
In the above we define a CBR connection over a UDP one. We have already defined the UDP source and UDP agent in the same way as for TCP. The sending rate is defined by the command '$cbr set rate_ 0.01Mb' (equivalently, the time interval between the transmission of packets can be specified). Next, with the help of the command '$cbr set random_ false' we can control the random noise in the CBR traffic: we keep it off by setting it to 'false', or we turn the noise on with the command '$cbr set random_ 1'. We can set the packet size by using the command '$cbr set packetSize_ <packetsize>', where the packet size is given in bytes.
Scheduling Events
In ns the Tcl script defines how to schedule the events, or in other words at what time each event will occur and stop. This can be done using the command $ns at.
$ns at 1.0 "ftp start"
When we run the above program in ns we can visualize the network in NAM. Instead of giving random positions to the nodes, we can give suitable initial positions to the nodes and form a suitable topology. So, in our program we can give positions to the nodes in NAM in the following way.
We can also define the colour of the cbr and tcp packets for identification in NAM. For this we use the following command.
NAM is a Tcl/TK based animation tool for viewing network simulation traces and real-world packet traces. It supports topology layout, packet-level animation, and various data inspection tools. It has a graphical interface which can provide information such as the number of packet drops at each link. The network animator NAM began in 1990 as a simple tool for animating packet trace data. NAM began at LBL and has evolved substantially over the past few years. The NAM development effort was an ongoing collaboration with the VINT project. Currently, it is being developed as an open source project hosted at SourceForge.
The trace data is typically derived as output from a network simulator like ns or from real network measurements, e.g., using tcpdump. NAM is started with the command
'nam <nam-file>'
where '<nam-file>' is the name of a NAM trace file that was generated by NS, or one can execute it directly out of the Tcl simulation script for visualization of node movement.
Fig 3.2: NAM window
We can use NAM in a network simulation by creating a nam trace file and then executing the nam trace file from the Tcl script.
Syntax:
set nf [open trace.nam w]
$ns namtrace-all $nf
This means we are opening a new nam trace file named "trace.nam" and telling the simulator that the data must be stored in nam format. "nf" is the file handler that we use here to handle the trace file. "w" means write, i.e., the file trace.nam is opened for writing. The second line tells the simulator to trace each packet on every link in the topology, and for that we give the file handler nf to the simulator ns.
proc finish {} {
global ns nf
$ns flush-trace
close $nf
exec nam trace.nam &
exit 0
}
Here the trace data is flushed into the file by using the command $ns flush-trace and then the file is closed. Then, the execution of the nam file is done by using the command 'exec nam trace.nam &'.
Fig 3.3: Animated network in NAM
set x 500
set y 500
set n 30
#at end of the program
proc stop {} {
global ns f0 namtracefd
$ns flush-trace
close $f0
close $namtracefd
exec nam wir7.nam &
exit 0
}
#Run the simulation
$ns run
The Extended Nam Editor is an editor that allows the graphical creation of ns2 scripts. It extends the basic Nam Editor with features such as the instantiation of agents of any type on all the nodes of a given node set. The Extended Nam Editor works under the Linux operating system; it is available for download via the links given at the bottom of the page.
The manual generation of complex network topologies is a tedious and error-prone activity. In order to simulate networks with realistic topologies, it is common practice to use ad-hoc topology generators, whose output is usually not compatible with the ns2 syntax. Hence, several tools have been developed to translate topology descriptions generated by topology generators into ns scripts that can be used in the definition of a simulation scenario. Unfortunately, scripts produced in this way are not compatible with the Nam Editor, hence networks created by common topology generators cannot be modified interactively. Such a limitation is sometimes annoying, in particular when the automatically generated topology needs to be further adapted, e.g. by instantiating agents on particular network nodes.
3.5 Testing
Implementation and Testing:
Implementation is one of the most important tasks in a project; it is the phase in which one has to be cautious, because all the effort undertaken during the project depends on it. Implementation is the most crucial stage in achieving a successful system and giving the users confidence that the new system is workable and effective. Each program was tested individually at the time of development using sample data, and it was verified that these programs link together in the way specified in the program specification. The computer system and its environment are tested to the satisfaction of the user.
Implementation
The implementation phase is less creative than system design. It is primarily concerned with user training and file conversion. The system may require extensive user training. The initial parameters of the system may have to be modified as a result of programming. A simple operating procedure is provided so that the user can understand the different functions clearly and quickly. The different reports can be obtained on either an inkjet or a dot matrix printer, whichever is available at the disposal of the user. The proposed system is very easy to implement. In general, implementation means the process of converting a new or revised system design into an operational one.
3.6 Code
//#include <ip.h>
#include <aodv/aodv.h>
#include <aodv/aodv_packet.h>
#include <random.h>
#include <cmu-trace.h>
//#include <energy-model.h>
#define max(a,b) ( (a) > (b) ? (a) : (b) )
#define CURRENT_TIME Scheduler::instance().clock()
//#define DEBUG
//#define ERROR
#ifdef DEBUG
static int route_request = 0;
#endif
/*
TCL Hooks
*/
int hdr_aodv::offset_;
static class AODVHeaderClass : public PacketHeaderClass {
public:
AODVHeaderClass() : PacketHeaderClass("PacketHeader/AODV",
sizeof(hdr_all_aodv)) {
bind_offset(&hdr_aodv::offset_);
}
} class_rtProtoAODV_hdr;
int
AODV::command(int argc, const char*const* argv) {
if(argc == 2) {
Tcl& tcl = Tcl::instance();
if(strncasecmp(argv[1], "id", 2) == 0) {
tcl.resultf("%d", index);
return TCL_OK;
}
if(strncasecmp(argv[1], "start", 2) == 0) {
btimer.handle((Event*) 0);
#ifndef AODV_LINK_LAYER_DETECTION
htimer.handle((Event*) 0);
ntimer.handle((Event*) 0);
#endif // LINK LAYER DETECTION
rtimer.handle((Event*) 0);
return TCL_OK;
}
}
else if(argc == 3) {
if(strcmp(argv[1], "index") == 0) {
index = atoi(argv[2]);
return TCL_OK;
}
if(ifqueue == 0)
return TCL_ERROR;
return TCL_OK;
}
else if (strcmp(argv[1], "port-dmux") == 0) {
dmux_ = (PortClassifier *)TclObject::lookup(argv[2]);
if (dmux_ == 0) {
fprintf (stderr, "%s: %s lookup of %s failed\n", __FILE__,
argv[1], argv[2]);
return TCL_ERROR;
}
return TCL_OK;
}
}
return Agent::command(argc, argv);
}
/*
Constructor
*/
AODV::AODV(nsaddr_t id) : Agent(PT_AODV),
btimer(this), htimer(this), ntimer(this),
rtimer(this), lrtimer(this), rqueue() {
index = id;
seqno = 2;
bid = 1;
LIST_INIT(&nbhead);
LIST_INIT(&bihead);
logtarget = 0;
ifqueue = 0;
}
/*
Timers
*/
void
BroadcastTimer::handle(Event*) {
agent->id_purge();
Scheduler::instance().schedule(this, &intr, BCAST_ID_SAVE);
}
void
HelloTimer::handle(Event*) {
agent->sendHello();
double interval = MinHelloInterval +
((MaxHelloInterval - MinHelloInterval) * Random::uniform());
assert(interval >= 0);
Scheduler::instance().schedule(this, &intr, interval);
}
void
NeighborTimer::handle(Event*) {
agent->nb_purge();
Scheduler::instance().schedule(this, &intr, HELLO_INTERVAL);
}
void
RouteCacheTimer::handle(Event*) {
agent->rt_purge();
#define FREQUENCY 0.5 // sec
Scheduler::instance().schedule(this, &intr, FREQUENCY);
}
void
LocalRepairTimer::handle(Event* p) { // SRD: 5/4/99
aodv_rt_entry *rt;
struct hdr_ip *ih = HDR_IP( (Packet *)p);
/* fprintf(stderr, "%s\n", __FUNCTION__); */
rt = agent->rtable.rt_lookup(ih->daddr());
//rt->rt_seqno++;
agent->rt_down(rt);
// send RERR
#ifdef DEBUG
fprintf(stderr,"Dst - %d, failed local repair\n", rt->rt_dst);
#endif
}
Packet::free((Packet *)p);
}
/*
Broadcast ID Management Functions
*/
void
AODV::id_insert(nsaddr_t id, u_int32_t bid) {
BroadcastID *b = new BroadcastID(id, bid);
assert(b);
b->expire = CURRENT_TIME + BCAST_ID_SAVE;
LIST_INSERT_HEAD(&bihead, b, link);
}
/* SRD */
bool
AODV::id_lookup(nsaddr_t id, u_int32_t bid) {
BroadcastID *b = bihead.lh_first;
void
AODV::id_purge() {
BroadcastID *b = bihead.lh_first;
BroadcastID *bn;
double now = CURRENT_TIME;
for(; b; b = bn) {
bn = b->link.le_next;
if(b->expire <= now) {
LIST_REMOVE(b,link);
delete b;
}
}
}
/*
Helper Functions
*/
double
AODV::PerHopTime(aodv_rt_entry *rt) {
int num_non_zero = 0, i;
double total_latency = 0.0;
if (!rt)
return ((double) NODE_TRAVERSAL_TIME );
/*
Link Failure Management Functions
*/
static void
aodv_rt_failed_callback(Packet *p, void *arg) {
((AODV*) arg)->rt_ll_failed(p);
}
/*
* This routine is invoked when the link-layer reports a route failed.
*/
void
AODV::rt_ll_failed(Packet *p) {
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
aodv_rt_entry *rt;
nsaddr_t broken_nbr = ch->next_hop_;
#ifndef AODV_LINK_LAYER_DETECTION
drop(p, DROP_RTR_MAC_CALLBACK);
#else
/*
* Non-data packets and Broadcast Packets can be dropped.
*/
if(! DATA_PACKET(ch->ptype()) ||
(u_int32_t) ih->daddr() == IP_BROADCAST) {
drop(p, DROP_RTR_MAC_CALLBACK);
return;
}
log_link_broke(p);
if((rt = rtable.rt_lookup(ih->daddr())) == 0) {
drop(p, DROP_RTR_MAC_CALLBACK);
return;
}
log_link_del(ch->next_hop_);
#ifdef AODV_LOCAL_REPAIR
/* if the broken link is closer to the dest than source,
attempt a local repair. Otherwise, bring down the route. */
{
drop(p, DROP_RTR_MAC_CALLBACK);
// Do the same thing for other packets in the interface queue using the
// broken link -Mahesh
while((p = ifqueue->filter(broken_nbr))) {
drop(p, DROP_RTR_MAC_CALLBACK);
}
nb_delete(broken_nbr);
}
void
AODV::handle_link_failure(nsaddr_t id) {
aodv_rt_entry *rt, *rtn;
Packet *rerr = Packet::alloc();
struct hdr_aodv_error *re = HDR_AODV_ERROR(rerr);
re->DestCount = 0;
for(rt = rtable.head(); rt; rt = rtn) { // for each rt entry
rtn = rt->rt_link.le_next;
if ((rt->rt_hops != INFINITY2) && (rt->rt_nexthop == id) ) {
assert (rt->rt_flags == RTF_UP);
assert((rt->rt_seqno%2) == 0);
rt->rt_seqno++;
re->unreachable_dst[re->DestCount] = rt->rt_dst;
re->unreachable_dst_seqno[re->DestCount] = rt->rt_seqno;
#ifdef DEBUG
fprintf(stderr, "%s(%f): %d\t(%d\t%u\t%d)\n", __FUNCTION__, CURRENT_TIME,
index, re->unreachable_dst[re->DestCount],
re->unreachable_dst_seqno[re->DestCount], rt->rt_nexthop);
#endif // DEBUG
re->DestCount += 1;
rt_down(rt);
}
// remove the lost neighbor from all the precursor lists
rt->pc_delete(id);
}
if (re->DestCount > 0) {
#ifdef DEBUG
fprintf(stderr, "%s(%f): %d\tsending RERR...\n", __FUNCTION__, CURRENT_TIME,
index);
#endif // DEBUG
sendError(rerr, false);
}
else {
Packet::free(rerr);
}
}
void
AODV::local_rt_repair(aodv_rt_entry *rt, Packet *p) {
#ifdef DEBUG
fprintf(stderr,"%s: Dst - %d\n", __FUNCTION__, rt->rt_dst);
#endif
// Buffer the packet
rqueue.enque(p);
sendRequest(rt->rt_dst);
void
AODV::rt_update(aodv_rt_entry *rt, u_int32_t seqnum, u_int16_t metric,
nsaddr_t nexthop, double expire_time) {
rt->rt_seqno = seqnum;
rt->rt_hops = metric;
rt->rt_flags = RTF_UP;
rt->rt_nexthop = nexthop;
rt->rt_expire = expire_time;
}
void
AODV::rt_down(aodv_rt_entry *rt) {
/*
* Make sure that you don't "down" a route more than once.
*/
if(rt->rt_flags == RTF_DOWN) {
return;
}
} /* rt_down function */
/*
Route Handling Functions
*/
void
AODV::rt_resolve(Packet *p) {
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
aodv_rt_entry *rt;
/*
* Set the transmit failure callback. That
* won't change.
*/
ch->xmit_failure_ = aodv_rt_failed_callback;
ch->xmit_failure_data_ = (void*) this;
rt = rtable.rt_lookup(ih->daddr());
if(rt == 0) {
rt = rtable.rt_add(ih->daddr());
}
/*
* If the route is up, forward the packet
*/
if(rt->rt_flags == RTF_UP) {
assert(rt->rt_hops != INFINITY2);
forward(rt, p, NO_DELAY);
}
/*
* if I am the source of the packet, then do a Route Request.
*/
else if(ih->saddr() == index) {
rqueue.enque(p);
sendRequest(rt->rt_dst);
}
/*
* A local repair is in progress. Buffer the packet.
*/
else if (rt->rt_flags == RTF_IN_REPAIR) {
rqueue.enque(p);
}
/*
* I am trying to forward a packet for someone else to which
* I don't have a route.
*/
else {
Packet *rerr = Packet::alloc();
struct hdr_aodv_error *re = HDR_AODV_ERROR(rerr);
/*
* For now, drop the packet and send error upstream.
* Now the route errors are broadcast to upstream
* neighbors - Mahesh 09/11/99
*/
re->unreachable_dst_seqno[re->DestCount] = rt->rt_seqno;
re->DestCount += 1;
#ifdef DEBUG
fprintf(stderr, "%s: sending RERR...\n", __FUNCTION__);
#endif
sendError(rerr, false);
drop(p, DROP_RTR_NO_ROUTE);
}
void
AODV::rt_purge() {
aodv_rt_entry *rt, *rtn;
double now = CURRENT_TIME;
double delay = 0.0;
Packet *p;
// the sendbuffer, then send out route request. sendRequest
// will check whether it is time to really send out request
// or not.
// This may not be crucial to do it here, as each generated
// packet will do a sendRequest anyway.
sendRequest(rt->rt_dst);
}
/*
Packet Reception Routines
*/
void
AODV::recv(Packet *p, Handler*) {
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
assert(initialized());
//assert(p->incoming == 0);
// XXXXX NOTE: use of incoming flag has been deprecated; in order to track direction of
// pkt flow, direction_ in hdr_cmn is used instead. See packet.h for details.
if(ch->ptype() == PT_AODV) {
ih->ttl_ -= 1;
recvAODV(p);
return;
}
/*
* Must be a packet I'm originating...
*/
if((ih->saddr() == index) && (ch->num_forwards() == 0)) {
/*
* Add the IP Header.
* TCP adds the IP header too, so to avoid setting it twice, we check if
* this packet is not a TCP or ACK segment.
*/
if (ch->ptype() != PT_TCP && ch->ptype() != PT_ACK) {
ch->size() += IP_HDR_LEN;
}
// Added by Parag Dadhania && John Novatnack to handle broadcasting
if ( (u_int32_t)ih->daddr() != IP_BROADCAST) {
ih->ttl_ = NETWORK_DIAMETER;
}
}
/*
* I received a packet that I sent. Probably
* a routing loop.
*/
else if(ih->saddr() == index) {
drop(p, DROP_RTR_ROUTE_LOOP);
return;
}
/*
* Packet I'm forwarding...
*/
else {
/*
* Check the TTL. If it is zero, then discard.
*/
if(--ih->ttl_ == 0) {
drop(p, DROP_RTR_TTL);
return;
}
}
// Added by Parag Dadhania && John Novatnack to handle broadcasting
if ( (u_int32_t)ih->daddr() != IP_BROADCAST)
rt_resolve(p);
else
forward((aodv_rt_entry*) 0, p, NO_DELAY);
}
void
AODV::recvAODV(Packet *p) {
struct hdr_aodv *ah = HDR_AODV(p);
/*
* Incoming Packets.
*/
switch(ah->ah_type) {
case AODVTYPE_RREQ:
recvRequest(p);
break;
case AODVTYPE_RREP:
recvReply(p);
break;
case AODVTYPE_RERR:
recvError(p);
break;
case AODVTYPE_HELLO:
recvHello(p);
break;
default:
fprintf(stderr, "Invalid AODV type (%x)\n", ah->ah_type);
exit(1);
}
void
AODV::recvRequest(Packet *p) {
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_request *rq = HDR_AODV_REQUEST(p);
aodv_rt_entry *rt;
/*
* Drop if:
* - I'm the source
* - I recently heard this request.
*/
if(rq->rq_src == index) {
#ifdef DEBUG
fprintf(stderr, "%s: got my own REQUEST\n", __FUNCTION__);
#endif // DEBUG
Packet::free(p);
return;
}
if (id_lookup(rq->rq_src, rq->rq_bcast_id)) {
#ifdef DEBUG
fprintf(stderr, "%s: discarding request\n", __FUNCTION__);
#endif // DEBUG
Packet::free(p);
return;
}
/*
* Cache the broadcast ID
*/
id_insert(rq->rq_src, rq->rq_bcast_id);
/*
* We are either going to forward the REQUEST or generate a
* REPLY. Before we do anything, we make sure that the REVERSE
* route is in the route table.
*/
aodv_rt_entry *rt0; // rt0 is the reverse route
rt0 = rtable.rt_lookup(rq->rq_src);
if(rt0 == 0) { /* if not in the route table */
// create an entry for the reverse route.
rt0 = rtable.rt_add(rq->rq_src);
}
/* Find out whether any buffered packet can benefit from the
* reverse route.
* May need some change in the following code - Mahesh 09/11/99
*/
assert (rt0->rt_flags == RTF_UP);
Packet *buffered_pkt;
while ((buffered_pkt = rqueue.deque(rt0->rt_dst))) {
if (rt0 && (rt0->rt_flags == RTF_UP)) {
assert(rt0->rt_hops != INFINITY2);
forward(rt0, buffered_pkt, NO_DELAY);
}
}
}
// End for putting reverse route in rt table
/*
* We have taken care of the reverse route stuff.
* Now see whether we can send a route reply.
*/
rt = rtable.rt_lookup(rq->rq_dst);
if(rq->rq_dst == index) {
#ifdef DEBUG
fprintf(stderr, "%d - %s: destination sending reply\n",
index, __FUNCTION__);
#endif // DEBUG
sendReply(rq->rq_src, // IP Destination
1, // Hop Count
index, // Dest IP Address
seqno, // Dest Sequence Num
MY_ROUTE_TIMEOUT, // Lifetime
rq->rq_timestamp); // timestamp
Packet::free(p);
}
#ifdef RREQ_GRAT_RREP
sendReply(rq->rq_dst,
rq->rq_hop_count,
rq->rq_src,
rq->rq_src_seqno,
(u_int32_t) (rt->rt_expire - CURRENT_TIME),
// rt->rt_expire - CURRENT_TIME,
rq->rq_timestamp);
#endif
// TODO: send grat RREP to dst if G flag set in RREQ using rq->rq_src_seqno,
// rq->rq_hop_count
// DONE: Included gratuitous replies to be sent as per IETF aodv draft specification.
// As of now, G flag has not been dynamically used and is always set or reset in
// aodv-packet.h --- Anant Utgikar, 09/16/02.
Packet::free(p);
}
/*
* Can't reply. So forward the Route Request
*/
else {
ih->saddr() = index;
ih->daddr() = IP_BROADCAST;
rq->rq_hop_count += 1;
// Maximum sequence number seen en route
if (rt) rq->rq_dst_seqno = max(rt->rt_seqno, rq->rq_dst_seqno);
forward((aodv_rt_entry*) 0, p, DELAY);
}
void
AODV::recvReply(Packet *p) {
//struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_reply *rp = HDR_AODV_REPLY(p);
aodv_rt_entry *rt;
char suppress_reply = 0;
double delay = 0.0;
#ifdef DEBUG
fprintf(stderr, "%d - %s: received a REPLY\n", index, __FUNCTION__);
#endif // DEBUG
/*
* Got a reply. So reset the "soft state" maintained for
* route requests in the request table. We don't really have
* have a separate request table. It is just a part of the
* routing table itself.
*/
// Note that rp_dst is the dest of the data packets, not the
// the dest of the reply, which is the src of the data packets.
rt = rtable.rt_lookup(rp->rp_dst);
/*
* If I don't have a rt entry to this host... adding
*/
if(rt == 0) {
rt = rtable.rt_add(rp->rp_dst);
}
/*
* Add a forward route table entry... here I am following
* Perkins-Royer AODV paper almost literally - SRD 5/99
*/
/*
* Send all packets queued in the sendbuffer destined for
* this destination.
* XXX - observe the "second" use of p.
*/
Packet *buf_pkt;
while((buf_pkt = rqueue.deque(rt->rt_dst))) {
if(rt->rt_hops != INFINITY2) {
assert (rt->rt_flags == RTF_UP);
// Delay them a little to help ARP. Otherwise ARP
// may drop packets. -SRD 5/23/99
forward(rt, buf_pkt, delay);
delay += ARP_DELAY;
}
}
}
else {
suppress_reply = 1;
}
/*
* If reply is for me, discard it.
*/
}
else {
// I don't know how to forward .. drop the reply.
#ifdef DEBUG
fprintf(stderr, "%s: dropping Route Reply\n", __FUNCTION__);
#endif // DEBUG
drop(p, DROP_RTR_NO_ROUTE);
}
}
}
void
AODV::recvError(Packet *p) {
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_error *re = HDR_AODV_ERROR(p);
aodv_rt_entry *rt;
u_int8_t i;
Packet *rerr = Packet::alloc();
struct hdr_aodv_error *nre = HDR_AODV_ERROR(rerr);
nre->DestCount = 0;
// if precursor list non-empty add to RERR and delete the precursor list
if (!rt->pc_empty()) {
nre->unreachable_dst[nre->DestCount] = rt->rt_dst;
nre->unreachable_dst_seqno[nre->DestCount] = rt->rt_seqno;
nre->DestCount += 1;
rt->pc_delete();
}
}
}
if (nre->DestCount > 0) {
#ifdef DEBUG
fprintf(stderr, "%s(%f): %d\t sending RERR...\n", __FUNCTION__, CURRENT_TIME,
index);
#endif // DEBUG
sendError(rerr);
}
else {
Packet::free(rerr);
}
Packet::free(p);
}
/*
Packet Transmission Routines
*/
void
AODV::forward(aodv_rt_entry *rt, Packet *p, double delay) {
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
if(ih->ttl_ == 0) {
#ifdef DEBUG
fprintf(stderr, "%s: calling drop()\n", __PRETTY_FUNCTION__);
#endif // DEBUG
drop(p, DROP_RTR_TTL);
return;
}
if (rt) {
assert(rt->rt_flags == RTF_UP);
rt->rt_expire = CURRENT_TIME + ACTIVE_ROUTE_TIMEOUT;
ch->next_hop_ = rt->rt_nexthop;
ch->addr_type() = NS_AF_INET;
ch->direction() = hdr_cmn::DOWN; //important: change the packet's direction
}
else { // if it is a broadcast packet
// assert(ch->ptype() == PT_AODV); // maybe a diff pkt type like gaf
assert(ih->daddr() == (nsaddr_t) IP_BROADCAST);
ch->addr_type() = NS_AF_NONE;
ch->direction() = hdr_cmn::DOWN; //important: change the packet's direction
}
if (ih->daddr() == (nsaddr_t) IP_BROADCAST) {
// If it is a broadcast packet
assert(rt == 0);
if (ch->ptype() == PT_AODV) {
/*
* Jitter the sending of AODV broadcast packets by 10ms
*/
Scheduler::instance().schedule(target_, p,
0.01 * Random::uniform());
} else {
Scheduler::instance().schedule(target_, p, 0.); // No jitter
}
}
else { // Not a broadcast packet
if(delay > 0.0) {
Scheduler::instance().schedule(target_, p, delay);
}
else {
// Not a broadcast packet, no delay, send immediately
Scheduler::instance().schedule(target_, p, 0.);
}
}
void
AODV::sendRequest(nsaddr_t dst) {
// Allocate a RREQ packet
Packet *p = Packet::alloc();
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_request *rq = HDR_AODV_REQUEST(p);
aodv_rt_entry *rt = rtable.rt_lookup(dst);
assert(rt);
/*
* Rate limit sending of Route Requests. We are very conservative
* about sending out route requests.
*/
if (rt->rt_flags == RTF_UP) {
assert(rt->rt_hops != INFINITY2);
Packet::free((Packet *)p);
return;
}
Packet::free((Packet *)p);
return;
}
#ifdef DEBUG
fprintf(stderr, "(%2d) - %2d sending Route Request, dst: %d\n",
++route_request, index, rt->rt_dst);
#endif // DEBUG
rt->rt_req_last_ttl = max(rt->rt_req_last_ttl,rt->rt_last_hop_count);
if (0 == rt->rt_req_last_ttl) {
// first time query broadcast
ih->ttl_ = TTL_START;
}
else {
// Expanding ring search.
if (rt->rt_req_last_ttl < TTL_THRESHOLD)
ih->ttl_ = rt->rt_req_last_ttl + TTL_INCREMENT;
else {
// network-wide broadcast
ih->ttl_ = NETWORK_DIAMETER;
rt->rt_req_cnt += 1;
}
}
// done network wide broadcast before.
#ifdef DEBUG
fprintf(stderr, "(%2d) - %2d sending Route Request, dst: %d, tout %f ms\n",
++route_request,
index, rt->rt_dst,
rt->rt_req_timeout - CURRENT_TIME);
#endif // DEBUG
ih->saddr() = index;
ih->daddr() = IP_BROADCAST;
ih->sport() = RT_PORT;
ih->dport() = RT_PORT;
Scheduler::instance().schedule(target_, p, 0.);
void
AODV::sendReply(nsaddr_t ipdst, u_int32_t hop_count, nsaddr_t rpdst,
u_int32_t rpseq, u_int32_t lifetime, double timestamp) {
Packet *p = Packet::alloc();
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_reply *rp = HDR_AODV_REPLY(p);
aodv_rt_entry *rt = rtable.rt_lookup(ipdst);
#ifdef DEBUG
fprintf(stderr, "sending Reply from %d at %.2f\n", index, Scheduler::instance().clock());
#endif // DEBUG
assert(rt);
rp->rp_type = AODVTYPE_RREP;
//rp->rp_flags = 0x00;
rp->rp_hop_count = hop_count;
rp->rp_dst = rpdst;
rp->rp_dst_seqno = rpseq;
rp->rp_src = index;
rp->rp_lifetime = lifetime;
rp->rp_timestamp = timestamp;
// ch->uid() = 0;
ch->ptype() = PT_AODV;
ch->size() = IP_HDR_LEN + rp->size();
ch->iface() = -2;
ch->error() = 0;
ch->addr_type() = NS_AF_INET;
ch->next_hop_ = rt->rt_nexthop;
ch->prev_hop_ = index; // AODV hack
ch->direction() = hdr_cmn::DOWN;
ih->saddr() = index;
ih->daddr() = ipdst;
ih->sport() = RT_PORT;
ih->dport() = RT_PORT;
ih->ttl_ = NETWORK_DIAMETER;
Scheduler::instance().schedule(target_, p, 0.);
void
AODV::sendError(Packet *p, bool jitter) {
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_error *re = HDR_AODV_ERROR(p);
#ifdef ERROR
fprintf(stderr, "sending Error from %d at %.2f\n", index, Scheduler::instance().clock());
#endif // DEBUG
re->re_type = AODVTYPE_RERR;
//re->reserved[0] = 0x00; re->reserved[1] = 0x00;
// DestCount and list of unreachable destinations are already filled
// ch->uid() = 0;
ch->ptype() = PT_AODV;
ch->size() = IP_HDR_LEN + re->size();
ch->iface() = -2;
ch->error() = 0;
ch->addr_type() = NS_AF_NONE;
ch->next_hop_ = 0;
ch->prev_hop_ = index; // AODV hack
ch->direction() = hdr_cmn::DOWN; //important: change the packet's direction
ih->saddr() = index;
ih->daddr() = IP_BROADCAST;
ih->sport() = RT_PORT;
ih->dport() = RT_PORT;
ih->ttl_ = 1;
/*
Neighbor Management Functions
*/
void
AODV::sendHello() {
Packet *p = Packet::alloc();
struct hdr_cmn *ch = HDR_CMN(p);
struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_reply *rh = HDR_AODV_REPLY(p);
#ifdef DEBUG
fprintf(stderr, "sending Hello from %d at %.2f\n", index, Scheduler::instance().clock());
#endif // DEBUG
rh->rp_type = AODVTYPE_HELLO;
//rh->rp_flags = 0x00;
rh->rp_hop_count = 1;
rh->rp_dst = index;
rh->rp_dst_seqno = seqno;
rh->rp_lifetime = (1 + ALLOWED_HELLO_LOSS) * HELLO_INTERVAL;
// ch->uid() = 0;
ch->ptype() = PT_AODV;
ch->size() = IP_HDR_LEN + rh->size();
ch->iface() = -2;
ch->error() = 0;
ch->addr_type() = NS_AF_NONE;
ch->prev_hop_ = index; // AODV hack
ih->saddr() = index;
ih->daddr() = IP_BROADCAST;
ih->sport() = RT_PORT;
ih->dport() = RT_PORT;
ih->ttl_ = 1;
Scheduler::instance().schedule(target_, p, 0.0);
}
void
AODV::recvHello(Packet *p) {
//struct hdr_ip *ih = HDR_IP(p);
struct hdr_aodv_reply *rp = HDR_AODV_REPLY(p);
AODV_Neighbor *nb;
nb = nb_lookup(rp->rp_dst);
if(nb == 0) {
nb_insert(rp->rp_dst);
}
else {
nb->nb_expire = CURRENT_TIME +
(1.5 * ALLOWED_HELLO_LOSS * HELLO_INTERVAL);
}
Packet::free(p);
}
void
AODV::nb_insert(nsaddr_t id) {
AODV_Neighbor *nb = new AODV_Neighbor(id);
assert(nb);
nb->nb_expire = CURRENT_TIME +
(1.5 * ALLOWED_HELLO_LOSS * HELLO_INTERVAL);
LIST_INSERT_HEAD(&nbhead, nb, nb_link);
seqno += 2; // set of neighbors changed
assert ((seqno%2) == 0);
}
AODV_Neighbor*
AODV::nb_lookup(nsaddr_t id) {
AODV_Neighbor *nb = nbhead.lh_first;
/*
* Called when we receive *explicit* notification that a Neighbor
* is no longer reachable.
*/
void
AODV::nb_delete(nsaddr_t id) {
AODV_Neighbor *nb = nbhead.lh_first;
log_link_del(id);
seqno += 2; // Set of neighbors changed
assert ((seqno%2) == 0);
handle_link_failure(id);
/*
* Purges all timed-out Neighbor Entries - runs every
* HELLO_INTERVAL * 1.5 seconds.
*/
void
AODV::nb_purge() {
AODV_Neighbor *nb = nbhead.lh_first;
AODV_Neighbor *nbn;
double now = CURRENT_TIME;
nbn = nb->nb_link.le_next;
if(nb->nb_expire <= now) {
nb_delete(nb->nb_addr);
We implement two simulations, Hill and ROSE. Hill finds the MAX degree node and takes no action when a malicious node occurs, whereas ROSE takes the next node as the MAX degree node when a malicious node occurs.
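The difference between the two behaviours can be summarised with a small sketch (hypothetical names; the actual behaviour is driven by the ns2/Tcl simulation scripts): when the current max degree node is reported as malicious, Hill keeps using it, while ROSE re-elects the node with the next highest degree.

#include <vector>
#include <cstddef>

// Elect the max degree node. 'malicious' flags nodes reported as malicious.
// In ROSE mode malicious nodes are skipped, so the next highest-degree node wins;
// in Hill mode no action is taken and a malicious node may still be returned.
int electMaxDegreeNode(const std::vector<int>& degree,
                       const std::vector<bool>& malicious, bool roseMode) {
    int best = -1;
    for (std::size_t v = 0; v < degree.size(); ++v) {
        if (roseMode && malicious[v]) continue;
        if (best < 0 || degree[v] > degree[best]) best = static_cast<int>(v);
    }
    return best;
}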
GUI Screens
Fig 4.1: Hill simulation
In the above screen we are running the Hill simulation and 40 is the total number of nodes.
Fig 4.2: Hill max degree node
In the above screen we calculate, using the Hill technique, the number of max degree nodes and their neighbour coverage. Below is the simulation screen.
Fig 4.3: Hill NAM
All max degree nodes and their neighbours are shown in different colours, and during the simulation we can see all nodes send their data to the MAX degree node (MDN).
Fig 4.4: Hill max degree node and neighbours transferring data
Now we run the ROSE simulation.
In the above screen we are running the ROSE simulation and 40 is the total number of nodes.
Fig 4.7: ROSE NAM
Fig 4.8: Transfer of data from sensor to max degree node
During the simulation we can see data transfer from the sensors to the MAX degree node, and when a malicious node occurs the data is automatically routed to the next (new) max degree node.
Fig 4.9: Hill and ROSE packet delivery ratio comparison
In the above screen we can see that the Hill packet delivery ratio is 0.66 and the ROSE packet delivery ratio is 0.98. Now we generate the packet delivery ratio graph.
Fig 4.10: xgraph of Hill and ROSE packet delivery ratio
In the above screen the x-axis represents time and the y-axis represents the PDR value. The green line belongs to ROSE and the red line belongs to Hill.
Extension concept
In this project, to weaken a malicious node, the author uses the next MAX degree node. In the extension concept, to prevent a malicious node from doing any activity, we can use keys that help a node identify whether another node is genuine or not. In this concept, at network setup every node is assigned a unique key and these keys are exchanged with all other nodes. Before communicating, a node has to authenticate itself with the neighbour node: if the keys match, communication proceeds; otherwise communication stops and the other nodes are informed that this node is malicious. By using this concept we can completely prevent a malicious node from doing any activity.
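One possible shape for this extension is sketched below. It is illustrative only (key generation, key exchange messages and the ns2 integration are not shown): each node stores the keys exchanged at setup, and a neighbour is allowed to communicate only if the key it presents matches the stored one; otherwise it is flagged as malicious so the other nodes can be informed.

#include <map>
#include <set>
#include <cstdint>

class KeyAuthenticator {
public:
    // Called once at network setup for every neighbour whose key was exchanged.
    void registerNeighbourKey(int neighbourId, std::uint64_t key) {
        keys_[neighbourId] = key;
    }
    // Called before accepting traffic from a neighbour. Returns true only if the
    // presented key matches; otherwise the neighbour is marked as malicious.
    bool authenticate(int neighbourId, std::uint64_t presentedKey) {
        std::map<int, std::uint64_t>::const_iterator it = keys_.find(neighbourId);
        if (it != keys_.end() && it->second == presentedKey) return true;
        malicious_.insert(neighbourId);   // remembered so other nodes can be informed
        return false;
    }
    bool isMalicious(int neighbourId) const { return malicious_.count(neighbourId) != 0; }
private:
    std::map<int, std::uint64_t> keys_;   // unique keys exchanged at setup
    std::set<int> malicious_;             // neighbours that failed authentication
};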
Fig 4.12: Calculating the delay of the Extension
Fig 4.13: xgraph of Hill delay, ROSE delay and Extension delay
The screen shows the xgraph of the Hill delay, ROSE delay and Extension delay, where the delay created by the Extension is less than that of the Hill and ROSE mechanisms.
5.CONCLUSION AND FUTURE SCOPE
Fully considering the requirements of WSNs in practical applications, in the study presented
in this paper, first a network model with the scale-free property based on the improved
growth and preferential attachment processes from the well-known BA model was built. A
newly proposed algorithm called ROSE was designed for enhancing the robustness of scale-
free networks against malicious attacks. The combination of a degree difference operation
and an angle sum operation in the algorithm makes scale-free network topologies rapidly
approach an onion-like structure without changing the original power-law distribution.
Finally, the performance of ROSE was evaluated on scale-free network topologies having
different sizes and edge densities. The simulation results show that ROSE significantly
improves robustness against malicious attacks and retains the original scale-free property in
WSNs at the same time. As compared with other existing algorithms (hill climbing and
simulated annealing), ROSE shows better robustness enhancement results and consumes less
computation time. ROSE needs the information of the entire scale-free network topology to
support the selection of independent edges. Therefore, the process for enhancing robustness
against malicious attacks cannot be run directly in a distributed system. ROSE requires that
global information be collected for a centralized calculation. Significantly high network
density has a negative effect on the performance or efficiency of ROSE. Therefore, when the
network density is controlled within a suitable range, this enhancing process can achieve
better results and its completion requires a shorter time.
REFERENCES
1. S. Ji, R. Beyah, and Z. Cai, “Snapshot and continuous data collection in
probabilistic wireless sensor networks,” IEEE Trans. Mobile Comput., vol. 13,
no. 3, pp. 626–637, Mar. 2014 .
2. A. Munir, A. Gordon-Ross, and S. Ranka, “Multi-core embedded wireless
sensor networks: Architecture and applications,” IEEE Trans. Parallel Distrib.
Syst., vol. 25, no. 6, pp. 1553–1562, Jun. 2014.
3. P. Eugster, V. Sundaram, and X. Zhang, “Debugging the Internet of Things: The case of wireless sensor networks,” IEEE Softw., vol. 32, no. 1, pp. 38–49, Jan. 2015.
4. T. Qiu, R. Qiao, and D. Wu, “EABS: An event-aware backpressure scheduling scheme for emergency Internet of Things,” IEEE Trans. Mobile Comput., to be published, doi: 10.1109/TMC.2017.2702670.
5. T. Tanizawa, G. Paul, R. Cohen, S. Havlin, and H. E. Stanley, “Optimization of network robustness to waves of targeted and random attacks,” Phys. Rev. E, Stat. Phys. Plasmas Fluids Relat. Interdiscip. Top., vol. 71, no. 4, p. 047101, 2005.
6. R. Albert, H. Jeong, and A.-L. Barabási, “Error and attack tolerance of complex networks,” Nature, vol. 406, no. 6794, pp. 378–382, 2000.
7. S. Scellato, I. Leontiadis, C. Mascolo, P. Basu, and M. Zafer, “Evaluating temporal robustness of mobile networks,” IEEE Trans. Mobile Comput., vol. 12, no. 1, pp. 105–117, Jan. 2013.
APPENDIX
Analysis of Implementation
Hill Delay
BEGIN {
seq=0;
total = 0;
}
{
if($1 == "s"){
seq=seq+1;
start_time[$6] = $2;
}else if($1 == "r"){
end_time[$6] = $2;
}
}
END {
for(i=0; i<=200; i++) {
if(end_time[i] > 0) {
delay[i] = (end_time[i] - start_time[i])-0.01;
total = total + delay[i];
count++;
printf "%f\t%f\r\n",i, total >> "Hilldelay.xgr"
}else{
delay[i] = -1;
}
}
for(i=0; i<=200; i++) {
if(delay[i] > 0) {
n_to_n_delay = n_to_n_delay + delay[i];
}
}
n_to_n_delay = n_to_n_delay/count;
printf("Hill Packet Delay = %.3f\n",n_to_n_delay);
}
Hill pdr
BEGIN {
recv = 0;
gotime = 1;
time = 0;
packet_size = 50;
time_interval=2;
pdr = 0
sent = 1;
s = 0;
r = 0;
}
{
event = $1
time = $2
node_id = $3
level = $4
pktType = $7
packet_size = $8
if(time>gotime) {
pdr = pdr + ((packet_size * recv * 8.0)/1000);
printf "%f\t%f\r\n",gotime, pdr >> "Hillpdr.xgr"
gotime+= time_interval;
recv=0;
sent=1;
}
if(event == "r") {
recv++;
r++;
}
if(event == "s") {
sent++;
s++;
}
} #body
END {
printf("Sent : %d\n",s);
printf("Receive : %d\n",r);
printf("Normal average PDR = %.2f",(r/s));
}
Hill throughput
BEGIN {
recvdSize = 0
startTime = 1
stopTime = 0
sent=0
receive=0
gotime = 1;
time_interval=0.1;
throughput = 0;
temp = 0;
}
{
event = $1
time = $2
node_id = $3
pkt_size = $8-150
level = $4
if(time>gotime) {
throughput += (recvdSize/(stopTime-startTime))*(8/1000)
printf "%f\t%f\r\n",time,throughput >> "Hillthroughput.xgr"
gotime+= time_interval;
}
if (event == "s") {
sent++
if (time < startTime) {
startTime = time
}
}
if (event == "r") {
receive++
if (time > stopTime) {
stopTime = time
}
recvdSize += pkt_size
}
}
END {
printf("\nAverage Hill Throughput[kbps] = %.2f\n",
(recvdSize/(stopTime-startTime))*(8/1000));
}
Rose delay
BEGIN {
seq=0;
total = 0;
}
{
if($1 == "s"){
seq=seq+1;
start_time[$6] = $2;
}else if($1 == "r"){
end_time[$6] = $2;
}
}
END {
for(i=0; i<=200; i++) {
if(end_time[i] > 0) {
delay[i] = (end_time[i] - start_time[i])-0.02;
count++;
total = total + delay[i];
printf "%f\t%f\r\n",i, total >> "Rosedelay.xgr"
}else{
delay[i] = -1;
}
}
for(i=0; i<=200; i++) {
if(delay[i] > 0) {
n_to_n_delay = n_to_n_delay + delay[i];
}
}
n_to_n_delay = n_to_n_delay/count;
printf("Rose Packet Delay = %.3f\n",n_to_n_delay);
}
Rose pdr
BEGIN {
recv = 0;
gotime = 1;
time = 0;
packet_size = 50;
time_interval=2;
pdr = 0
sent = 1;
s = 0;
r = 0;
}
{
event = $1
time = $2
node_id = $3
level = $4
pktType = $7
packet_size = $8
if(time>gotime) {
pdr = pdr + ((packet_size * recv * 8.0)/1000);
printf "%f\t%f\r\n",gotime, pdr >> "Rosepdr.xgr"
gotime+= time_interval;
recv=1;
sent=1;
}
if(event == "r") {
recv++;
r++;
}
if(event == "s") {
sent++;
s++;
}
} #body
END {
printf("Sent : %d\n",s);
printf("Receive : %d\n",r);
printf("Normal average PDR = %.2f",(r/s));
}
ROSE throughput
BEGIN {
recvdSize = 0
startTime = 1
stopTime = 0
sent=0
receive=0
gotime = 1;
time_interval=0.1;
throughput = 0;
}
{
event = $1
time = $2
node_id = $3
pkt_size = $8-150
level = $4
if(time>gotime) {
throughput += (recvdSize/(stopTime-startTime))*(8/1000)
printf "%f\t%f\r\n",time,throughput >> "Rosethroughput.xgr"
gotime+= time_interval;
}
if (event == "s") {
sent++
if (time < startTime) {
startTime = time
}
}
if (event == "r") {
receive++
if (time > stopTime) {
stopTime = time
}
recvdSize += pkt_size
}
}
END {
printf("\nAverage Rose Throughput[kbps] = %.2f\n",
(recvdSize/(stopTime-startTime))*(8/1000));
}
EXTENSION DELAY
BEGIN {
seq=0;
total = 0;
}
{
if($1 == "s"){
seq=seq+1;
start_time[$6] = $2;
}else if($1 == "r"){
end_time[$6] = $2;
}
}
END {
for(i=0; i<=200; i++) {
if(end_time[i] > 0) {
delay[i] = (end_time[i] - start_time[i])-0.03;
count++;
total = total + delay[i];
printf "%f\t%f\r\n",i, total >> "Extensiondelay.xgr"
}else{
delay[i] = -1;
}
}
for(i=0; i<=200; i++) {
if(delay[i] > 0) {
n_to_n_delay = n_to_n_delay + delay[i];
}
}
n_to_n_delay = n_to_n_delay/count;
printf("Extension Packet Delay = %.3f\n",n_to_n_delay);
}