1.1 Overview of Wireless Networks
INTRODUCTION
As more wireless and sensor networks are deployed, they will increasingly become tempting targets for malicious attacks. Because of the openness of wireless and sensor networks, they are especially vulnerable to eavesdropping attacks in which an attacker forges its identity to masquerade as another device, or even creates multiple illegitimate identities. Such attacks are a serious threat because they represent a form of identity compromise and can facilitate a variety of traffic injection attacks, such as evil twin access point attacks. It is therefore desirable to detect the presence of these attacks and eliminate the attackers from the network.
In this project, we take a different approach: we use the physical properties associated with wireless transmissions to detect eavesdropping. Specifically, we propose a scheme that both detects eavesdropping attacks and localizes the positions of the adversaries performing them. Our approach uses the Received Signal Strength (RSS) measured across a set of access points to perform detection and localization, and it adds no overhead to the wireless devices and sensor nodes.
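As a hedged illustration of this idea (not the exact detection scheme), the following C# sketch flags a device identity as suspicious when the RSS vectors observed for its frames at several access points split into two well-separated groups, suggesting two physically distinct transmitters using the same identity; the grouping heuristic and the threshold value are assumptions for illustration only.

using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch: detect identity abuse from RSS vectors observed at several access points.
// Assumption: two transmitters sharing one identity produce two well-separated RSS clusters.
static class RssSpoofDetector
{
    // Euclidean distance between two RSS vectors (one dBm value per access point).
    static double Distance(double[] a, double[] b) =>
        Math.Sqrt(a.Zip(b, (x, y) => (x - y) * (x - y)).Sum());

    // Returns true when the readings split into two groups whose centroids
    // are farther apart than 'thresholdDb' (an assumed, illustrative value).
    public static bool LooksSuspicious(IList<double[]> readings, double thresholdDb = 8.0)
    {
        // Seed the two groups with the most distant pair of readings.
        var (seedA, seedB) = (readings[0], readings[0]);
        double maxD = 0;
        foreach (var p in readings)
            foreach (var q in readings)
                if (Distance(p, q) > maxD) { maxD = Distance(p, q); (seedA, seedB) = (p, q); }

        // Assign each reading to the nearest seed and compare the group centroids.
        var clusterA = readings.Where(r => Distance(r, seedA) <= Distance(r, seedB)).ToList();
        var clusterB = readings.Where(r => Distance(r, seedA) > Distance(r, seedB)).ToList();
        if (clusterA.Count == 0 || clusterB.Count == 0) return false;

        double[] Centroid(List<double[]> c) =>
            Enumerable.Range(0, c[0].Length).Select(i => c.Average(r => r[i])).ToArray();

        return Distance(Centroid(clusterA), Centroid(clusterB)) > thresholdDb;
    }
}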
Due to the shared nature of the wireless medium, attackers can gather useful identity information through passive monitoring and then use it to launch identity-based attacks, in particular the two most harmful yet easy-to-launch attacks: 1) eavesdropping attacks and 2) Sybil attacks. In an identity-based eavesdropping attack, an attacker forges its identity to masquerade as another device, or even creates multiple illegitimate identities in the network. For instance, in an IEEE 802.11 network it is easy for an attacker to change the Media Access Control (MAC) address of its network interface card (NIC) to that of another device using vendor-supplied or open-source NIC drivers. In addition, by masquerading as an authorized wireless access point (AP) or an authorized client, an attacker can launch denial-of-service (DoS) attacks, bypass access control mechanisms, or falsely advertise services to wireless clients.
Therefore, identity-based attacks have a serious impact on the normal operation of wireless and sensor networks, and it is desirable to detect their presence and eliminate them from the network. The traditional approach to addressing identity-based attacks is cryptographic authentication. However, authentication requires additional infrastructure and the computational power associated with distributing and maintaining cryptographic keys. Because of the limited power and resources available to wireless devices and sensor nodes, it is not always possible to deploy authentication.
Nowadays, wireless networks are among the most widely used services in industrial and commercial applications, owing to technical advances in processors, communication, and low-power embedded computing devices. Sensor nodes are used to monitor environmental conditions such as temperature, pressure, humidity, sound, vibration, and position. In many real-time applications the sensor nodes perform different tasks, such as neighbor node discovery, smart sensing, data storage and processing, data aggregation, target tracking, control and monitoring, node localization, synchronization, and efficient routing between the nodes and the base station.
Wireless sensor nodes are equipped with a sensing unit, a processing unit, a communication unit, and a power unit. Every node is capable of data gathering, sensing, processing, and communicating with other nodes. The sensing unit senses the environment, the processing unit performs local computations on the sensed data, and the communication unit exchanges the processed information among neighboring sensor nodes.
The sensing unit of a sensor node integrates different types of sensors, such as thermal sensors, magnetic sensors, vibration sensors, chemical sensors, biosensors, and light sensors. The parameters measured from the external environment by the sensing unit are fed into the processing unit. The analog signals generated by the sensors are digitized by an Analog-to-Digital Converter (ADC) and sent to the controller for further processing. The processing unit is the core unit of the sensor node: the processor executes the different tasks and controls the functionality of the other components. The services required of the processing unit are pre-programmed and loaded into the processor of the sensor node. The energy utilization rate of the processor varies depending on the functionality of the node, and variations in processor performance are characterized by factors such as processing speed, data rate, memory, and the peripherals supported by the processor.
The computations are performed in the processing unit, and the result is transmitted to the base station through the communication unit. A common transceiver acts as the communication unit; it is mainly used to transmit and receive information between the nodes and the base station. There are four states in the communication unit: transmit, receive, idle, and sleep.
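As a small illustrative sketch (the current draw figures and supply voltage are assumed values, not measurements from this project), the four transceiver states can be modeled as follows to estimate the energy budget of a duty-cycled node.

using System;

// Illustrative model of the four transceiver states of a sensor node.
// The per-state current draw values are assumptions for this sketch only.
enum RadioState { Transmit, Receive, Idle, Sleep }

static class RadioPower
{
    // Approximate current draw in milliamps for each state (assumed values).
    public static double CurrentMilliAmps(RadioState s) => s switch
    {
        RadioState.Transmit => 17.4,
        RadioState.Receive  => 19.7,
        RadioState.Idle     => 0.4,
        RadioState.Sleep    => 0.02,
        _ => 0.0
    };

    // Energy in millijoules for spending 'seconds' in a state at a 'volts' supply.
    public static double EnergyMilliJoules(RadioState s, double seconds, double volts = 3.0) =>
        CurrentMilliAmps(s) * volts * seconds;
}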
Sensor networks are used in numerous application domains, such as cyber-physical infrastructure systems, environmental monitoring, and power grids. Data are produced at a large number of sensor node sources and processed in-network at intermediate hops on their way to a Base Station (BS) that performs decision-making. The diversity of data sources creates the need to assure the trustworthiness of data, such that only trustworthy information is considered in the decision process. Data provenance is an effective method to assess data trustworthiness, since it summarizes the history of ownership and the actions performed on the data. Recent research has highlighted the key contribution of provenance in systems where the use of untrustworthy data may lead to catastrophic failures (e.g., SCADA systems). Although provenance modeling, collection, and querying have been studied extensively for workflows and curated databases, provenance in sensor networks has not been properly addressed. We investigate the problem of secure and efficient provenance transmission and processing for sensor networks, and we use provenance to detect packet loss attacks staged by malicious sensor nodes.
In a multi-hop sensor network, data provenance allows the BS to trace the source and
forwarding path of an individual data packet. Provenance must be recorded for each packet, but
important challenges arise due to the tight storage, energy and bandwidth constraints of sensor
nodes. Therefore, it is necessary to devise a light-weight provenance solution with low overhead.
Furthermore, sensors often operate in an untrusted environment, where they may be subject to
attacks. Hence, it is necessary to address security requirements such as confidentiality, integrity
and freshness of provenance. Our goal is to design a provenance encoding and decoding
mechanism that satisfies such security and performance needs. We propose a provenance
encoding strategy whereby each node on the path of a data packet securely embeds provenance
information within a Bloom filter that is transmitted along with the data. Upon receiving the
packet, the BS extracts and verifies the provenance information. We also devise an extension of
the provenance encoding scheme that allows the BS to detect if a packet drop attack was staged
by a malicious node.
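As a hedged illustration of this encoding idea (the filter size, the number of hash positions, and the use of HMACSHA256 to derive keyed, per-node bit positions are assumptions made for this sketch, not the project's exact construction), a node could embed itself in an in-packet Bloom filter as follows.

using System;
using System.Collections;
using System.Security.Cryptography;
using System.Text;

// Hedged sketch of in-packet Bloom filter provenance encoding.
// Filter size, hash count, and the HMAC construction are illustrative assumptions.
class ProvenanceBloomFilter
{
    private readonly BitArray bits;
    private readonly int hashCount;

    public ProvenanceBloomFilter(int sizeInBits = 256, int hashCount = 3)
    {
        bits = new BitArray(sizeInBits);
        this.hashCount = hashCount;
    }

    // Each forwarding node embeds itself using a key it shares with the base station,
    // bound to the packet sequence number so the provenance cannot be replayed.
    public void EmbedNode(string nodeId, byte[] sharedKey, long packetSeq)
    {
        foreach (int index in BitIndexes(nodeId, sharedKey, packetSeq))
            bits[index] = true;
    }

    // The base station, which knows every node's key, checks membership the same way.
    public bool ContainsNode(string nodeId, byte[] sharedKey, long packetSeq)
    {
        foreach (int index in BitIndexes(nodeId, sharedKey, packetSeq))
            if (!bits[index]) return false;
        return true;   // possibly a false positive, as with any Bloom filter
    }

    private int[] BitIndexes(string nodeId, byte[] key, long packetSeq)
    {
        using var hmac = new HMACSHA256(key);
        var digest = hmac.ComputeHash(Encoding.UTF8.GetBytes($"{nodeId}|{packetSeq}"));
        var indexes = new int[hashCount];
        for (int i = 0; i < hashCount; i++)
            indexes[i] = BitConverter.ToUInt16(digest, i * 2) % bits.Length;
        return indexes;
    }
}

At the base station, which shares a key with every node, decoding amounts to testing each candidate node for membership in the received filter, subject to the usual Bloom filter false positive rate.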
As opposed to existing research that employs separate transmission channels for data and provenance, we require only a single channel for both. Furthermore, traditional provenance security solutions make intensive use of cryptography and digital signatures [5], and they employ append-based data structures to store provenance, leading to prohibitive costs. In contrast, we use only fast Message Authentication Code (MAC) schemes and Bloom filters (BF), which are fixed-size data structures that compactly represent provenance. Bloom filters make efficient use of bandwidth, and they yield low error rates in practice. Our specific contributions are:
• We formulate the problem of secure provenance transmission in sensor networks, and
identify the challenges specific to this context;
• We propose an in-packet Bloom filter provenance encoding scheme;
• We design efficient techniques for provenance decoding and verification at the base station;
• We extend the secure provenance encoding scheme and devise a mechanism that detects
packet drop attacks staged by malicious forwarding sensor nodes;
• We perform a detailed security analysis and performance evaluation of the proposed
provenance encoding scheme and packet loss detection mechanism.
1.3 SECURITY IN WSN:
Although affected by random noise, environmental bias, and multipath effects, the RSS
measured at a set of landmarks (i.e., reference points with known locations) is closely related to
the transmitter’s physical location and is governed by the distance to the landmarks. The RSS
readings at different locations in physical space are distinctive.
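For intuition, RSS is commonly modeled with the log-distance path loss model, in which the received power falls off with the logarithm of distance. The sketch below uses assumed parameter values (reference RSS at one meter and path loss exponent) purely for illustration.

using System;

// Log-distance path loss model relating RSS at a landmark to transmitter distance:
// P(d) = P(d0) - 10 * n * log10(d / d0). All parameter values are assumed for illustration.
static class PathLossModel
{
    public static double RssDbm(double distanceMeters,
                                double rssAtReferenceDbm = -40.0, // RSS at d0 = 1 m (assumed)
                                double pathLossExponent = 3.0,    // typical indoor value (assumed)
                                double referenceMeters = 1.0)
    {
        return rssAtReferenceDbm - 10.0 * pathLossExponent *
               Math.Log10(distanceMeters / referenceMeters);
    }
}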
CHAPTER 2
SYSTEM STUDY
Feasibility Study
A feasibility study is a test of a proposal with respect to its workability, its impact, its ability to meet user needs, and its effective use of resources.
It is both necessary and prudent to evaluate the feasibility of a project at the earliest possible time. Feasibility and risk analysis are related in many ways: if project risk is great, the feasibility of producing quality software is reduced. The feasibility study focuses mainly on the following questions.
Is there a better way of performing the users' work than the current method?
What is the cost involved in going for a new system?
What are the benefits the user could gain over the current system?
What is recommended?
Feasibility consideration
The proposed system has been analyzed and found to be feasible for development and implementation. There are three types of feasibility study:
Technical Feasibility
Behavioral Feasibility
Economic Feasibility
Technical Feasibility
Technical feasibility centers on examining the existing computer system, i.e., the hardware and software. This involves the financial considerations needed to accommodate technical enhancements. VB.NET and SQL Server are already installed on Windows platforms. It was found that the work could be done with the current equipment, the existing software technology, and the available personnel; with the existing hardware and software technology, reliable access rights and data security are guaranteed. Hence the system was found to be technically feasible.
Behavioral Feasibility
Behavioral feasibility aims at estimating whether the users are able to cope with the new system. The project has sufficient support from management and from the users. The system was found to be operationally feasible.
Economic Feasibility
This procedure involves the analysis of the cost involved in setting up the new system. The cost involved is compared with the benefits that the new system would render. Since VB.NET and SQL Server are already in use at the organization, no costs were incurred for the purchase of software.
The cost of conducting the investigation and developing the system was not high and was affordable. The system reduces the cost, effort, and time of projects carried out in the organization. The system was found to be economically feasible.
CHAPTER 3
SYSTEM ANALYSIS
3.1 LITERATURE SURVEY
The properties and strategies of aggregation in a wireless environment were discussed by [Madden S et al.], who proposed the Tiny Aggregation (TAG) service, a distributed, low-power algorithm that improves the performance of data aggregation in clustered networks. Wireless network routing is also discussed and analyzed in that paper.
[Intanagonwiwat et al.] proposed a novel technique called Directed Diffusion, a scalable and robust communication paradigm for wireless sensor networks. Communication routing in distributed sensor networks is also explained; the approach achieves energy savings by selecting empirically good paths and by processing the data in the network.
Securing communication among sensor networks adds further complexity and challenges. [Murugaboopathi G et al.] analyzed the threats and vulnerabilities in wireless sensor networks; some of the attacks cause serious problems when they occur during communication. The intrusion detection system was analyzed with respect to the features and usability of the technique, and cryptographic techniques were introduced for attack detection and prevention in the wireless environment.
Enhanced power-sensing techniques face several challenges and security threats. Security issues such as data confidentiality, integrity, robustness, and survivability are discussed and analyzed by [Sushma et al.], and the attacks on sensor networks are discussed together with their effects.
[Roopak M] described the most important aspects of wireless sensor network security, such as obstacles, requirements, attacks, and their defenses. The vulnerabilities and issues of sensor networks are analyzed for large numbers of sensor nodes.
[Josang A and Golbeck J] discussed the challenges for robust trust and reputation systems in wireless sensor networks. Trust and reputation systems are used to determine the trustworthiness and robustness of the data aggregator node, and such systems struggle with several weaknesses when working with the aggregator. The paper proposes a research agenda for developing reliable robustness principles and trust and reputation mechanisms.
[Hoffman K et al.] surveyed attack and defense mechanisms for reputation systems in wireless sensor environments. They presented an analysis framework that allows the decomposition of traditional reputation systems and, in connection with that, analyzed several landmark systems in the peer-to-peer domain. Their work contributes to understanding which design components of reputation systems are most vulnerable.
To obtain trust scores, [Lim H.S. et al.] proposed a cyclic framework that reflects the interdependency property; the trustworthiness assessment of reputation systems is thereby enhanced for sensor networks. They provide a formal method for computing trust scores based on the value and provenance similarities of data items. As future work, they planned to consider multiple dependent attributes and multi-attribute network operations.
[Kerchove de C. et al.] proposed an iterative algorithm for obtaining trustworthiness in a clustered service network with an aggregate node. Parameters of the iterative algorithm are introduced for reputation systems, and the system allows clients to efficiently refine the reputations of evaluated objects from structured data.
[Zhou Y. et al.] addressed the robustness of web-based ranking systems, a problem that has attracted much attention, and proposed a correlation-based reputation algorithm to solve the ranking problem of such systems; the best ranking algorithm should be robust against spammer attacks. The paper relates these reputation systems to sensor networks with refined sensor sources.
Trust and reputation systems are complex to understand and need an algorithm to determine trustworthiness. [Ayday E. et al.] defined and analyzed such an algorithm: they described the working mechanism of an iterative filtering (IF) algorithm, showed that the IF scheme is robust in filtering out peers who provide unreliable data ratings, and introduced an iterative method for trust and reputation management referred to as ITRM.
[Chou C.T. et al.] proposed a computationally efficient and effective method to compute a weighted average of sensor measurements that takes sensor node faults and sensor noise into consideration. The data fusion centre needs to perform the decompression only once to compute the robust average, which reduces the computational requirements.
It is necessary to analyze the threats and challenges in wireless sensor networks, and countermeasure selection is an emerging task for mitigating attacks. [Yu Y. et al.] provide an analysis that categorizes various attacks and their flexible countermeasures related to trust strategies in WSNs. Based on this analysis, new trust mechanisms such as secure routing and secure data aggregation were identified to emphasize the challenges and issues of trust schemes in WSNs.
The major problem is that secure data aggregation is vulnerable to security attacks such as the collusion attack, the intermediate-node attack, and the spoofing attack. Data aggregation in wireless sensor networks must be robust and requires accuracy of the data collected from the various sensor sources, and collusion attacks cause the most problems when data are aggregated and secured; collusion attacks, or node-compromising attacks, make data aggregation highly vulnerable. Iterative filtering algorithms address threats such as collusion attacks and wormhole attacks: such an algorithm simultaneously aggregates data from multiple sensor sources and provides a trust assessment of those sources, usually in the form of weight factors assigned to the data furnished by each source. Trust and reputation systems have recently been adopted as a cost-effective security mechanism for wireless sensor networks. Although sensor networks are being increasingly deployed in many application domains, assessing the trustworthiness of data reported by distributed sensors remains a challenging issue. Sensors in hostile environments are exposed to node-compromising attacks by adversaries who intend to inject false data into the system. In this hostile context, assessing the trustworthiness of the aggregated data and informing decision makers becomes a challenging and complex task.
3.1.1 Disadvantages of Existing System:
Traditional provenance security solutions make intensive use of cryptography and digital signatures, and they employ append-based data structures to store provenance, leading to prohibitive costs.
Existing research employs separate transmission channels for data and provenance.
3.2 Proposed System:
Wireless sensor networks deal with two important forms of network organization: the flat sensor network and the clustered sensor network. When the number of sensor sources is small, a flat sensor network is applicable and affordable; when the application needs a larger number of sensor sources, a clustered sensor network is used. The sensor nodes are therefore clustered into groups, and every cluster has a cluster head that acts as the aggregator (aggregate node). The aggregate node collects the required data and sends the processed result to the signal reader for decision making. The aggregator may be compromised by adversaries despite the secure algorithms, in which case the cluster sensors are compromised easily and false data are assigned to the sensors; an individual sensor node then sends its signal to the aggregator, which forwards a false signal to the reader. The Iterative Filtering algorithm is proposed to obtain the trustworthiness of the individual sensor nodes.
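A hedged sketch of the Iterative Filtering idea is given below; the reciprocal-of-squared-distance weight update and the convergence tolerance are common textbook choices used here for illustration and are not necessarily the exact variant adopted in this project.

using System;
using System.Linq;

// Hedged sketch of iterative filtering (IF) for robust aggregation.
// The weight update rule and stopping tolerance are illustrative assumptions.
static class IterativeFiltering
{
    // readings[i][t] = reading of sensor i at time slot t.
    public static (double[] estimate, double[] weights) Aggregate(
        double[][] readings, int maxIterations = 50, double tolerance = 1e-6)
    {
        int sensors = readings.Length;
        int slots = readings[0].Length;
        var weights = Enumerable.Repeat(1.0 / sensors, sensors).ToArray();
        var estimate = new double[slots];

        for (int iter = 0; iter < maxIterations; iter++)
        {
            // Current estimate: weighted average of the sensors at each time slot.
            var previous = (double[])estimate.Clone();
            for (int t = 0; t < slots; t++)
                estimate[t] = Enumerable.Range(0, sensors).Sum(i => weights[i] * readings[i][t]);

            // New weight: inverse of each sensor's squared distance from the estimate,
            // so sensors far from the consensus (e.g., colluding nodes) get low weight.
            var distances = Enumerable.Range(0, sensors)
                .Select(i => Enumerable.Range(0, slots)
                    .Sum(t => Math.Pow(readings[i][t] - estimate[t], 2)) + 1e-9)
                .ToArray();
            double norm = distances.Sum(d => 1.0 / d);
            for (int i = 0; i < sensors; i++)
                weights[i] = (1.0 / distances[i]) / norm;

            if (Enumerable.Range(0, slots).Sum(t => Math.Abs(estimate[t] - previous[t])) < tolerance)
                break;
        }
        return (estimate, weights);
    }
}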
3.2.1 Advantages:
All the individual sensor nodes are checked before the process of data aggregation, and all sensors predicted to be reliable are identified.
An adversary node may compromise some of the sensor source nodes and modify their readings so that the simple average of all sensor readings is skewed towards a minimum value. As the compromised nodes report a minimum value, the Iterative Filtering algorithm finds them and assigns them lower weights, because their values are far from the values of the other sensors. In other words, the proposed algorithm is robust against false data injection in this scenario, because the compromised nodes falsify their readings individually, without any knowledge of the aggregation algorithm.
The collusion attack against the sensor nodes is identified based on trust and reputation systems.
The estimation of the noise parameters obtained from the sensor errors is required for the design of a robust and efficient aggregation algorithm.
CHAPTER – 4
SYSTEM SPECIFICATION
HDD : 80 GB
RAM : 1 GB
Assemblies
Compiled CIL code is stored in CLI assemblies. As mandated by the specification, assemblies are stored in the Portable Executable (PE) file format, common on the Windows platform for all dynamic-link library (DLL) and executable (EXE) files. Each assembly consists of one or more files, one of which must contain a manifest bearing the metadata for the assembly. The complete name of an assembly (not to be confused with the file name on disk) contains its simple text name, version number, culture, and public key token. Assemblies are considered equivalent if they share the same complete name.
A private key can also be used by the creator of the assembly for strong naming. The public key token identifies which private key an assembly is signed with. Only the creator of the key pair (typically the .NET developer signing the assembly) can sign assemblies that have the same strong name as a prior version of the assembly, since the creator possesses the private key. Strong naming is required to add assemblies to the Global Assembly Cache.
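As a brief illustration of an assembly's complete name, the snippet below prints the simple name, version, culture, and public key token of the currently executing assembly using the standard reflection APIs.

using System;
using System.Linq;
using System.Reflection;

// Prints the parts of an assembly's complete (display) name.
class AssemblyNameDemo
{
    static void Main()
    {
        AssemblyName name = Assembly.GetExecutingAssembly().GetName();
        Console.WriteLine($"Simple name : {name.Name}");
        Console.WriteLine($"Version     : {name.Version}");
        Console.WriteLine($"Culture     : {(string.IsNullOrEmpty(name.CultureName) ? "neutral" : name.CultureName)}");

        byte[] token = name.GetPublicKeyToken() ?? Array.Empty<byte>();
        Console.WriteLine(token.Length == 0
            ? "Public key token: null (not strong-named)"
            : "Public key token: " + string.Concat(token.Select(b => b.ToString("x2"))));
    }
}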
Class library
.NET Framework includes a set of standard class libraries. The class library is organized
in a hierarchy of namespaces. These class libraries implement a large number of common functions,
such as file reading and writing, graphic rendering, database interaction, and XML document
manipulation. The class libraries are available for all CLI compliant languages. The class library
is divided into two parts (with no clear boundary): Base Class Library (BCL) and Framework
Class Library (FCL).
BCL includes a small subset of the entire class library and is the core set of classes that
serve as the basic API of CLR. BCL classes are available in .NET Framework as well as its
alternative implementations including .NET Compact Framework, Microsoft Silverlight, .NET
Core and Mono.
FCL is a superset of BCL and refers to the entire class library that ships with .NET Framework. It includes an expanded set of libraries, including Windows Forms, ASP.NET, and Windows Presentation Foundation (WPF), as well as extensions to the base class libraries such as ADO.NET, Language Integrated Query (LINQ), Windows Communication Foundation (WCF), and Workflow Foundation (WF). FCL is much larger in scope than standard
libraries for languages like C++, and comparable in scope to standard libraries of Java.
With the introduction of alternative implementations (e.g., Silverlight), Microsoft
introduced the concept of Portable Class Libraries (PCL) allowing a consuming library to run on
more than one platform. With the further proliferation of .NET platforms, the PCL approach
failed to scale (PCLs are defined intersections of API surface between two or more
platforms).[36] As the next evolutionary step of PCL, the .NET Standard Library was created
retroactively based on the System.Runtime.dll based APIs found in UWP and Silverlight. New
.NET platforms are encouraged to implement a version of the standard library allowing them to
re-use extant third-party libraries to run without new versions of them. The .NET Standard
Library allows an independent evolution of the library and app model layers within the .NET
architecture.
NuGet is the package manager for all .NET platforms. It is used to retrieve third-party
libraries into a .NET project with a global library feed at NuGet.org. Private feeds can be
maintained separately, e.g., by a build server or a file system directory.
App models
Atop the class libraries, multiple app models are used to create applications. Other app
models are offered by alternative implementations of the .NET Framework. Console, UWP and
ASP.NET Core are available on .NET Core. Mono is used to power Xamarin app models
for Android, iOS, and macOS. The retroactive architectural definition of app models showed up
in early 2015 and was also applied to prior technologies like Windows Forms or WPF.
C++/CLI
Microsoft introduced C++/CLI in Visual Studio 2005, which is a language and means of
compiling Visual C++ programs to run within the .NET Framework. Some parts of the C++
program still run within an unmanaged Visual C++ Runtime, while specially modified parts are
translated into CIL code and run with the .NET Framework's CLR. Assemblies compiled using
the C++/CLI compiler are termed mixed-mode assemblies, since they contain native and
managed code in the same DLL.[39] Such assemblies are also difficult to reverse engineer, since
.NET decompilers such as .NET Reflector reveal only the managed code.
Interoperability
Because computer systems commonly require interaction between newer and older
applications, .NET Framework provides means to access functions implemented in newer and
older programs that execute outside .NET environment. Access to Component Object
Model (COM) components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework. Access to other functions is via Platform Invocation Services (P/Invoke). Access to .NET functions from native applications is via the reverse P/Invoke functionality.
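For illustration, a typical P/Invoke declaration looks like the following; MessageBox in user32.dll is used simply because it is a well-known Win32 function.

using System;
using System.Runtime.InteropServices;

// Calling an unmanaged Win32 function from managed code via P/Invoke.
class PInvokeDemo
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        // 0 = MB_OK; shows a simple message box on Windows.
        MessageBox(IntPtr.Zero, "Hello from managed code", "P/Invoke demo", 0);
    }
}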
Language independence
The .NET Framework introduces a Common Type System (CTS) that defines all
possible data types and programming constructs supported by CLR and how they may or may
not interact with each other conforming to CLI specification. Because of this feature, .NET
Framework supports the exchange of types and object instances between libraries and
applications written using any conforming .NET language.
Type safety
CTS and the CLR used in .NET Framework also enforce type safety. This prevents ill-
defined casts, wrong method invocations, and memory size issues when accessing an object. This
also makes most CLI languages statically typed (with or without type inference). However,
starting with .NET Framework 4.0, the Dynamic Language Runtime extended the CLR, allowing
dynamically typed languages to be implemented atop the CLI.
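A short example of the dynamic typing enabled by the Dynamic Language Runtime: member lookup on a dynamic variable is resolved at run time rather than at compile time.

using System;

// With 'dynamic', member resolution is deferred until run time (DLR, .NET Framework 4.0+).
class DynamicDemo
{
    static void Main()
    {
        dynamic value = "hello";
        Console.WriteLine(value.Length);   // resolved at run time against System.String

        value = 42;
        Console.WriteLine(value + 8);      // now resolved against System.Int32
    }
}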
Portability
While Microsoft has never implemented the full framework on any system except
Microsoft Windows, it has engineered the framework to be cross-platform,[40] and
implementations are available for other operating systems (see Silverlight and § Alternative
implementations). Microsoft submitted the specifications for CLI (which includes the core class
libraries, CTS, and CIL),[41][42][43] C#,[44] and C++/CLI to both Ecma International (ECMA)
and International Organization for Standardization (ISO), making them available as official
standards. This makes it possible for third parties to create compatible implementations of the
framework and its languages on other platforms.
Security
.NET Framework has its own security mechanism with two general features: Code
Access Security (CAS), and validation and verification. CAS is based on evidence that is
associated with a specific assembly. Typically the evidence is the source of the assembly
(whether it is installed on the local machine or has been downloaded from the Internet). CAS
uses evidence to determine the permissions granted to the code. Other code can demand that
calling code be granted a specified permission. The demand causes CLR to perform a call stack
walk: every assembly of each method in the call stack is checked for the required permission; if
any assembly is not granted the permission a security exception is thrown.
Managed CIL bytecode is easier to reverse-engineer than native code unless it is obfuscated. .NET decompiler programs enable developers with no reverse-engineering skills to view the source code behind unobfuscated .NET assemblies. In contrast, apps compiled
to native machine code are much harder to reverse-engineer, and source code is almost never
produced successfully, mainly because of compiler optimizations and lack of reflection.[48] This
creates concerns in the business community over the possible loss of trade secrets and the
bypassing of license control mechanisms.
Memory management
CLR frees the developer from the burden of managing memory (allocating and freeing up
when done); it handles memory management itself by detecting when memory can be safely
freed. Instantiations of .NET types (objects) are allocated from the managed heap, a pool of
memory managed by CLR. As long as a reference to an object exists, which may be either direct,
or via a graph of objects, the object is considered to be in use. When no reference to an object
exists, and it cannot be reached or used, it becomes garbage, eligible for collection.
.NET Framework includes a garbage collector (GC) which runs periodically, on a
separate thread from the application's thread, that enumerates all the unusable objects and
reclaims the memory allocated to them. It is a non-deterministic, compacting, mark-and-
sweep garbage collector. GC runs only when a set amount of memory has been used or there is
enough pressure for memory on the system. Since it is not guaranteed when the conditions to
reclaim memory are reached, GC runs are non-deterministic. Each .NET application has a set of
roots, which are pointers to objects on the managed heap (managed objects). These include
references to static objects and objects defined as local variables or method parameters currently
in scope, and objects referred to by CPU registers. When GC runs, it pauses the application and
then, for each object referred to in the root, it recursively enumerates all the objects reachable
from the root objects and marks them as reachable. It uses CLI metadata and reflection to
discover the objects encapsulated by an object, and then recursively walk them. It then
enumerates all the objects on the heap (which were initially allocated contiguously) using
reflection. All objects not marked as reachable are garbage.[49] This is the mark phase.[50] Since
the memory held by garbage is of no consequence, it is considered free space. However, this
leaves chunks of free space between objects which were initially contiguous. The objects are
then compacted together to make free space on the managed heap contiguous again.[49][50] Any
reference to an object invalidated by moving the object is updated by GC to reflect the new
location.[50] The application is resumed after garbage collection ends. The latest version of .NET
framework uses concurrent garbage collection along with user code, making pauses
unnoticeable, because it is done in the background.
GC used by .NET Framework is also generational. Objects are assigned a generation.
Newly created objects are tagged Generation 0. Objects that survive a garbage collection are
tagged Generation 1. Generation 1 objects that survive another collection are Generation 2. The
framework uses up to Generation 2 objects. Higher generation objects are garbage collected less
often than lower generation objects. This raises the efficiency of garbage collection, as older
objects tend to have longer lifetimes than newer objects. Thus, by eliminating older (and thus
more likely to survive a collection) objects from the scope of a collection run, fewer objects need
checking and compacting.
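The generational behavior can be observed directly through the GC class, as in this small example (the reported generations depend on the runtime and on timing).

using System;

// Observing generational garbage collection with the GC class.
class GcDemo
{
    static void Main()
    {
        var data = new byte[1024];
        Console.WriteLine($"After allocation : generation {GC.GetGeneration(data)}"); // usually 0

        GC.Collect();                     // force a collection (for demonstration only)
        GC.WaitForPendingFinalizers();
        Console.WriteLine($"After collection : generation {GC.GetGeneration(data)}"); // promoted, often 1

        Console.WriteLine($"Max generation   : {GC.MaxGeneration}");                  // 2 on .NET Framework
    }
}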
Performance
When an application is first launched, the .NET Framework compiles the CIL code into
executable code using its just-in-time compiler, and caches the executable program into the .NET
Native Image Cache.[53][54] Due to caching, the application launches faster for subsequent
launches, although the first launch is usually slower. To speed up the first launch, developers
may use the Native Image Generator utility to manually ahead-of-time compile and cache any
.NET application.
The garbage collector, which is integrated into the environment, can introduce
unanticipated delays of execution over which the developer has little direct control. "In large
applications, the number of objects that the garbage collector needs to work with can become
very large, which means it can take a very long time to visit and rearrange all of them."
.NET Framework provides support for calling Streaming SIMD Extensions (SSE) via managed
code from April 2014 in Visual Studio 2013 Update 2. However, Mono has provided support
for SIMD Extensions as of version 2.2 within the Mono.Simd namespace in 2009.[56] Mono's
lead developer Miguel de Icaza has expressed hope that this SIMD support will be adopted by
CLR's ECMA standard.[57] Streaming SIMD Extensions have been available in x86 CPUs since
the introduction of the Pentium III. Some other architectures such as ARM and MIPS also have
SIMD extensions. In case the CPU lacks support for those extensions, the instructions are
simulated in software.
.NET Framework is the predominant implementation of .NET technologies. Other
implementations for parts of the framework exist. Although the runtime engine is described by
an ECMA-ISO specification, other implementations of it may be encumbered by patent issues;
ISO standards may include the disclaimer, "Attention is drawn to the possibility that some of the
elements of this document may be the subject of patent rights. ISO shall not be held responsible
for identifying any or all such patent rights.” It is harder to develop alternatives to FCL, which is
not described by an open standard and may be subject to copyright restrictions. Also, parts of
FCL have Windows-specific functions and behavior, so implementation on non-Windows
platforms can be problematic.
Some alternative implementations of parts of the framework are listed here.
.NET Micro Framework is a .NET platform for extremely resource-constrained devices. It
includes a small version of CLR and supports development in C# (though some developers were
able to use VB.NET,[59] albeit with an amount of hacking, and with limited functionalities) and
debugging (in an emulator or on hardware), both using Microsoft Visual Studio. It also features a
subset of .NET Framework Class Library (about 70 classes with about 420 methods),
a GUI framework loosely based on WPF, and additional libraries specific to embedded
applications.
.NET Core is an alternative Microsoft implementation of the managed code framework; it has
similarities with .NET Framework and even shares some API, but is designed based on different
sets of principles: It is cross-platform and free and open-source.
Mono is an implementation of CLI and FCL, and provides added functions. It is dual-
licensed as free and proprietary software. It includes support for ASP.NET, ADO.NET, and
Windows Forms libraries for a wide range of architectures and operating systems. It also
includes C# and VB.NET compilers.
Portable.NET (part of DotGNU) provides an implementation of CLI, parts of FCL, and a C#
compiler. It supports a variety of CPUs and operating systems. The project was discontinued,
with the last stable release in 2009.
Microsoft Shared Source Common Language Infrastructure is a non-free implementation of
CLR. However, the last version runs on Microsoft Windows XP SP2 only, and has not been
updated since 2006. Thus, it does not contain all features of version 2.0 of .NET Framework.
CrossNet[60] is an implementation of CLI and parts of FCL. It is free software using an open
source MIT License.
.NET Core
(Figure: a Venn diagram showing the APIs covered by .NET Framework, .NET Core, and both.)
.NET Core is a cross-platform free and open-source managed software framework similar to
.NET Framework. It consists of CoreCLR, a complete cross-platform runtime implementation
of CLR, the virtual machine that manages the execution of .NET programs. CoreCLR comes
with an improved just-in-time compiler, called RyuJIT. .NET Core also includes CoreFX, which is a partial fork of FCL.[63] While .NET Core shares a subset of .NET Framework APIs, it comes with its own API that is not part of .NET Framework. Further, .NET Core contains CoreRT, the .NET Native runtime optimized to be integrated into AOT-compiled native binaries. A variant of the .NET Core library is used for UWP. .NET Core's command-line interface offers an execution
entry point for operating systems and provides developer services like compilation and package
management.
.NET Core supports four cross-platform scenarios: ASP.NET Core web apps, command-
line apps, libraries, and Universal Windows Platform apps. It does not implement Windows Forms or WPF, which render the standard GUI for desktop software on Windows. .NET Core is also modular, meaning that instead of assemblies, developers work with NuGet packages. Unlike
.NET Framework, which is serviced using Windows Update, .NET Core relies on its package
manager to receive updates. .NET Core 1.0 was released on 27 June 2016, along with Visual
Studio 2015 Update 3, which enables .NET Core development.
.NET Standard
.NET Standard is a set of APIs intended to unify the .NET Framework, .NET Core, and Xamarin platforms.
5.3 OVERVIEW OF C#
C# is an elegant and type-safe object-oriented language that enables developers to build a wide
range of secure and robust applications that run on the .NET Framework. You can use C# to
create traditional Windows client applications, XML Web services, distributed components,
client-server applications, database applications, and much, much more. Microsoft Visual C#
2005 provides an advanced code editor, convenient user interface designers, integrated debugger,
and many other tools to facilitate rapid application development based on version 2.0 of the C#
language and the .NET Framework.
C# syntax is highly expressive, yet with fewer than 90 keywords it is also simple and easy
to learn. The curly-brace syntax of C# will be instantly recognizable to anyone familiar with C,
C++ or Java. Developers who know any of these languages are typically able to begin working
productively in C# within a very short time. C# syntax simplifies many of the complexities of
C++ while providing powerful features such as nullable value types, enumerations, delegates,
anonymous methods and direct memory access, which are not found in Java. C# also supports
generic methods and types, which provide increased type safety and performance, and iterators,
which enable implementers of collection classes to define custom iteration behaviors that are
simple to use by client code.
As an object-oriented language, C# supports the concepts of encapsulation, inheritance
and polymorphism. All variables and methods, including the Main method, the application's
entry point, are encapsulated within class definitions. A class may inherit directly from one
parent class, but it may implement any number of interfaces. Methods that override virtual
methods in a parent class require the override keyword as a way to avoid accidental redefinition.
In C#, a struct is like a lightweight class; it is a stack-allocated type that can implement interfaces
but does not support inheritance.
In addition to these basic object-oriented principles, C# facilitates the development of
software components through several innovative language constructs, including:
Encapsulated method signatures called delegates, which enable type-safe event
notifications.
Properties, which serve as accessors for private member variables.
Attributes, which provide declarative metadata about types at run time.
Inline XML documentation comments.
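A compact, hedged illustration of the constructs listed above, using a hypothetical TemperatureSensor class:

using System;

// Illustrates the constructs listed above: a delegate-backed event,
// a property, an attribute, and an XML documentation comment.
[Serializable]                                             // attribute: declarative metadata on the type
public class TemperatureSensor
{
    /// <summary>Raised when a new reading is available (XML doc comment).</summary>
    public event EventHandler<double> ReadingAvailable;    // delegate-based, type-safe event

    /// <summary>Last reading in degrees Celsius.</summary>
    public double LastReading { get; private set; }        // property as accessor to private state

    public void Publish(double celsius)
    {
        LastReading = celsius;
        ReadingAvailable?.Invoke(this, celsius);
    }
}

A subscriber would attach a handler, for example sensor.ReadingAvailable += (s, c) => Console.WriteLine(c);, before Publish is called.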
If the C# developer needs to interact with other Windows software such as COM objects or native Win32 DLLs, this can be done in C# through a process called "Interop." Interop enables C# programs to do just about anything that a native C++ application can do. C# even supports pointers and the concept of "unsafe" code for those cases in which direct memory access is absolutely critical.
The C# build process is simple compared to C and C++ and more flexible than in Java.
There are no separate header files, and no requirement that methods and types be declared in a
particular order. A C# source file may define any number of classes, structs, interfaces, and
events.
SYSTEM ARCHITECTURE
(Architecture diagram: Compressive Sensing, Collusion Attack, Sink Node, Attack Detection.)
SYSTEM DESIGN
7.1.1 Setup Phase:
Immediately after the route is established, the setup phase starts. The source decides on a symmetric-key cryptosystem for encrypting the packets during the transmission phase and securely distributes a decryption key and a symmetric key to each node on the path; this key distribution may be based on a public-key cryptosystem. The source also announces two hash functions to every node in the route. Besides this, the source needs to set up its HLA keys.
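A hedged sketch of the setup phase follows. The actual scheme relies on HLA (homomorphic linear authenticator) keys, whose construction is not detailed here; the sketch therefore only shows the assumed pattern of generating a per-route symmetric key and wrapping it with each node's public key for secure distribution.

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

// Hedged sketch of the setup phase: generate a per-route session key and wrap it
// with each node's RSA public key. The HLA key setup itself is scheme-specific and omitted.
static class SetupPhase
{
    public static Dictionary<string, byte[]> DistributeSessionKey(Dictionary<string, RSA> nodePublicKeys)
    {
        // Symmetric key used to encrypt packets during the transmission phase.
        byte[] sessionKey;
        using (var aes = Aes.Create())
        {
            aes.KeySize = 256;
            aes.GenerateKey();
            sessionKey = aes.Key;
        }

        // Wrap the session key for every node on the path with that node's public key.
        var wrapped = new Dictionary<string, byte[]>();
        foreach (var entry in nodePublicKeys)
            wrapped[entry.Key] = entry.Value.Encrypt(sessionKey, RSAEncryptionPadding.OaepSHA256);
        return wrapped;
    }
}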
(Setup phase diagram: Sensor Nodes, Wireless Sensor Networks, Route Discovery.)
7.1.2 Packet Transmission Phase:
After the successful completion of the setup phase, the source enters the transmission phase. In this phase, before transmitting the packets, the source computes the hash value of each packet and generates HLA signatures of the hash value for each node. These signatures are then sent together with the packets along the route using one-way chained encryption, which prevents an upstream node from deciphering the signatures intended for downstream nodes. When a node in the route receives the packet from the source, it extracts the packet and its signature and verifies the integrity of the received packet. A database is maintained at every node on the PSD; it can be considered a FIFO queue that records the reception status of the packets sent by the source. Every node stores the received hash value and signature in the database as proof of reception.
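A hedged sketch of the per-packet processing is shown below; an HMAC stands in for the HLA signature, whose homomorphic construction is not reproduced here, and the key handling is simplified.

using System;
using System.Linq;
using System.Security.Cryptography;

// Hedged sketch of the transmission phase: hash each packet and sign the hash.
// HMACSHA256 is used here only as a stand-in for the HLA signature construction.
static class TransmissionPhase
{
    public static (byte[] hash, byte[] signature) Prepare(byte[] packetPayload, byte[] signingKey)
    {
        using var sha = SHA256.Create();
        byte[] hash = sha.ComputeHash(packetPayload);        // per-packet hash value

        using var mac = new HMACSHA256(signingKey);
        byte[] signature = mac.ComputeHash(hash);            // signature over the hash
        return (hash, signature);
    }

    // A forwarding node verifies packet integrity and stores (hash, signature)
    // in its local database as proof of reception.
    public static bool Verify(byte[] packetPayload, byte[] hash, byte[] signature, byte[] signingKey)
    {
        using var sha = SHA256.Create();
        using var mac = new HMACSHA256(signingKey);
        return sha.ComputeHash(packetPayload).SequenceEqual(hash)
            && mac.ComputeHash(hash).SequenceEqual(signature);
    }
}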
(Transmission phase diagram: Key Generation, Data Transmission, Packet Transmission.)
7.1.3 Audit Phase:
When the source issues an attack detection request (ADR), the audit phase starts. The ADR message includes the IDs of the nodes on the route, the source's HLA public key information, the sequence numbers of the packets sent by the source, and the sequence numbers of the packets that were received by the destination. The auditor requests the packet bitmap information from each node in the route by issuing a challenge, and every node generates this bitmap from the information stored in its database. The auditor checks the validity of each bitmap and accepts it if it is valid; otherwise it rejects the bitmap and considers the node malicious. This mechanism only guarantees that a node cannot understate its packet loss, i.e., it cannot claim the reception of a packet that it actually did not receive. It cannot prevent a node from overstating its packet loss by claiming that it did not receive a packet that it actually received. This latter case is prevented by the reputation-based mechanism discussed in the detection phase.
(Audit phase diagram: Wireless Sensor Networks, Data Transmission, Auditor, Truthful Detection.)
7.1.4 Packet Dropping Attack Detection Phase:
After the replies to the challenge issued by the auditor have been audited, the system enters the detection phase. The auditor constructs per-hop bitmaps and uses an autocorrelation function (ACF) to find the correlation among the lost packets. It then computes the difference between the calculated correlation and the correlation expected of the wireless channel; based on this relative difference, it decides whether the packet loss is due to a malicious node or to link errors. When it detects a malicious drop, it considers both ends of the hop as suspicious: either the transmitter did not send the packet or the receiver did not receive it. After identifying these two suspicious nodes, the detector needs to find the actual culprit, for which it checks the reputation values. The Auditor module collects the reputation values of the two suspicious nodes; when a node fails to forward a packet, its reputation drops to the minimum. By checking this, the detector can easily distinguish the attacker.
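For illustration, the autocorrelation of a per-hop loss bitmap at a given lag can be computed as below; the comparison against the channel's expected correlation and the tolerance value are assumptions of this sketch.

using System;
using System.Linq;

// Autocorrelation of a per-hop packet-loss bitmap (1 = lost, 0 = delivered).
// Malicious, selective drops tend to show correlation patterns different from
// those produced by random link errors.
static class LossAutocorrelation
{
    public static double Acf(int[] lossBitmap, int lag)
    {
        double mean = lossBitmap.Average();
        double variance = lossBitmap.Sum(b => (b - mean) * (b - mean));
        if (variance == 0) return 0.0;

        double covariance = 0.0;
        for (int t = 0; t + lag < lossBitmap.Length; t++)
            covariance += (lossBitmap[t] - mean) * (lossBitmap[t + lag] - mean);

        return covariance / variance;
    }

    // Flag the hop when the observed ACF deviates from the channel's expected ACF
    // by more than an assumed tolerance.
    public static bool LooksMalicious(int[] lossBitmap, int lag, double expectedChannelAcf,
                                      double tolerance = 0.2) =>
        Math.Abs(Acf(lossBitmap, lag) - expectedChannelAcf) > tolerance;
}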
(Detection phase diagram: Wireless Sensor Networks, Data Transmission, Attack Detection, Auto Correlation Function, Signature Verification.)
CHAPTER - 8
SYSTEM TESTING
Once the system has been designed, the next step is to convert the design into actual code so as to satisfy the user requirements as expected. If the system is proved to be error free, it can be implemented. The implemented software should be maintained for the prolonged running of the software.
Brute Force
This is the most common and least efficient method for isolating the cause of a software error. Brute force debugging is usually applied when all else fails. Using a "let the computer find the error" philosophy, memory dumps are taken, runtime traces are invoked, and the program is loaded with WRITE (in this case, message box) statements. In the information that is produced, a clue is found leading to the cause of the error.
Back Tracking
This is a fairly common debugging approach that can be used successfully in small programs. Beginning at the site where the symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found. In this project, backtracking is used in the fetch operation.
Cause -Elimination
This approach is manifested by induction and deduction and introduces the concept of binary partitioning. A cause hypothesis is devised, and the error-related data are used to prove or disprove the hypothesis. Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each one. If an initial test indicates that a particular cause hypothesis shows promise, the data are refined in an attempt to isolate the fault.
Each of the debugging approaches can be supplemented with debugging tools. A wide variety of debugging compilers, dynamic debugging aids (traces), automatic test case generators, memory dumps, and cross-reference maps can be applied. However, tools are not a substitute for careful evaluation based on the complete software design document and clear source code.
SYSTEM TESTING
Testing Fundamentals
Software testing is an important element of software quality assurance and represents the ultimate review of specification, design, and coding. The increasing visibility of software, and the cost of software failure, are motivating forces for well-planned, thorough testing.
Though the test phase is often thought of as separate and distinct from the development effort (first develop, then test), testing is a concurrent process that provides valuable information for the development team.
There are at least three options for integrating Project Builder into the test phase:
Testers do not install Project Builder; the developers use Project Builder functionality to compile and source-control the modules to be tested and hand them off to the testers, whose process remains unchanged.
The testers import the same project or projects that the developers use.
A project based on the development project, but customized for the testers, is created for them to import.
A combination of the second and third options works best. Associating the application with a project can be useful during the testing phase as well: we can create actions to automatically run test scripts, or add script types and make them dependent on the modules to test.
Testing objectives
There are several rules that can serve as testing objectives:
Testing is the process of executing a program with the intent of finding an error.
A good test case is one that has a high probability of finding an undiscovered error.
A successful test is one that uncovers an undiscovered error.
Types of testing
Knowing the specified functions that a product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational. This test approach is called black box testing.
Knowing the internal workings of a product, tests can be performed to ensure that the internal operation of the product conforms to the specifications and that all internal components have been adequately exercised. This test approach is called white box testing.
Testing Strategy
The testing process begins with the identification of the strategy. Both the top-down and the bottom-up testing approaches were considered.
Top-down testing assumes that the critical control code and functions will be developed and tested first, followed by the secondary and supporting functions.
Bottom-up testing assumes that the lower the number of incremental changes in modules, the lower the error rate. After a complete review of both strategies, a predominantly top-down approach was chosen; however, the testing strategies are not mutually exclusive, so some bottom-up testing was applied where it made the process more efficient.
Testing Process
This process can be fragmented into a number of modules, which in turn are composed of a number of procedures and functions. Testing is carried out incrementally along with the system development process, and the testing process can be classified into four stages.
Unit testing
Individual components are tested to ensure that they operate correctly. Each component is treated as a separate entity that does not need other components during this process. Unit testing focuses verification efforts on the smallest unit of software design, the module. This is also known as "module testing".
Integration Testing
Integration testing is the systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with the interfaces; integration testing is needed to evaluate overall system performance.
Normally, the integration testing can be carried out in two ways.
o Top - down approach
o Bottom - up approach
Validation Testing
The next step is the validation testing of the proposed system. Once the errors uncovered during integration testing have been corrected, validation testing is carried out.
Validation is the set of activities that ensure that the software functions in a manner that is expected by the client. Hence, validation testing ensures that all functional requirements and all performance requirements are satisfied.
System Testing
It is performed keeping in mind the anticipated interactions that might happen. The system is also validated to see that it meets the functional and non-functional requirements. The recovery routines built into the system are also put through extensive testing.
CHAPTER 9