
MODULE-3

IoT Processing Topologies and Types

Data Format:
The Internet is a vast space where huge quantities and varieties of data are
generated regularly and flow freely. The massive volume of data generated by a
huge number of users is further enhanced by the multiple devices utilized by
most users.
In addition to these data-generating sources, non-human data-generation
sources such as sensor nodes and automated monitoring systems further add to
the data load on the Internet. This huge data volume is composed of a variety of
data such as e-mails, text documents (Word docs, PDFs, and others), social
media posts, videos, audio files, and images.
However, these data can be broadly grouped into two types based on how they
can be accessed and stored: 1) structured data and 2) unstructured data.
i) Structured data
 These are typically text data that have a pre-defined structure. Structured
data are associated with relational database management systems
(RDBMS). These are primarily created by using length-limited data fields
such as phone numbers, social security numbers, and other such
information.
 Whether the data are human or machine generated, these data are easily
searchable both by querying algorithms and by human-generated queries.
Common usage of this type of data is associated with flight or train
reservation systems, banking systems, inventory controls, and other
similar systems.
 Established languages such as Structured Query Language (SQL) are
used for accessing these data in RDBMS. However, in the context of IoT,
structured data holds a minor share of the total generated data over the
Internet.

ii) Unstructured data


 In simple words, all the data on the Internet, which is not structured, is
categorized as unstructured. These data types have no pre-defined
structure and can vary according to applications and data-generating
sources.
 Some of the common examples of human-generated unstructured data
include text, e-mails, videos, images, phone recordings, chats, and others.
Some common examples of machine-generated unstructured data include
sensor data from traffic, buildings, industries, satellite imagery,
surveillance videos, and others.
 This data type does not have a fixed format associated with it, which
makes it very difficult for querying algorithms to perform a look-up.
Non-relational (NoSQL) databases are generally used for this data type.
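The distinction above can be illustrated with a small sketch: structured data fits a fixed schema and is queried with SQL, while unstructured documents have no fixed fields and must be scanned ad hoc, the way a document store would. The table, records, and field names here are hypothetical examples, not from any real deployment.

```python
import sqlite3

# Structured data: a fixed schema queried with SQL (reservation-system style).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE reservations (name TEXT, flight TEXT, seat TEXT)")
db.execute("INSERT INTO reservations VALUES ('Asha', 'AI-202', '14C')")
rows = db.execute(
    "SELECT seat FROM reservations WHERE flight = 'AI-202'").fetchall()
print(rows)  # [('14C',)]

# Unstructured data: free-form documents with no pre-defined structure;
# each record may carry entirely different fields.
documents = [
    {"type": "chat", "text": "Meeting moved to 3 pm"},
    {"type": "sensor", "payload": {"temp_c": 41.2, "node": "n7"}},
]
# An ad hoc look-up must tolerate missing fields on every document.
hot = [d for d in documents
       if d.get("payload", {}).get("temp_c", 0) > 40]
print(len(hot))  # 1
```

The `.get(..., default)` chaining is what makes the unstructured look-up awkward compared with the one-line SQL query, which is the practical cost of having no schema.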
Importance of Processing in IoT

1. Very Time-Critical Data: This category includes data from sources like
flight control systems and healthcare that require immediate decision
support. These systems have a very low threshold for processing latency,
typically measured in milliseconds. The data needs to be processed
instantly to ensure safety and efficiency. Delays are unacceptable, as they
can lead to critical failures or accidents.
2. Time-Critical Data: Data from sources such as vehicles, traffic systems,
and smart homes falls into this category. These systems can tolerate a
slight processing delay, typically a few seconds, but still require timely
action. For example, traffic management or machine systems need quick
responses, but not as urgently as very time-critical data. Delays can affect
user experience but are not immediately dangerous.
3. Normal Data: This data includes information from less time-sensitive
areas like agriculture and environmental monitoring. Processing latency
can range from a few minutes to several hours, allowing for more
flexibility in response times. These systems do not require immediate
action, and delays do not significantly impact their outcomes. Data can be
processed at a more leisurely pace with no urgent time constraints.
4. Processing Near the Source: Very time-critical data must be processed
close to the source to avoid delays that could compromise decision-
making. This is crucial for applications where rapid responses are
necessary, such as in healthcare or industrial control systems. Edge
computing is often used to ensure this level of immediacy. The goal is to
minimize latency and handle data in real-time to support urgent decisions.
5. Remote and Collaborative Processing: Time-critical data can be
processed remotely, using cloud systems or collaborative processing
networks, where slight delays are acceptable. For example, traffic
systems or surveillance cameras can transmit data to central servers for
analysis. Normal data, on the other hand, can be processed without
concern for strict timing requirements. Cloud computing and distributed
systems are often used for handling such data efficiently.
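The three urgency classes and the near-source/remote split described above can be summarized as a routing rule: the tighter the latency budget, the closer to the source the data must be processed. The budget values below are illustrative assumptions chosen to match the text (milliseconds, seconds, minutes-to-hours), not figures from any standard.

```python
# Hypothetical latency budgets per urgency class (illustrative values).
LATENCY_BUDGET_MS = {
    "very_time_critical": 10,        # flight control, healthcare
    "time_critical": 5_000,          # vehicles, traffic, smart homes
    "normal": 3_600_000,             # agriculture, environment monitoring
}

def route(data_class):
    """Map an urgency class to a processing location by its budget."""
    budget = LATENCY_BUDGET_MS[data_class]
    if budget <= 100:
        return "edge"    # process at or near the source
    if budget <= 10_000:
        return "fog"     # nearby remote/collaborative nodes
    return "cloud"       # no strict timing requirement

print(route("very_time_critical"))  # edge
print(route("time_critical"))       # fog
print(route("normal"))              # cloud
```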
Processing Topologies

The identification and intelligent selection of the processing requirements of an
IoT application are among the crucial steps in deciding the architecture of the
deployment. A properly designed IoT architecture results in massive savings in
network bandwidth and conserves significant amounts of overall energy in the
architecture while providing the proper and allowable processing latencies for
the solutions associated with the architecture. We can divide the various
processing solutions into two large topologies:
1) On-site and 2) Off-site.
The off-site processing topology can be further divided into the following:
1) Remote processing and 2) Collaborative processing.

 On-site processing

 On-site processing means the data is processed directly at the source,
which is vital in applications with minimal tolerance for latencies. These
applications, such as healthcare and flight control systems, generate data
at high speeds and require immediate processing to avoid catastrophic
consequences. Latency from processing hardware or network
transmission can result in missed critical data.

 Real-Time Event Detection: In on-site processing, events like a fire can
be detected using sensors, such as temperature sensors connected to a
sensor node. The sensor node processes the event locally and generates
an alert. The data can then be forwarded to a remote infrastructure for
further analysis and storage, balancing immediate response with
ongoing data handling.
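The fire-detection flow above can be sketched as a per-sample handler on the sensor node: act locally the moment a threshold is crossed, and queue the raw reading for off-site storage regardless. The threshold value and the readings are hypothetical.

```python
FIRE_THRESHOLD_C = 60.0  # hypothetical alert temperature

def on_sample(temp_c, alert_log, upload_queue):
    """Process one temperature reading locally on the sensor node."""
    if temp_c >= FIRE_THRESHOLD_C:
        # Immediate on-site action: no network round trip involved.
        alert_log.append(("FIRE_ALERT", temp_c))
    # Every reading is still forwarded for remote analysis and storage.
    upload_queue.append(temp_c)

alerts, queue = [], []
for reading in [24.5, 25.1, 71.3]:
    on_sample(reading, alerts, queue)
print(alerts)      # [('FIRE_ALERT', 71.3)]
print(len(queue))  # 3
```

Note that the alert path never waits on the upload path, which is the point of on-site processing: the latency-critical decision completes before any transmission happens.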
Off-site processing

 Cost-Effectiveness: Off-site processing is significantly cheaper than
on-site processing due to its lower requirements for processing at the source.
In large-scale IoT deployments, having dedicated on-site infrastructure is
often unsustainable, making off-site processing a more practical solution.
Sensor nodes are typically not required to process data urgently, which
allows the cost-saving use of external processing resources.
 Data Collection and Transmission: In off-site processing, sensor nodes
are responsible for collecting and framing data, which is then transmitted
to a remote location (server or cloud) for further processing. Unlike on-
site processing, the sensor nodes do not handle heavy computation, and
data transmission is a crucial part of the process.
 Collaborative Processing: Off-site processing often involves multiple
processing nodes that collaborate to handle the data. This arrangement
helps increase processing power and ensures that if a single node cannot
establish a connection to a remote location, others can contribute to
sharing the processing load, making the system more flexible and
reliable.

Remote processing

 Cost and Energy Efficiency: Remote processing allows for the
offloading of data from numerous sensor nodes to a single powerful
server or cloud platform for processing. This significantly reduces costs
and energy consumption by enabling the reuse of processing resources,
and allows for simpler and smaller processing nodes at the deployment
site.
 Scalability and Network Dependency: This topology ensures the
scalability of IoT solutions without drastically increasing deployment
costs. However, it requires robust network connectivity, as the data is
transmitted from sensor nodes to a remote processor, consuming
bandwidth and relying heavily on network availability for effective
operation.
Collaborative processing
 Cost-Effective and Localized Processing: Collaborative processing
is ideal for areas with limited or no network connectivity, enabling
large-scale IoT deployments without the need for remote
infrastructure. It combines the processing power of nearby nodes,
reducing data transfer latencies and conserving network bandwidth.

 Suitable for Agriculture: This topology is particularly useful for
applications like agriculture, where frequent data processing is
unnecessary. Data is typically logged after long intervals, making
local collaborative processing more efficient. Mesh networks are often
preferred for seamless implementation in such scenarios.
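A minimal sketch of the load-sharing idea behind collaborative processing: a batch of logged readings is partitioned across nearby nodes so that no single node carries the whole computation. The node names and round-robin policy are illustrative assumptions; real deployments would weight shares by each node's capacity.

```python
def share_load(readings, neighbors):
    """Partition a batch of readings round-robin across nearby nodes."""
    shares = {n: [] for n in neighbors}
    for i, r in enumerate(readings):
        # Round-robin assignment: reading i goes to neighbor i mod N.
        shares[neighbors[i % len(neighbors)]].append(r)
    return shares

batch = list(range(10))  # e.g., 10 soil-moisture readings logged over a day
nodes = ["node_a", "node_b", "node_c"]
shares = share_load(batch, nodes)
print({n: len(v) for n, v in shares.items()})
# {'node_a': 4, 'node_b': 3, 'node_c': 3}
```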
IoT Device Design and Selection Considerations

The main consideration in minutely defining an IoT solution is the selection of
the processor for developing the sensing solution (i.e., the sensor node). This
selection is governed by many parameters that affect the usability, design, and
affordability of the designed IoT sensing and processing solution.

1. Size: Larger sensor nodes tend to have higher energy consumption, which
makes them less suitable for compact IoT applications like wearables.
Smaller form factors are more efficient and ideal for applications
requiring portability and low power consumption. For example,
wearables rely on miniaturized, energy-efficient nodes. The size of a
sensor node directly impacts energy efficiency and suitability for various
IoT applications.

2. Energy: Devices with high energy demands require frequent battery
replacements, limiting their long-term sustainability, especially for IoT
applications in remote areas. Efficient energy use is critical for devices
that cannot be easily maintained. Low-energy IoT devices extend
operational lifespan and reduce maintenance costs. Energy consumption
is a key factor in determining the viability of IoT devices in various
environments.

3. Cost: The cost of sensors and processors directly affects the scalability
and affordability of IoT deployments. Lower costs allow for denser
deployments, which are especially important in large-scale IoT networks.
For instance, affordable gas or fire detection systems enable users to
deploy more sensors without breaking the budget. Reducing hardware
costs makes IoT solutions more accessible to a broader range of users.

4. Memory: IoT devices with more memory can perform tasks like local
data processing, filtering, and storage. Higher memory enables more
advanced features but increases the device cost. Devices with limited
memory may struggle with complex tasks, affecting performance. The
balance between memory capacity and cost is crucial in determining the
functionality of an IoT device.

5. Processing Power: IoT devices with higher processing power can handle
complex data, such as video and image processing. Simpler applications,
like environmental sensing, require less processing power. The
processing power needed depends on the complexity of the task the IoT
device is performing. Devices with lower processing power are more
energy-efficient but may have limited functionality.
6. I/O Rating: The input/output (I/O) rating of IoT devices determines the
complexity of the circuit design, energy consumption, and compatibility
with sensors. Newer processors with lower I/O voltages may require
additional circuitry to interface with older sensors. The I/O rating affects
how easily a sensor can be integrated into an IoT system. Proper I/O
voltage is essential for reducing power consumption and simplifying
system design.

7. Add-ons: Add-ons like analog-to-digital converters (ADCs), built-in
clock circuits, and wireless access capabilities enhance the functionality
and versatility of IoT devices. These features simplify the development
process by providing essential components that are already integrated.
IoT devices with more add-ons can handle more complex tasks and are
easier to develop for various applications. Having built-in options reduces
the time and effort required for hardware integration.
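The seven parameters above can be treated as constraints when shortlisting a processor board for a sensor node. The candidate boards and their numbers below are entirely hypothetical; the sketch only shows how size, energy, cost, and memory bounds filter the candidates.

```python
# Hypothetical candidate boards; fields mirror the selection parameters above.
candidates = [
    {"name": "board_a", "size_mm": 20, "energy_mw": 50,  "cost": 4,  "ram_kb": 256},
    {"name": "board_b", "size_mm": 60, "energy_mw": 900, "cost": 35, "ram_kb": 8192},
    {"name": "board_c", "size_mm": 25, "energy_mw": 120, "cost": 9,  "ram_kb": 512},
]

def select(boards, max_size, max_energy, max_cost, min_ram):
    """Keep boards meeting every constraint, cheapest first."""
    ok = [b for b in boards
          if b["size_mm"] <= max_size and b["energy_mw"] <= max_energy
          and b["cost"] <= max_cost and b["ram_kb"] >= min_ram]
    return sorted(ok, key=lambda b: b["cost"])

# A wearable-style profile: small, low-power, cheap, modest memory.
picks = select(candidates, max_size=30, max_energy=200, max_cost=10, min_ram=256)
print([b["name"] for b in picks])  # ['board_a', 'board_c']
```

In practice these parameters trade off against one another (more memory and processing power usually mean more energy and cost), so the hard-constraint filter is only a first pass before weighing those trade-offs.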

Processing Offloading

Offload Location: In IoT systems, processing can be offloaded to different
layers of the network, such as the edge, fog, or cloud. Edge devices handle
localized processing, reducing latency and network congestion. Fog processing
sits between the edge and the cloud and can handle data within a specific
geographic area, while cloud processing is reserved for more extensive
computations but incurs higher bandwidth costs.

Offload Decision Making: Deciding where to offload the processing depends
on factors like data volume, latency, and the required processing power. When
latency is critical, offloading may occur at the edge or fog level for faster
processing. For less time-sensitive tasks, cloud offloading can be used, offering
greater computational power but at the cost of increased latency and bandwidth
usage.

Offloading Considerations: The decision to offload data should consider
factors like energy consumption, available network bandwidth, and system
capabilities. Offloading too much data to distant servers increases latency, while
local processing on resource-constrained devices might not provide sufficient
processing power. Balancing these factors is key to efficient offloading.

Cost of Network Bandwidth: Sending data to remote locations like the cloud
increases network bandwidth usage, which can be costly. Fog or edge
processing can mitigate this by handling data locally, reducing the need for
high-bandwidth connections. Therefore, offloading to closer layers in the
network can lower costs, especially in large-scale IoT deployments.

Scalability and Efficiency: Offloading allows IoT systems to scale efficiently
by distributing processing tasks across different layers of the network. Fog and
edge processing help maintain the simplicity and low cost of local devices,
while offloading heavier computations to the cloud enables the system to handle
larger datasets and more complex tasks without overloading local devices.

Offload Location

The choice of offload location decides the applicability, cost, and sustainability
of the IoT application and deployment. We distinguish the offload location into
four types:

• Edge: Offloading processing to the edge implies that the data processing is
facilitated at or near the source of data generation itself. Offloading
to the edge is done to achieve aggregation, manipulation, bandwidth reduction,
and other data operations directly on an IoT device.

• Fog: Fog computing is a decentralized computing infrastructure that is utilized
to conserve network bandwidth, reduce latencies, restrict the amount of data
unnecessarily flowing through the Internet, and enable rapid mobility support
for IoT devices. The data, computing, storage, and applications are shifted to a
place between the data source and the cloud, resulting in significantly reduced
latencies and network bandwidth usage.

• Remote Server: A simple remote server with good processing power may be
used with IoT-based applications to offload the processing from resource
constrained IoT devices. Rapid scalability may be an issue with remote servers,
and they may be costlier and hard to maintain in comparison to solutions such
as the cloud.

• Cloud: Cloud computing is a configurable computer system that provides
access to configurable resources, platforms, and high-level services through a
shared pool hosted remotely. A cloud is provisioned for processing offloading
so that processing resources can be rapidly provisioned with minimal effort over
the Internet and accessed globally. The cloud enables massive scalability of
solutions, as the resources allocated to a user or solution can be enhanced in an
on-demand manner, without the user having to go through the pains of
acquiring and configuring new and costly hardware.

Offload decision making

The choice of where to offload and how much to offload is one of the major
deciding factors in the deployment of an off-site processing topology-based
IoT architecture. The decision making is generally addressed by
considering the data generation rate, network bandwidth, the criticality of
the application, the processing resources available at the offload site, and other
factors. Some of these approaches are as follows.

Naive Approach: This is a straightforward, rule-based strategy where data is
offloaded to the closest location based on predefined criteria, without much
decision-making involved. It is simple to implement, but not ideal for dense or
complex IoT setups with high data generation rates or complex data types. The
approach is often used in systems with low complexity, where rules can be
easily defined. For more complex scenarios, it becomes inefficient and does not
handle high volumes of data well. Statistical measures are typically used to
define the offload rules in this approach.
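A naive offload policy of the kind described above can be written as a couple of fixed rules with no learning or negotiation involved. The payload-size cutoff and tier names here are invented for illustration; the point is only that the decision is a static lookup, which is why the approach breaks down for dense, high-rate deployments.

```python
def naive_offload(payload_bytes, link_up):
    """Fixed, predefined rules: small payloads stay local; larger ones
    go to the nearest reachable tier. No adaptation, no history."""
    if payload_bytes <= 512:
        return "edge"                      # rule 1: tiny data, keep local
    return "fog" if link_up else "edge"    # rule 2: nearest reachable tier

print(naive_offload(128, link_up=True))    # edge
print(naive_offload(4096, link_up=True))   # fog
print(naive_offload(4096, link_up=False))  # edge (fallback: link is down)
```

A bargaining-based approach would instead negotiate the cutoffs against bandwidth and throughput at run time, and a learning-based approach would fit them from historical data flows, at the cost of memory and processing during decision making.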
Bargaining-based Approach: This approach is focused on improving network
efficiency and service quality by optimizing multiple parameters such as
bandwidth, latency, and throughput. It aims to balance the qualities of different
parameters to enhance the overall system performance, rather than optimizing
for individual devices. Some parameters may be reduced to increase others,
with the goal of improving the collective QoS across the system. Game theory is
a common technique used in this approach to negotiate between different
factors. It avoids reliance on historical data for decision-making.
Learning-based Approach: Unlike bargaining-based approaches, the learning-
based approach relies on historical data and trends to make decisions. It
optimizes Quality of Service (QoS) by learning from past data flows and
improving the system's behavior over time. This approach adapts and refines
strategies based on the collected data to improve performance and system
efficiency. However, it requires significant memory and processing power
during decision-making stages. Machine learning is often used in this approach
to enhance the decision-making process and predict the best outcomes based on
prior trends.

Offloading considerations
There are a few offloading parameters which need to be considered while
deciding upon the offloading type to choose. These considerations typically
arise from the nature of the IoT application and the hardware being used to
interact with the application. Some of these parameters are as follows.

• Bandwidth: The maximum amount of data that can be simultaneously
transmitted over the network between two points is the bandwidth of that
network. The bandwidth of a wired or wireless network is also considered to be
its data-carrying capacity and is often used to describe the data rate of that
network.
• Latency: It is the time delay incurred between the start and completion of an
operation. In the present context, latency can be due to the network (network
latency) or the processor (processing latency). In either case, latency arises due
to the physical limitations of the infrastructure, which is associated with an
operation. The operation can be data transfer over a network or the processing
of data at a processor.

• Criticality: It defines the importance of a task being pursued by an IoT
application. The more critical a task is, the lower the latency expected from the
IoT solution. For example, detection of fires using an IoT solution has higher
criticality than detection of agricultural field parameters. The former requires a
response time in the tune of milliseconds, whereas the latter can be addressed
within hours or even days.
• Resources: It signifies the actual capabilities of an offload location. These
capabilities may be the processing power, the suite of analytical algorithms, and
others. For example, it is futile and wasteful to allocate processing resources
reserved for real-time multimedia processing (which are highly energy-intensive
and can process and analyze huge volumes of data in a short duration) to scalar
data (which can be addressed using nominal resources without wasting much
energy).
• Data volume: The amount of data generated by a source or sources that can be
simultaneously handled by the offload location is referred to as its data volume
handling capacity. Typically, for large and dense IoT deployments, the offload
location should be robust enough to address the processing issues related to
massive data volumes.
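The four parameters above can be combined into a simple offload-site chooser: filter out locations whose latency exceeds what the task's criticality allows or whose capacity cannot hold the data volume, then take the one with the lowest bandwidth cost. All profile numbers and the criticality-to-latency mapping are illustrative assumptions, not measured values.

```python
# Hypothetical per-location profiles for the parameters discussed above.
locations = {
    "edge":  {"latency_ms": 5,   "resources": 1,  "volume_gb": 0.5,  "bw_cost": 0.0},
    "fog":   {"latency_ms": 40,  "resources": 4,  "volume_gb": 10,   "bw_cost": 0.2},
    "cloud": {"latency_ms": 250, "resources": 10, "volume_gb": 1000, "bw_cost": 1.0},
}

def pick(criticality, data_gb):
    """Choose the cheapest location whose latency suits the task's
    criticality (0..1, higher = more critical) and whose data-volume
    capacity covers the load. Returns None if nothing fits."""
    # More critical tasks tolerate less latency (assumed cutoffs).
    max_latency = 10 if criticality > 0.8 else 100 if criticality > 0.4 else 10_000
    fit = [(p["bw_cost"], name) for name, p in locations.items()
           if p["latency_ms"] <= max_latency and p["volume_gb"] >= data_gb]
    return min(fit)[1] if fit else None

print(pick(criticality=0.9, data_gb=0.1))  # edge  (fire-detection style task)
print(pick(criticality=0.5, data_gb=5))    # fog   (traffic-style task)
print(pick(criticality=0.1, data_gb=500))  # cloud (bulk agricultural logs)
```

This matches the examples in the text: the fire-like task lands at the edge despite its tiny data volume, while the agricultural bulk load tolerates cloud latency in exchange for capacity.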
