Distributed System

The document is a textbook on Distributed Systems by Andrew S. Tanenbaum and Maarten Van Steen, covering various aspects such as architectures, processes, communication, and fault tolerance. It defines distributed systems as collections of independent computers that appear as a single coherent system to users, and discusses types of systems including centralized, decentralized, and distributed. Key goals of distributed systems include resource accessibility, transparency, openness, and scalability, with various techniques for achieving these goals outlined throughout the content.

DISTRIBUTED

SYSTEMS
TEXT-BOOK
DISTRIBUTED SYSTEMS, PRINCIPLES AND PARADIGMS, 2ND EDITION
BY ANDREW S. TANENBAUM AND MAARTEN VAN STEEN, PEARSON
EDUCATION, (ISBN-13: 978-0132392273), 2013
CONTENT

• Unit 1: Introduction
• Unit 2: Architectures
• Unit 4: Processes
• Unit 5: Communication
• Unit 6: Naming
• Unit 7: Coordination
• Unit 8: Replication
• Unit 9: Fault tolerance
• Unit 10: Security
OUTLINE

• Definition of a Distributed System


• Goals of a Distributed System
• Types of Distributed Systems
1. CENTRALIZED SYSTEMS:

• Centralized systems use a client/server architecture in which one or more client
nodes are directly connected to a central server. This is the most commonly used
type of system in many organisations: a client sends a request to a company
server and receives a response.
2.DECENTRALIZED SYSTEMS:

• In decentralized systems, every node makes its own decision. The final behaviour of
the system is the aggregate of the decisions of the individual nodes; there is no single
entity that receives and responds to requests.
3. DISTRIBUTED SYSTEMS:

• A distributed system, also known as distributed computing, is a system with multiple


components located on different machines that communicate and coordinate actions in
order to appear as a single coherent system to the end-user.
CENTRALIZED VS DISTRIBUTED SYSTEMS

• Centralized Systems
– Centralized systems have non-autonomous components
– Centralized systems are often built using homogeneous technology
– Multiple users share the resources of a centralized system at all times
– Centralized systems have a single point of control and of failure
• Distributed Systems
– Distributed systems have autonomous components
– Distributed systems may be built using heterogeneous technology
– Distributed system components may be used exclusively
– Distributed systems are executed in concurrent processes
– Distributed systems have multiple points of failure
INTRODUCTION

Definition:
– A Distributed System is a collection of independent computers that appears to its users as a
single coherent system.
Key Idea:
• “Multiple machines working together to achieve a common goal.”
Visual Elements:
• Diagram showing multiple computers connected via a network with a label “One
System to the User”
• Icons: Cloud, Server, Network
WHAT IS A DISTRIBUTED SYSTEM

• Distributed System Definition:


– A distributed system is a collection of autonomous hosts that are connected through a
computer network.
– Each host executes components and operates a distribution middleware.
– Middleware enables the components to coordinate their activities.
– Users perceive the system as a single, integrated computing facility.
WHAT IS A DISTRIBUTED SYSTEM (CONT)

• A collection of independent computers that appears to its users as a single coherent


system.
• Features:
– No shared memory – message-based communication
– Each runs its own local OS
– Heterogeneity
• Ideal: to present a single-system image:
– The distributed system “looks like” a single computer rather than a collection of separate
computers.
DECENTRALIZED ALGORITHMS

• No machine has complete information about the system state


• Machines make decisions based only on local information
• Failure of a single machine doesn’t ruin the algorithm
• There is no assumption that a global clock exists.
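These properties can be illustrated with a toy gossip-averaging algorithm (a hypothetical sketch, not taken from the textbook): each node repeatedly averages its value with one randomly chosen neighbor, using only local information, yet all nodes converge toward the global mean with no coordinator and no global clock.

```python
import random

def gossip_average(values, neighbors, rounds=200, seed=42):
    """Each node repeatedly averages its value with one random neighbor.
    No node ever sees global state, yet values converge toward the mean."""
    rng = random.Random(seed)
    vals = dict(values)
    for _ in range(rounds):
        node = rng.choice(list(vals))        # a node 'wakes up'
        peer = rng.choice(neighbors[node])   # picks a neighbor: local info only
        avg = (vals[node] + vals[peer]) / 2  # pairwise averaging step
        vals[node] = vals[peer] = avg
    return vals

# A small ring of four nodes holding different readings (illustrative data).
values = {"a": 10.0, "b": 20.0, "c": 30.0, "d": 40.0}
neighbors = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
result = gossip_average(values, neighbors)
# All nodes end up near the true mean (25.0) without any central entity.
```

Note that each pairwise step conserves the sum of the values, so a failed node only slows convergence rather than corrupting the result.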
DISTRIBUTED SYSTEM COMPONENTS:
MIDDLEWARE, APPLICATIONS, AND OPERATING SYSTEMS
DEFINITION OF A DISTRIBUTED
SYSTEM

Figure 1-1. A distributed system organized as middleware. The middleware layer
runs on all machines, and offers a uniform interface to the system.
MIDDLEWARE IN
DISTRIBUTED SYSTEMS
• Definition:
Middleware is software that connects different distributed components and
provides common services.
• Purpose:
• Simplifies communication between heterogeneous systems
• Provides transparency (location, replication, access)
• Handles data conversion, message passing, and security
MIDDLEWARE EXAMPLES

• CORBA (Common Object Request Broker Architecture)


• DCOM (Distributed Component Object Model) – largely superseded by .NET
• Sun’s ONC RPC (Remote Procedure Call)
• Java RMI (Remote Method Invocation)
• SOAP (Simple Object Access Protocol)
• Web Services (SOAP, REST APIs)
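As a minimal sketch of RPC-style middleware, Python's standard `xmlrpc` module can expose a function over the network; the client-side proxy hides marshalling and transport, so the remote call reads like a local one (the `add` function and the loopback address here are illustrative, not part of any of the systems listed above).

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

# Server side: register a function behind a standard wire protocol.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]               # ephemeral port chosen by the OS
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy hides marshalling and transport, so the call
# looks like a local method invocation (access transparency).
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)                      # travels over HTTP/XML underneath
server.shutdown()
```

The same pattern, with richer IDLs and binary encodings, underlies CORBA, ONC RPC, and Java RMI.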
DISTRIBUTED APPLICATIONS

• Definition:
Applications that run on multiple networked computers to achieve a common goal.
• Characteristics:
• Multi-tiered architecture (Client-Server, N-tier)
• Can handle large-scale user requests simultaneously
• Often data-driven with distributed databases
• Examples:
• Online banking systems
• E-commerce platforms (Amazon, Flipkart)
• Social networking apps (Facebook, Twitter)
• Cloud services (Dropbox, Google Drive)
DISTRIBUTED OPERATING
SYSTEMS (DOS)
• Definition:
A DOS manages resources across multiple computers as if they were a single system.
• Features:
• Process Management: Processes can execute on any node.
• Resource Management: Handles CPU, memory, I/O across nodes.
• Communication Support: Provides message passing, RPC, and synchronization.
• Fault Tolerance: Detects and recovers from node failures.
• Examples:
• Amoeba OS – Transparent distributed OS
• Mach OS – Microkernel supporting distributed processes
• Plan 9 – Resource sharing across networked machines
GOALS FOR DISTRIBUTION

• Resource accessibility
– For sharing and enhanced performance
• Distribution transparency
– For easier use
• Openness
– To support interoperability, portability, extensibility
• Scalability
– With respect to size (number of users), geographic
distribution, administrative domains
GOAL 1 – RESOURCE ACCESSIBILITY

• Support user access to remote resources (printers, data files, web pages, CPU
cycles) and the fair sharing of the resources
• Economics of sharing expensive resources
• Performance enhancement – due to multiple processors; also due to ease of
collaboration and info exchange – access to remote services
– Groupware: tools to support collaboration
• Resource sharing introduces security problems.
GOAL 2 – DISTRIBUTION
TRANSPARENCY
• Software hides some of the details of the distribution of
system resources.
– Makes the system more user friendly.

• A distributed system that appears to its users & applications


to be a single computer system is said to be transparent.
– Users & apps should be able to access remote resources in the same way they
access local resources.

• Transparency has several dimensions.


TYPES OF TRANSPARENCY

• Access – Hide differences in data representation & resource access (enables interoperability)
• Location – Hide the location of a resource (can use a resource without knowing its location)
• Migration – Hide the possibility that a system may change the location of a resource (no effect on access)
• Replication – Hide the possibility that multiple copies of the resource exist (for reliability and/or availability)
• Concurrency – Hide the possibility that the resource may be shared concurrently
• Failure – Hide failure and recovery of the resource (how does one differentiate between slow and failed?)
• Relocation – Hide that a resource may be moved during use

Figure 1-2. Different forms of transparency in a distributed system (ISO, 1995)


GOAL 3 - OPENNESS
• An open distributed system “…offers services according to standard rules
that describe the syntax and semantics of those services.” In other words,
the interfaces to the system are clearly specified and freely available.

• Interface Definition/Description Languages (IDL): used to describe the


interfaces between software components, usually in a distributed system
– Definitions are language & machine independent
– Support communication between systems using different OS/programming
languages; e.g. a C++ program running on Windows communicates with a Java
program running on UNIX
– Communication is usually RPC-based.
Open Systems Support …

• Interoperability: the ability of two different systems or applications


to work together
– A process that needs a service should be able to talk to any process
that provides the service.
– Multiple implementations of the same service may be provided, as long
as the interface is maintained
• Portability: an application designed to run on one distributed
system can run on another system which implements the same
interface.
• Extensibility: Easy to add new components, features
GOAL 4 - SCALABILITY

• Dimensions that may scale:


– With respect to size
– With respect to geographical distribution
– With respect to the number of administrative organizations spanned
• A scalable system still performs well as it scales up along any of the three dimensions.
SIZE SCALABILITY
• Scalability is negatively affected when the system is
based on
– Centralized server: one for all users
– Centralized data: a single data base for all users
– Centralized algorithms: one site collects all information,
processes it, distributes the results to all sites.
• Complete knowledge: good
• Time and network traffic: bad
GEOGRAPHIC SCALABILITY

• Early distributed systems ran on LANs, relied on synchronous


communication.
– May be too slow for wide-area networks
– Wide-area communication is unreliable, point-to-point;
– Unpredictable time delays may even affect correctness
• LAN communication is based on broadcast.
– Consider how this affects an attempt to locate a particular kind of service
• Centralized components + wide-area communication: waste of
network bandwidth
SCALABILITY - ADMINISTRATIVE

• Different domains may have different policies about resource usage, management,
security, etc.
• Trust often stops at administrative boundaries
– Requires protection from malicious attacks
SCALING TECHNIQUES

• Scalability affects performance more than anything else.


• Three techniques to improve scalability:
– Hiding communication latencies
– Distribution
– Replication
HIDING COMMUNICATION DELAYS

• Structure applications to use asynchronous communication


(no blocking for replies)
– While waiting for one answer, do something else; e.g., create one thread
to wait for the reply and let other threads continue to process or
schedule another task
• Download part of the computation to the requesting platform to
speed up processing
– Filling in forms to access a DB: send a separate message for each field,
or download form/code and submit finished version.
– i.e., shorten the wait times
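The first technique can be sketched with a worker thread standing in for a slow remote call (the `remote_lookup` function and its 0.2-second delay are invented for illustration): the client fires the request asynchronously and keeps doing useful work until the reply arrives.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def remote_lookup(key):
    """Stand-in for a slow remote call (e.g., a wide-area database query)."""
    time.sleep(0.2)          # simulated network + server latency
    return key.upper()

local_work_done = 0
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(remote_lookup, "answer")   # fire the request...
    while not future.done():                        # ...and keep busy meanwhile
        local_work_done += 1                        # other local processing
    reply = future.result()                         # reply is ready: no blocking
```

The thread that waits for the reply is exactly the "create one thread to wait" pattern above; the main thread's loop represents the other work that hides the latency.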
SCALING TECHNIQUES

Figure 1-4. The difference between letting (a) a server or (b) a client
check forms as they are being filled.
DISTRIBUTION
• Instead of one centralized service, divide into parts and distribute
geographically
• Each part handles one aspect of the job
– Example: DNS namespace is organized as a tree of domains; each
domain is divided into zones; names in each zone are handled by a
different name server
– WWW consists of many (millions?) of servers
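The partitioning idea can be sketched with a toy resolver loosely modeled on DNS referrals (the zone tables and names below are hypothetical): each zone's server knows only its own records plus referrals to child zones, so no single server holds the whole namespace.

```python
# Hypothetical zone tables: each zone maps names it is responsible for
# either to a final record or to a child zone handled by another server.
zones = {
    ".":           {"com": "com-server"},
    "com":         {"example.com": "example-server"},
    "example.com": {"www.example.com": "192.0.2.10"},
}

def resolve(name, zone="."):
    """Walk referrals from the root until some zone holds the final record."""
    table = zones[zone]
    for entry, value in table.items():
        if name == entry:
            return value                    # authoritative answer in this zone
        if name.endswith(entry):
            return resolve(name, entry)     # follow the referral downward
    raise KeyError(name)

addr = resolve("www.example.com")           # root -> com -> example.com
```

Each recursive step corresponds to asking a different, geographically separate name server, which is what lets the namespace scale.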
SCALING TECHNIQUES (2)

Figure 1-5. An example of dividing the DNS name space into zones.
THIRD SCALING TECHNIQUE - REPLICATION

• Replication: multiple identical copies of something


– Replicated objects may also be distributed, but aren’t necessarily.
• Replication
– Increases availability
– Improves performance through load balancing
– May avoid latency by improving proximity of resource
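Load balancing over identical replicas can be sketched with a simple round-robin policy (the `ReplicatedService` class and replica names are invented for illustration): because every copy can answer every request, work spreads evenly and no single replica is a bottleneck.

```python
import itertools

class ReplicatedService:
    """Round-robin dispatch over identical replicas: any copy can answer
    any request, so load spreads across the copies."""
    def __init__(self, replicas):
        self._cycle = itertools.cycle(replicas)

    def handle(self, request):
        replica = next(self._cycle)          # pick the next copy in turn
        return f"{replica}:{request}"        # tag the answer with its replica

svc = ReplicatedService(["replica-1", "replica-2", "replica-3"])
answers = [svc.handle(f"req{i}") for i in range(6)]
# Requests alternate across the three copies: 1, 2, 3, 1, 2, 3.
```

Real balancers add health checks and weighting, but the availability and performance benefits listed above come from this same basic indirection.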
CACHING
• Caching is a form of replication
– Normally creates a (temporary) replica of something closer to the
user
• Replication is often more permanent
• User (client system) decides to cache, server system decides to
replicate
• Both lead to consistency problems
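A toy client-side cache with a time-to-live illustrates both points: the client decides what to replicate locally, and the TTL bounds how stale a cached copy may become, which is exactly the consistency problem noted above (the `TTLCache` class and the fetch function are hypothetical).

```python
import time

class TTLCache:
    """Client-side cache: a temporary local replica of remote data.
    Entries expire after `ttl` seconds, bounding staleness."""
    def __init__(self, ttl):
        self.ttl = ttl
        self._store = {}

    def get(self, key, fetch):
        entry = self._store.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0], True                  # cache hit: no remote call
        value = fetch(key)                         # cache miss: go to the server
        self._store[key] = (value, time.monotonic())
        return value, False

calls = []
def fetch_from_server(key):
    calls.append(key)                              # count real remote fetches
    return key * 2                                 # pretend server response

cache = TTLCache(ttl=60)
v1, hit1 = cache.get("ab", fetch_from_server)      # miss: fetches remotely
v2, hit2 = cache.get("ab", fetch_from_server)      # hit: served locally
```

A longer TTL means fewer remote calls but a wider window in which the server's copy and the cached copy can disagree.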
ISSUES/PITFALLS OF DISTRIBUTION

• Requirement for advanced software to realize the potential


benefits.
• Security and privacy concerns regarding network communication
• Replication of data and services provides fault tolerance and
availability, but at a cost.
• Network reliability, security, heterogeneity, topology
• Latency and bandwidth
• Administrative domains
TYPES OF DISTRIBUTED SYSTEMS

• Distributed Computing Systems


– Clusters
– Grids
– Clouds
• Distributed Information Systems
– Transaction Processing Systems
– Enterprise Application Integration
• Distributed Embedded Systems
– Home systems
– Health care systems
– Sensor networks
DISTRIBUTED
COMPUTING
SYSTEMS
CLUSTERS

• Definition:
A cluster is a group of interconnected computers (nodes) that work together as a single
unified system to perform computational tasks. Clusters provide high performance,
reliability, and scalability compared to a single computer.

• Types of Cluster Models:


• Beowulf Cluster: High-performance parallel computing (MPI/PVM)
• MOSIX Cluster: Process migration and load balancing
• Shared-Nothing / Shared-Disk / Shared-Memory
• Advantages: High availability, load balancing, fault tolerance
• Examples: Web server clusters, Hadoop clusters
TYPES OF CLUSTER MODELS:
BEOWULF CLUSTER
• Purpose: Designed for high-performance parallel computing using inexpensive,
commodity hardware.
• Communication: Nodes communicate using Message Passing Interface (MPI) or
Parallel Virtual Machine (PVM).
• Characteristics:
• Homogeneous nodes (same hardware & OS)
• No centralized control; tasks distributed across nodes
• Use Cases: Scientific simulations, weather forecasting, molecular modeling
CLUSTERS – BEOWULF MODEL
• Linux-based
• Master-slave paradigm
– One processor is the master; allocates tasks to other processors,
maintains batch queue of submitted jobs, handles interface to users
– Master has libraries to handle message-based communication or
other features (the middleware).
TYPES OF CLUSTER MODELS:
MOSIX CLUSTER
• Purpose: Provides automatic process migration and dynamic load balancing in
Linux clusters.
• Single System Image (SSI): All nodes appear as one system to users.
• Characteristics:
• Processes can move between nodes automatically for optimal resource use
• Supports heterogeneous hardware
• Use Cases: Research labs, unpredictable workloads
CLUSTERS – MOSIX MODEL

• MOSIX attempts to provide a single-system image of a cluster, meaning that to a


process a cluster computer offers the ultimate distribution transparency by appearing
to be a single computer.

• Provides a symmetric, rather than hierarchical paradigm


– High degree of distribution transparency (single system image)
– Processes can migrate between nodes dynamically and preemptively (more about this
later.) Migration is automatic
• Used to manage Linux clusters
TYPES OF CLUSTER MODELS:
SHARED-NOTHING / SHARED-
DISK / SHARED-MEMORY
CLUSTERS
1. Shared-Nothing: Each node has its own CPU, memory, and storage.
– Pros: Highly scalable, no single point of failure
– Example: Hadoop cluster
2. Shared-Disk: Nodes have independent CPUs & memory but share a common disk.
– Pros: Easier data consistency
– Example: Oracle RAC
3. Shared-Memory: Nodes share the same memory space.
– Pros: Fast access for small-scale systems
– Cons: Difficult to scale beyond a few nodes
– Example: Symmetric Multiprocessing (SMP) systems
CLUSTER COMPUTING SYSTEMS
• Cluster computing systems became popular when the price/performance ratio of
personal computers and workstations improved.
• At a certain point, it became financially and technically attractive to build a
supercomputer using off-the-shelf technology by simply hooking up a collection of
relatively simple computers in a high-speed network.
• In virtually all cases, cluster computing is used for parallel programming in which a
single (compute-intensive) program is run in parallel on multiple machines.
ADVANTAGES OF CLUSTER
COMPUTING:
• High Availability: If one node fails, others take over tasks.
• Load Balancing: Workloads are distributed across nodes for efficiency.
• Fault Tolerance: System continues operating even if some nodes fail.
• Cost-Effective: Built using commodity hardware (especially Beowulf clusters).
EXAMPLES:

• Web Server Clusters: Multiple web servers handle requests together.


• Hadoop Clusters: Distributed storage and processing for Big Data.
• Scientific Computing Clusters: Supercomputing tasks divided across nodes.
CLUSTER COMPUTING
• A collection of similar processors (PCs, workstations) running the
same operating system, connected by a high-speed LAN.
• Parallel computing capabilities using inexpensive PC hardware
• Replace big parallel computers (MPPs)
GRIDS
• Similar to clusters but processors are more
loosely coupled, tend to be heterogeneous, and
are not all in a central location.
• Can handle workloads similar to those on
supercomputers, but grid computers connect over
a network (Internet) and supercomputers’ CPUs
connect to a high-speed internal bus/network
• Problems are broken up into parts and distributed
across multiple computers in the grid – less
communication between parts than in clusters.
GRID COMPUTING

• Definition:
Grid computing is a distributed computing model that connects geographically
dispersed and heterogeneous resources (computers, storage, networks) to work
together on large-scale computation tasks and achieve a common goal.
• It is often referred to as a “computational grid”, similar to how the power grid
distributes electricity.
KEY CHARACTERISTICS:

• Heterogeneous Nodes
• Includes systems with different hardware, operating systems, and configurations.
• Enables integration of diverse computing resources.
• Geographical Distribution
• Resources can be located in different organizations or even countries, connected through the internet.
• Resource Sharing Across Organizations
• Multiple institutions share computing power and storage under agreed policies.
• Supports collaboration between research centers, universities, enterprises.
• High-Throughput Computing
• Focused on maximizing the number of tasks completed over time rather than minimizing single job completion time.
• Suitable for scientific experiments and large datasets.
• Virtual Organization (VO)
• A concept where participants from different organizations form a virtual team to share resources.
EXAMPLES:

• SETI@home: Volunteers donate computing resources to analyze radio signals for


extraterrestrial intelligence.
• LHC Computing Grid: Used by CERN for processing petabytes of data generated
by the Large Hadron Collider experiments.
• EGEE (Enabling Grids for E-sciencE): European project for scientific collaboration.
GRID COMPUTING SYSTEMS
• Modeled loosely on the electrical grid.
• Highly heterogeneous with respect to hardware, software, networks, security
policies, etc.
• Grids support virtual organizations: a collaboration of users who pool
resources (servers, storage, databases) and share them
• Grid software is concerned with managing sharing across administrative
domains.
A PROPOSED ARCHITECTURE FOR GRID SYSTEMS
• Fabric layer: interfaces to local resources at a
specific site
• Connectivity layer: protocols to support
usage of multiple resources for a single
application; e.g., access a remote resource or
transfer data between resources; and
protocols to provide security
• Resource layer manages a single resource,
using functions supplied by the connectivity
layer
• Collective layer: resource discovery,
allocation, scheduling, etc.
• Applications: use the grid resources
• The collective, connectivity and resource layers together form the
middleware layer for a grid

Figure 1-7. A layered architecture for grid computing systems
OGSA – ANOTHER GRID
ARCHITECTURE

• Open Grid Services Architecture (OGSA) is a service-oriented architecture


– Sites that offer resources to share do so by offering specific Web services.
• The architecture of the OGSA model is more complex than the previous layered model.
CLOUD COMPUTING

• Definition:
Cloud computing is the on-demand delivery of computing resources (servers,
storage, databases, networking, software, analytics) over the internet, with
pay-as-you-go pricing.
KEY CHARACTERISTICS:

• Virtualization
• Physical hardware is abstracted into virtual resources for efficient utilization.
• Enables multiple virtual machines (VMs) on the same physical server.
• Elasticity & Scalability
• Elasticity: Resources can be scaled up or down automatically based on demand.
• Scalability: Allows adding more resources to handle growing workloads.
• On-Demand Self-Service
• Users can provision resources themselves without human intervention.
• Pay-Per-Use Model
• Customers pay only for the resources they consume.
• Broad Network Access
• Services are available over the internet on multiple devices.
SERVICE MODELS:

• IaaS (Infrastructure as a Service):


• Virtual servers, storage, networking
• Example: AWS EC2, Google Compute Engine
• PaaS (Platform as a Service):
• Development platforms and environments
• Example: Google App Engine, Microsoft Azure
• SaaS (Software as a Service):
• Ready-to-use software delivered over the web
• Example: Gmail, Salesforce
EXAMPLES:

• Amazon Web Services (AWS) – Market leader in cloud infrastructure


• Microsoft Azure – Enterprise cloud services
• Google Cloud Platform (GCP) – Big Data and AI-oriented services
DISTRIBUTED
INFORMATION
SYSTEMS
DISTRIBUTED INFORMATION
SYSTEMS
• A Distributed Information System (DIS) is a system where data and information are
stored and managed across multiple nodes, yet appear as a single integrated system to
users.
• Key Features:
• Data Consistency: Ensures all nodes show the same information.
• Transparency: Users don’t need to know the data location.
• Fault Tolerance: Continues to work even if some nodes fail.
• Scalability: Easily add more nodes to handle increased demand.
• Examples:
• Online Banking Systems
• Airline Reservation Systems
• E-commerce Websites
TRANSACTION PROCESSING
SYSTEMS (TPS)
• A Transaction Processing System is a distributed system that processes large volumes of
concurrent transactions reliably and ensures data integrity.
• Key Features:
• ACID Properties:
• Atomicity – Complete or nothing
• Consistency – Valid state before & after
• Isolation – Transactions execute independently
• Durability – Results persist even after failures
• Fault Tolerance: Recovers from system failures without losing transactions.
• High Throughput: Handles thousands of transactions per second.
• Examples:
• ATM Banking Networks
• Online Payment Gateways
• E-commerce Checkout Systems
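The all-or-nothing (atomicity) property can be demonstrated with SQLite, whose connections expose begin/commit/rollback semantics (the accounts table and the `transfer` helper below are invented for illustration, not part of any real banking system):

```python
import sqlite3

# An in-memory toy bank with two accounts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Debit + credit as one all-or-nothing unit of work."""
    try:
        with conn:  # begins a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            if conn.execute("SELECT balance FROM accounts WHERE name = ?",
                            (src,)).fetchone()[0] < 0:
                raise ValueError("insufficient funds")   # forces the abort path
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False

ok = transfer(conn, "alice", "bob", 30)       # commits: balances become 70 / 80
failed = transfer(conn, "alice", "bob", 500)  # aborts: the debit is undone
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

The failed transfer leaves no trace: the partial debit is rolled back, so the database never shows a state where money has left one account without arriving in the other.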
TRANSACTION PROCESSING SYSTEMS

Figure 1-8. Example primitives for transactions.


NESTED TRANSACTIONS

• A nested transaction is a transaction within another transaction (a sub-transaction)


– Example: a transaction may ask for two things (e.g., airline reservation info + hotel info)
which would spawn two nested transactions
• Primary transaction waits for the results.
– While children are active, the parent may only abort, commit, or spawn other children
IMPLEMENTING TRANSACTIONS

• Conceptually, private copy of all data


• Actually, usually based on logs
• Multiple sub-transactions – commit, abort
– Durability is a characteristic of top-level transactions only
• Nested transactions are suitable for distributed systems
– A transaction processing monitor may interface between a client and
multiple databases.
ENTERPRISE APPLICATION
INTEGRATION (EAI)
• Definition:
Enterprise Application Integration refers to connecting and enabling communication
between different enterprise applications to work as a unified system.
• Key Features:
• Middleware: Acts as a bridge for data exchange.
• Data Transformation: Converts data formats between applications.
• Workflow Automation: Integrates processes across different systems.
• Loose Coupling: Systems remain independent but can communicate.
• Examples:
• ERP Systems integrated with CRM and Supply Chain
• HR System integrated with Payroll and Accounting
• Tools: MuleSoft, IBM WebSphere, TIBCO
TRANSACTIONS

• Transaction processing may be centralized (traditional client/server system) or


distributed.
• A distributed database is one in which the data storage is distributed – connected to
separate processors.
ENTERPRISE APPLICATION
INTEGRATION

• Less structured than transaction-based systems


• EA components communicate directly
– Enterprise applications are things like HR data, inventory programs, …
– May use different OSs, different DBs but need to interoperate sometimes.
• Communication mechanisms to support this include CORBA,
Remote Procedure Call (RPC) and Remote Method Invocation
(RMI)
ENTERPRISE APPLICATION
INTEGRATION

Figure 1-11. Middleware as a communication facilitator in
enterprise application integration.
DISTRIBUTED
EMBEDDED
SYSTEMS
DISTRIBUTED EMBEDDED
SYSTEMS
• Definition:
A Distributed Embedded System consists of multiple embedded devices connected via a
network to perform real-time, coordinated tasks.
• Key Features:
• Real-Time Operation: Executes tasks with strict timing constraints.
• Resource Constraints: Limited memory, power, and processing power.
• Networking: Devices communicate via CAN bus, Ethernet, or wireless.
• Fault Tolerance: Must operate safely even if a node fails.
• Examples:
• Automotive Systems: Engine control units (ECUs), ABS braking systems.
• Industrial Automation: Robotic arms, assembly line control.
• Smart Appliances: Connected washing machines, refrigerators.
DISTRIBUTED PERVASIVE
SYSTEMS
• The first two types of systems are characterized by their stability:
nodes and network connections are more or less fixed
• This type of system is likely to incorporate small, battery-powered,
mobile devices
– Home systems
– Electronic health care systems – patient monitoring
– Sensor networks – data collection, surveillance
HOME SYSTEMS

• Definition:
Home systems are distributed systems designed for smart homes to enhance comfort,
security, and energy efficiency.
• Key Features:
• Remote Monitoring & Control: Access devices via mobile apps.
• Interconnected Devices: Appliances communicate over Wi-Fi, Zigbee, or Bluetooth.
• Automation: Predefined rules for lighting, heating, security.
• Energy Optimization: Reduces power consumption.
• Examples:
• Smart Lighting: Philips Hue, smart LED systems.
• Smart Thermostats: Nest, Ecobee.
• Home Security: Cameras, motion sensors, smart locks.
HEALTH CARE SYSTEMS

• Definition:
Distributed health care systems enable real-time monitoring, data sharing, and remote
diagnosis using interconnected devices.
• Key Features:
• Patient Monitoring: Vital signs captured by wearable devices.
• Data Sharing: Secure exchange of electronic health records (EHR).
• Telemedicine: Remote consultation and treatment.
• Reliability & Security: HIPAA compliance for patient data.
• Examples:
• ICU patient monitoring systems.
• Wearable health trackers (Fitbit, Apple Watch).
• Hospital Information Systems.
ELECTRONIC HEALTH
CARE SYSTEMS

Figure 1-12. Monitoring a person in a pervasive electronic health care
system, using (a) a local hub or (b) a continuous wireless connection.
SENSOR NETWORKS

• Definition:
A Sensor Network is a network of spatially distributed sensors that monitor physical or
environmental conditions and send data for processing.
• Key Features:
• Wireless Communication: Sensors connect via Wi-Fi, Zigbee, LoRaWAN.
• Energy Efficiency: Low power consumption for long-term use.
• Scalability: Thousands of sensors can work together.
• Data Aggregation: Sensors send data to a central node or cloud for analysis.
• Examples:
• Environmental Monitoring: Air quality, temperature, pollution.
• Smart Agriculture: Soil moisture, crop health sensors.
• Military Surveillance: Intrusion detection systems.
SENSOR NETWORKS
• A collection of geographically distributed nodes, each consisting of a communication
device, a power source, some kind of sensor, a small processor…
• Purpose: to collectively monitor sensory data (temperature, sound,
moisture etc.,) and transmit the data to a base station
• “smart environment” – the nodes may do some rudimentary processing
of the data in addition to their communication responsibilities.
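That rudimentary processing can be sketched as in-network aggregation: each node forwards only a (count, mean) summary instead of its raw samples, and the base station combines the summaries (the node names and readings below are made up for illustration).

```python
def sensor_summary(readings):
    """Local aggregation at one node: send (count, mean), not raw samples."""
    return len(readings), sum(readings) / len(readings)

def base_station(summaries):
    """Combine per-node summaries into the exact global mean."""
    total = sum(n * mean for n, mean in summaries)
    count = sum(n for n, _ in summaries)
    return total / count

# Illustrative raw temperature readings held at three sensor nodes.
raw = {
    "node1": [21.0, 22.0, 21.5],
    "node2": [19.0, 20.0],
    "node3": [25.0],
}
summaries = [sensor_summary(r) for r in raw.values()]
global_mean = base_station(summaries)   # same as averaging all raw samples
```

Each node transmits two numbers instead of its whole sample buffer, which is why in-network processing saves both bandwidth and battery power.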
SENSOR NETWORKS

Figure 1-13. Organizing a sensor network database, while storing
and processing data (a) only at the operator’s site or …
SENSOR NETWORKS

Figure 1-13. Organizing a sensor network database, while storing
and processing data … or (b) only at the sensors.
