Chapter 1: Introduction to Emerging Technologies
Outline
Evolution of Technologies
Introduction to the Industrial Revolution (IR)
Historical Background (IR 1.0, IR 2.0, IR 3.0)
Role of Data for Emerging Technologies
Enabling devices and network (Programmable devices)
Human to Machine Interaction
Future Trends in Emerging Technologies
Some emerging technologies that will shape the future of you and your business.
Introduction: The evolution of technology, the role of data for emerging technologies, enabling devices and
networks for technologies (programmable devices), human-to-machine interaction (HCI), and future trends
of technologies are covered in this chapter.
1.1 Evolution of Technologies
Emerging technology is a term generally used to describe a new technology, but it may also refer to
the continuing development of existing technology; it can have slightly different meanings when used
in different areas, such as media, business, science, or education. The term commonly refers to
technologies that are currently developing, or that are expected to be available within the next five to
ten years, and is usually reserved for technologies that are creating or are expected to create significant
social or economic effects. Technological evolution is a theory of radical transformation of society through
technological development.
What is the root word of technology and evolution?
• Technology: 1610s, "discourse or treatise on an art or the arts," from Greek tekhnologia "systematic
treatment of an art, craft, or technique," originally referring to grammar, from tekhno- (see techno-) + -logy.
The meaning "science of the mechanical and industrial arts" is first recorded in 1859.
• Evolution: evolution means the process of developing by gradual changes. This noun is from Latin
evolutio, "an unrolling or opening," combined from the prefix e-, "out," plus volvere, "to roll."
List of some currently available emerged technologies
o Artificial Intelligence
o Blockchain
o Augmented Reality and Virtual Reality
o Cloud Computing
o Angular and React
o DevOps
o Internet of Things (IoT)
o Intelligent Apps (I-Apps)
o Big Data
o Robotic Process Automation (RPA)
1.1.1 Introduction to the Industrial Revolution (IR)
The Industrial Revolution was a period of major industrialization and innovation that took place
during the late 1700s and early 1800s. An Industrial Revolution at its core occurs when a society
shifts from using tools to make products to using new sources of energy, such as coal, to power
machines in factories. The revolution started in England, with a series of innovations to make labor
more efficient and productive. The Industrial Revolution was a time when the manufacturing of
goods moved from small shops and homes to large factories. This shift brought about changes in
culture as people moved from rural areas to big cities in order to work.
Generally, the following industrial revolutions fundamentally changed and transformed the world
around us into modern society:
The steam engine,
The age of science and mass production, and
The rise of digital technology
Smart and autonomous systems fueled by data and machine learning.
1.1.2 The Most Important Inventions of the Industrial Revolution
Transportation: The Steam Engine, The Railroad, The Diesel Engine, The Airplane.
Communication: The Telegraph, The Transatlantic Cable, The Phonograph, The Telephone.
Industry: The Cotton Gin, The Sewing Machine, Electric Lights.
1.1.3 Historical Background (IR 1.0, IR 2.0, IR 3.0)
The industrial revolution began in Great Britain in the late 1770s before spreading to the rest of
Europe.
The four types of industries are:
The primary industry involves getting raw materials e.g. mining, farming, and fishing.
The secondary industry involves manufacturing e.g. making cars and steel.
Tertiary industries provide a service e.g. teaching and nursing.
The quaternary industry involves research and development industries e.g. IT.
a. Industrial Revolution (IR 1.0)
The Industrial Revolution (IR) is described as a transition to new manufacturing processes. IR was
first coined in the 1760s, during the time when this revolution began. The transitions in the first
IR included going from hand production methods to machines, the increasing use of steam power
(see Figure 1.1), the development of machine tools and the rise of the factory system.
Figure 1.1 steam engine
b. Industrial Revolution (IR 2.0)
The Second IR, also known as the Technological Revolution, began somewhere in the 1870s. The
advancements in IR 2.0 included the development of methods for manufacturing interchangeable
parts and widespread adoption of pre-existing technological systems such as telegraph and railroad
networks. This adoption allowed the vast movement of people and ideas, enhancing
communication. Moreover, new technological systems were introduced, such as electrical power
(see Figure 1.2) and telephones.
Figure 1.2 Electricity transmission line
c. Industrial Revolution (IR 3.0)
Then came the Third Industrial Revolution (IR 3.0). IR 3.0 introduced the transition from
mechanical and analog electronic technology to digital electronics (see Figure 1.3) which began
from the late 1950s. Due to the shift towards digitalization, IR 3.0 was given the nickname
"Digital Revolution". The core factor of this revolution is the mass production and widespread use
of digital logic circuits and their derived technologies, such as the computer, mobile phones and the
Internet.
Figure 1.3 High Tech Electronics
d. Fourth Industrial Revolution (IR 4.0)
Now, with advancements in various technologies such as robotics, the Internet of Things (IoT, see
Figure 1.4), additive manufacturing and autonomous vehicles, the term "Fourth Industrial
Revolution" or IR 4.0 was coined by Klaus Schwab, the founder and executive chairman of the World
Economic Forum, in the year 2016. The technologies mentioned above are what we call cyber-
physical systems. A cyber-physical system is a mechanism that is controlled or monitored by
computer-based algorithms, tightly integrated with the Internet and its users.
One example that is being widely practiced in industries today is the usage of Computer Numerical
Control (CNC) machines. These machines are operated by giving them instructions using a computer.
Another major breakthrough that is associated with IR 4.0 is the adoption of Artificial Intelligence
(AI), where we can see it being implemented into our smartphones. AI is also one of the main
elements that give life to Autonomous Vehicles and Automated Robots.
Figure 1.4 Anybody Connected Device (ABCD)
1.2 Role of Data for Emerging Technologies
Data is regarded as the new oil and a strategic asset, since we are living in the age of big data; it drives
or even determines the future of science, technology, the economy, and possibly everything in our
world today and tomorrow. Data has not only triggered tremendous hype and buzz but, more
importantly, presents enormous challenges that in turn bring incredible innovation and economic
opportunities. This reshaping and paradigm shift are driven not just by data itself but by all the other
aspects that could be created, transformed, and/or adjusted by understanding, exploring, and utilizing
data.
The preceding trend and its potential have triggered a new debate about data-intensive scientific
discovery as an emerging technology, the so-called "fourth industrial revolution." There is no doubt,
nevertheless, that the potential of data science and analytics to enable data-driven theory, economy,
and professional development is increasingly being recognized. This involves not only core disciplines
such as computing, informatics, and statistics, but also the broad-based fields of business, social
science, and health/medical science.
1.3 Enabling devices and network (Programmable devices)
In the world of digital electronic systems, there are four basic kinds of devices: memory,
microprocessors, logic, and networks.
Memory devices store random information such as the contents of a spreadsheet or database.
Microprocessors execute software instructions to perform a wide variety of tasks such as
running a word processing program or video game.
Logic devices provide specific functions such as device-to-device interfacing, data
communication, signal processing, data display, and timing and control operations.
Network devices move data between the other devices and connect the system to other systems.
Figure 1.5 programmable device
Why is a computer referred to as a programmable device?
Because what makes a computer a computer is that it follows a set of instructions. Many electronic
devices are computers that perform only one operation, but they are still following instructions that
reside permanently in the unit.
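To make the idea of "following a set of instructions" concrete, the minimal Python sketch below treats a list of instructions as a stored program and executes them one by one; the instruction names and values are invented for illustration.

# A toy "stored program": each instruction is an (operation, argument) pair.
program = [
    ("load", 5),      # put 5 into the accumulator
    ("add", 3),       # add 3
    ("multiply", 2),  # multiply by 2
    ("print", None),  # show the result
]

accumulator = 0
for operation, argument in program:
    if operation == "load":
        accumulator = argument
    elif operation == "add":
        accumulator += argument
    elif operation == "multiply":
        accumulator *= argument
    elif operation == "print":
        print(accumulator)  # prints 16

The same idea scales from this toy loop up to the microprocessors and programmable logic devices listed below: the hardware simply follows whatever instructions are loaded into it.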
List of some Programmable devices
o Achronix Speedster SPD60
o Actel’s
o Altera Stratix IV GT and Arria II GX
o Atmel’s AT91CAP7L
o Cypress Semiconductor’s programmable system-on-chip (PSoC) family
o Lattice Semiconductor’s ECP3
o Lime Microsystems’ LMS6002
o Silicon Blue Technologies
o Xilinx Virtex 6 and Spartan 6
o Xmos Semiconductor L series
A full range of network-related equipment is referred to as Service Enabling Devices (SEDs), which
can include:
o Traditional channel service unit (CSU) and data service unit (DSU)
o Modems
o Routers
o Switches
o Conferencing equipment
o Network appliances (NIDs and SIDs)
o Hosting equipment and servers
1.4 Human to Machine Interaction
Human-to-machine interaction refers to the communication and interaction between a human and a machine via a user
interface.
Nowadays, natural user interfaces such as gestures have gained increasing attention as they
allow humans to control machines through natural and intuitive behaviors.
What is interaction in human-computer interaction?
o HCI is the study of how people interact with computers and to what extent computers are
or are not developed for successful interaction with human beings.
o As its name implies, HCI consists of three parts:
The user,
The computer itself, and
The ways they work together.
How do users interact with computers?
o The user interacts directly with hardware for the human input and output such as displays,
e.g. through a graphical user interface.
How important is human-computer interaction?
o The goal of HCI is to improve the interaction between users and computers by making
computers more user-friendly and receptive to the user's needs.
o The main advantages of HCI are simplicity, ease of deployment & operations and cost
savings for smaller set-ups.
1.4.1 Disciplines Contributing to Human-Computer Interaction (HCI)
Cognitive psychology: Limitations, information processing, performance prediction, cooperative
working, and capabilities.
Computer science: Including graphics, technology, prototyping tools, user interface management
systems.
Linguistics.
Engineering and design.
Artificial intelligence.
Human factors.
1.5 Future Trends in Emerging Technologies
1.5.1 Emerging technology trends in 2019
5G Networks
Artificial Intelligence (AI)
Autonomous Devices
Blockchain
Augmented Analytics
Digital Twins
Enhanced Edge Computing and
Immersive Experiences in Smart Spaces
1.5.2 Some emerging technologies that will shape the future of you and your business
The future is now, or so they say. So-called emerging technologies are taking over our minds more
and more each day. These are very high-level emerging technologies, though. They may sound like tools
that will only affect the top tier of technology companies who employ the world's top 1% of
geniuses. This is totally wrong. Chatbots, virtual/augmented reality, blockchain, ephemeral apps
and Artificial Intelligence are already shaping your life whether you like it or not. At the end of the
day, you can either adapt or die.
Chapter 2: Data Science
Outline
An Overview of Data Science
What are data and information?
Data Processing Cycle
Data types and their representation
Data types from Computer programming perspective
Data types from Data Analytics perspective
Data value Chain
Data Acquisition
Data Analysis
Data Curation
Data Storage
Data Usage
Basic concepts of big data
What Is Big Data?
Clustered Computing and Hadoop Ecosystem
Clustered Computing
Hadoop and its Ecosystem
Big Data Life Cycle with Hadoop
2.1 An Overview of Data Science
Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms, and systems to
extract knowledge and insights from structured, semi-structured and unstructured data. Data science is
much more than simply analyzing data. It offers a range of roles and requires a range of skills.
2.1.1 What are data and information?
Data can be defined as a representation of facts, concepts, or instructions in a formalized manner, which
should be suitable for communication, interpretation, or processing, by human or electronic machines. It
can be described as unprocessed facts and figures. It is represented with the help of characters such as
alphabets (A-Z, a-z), digits (0-9) or special characters (+, -, /, *, <,>, =, etc.).
Information, on the other hand, is processed data on which decisions and actions are based. It is data that has
been processed into a form that is meaningful to the recipient and is of real or perceived value in the
current or prospective actions or decisions of the recipient. Furthermore, information is interpreted data,
created from organized, structured, and processed data in a particular context.
2.1.2 Data Processing Cycle
Data processing is the re-structuring or re-ordering of data by people or machines to increase its
usefulness and add value for a particular purpose. Data processing consists of three basic steps: input,
processing, and output. Together, these three steps constitute the data processing cycle.
Figure 2.1 Data Processing Cycle
Input − in this step, the input data is prepared in some convenient form for processing.
For example, when electronic computers are used, the input data can be recorded on any one of the
several types of storage medium, such as hard disk, CD, flash disk and so on.
Processing − in this step, the input data is changed to produce data in a more useful form.
For example, interest can be calculated on deposit to a bank, or a summary of sales for the month
can be calculated from the sales orders.
Output − at this stage, the result of the preceding processing step is collected.
The particular form of the output data depends on the use of the data.
For example, output data may be payroll for employees.
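As a minimal illustration of the input-processing-output cycle, the Python sketch below takes a list of deposit amounts as input, computes interest as the processing step, and prints a summary as output; the amounts and interest rate are made up for the example.

deposits = [1000.0, 2500.0, 400.0]   # input: data prepared in a convenient form
interest_rate = 0.05                 # assumed annual rate for the example

interest = [amount * interest_rate for amount in deposits]   # processing
total_interest = sum(interest)

print(f"Interest per deposit: {interest}")                   # output
print(f"Total interest for the period: {total_interest:.2f}")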
2.2 Data types and their representation
Data types can be described from diverse perspectives. In computer science and computer
programming, for instance, a data type is simply an attribute of data that tells the compiler or
interpreter how the programmer intends to use the data.
2.2.1 Data types from Computer programming perspective
Almost all programming languages explicitly include the notion of data type, though different
languages may use different terminology. Common data types include:
Integers (int) - used to store whole numbers, mathematically known as integers
Booleans (bool) - used to represent values restricted to one of two values: true or false
Characters (char) - used to store a single character
Floating-point numbers (float) - used to store real numbers
Alphanumeric strings (string) - used to store a combination of characters and numbers
A data type constrains the values that an expression, such as a variable or a function, might take. This data type
defines the operations that can be done on the data, the meaning of the data, and the way values of that
type can be stored.
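A brief Python sketch of these common data types follows; note that Python has no separate char type, so a single character is simply a one-character string, and the example values are invented.

age = 25               # integer (int): a whole number
is_student = True      # Boolean (bool): restricted to True or False
grade = "A"            # character: a single character, stored as a string in Python
gpa = 3.74             # floating-point number (float): a real number
student_id = "ETS0123" # alphanumeric string (str): characters and digits combined

print(type(age), type(is_student), type(grade), type(gpa), type(student_id))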
2.2.2 Data types from Data Analytics perspective
From a data analytics point of view, it is important to understand that there are three common
types of data types or structures: Structured, Semi-structured, and Unstructured data types.
Figure 2.2 Data types from a data analytics perspective
Structured Data is data that adheres to a pre-defined data model and is therefore
straightforward to analyze. Examples: Excel files or SQL databases.
Semi-structured Data is a form of structured data that does not conform to the formal
structure of data models associated with relational databases or other forms of data tables.
Examples: JSON and XML are forms of semi-structured data.
Unstructured data is information that either does not have a predefined data model or is not
organized in a pre-defined manner. Examples: audio, video files or NoSQL databases.
Metadata – Data about Data
o The last category of data type is metadata.
o From a technical point of view, this is not a separate data structure, but it is one of the
most important elements for Big Data analysis and big data solutions.
o Metadata is data about data.
o It provides additional information about a specific set of data.
o In a set of photographs, for example, metadata could describe when and where the
photos were taken. The metadata then provides fields for dates and locations which, by
themselves, can be considered structured data. Because of this reason, metadata is
frequently used by Big Data solutions for initial analysis.
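To make the distinction concrete, the sketch below builds a small semi-structured JSON record for a photograph whose metadata fields (date and location) are themselves structured; the field names and values are made up for the example.

import json

photo = {
    "file": "beach.jpg",
    "caption": "Sunset at the beach",        # free-text, unstructured content
    "metadata": {                             # data about the data
        "taken_on": "2019-06-14",
        "location": {"lat": 9.03, "lon": 38.74},
    },
}

serialized = json.dumps(photo)                # semi-structured JSON text
restored = json.loads(serialized)
print(restored["metadata"]["taken_on"])       # structured metadata is easy to query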
2.3 Data value Chain
The Data Value Chain is introduced to describe the information flow within a big data system as a
series of steps needed to generate value and useful insights from data.
Figure 2.3 Data Value Chain
2.3.1 Data Acquisition
It is the process of gathering, filtering, and cleaning data before it is put in a data warehouse or any
other storage solution on which data analysis can be carried out. Data acquisition is one of the
major big data challenges in terms of infrastructure requirements. The infrastructure required to
support the acquisition of big data must deliver low, predictable latency in both capturing data and
in executing queries; be able to handle very high transaction volumes, often in a distributed
environment; and support flexible and dynamic data structures.
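A minimal sketch of the gathering-filtering-cleaning idea using pandas, assuming pandas is installed; the file name and column names are hypothetical.

import pandas as pd

# Gather: read raw records from a hypothetical CSV export.
raw = pd.read_csv("sales_raw.csv")

# Filter: keep only the rows relevant to the analysis.
recent = raw[raw["year"] >= 2018]

# Clean: drop duplicates and rows with missing amounts before storage.
clean = recent.drop_duplicates().dropna(subset=["amount"])

clean.to_csv("sales_clean.csv", index=False)   # hand the cleaned data to the storage layer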
2.3.2 Data Analysis
It is concerned with making the raw data acquired amenable to use in decision-making as
well as domain-specific usage.
Data analysis involves exploring, transforming, and modelling data with the goal of
highlighting relevant data, synthesizing and extracting useful hidden information with high
potential from a business point of view.
Related areas include data mining, business intelligence, and machine learning.
2.3.3 Data Curation
It is the active management of data over its life cycle to ensure it meets the necessary data
quality requirements for its effective usage.
Data curation processes can be categorized into different activities such as content creation,
selection, classification, transformation, validation, and preservation.
2.3.4 Data Storage
It is the persistence and management of data in a scalable way that satisfies the needs of
applications that require fast access to the data.
Relational Database Management Systems (RDBMS) have been the main, and almost
the only, solution to the storage paradigm for nearly 40 years.
2.3.5 Data usage
It covers the data-driven business activities that need access to data, its analysis, and the tools
needed to integrate the data analysis within the business activity. Data usage in business decision-
making can enhance competitiveness through the reduction of costs, increased added value, or any
other parameter that can be measured against existing performance criteria.
2.4 Basic concepts of big data
Big data is a blanket term for the non-traditional strategies and technologies needed to gather,
organize, process, and gather insights from large datasets. While the problem of working with data that
exceeds the computing power or storage of a single computer is not new, the pervasiveness, scale, and
value of this type of computing have greatly expanded in recent years.
2.4.1 What Is Big Data?
Big data is the term for a collection of data sets so large and complex that it becomes difficult to
process using on-hand database management tools or traditional data processing applications. In
this context, a "large dataset" means a dataset too large to reasonably process or store with
traditional tooling or on a single computer.
Big data is often characterized by the four Vs:
Volume: large amounts of data, zettabytes/massive datasets
Velocity: data is live, streaming or in motion
Variety: data comes in many different forms from diverse sources
Veracity: can we trust the data? How accurate is it? etc.
Figure 2.4 Characteristics of big data
2.4.2 Clustered Computing and Hadoop Ecosystem
Clustered Computing
Because of the qualities of big data, individual computers are often inadequate for handling the
data at most stages. To better address the high storage and computational needs of big data,
computer clusters are a better fit. Big data clustering software combines the resources of many
smaller machines, seeking to provide a number of benefits:
o Resource Pooling: Combining the available storage space to hold data is a clear
benefit, but CPU and memory pooling are also extremely important.
o High Availability: Clusters can provide varying levels of fault tolerance and
availability guarantees to prevent hardware or software failures from affecting access to
data and processing.
o Easy Scalability: Clusters make it easy to scale horizontally by adding additional
machines to the group.
Hadoop and its Ecosystem : Hadoop is an open-source framework intended to make interaction
with big data easier. It is a framework that allows for the distributed processing of large
datasets across clusters of computers using simple programming models. The four key
characteristics of Hadoop are:
o Economical: Its systems are highly economical as ordinary computers can be used for
data processing.
o Reliable: It is reliable as it stores copies of the data on different machines and is
resistant to hardware failure.
o Scalable: It is easily scalable, both horizontally and vertically. A few extra nodes help
in scaling up the framework.
o Flexible: It is flexible and you can store as much structured and unstructured data as
you need and decide how to use it later.
Hadoop has an ecosystem that has evolved from its four core components: data management,
access, processing, and storage. It is continuously growing to meet the needs of Big Data. It
comprises the following components and many others:
o HDFS: Hadoop Distributed File System
o YARN: Yet Another Resource Negotiator
o MapReduce: Programming based Data Processing
o Spark: In-Memory data processing
o PIG, HIVE: Query-based processing of data services
o HBase: NoSQL Database
o Mahout, Spark MLLib: Machine Learning algorithm libraries
o Solr, Lucene: Searching and Indexing
o Zookeeper: Managing cluster
o Oozie: Job Scheduling
Figure 2.5 Hadoop Ecosystem
Big Data Life Cycle with Hadoop
o Ingesting data into the system- the 1st stage of Big Data processing is Ingest. The
data is ingested or transferred to Hadoop from various sources such as relational
databases, systems, or local files.
o Processing the data in storage- the 2nd stage is processing. In this stage, the data is
stored and processed.
o Computing and analyzing data- the 3rd stage is to Analyze. Here, the data is
analyzed by processing frameworks such as Pig, Hive, and Impala.
o Visualizing the results- the 4th stage is Access, which is performed by tools such as
Hue and Cloudera Search. In this stage, the analyzed data can be accessed by users.
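To make the MapReduce component listed in the Hadoop ecosystem above concrete, here is a minimal word-count sketch in Python that imitates the map and reduce phases in a single process; a real Hadoop job would distribute these phases across the cluster, and the sample documents are invented for illustration.

from collections import defaultdict

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word, as a streaming mapper would.
    for line in lines:
        for word in line.strip().lower().split():
            yield word, 1

def reduce_phase(pairs):
    # Reducer: sum the counts per word (Hadoop would group the keys for us).
    counts = defaultdict(int)
    for word, one in pairs:
        counts[word] += one
    return dict(counts)

documents = [
    "big data needs distributed processing",
    "hadoop enables distributed processing of big data",
]
print(reduce_phase(map_phase(documents)))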
Chapter 3: Artificial Intelligence (AI)
Outline
What is Artificial Intelligence (AI)
o Need for Artificial Intelligence
o Goals of Artificial Intelligence
o What Comprises Artificial Intelligence?
o Advantages of Artificial Intelligence
o Disadvantages of Artificial Intelligence
History of AI
Levels of AI
Types of AI
o How humans think
o Mapping human thinking to artificial intelligence components
Influencers of artificial intelligence
o Big Data
o Cloud computing and application programming interfaces
o The emergence of data science
Applications of AI
AI tools and platforms
Sample AI applications
3.1 What is Artificial Intelligence (AI)
Artificial Intelligence is composed of two words Artificial and Intelligence.
Artificial defines "man-made," and intelligence defines "thinking power," or "the ability to learn and
solve problems"; hence Artificial Intelligence means "a man-made thinking power."
So, we can define Artificial Intelligence (AI) as the branch of computer science by which we can
create intelligent machines which can behave like a human, think like humans, and able to make
decisions.
Intelligence, as we know, is the ability to acquire and apply knowledge. Knowledge is the information
acquired through experience. Experience is the knowledge gained through exposure (training).
Summing the terms up, we get artificial intelligence as the "copy of something natural (i.e., human
beings) 'WHO' is capable of acquiring and applying the information it has gained through exposure."
Artificial Intelligence exists when a machine can have human-based skills such as learning, reasoning,
and solving problems. With Artificial Intelligence you do not need to preprogram a machine to do some
work; instead, you can create a machine with programmed algorithms which can work with its own
intelligence.
Intelligence is composed of:
Reasoning
Learning
Problem Solving
Perception
Linguistic Intelligence
The advent of Big Data, driven by the arrival of the internet, smart mobile devices and social media, has enabled
AI algorithms, in particular from Machine Learning and Deep Learning, to leverage Big Data and perform
their tasks more optimally. Machine Learning is an advanced form of AI where the machine can learn as it
goes rather than having every action programmed by humans.
Many times, students get confused between Machine Learning and Artificial Intelligence (see Figure 3.1),
but Machine Learning, a fundamental concept of AI research since the field's inception, is the study of
computer algorithms that improve automatically through experience. The term machine learning was
introduced by Arthur Samuel in 1959. Neural networks are biologically inspired networks that extract
features from the data in a hierarchical fashion. The field of neural networks with several hidden layers is
called deep learning.
Figure 3.1 Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL)
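As a minimal sketch of machine learning in this sense, the example below uses scikit-learn (assuming it is installed) to let a classifier improve from labeled examples instead of being explicitly programmed with rules; the dataset is the library's built-in iris sample.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small labeled dataset: flower measurements and their species.
features, labels = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# The model "learns from experience" (the training examples) rather than
# from hand-written rules.
model = DecisionTreeClassifier().fit(X_train, y_train)
print("Accuracy on unseen examples:", model.score(X_test, y_test))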
3.1.1 Need for Artificial Intelligence
1. To create expert systems that exhibit intelligent behavior with the capability to learn, demonstrate, explain,
and advise their users.
2. Helping machines find solutions to complex problems like humans do and applying them as algorithms in a
computer-friendly manner.
3.1.2 Goals of Artificial Intelligence
Following are the main goals of Artificial Intelligence:
1. Replicate human intelligence
2. Solve Knowledge-intensive tasks
3. An intelligent connection of perception and action
4. Building a machine which can perform tasks that require human intelligence such as:
Proving a theorem
Playing chess
Planning a surgical operation
Driving a car in traffic
5. Creating some system which can exhibit intelligent behavior, learn new things by itself,
demonstrate, explain, and advise its user.
3.1.3 What Comprises Artificial Intelligence?
Artificial Intelligence is not just a part of computer science; it is vast and requires many other
factors that contribute to it. To create AI, we should first know how intelligence
is composed. Intelligence is an intangible part of our brain which is a combination of
reasoning, learning, problem-solving, perception, language understanding, etc.
To achieve the above factors for a machine or software, Artificial Intelligence requires the
following disciplines (see Figure 3.2):
Mathematics
Biology
Psychology
Sociology
Computer Science
Neuroscience
Statistics.
Figure 3.2 Artificial Intelligence is multidisciplinary
3.1.4 Advantages of Artificial Intelligence
Following are some main advantages of Artificial Intelligence:
High Accuracy with fewer errors: AI machines or systems make fewer errors and achieve
high accuracy, as they take decisions based on prior experience or information.
High Speed: AI systems can be very fast in decision-making; because of this,
AI systems can beat a chess champion in the game of chess.
High reliability: AI machines are highly reliable and can perform the same action multiple
times with high accuracy.
Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb
or exploring the ocean floor, where employing a human can be risky.
Digital Assistant: AI can be very useful in providing digital assistants to users; for example, AI
technology is currently used by various e-commerce websites to show products according to
customer requirements.
Useful as a public utility: AI can be very useful for public utilities such as a self-driving car
which can make our journey safer and hassle-free, facial recognition for security purposes,
Natural language processing (for search engines, for spelling checker, for assistant like Siri,
for translation like google translate), etc.
3.1.5 Disadvantages of Artificial Intelligence
High Cost: The hardware and software requirements of AI are very costly, as they require a lot of
maintenance to meet current world requirements.
Can't think outside the box: Even though we are making smarter machines with AI, they
still cannot work outside the box, as a robot will only do the work for which it is trained
or programmed.
No feelings and emotions: An AI machine can be an outstanding performer, but it does
not have feelings, so it cannot form any kind of emotional attachment with humans, and
may sometimes be harmful to users if proper care is not taken.
Increased dependence on machines: With the advancement of technology, people are becoming
more dependent on devices and hence losing some of their mental capabilities.
No Original Creativity: Humans are creative and can imagine new ideas, but AI machines
still cannot match this power of human intelligence and cannot be creative and
imaginative.
3.2 History of AI
Artificial Intelligence is not a new word and not a new technology for researchers. This technology is
much older than you would imagine; there are even myths of mechanical men in ancient Greek
and Egyptian mythology.
Now AI has developed to a remarkable level. The concepts of deep learning, big data, and data science
are now booming. Nowadays companies like Google, Facebook, IBM, and Amazon are
working with AI and creating amazing devices. The future of Artificial Intelligence is inspiring and
will come with high intelligence.
3.3 Levels of AI
Stage 1 – Rule-Based Systems
The most common uses of AI today fit in this bracket, covering everything from business software
(Robotic Process Automation) and domestic appliances to aircraft autopilots.
Stage 2 – Context Awareness and Retention
Algorithms that develop information about the specific domain they are being applied in. They are
trained on the knowledge and experience of the best humans, and their knowledge base can be updated
as new situations and queries arise. Well-known applications of this level are chatbots and
"robo-advisors".
Stage 3 – Domain-Specific Expertise
Going beyond the capability of humans, these systems build up expertise in a specific context, taking
in massive volumes of information which they can use for decision making. Successful use cases have
been seen in cancer diagnosis and the well-known Google DeepMind's AlphaGo. Currently, this type
is limited to one domain only and would forget all it knows about that domain if you started to teach it
something else.
Stage 4 – Reasoning Machines
These algorithms have some ability to attribute mental states to themselves and others – they have a
sense of beliefs, intentions, knowledge, and how their own logic works. This means they could reason
or negotiate with humans and other machines. At the moment these algorithms are still in
development, however, commercial applications are expected within the next few years.
Stage 5 – Self Aware Systems / Artificial General Intelligence (AGI)
These systems have human-like intelligence – the most commonly portrayed AI in media – however,
no such use is in evidence today. It is the goal of many working in AI and some believe it could be
realized already from 2024.
Stage 6 – Artificial Superintelligence (ASI)
AI algorithms can outsmart even the most intelligent humans in every domain. Logically it is difficult
for humans to articulate what the capabilities might be, yet we would hope examples would include
solving problems we have failed to so far, such as world hunger and dangerous environmental change.
Views vary as to when and whether such a capability could even be possible, yet there are a few experts
who claim it can be realized by 2029. Fiction has tackled this idea for a long time, for example in the
film Ex Machina or Terminator.
Stage 7 – Singularity and Transcendence
This is the idea that development provided by ASI (Stage 6) leads to a massive expansion in human
capability.
Figure 3.3 The seven layers of AI maturity
3.4 Types of AI
Artificial Intelligence can be divided into various types; there are mainly two categorizations, one
based on capabilities and one based on the functionality of AI, as shown in Figure 3.4.
The following flow diagram explains the types of AI.
Figure 3.4 types of Artificial Intelligence (AI)
A. Based on Capabilities
1. Weak AI or Narrow AI:
Narrow AI is a type of AI which is able to perform a dedicated task with intelligence.
The most common and currently available AI is Narrow AI in the world of Artificial
Intelligence.
Narrow AI cannot perform beyond its field or limitations, as it is only trained for one
specific task.
Hence it is also termed as weak AI.
Narrow AI can fail in unpredictable ways if it goes beyond its limits.
Apple Siri is a good example of Narrow AI, but it operates with a limited pre-defined range
of functions.
2. General AI:
General AI is a type of intelligence that could perform any intellectual task with efficiency like a
human.
The idea behind general AI is to make a system that could be smarter and think like a
human on its own.
Currently, no system exists which could come under general AI and
perform any task as perfectly as a human.
Systems with general AI are still under research, and it will take a lot of effort and time to
develop such systems.
3. Super AI:
Super AI is a level of intelligence of systems at which machines could surpass human intelligence
and can perform any task better than a human with cognitive properties. This refers to
aspects like general wisdom, problem solving and creativity. It is an outcome of general AI.
Some key characteristics of super AI include the ability to think, to
reason, to solve puzzles, to make judgments, to plan, to learn, and to communicate on its own.
Super AI is still a hypothetical concept of Artificial Intelligence. The development of such
systems in reality is still a world-changing task.
B. Based on the functionality
1 Reactive Machines
Purely reactive machines are the most basic types of Artificial Intelligence.
Such AI systems do not store memories or past experiences for future actions.
These machines only focus on current scenarios and react to them with the best possible action.
IBM's Deep Blue system is an example of reactive machines.
Google's AlphaGo is also an example of reactive machines.
2 Limited Memory
Limited memory machines can store past experiences or some data for a short period of time.
These machines can use stored data for a limited time period only.
Self-driving cars are one of the best examples of Limited Memory systems. These cars can
store the recent speed of nearby cars, the distance of other cars, speed limits, and other
information to navigate the road.
3 Theory of Mind
Theory of Mind AI should understand human emotions, people, beliefs, and be able to
interact socially like humans.
This type of AI machine has not been developed yet, but researchers are making a lot of effort and
progress toward developing such AI machines.
4 Self-Awareness
Self-awareness AI is the future of Artificial Intelligence. These machines will be super
intelligent and will have their own consciousness, sentiments, and self-awareness.
These machines will be smarter than the human mind.
Self-aware AI does not yet exist in reality; it is a hypothetical concept.
3.4.1 How humans think: The goal of many researchers is to create strong and general AI that
learns like a human and can solve general problems as the human brain does. Achieving this goal
might require many more years.
How does a human being think? Intelligence or the cognitive process is composed of three main
stages:
Observe and input the information or data in the brain.
Interpret and evaluate the input that is received from the surrounding environment.
Make decisions as a reaction towards what you received as input and interpreted and
evaluated.
AI researchers are simulating the same stages in building AI systems or models. This process
represents the main three layers or components of AI systems.
3.4.2 Mapping human thinking to artificial intelligence components
Because AI is the science of simulating human thinking, it is possible to map the human thinking
stages to the layers or components of AI systems.
In the first stage, humans acquire information from their surrounding environments through human
senses, such as sight, hearing, smell, taste, and touch, through human organs, such as eyes, ears,
and other sensing organs, for example, the hands.
In AI models, this stage is represented by the sensing layer, which perceives information from the
surrounding environment. This information is specific to the AI application. For example, there are
sensing agents such as voice recognition for sensing voice and visual imaging recognition for
sensing images.
The second stage is related to interpreting and evaluating the input data. In AI, this stage is
represented by the interpretation layer, that is, reasoning and thinking about the gathered input that
is acquired by the sensing layer.
The third stage is related to taking action or making decisions. After evaluating the input data, the
interacting layer performs the necessary tasks. Robotic movement control and speech generation
are examples of functions that are implemented in the interacting layer.
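A minimal sketch of these three layers as plain Python functions: a sensing layer that acquires an input, an interpretation layer that reasons about it, and an interacting layer that acts on the decision. The temperature reading and threshold are invented for illustration.

def sensing_layer():
    # Perceive information from the environment (here, a simulated sensor).
    return {"temperature_c": 31.5}

def interpretation_layer(observation):
    # Reason about the gathered input and reach a decision.
    return "cool_down" if observation["temperature_c"] > 28 else "do_nothing"

def interacting_layer(decision):
    # Perform the necessary task based on the decision.
    if decision == "cool_down":
        print("Turning the fan on.")
    else:
        print("No action needed.")

interacting_layer(interpretation_layer(sensing_layer()))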
3.5 Influencers of artificial intelligence
This section explores some of the reasons why AI is taking off now. The following influencers of AI
are described in this section:
Big data: Structured data versus unstructured data
Advancements in computer processing speed and new chip architectures
Cloud computing and APIs
The emergence of data science
3.5.1 Big Data
Big data refers to huge amounts of data. Big data requires innovative forms of information
processing to draw insights, automate processes, and help decision making. Big data can be
structured data that corresponds to a formal pattern, such as traditional data sets and databases.
Also, big data includes semi-structured and unstructured formats, such as word-processing
documents, videos, images, audio, presentations, social media interactions, streams, web pages,
and many other kinds of content.
Structured data versus unstructured data
Traditionally, computers primarily process structured data, that is, information with an
organized structure, such as a relational database that is searchable by simple and
straightforward search engine algorithms or SQL statements.
But, real-world data such as the type that humans deal with constantly does not have a high
degree of organization.
For example, text that is written or spoken in natural language (the language that humans
speak) does not constitute structured data.
Unstructured data is not contained in a regular database and is growing exponentially,
making up most of the data in the world.
Advancements in computer processing speed, new chip architectures, and big data file
systems.
Significant advancements in computer processing and memory speeds enable us to make sense of
the information that is generated by big data more quickly. In the past, statisticians and early data
scientists were limited to working with sample data sets. In recent years, big data and the ability to
process a large amount of data at high speeds have enabled researchers and developers to access
and work with massive sets of data. Processing speeds and new computer chip architectures
contribute to the rapid evolution of AI applications.
3.5.2 Cloud computing and application programming interfaces
What is the cloud? What do you know about cloud computing?
Cloud computing is a general term that describes the delivery of on-demand services,
usually through the internet, on a pay-per-use basis.
These services might be data analysis, social media, video storage, e-commerce, and AI
capabilities that are available through the internet and supported by cloud computing.
In general, application programming interfaces (APIs) expose capabilities and services.
APIs enable software components to communicate with each other easily.
So programming becomes easier and faster.
AI APIs are usually delivered on an open cloud-based platform on which developers can
infuse AI capabilities into digital applications, products, and operations by using one or
more of the available APIs.
E.g.
o IBM delivers Watson AI services over IBM Cloud.
o Amazon AI services are delivered over Amazon Web Services (AWS).
o Microsoft AI tools are available over the MS Azure cloud.
o Google AI services are available in the Google Cloud Platform.
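As an illustrative sketch of consuming an AI capability through a cloud API: the endpoint URL, API key, and response fields below are hypothetical placeholders, not any specific vendor's API.

import json
import urllib.request

API_URL = "https://api.example.com/v1/sentiment"   # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                            # placeholder credential

payload = json.dumps({"text": "The new phone is fantastic!"}).encode("utf-8")
request = urllib.request.Request(
    API_URL,
    data=payload,
    headers={"Content-Type": "application/json",
             "Authorization": f"Bearer {API_KEY}"},
)

# The application sends text over the internet and gets the AI result back,
# without hosting any model itself.
with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())
print(result.get("sentiment"))   # e.g. "positive" in this hypothetical schema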
3.5.3 The emergence of data science
Data science has emerged in the last few years as a new profession that combines several
disciplines, such as statistics, data analysis, machine learning, and others.
The goal of data science is to extract knowledge or insights from data in various forms,
either structured or unstructured, which is like data mining.
After you collect a large enough volume of data, patterns emerge.
Then, data scientists use learning algorithms on these patterns.
Data science uses machine learning and AI to process big data.
3.6 Applications of AI
Artificial Intelligence has various applications in today's society. It is becoming
essential for today's time because it can solve complex problems in an efficient way in
multiple industries, such as Healthcare, entertainment, finance, education, etc. AI is
making our daily life more comfortable and faster.
3.7 AI tools and platforms
Many tools are used in AI, including versions of search and mathematical optimization, logic, methods
based on probability and economics.
AI has developed a large number of tools to solve the most difficult problems in computer science,
like:
✓ Search and optimization
✓ Logic
✓ Probabilistic methods for uncertain reasoning
✓ Classifiers and statistical learning methods
✓ Neural networks
✓ Control theory
✓ Languages
The most common artificial intelligence platforms include Microsoft AZURE Machine Learning,
Google Cloud Prediction API, IBM Watson, TensorFlow, Infosys Nia, Wipro HOLMES, API.AI,
Premonition, Rainbird, Ayasdi, MindMeld, and Meya.
3.8 Sample AI applications
I. Commuting
• Google's AI-Powered Predictions
• Ridesharing Apps Like Uber and Lyft
• Commercial Flights Use an AI Autopilot
II. Email
• Spam Filters
• Smart Email Categorization
III. Social Networking
• Facebook - When you upload photos to Facebook, the service automatically highlights faces and
suggests friends to tag.
IV. Online Shopping
Search - Your Amazon searches ("ironing board", "pizza stone", "Android charger", etc.) quickly
return a list of the most relevant products related to your search.
Recommendations - You see recommendations for products you're interested in as "customers
who viewed this item also viewed" and "customers who bought this item also bought", as well as
via personalized recommendations on the home page, bottom of item pages, and through email.
Amazon uses artificial neural networks to generate these product recommendations.
V. Mobile Use
Voice-to-Text - A standard feature on smartphones today is voice-to-text. By pressing a button
or saying a particular phrase ("Ok Google", for example), you can start speaking and your phone
converts the audio into text.
Smart Personal Assistants - Now that voice-to-text technology is accurate enough to rely on for
basic conversation, it has become the control interface for a new generation of smart personal
assistants. Siri and Google Now (now succeeded by the more sophisticated Google Assistant)
can perform internet searches, set reminders, and integrate with your calendar.
Microsoft has followed suit with Cortana, its own AI assistant that comes pre-loaded on
Windows computers and Microsoft smartphones.
Chapter 4: Internet of Things (IoT)
Outline
Overview of IoT
o What is IoT?
o History of IoT
o IoT − Advantages
o IoT – Disadvantages
o Challenges of IoT
How does it work?
o Architecture of IoT
o Devices and Networks
IoT Tools and Platforms
o IoT Based Smart Home
o IoT Based Smart City
o IoT Based Smart Farming
4.1 Overview of IoT
The most important features of IoT include artificial intelligence, connectivity, sensors, active
engagement, and small device use. A brief review of these features is given below –
AI − IoT essentially makes virtually anything "smart", meaning it enhances every aspect of life
with the power of data collection, artificial intelligence algorithms, and networks. This can
mean something as simple as enhancing your refrigerator and cabinets to detect when milk and
your favorite cereal run low, and to then place an order with your preferred grocer.
Connectivity − New enabling technologies for networking and specifically IoT networking,
mean networks are no longer exclusively tied to major providers. Networks can exist on a
much smaller and cheaper scale while still being practical. IoT creates these small networks
between its system devices.
Sensors − IoT loses its distinction without sensors. They act as defining instruments that
transform IoT from a standard passive network of devices into an active system capable of
real-world integration.
Active Engagement − Much of today's interaction with connected technology happens
through passive engagement. IoT introduces a new paradigm for active content, product, or
service engagement.
Small Devices − Devices, as predicted, have become smaller, cheaper, and more powerful over
time. IoT exploits purpose-built small devices to deliver its precision, scalability, and versatility.
4.1.1 What is IoT?
According to the Internet Architecture Board's (IAB) definition, IoT is the networking of smart
objects, meaning a huge number of devices intelligently communicating in the presence of
internet protocol that cannot be directly operated by human beings but exist as components in
buildings, vehicles or the environment.
According to the Internet Engineering Task Force (IETF) organization's definition, IoT is the
networking of smart objects in which smart objects have some constraints such as limited
bandwidth, power, and processing accessibility for achieving interoperability among smart
objects.
The term Internet of Things (IoT) according to the 2020 conceptual framework is expressed
through a simple formula such as:
IoT= Services+ Data+ Networks + Sensors
Generally, The Internet of Things (IoT) is the network of physical objects or "things"
embedded with electronics, software, sensors, and network connectivity, which enables these
objects to collect and exchange data.
IoT is a system of interrelated computing devices, mechanical and digital machines, objects,
animals or people that are provided with unique identifiers and the ability to transfer data over
a network without requiring human-to-human or human-to-computer interaction.
IoT systems allow users to achieve deeper automation, analysis, and integration within a
system. They improve the reach of these areas and their accuracy.
The internet of things (IoT) has found its application in several areas such as connected
industry, smart-city, smart-home, smart-energy, connected car, smart agriculture, connected
building and campus, health care, logistics, among other domains (see Figure 4.1).
Figure 4.1 IoT in Different Domains
4.1.2 IoT – Advantages
The advantages of IoT span across every area of lifestyle and business. Here is a list of some of the
advantages that IoT has to offer:
Improved Customer Engagement − Current analytics suffer from blind spots and
significant flaws in accuracy; and as noted, engagement remains passive.
Technology Optimization − The same technologies and data which improve the customer
experience also improve device use, and aid in more potent improvements to technology.
Reduced Waste − IoT makes areas of improvement clear. Current analytics give us
superficial insight, but IoT provides real-world information leading to the more effective
management of resources.
Enhanced Data Collection − Modern data collection suffers from its limitations and its
design for passive use.
4.1.3 IoT-Disadvantages
As the number of connected devices increases and more information is shared between
devices, the potential that a hacker could steal confidential information also increases.
If there's a bug in the system, it's likely that every connected device will become
corrupted.
Since there's no international standard of compatibility for IoT, it's difficult for devices
from different manufacturers to communicate with each other.
Enterprises may eventually have to deal with massive numbers, maybe even millions, of IoT
devices, and collecting and managing the data from all those devices will be challenging.
4.1.4 Challenges of IoT
Security − IoT creates an ecosystem of constantly connected devices communicating over
networks.
Privacy − The sophistication of IoT provides substantial personal data in extreme detail
without the user's active participation.
Complexity − Some find IoT systems complicated in terms of design, deployment, and
maintenance given their use of multiple technologies and a large set of new enabling
technologies.
Flexibility − Many are concerned about the flexibility of an IoT system to integrate easily
with another.
Compliance − IoT, like any other technology in the realm of business, must comply with
regulations.
4.2 How does it work?
An IoT ecosystem consists of web-enabled smart devices that use embedded processors,
sensors and communication hardware to collect, send and act on data they acquire from their
environments.
IoT devices share the sensor data they collect by connecting to an IoT gateway or another edge
device where data is either sent to the cloud to be analyzed or analyzed locally.
The connectivity, networking and communication protocols used with these web-enabled
devices largely depend on the specific IoT applications deployed.
4.2.1 Architecture of IoT
In general, an IoT device can be explained as a network of things that consists of hardware,
software, network connectivity, and sensors. Hence, the architecture of IoT devices
comprises four major components: sensing, network, data processing, and application
layers.
Sensing Layer - The main purpose of the sensing layer is to identify any phenomena in the
devices' periphery and obtain data from the real world.
Figure 4.2 Architecture of IoT
Motion Sensors: Motion sensors measure the change in motion as well as the orientation of
the devices.
There are two types of motions one can observe in a device: linear and angular motions.
Environmental Sensors: Sensors such as light sensors, pressure sensors, etc. are embedded
in IoT devices to sense the change in environmental parameters in the device's periphery.
Position sensors: Position sensors of IoT devices deal with the physical position and
location of the device.
The most common position sensors used in IoT devices are magnetic sensors and Global
Positioning System (GPS) sensors.
Network Layer : The network layer acts as a communication channel to transfer data, collected
in the sensing layer, to other connected devices.
In IoT devices, the network layer is implemented by using diverse communication technologies
(e.g., Wi-Fi, Bluetooth, Zigbee, Z-Wave, LoRa, cellular network, etc.) to allow data flow between
other devices within the same network.
Data Processing Layer: The data processing layer consists of the main data processing unit
of IoT devices.
The data processing layer takes data collected in the sensing layer and analyses the data to make
decisions based on the result.
In some IoT devices (e.g., smart watch, smart home hub, etc.), the data processing layer also saves
the result of the previous analysis to improve the user experience.
Application Layer: The application layer implements and presents the results of the data
processing layer to accomplish disparate applications of IoT devices.
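A compact sketch of these four layers as plain Python functions, simulating one pass of sensed data through the device; the sensor reading, JSON "network" payload, and threshold are invented for illustration.

import json
import random

def sensing_layer():
    # Sensing layer: obtain data from the real world (simulated here).
    return {"soil_moisture": round(random.uniform(0.0, 1.0), 2)}

def network_layer(reading):
    # Network layer: serialize the reading as it would travel to a gateway.
    return json.dumps(reading)

def data_processing_layer(payload):
    # Data processing layer: analyze the data and decide on an action.
    moisture = json.loads(payload)["soil_moisture"]
    return "irrigate" if moisture < 0.3 else "idle"

def application_layer(decision):
    # Application layer: present the result to the end application.
    print(f"Smart-farm controller decision: {decision}")

application_layer(data_processing_layer(network_layer(sensing_layer())))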
4.2.2 Devices and Networks
Connected devices are part of a scenario in which every device talks to other related devices in
an environment to automate home and industrial tasks, and to communicate usable sensor data
to users, businesses and other interested parties.
Consumer connected devices include smart TVs, smart speakers, toys, wearables, and smart
appliances.
Figure 4.3 Networked IoT Devices
4.3 IoT Tools and Platforms
There are many vendors in the industrial IoT platform marketplace, offering remarkably similar
capabilities and methods of deployment. These IoT Platform Solutions are based on the Internet of
Things and cloud technology. They can be used in areas of smart home, city, enterprise, home
automation, healthcare or automotive, just to name a few.
4.4 Applications of IoT
Applications of IoT
o Agriculture
o Consumer Use
o Healthcare
o Insurance
o Manufacturing
o Retail
o Transportation
o Utilities
4.4.1 IoT Based Smart Home
Smart Home initiative allows subscribers to remotely manage and monitor different home devices
from anywhere via smartphones or over the web with no physical distance limitations. With the
ongoing development of mass-deployed broadband internet connectivity and wireless technology, the
concept of a Smart Home has become a reality where all devices are integrated and interconnected via
the wireless network. These "smart" devices have the potential to share information with each other
given the permanent availability to access the broadband internet connection.
Remote Control Appliances: Switching appliances on and off remotely to avoid accidents and
save energy.
Weather: Displays outdoor weather conditions such as humidity, temperature, pressure, wind
speed and rain levels with the ability to transmit data over long distances.
Smart Home Appliances: Refrigerators with LCD screens telling what's inside, food that's
about to expire, and ingredients you need to buy, with all the information available on a
smartphone app. Washing machines allowing you to monitor the laundry remotely. Kitchen
ranges with an interface to a smartphone app allowing remotely adjustable
temperature control and monitoring of the oven's self-cleaning feature.
Safety Monitoring: cameras, and home alarm systems making people feel safe in their daily
life at home.
Intrusion Detection Systems: Detection of window and door openings and violations to prevent
intruders.
Energy and Water Use: Energy and water supply consumption monitoring to obtain advice on
how to save cost and resources, & many more.
4.4.2 IoT Based Smart City
In cities, the development of smart grids, data analytics, and autonomous vehicles will provide an
intelligent platform to deliver innovations in energy management, traffic management, and
security, sharing the benefits of this technology throughout society.
Structural Health: Monitoring of vibrations and material conditions in buildings, bridges
and historical monuments.
Lighting: intelligent and weather-adaptive lighting in street lights.
Safety: Digital video monitoring, fire control management, public announcement systems.
Transportation: Smart Roads and Intelligent High-ways with warning messages and
diversions according to climate conditions and unexpected events like accidents or traffic
jams.
Smart Parking: real-time monitoring of available parking spaces in the city, enabling
residents to identify and reserve the closest available spaces (see the sketch after this list).
Waste Management: Detection of rubbish levels in containers to optimize the trash
collection routes. Garbage cans and recycle bins with RFID tags allow the sanitation staff
to see when garbage has been put out.
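The following minimal sketch, with an invented data model, shows how per-space occupancy reports from parking sensors could be aggregated into a real-time availability count per zone; the zone labels and tuple layout are illustrative assumptions.

# Minimal sketch: aggregating occupancy reports from parking-space sensors
# into a count of free spaces per zone.
from collections import defaultdict

def availability_by_zone(reports):
    """reports: iterable of (zone, space_id, occupied) tuples from sensors."""
    free = defaultdict(int)
    for zone, _space_id, occupied in reports:
        if not occupied:
            free[zone] += 1
    return dict(free)

if __name__ == "__main__":
    sample = [("A", 1, True), ("A", 2, False), ("B", 1, False), ("B", 2, False)]
    print(availability_by_zone(sample))  # {'A': 1, 'B': 2}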
4.4.3 IoT Based Smart Farming
Green Houses: controlling micro-climate conditions to maximize the production and quality of
fruits and vegetables (a minimal control sketch follows this list).
Compost: Control of humidity and temperature levels in alfalfa, hay, straw, etc. to prevent
fungus and other microbial contaminants.
Animal Farming/Tracking: location and identification of animals grazing in open
pastures or in big stables; study of ventilation and air quality in farms and
detection of harmful gases from excrement.
Offspring Care: control of growing conditions of the offspring in animal farms to ensure
their survival and health.
Field Monitoring: Reducing spoilage and crop waste with better monitoring, accurate
ongoing data obtaining, and management of the agriculture fields, including better control
of fertilizing, electricity and watering.
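As a minimal sketch of the greenhouse micro-climate control mentioned above, the code below applies simple hysteresis to decide whether a vent should be open. The target temperature and dead band are assumed values, and the vent itself is simulated rather than driven by real hardware.

# Minimal sketch (assumed thresholds): hysteresis control of a greenhouse vent.
TARGET_C = 26.0     # assumed target temperature
HYSTERESIS_C = 1.5  # dead band to avoid rapidly toggling the vent

def decide_vent(temperature_c, vent_open):
    """Return the new vent state for the current temperature reading."""
    if temperature_c > TARGET_C + HYSTERESIS_C:
        return True   # too warm: open the vent
    if temperature_c < TARGET_C - HYSTERESIS_C:
        return False  # too cool: close the vent
    return vent_open  # within the dead band: keep the current state

if __name__ == "__main__":
    state = False
    for reading in [24.0, 27.9, 26.5, 23.9]:
        state = decide_vent(reading, state)
        print(f"{reading:.1f} C -> vent {'open' if state else 'closed'}")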
5 Chapter 5: Augmented Reality (AR)
Overview of augmented reality
Virtual reality (VR), Augmented Reality (AR) vs Mixed reality (MR)
o Virtual Reality (VR)
o Augmented Reality (AR)
o Mixed Reality (MR)
The architecture of AR Systems
Applications of AR Systems
o AR In education
o AR In Medicine
o AR In Entertainment
1. Overview of augmented reality
The fundamental idea of AR is to combine, or mix, the view of the real environment with additional,
virtual content that is presented through computer graphics. Its convincing effect is achieved by
ensuring that the virtual content is aligned and registered with the real objects. As a person moves in
an environment and their perspective view of real objects changes, the virtual content should also be
presented from the same perspective.
Augmented reality (AR) is a form of emerging technology that allows users to overlay computer-
generated content in the real world. AR refers to a live view of a physical real-world environment
whose elements are merged with augmented computer-generated images creating a mixed reality. The
augmentation is typically done in real-time and in semantic context with environmental elements. By
using the latest AR techniques and technologies, the information about the surrounding real world
becomes interactive and digitally usable. Through this augmented vision, a user can digitally interact
with and adjust information about their surrounding environment.
Augmented Reality (AR) is a real-time direct or indirect view of a physical real-world environment
that has been enhanced, or augmented, by adding virtual computer-generated information to it. Augmented
reality is the integration of digital information with the user's environment in real-time. Unlike virtual
reality, which creates a totally artificial environment, augmented reality uses the existing environment
and overlays new information on top of it: a live direct or indirect view of a physical, real-world
environment whose elements are augmented by computer-generated sensory input such as sound,
video, graphics, or GPS data.
2. Virtual reality (VR), Augmented Reality (AR) vs Mixed reality (MR)
With constant development in computer vision and the exponential advancement of
computer processing power, virtual reality (VR), augmented reality (AR), and mixed
reality (MR) technologies are becoming more and more prominent. With some overlap in
the applications and functions of these emerging technologies, the terms are sometimes
confused or used incorrectly. The main differences between them are explained
below (see Figure 5.1).
Figure 5.1 Paul Milgram's Reality-Virtuality (RV) Continuum
5.2.1 Virtual Reality (VR)
VR is fully immersive, which tricks your senses into thinking you're in a different
environment or world apart from the real world.
Using a head-mounted display (HMD) or headset, you'll experience a computer-generated
world of imagery and sounds in which you can manipulate objects and move around using
haptic controllers while tethered to a console or PC.
It is also called a computer-simulated reality. It refers to computer technologies using reality
headsets to generate realistic sounds, images and other sensations that replicate a real
environment or create an imaginary world. An advanced VR environment will engage all five
senses (taste, sight, smell, touch, sound), but it is important to note that this is not always
possible (see Figure 5.2).
Using VR devices such as HTC Vive, Oculus Rift or Google Cardboard, users can be
transported into a number of real-world and imagined environments. The most advanced VR
experiences even provide freedom of movement – users can move in a digital environment
and hear sounds. Moreover, special hand controllers can be used to enhance VR
experiences.
Figure 5.2 Example of Immersive Technology
Most VR headsets are connected to a computer (Oculus Rift) or a gaming console
(PlayStation VR) but there are standalone devices (Google Cardboard is among the most
popular) as well. Most standalone VR headsets work in combination with smartphones –
you insert a smartphone, wear the headset, and immerse yourself in virtual reality (see Figure 5.3).
Figure 5.3 VR Case that Inserts a Smartphone
5.2.2 Augmented Reality (AR)
In augmented reality, users see and interact with the real world while digital content is
added to it.
Augmented Reality (AR) is a live, direct or indirect view of a physical, real-world
environment whose elements are augmented (or supplemented) by computer-generated
sensory input such as sound, video, graphics or GPS data.
Figure 5.4 Direct and Indirect Augmentation of Objects
5.2.3 Mixed Reality (MR)
Mixed Reality (MR), sometimes referred to as hybrid reality, is the merging of real and
virtual worlds to produce new environments and visualizations where physical and digital
objects co-exist and interact in real-time.
The key characteristic of MR is that the synthetic content and the real-world content are
able to react to each other in real-time.
Figure 5.5 Mixed Reality in Engineering and Medicine
MR allows you to see and immerse yourself in the world around you even as you interact with
a virtual environment using your own hands—all without ever removing your headset.
Figure 5.6 Mixed Reality in Entertainment
3. The architecture of AR Systems
The first Augmented Reality Systems (ARS) were usually designed around three main blocks,
as illustrated in Figure 5.7: (1) the Infrastructure Tracker Unit, (2) the Processing Unit, and (3) the Visual Unit.
The Infrastructure Tracker Unit was responsible for collecting data from the real world and sending it
to the Processing Unit, which mixed the virtual content with the real content and sent the result to the
Video Out module of the Visual Unit. Some designs used a Video In module to acquire the data required
by the Infrastructure Tracker Unit.
Figure 5.7 Augmented Reality Systems (ARS) standard architecture
The Visual Unit can be classified into two types of system, depending on the visualization
technology used:
1. Video see-through: it uses a Head-Mounted Display (HMD) that employs video mixing and
displays the merged images on a closed-view HMD.
2. Optical see-through: it uses an HMD that employs optical combiners to merge the images within an
open-view HMD.
HMDs are currently the dominant display technology in the AR field. However, they fall short in several
aspects, such as ergonomics, high price, and relatively low mobility due to their size and connectivity
requirements. An additional problem with HMDs is interaction with the real environment: virtual
interactive zones are placed around the user, and colliding with these zones is hard because of the
difficulty of interacting with multiple points at different depths. Alternative approaches to developing
ARS involve the use of monitors and tablets. Monitors are used as an option for indirect view, since the
user does not look directly into the mixed world. Tablets are used for direct view, since the user points
the camera at the scene and looks directly into the mixed world. Both approaches still make
collision-based interaction difficult.
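As a rough, non-authoritative illustration of the three-block structure described above, the Python sketch below wires an Infrastructure Tracker Unit, a Processing Unit, and a Visual Unit together as plain classes. The pose values and the string "frame" are stand-ins for real tracking data and rendered graphics, not part of any actual ARS implementation.

# Minimal sketch of the ARS data flow:
# Infrastructure Tracker Unit -> Processing Unit -> Visual Unit (Video Out).
class InfrastructureTrackerUnit:
    def capture_pose(self):
        # In practice: estimate the viewer's pose from sensors or a Video In feed.
        return {"position": (0.0, 0.0, 2.0), "orientation_deg": (0.0, 15.0, 0.0)}

class ProcessingUnit:
    def mix(self, pose, virtual_content):
        # Register the virtual content against the tracked real-world pose.
        return f"{virtual_content} rendered at {pose['position']}, rotated {pose['orientation_deg']}"

class VisualUnit:
    def video_out(self, frame):
        # Video see-through: show the merged frame on a closed-view HMD.
        print(f"HMD displays: {frame}")

if __name__ == "__main__":
    tracker, processor, display = InfrastructureTrackerUnit(), ProcessingUnit(), VisualUnit()
    display.video_out(processor.mix(tracker.capture_pose(), "3D arrow overlay"))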
4. Applications of AR Systems
Technology is ever-changing and ever-growing. One of the newest developing technologies is
augmented reality (AR), which can be applied to many different disciplines such as education,
medicine, entertainment, military, etc. Let us see some of its applications.
5.4.1 AR In education
Augmented reality allows flexibility in use that is attractive to education. AR technology can be
utilized through a variety of mediums including desktops, mobile devices, and smartphones. The
technology is portable and adaptable to a variety of scenarios. AR can be used to enhance content
and instruction within the traditional classroom, supplement instruction in the special education
classroom, extend content into the world outside the classroom, and be combined with other
technologies to enrich their individual applications. The following are important reasons for
using augmented reality in education:
Affordable learning materials - posters, digital illustrations, physical models, and prototypes are
very expensive, and schools cannot afford to purchase all the supplementary materials they
would like. Using AR technology avoids much of this investment in physical materials.
Besides, students can access learning materials and interact with them anytime.
Interactive lessons - when AR technology is used in classrooms, students can view models on
their own smartphones and get a better idea of the concepts they are studying. That increases
engagement and reinforces learning.
Higher engagement - when teachers integrate augmented reality into their lectures, they attract
the attention of their students and make lessons more effective. When students are interested, it
is much easier to make them work more productively.
Higher retention - using the AR app, students can get access to augmented models that
represent any real objects from a famous monument or work of art to a molecule. Besides,
students can get access to a website with specific information. When learning with AR
technology, students use different senses and retain knowledge for longer.
Boost intellectual curiosity - augmented reality makes students more excited about learning
certain subjects. Modern students were born in a digital era, so they will always be excited by
innovative technologies that can help them learn new ideas and develop their critical thinking
skills.
When using AR technology in the classroom, teachers can create an authentic learning environment for
students with different learning styles.
5.4.2 AR In Medicine
Augmented reality is one of the current technologies changing all industries, including healthcare
and medical education. The purpose of any invention and technology is to simplify our life.
Augmented reality has the potential to play a big role in improving the healthcare industry. Only a
few years after the first implementations of augmented reality in medicine, it has already taken an
important place in doctors' and nurses' routines, as well as in patients' lives. This new technology is
moving medicine and healthcare towards more safety and efficiency. For now, augmented
reality has already made significant changes in the following medical areas:
surgery (minimally invasive surgery);
education of future doctors;
diagnostics;
AR tools may also aid in detecting the signs of depression and other mental illnesses by
reading facial expressions, voice tones, and physical gestures.
In medicine, AR has the following applications:
1) Describing symptoms - Have you ever been in a situation where it was hard to describe to the
doctor what was bothering you? It is a common problem for all of us, rooted in overreaction and a
lack of knowledge, and most importantly it affects the accuracy of the diagnosis. The first steps
toward a solution have already been made. To improve patient education, the medical app
AyeDecide uses augmented reality to show a simulation of vision harmed by different diseases.
It helps patients understand their conditions and describe their symptoms correctly.
2) Nursing care - About 40% of first intravenous injections fail, and this ratio is even higher for
children and elderly patients. AccuVein uses augmented reality to improve this statistic: a
handheld scanner projects onto the skin and shows the patients' veins, increasing the rate of
finding the vein on the first try by 3.5 times. That is why this invention has gained wide
recognition among the general public and medical staff.
3) Surgery - Nowhere does augmented reality have such practical application as in medicine,
especially in surgery, where it literally helps to save lives. Three-dimensional reconstructions of
organs or tumors help surgeons become more efficient during operations. Spinal surgery, for
example, is usually a long and difficult process, but with AR it can take less time, carry fewer
risks, and produce better results. The Israeli startup Augmedics has created an augmented reality
headset for spine surgeons that overlays a 3D model of the CT scan on the spine, giving the
surgeon a kind of "X-ray" vision.
4) Ultrasounds - Some time ago, ultrasound brought a small revolution to medicine. Today it has
another chance to do the same with augmented reality. Several AR software companies have
already developed handheld ultrasound scanners that, with the help of smart glasses, work like
traditional ones. It is hard to overestimate the usefulness of this technology, especially in
developing countries, in military medicine (on the battlefield), and even in the ambulance.
5) Diabetes management - In 2017, the number of adults struggling with diabetes worldwide reached
425 million, and the number of diagnosed people is increasing every year. In 2014, Google
revealed plans for a smart contact lens (Google Contact Lens) whose main function would be to
measure glucose levels in tears. It would help people with the disease live the life they are used
to, without constant worries about their blood sugar level.
6) Navigation - Using AR in navigation apps has already become commonplace. By pointing
your phone at the city landscape, you get information about nearby objects of interest
(museums, hotels, shops, metro stations, etc.). In the same way, AR can provide
information about the nearest hospitals. For example, the EHBO app helps to find the
nearest AEDs (automated external defibrillators).
Generally, AR provides the following benefits to patients and healthcare workers:
Reduce the risks associated with minimally invasive surgery.
Better informed decisions about the right treatment and illness prevention.
Make procedures more tolerable.
Better aftercare
Medical training and education.
Assistance in medical procedures and routine tasks.
5.4.3 AR In Entertainment
Augmented reality can be used in various "entertainment" industries, as entertainment covers quite
a number of different industries – music, movies, live shows, games – and all of them can benefit
from using augmented reality.
AR in games - the AR games were praised for increasing physical activity in people – you
actually have to move around to find your target, for example, Pokémon. At the same time,
there are complaints that players could cause various incidents and accidents being too
engrossed in the game. In any case, Pokémon GO has rightfully earned its popularity and
opened the world of AR games to us.
AR in music - music is not only about listening to favorite tracks and putting together
playlists. When we like a piece, we often want to find out more about its background: the
performers' bios, the lyrics of the song, the making of the recording or the music video.
Augmented reality can do all that and much more providing complete information on the
track or its performer. Augmented reality can enhance live performances by illustrating the
story told by a track or displaying the way it was created by the band.
AR on TV - this may seem a bit far-fetched, as television already shows a virtual world,
so adding augmented reality raises it to the second power. However, some experiments
in fusing augmented reality with TV are already being made, with the promise of
future enhancements. One way of integrating augmented reality in television is adding
supplementary information to what is shown on the TV screen – such as match scores,
betting options, and the like.
AR in eSports - recently, the industry of eSports has been gaining popularity in all parts of
the globe. Competitive online gaming has become as fascinating as real sports, and the
technology is following it closely with new solutions and unusual implementations.
Augmented reality turns eSports shows into interactive experiences allowing the watchers
to become participants.
AR in the theater - in this sector, augmented reality can serve not only for entertainment
purposes but also for the purposes of accessibility. The possibility to overlay virtual objects
over the real environment can be used, for example, for subtitling in various theater shows.
Now, many theaters use LED displays either to provide subtitles for translation or to assist
hearing-impaired visitors. However, LED equipment is not available in each theater and
even when it is, it can distract the viewers from the show.
6 Chapter 6: Ethics and Professionalism of Emerging Technologies
Outline
Technology and ethics.
New ethical questions
o General ethical principles
o Professional responsibilities.
o Professional leadership principles
Digital privacy
o Information Privacy
o Communication Privacy
o Individual Privacy
o Some digital privacy principles
Accountability and trust
Threats and challenges
o Ethical and regulatory challenges
o Threats.
1. Technology and ethics
The Internet boom has provided many benefits for society, allowing the creation of new tools
and new ways for people to interact.
As with many technological advances, however, the Internet has not been without negative
aspects.
Technology can serve to promote or restrict human rights.
The Information Society should foster the use of emerging technologies in such a way as to
maximize the benefits that they provide while minimizing the harms.
Ethics is particularly important for the accountancy profession, with a code for professional
ethics based on five basic principles – integrity, objectivity, competence and due care,
confidentiality, and professional behaviour.
2. New ethical questions
The increasing use of big data, algorithmic decision-making, and artificial intelligence can
enable more consistent, evidence-based and accurate judgments or decisions, often more
quickly and efficiently.
However, these strengths can potentially have a darker side too, throwing up questions around
the ethical use of these fairly new technologies.
o For example, outputs can be based on biased data, which could lead to discriminatory
outcomes.
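As a small illustration of how such bias might be surfaced, the hypothetical Python sketch below compares approval (selection) rates across groups in the style of a demographic-parity check. The data, group labels, and 0.2 tolerance are invented for illustration and not drawn from any real system.

# Minimal sketch: compare selection rates of an automated decision across groups.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs produced by an automated system."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

if __name__ == "__main__":
    sample = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = selection_rates(sample)
    print(rates)
    if max(rates.values()) - min(rates.values()) > 0.2:  # assumed tolerance
        print("Warning: large gap in selection rates; review the training data.")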
6.2.1 General ethical principles
1. Contribute to society and to human well-being, acknowledging that all people are
stakeholders in computing.
2. Avoid harm.
3. Be honest and trustworthy.
4. Be fair and take action not to discriminate.
5. Respect the work required to produce new ideas, inventions, creative works, and
computing artifacts.
6. Respect privacy.
7. Honor confidentiality.
6.2.2 Professional responsibilities.
1. Strive to achieve high quality in both the processes and products of professional work.
2. Maintain high standards of professional competence, conduct, and ethical practice.
3. Know and respect existing rules pertaining to professional work.
4. Accept and provide appropriate professional review.
5. Give comprehensive and thorough evaluations of computer systems and their impacts,
including analysis of possible risks.
6. Perform work only in areas of competence.
7. Foster public awareness and understanding of computing, related technologies, and
their consequences.
8. Access computing and communication resources only when authorized or when
compelled by the public good.
9. Design and implement systems that are robustly and usably secure.
6.2.3 Professional leadership principles.
1. Ensure that the public good is the central concern during all professional computing
work.
2. Articulate, encourage acceptance of and evaluate fulfillment of social responsibilities
by members of the organization or group.
3. Manage personnel and resources to enhance the quality of working life.
4. Articulate, apply, and support policies and processes that reflect the principles of the
Code.
5. Create opportunities for members of the organization or group to grow as
professionals.
6. Use care when modifying or retiring systems. Interface changes, the removal of
features, and even software updates have an impact on the productivity of users and
the quality of their work.
7. Recognize and take special care of systems that become integrated into the
infrastructure of society.
6.3 Digital privacy
Digital Privacy is the protection of personally identifiable or business identifiable information that
is collected from respondents through information collection activities or from other sources. It is a
collective term that encompasses three related sub-categories: information privacy,
communication privacy, and individual privacy. It is often used in contexts that promote advocacy
on behalf of individual and consumer privacy rights in digital spheres, and is typically used in
opposition to the business practices of many e-marketers, businesses, and companies that collect
and use such information and data.
6.3.1 Information Privacy
In the context of digital privacy, information privacy is the notion that individuals should have the
freedom, or right, to determine how their digital information, mainly that pertaining to personally
identifiable information, is collected and used. Every country has various laws that dictate how
information may be collected and used by companies. Some of those laws are written to give
agency to the preferences of individuals/consumers in how their data is used. In other places, like
in the United States, privacy law is argued by some to be less developed in this regard. For
example, some legislation, or the lack of it, allows companies to self-regulate their practices for
collecting and disseminating consumer information.
6.3.2 Communication Privacy
In the context of digital privacy, communication privacy is the notion that individuals should have
the freedom, or right, to communicate information digitally with the expectation that their
communications are secure; meaning that messages and communications will only be accessible to
the sender's original intended recipient. However, communications can be intercepted or delivered
to other recipients without the sender's knowledge, in a multitude of ways. Communications can be
intercepted directly through various hacking methods; this is expanded upon further below.
Communications can also be delivered to recipients unbeknownst to the sender because of false
assumptions about the platform or medium used to send the information. For example, failing to
read a company's privacy policy regarding communications on its platform could lead one to
assume that those communications are protected when in fact they are not. Additionally,
companies frequently lack transparency in how they use information; this can be both intentional
and unintentional. Discussion of communication privacy therefore requires consideration of the
technological methods of protecting information and communication in digital media, the
effectiveness and ineffectiveness of such methods and systems, and the development and
advancement of new and current technologies.
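As one small example of such a technological method, the sketch below uses symmetric encryption (the Fernet recipe from the third-party Python "cryptography" package) to illustrate the idea that only a holder of the key can read a message. Real messaging systems typically use end-to-end protocols with asymmetric key exchange, which this sketch does not attempt to show; the message text is invented.

# Minimal sketch: only a holder of the shared key can recover the message.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared secret held only by sender and recipient
channel = Fernet(key)

token = channel.encrypt(b"meet at 10:00")  # an eavesdropper sees only this ciphertext
print(token)
print(channel.decrypt(token))              # the intended recipient recovers the message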
6.3.3 Individual Privacy
In the context of digital privacy, individual privacy is the notion that individuals have a right to
exist freely on the internet, in that they can choose what types of information they are exposed to,
and more importantly that unwanted information should not interrupt them. An example of a digital
breach of individual privacy would be an internet user receiving unwanted ads and emails/spam, or
a computer virus that forces the user to take actions they otherwise wouldn't. In such cases the
individual does not, during that moment, exist digitally without interruption from unwanted
information; thus, their individual privacy has been infringed upon.
6.3.4 Some digital privacy principles
Data Minimization: collect the minimal amount of information necessary from individuals
and businesses, consistent with the organization's mission and legal requirements (a minimal
sketch follows this list).
Transparency: Notice covering the purpose of the collection and use of identifiable
information will be provided in a clear manner. Information collected will not be used for
any other purpose unless authorized or mandated by law.
Accuracy: Information collected will be maintained in a sufficiently accurate, timely, and
complete manner to ensure that the interests of the individuals and businesses are
protected.
Security: Adequate physical and IT security measures will be implemented to ensure that
the collection, use, and maintenance of identifiable information are properly safeguarded
and the information is promptly destroyed in accordance with approved records control
schedules.
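To illustrate the data-minimization principle in code, the hypothetical sketch below keeps only the fields a task needs and replaces the direct identifier with a salted hash before storage. The field names and salt handling are invented, and this is not a complete anonymization or pseudonymization scheme.

# Minimal sketch: keep only required fields and pseudonymize the identifier.
import hashlib

REQUIRED_FIELDS = {"age_band", "postcode_prefix"}  # assumed needs of the task

def minimize(record, salt):
    """Return a reduced record with the identifier replaced by a pseudonym."""
    reduced = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    reduced["pseudonym"] = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:16]
    return reduced

if __name__ == "__main__":
    raw = {"email": "alice@example.com", "age_band": "30-39",
           "postcode_prefix": "AB1", "full_address": "1 High Street"}
    print(minimize(raw, salt="local-secret"))  # email and full_address are dropped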
6.4 Accountability and trust
When emerging technology creates far-reaching and rapid change, it can also bring new risks.
Understanding and mitigating them will help to build confidence. Often legal and regulatory
frameworks haven't kept pace with digital transformation, and organizations are seeking guidance. The
challenge is exacerbated by the speed at which technological change is occurring and the breadth of its
adoption – which is introducing new risks that demand new responses. Emerging technologies can
provide improved accuracy, better quality and cost efficiencies for businesses in every sector. They
can enhance trust in the organization's operations and financial processes, which is crucial for
sustainable success. But this can produce a paradox: the very solutions that can be used to better
manage risk, increase transparency and build confidence are often themselves the source of new risks,
which may go unnoticed. There's a danger that the use of technology will degrade people's
willingness to judge and intervene because they feel that they are less personally connected to
consumers and consumer outcomes – the logic of the machine has taken over from individual
responsibility.
Accountability is the obligation of an individual or organization to account for its activities, accept
responsibility for them, and disclose the results in a transparent manner. It also includes
responsibility for money or other entrusted property.
6.5 Threats and challenges
6.5.1 Ethical and regulatory challenges
With technology moving at a fast pace, keeping up has always been a challenge for security. As security
professionals, we need to keep pace with ever-changing technology and be aware of AI, IoT,
Big Data, Machine Learning, and the like. Security is no longer just guards, guns, and gates;
security professionals need to play a larger role in supporting the business, and should be able to
understand the language of business and talk to its leaders in their own language. With growing
needs, cyber and data security is gaining prominence, which requires security practitioners to focus
on the business need for securing data and to understand security and risk from a business
perspective by interacting extensively with the business community to understand their
requirements.
Emerging technologies are already impacting how we live and work. They're also changing how
we approach, plan, and integrate security operations. Certainly, we are living in an era where
innovation, agility, and imagination are all essential in order to keep pace with the exponential
technological transformation taking place. For security, both physical and cyber, the equation is
the same, catalyzing many new potential applications for emerging technologies. Areas where
emerging technologies are making an impact include:
1. Counter-terrorism and law enforcement informatics via predictive analytics and artificial
intelligence.
2. Real-time horizon scanning and data mining for threats and information sharing
3. Automated cybersecurity and information assurance
4. Enhanced Surveillance (chemical and bio-detection sensors, cameras, drones, facial
recognition, license plate readers)
5. Simulation and augmented reality technologies for training and modeling
6. Safety and security equipment (including bullet and bomb proof) made with lighter and
stronger materials
7. Advanced forensics enabled by enhanced computing capabilities (including future
quantum computing)
8. Situational awareness capabilities via GPS for disaster response and crisis response
scenarios
9. Biometrics: assured-identity security screening solutions based on bio-signatures (almost
every aspect of your physiology can be used as a bio-signature: unique heart/pulse rates,
electrocardiogram readings, blood oximetry, skin temperature)
10. Robotic Policing (already happening in Dubai!)
6.5.1.1 Challenges in using Artificial Intelligence
AI is only as good as the data it is exposed to, which is where certain challenges may present
themselves. How a business teaches and develops its AI will be the major factor in its usefulness.
Humans could be the weak link here, as people are unlikely to want to input masses of data into a
system.
Another dilemma that comes along with AI is its potential to replace human workers. As machines
become more "intelligent", they could begin to replace experts in higher-level jobs. Alternatively,
AI also has the potential to take the burden of laborious and time-consuming tasks off these
people, freeing up their time and brainpower for other things; for example, doctors using diagnostic AI
will analyze the data presented by the AI and make the ultimate
decision. Managing the challenges posed by AI will require careful planning to ensure that the full
benefits are realized and risks are mitigated.
6.5.1.2 Challenges in using Robotics in manufacturing
With automation and robotics moving from production lines out into other areas of work and
business, the potential for humans losing jobs is great here too. As automation technologies
become more advanced, there will be a greater capability for automation to take over more and
more complex jobs. As robots learn to teach each other and themselves, there is the potential for
much greater productivity but this also raises ethical and cybersecurity concerns.
6.5.1.3 Challenges in using the Internet of Things
As more and more connected devices (such as smartwatches and fitness trackers) join the Internet
of Things (IoT) the amount of data being generated is increasing. Companies will have to plan
carefully how this will affect the customer-facing application and how to best utilize the masses of
data being produced. There are also severe security implications of mass connectivity that need to
be addressed.
6.5.1.4 Challenges in Big Data
Almost all the technologies mentioned above have some relation to Big Data. The huge amount of
data being generated on a daily basis has the potential to provide businesses with better insight into
their customers as well as their own business operations.
Although data can be incredibly useful for spotting trends and analyzing impacts, surfacing all this
data to humans in a way that they can understand can be challenging. AI will play a role here.
6.5.2 Threats
New and emerging technologies pose significant opportunities for businesses if they utilize them
well and understand their true value early on. They also pose risks and questions not only to
business but to society as a whole. Planning for how to deal with these emerging technologies and
where value can be derived while assessing potential risks before they become a fully-fledged
reality is essential for businesses that want to thrive in the world of AI, Big Data and IoT.
Some risks of emerging technology are:
Driverless cars: while a compelling option for future fleet vehicles, companies could crash and
burn from claims related to bodily injury and property damage.
Wearables: Google Glass, Fitbit, and other wearables can expose companies to invasion-of-privacy
claims that may not be covered by general liability, as well as personal injury claims that
weren't foreseen.
Drones: turbulence is in the offing for manufacturers and organizations that fail to protect
themselves against property damage and bodily injury, as well as errors and omissions.
Internet of Things: the proliferation of sensors and cross-platform integration creates
potential exposure from privacy invasion, bodily injury and property damage that may
expose an organization to huge liabilities.