Ambo University
School of Informatics & Electrical Engineering
Computer Science Department
Introduction to Emerging Technologies Course (EMTE1012)
Lecture Note
Chapter 1: Introduction to Emerging Technologies
Introduction
In this chapter, the evolution of technology, the role of data in emerging technologies, enabling devices and
networks for technologies (programmable devices), human-to-machine interaction (HMI), and future
trends in technology are discussed.
1.1 Evolution of Technologies
Activity 1.1
➢ Define emerging technologies.
➢ Define technology and evolution based on your prior knowledge, and compare your definitions
with the discussion given below.
Emerging technology is a term generally used to describe a new technology, but it may also refer to the
continuing development of existing technology; it can have slightly different meanings when used in
different areas, such as media, business, science, or education. The term commonly refers to technologies
that are currently developing, or that are expected to be available within the next five to ten years, and is
usually reserved for technologies that are creating or are expected to create
significant social or economic effects. Technological evolution is a theory of radical
transformation of society through technological development.
What are the root words of technology and evolution?
• Technology: 1610s, "discourse or treatise on an art or the arts," from Greek tekhnologia "systematic
treatment of an art, craft, or technique," originally referring to grammar, from tekhno- (see techno-)
+ -logy. The meaning "science of the mechanical and industrial arts" is first recorded in 1859.
• Evolution: evolution means the process of developing by gradual changes. This noun is
from Latin evolutio, "an unrolling or opening," combined from the prefix e-, "out," plus
volvere, "to roll."
List of some currently available emerging technologies
➢ Artificial Intelligence
➢ Blockchain
➢ Augmented Reality and Virtual Reality
➢ Intelligent Apps (I-Apps)
➢ Big Data
➢ Cloud Computing
➢ Angular and React
➢ DevOps
➢ Internet of Things (IoT)
➢ Robotic Process Automation (RPA)
1.1.1 Introduction to the Industrial Revolution (IR)
The Industrial Revolution was a period of major industrialization and innovation that took place during
the late 1700s and early 1800s. At its core, an industrial revolution occurs when a society shifts from
using tools to make products to using new sources of energy, such as coal, to power machines in factories.
The revolution started in England, with a series of innovations to make labor more efficient and
productive. The Industrial Revolution was a time when the manufacturing of goods moved from small
shops and homes to large factories. This shift brought about changes in culture as people moved from
rural areas to big cities in order to work.
The American Industrial Revolution commonly referred to as the Second Industrial Revolution, started
sometime between 1820 and 1870. The impact of changing the way items were manufactured had a
wide reach. Industries such as textile manufacturing, mining, glass making, and agriculture all had
undergone changes. For example, prior to the Industrial Revolution, textiles were primarily made of
wool and were handspun.
From the first industrial revolution (mechanization through water and steam power) to the mass
production and assembly lines using electricity in the second, the fourth industrial revolution will take
what was started in the third with the adoption of computers and automation and enhance it with smart
and autonomous systems fueled by data and machine learning.
Generally, the following industrial revolutions fundamentally changed and transformed the world around
us into modern society:
The steam engine,
The age of science and mass production, and
The rise of digital technology
Smart and autonomous systems fueled by data and machine learning.
1.1.2 The Most Important Inventions of the Industrial Revolution
Transportation: The Steam Engine, The Railroad, The Diesel Engine, The Airplane.
Communication: The Telegraph, The Transatlantic Cable, The Phonograph, The Telephone.
Industry: The Cotton Gin, The Sewing Machine, Electric Lights.
1.1.3 Historical Background (IR 1.0, IR 2.0, IR 3.0)
The industrial revolution began in Great Britain in the late 1770s before spreading to the rest of
Europe. The first European countries to be industrialized after England were Belgium, France, and the
German states. A final cause of the Industrial Revolution was the set of effects created by the Agricultural
Revolution. As previously stated, the Industrial Revolution began in Britain in the 18th century due
in part to an increase in food production, which was the key outcome of the Agricultural Revolution.
The four types of industries are:
➢ The primary industry involves getting raw materials e.g. mining, farming, and fishing.
➢ The secondary industry involves manufacturing e.g. making cars and steel.
➢ Tertiary industries provide a service e.g. teaching and nursing.
➢ The quaternary industry involves research and development industries e.g. IT.
1.1.3.1 Industrial Revolution (IR 1.0)
The Industrial Revolution (IR) is described as a transition to new manufacturing processes. The term was first
coined in the 1760s, during the period when this revolution began. The transitions in the first IR included going
from hand production methods to machines, the increasing use of steam power (see Figure 1.1), the development
of machine tools, and the rise of the factory system.
Figure 1.1 Steam engine
1.1.3.2 Industrial Revolution (IR 2.0)
The Second IR, also known as the Technological Revolution, began sometime in the 1870s. The
advancements in IR 2.0 included the development of methods for manufacturing interchangeable parts
and widespread adoption of pre-existing technological systems such as telegraph and railroad networks.
This adoption allowed the vast movement of people and ideas, enhancing communication. Moreover, new
technological systems were introduced, such as electrical power (see Figure 1.2) and telephones.
Figure 1.2 Electricity transmission line
1.1.3.3 Industrial Revolution (IR 3.0)
Then came the Third Industrial Revolution (IR 3.0). IR 3.0 introduced the transition from mechanical
and analog electronic technology to digital electronics (see Figure 1.3), beginning in the late 1950s.
Due to the shift towards digitalization, IR 3.0 was given the nickname "Digital Revolution".
The core factor of this revolution is the mass production and widespread use of digital logic circuits
and their derived technologies, such as the computer, mobile phones, and the Internet. These technological
innovations have arguably transformed traditional production and business techniques, enabling
people to communicate with one another without the need to be physically present. Certain practices
enabled during IR 3.0 are still practiced to this day, for example, the proliferation of digital
computers and digital record-keeping.
Figure 1.3 High Tech Electronics
1.1.3.4 Fourth Industrial Revolution (IR 4.0)
Now, with advancements in various technologies such as robotics, the Internet of Things (IoT; see Figure
1.4), additive manufacturing, and autonomous vehicles, the term "Fourth Industrial Revolution" or
IR 4.0 was coined by Klaus Schwab, the founder and executive chairman of the World Economic Forum,
in the year 2016. The technologies mentioned above are what we call cyber-physical systems. A
cyber-physical system is a mechanism that is controlled or monitored by computer-based algorithms,
tightly integrated with the Internet and its users.
One example that is widely practiced in industries today is the use of Computer Numerical
Control (CNC) machines. These machines are operated by giving them instructions via a computer.
Another major breakthrough associated with IR 4.0 is the adoption of Artificial Intelligence
(AI), which we can see implemented in our smartphones. AI is also one of the main
elements that give life to autonomous vehicles and automated robots.
1.2 Role of Data for Emerging Technologies
Data is regarded as the new oil and a strategic asset, since we are living in the age of big data, and it drives
or even determines the future of science, technology, the economy, and possibly everything in our
world today and tomorrow. Data has not only triggered tremendous hype and buzz but, more
importantly, presents enormous challenges that in turn bring incredible innovation and economic
opportunities. This reshaping and paradigm shift is driven not just by data itself but by all the other
aspects that could be created, transformed, and/or adjusted by understanding, exploring, and utilizing
data. The preceding trend and its potential have triggered new debate about data-intensive scientific
discovery as an emerging technology, the so-called "fourth industrial revolution". There is no doubt,
nevertheless, that the potential of data science and analytics to enable data-driven theory, economy,
and professional development is increasingly being recognized. This involves not only core
disciplines such as computing, informatics, and statistics, but also the broad-based fields of business,
social science, and health/medical science.
1.3 Enabling devices and networks (Programmable devices)
In the world of digital electronic systems, there are four basic kinds of devices: memory,
microprocessors, logic, and networks. Memory devices store random information such as the contents
of a spreadsheet or database. Microprocessors execute software instructions to perform a wide variety
of tasks such as running a word processing program or video game. Logic devices provide specific
functions, including device-to-device interfacing, data communication, signal processing, data display,
timing and control operations, and almost every other function a system must perform. The network is
a collection of computers, servers, mainframes, network devices, peripherals, or other devices connected
to one another to allow the sharing of data. An excellent example of a network is the Internet, which
connects millions of people all over the world. Programmable devices (see Figure 1.5) usually refer to
chips that incorporate field-programmable gate arrays (FPGAs), complex programmable logic devices
(CPLDs), and programmable logic devices (PLDs). There are also devices that are the analog equivalent
of these, called field-programmable analog arrays.
Figure 1.5 Programmable device
Why is a computer referred to as a programmable device?
Because what makes a computer a computer is that it follows a set of instructions. Many electronic
devices are computers that perform only one operation, but they are still following instructions that
reside permanently in the unit.
1.3.1 List of some Programmable devices
➢ Achronix Speedster SPD60
➢ Actel's
➢ Altera Stratix IV GT and Arria II GX
➢ Atmel's AT91CAP7L
➢ Cypress Semiconductor's programmable system-on-chip (PSoC) family
➢ Lattice Semiconductor's ECP3
➢ Lime Microsystems' LMS6002
➢ Silicon Blue Technologies
➢ Xilinx Virtex 6 and Spartan 6
➢ Xmos Semiconductor L series
A full range of network-related equipment is referred to as Service Enabling Devices (SEDs), which can
include:
➢ Traditional channel service unit (CSU) and data service unit (DSU)
➢ Modems
➢ Routers
➢ Switches
➢ Conferencing equipment
➢ Network appliances (NIDs and SIDs)
➢ Hosting equipment and servers
1.4 Human to Machine Interaction
Human-machine interaction (HMI) refers to the communication and interaction between a
human and a machine via a user interface. Nowadays, natural user interfaces such as gestures have
gained increasing attention, as they allow humans to control machines through natural and intuitive
behaviors.
What is interaction in human-computer interaction?
HCI (human-computer interaction) is the study of how people interact with computers and to what
extent computers are or are not developed for successful interaction with human beings. As its name
implies, HCI consists of three parts: the user, the computer itself, and the ways they work
together.
How do users interact with computers?
The user interacts directly with hardware for human input and output, such as displays, e.g. through
a graphical user interface. The user interacts with the computer over this software interface using the
given input and output (I/O) hardware.
How important is human-computer interaction?
The goal of HCI is to improve the interaction between users and computers by making
computers more user-friendly and receptive to the user's needs. The main advantages of HCI are
simplicity, ease of deployment & operations and cost savings for smaller set-ups. They also reduce
solution design time and integration complexity.
1.4.1 Disciplines Contributing to Human-Computer Interaction (HCI)
➢ Cognitive psychology: Limitations, information processing, performance prediction,
cooperative working, and capabilities.
➢ Computer science: Including graphics, technology, prototyping tools, user interface
management systems.
➢ Linguistics.
➢ Artificial intelligence.
➢ Engineering and design.
➢ Human factors.
1.5 Future Trends in Emerging Technologies
1.5.1 Emerging technology trends in 2019
➢ 5G Networks
➢ Artificial Intelligence (AI)
➢ Autonomous Devices
➢ Blockchain
➢ Augmented Analytics
➢ Digital Twins
➢ Enhanced Edge Computing
➢ Immersive Experiences in Smart Spaces
1.5.2 Some emerging technologies that will shape the future of you and your business
The future is now, or so they say. So-called emerging technologies are taking over our minds more and
more each day. These are very high-level emerging technologies, though. They sound like tools that
will only affect the top tier of technology companies, those who employ the world's top 1% of geniuses.
This is totally wrong. Chatbots, virtual/augmented reality, blockchain, ephemeral apps, and artificial
intelligence are already shaping your life whether you like it or not. At the end of the day, you can
either adapt or die.
Chapter 2: Data Science
Introduction
In the previous chapter, the concept of the role of data for emerging technologies was discussed. In
this chapter, you are going to learn more about data science, data vs. information, data types and
representation, data value chain, and basic concepts of big data.
2.1. An Overview of Data Science
Activity 2.1
➢ What is data science? Can you describe the role of data in emerging technology?
➢ What are data and information?
➢ What is big data?
Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms, and systems
to extract knowledge and insights from structured, semi-structured, and unstructured data. Data science is
much more than simply analyzing data. It offers a range of roles and requires a range of skills. Let's consider
this idea by thinking about some of the data involved in buying a box of cereal from the store or
supermarket: the barcode scan, the price paid, and the updated stock record are all data that someone must
capture, store, and analyze.
As an academic discipline and profession, data science continues to evolve as one of the most promising
and in-demand career paths for skilled professionals. Today, successful data professionals understand
that they must advance past the traditional skills of analyzing large amounts of data, data mining, and
programming skills. In order to uncover useful intelligence for their organizations, data scientists must
master the full spectrum of the data science life cycle and possess a level of flexibility and understanding
to maximize returns at each phase of the process. Data scientists need to be curious and result-oriented,
with exceptional industry-specific knowledge and communication skills that allow them to explain
highly technical results to their non-technical counterparts. They possess a strong quantitative
background in statistics and linear algebra as well as programming knowledge with focuses on data
warehousing, mining, and modeling to build and analyze algorithms. In this chapter, we will talk about
basic definitions of data and information, data types and representation, the data value chain, and basic
concepts of big data.
What are data and information?
Data can be defined as a representation of facts, concepts, or instructions in a formalized manner,
which should be suitable for communication, interpretation, or processing, by human or electronic
machines. It can be described as unprocessed facts and figures. It is represented with the help of
characters such as alphabets (A-Z, a-z), digits (0-9) or special characters (+, -, /, *, <,>, =, etc.).
Whereas information is processed data on which decisions and actions are based. It is data that has
been processed into a form that is meaningful to the recipient and is of real or perceived value in the
current or prospective action or decision of the recipient. Furthermore, information is interpreted
data; it is created from organized, structured, and processed data in a particular context.
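To make the distinction concrete, the short Python sketch below (an illustrative example; the marks are hypothetical) turns raw, unprocessed figures into information that a decision can be based on:

```python
# Raw data: unprocessed facts and figures (hypothetical student marks).
marks = [72, 85, 90, 66, 78]

# Processing: organizing and summarizing the data in a meaningful context.
average = sum(marks) / len(marks)

# Information: processed data on which a decision or action can be based,
# e.g. deciding whether the class needs a review session.
print(f"Average mark: {average:.1f}")
```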
2.1.1. Data Processing Cycle
Data processing is the re-structuring or re-ordering of data by people or machines to increase its
usefulness and add value for a particular purpose. Data processing consists of three basic
steps: input, processing, and output. These three steps constitute the data processing cycle.
Figure 2.1 Data Processing Cycle
➢ Input − in this step, the input data is prepared in some convenient form for processing. The form will
depend on the processing machine. For example, when electronic computers are used, the input data
can be recorded on any of several types of storage media, such as a hard disk, CD, flash disk, and so on.
➢ Processing − in this step, the input data is changed to produce data in a more useful form. For
example, interest can be calculated on a deposit to a bank, or a summary of sales for the month can be
calculated from the sales orders.
➢ Output − at this stage, the result of the preceding processing step is collected. The particular form
of the output data depends on the use of the data. For example, output data may be the payroll for
employees.
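As a minimal sketch of these three steps, the Python fragment below computes interest on a bank deposit, the example mentioned above; the deposit amount and the interest rate are hypothetical values:

```python
# Input: data prepared in a convenient form for processing.
deposit = 10_000.00   # hypothetical deposit amount
annual_rate = 0.07    # hypothetical 7% annual interest rate

# Processing: the input data is changed into a more useful form.
monthly_interest = deposit * annual_rate / 12

# Output: the result of the processing step is collected for use.
print(f"Monthly interest: {monthly_interest:.2f}")
```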
2.3 Data types and their representation
Data types can be described from diverse perspectives. In computer science and computer
programming, for instance, a data type is simply an attribute of data that tells the compiler or
interpreter how the programmer intends to use the data.
2.3.1. Data types from Computer programming perspective
Almost all programming languages explicitly include the notion of data type, though different
languages may use different terminology. Common data types include:
➢ Integers (int) - used to store whole numbers, mathematically known as integers
➢ Booleans (bool) - used to represent values restricted to one of two values: true or false
➢ Characters (char) - used to store a single character
➢ Floating-point numbers (float) - used to store real numbers
➢ Alphanumeric strings (string) - used to store a combination of characters and numbers
A data type constrains the values that an expression, such as a variable or a function, might take. This data
type defines the operations that can be done on the data, the meaning of the data, and the way values
of that type can be stored.
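The common data types above can be illustrated briefly in Python (a sketch; note that Python infers types at runtime and has no separate char type, so a one-character string stands in for it, whereas a statically typed language such as C would declare each type explicitly):

```python
count = 42             # integer (int): a whole number
is_enrolled = True     # Boolean (bool): one of two values, True or False
grade = "A"            # character: a one-character string in Python
gpa = 3.75             # floating-point (float): a real number
student_id = "CS1024"  # string: a combination of characters and numbers

# The type of a value determines which operations are valid on it.
print(type(count), type(gpa))  # <class 'int'> <class 'float'>
```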
2.3.2. Data types from Data Analytics perspective
From a data analytics point of view, it is important to understand that there are three common data
types or structures: structured, semi-structured, and unstructured. Figure 2.2 below
describes the three types of data and metadata.
Figure 2.2 Data types from a data analytics perspective
Structured Data
Structured data is data that adheres to a pre-defined data model and is therefore straightforward to
analyze. Structured data conforms to a tabular format with a relationship between the different rows
and columns. Common examples of structured data are Excel files or SQL databases. Each of these
has structured rows and columns that can be sorted.
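The sketch below uses Python's built-in sqlite3 module to show structured data in practice; the table name and the inserted values are illustrative:

```python
import sqlite3

# Structured data: rows and columns conforming to a pre-defined model (schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER, name TEXT, gpa REAL)")
conn.executemany("INSERT INTO students VALUES (?, ?, ?)",
                 [(1, "Abebe", 3.8), (2, "Sara", 3.5)])

# Because the schema is fixed, the data is straightforward to sort and analyze.
for row in conn.execute("SELECT name, gpa FROM students ORDER BY gpa DESC"):
    print(row)
conn.close()
```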
Semi-structured Data
Semi-structured data is a form of structured data that does not conform with the formal structure of
data models associated with relational databases or other forms of data tables, but nonetheless,
contains tags or other markers to separate semantic elements and enforce hierarchies of records and
fields within the data. Therefore, it is also known as a self-describing structure. Examples of semi-
structured data include JSON and XML.
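For instance, the small JSON record below (with hypothetical values) carries its own tags, so a program can navigate the hierarchy without a pre-defined table schema:

```python
import json

# Semi-structured data: no fixed tabular schema, but tags (keys) separate
# semantic elements and mark the hierarchy of records and fields.
record = '{"name": "Abebe", "skills": ["Python", "SQL"], "address": {"city": "Ambo"}}'
data = json.loads(record)

print(data["name"])             # Abebe
print(data["address"]["city"])  # Ambo
```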
Unstructured Data
Unstructured data is information that either does not have a predefined data model or is not organized
in a pre-defined manner. Unstructured information is typically text-heavy but may contain data such
as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult
to understand using traditional programs as compared to data stored in structured databases. Common
examples of unstructured data include audio files, video files, and NoSQL databases.
Metadata – Data about Data
The last category of data type is metadata. From a technical point of view, this is not a separate data
structure, but it is one of the most important elements for Big Data analysis and big data solutions.
Metadata is data about data. It provides additional information about a specific set of data.
In a set of photographs, for example, metadata could describe when and where the photos were taken.
The metadata then provides fields for dates and locations which, by themselves, can be considered
structured data. For this reason, metadata is frequently used by Big Data solutions for initial
analysis.
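As a small sketch, the dictionary below models the kind of metadata a photo file might carry (all field values are hypothetical). The image content itself is unstructured, but these descriptive fields are structured and can be filtered and sorted directly:

```python
# Metadata: data about data.
photo_metadata = {
    "filename": "holiday_001.jpg",
    "date_taken": "2024-01-07",
    "location": "Ambo, Ethiopia",
    "resolution": "6000x4000",
}

# Initial analysis can run on the metadata alone, e.g. selecting by date.
print(photo_metadata["date_taken"], photo_metadata["location"])
```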
2.4. Data Value Chain
The Data Value Chain is introduced to describe the information flow within a big data system as a
series of steps needed to generate value and useful insights from data. The Big Data Value Chain
identifies the following key high-level activities:
Figure 2.3 Data Value Chain
2.4.1. Data Acquisition
It is the process of gathering, filtering, and cleaning data before it is put in a data warehouse or any
other storage solution on which data analysis can be carried out. Data acquisition is one of the major
big data challenges in terms of infrastructure requirements. The infrastructure required to support the
acquisition of big data must deliver low, predictable latency in both capturing data and in executing
queries; be able to handle very high transaction volumes, often in a distributed environment; and
support flexible and dynamic data structures.
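As an illustration of the gathering-filtering-cleaning idea, the sketch below reads raw CSV records and keeps only the valid rows before they would be loaded into storage; the file name and column names are hypothetical:

```python
import csv

# Gather: read raw records from a source (a hypothetical sensor log file).
with open("sensor_log.csv", newline="") as src:
    raw_rows = list(csv.DictReader(src))

# Filter and clean: drop malformed rows and normalize values before storage.
clean_rows = []
for row in raw_rows:
    try:
        clean_rows.append({"sensor": row["sensor"].strip(),
                           "value": float(row["value"])})
    except (KeyError, ValueError):
        continue  # discard rows with missing or non-numeric readings

print(f"Kept {len(clean_rows)} of {len(raw_rows)} rows")
```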
2.4.2. Data Analysis
It is concerned with making the raw data acquired amenable to use in decision-making as well as
domain-specific usage. Data analysis involves exploring, transforming, and modeling data with the
goal of highlighting relevant data, synthesizing and extracting useful hidden information with high
potential from a business point of view. Related areas include data mining, business intelligence, and
machine learning.
2.4.3. Data Curation
It is the active management of data over its life cycle to ensure it meets the necessary data quality
requirements for its effective usage. Data curation processes can be categorized into different
activities such as content creation, selection, classification, transformation, validation, and
preservation. Data curation is performed by expert curators who are responsible for improving the
accessibility and quality of data. Data curators (also known as scientific curators or data annotators)
hold the responsibility of ensuring that data are trustworthy, discoverable, accessible, reusable, and fit
for their purpose. A key trend for the curation of big data utilizes community and crowdsourcing
approaches.
2.4.4. Data Storage
It is the persistence and management of data in a scalable way that satisfies the needs of applications
that require fast access to the data. Relational Database Management Systems (RDBMSs) have been
the main, and almost unique, solution to the storage paradigm for nearly 40 years. However, the
ACID (Atomicity, Consistency, Isolation, and Durability) properties that guarantee database
transactions lack flexibility with regard to schema changes, and their performance and fault tolerance
degrade when data volumes and complexity grow, making them unsuitable for big data scenarios. NoSQL
technologies have been designed with the scalability goal in mind and present a wide range of
solutions based on alternative data models.
2.4.5. Data Usage
It covers the data-driven business activities that need access to data, its analysis, and the tools needed
to integrate the data analysis within the business activity. Data usage in business decision-making
can enhance competitiveness through the reduction of costs, increased added value, or any other
parameter that can be measured against existing performance criteria.
2.5. Basic concepts of big data
Big data is a blanket term for the non-traditional strategies and technologies needed to gather,
organize, and process large datasets and to extract insights from them. While the problem of working with data that
exceeds the computing power or storage of a single computer is not new, the pervasiveness, scale,
and value of this type of computing have greatly expanded in recent years.
In this section, we will talk about big data on a fundamental level and define common concepts you
might come across. We will also take a high-level look at some of the processes and technologies
currently being used in this space.
2.5.1. What Is Big Data?
Big data is the term for a collection of data sets so large and complex that it becomes difficult to
process using on-hand database management tools or traditional data processing applications.
In this context, a “large dataset” means a dataset too large to reasonably process or store with
traditional tooling or on a single computer. This means that the common scale of big datasets is
constantly shifting and may vary significantly from organization to organization. Big data is
characterized by the 3Vs and more:
➢ Volume: large amounts of data (zettabytes/massive datasets)
➢ Velocity: data is live, streaming, or in motion
➢ Variety: data comes in many different forms from diverse sources
➢ Veracity: can we trust the data? How accurate is it? etc.
Figure 2.4 Characteristics of big data
2.5.2. Clustered Computing and Hadoop Ecosystem
2.5.2.1. Clustered Computing
Because of the qualities of big data, individual computers are often inadequate for handling the data
at most stages. To better address the high storage and computational needs of big data, computer
clusters are a better fit. Big data clustering software combines the resources of many smaller machines,
seeking to provide a number of benefits:
Resource Pooling: Combining the available storage space to hold data is a clear benefit, but CPU
and memory pooling are also extremely important. Processing large datasets requires large amounts
of all three of these resources.
High Availability: Clusters can provide varying levels of fault tolerance and availability
guarantees to prevent hardware or software failures from affecting access to data and processing. This
becomes increasingly important as we continue to emphasize the importance of real-time analytics.
Easy Scalability: Clusters make it easy to scale horizontally by adding additional machines to the
group. This means the system can react to changes in resource requirements without expanding the
physical resources on a machine.
Using clusters requires a solution for managing cluster membership, coordinating resource sharing, and
scheduling actual work on individual nodes. Cluster membership and resource allocation can be
handled by software like Hadoop’s YARN (which stands for Yet Another Resource Negotiator).
The assembled computing cluster often acts as a foundation that other software interfaces with to
process the data. The machines involved in the computing cluster are also typically involved with the
management of a distributed storage system, which we will talk about when we discuss data
persistence.
2.5.2.2. Hadoop and its Ecosystem
Hadoop is an open-source framework intended to make interaction with big data easier. It is a
framework that allows for the distributed processing of large datasets across clusters of computers
using simple programming models. It is inspired by a technical document published by Google. The
four key characteristics of Hadoop are:
Economical: Its systems are highly economical as ordinary computers can be used for
data processing.
Reliable: It is reliable as it stores copies of the data on different machines and is resistant to
hardware failure.
Scalable: It is easily scalable, both horizontally and vertically. A few extra nodes help in
scaling up the framework.
Flexible: It is flexible, and you can store as much structured and unstructured data as you
need and decide how to use it later.
Hadoop has an ecosystem that has evolved from its four core components: data management, access,
processing, and storage. It is continuously growing to meet the needs of Big Data. It comprises the
following components and many others:
HDFS: Hadoop Distributed File System
YARN: Yet Another Resource Negotiator
MapReduce: Programming-based data processing
Spark: In-memory data processing
PIG, HIVE: Query-based processing of data services
HBase: NoSQL database
Mahout, Spark MLLib: Machine learning algorithm libraries
Solr, Lucene: Searching and indexing
Zookeeper: Managing the cluster
Oozie: Job scheduling
Figure 2.5 Hadoop Ecosystem
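To make the MapReduce component concrete, here is a minimal word-count sketch in Python. It simulates the map, shuffle/sort, and reduce phases in a single process; the sample sentences and function names are illustrative, and on a real cluster the two phases would run as separate scripts submitted through Hadoop Streaming (via its -mapper and -reducer options):

```python
from itertools import groupby

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in the input.
    for line in lines:
        for word in line.strip().split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle & sort: Hadoop groups mapper output by key before reducing;
    # sorted() + groupby() imitates that step here.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        # Reduce: sum the counts for each word.
        yield (word, sum(count for _, count in group))

if __name__ == "__main__":
    text = ["big data needs big clusters", "hadoop processes big data"]
    for word, count in reduce_phase(map_phase(text)):
        print(f"{word}\t{count}")
```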
2.5.3. Big Data Life Cycle with Hadoop
2.5.3.1. Ingesting data into the system
The first stage of Big Data processing is Ingest. The data is ingested or transferred to Hadoop from
various sources such as relational databases, systems, or local files. Sqoop transfers data from
RDBMS to HDFS, whereas Flume transfers event data.
2.5.3.2. Processing the data in storage
The second stage is Processing. In this stage, the data is stored and processed. The data is stored in
the distributed file system, HDFS, and in the NoSQL distributed database, HBase. Spark and MapReduce
perform data processing.
2.5.3.3. Computing and analyzing data
The third stage is to Analyze. Here, the data is analyzed by processing frameworks such as Pig, Hive,
and Impala. Pig converts the data using map and reduce operations and then analyzes it. Hive is also based on
map and reduce programming and is most suitable for structured data.
2.5.3.4. Visualizing the results
The fourth stage is Access, which is performed by tools such as Hue and Cloudera Search. In this
stage, the analyzed data can be accessed by users.
Home assignment I
1. Where did the Industrial Revolution start and why did it begin there?
2. What does "emerging" mean, what are emerging technologies, and how are they found?
3. What makes “emerging technologies” happen and what impact will they have on Individuals,
Society, and Environment?
4. How do recent approaches to “embodied interaction” differ from earlier accounts of the role
of cognition in human-computer interaction?
5. What is the reason for taking care to design a good computer-human interface?
6. Discuss the pros and cons of human-computer interaction technology?
Home assignment II
1. Define data science; what are the roles of a data scientist?
2. Discuss a series of steps needed to generate value and useful insights from data?
3. What is the principal goal of data science?
4. List out and discuss the characteristics of Big Data?
Lecture Note Compiled By: Mr. Diriba M., 2024/25 A.Y