Com 325 HCI

The document discusses Human-Computer Interaction (HCI), emphasizing its role as the communication point between users and computers, and highlights the importance of effective design for usability and user experience. It outlines the components of HCI, including users, machines, and interaction, as well as the goals of HCI to create systems that prioritize user needs and improve productivity. The document also covers the significance of good design principles and interaction techniques in enhancing user engagement and satisfaction.

Uploaded by

Akinladejo Dotun

CHAPTER ONE

HUMAN COMPUTER INTERACTION


The Human Computer Interface (HCI) can be described as the point of communication between
the human user and the computer. The flow of information between the human and computer is
defined as the loop of interaction. The essence of any human computer interface is to convey
information. Visualization has been a good information communication tool for more than two
decades. While many visualization systems provide users with a better understanding of their
data, they are often difficult to use and are not a reliable, accurate tool for conveying
information, which limits their acceptance and use. As scientists, medical researchers, and
information analysts face drastic growth in the size of their datasets, the efficient, accurate, and
reproducible communication of information becomes essential.
The loop of interaction has several aspects to it, including:
 Visual Based: The visual-based human computer interaction is probably the most widespread
area in Human Computer Interaction (HCI) research.
 Audio Based: The audio-based interaction between a computer and a human is another
important area of HCI systems. This area deals with information acquired by different audio
signals.
 Task Environment: The conditions and goals set upon the user.
 Machine Environment: The environment that the computer is connected to, e.g., a laptop in a
college student’s dorm room.
 Areas of the Interface: Non-overlapping areas involve processes of the human and computer
not pertaining to their interaction. Meanwhile, the overlapping areas only concern themselves
with the processes pertaining to their interaction.
 Input Flow: The flow of information that begins in the task environment, when the user has
some task that requires using the computer.
 Output: The flow of information that originates in the machine environment.
 Feedback: Loops through the interface that evaluate, moderate, and confirm processes as they
pass from the human through the interface to the computer and back.
 Fit: This is the match between the computer design, the user, and the task to optimize the
human resources needed to accomplish the task.
Human Computer Interaction (HCI) is the study of how people communicate with computers and
of the extent to which computers are developed for effective human interaction. HCI is a
branch of computer science that studies how humans and machines communicate; Card, Moran,
and Newell coined the term in their book ‘The Psychology of Human Computer Interaction.’
Computer Human Interaction (CHI) is another term for Human Computer Interaction (HCI), and
both terms refer to Human Machine Interaction (HMI). An interface is something we want to
look good and be friendly to the end user. Poorly designed interfaces can have serious
consequences: they fail to engage the end user for long and drive users away over time. A
poor design can also cost an organization money, because its workforce is less productive.
Components of HCI
Below are the components of HCI:
 User
 The Machine
 Interaction

User
We may use the word ‘User’ to refer to an individual user or a community of users working
together. It is critical to understand how people’s sensory systems (sight, hearing, and touch)
transmit information. Furthermore, different users develop different conceptions or mental
models about their experiences, and they acquire and retain information in different ways.
Cultural and national variations also play a role.
The Machine
When we say ‘Computer,’ we are referring to a wide range of technology, from desktop
computers to large-scale computer systems. If we were discussing the creation of a Website,
for example, the Website itself would be the machine. Mobile phones and VCRs are other
examples of devices that can be considered computers.
Interaction
Humans and machines have distinct characteristics. Despite this, HCI makes every effort to
ensure that they get along and communicate effectively. You must apply what you know about
humans and computers to build a functional framework, and you must communicate with
prospective users during the design process. In real-world projects, schedule and budget
constraints are also critical.
The Goals of HCI
The aim of HCI is to create systems that are useful, safe, and functional. To create
usable computer systems, developers must try to understand the elements that influence how
people use technology, develop tools and approaches to aid in the development of appropriate
systems, and achieve efficient, effective, and safe interaction by putting people first. Underlying
the whole theme of Human Computer Interaction is the belief that people using a computer
system should come first. Their needs, capabilities, and preferences for conducting various tasks
should direct developers in the way that they design systems. People should not have to change
the way that they use a system to fit in with it. Instead, the system should be designed to match
their requirements.
Much of the research in the field of Human Computer Interaction (HCI) takes an interest in:
 Methods for designing new computer interfaces, thereby optimizing a design for a desired
property, such as learnability, findability, or efficiency of use.
 Methods for implementing interfaces, e.g., by means of software libraries.
 Methods for evaluating and comparing interfaces with respect to their usability and other
desirable properties.
 Methods for studying human computer use and its sociocultural implications more broadly.
 Methods for determining whether the user is human or computer.
 Models and theories of human computer use as well as conceptual frameworks for the design
of computer interfaces, such as cognitivist user models, Activity Theory or ethno-methodological
accounts of human computer use.
 Perspectives that critically reflect upon the values that underlie computational design, computer
use and HCI research practice.
Visions of what researchers in the field seek to achieve might vary. When pursuing a cognitivist
perspective, researchers of HCI may seek to align computer interfaces with the mental model
that humans have of their activities. When pursuing a post-cognitivist perspective, researchers of
HCI may seek to align computer interfaces with existing social practices or existing sociocultural
values.
Researchers in HCI are interested in developing design methodologies, experimenting with
devices, prototyping software, and hardware systems, exploring interaction paradigms, and
developing models and theories of interaction.
Importance of HCI
Below are a few benefits of Human Computer Interaction (HCI).
 User-Friendly Applications The most significant benefit of implementing human computer
interaction is the creation of more user-friendly applications. You can make computers and
systems more responsive to the user’s needs, resulting in a better user experience. This goal-
oriented design makes it easier to attain your objectives. That, in turn, will lead to higher
company success, which is the most important benefit of HCI.
 Increase Customer Acquisition A strong user experience helps attract and retain
customers. Greater customer acquisition builds trust, which in turn helps retain customers
for the longer term.
 Optimize Resources, Development Time and Costs A well-designed application or Website
works for end users over a long period and helps optimize resources. Optimized resource
utilization, in turn, reduces the time and cost of development. A poorly designed
application, on the other hand, leads to repeated rework, which increases development cost
and time.
 Increased Productivity HCI helps in the development of effective, user-friendly,
easy-to-use interfaces. This increases productivity and thereby supports the organization’s
business. A good UI/UX also reduces errors and promotes a smoother workflow for employees.
A useful tip in this regard is to use light colors and highlight relevant content so that
users can see important information at a glance, helping them focus on the most pertinent
information without getting distracted.
 Software Success Human Computer Interaction principles are important not only for the end
user but are also a high priority for software development companies. If a software product
is unusable and causes frustration, no one will use the program by choice, and as a result
sales will suffer.
 Improved Accessibility People have a wide range of abilities and disabilities. HCI has
helped in the development of software that can be accessed not only by able-bodied users but
also by people with disabilities.

Importance of Good Design


A screen’s layout and appearance affect a person in a number of ways, so good design
deserves real attention. Good design helps the user communicate with the system effectively,
which is why most companies today understand this and focus on design. A well-designed
interface and screen are especially important for users: the screen is their space for
carrying out both simple and complex tasks, and its structure and appearance have several
effects on the individual user.
Benefits of Good Design
 Screens look friendly and less cluttered.
 Users grasp the overall design quickly, so they can start working without wasting much
time.
 Training time is reduced, and with it training cost.
 The organization’s customers benefit from improved service.
 User support costs are lower.
 Employee satisfaction increases because aggravation and frustration are reduced.
 Productivity increases.
Principles of User Interface Design
An interface is, in effect, an extension of the person using it. This means that the system
and its software must take account of a person’s abilities and adapt to his or her
particular needs. A good user interface is critical to a good user experience. If the
interface does not allow people to use the Website or app easily, they will either abandon
the product or overwhelm technical support, ballooning costs.
 Connector and a Separator The interface should act as both a connector and a separator: a
connector in that it links the user to the computer’s control, and a separator in that it reduces the
risk of the participants hurting one another.
 Clarity in Design Any interface’s first and most critical task is to provide clarity.
People must be able to understand an interface you have built in order to use it
effectively. Clarity inspires confidence and encourages continued use. One hundred
uncluttered screens are superior to one cluttered screen.
 Interfaces Exist to Enable Interaction Interfaces facilitate interaction between humans
and their world. They can help us explain, illuminate, enable, display relationships, bring
us together, separate us, manage expectations, and provide access to services. The best user
interfaces foster a good connection to that world.
 Consistency Screen elements do not appear consistent with one another unless they behave
in the same way. Elements that behave the same way should have the same appearance.
Conversely, unlike elements must appear unlike (inconsistent) just as much as like elements
must appear consistent. Reusing code helps maintain this consistency.
 Visual Order and Viewer Focus Attention must be drawn to the essential and significant
elements of the display at the appropriate time. It must be obvious that these items can be
selected, as well as how to select them.
 Effective Visual Contrast To achieve this goal, effective visual contrast between different
components of the screen is used. Sound and animation are also used to attract focus. The user
must also be given feedback.
Importance of User Interface (UI)
User Interface (UI) design is the link between users and an application; it comprises the
basic design elements that must be present to help the end user navigate through the
application. It makes the end user’s relationship with the application strong and friendly,
and a good UI thereby enhances communication, visibility, and productivity. A user interface
is that portion of an interactive computer system that communicates with the user.
Design of the user interface includes any aspect of the system that is visible to the user. Because
the user interface design includes everything that is visible to the user, it is deeply ingrained in
the overall design of the interactive system. A good user interface cannot be added to a system
after it has been developed; it must be designed from the start. A well-designed user interface
can significantly reduce training time and improve performance. The design of a user interface
can have a significant impact on training time, performance speed, mistake rates, user happiness,
and the user’s long-term retention of operations knowledge. Over time, shoddy designs have
given way to sophisticated systems.
Design of a User Interface (UI)
Design of a User Interface or UI begins with task analysis—an understanding of the user’s
underlying tasks and the problem domain. The user interface should be designed in terms of the
user’s terminology and conception of his or her job, rather than the programmer’s. There are
several levels of design which are explained below.
Design Levels of a User Interface (UI)
It is helpful to think about the user interface at many levels of abstraction and come up with a
design and implementation for each one. This breaks down the developer’s workload into
smaller chunks, making it easier to manage. The User Interface (UI) can be described at the
following levels:

 Conceptual Level: The conceptual level describes the basic entities underlying the user’s view
of the system and the actions possible upon them.
 Semantic Level: The semantic level describes the functions performed by the system. This
corresponds to a description of the functional specifications of the system, but it does not address
how the user will invoke the functions.
 Syntactic Level: The syntactic level describes the input and output sequences required to
invoke the described functions. A related method is the syntactic semantic object-action model,
which separates the task and computer concepts (i.e., the semantics in the previous paragraph)
from the task syntax.
 Lexical Level: The lexical level determines how raw hardware operations are transformed into
inputs and outputs.
HCI improves user-computer experiences by making machines more accessible and responsive to
the user’s needs. HCI is important because it makes products more effective, safe, and
useful, and it makes the user’s experience more pleasurable in the long run. As a result,
having someone with HCI skills involved in all phases of any product or device creation is
critical. HCI is often crucial to prevent goods or programmes from failing entirely.
Interaction technique
An interaction technique or user interface technique is a combination of input and output
consisting of hardware and software elements that provides a way for computer users to
accomplish a simple task. For example, one can go back to the previously visited page on a Web
browser by either clicking a button, hitting a key, performing a mouse gesture or uttering a
speech command.

The computing perspective of interaction technique:
Here, an interaction technique involves one or several physical input devices, one or
several physical output devices, and a piece of code which interprets user input into
higher-level commands, possibly producing user feedback. Consider, for example, the process of deleting a file using a
contextual menu. This first requires a mouse and a screen (input/output devices). Then, a piece of
code needs to paint the contextual menu on the screen and animate the selection when the mouse
moves (user feedback). The software also needs to send a command to the file system when the
user clicks on the "delete" item (interpretation).
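The contextual-menu example can be sketched as event-interpretation code. All names below are invented for illustration, not a real toolkit API: one handler interprets low-level input events, "painting" a menu (user feedback) and finally issuing a file-system command (interpretation).

```python
deleted = []

def delete_from_filesystem(path):
    """Stands in for the real command sent to the file system."""
    deleted.append(path)

def handle_event(event, target, menu_open=False):
    """Interpret one low-level input event; returns (user_feedback, menu_open)."""
    if event == "right_click":
        # Paint the contextual menu on screen (user feedback).
        return f"menu shown for {target}", True
    if event == "click_delete" and menu_open:
        # Interpretation: translate the click into a file-system command.
        delete_from_filesystem(target)
        return f"{target} deleted", False
    return "no action", menu_open

fb1, menu = handle_event("right_click", "report.txt")
fb2, _ = handle_event("click_delete", "report.txt", menu_open=menu)
# fb2 == "report.txt deleted"
```

Note how the technique only completes when input device, feedback, and interpretation code all cooperate: clicking "delete" without the menu open does nothing.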

The user view of interaction technique:


Here, an interaction technique is a way to perform a simple computing task and can be described
by way of instructions or usage scenarios. For example: "right-click on the file you want to
delete, then click on the delete item".
The conceptual view of interaction technique:
Here, an interaction technique is an idea and a way to solve a particular user interface design
problem. It does not always need to be bound to a specific input or output device. For example,
menus can be controlled with many sorts of pointing devices. Interaction techniques as
conceptual ideas can be refined, extended, modified and combined. For example, pie menus are a
radial variant of contextual menus. Marking menus combine pie menus with gestures. In general,
a user interface can be seen as a combination of many interaction techniques, some of which are
not necessarily widgets.
Interaction styles
Interaction techniques that share the same metaphor or design principles can be seen as
belonging to the same interaction style. Examples are command-line and direct-manipulation
user interfaces. More details are provided in a subsequent chapter of this guide.

CHAPTER TWO
CONCEPTUALIZE INTERACTION

What is a Problem Space?

In Human-Computer Interaction (HCI), a problem space refers to the complete set of goals,
tasks, processes, constraints, and requirements that need to be considered in order to design an
effective interactive system.

It includes:

 The user’s needs, goals, and expectations.


 The context in which the interaction will take place.
 Limitations imposed by technology or environment.
 The current practices or processes being replaced or supported.

Why is it Important?

 Helps designers define the scope and requirements of a system.


 Identifies design challenges early on.
 Ensures user needs and goals are properly addressed.

Example:

Designing an online food ordering system:

 Problem Space: Users want to browse food menus, place orders, and make payments
easily using their smartphones within minutes, with secure payment options and reliable
delivery tracking.

CONCEPTUAL MODELS BASED ON ACTIVITIES AND OBJECTS

A conceptual model is a high-level description of how a system is organized and operates from
the user’s perspective. It bridges the gap between system functionalities and user expectations.

A. Conceptual Models Based on Activities

These models are structured around the tasks or operations users need to perform.

Key Features:

 Task-centered
 Supports workflows and sequences of actions
 Focuses on how users accomplish goals

Examples:

 Word processors: Typing → Formatting → Saving → Printing


 Email systems: Composing → Sending → Archiving

Benefits:

 Aligns design with user intentions


 Easy to identify common pain points or inefficiencies

Conceptual Models Based on Objects

These models focus on the data or entities that the system manipulates, rather than the actions
performed.

Key Features:

 Object-oriented
 Supports data management and relationships
 Emphasizes interaction with components

Examples:

 Graphic editors: Image → Layer → Shape → Color


 File managers: Folder → File → Properties

Benefits:

 Promotes data consistency


 Makes navigation more intuitive through object hierarchies

Feature | Activity-Based Model | Object-Based Model
Focus | User actions/tasks | Entities/objects manipulated
Orientation | Process-driven | Data-driven
Example System | Word Processor, Online Booking | File Manager, Drawing Tool
Benefit | Intuitive workflows | Clear structure and relationships
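The object-based model can be sketched as a small class hierarchy mirroring the graphic-editor example above (Image → Layer → Shape). The class names are illustrative, not drawn from any particular editor:

```python
# In an object-based conceptual model, the user manipulates entities and their
# relationships rather than following a fixed sequence of actions.

class Shape:
    def __init__(self, kind, color):
        self.kind, self.color = kind, color

class Layer:
    def __init__(self, name):
        self.name, self.shapes = name, []

    def add(self, shape):
        self.shapes.append(shape)

class Image:
    def __init__(self):
        self.layers = []

    def add_layer(self, layer):
        self.layers.append(layer)

img = Image()
bg = Layer("background")
bg.add(Shape("rectangle", "blue"))
img.add_layer(bg)
```

The object hierarchy itself (an Image containing Layers containing Shapes) is what makes navigation intuitive: the interface can simply expose the structure the data already has.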

INTERFACE METAPHORS AND INTERACTION PARADIGMS

A. Interface Metaphors

An interface metaphor uses familiar concepts or objects from the real world to help users
understand and navigate a digital interface.

Purpose:

 Simplifies user learning
 Provides predictability in interactions
 Enhances usability through familiarity

Common Types:

Metaphor Type | Example | Description
Desktop Metaphor | Files, Folders, Trash Bin | Mimics a physical desk environment
Shopping Cart | E-commerce checkout | Simulates the real-life shopping process
Calendar | Scheduling tools | Uses pages and dates for task planning
Bookshelf | Digital libraries or readers | Represents storage and categorization

Benefits:

 Reduces cognitive load


 Improves onboarding and user experience
 Encourages intuitive exploration

Limitations:

 Can become outdated (e.g., floppy disk icon for “Save”)


 May confuse users if not well-aligned with function

B. Interaction Paradigms

An interaction paradigm defines a general model or approach to how users interact with a
computer system. It sets the tone for designing interaction strategies and interface elements.

Paradigm | Description | Examples
Command-Line | Users input commands via text | UNIX shell, DOS
WIMP (Windows, Icons, Menus, Pointer) | Graphical interface with visual elements | Windows OS, macOS
Direct Manipulation | Users interact with visual representations directly | Drag-and-drop in GUI
Touch-Based | Users interact via gestures and touch | Mobile apps, tablets
Conversational/Voice | Interaction through natural language processing | Siri, Alexa, Chatbots
Augmented Reality | Interaction with virtual objects in physical space | AR apps, Google Lens

Choosing a Paradigm:

 Depends on user context (expert vs. novice)

 Influenced by device type (desktop vs. mobile)
 Informed by design goals (efficiency vs. ease of use)

CHAPTER THREE

COGNITION IN HCI

Cognition refers to the mental processes involved in acquiring knowledge and understanding
through thought, experience, and the senses. These processes include attention, perception,
memory, reasoning, problem-solving, and decision-making.

In HCI, cognition plays a central role in how users interact with technology. An understanding of
cognitive processes allows designers to create systems that support rather than hinder human
thought and behavior.

Key Cognitive Processes in HCI:

 Perception: How users interpret visual, auditory, and tactile cues.


 Attention: How users focus on certain interface elements.
 Memory: How information is retained and recalled.
 Reasoning and Problem-solving: How users make decisions while navigating or using
systems.

In Human-Computer Interaction, understanding how people think, reason, and process
information is critical to designing usable systems. Conceptual frameworks for cognition provide
structured ways to model these mental activities and relate them to user interface design.

The three foundational cognitive frameworks are:

 Mental Models
 Information Processing
 External Cognition

A. MENTAL MODELS

A mental model is the user’s internal representation or understanding of how a system works.
These models are formed through experience, instruction, and interaction with similar systems.

📌 Key Characteristics:

 Often incomplete or incorrect


 Based on intuition and prior knowledge
 Change over time as the user gains experience

Example:

When using a microwave, users may think it works like an oven (heat comes from coils), when
in fact it uses electromagnetic waves. Their mental model affects how they set time and power
levels.

Relevance to HCI:

 Interfaces should align with users’ mental models to reduce learning curves.
 Consistency in layout and function reinforces accurate mental models.
 Unexpected behavior (e.g., unclear error messages) can lead to confusion and frustration.

Design Implications:

 Provide feedback to help users update their models.


 Use analogies and metaphors to align the interface with familiar experiences.
 Conduct usability testing to reveal users' assumptions.

B. INFORMATION PROCESSING MODEL

This framework likens the human brain to a computer, processing input data into usable
output through a series of cognitive stages.

Components of the Information Processing Model:

Stage | Description
Sensory Memory | Briefly stores raw sensory input (visual, auditory, etc.)
Perception | Filters and interprets sensory data into meaningful patterns
Working Memory (STM) | Temporarily holds and manipulates information for current tasks
Long-Term Memory (LTM) | Stores knowledge and experiences for future retrieval
Response Execution | Executes actions based on decisions and input

Example in HCI:

When a user sees a “Save” icon:

 Sensory memory: Eyes detect the icon.


 Perception: Recognizes it as a floppy disk.
 Working memory: Associates the icon with saving a file.
 LTM: Draws on past experience with similar icons.
 Response: Clicks the icon.
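The stages in the "Save" icon example can be sketched as a pipeline. The stage behavior below is invented purely for illustration of how information flows from raw stimulus to action:

```python
def sensory_memory(stimulus):
    """Briefly store the raw sensory input."""
    return {"raw": stimulus}

def perception(sensed):
    """Interpret the raw input as a meaningful pattern."""
    return "floppy-disk icon" if sensed["raw"] == "save_icon" else "unknown"

def working_memory(percept, long_term_memory):
    """Associate the percept with knowledge retrieved from LTM."""
    return long_term_memory.get(percept, "no association")

def respond(meaning):
    """Execute an action based on the processed information."""
    return "click icon" if meaning == "saves the file" else "ignore"

# LTM holds past experience with similar icons.
ltm = {"floppy-disk icon": "saves the file"}

action = respond(working_memory(perception(sensory_memory("save_icon")), ltm))
# action == "click icon"
```

The pipeline also shows why design should favor recognition over recall: if the icon is unfamiliar (no LTM entry), the chain breaks and the user does not act.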

Relevance to Interface Design:

 Limit the load on working memory by simplifying tasks.


 Use familiar icons and labels to trigger recognition (not recall).
 Ensure visual clarity and grouping for easier perception.

Design Tips:

 Minimize the number of steps needed to complete a task.


 Highlight critical information using color or bold text.
 Provide help or hints to guide decision-making.

C. EXTERNAL COGNITION

External cognition refers to how people use the environment—especially external representations
such as tools, notes, diagrams, or screens—to support and extend their thinking.

Types of External Representations:

Tool/Medium | Function
Notes or To-Do Lists | Offload memory, organize thoughts
Diagrams/Flowcharts | Visualize relationships and processes
Maps | Support spatial reasoning
GUIs & Dashboards | Help users monitor and interact with data

Example:

Using a calendar app to track deadlines allows users to externalize their scheduling rather than
remember each date.
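The calendar example can be sketched as a toy external store (a minimal in-memory class, not a real calendar API): the schedule lives in the tool rather than in the user's memory.

```python
class DeadlineTracker:
    """An external representation of the user's schedule."""

    def __init__(self):
        self._deadlines = {}  # task -> date, held outside the user's head

    def add(self, task, date):
        # Offloading: the user records the deadline instead of remembering it.
        self._deadlines[task] = date

    def due_on(self, date):
        # The tool, not the user's memory, answers "what is due today?"
        return [t for t, d in self._deadlines.items() if d == date]

tracker = DeadlineTracker()
tracker.add("HCI essay", "2024-05-10")
tracker.add("Lab report", "2024-05-12")
```

Every query to `due_on` is cognitive work the user no longer has to perform internally — the essence of external cognition.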

Benefits:

 Reduces cognitive load


 Enhances decision-making
 Improves memory and learning through visual aids

Relevance in HCI:

 Tools and interfaces should enhance users' ability to think and reason.
 Use visual hierarchies and layout to make information digestible.
 Encourage users to externalize tasks (e.g., reminders, bookmarks, breadcrumbs).

Design Considerations:

 Visual clarity: Avoid clutter and use whitespace effectively.


 Interactivity: Allow manipulation of visual elements (e.g., drag-and-drop).
 Accessibility: Support diverse cognitive styles and disabilities.

CONCEPTUAL FRAMEWORKS FOR COGNITION

Conceptual frameworks help to organize our understanding of how users think and interact with
systems. In HCI, the three primary cognitive frameworks are:

A. Mental Models
B. Information Processing
C. External Cognition

Let’s discuss each framework in detail:

A. Mental Models

Mental models are internal representations of how users believe a system works. These models
are built from prior experiences, observation, and instruction. They allow users to predict the
outcomes of their actions when interacting with systems.

Characteristics of Mental Models:

 Can be accurate or flawed


 Heavily influence user expectations
 Inform user decisions and problem-solving

Example:
If a user believes the "Trash Bin" on a desktop permanently deletes files (instead of moving them
for later recovery), they may avoid using it, based on their mental model.
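The Trash Bin example can be modeled in code (a toy model, not any operating system's implementation): the system moves files rather than destroying them, so an interface that makes recovery visible supports an accurate mental model.

```python
class TrashBin:
    """Files are moved here, not permanently deleted."""

    def __init__(self):
        self._items = []

    def discard(self, filename):
        self._items.append(filename)  # moved, still recoverable

    def restore(self, filename):
        # The key detail the user's mental model may miss: recovery is possible.
        self._items.remove(filename)
        return filename

bin_ = TrashBin()
bin_.discard("draft.docx")
recovered = bin_.restore("draft.docx")
# recovered == "draft.docx"
```

A user whose mental model is "the bin destroys files" will never call `restore`; surfacing the bin's contents in the interface corrects that model.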

Design Implications:

 Systems should match or support users' mental models.


 Interface metaphors (e.g., folder icons for directories) help reinforce intuitive models.
 Inconsistencies between system behavior and user models lead to confusion and errors.

B. Information Processing

The Information Processing Model describes human cognition as a sequence of steps through
which information flows — similar to how a computer processes data.

Components of Information Processing

Stage | Description
Sensory Memory | Captures raw sensory input briefly
Perception | Interprets the sensory input into meaningful information
Working Memory | Temporarily holds information for immediate tasks
Long-Term Memory | Stores experiences and knowledge for future use
Response Execution | User takes action based on processed information

Example:
A user reads a confirmation dialog box. They perceive the text, hold the information in working
memory, compare it with their goal, then click “OK” or “Cancel.”

🧠 Design Implications:

 Reduce cognitive load by minimizing unnecessary steps.


 Use recognition rather than recall (e.g., dropdown menus vs. text input).
 Provide visual cues to assist perception and memory.

C. External Cognition

External cognition describes how users utilize physical tools, symbols, and representations
outside the mind to support their thinking.

Examples of External Cognition Tools:

 Calendars and reminder apps


 Notes and sticky pads
 Diagrams and maps
 Visual dashboards and graphs

Design Implications:

 Design interfaces that encourage offloading of cognitive work.


 Use clear visual structures to organize information.
 Allow annotations, bookmarks, and highlighting.

Framework | Focus | Design Contribution
Mental Models | Internal user expectations | Align interface behavior with user expectations
Information Processing | Flow of data through memory | Reduce overload, support memory and decision-making
External Cognition | Tools that aid thinking | Enable visual aids and user-driven representations

INFORMAL DESIGN BASED ON COGNITION

Informal design is the process of applying theoretical knowledge — especially from cognitive
psychology — into real-world system development without strict adherence to formal models or
engineering processes.

In HCI, understanding cognition allows designers to apply design principles that intuitively
support users’ mental and perceptual processes.

Informal design techniques may include:

 User personas and scenarios (to model mental models)
 Sketches and wireframes (to externalize cognitive processing)
 Heuristic evaluations (based on cognitive heuristics like visibility, feedback, consistency)

📌 Example:
Designers building a mobile banking app consider that users may forget their transaction history
(limited memory). So, they add a “Recent Transactions” panel, aligning with both information
processing theory and the need for external cognition.
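That "Recent Transactions" idea can be sketched as a small panel model in which the interface, rather than the user, holds recent history. The class name and panel size below are assumptions for illustration, not a real banking API:

```python
from collections import deque

class RecentTransactionsPanel:
    """Toy sketch of external cognition: the app remembers the last few
    transactions so the user does not have to. Names/sizes are illustrative."""

    def __init__(self, max_items: int = 5):
        # Bounded like an on-screen panel: only the most recent entries survive.
        self.items = deque(maxlen=max_items)

    def record(self, description: str, amount: float) -> None:
        self.items.appendleft((description, amount))  # newest first

    def render(self) -> list:
        return [f"{desc}: {amt:.2f}" for desc, amt in self.items]

panel = RecentTransactionsPanel(max_items=3)
for desc, amt in [("Airtime", 500.0), ("Transfer", 2000.0),
                  ("POS", 1500.0), ("Bills", 750.0)]:
    panel.record(desc, amt)
print(panel.render())  # newest first; the oldest entry is dropped
```

The bounded queue mirrors the design principle: the panel offloads memory work without overwhelming the user with an unbounded list.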

CHAPTER FOUR
COLLABORATION AND COMMUNICATION

Human interaction with computers increasingly involves not just individual use, but also
collaborative work. Social mechanisms are the frameworks, norms, and behaviors that allow
people to interact and coordinate effectively.

Key Social Mechanisms Include:

1. Turn-Taking
This refers to the structured pattern in which people take turns while speaking or interacting.
Design Implication: In collaborative software (e.g., Zoom, Google Docs), clear visual cues
(like a raised-hand icon) support orderly interaction.
2. Awareness
The knowledge of who is doing what in a shared system.
Design Implication: Presence indicators and real-time updates (e.g., “John is editing cell
B3”) in collaborative tools.
3. Shared Context
Participants need a mutual understanding of the task and environment.
Design Implication: Interfaces should provide shared workspaces or annotations that
maintain a common frame of reference.
4. Grounding
The process of establishing shared knowledge or agreement during communication.
Design Implication: Chat confirmations, color-coded comments, and notifications that help
users reach agreement.
5. Coordination
The management of interdependent tasks, timing, and resources.
Design Implication: Features like task assignment, shared calendars, and synchronized
editing.

Example in Technology:
Slack and Microsoft Teams support turn-taking, grounding (via threads), and shared context
(via channels and file sharing).
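The awareness mechanism above can be sketched as a small presence tracker that produces cues like "John is editing cell B3". All class and method names here are illustrative assumptions, not a real collaboration API:

```python
# Minimal sketch of the "awareness" social mechanism: a shared document
# tracks who is editing what, so collaborators see presence cues.
# All names are illustrative assumptions.

class SharedDocument:
    def __init__(self):
        self.active_editors = {}  # user -> location being edited

    def start_editing(self, user: str, location: str) -> None:
        self.active_editors[user] = location

    def stop_editing(self, user: str) -> None:
        self.active_editors.pop(user, None)

    def presence_cues(self) -> list:
        """Render the awareness information as user-visible cues."""
        return [f"{user} is editing {loc}"
                for user, loc in sorted(self.active_editors.items())]

doc = SharedDocument()
doc.start_editing("John", "cell B3")
doc.start_editing("Ada", "cell A1")
print(doc.presence_cues())  # ['Ada is editing cell A1', 'John is editing cell B3']
```

Real systems (Google Docs, Figma) broadcast these updates over the network in real time; the design point is simply that awareness information must be collected and made visible.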

Summary Table: Social Mechanisms in Collaborative Systems

Mechanism Description Design Application


Turn-Taking Organized speaking/sharing Raise-hand icon, speaker queues
Awareness Knowing who is active and where Activity feeds, online status
Shared Context Common task understanding Shared whiteboards, comments
Grounding Building mutual understanding Message reactions, threaded replies
Coordination Managing interdependent actions Shared calendars, task assignments

ETHNOGRAPHIC ISSUES IN COLLABORATION AND COMMUNICATION

Ethnography in HCI is a qualitative research method used to study how people use technology in
their natural settings. It provides deep insight into the social and organizational context of users.

Key Ethnographic Issues Include:

1. Context of Use
– Understanding the environment in which collaboration occurs (e.g., office, factory,
hospital).
– Ethnographers observe how people really work, not how they say they work.
2. Unspoken Practices
– Many collaborative behaviors are implicit or informal.
– Ethnography helps uncover these practices (e.g., body language, shared jokes, rituals).
3. Workarounds
– Users often adapt or modify systems to fit their needs.
– Studying these practices reveals design flaws or opportunities.
4. Communication Patterns
– Who communicates with whom, when, and how.
– Ethnography tracks verbal and non-verbal exchanges to understand collaboration flow.

Example:
An ethnographic study in a hospital might observe how nurses share information during shift
changes—possibly revealing that much coordination is done informally, leading to
improvements in handoff software.

Design Implications from Ethnographic Insights:

 Systems must be flexible to accommodate diverse user behaviors.
 Designers should respect and build upon existing social structures.
 Collaborative software must consider real-world context, not idealized workflows.

A. Language Framework

Language is central to human interaction. In HCI, understanding the structure and function of
language helps improve command languages, user interfaces, and error messages.

Concepts:

 Natural Language: Speech or text as used in daily communication (e.g., voice assistants
like Siri).
 Command Language: Syntax and rules used to issue instructions to the system (e.g.,
terminal commands).
 Controlled Vocabulary: A restricted set of terms used to minimize ambiguity (e.g., in
menu selections).

Design Implication:

Interfaces should align with users' linguistic expectations. For instance, a voice-enabled ATM
should recognize common banking terms like "balance" or "transfer."
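A controlled vocabulary like this can be sketched as a small mapping from free-form utterances to a fixed, unambiguous command set. The synonym table and command names below are assumptions for illustration:

```python
# Sketch of a controlled vocabulary for a voice-enabled ATM: free-form
# input is mapped onto a small command set. The phrases and command
# names are illustrative assumptions, not a real banking grammar.

VOCABULARY = {
    "balance": {"balance", "account balance", "how much do i have"},
    "transfer": {"transfer", "send money", "move money"},
    "withdraw": {"withdraw", "cash out", "take out cash"},
}

def interpret(utterance: str) -> str:
    """Map an utterance to a command, or 'unknown' if nothing matches."""
    text = utterance.lower().strip()
    for command, phrases in VOCABULARY.items():
        if any(phrase in text for phrase in phrases):
            return command
    return "unknown"  # a real system would ask a clarifying question here

print(interpret("What is my account balance?"))  # balance
print(interpret("Please send money to Dotun"))   # transfer
```

Restricting the vocabulary trades expressiveness for reliability: the smaller the term set, the less ambiguity the system must resolve.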

B. Distributed Cognition

Distributed cognition is the theory that cognitive processes are not confined to an individual’s
mind but are spread across people, tools, artifacts, and environments.

Example:

In an airline cockpit, cognition is distributed across:

 Pilots (team members)
 Checklists (external memory)
 Instrument panels (sensory feedback)
 Communication with air traffic control

Design Implications:

 Interfaces should support teamwork and shared knowledge.
 External tools (checklists, dashboards) should be integrated into digital workflows.
 Redundant feedback mechanisms can prevent errors in high-risk environments.

CHAPTER FIVE

AFFECTIVE AND EXPRESSIVE INTERFACES

Affective and expressive interfaces are elements of user interface design that go beyond mere
functionality to consider the emotional responses and feelings of users. The term "affective"
pertains to emotions and moods, while "expressive" relates to how those emotions are conveyed
or perceived through the interface. These interfaces are crucial in enhancing user engagement,
satisfaction, and overall user experience.

Affective interfaces aim to detect, interpret, and respond to users’ emotional states. This can be
done using a variety of technologies such as emotion recognition software, facial expression
analysis, voice tone detection, and physiological sensors. For example, an e-learning application
might detect when a user is frustrated or confused based on their facial expressions or time taken
on a question and adapt its responses accordingly—perhaps offering hints, encouragement, or
changing the difficulty level.
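The adaptive behavior just described can be sketched as a simple rule. The thresholds and function names are assumptions for illustration; real affective systems would combine richer signals (facial expression, voice tone, physiological data) with tuned models:

```python
# Toy sketch of an affective e-learning rule: infer frustration from time
# spent and failed attempts, then adapt the system's response.
# Thresholds and names are illustrative assumptions, not a real algorithm.

def infer_frustration(seconds_on_question: float, failed_attempts: int) -> bool:
    """Crude behavioral proxy for frustration."""
    return seconds_on_question > 90 or failed_attempts >= 3

def adapt_response(seconds_on_question: float, failed_attempts: int) -> str:
    """Choose how the application should respond to the inferred state."""
    if infer_frustration(seconds_on_question, failed_attempts):
        return "Offer a hint and a word of encouragement"
    return "Continue at the current difficulty"

print(adapt_response(seconds_on_question=120, failed_attempts=1))
print(adapt_response(seconds_on_question=30, failed_attempts=0))
```

Even a crude rule like this illustrates the affective loop: sense a signal, infer an emotional state, and adapt the interface's behavior accordingly.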

Expressive interfaces, on the other hand, focus on the system’s ability to project emotions or
personality traits. These are often implemented using avatars, emotive icons, animations, sound
effects, or tone of voice in speech interfaces. Expressive elements make interactions more
human-like and relatable, helping users to form a more natural and engaging relationship with
the system. For instance, a virtual assistant that uses cheerful language and animations can make
an application feel more friendly and welcoming.

Designing affective and expressive interfaces involves understanding the psychological and
social dimensions of human emotion. It also requires careful balancing to avoid unintended
reactions. Overly expressive systems may come across as annoying or distracting, while poorly
implemented affective responses may seem insincere or robotic. Therefore, emotional design
must be based on thorough research, testing, and an understanding of the cultural and individual
variability in emotional expression.

Explain the Application of Anthropomorphism to Interaction Design

Anthropomorphism is the attribution of human traits, emotions, or intentions to non-human
entities. In interaction design, anthropomorphism is used to make computer systems, devices,
and digital agents appear more human-like to improve user engagement, understanding, and
trust.

The application of anthropomorphism in interaction design is widespread and includes things
like giving devices human voices, facial expressions, or names. For example, Apple's Siri,
Amazon's Alexa, and Google Assistant are all anthropomorphic interfaces. They use natural
language processing and human-like voice responses to make interactions more conversational
and intuitive.

One of the key benefits of anthropomorphism in interaction design is its ability to reduce the
learning curve associated with new technologies. When systems behave in familiar, human-like
ways, users find it easier to predict their responses and understand how to interact with them.
This is particularly beneficial in contexts where users may have low technical skills, such as
elderly users or children.

Anthropomorphic design can also increase emotional engagement and trust. For instance, users
may feel more comfortable and supported when interacting with a healthcare robot that exhibits
empathetic behaviors, such as nodding, smiling, or using a soothing tone. Similarly, customer
service chatbots that mimic polite human dialogue can lead to more positive user experiences.

However, designers must be cautious not to overuse anthropomorphism, as it can lead to
unrealistic expectations or disappointment when the system fails to respond in truly human ways.
Moreover, users from different cultural backgrounds may interpret anthropomorphic cues
differently. Ethical considerations must also be taken into account, especially when designing
systems for vulnerable populations, to ensure that the interface does not deceive or manipulate
users.

Define Virtual Characters and Agents

Virtual characters and agents are computer-generated entities designed to simulate human-like
interaction within a digital environment. They are widely used in gaming, education, customer
service, training simulations, and virtual reality (VR) environments.

A virtual character is typically an animated figure that may represent a human, animal, or
fictional creature. These characters can display gestures, facial expressions, and voice outputs to
simulate realistic communication. They often function as avatars, guides, or participants within a
digital space, helping to create immersive and interactive user experiences. For example, in a
language learning application, a virtual tutor might provide instructions, feedback, and
encouragement to learners.

A virtual agent, on the other hand, is an autonomous software entity capable of perceiving its
environment, making decisions, and taking actions to achieve specific goals. Virtual agents are
often embedded with artificial intelligence (AI) and natural language processing (NLP)
capabilities that allow them to understand user input and generate appropriate responses. Virtual
agents may or may not have a visual representation (i.e., they can be embodied or disembodied).
For instance, a virtual customer support agent on a website may answer user queries through a
chat interface without having a visible avatar.

These characters and agents can be designed with affective and expressive capabilities to further
humanize interactions. For example, an agent in a virtual therapy app might be programmed to
recognize signs of user distress and respond with empathetic language and tone. Similarly,
virtual characters in educational games can use facial expressions and body language to maintain
learner engagement and convey emotions like excitement or disappointment.

The development of virtual characters and agents involves interdisciplinary collaboration,
including fields such as computer graphics, animation, psychology, AI, and linguistics. Their
effectiveness depends on how well they simulate human behavior and how appropriately they
interact with users in their specific context.

Virtual characters and agents are digital entities that interact with users in human-like or animal-
like ways within software applications, websites, simulations, video games, educational
environments, and artificial intelligence (AI) systems. They are often powered by AI
technologies and designed to perform tasks, assist users, or create immersive environments by
simulating behavior, speech, and emotional expression.

These characters can be either:

 Autonomous agents that make decisions and respond to users intelligently,
 Or pre-scripted avatars used for entertainment or instructional purposes.

They play a key role in Human-Computer Interaction (HCI), making interactions more engaging,
personal, and natural.

Types of Virtual Characters

There are two primary kinds of virtual characters commonly used in interaction design and
intelligent systems:

1. Synthetic Characters

Synthetic characters are computer-generated personas that simulate lifelike human or animal
behavior. They may include facial expressions, gestures, emotions, and conversational skills.
These characters are often embedded in simulations, video games, virtual environments, or social
robots.

Characteristics:

 Often have human-like features such as faces, voices, and body language.
 May use Natural Language Processing (NLP) to converse with users.
 Designed to appear intelligent and emotionally responsive.
 Can be embodied (physically represented, e.g., in robots) or disembodied (on screens or
in virtual spaces).

Examples:

 Virtual customer service agents in online banking platforms.
 Avatars in virtual reality (VR) environments that guide users through tasks.
 AI tutors in e-learning platforms that mimic human teachers.

Benefits:

 Increased engagement through emotional and social interaction.
 Ability to create believable, human-like simulations for training and education.
 Can personalize responses based on user behavior or preferences.

Challenges:

 Requires complex programming and AI to manage behavior and realism.
 May create ethical concerns if users believe they are interacting with real people.

2. Animated Agents

Animated agents are computer-generated characters with movement and graphical animations
that perform specific roles in software environments. They can move, point, talk, express
emotions, or demonstrate tasks using pre-programmed or AI-driven animation sequences.

Characteristics:

 Designed primarily for visual communication and task guidance.
 May be less autonomous than synthetic characters, often following a scripted behavior model.
 Often embedded in learning tools, help systems, video games, and user interfaces.
 Their animation can be 2D or 3D depending on the system’s complexity.

Examples:

 Microsoft’s Clippy (the animated paperclip assistant) in older versions of Microsoft Office.
 Game characters that give hints or tutorials at different levels.
 Language-learning avatars that help with pronunciation and grammar.

Benefits:

 Makes digital environments more visually interactive and engaging.
 Helps users understand how to perform tasks visually.
 Supports multimodal interaction (e.g., combining text, speech, and animation).

Challenges:

 Requires skilled animation and interface design to be effective.
 Poorly designed agents can distract or irritate users rather than assist them.

Emotional Agents & Embodied Conversational Interface Agents

In modern Human-Computer Interaction (HCI), intelligent interfaces are increasingly designed
to mimic or respond to human emotion, behavior, and dialogue. Two important types of such
agents are:

 Emotional Agents

 Embodied Conversational Interface Agents (ECIAs)

Both types aim to enhance user experience by making computer systems more intuitive,
interactive, and emotionally responsive.

Emotional Agents

Emotional agents are intelligent software entities or virtual characters that are capable of
recognizing, processing, simulating, and sometimes responding to emotional states. Their
purpose is to simulate emotions or emotional responses to create more natural, empathetic, and
human-like interactions.

Key Characteristics:

1. Emotion Modeling: Emotional agents often include internal models of emotion (e.g.,
using psychological theories like the OCC model—Ortony, Clore, Collins) to generate or
simulate emotional responses to environmental stimuli or user input.
2. Affective Computing: These agents use affective computing techniques to detect users'
emotions via facial recognition, voice tone, or physiological signals and adjust their
behavior accordingly.
3. Expressiveness: Emotional agents are designed to express emotions through facial
expressions, tone of voice, body language, or language choice.
4. Purpose: Emotional agents are particularly useful in environments where empathy,
engagement, or user motivation is important (e.g., education, therapy, customer service,
gaming).
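As a drastically simplified sketch, the appraisal idea behind such models (emotions arise from evaluating events against goals and expectations) might look like the following. The rules below are illustrative assumptions, not the actual OCC model:

```python
# Drastically simplified appraisal sketch inspired by the OCC idea that
# emotions arise from evaluating events against goals. This is NOT the
# full OCC model; the rules and labels are illustrative assumptions.

def appraise(event_desirable: bool, event_confirmed: bool) -> str:
    """Label an event with a simple emotion based on two appraisals:
    is the event desirable, and has it actually happened yet?"""
    if event_confirmed:
        return "joy" if event_desirable else "distress"
    # Prospective (not yet confirmed) events yield anticipation emotions.
    return "hope" if event_desirable else "fear"

print(appraise(event_desirable=True, event_confirmed=True))    # joy
print(appraise(event_desirable=False, event_confirmed=False))  # fear
```

An emotional agent built on this skeleton would feed appraisal results into its expressive channel, e.g. choosing a sympathetic phrasing when it appraises "distress" in the user's situation.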

Applications:

 Virtual therapists or companions for mental health support.
 Customer service bots that respond sympathetically to user frustration.
 Educational agents that adapt teaching strategies based on student emotions.

Benefits:

 Improve user trust and emotional comfort.
 Foster longer user engagement.
 Support personalized interactions based on mood and context.

Challenges:

 Emotion detection accuracy is still a technical challenge.
 Ethical concerns over emotional manipulation or misrepresentation.

🧍 Embodied Conversational Interface Agents (ECIAs)

Embodied Conversational Interface Agents are virtual agents with a visual presence (a digital
body or avatar) that can engage in interactive dialogue with users through natural language,
gestures, facial expressions, and body movements. These agents combine conversational
capabilities with physical embodiment, making interactions more natural and engaging.

Key Characteristics:

1. Embodiment: Unlike disembodied voice assistants (e.g., Siri, Alexa), ECIAs have a
visible form—either 2D or 3D avatars—that simulate human-like appearance and
behavior.
2. Multimodal Communication: ECIAs use a combination of text, voice, facial expressions,
gestures, and body language to communicate.
3. Dialogue Management: They possess conversational engines that allow for multi-turn,
context-sensitive dialogues.
4. Personality & Presence: These agents often have defined personalities and behavioral
traits to enhance believability and user connection.
5. Contextual Awareness: ECIAs may be equipped with sensors or data inputs to recognize
environmental or user context, enhancing relevance in interactions.

Applications:

 Virtual receptionists or concierges in kiosks and online platforms.
 Language learning tutors that respond with voice, gestures, and facial feedback.
 Healthcare support avatars that guide patients through treatments or procedures.
 Retail agents that simulate human shopping assistants.

Benefits:

 Human-like interaction improves understanding and retention.
 Visual cues support non-verbal communication, which is critical in effective dialogue.
 Increases accessibility for users with literacy or language limitations.

Challenges:

 Requires complex integration of AI, animation, and natural language processing.
 Uncanny valley effect—overly realistic avatars may cause discomfort.
 Technical limitations in maintaining real-time responsiveness across modalities.

User Frustrations in HCI

User frustration refers to the negative emotional response that occurs when users encounter
obstacles while interacting with a system or application. These obstacles might be technical (e.g.,
software errors), cognitive (e.g., complex interfaces), or emotional (e.g., feeling ignored or
confused). Frustration can lead to decreased productivity, user dissatisfaction, abandonment of
the system, or negative reviews. Understanding and mitigating user frustration is a key goal of
effective interaction design.

Common Causes of User Frustration

1. Poor Usability: Difficult-to-navigate interfaces, hidden functions, and ambiguous icons
can confuse users and hinder task completion.
2. System Errors: Frequent crashes, loading issues, or unhelpful error messages can lead
users to lose confidence in the system.
3. Lack of Feedback: When users perform an action but receive no indication of system
response, they may feel uncertain or ignored.
4. Unintuitive Workflows: If tasks require unnecessary steps or are not aligned with user
expectations, users can feel lost or overwhelmed.
5. Slow Performance: Lagging systems or applications that take too long to load content
can frustrate users, especially when working under time constraints.
6. Information Overload: Presenting too much data or too many choices at once can
overwhelm users, especially novice users.
7. Accessibility Barriers: Interfaces that do not consider the needs of users with disabilities
can exclude or alienate them.
8. Inconsistent Design: Interfaces that change behavior or appearance unpredictably
confuse users and force them to relearn tasks.

How to Deal with User Frustration

Designers, developers, and UX professionals can take several steps to minimize user frustration
and improve user experience:

1. User-Centered Design (UCD)

 Empathize with users through research, interviews, surveys, and usability testing.
 Involve users early in the design process to understand their goals, preferences, and pain
points.
 Create personas and scenarios to guide design decisions based on real-world needs.

2. Improve Usability and Learnability

 Ensure that interfaces are intuitive, with clear visual hierarchy and consistent design
patterns.
 Use meaningful labels, familiar icons, and logical groupings to help users easily find and
understand features.
 Minimize cognitive load by simplifying tasks and eliminating unnecessary steps.

3. Provide Effective Feedback

 Always inform users of system status (e.g., loading indicators, progress bars).
 Provide confirmations for successful actions and clear, constructive error messages when
something goes wrong.

 Offer visual or auditory cues when users interact with elements (e.g., button highlights,
sound effects).

4. Handle Errors Gracefully

 Design helpful and non-technical error messages that suggest corrective actions (e.g.,
"Please check your internet connection" instead of "Error 503").
 Implement undo and redo features so users can recover from mistakes.
 Use preventative design by disabling invalid inputs or providing suggestions (e.g.,
autocomplete, dropdowns).
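Two of the ideas above, translating technical errors into actionable messages and letting users recover via undo, can be sketched briefly. The message table and class names are illustrative assumptions:

```python
# Sketch of two "graceful error handling" techniques: friendly,
# actionable error messages and a simple undo stack. All names and
# messages are illustrative assumptions.

FRIENDLY_MESSAGES = {
    503: "Please check your internet connection and try again.",
    404: "We couldn't find that page. Try searching instead.",
}

def friendly_error(code: int) -> str:
    """Replace a raw error code with constructive, non-technical wording."""
    return FRIENDLY_MESSAGES.get(code, "Something went wrong. Please try again.")

class UndoStack:
    """Keeps past states so users can recover from mistakes."""
    def __init__(self):
        self._history = []

    def do(self, state: str) -> None:
        self._history.append(state)

    def undo(self):
        if len(self._history) > 1:
            self._history.pop()  # discard the most recent state
        return self._history[-1] if self._history else None

print(friendly_error(503))  # actionable wording instead of "Error 503"
stack = UndoStack()
stack.do("draft v1"); stack.do("draft v2")
print(stack.undo())         # back to "draft v1"
```

The point of both techniques is the same: errors should cost the user as little confidence and work as possible.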

5. Enhance Performance

 Optimize load times and system responsiveness.
 Use background loading or progressive loading techniques to reduce waiting times.
 Inform users of expected wait times and provide engaging content or animation during loading.

6. Maintain Consistency

 Use consistent layout, color schemes, navigation structures, and language throughout the
interface.
 Ensure that similar actions produce similar results to avoid confusion.

7. Educate and Support Users

 Provide onboarding tutorials, tooltips, and help documentation to assist new users.
 Offer in-app guidance or chat support for real-time help.
 Use FAQs and forums to empower users to solve common problems independently.

8. Conduct Continuous Testing and Feedback Collection

 Perform regular usability tests with diverse user groups.
 Use feedback tools (e.g., surveys, ratings, feedback forms) to gather user sentiments.
 Monitor analytics to identify drop-off points or frequently abandoned tasks.

Justification for the Application of Anthropomorphism to Interaction Design

Anthropomorphism refers to the attribution of human traits, emotions, intentions, or behaviors to
non-human entities such as animals, objects, or—most relevant to Human-Computer Interaction
(HCI)—technological systems like software interfaces, robots, and virtual assistants. In
interaction design, anthropomorphism is used to make digital systems more relatable, intuitive,
and emotionally engaging for users.

1. Improves Emotional Engagement and Trust

One of the most compelling justifications for using anthropomorphism is that it makes systems
appear more "human-like," thus improving emotional connection. When users interact with
systems that smile, respond politely, or speak in a conversational tone, they tend to feel more at
ease and develop a level of trust, which can encourage continued use and loyalty.

Example:

 Virtual assistants like Apple’s Siri or Amazon’s Alexa use human-like voices, tones, and
personalities. These traits make interactions more pleasant and relatable, fostering a sense
of companionship or helpfulness.

2. Enhances Learnability and Usability

Anthropomorphic elements help users understand how a system functions by tapping into
familiar social behaviors. When systems behave like humans—providing feedback, making eye
contact (in the case of robots or avatars), or even using gestures—users can draw on their
existing social knowledge to navigate the interface more intuitively.

Example:

 A chatbot that uses friendly greetings and conversational turn-taking makes it easier for
users to understand how to interact with the system, even without prior training.

3. Reduces User Anxiety and Frustration

Interacting with machines can be intimidating, especially for less tech-savvy users. By giving the
system a human face, voice, or behavior, anthropomorphism reduces the psychological distance
between humans and technology. This can decrease user anxiety, especially in high-stress
environments such as customer service, healthcare, or educational platforms.

Example:

 Educational software that includes animated characters offering encouragement (e.g.,
“Great job!” or “Let’s try again!”) helps students feel supported rather than judged by a
rigid machine.

4. Encourages Natural Interaction

Anthropomorphism enables the design of systems that support natural language processing,
gesture recognition, and expressive communication. This allows users to interact with the system
as they would with another person, removing the need for complex commands or technical
jargon.

Example:

 Voice-enabled systems like Google Assistant allow users to ask questions or give
commands in everyday language rather than typing structured queries or navigating
menus.

5. Aids in Error Recovery and User Support

When systems respond in a way that mimics human empathy—such as acknowledging
frustration or apologizing for mistakes—it can soften the negative impact of errors and motivate
users to continue using the system rather than abandoning it.

Example:

 A system that encounters an error might say: “Oops, something went wrong. Let me try
that again for you,” which is more comforting than a cold technical error code.

6. Increases Accessibility

For users with disabilities, anthropomorphic features such as speech interfaces, animated avatars,
or virtual agents can provide a more accessible and inclusive user experience. These features can
substitute for traditional text-based interfaces and cater to users with visual, motor, or cognitive
impairments.

Example:

 An assistive robot with facial expressions and voice output can help guide elderly users
through medication reminders or emergency instructions more effectively than plain text
notifications.

7. Encourages Social Behaviors That Benefit the Interaction

Anthropomorphism can influence users to exhibit socially desirable behaviors such as politeness,
patience, and attention to feedback. These behaviors, in turn, can enhance the quality and
efficiency of human-computer interactions.

Example:

 Users are more likely to say “please” and “thank you” when interacting with voice
assistants, creating a respectful and pleasant interaction loop.

Ethical and Design Considerations

While anthropomorphism has many benefits, it should be applied thoughtfully:

 Avoid deception: Users should not be misled into believing the system has real emotions
or consciousness.

 Maintain functionality: Aesthetic and emotional appeal should not come at the cost of
system performance or efficiency.
 Respect user preferences: Not all users appreciate anthropomorphism—especially in
professional or utilitarian contexts.

General Design Concerns of Virtual Characters

Designing virtual characters goes beyond technical implementation; it involves ensuring that the
characters evoke trust, relatability, and utility from users. A poorly designed character can
diminish user engagement, while a well-crafted one can enhance interaction, learning, or
entertainment. The four major design concerns are:

1. Believability of Virtual Characters

Believability refers to how convincingly a virtual character mimics human or lifelike qualities in
its interaction, expression, and responsiveness. A believable character is one that users perceive
as consistent, emotionally aware, and contextually appropriate—even if they know it’s artificial.

Key Elements of Believability:

 Consistency: The character should behave consistently with its personality, role, and
context. Erratic or unpredictable behavior breaks the illusion.
 Intentionality: The character’s actions should appear purposeful rather than random or
robotic.
 Emotional Responsiveness: The character should respond appropriately to user input or
environmental changes (e.g., smiling in response to praise).
 Personality: Embedding distinct traits and conversational tone helps users relate to the
character and feel more immersed in the interaction.
 Suspension of Disbelief: The goal is not necessarily to convince the user that the
character is human, but to make them emotionally and cognitively engage with it as if it
were.

Challenges:

 Avoiding the “Uncanny Valley,” where characters appear almost human but elicit
discomfort.
 Balancing realism with computational limitations.

2. Appearance

Appearance deals with the visual design and representation of virtual characters. This includes
their form, attire, facial expressions, and overall aesthetic styling. Appearance is often the first
thing users notice and plays a major role in shaping expectations.

Key Considerations:

 Stylization vs. Realism: Some applications benefit from cartoon-like avatars (e.g.,
education, games), while others may need realistic avatars (e.g., medical simulations).
 Cultural Sensitivity: Appearance should reflect or respect the cultural background and
expectations of the user audience.
 Expressiveness: Facial features and body postures must be capable of conveying a wide
range of emotions and reactions.
 Gender and Diversity: Characters should be inclusive, avoid stereotypes, and represent
diverse user groups when necessary.
 Adaptive Appearance: In some cases, allowing users to customize the character's look
enhances identification and comfort.

Challenges:

 High-fidelity modeling is resource-intensive.
 Realism can increase expectations for interaction quality.

3. Behavior

Behavior is how the virtual character acts and reacts in response to stimuli, including user
interaction, environmental events, or internal programming. This includes verbal
communication, gestures, emotions, and task execution.

Key Components:

 Social Norms: Characters should respect social cues such as turn-taking in conversation,
personal space, and polite expressions.
 Context Awareness: Behavior should adapt to different situations (e.g., professional tone
in business settings, friendly tone in games).
 Animation and Motion: Movements should be smooth and meaningful. Sudden or
unnatural motion can break immersion.
 Goal-Oriented Actions: Characters should exhibit behavior aligned with their intended
function (e.g., a teaching agent patiently guides; a security agent strictly enforces rules).
 Feedback Mechanisms: Behaviors that show understanding or confirmation of user input
(e.g., nodding, saying “I understand”) improve communication.

Challenges:

 Predicting all possible user actions and inputs.
 Implementing behavior trees or machine learning for dynamic responses.

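A behavior tree, mentioned above as one way to manage dynamic responses, can be sketched minimally using the classic selector/sequence pattern. The character's actions and context keys below are illustrative assumptions:

```python
# Minimal behavior-tree sketch for a virtual character, using the classic
# Selector (fallback) and Sequence nodes. Actions and context keys are
# illustrative assumptions, not a full behavior-tree framework.

def selector(*children):
    """Succeeds as soon as any child succeeds (try fallbacks in order)."""
    def run(ctx):
        return any(child(ctx) for child in children)  # short-circuits
    return run

def sequence(*children):
    """Succeeds only if every child succeeds, in order."""
    def run(ctx):
        return all(child(ctx) for child in children)
    return run

# Leaf conditions/actions for a tutoring character:
def user_greeted(ctx):    return ctx.get("greeted", False)
def greet_user(ctx):      ctx["greeted"] = True; return True
def answer_question(ctx): return ctx.get("question") is not None

# Greet first if not already greeted, then try to answer the question.
tutor = sequence(selector(user_greeted, greet_user), answer_question)

ctx = {"question": "What is HCI?"}
print(tutor(ctx))      # True: the character greeted, then answered
print(ctx["greeted"])  # True
```

Because the selector short-circuits, `greet_user` only runs when the greeting condition fails, which is exactly the goal-oriented, context-aware behavior the section describes.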
4. Mode of Interaction

Mode of interaction refers to how users communicate or engage with the virtual character. It
defines the channels of input/output and the responsiveness of the character.

Types of Interaction Modes:

 Text-based Interaction: Characters interact via written text in chat windows—suitable for
constrained environments.
 Voice Interaction: Allows for natural and hands-free interaction, ideal for accessibility
and realism.
 Gesture-based Interaction: Using body movement or hand gestures, often in VR/AR
environments.
 Touch or Click Interfaces: Users interact through selecting options or dragging/dropping
on-screen items.
 Multimodal Interaction: Combining multiple modes (e.g., voice + gesture + facial
recognition) for rich user experiences.

Design Considerations:

 Accessibility: Interaction modes should accommodate users with different abilities (e.g.,
hearing or vision impairments).
 Simplicity: Avoid overcomplicating the interface—interactions should be intuitive and
user-friendly.
 Latency and Responsiveness: The system should respond quickly to user actions to avoid
confusion or frustration.
 Consistency: The interaction pattern should be predictable and align with user
expectations based on real-world experiences.

Challenges:

 Integrating multiple modes can increase system complexity.


 Natural language understanding and emotion recognition remain technically difficult.
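The routing behind a multimodal character can be sketched as a simple dispatcher: each input event carries its mode, and the system sends it to the matching handler, falling back gracefully for unsupported modes. Handler names and event fields here are illustrative assumptions.

```python
# Sketch of a multimodal input dispatcher. Each event declares its mode
# ("text", "voice", "gesture"); unknown modes get a consistent fallback.

def handle_text(event):
    return f"reply to text: {event['payload']}"

def handle_voice(event):
    return f"spoken reply to: {event['payload']}"

def handle_gesture(event):
    return f"acknowledge gesture: {event['payload']}"

HANDLERS = {
    "text": handle_text,
    "voice": handle_voice,
    "gesture": handle_gesture,
}

def dispatch(event):
    handler = HANDLERS.get(event["mode"])
    if handler is None:
        # Graceful fallback keeps the interaction predictable.
        return "Sorry, I did not understand that input."
    return handler(event)

print(dispatch({"mode": "voice", "payload": "hello"}))
```

A real multimodal system would fuse signals (e.g. voice plus gesture) before dispatching, which is part of the integration complexity noted above.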

CHAPTER SIX
PROCESS OF INTERACTION DESIGN

Waterfall Model, Iterative Model, and Spiral Model of Interaction Design

1. Waterfall Model

The Waterfall Model is one of the earliest process models introduced in software engineering. It
is a linear sequential model that divides software development into distinct phases.

Phases include:

 Requirement Analysis
 System Design
 Implementation (coding)
 Testing
 Deployment
 Maintenance

In the context of interaction design, this model assumes that user requirements can be fully
captured upfront, and the design process flows in a one-directional manner—like a waterfall.

Advantages:

 Simple and easy to understand and manage.


 Well-suited for projects with clearly defined requirements.

Disadvantages:

 Little room for flexibility or changes after the design phase.


 Users are involved only in the beginning and at the end—less opportunity for feedback
during development.
 Not ideal for dynamic environments where user needs evolve.

2. Iterative Model

The Iterative Model focuses on the cyclic nature of design, development, and evaluation. It
emphasizes refining and improving the product through successive versions.

Key Characteristics:

 Develop → Test → Evaluate → Refine (Repeat)


 Constant user feedback loop
 Supports incremental improvement
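The Develop → Test → Evaluate → Refine cycle can be expressed as a small loop. The function below is a toy sketch under assumed stand-in functions (`evaluate`, `refine`), not a real development process.

```python
# Illustrative sketch of the iterative cycle: keep refining a design
# until evaluation finds no remaining issues, or a cycle budget runs out.

def iterate(design, evaluate, refine, max_cycles=10):
    history = []
    for cycle in range(1, max_cycles + 1):
        issues = evaluate(design)          # test with users
        history.append((cycle, len(issues)))
        if not issues:                     # usability goal met
            break
        design = refine(design, issues)    # feed findings back into design
    return design, history

# Toy stand-ins: each refinement cycle fixes one outstanding issue.
evaluate = lambda d: d["open_issues"]
refine = lambda d, issues: {"open_issues": issues[1:]}

final, history = iterate({"open_issues": ["confusing menu", "slow feedback"]},
                         evaluate, refine)
print(history)  # [(1, 2), (2, 1), (3, 0)]
```

The `history` list makes the model's key property visible: the issue count shrinks with each evaluate-and-refine pass.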

Advantages:

 Encourages user feedback throughout the design process.


 Allows flexibility and gradual evolution of the system.
 Defects can be identified early.

Disadvantages:

 May lead to scope creep without proper control.


 Project planning can be challenging due to changing requirements.

3. Spiral Model

The Spiral Model combines the ideas of the iterative approach with risk analysis. It is best suited
for large, high-risk projects. Each loop in the spiral represents a development phase.

Phases in Each Spiral Loop:

 Planning
 Risk Analysis
 Engineering (design & development)
 Evaluation

Key Features:

 Focuses on risk assessment and reduction.


 Encourages prototyping at early stages.
 Allows user involvement in all phases.

Advantages:

 Risk-driven model ensures stability.


 Flexible to changes and refinements.
 Supports continual user involvement and validation.

Disadvantages:

 Complex and expensive to manage.


 Not suitable for small-scale projects due to overhead costs.

Life Cycle Models in Software Engineering and HCI

Life cycle models guide the process of software and interface development. While software
engineering models focus on technical aspects, HCI life cycle models center on user involvement
and usability.

Key Life Cycle Models:

a) Software Engineering Life Cycle Models:

 Waterfall Model
 V-Model (Verification and Validation)
 Agile Development Model
 Spiral Model
 Incremental Model

b) HCI Life Cycle Models:

HCI life cycles include steps to understand users and tasks, design solutions, and evaluate
usability.

Common Phases:

 Requirement Gathering (user goals, tasks, context)


 Design (conceptual, interactive, and visual design)
 Prototyping (low/high-fidelity)
 Evaluation (usability testing, user feedback)
 Implementation
 Deployment and Maintenance

Differences:

 HCI models are user-centered and emphasize iteration and evaluation.


 Software models may focus more on technical feasibility and efficiency.

6.3 User Testing

User testing is a process of evaluating a product by testing it with real users. It helps uncover
usability problems and areas of improvement.

Types of User Testing:

a) Usability Testing:

 Observing users as they attempt tasks using the system.


 Measures time taken, errors made, success rate, and user satisfaction.
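The measures listed above can be computed directly from per-session records. The record fields and numbers below are illustrative, not real test data.

```python
# Computing standard usability measures from simple session records.
from statistics import mean

sessions = [
    {"time_s": 42.0, "errors": 1, "completed": True},
    {"time_s": 55.5, "errors": 3, "completed": True},
    {"time_s": 90.0, "errors": 5, "completed": False},
]

success_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_time = mean(s["time_s"] for s in sessions)
avg_errors = mean(s["errors"] for s in sessions)

print(f"success rate: {success_rate:.0%}")   # 67%
print(f"mean time:    {avg_time:.1f} s")     # 62.5 s
print(f"mean errors:  {avg_errors:.1f}")     # 3.0
```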

b) A/B Testing:

 Comparing two versions (Version A vs. Version B) to see which performs better based
on specific metrics.
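An A/B comparison on a task-success metric can be sketched as below, including a two-proportion z statistic for judging whether the difference is likely real. The counts are made up for illustration.

```python
# Minimal A/B comparison: which version's users completed the task more
# often, and is the gap larger than chance would suggest?
from math import sqrt

n_a, wins_a = 200, 48    # Version A: 48 of 200 users completed the task
n_b, wins_b = 200, 66    # Version B: 66 of 200 users completed the task

a = wins_a / n_a
b = wins_b / n_b

# Pooled standard error for the difference between two proportions.
p_pool = (wins_a + wins_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (b - a) / se

print(f"A: {a:.1%}  B: {b:.1%}  z = {z:.2f}")
# |z| above roughly 1.96 suggests a real difference at the 5% level.
```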

c) Remote Testing:

 Testing performed with users in different locations, often using screen-sharing or automated tools.

Goals:

 Identify friction points in the interface.


 Improve user experience.
 Validate design decisions.

Benefits:

 Provides empirical data.


 Reduces development costs by catching issues early.
 Enhances user satisfaction.

Formative Evaluation, Field Study, and Controlled Experiment

These are techniques used to evaluate the design and performance of an interactive system.

Formative Evaluation

Formative evaluation is a crucial part of the design and development process, especially in
human-computer interaction (HCI) and software development. It is conducted during the early or
middle phases of a project to gather feedback that helps improve the design before the final
product is completed. Unlike summative evaluation, which measures the effectiveness of a
finished product, formative evaluation focuses on identifying usability problems and design
flaws while there is still flexibility to make changes.

This type of evaluation often involves real users interacting with prototypes, mockups, or early
versions of the system. The feedback collected is typically qualitative, focusing on understanding
where users struggle, what confuses them, and how intuitive the interface is. Common
techniques used in formative evaluation include think-aloud protocols, where users verbalize
their thoughts while using the system, heuristic evaluations conducted by usability experts, and
cognitive walkthroughs that simulate user problem-solving processes.

Formative evaluation encourages iterative design, where each cycle of testing informs
subsequent revisions. This continuous feedback loop helps designers create more user-friendly
interfaces and ensures that the system aligns with user needs and expectations. Because
formative evaluation is flexible and informal, it can be adapted to various stages of development
and different types of systems. Ultimately, it helps to prevent costly redesigns later by catching
problems early.

Methods:

 Think-aloud protocol
 Paper prototyping
 Heuristic evaluation

Field Study

Field studies provide an in-depth understanding of how users interact with technology in their
natural environments. Instead of controlling or simulating the environment, field studies observe
real users performing their tasks where the system will actually be used—whether that be in a
workplace, home, or public space. This contextual approach allows designers and researchers to
gain insights into how environmental factors, social dynamics, and physical conditions affect
user behavior.

One of the major strengths of field studies is their ability to reveal the complexities and nuances
of real-world usage that laboratory tests may miss. For instance, a user might behave differently
when under time pressure, when distracted by coworkers, or when using the system alongside
other tools. Field studies often involve ethnographic methods such as participant observation,
interviews, diary studies, or video recordings that document natural interactions over time.

Because of their immersive nature, field studies can uncover unexpected needs and challenges,
making them invaluable for designing systems that fit seamlessly into users’ workflows and
lifestyles. They also provide insights into user attitudes, motivations, and the broader social and
organizational context, which can significantly impact technology adoption and success.

Example:

 Studying how nurses use electronic health records during a shift.

Benefits:

 Reveals contextual factors that affect usability.


 Captures authentic user behavior.

Controlled Experiment

Controlled experiments are a fundamental scientific method used in HCI to rigorously test
hypotheses about design choices, interface features, or system performance. Unlike formative
evaluation and field studies, controlled experiments take place in a carefully managed
environment where variables can be manipulated, and external factors minimized. This
controlled setting allows researchers to isolate specific factors and measure their direct impact on
user performance or satisfaction.

In a controlled experiment, participants are typically assigned to different groups or conditions
randomly to ensure that results are not biased by participant characteristics. Researchers may
vary interface designs, interaction methods, or task instructions and then measure outcomes such
as task completion time, error rates, accuracy, or subjective satisfaction ratings. This approach

yields quantitative data that can be statistically analyzed to determine which design alternatives
perform better and why.

Controlled experiments are especially useful when precise, objective comparisons are needed—
such as deciding between two competing interface layouts or testing the effectiveness of a new
feature. The results provide strong evidence for making design decisions and are often used to
validate findings from more exploratory methods like formative evaluations and field studies.
However, because experiments are conducted in artificial settings, their findings may sometimes
lack ecological validity; that is, they might not fully capture how users behave in real-world
contexts.

Example:

 Testing two menu designs to determine which leads to faster task completion.

Benefits:

 Allows statistical comparison.


 Isolates specific design factors.
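The menu-design example above usually comes down to comparing mean task times between two groups. A minimal sketch of that analysis, using Welch's t statistic on made-up timing data:

```python
# Comparing task-completion times for two menu designs with Welch's
# t statistic (unequal variances allowed). Data values are illustrative.
from statistics import mean, variance
from math import sqrt

menu_a = [34.1, 29.8, 41.0, 37.2, 30.5, 35.9]   # seconds per task
menu_b = [27.4, 25.0, 29.9, 26.8, 24.1, 28.2]

def welch_t(x, y):
    # Sample variances divided by group sizes give the standard error terms.
    vx, vy = variance(x) / len(x), variance(y) / len(y)
    return (mean(x) - mean(y)) / sqrt(vx + vy)

t = welch_t(menu_a, menu_b)
print(f"t = {t:.2f}")  # a large |t| favours the faster design (menu B here)
```

A full analysis would also compute degrees of freedom and a p-value, but the statistic alone shows how the experiment yields a quantitative, comparable result.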

Basic Issues in Designing and Testing Typical Tasks

When designing and testing tasks within interactive systems, several fundamental issues must be
carefully considered to ensure that the tasks are realistic, meaningful, and useful for evaluating
the system’s usability. These issues revolve around defining tasks that truly represent user goals,
ensuring tasks are measurable, and creating test environments that yield valid and reliable data.
Here are some of the key issues:

1. Task Representativeness

One of the first and most important issues in task design is ensuring that the tasks chosen for
testing are representative of the real-world activities users will perform with the system. Tasks
should reflect actual user goals, workflows, and contexts rather than artificial or trivial actions
that may not reveal meaningful usability insights.

If tasks do not closely mirror real usage, the results of testing may not generalize well to actual
user experience. For example, testing a word processor with simple text entry tasks may miss
critical issues related to formatting or collaboration that users encounter daily. Designers must
therefore analyze user requirements and domain activities carefully to define representative tasks
that capture the diversity and complexity of real-world usage.

2. Task Complexity and Scope

Balancing the complexity and scope of test tasks is another essential concern. Tasks that are too
simple may fail to reveal meaningful usability problems because they don’t challenge the system
or the user enough. On the other hand, tasks that are too complex can overwhelm users or
introduce confounding factors, making it difficult to isolate specific usability issues.

Choosing tasks of varying complexity allows evaluators to test different aspects of system
performance, from basic operations to more advanced or combined functions. It is also important
to define clear start and end points for tasks, so performance can be reliably measured and
compared.

3. Task Clarity and Instruction

Tasks must be clearly and unambiguously defined so that users understand what they are
expected to do during testing. Poorly worded or ambiguous instructions can confuse participants
and skew results. Providing written instructions, demonstration, or training beforehand can help
users focus on task performance rather than figuring out what the task entails.

Moreover, the wording should avoid biasing users toward particular strategies or solutions.
Instructions should be neutral, encouraging natural interaction rather than leading users toward
expected outcomes.

4. Measuring Task Performance

Another crucial issue is deciding how to measure task performance effectively. Common
measures include task completion time, error rates, success/failure rates, and subjective
satisfaction ratings. Selecting appropriate metrics depends on the goals of the evaluation. For
instance, speed may be more critical in time-sensitive applications, while accuracy might be
paramount in safety-critical systems.

In addition, evaluators should consider collecting qualitative data such as user comments,
observations, or think-aloud protocols to supplement quantitative measures. This combined
approach provides a richer understanding of where and why users face difficulties.

5. Environmental and Contextual Factors

The environment in which task testing occurs can significantly affect user performance.
Laboratory settings provide control but may lack ecological validity because they do not
replicate real-world distractions, interruptions, or multitasking conditions. Conversely, field
testing captures natural context but introduces variability that can complicate data interpretation.

Designers must decide the appropriate balance between control and realism, considering factors
such as lighting, noise, device types, and user state (e.g., fatigue or stress) when planning task
tests. Where possible, simulating typical user environments helps ensure findings are applicable
in practice.

6. User Diversity and Task Variation

Users differ in their experience levels, cognitive abilities, preferences, and goals. Thus, tasks
should be tested with a diverse user population to ensure the system accommodates a broad
range of needs. Additionally, tasks might need to vary in ways that reflect this diversity, such as
offering multiple ways to accomplish the same goal or providing customizable options.

Ignoring user diversity risks designing systems that work well only for a narrow segment,
limiting usability for others. Inclusive task design and testing help create systems that are robust
and accessible.

7. Task Sequencing and Learning Effects

When multiple tasks are tested in one session, the order in which tasks are presented can
influence performance due to learning or fatigue effects. Users may improve simply because they
become more familiar with the system (learning effect), or their performance may decline due to
tiredness or boredom.

To minimize these biases, task sequences can be randomized or counterbalanced across
participants. Rest breaks may also be necessary to maintain user engagement and performance
consistency.
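Counterbalancing can be generated mechanically. The sketch below rotates a task list so that, across participants, every task appears once in every serial position (a cyclic Latin-square arrangement); the task names are illustrative.

```python
# Counterbalancing task order across participants with a Latin-square
# style rotation, so no task systematically benefits from learning
# effects or suffers from fatigue.

def rotated_orders(tasks):
    n = len(tasks)
    return [tasks[i:] + tasks[:i] for i in range(n)]

tasks = ["search", "compose", "share", "delete"]
orders = rotated_orders(tasks)

for participant, order in enumerate(orders, start=1):
    print(f"P{participant}: {order}")
# Each task appears exactly once in every serial position.
```

With more participants than orders, each order is simply reused across groups of participants.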

8. Ethical and Practical Constraints

Finally, ethical considerations such as informed consent, privacy, and minimizing user stress
must be addressed in task design and testing. Tasks should not cause undue frustration or
discomfort. Practical constraints, including time limits, resource availability, and technical
limitations, also impact how tasks are designed and conducted.

Evaluators must strike a balance between thoroughness and feasibility, ensuring testing sessions
are productive yet respectful of participants’ time and well-being.

Domain Analysis of Users

Domain analysis involves researching and understanding the environment, needs, tasks, tools,
and behaviors of target users. It is a critical first step in user-centered design.

Purpose:

 Ensure that the design aligns with the actual work and goals of users.
 Reduce assumptions and guesswork in design.

Components of Domain Analysis:

a) User Goals:

 What are users trying to accomplish?
 Are their goals strategic (long-term) or tactical (short-term)?

b) User Tasks:

 Specific actions users perform to achieve goals.


 Tasks must be observed and documented accurately.

c) Work Environment:

 Physical, social, and technological context in which users operate.


 Includes noise, lighting, devices used, etc.

d) Constraints and Limitations:

 Time pressure, accessibility, regulations, etc.

e) Personas and Scenarios:

 Personas are fictional characters representing user types.


 Scenarios describe use-cases to help in understanding user flows.
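Personas and scenarios are often kept as shared, structured records. A lightweight sketch (the fields, the "Ada" persona, and the handover scenario are all invented for illustration):

```python
# Personas and scenarios as lightweight shared records.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    role: str
    goals: list
    frustrations: list = field(default_factory=list)

@dataclass
class Scenario:
    persona: Persona
    context: str
    steps: list

ada = Persona(
    name="Ada",
    role="Ward nurse, 8 years' experience",
    goals=["record vitals quickly", "avoid transcription errors"],
    frustrations=["slow logins between patients"],
)

handover = Scenario(
    persona=ada,
    context="End-of-shift handover at a busy ward station",
    steps=["open patient list", "review flagged vitals", "add handover note"],
)

print(f"{ada.name}: {ada.goals[0]} ({handover.context})")
```

Keeping personas as data rather than prose makes it easy to trace each design decision back to a concrete user goal.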

Benefits:

 Builds empathy with users.


 Leads to more intuitive and usable systems.
 Informs better requirement definitions.

Basic Issues in Selecting Typical Users and Their Domains

Selecting typical users and their corresponding domains is a crucial step in designing and
evaluating interactive systems. The success of user-centered design and usability testing largely
depends on how well the chosen users represent the actual user base and how accurately their
real-world environments and tasks (domains) are understood and incorporated. Several
fundamental issues arise in this selection process:

1. Identifying Representative Users

One of the primary issues is to ensure the users selected for design or testing represent the broad
population of end-users. “Typical users” means those who are most likely to use the system
regularly and whose needs, skills, and behaviors reflect the diversity of the user base.

 User Diversity: Users vary widely in experience, expertise, age, cultural background,
disabilities, cognitive abilities, and motivations. Selecting only expert users or those
familiar with technology can bias the results and lead to designs that exclude novices or
other groups.

 User Roles: In many systems, different users have different roles (e.g., administrators,
end-users, technicians), each with distinct tasks and requirements. Selecting users across
these roles ensures that the system accommodates various perspectives and needs.

2. Understanding the User Domain

The domain refers to the real-world environment, context, and tasks in which users operate the
system. Accurately capturing this domain is essential for meaningful design and evaluation.

 Domain Complexity: Domains may range from simple (e.g., using a calculator) to
highly complex (e.g., air traffic control, medical diagnosis). Understanding domain-
specific constraints, workflows, and goals is vital to selecting users who truly represent
typical domain challenges.
 Context of Use: The physical, social, and organizational contexts influence how users
interact with technology. For example, a mobile worker’s domain may involve outdoor
environments, variable lighting, and intermittent connectivity, whereas an office user’s
domain may involve collaborative workflows and multi-tasking.

3. Recruitment Challenges

Finding and recruiting typical users can be difficult and resource-intensive, which poses practical
challenges:

 Access and Availability: Some user groups might be hard to reach due to geographic,
institutional, or privacy barriers. For example, recruiting healthcare professionals or
children requires permissions and specialized approaches.
 Willingness to Participate: Users may be reluctant to participate due to time constraints,
distrust, or lack of interest, potentially leading to a sample biased toward more motivated
individuals.
 Sample Size and Representativeness: Small sample sizes limit the generalizability of
findings. However, very large samples may be impractical. Balancing these factors is
essential to get meaningful yet feasible results.

4. Avoiding Bias in User Selection

Bias can creep in at multiple stages:

 Self-Selection Bias: When users volunteer themselves, the sample might over-represent
enthusiastic or tech-savvy individuals.
 Selection Bias: Choosing users based on convenience (e.g., colleagues, friends) may not
reflect the broader user base.

 Stereotyping: Assuming certain users fit a stereotype can lead to ignoring diversity and
unique needs.

Designers must use careful sampling strategies, such as stratified or purposive sampling, to
mitigate bias.
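Stratified sampling, mentioned above, can be sketched in a few lines: recruit from each user role in proportion to its share of the population, so no role is accidentally over- or under-represented. The pool data and role names below are invented for illustration.

```python
# Sketch of proportional stratified sampling over user roles.
import random

pool = (
    [{"id": i, "role": "nurse"} for i in range(60)]
    + [{"id": i, "role": "physician"} for i in range(60, 80)]
    + [{"id": i, "role": "admin"} for i in range(80, 100)]
)

def stratified_sample(pool, key, total, rng=random):
    # Group the pool into strata by the chosen attribute.
    strata = {}
    for person in pool:
        strata.setdefault(person[key], []).append(person)
    sample = []
    for group in strata.values():
        # Proportional allocation, with at least one person per stratum.
        k = max(1, round(total * len(group) / len(pool)))
        sample.extend(rng.sample(group, k))
    return sample

random.seed(7)
sample = stratified_sample(pool, "role", total=10)
print(sorted({p["role"] for p in sample}))  # all three roles represented
```

Purposive sampling, by contrast, would select specific individuals deliberately (e.g. one novice and one expert per role) rather than proportionally.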

5. Understanding User Goals and Tasks

Selecting users involves understanding their goals and typical tasks within the domain:

 Goal Alignment: Users should be chosen whose goals align with the system’s purpose to
ensure relevance.
 Task Variety: Users performing different tasks or using different features provide
comprehensive insights into system usability.
 Skill Levels: Including users with varying expertise (novices, intermediates, experts)
helps evaluate how well the system supports learning and expert use.

6. Cultural and Societal Considerations

Users come from diverse cultural backgrounds influencing expectations, interaction styles, and
acceptance of technology.

 Localization Needs: User interfaces and workflows may need adaptation based on
language, cultural norms, and societal practices.
 Ethical Concerns: Respecting cultural sensitivities and privacy during recruitment and
testing is paramount.

7. Dynamic User Populations

Users and their domains evolve over time due to changes in technology, work practices, or
societal trends.

 Future-Proofing: Selecting users who represent anticipated future trends or tasks can
help design adaptable systems.
 Continuous Engagement: Ongoing involvement of users in design cycles ensures the
system remains relevant and usable.

Issues in Preparing Test Conditions

Preparing test conditions is a critical phase in usability testing and system evaluation, where the
environment, tasks, tools, and protocols are set up so that user interactions can be observed and
analyzed effectively. However, there are several challenges and issues that arise during this
preparation, which, if not properly addressed, can compromise the validity, reliability, and
usefulness of the test outcomes.

1. Defining Realistic and Relevant Tasks

 Task Representativeness: The tasks users perform during testing should closely reflect
real-world activities. Designing artificial or trivial tasks may not reveal genuine usability
issues or user behavior.
 Task Complexity: Tasks that are too simple may not stress the system or reveal usability
problems; tasks too complex may overwhelm users or introduce confounding variables.
 Task Clarity: Tasks must be clearly described and understandable to participants to
avoid confusion and inconsistent performance.
 Balancing Standardization and Flexibility: Tests need consistent tasks for comparison
across users, yet should allow some flexibility to capture natural user behaviors.

2. Simulating the Realistic Environment

 Physical Environment: The testing environment should resemble the actual usage
context as much as possible. For example, testing a mobile app in a quiet lab room differs
from testing it outdoors on a noisy street.
 Technological Setup: Hardware, software, network conditions, and peripherals should
match the real-life scenarios users experience to prevent artificial results.
 Distraction and Interruptions: Real environments often have interruptions; overly
controlled settings may ignore these factors that affect usability.
 Ergonomics: Seating, lighting, screen size, and input devices can affect user comfort and
performance, and thus need careful consideration.

3. User Selection and Preparation

 User Familiarity: Deciding whether users should be novices or experienced, and ensuring they have the right background to complete the tasks.
 Training and Instruction: Providing enough instruction without biasing or over-
coaching users is tricky. Users must understand how to use the system without being
influenced in ways that distort natural usage.
 Motivation and Engagement: Users need to be motivated to perform the tasks seriously;
artificial tasks or test settings may reduce engagement.
 Ethical Considerations: Ensuring informed consent, privacy, and comfort for users,
especially when observing or recording their interactions.

4. Tool and Equipment Preparation

 Reliable Data Collection Tools: Tools for recording screen actions, mouse clicks,
keystrokes, eye tracking, or verbal protocols must be tested beforehand to ensure accurate
data capture.
 Backup Plans: Equipment failure or software crashes can disrupt tests; having backups
and contingency plans is essential.
 Calibration: Some equipment, like eye trackers or biometric sensors, require precise
calibration to work correctly.

5. Timing and Scheduling

 Test Duration: Ensuring that the test is neither too long (which may cause fatigue) nor
too short (which may miss important behaviors).
 Scheduling Conflicts: Coordinating test times that suit participants and testers, while
avoiding rushed or tired users.
 Breaks and Rest: Planning for breaks during longer sessions to maintain user focus and
comfort.

6. Controlling External Variables

 Minimizing Bias: Testers must avoid influencing users with verbal or non-verbal cues.
Instructions should be neutral.
 Consistency Across Sessions: Keeping conditions consistent across different test
participants to ensure comparable results.
 Randomization: If multiple tasks or conditions are tested, randomizing order to avoid
learning effects or fatigue biases.

7. Ethical and Privacy Concerns

 User Consent: Users must be informed about what data will be collected and how it will
be used.
 Data Confidentiality: Ensuring personal data and test recordings are securely stored and
anonymized if necessary.
 User Comfort: Avoiding situations that cause stress or discomfort during testing.

8. Preparing for Unexpected Scenarios

 Handling User Errors: Designing test conditions that anticipate users making mistakes
without invalidating the test.
 Dealing with Technical Problems: Having procedures to pause, restart, or reschedule
tests if technical issues arise.
 Adaptability: Being ready to adjust tasks or conditions if initial plans prove ineffective.

