Com 325 HCI
designed interfaces can have serious implications: they fail to engage the end user over the longer period and drive users away over time. A poor design may also lose money because the workforce is less productive.
Components of HCI
Below are the components of HCI:
User
The Machine
Interaction
User
We may use the word ‘User’ to refer to an individual user or a community of users working
together. It is critical to understand how people’s sensory systems (sight, hearing, and touch)
transmit information. Furthermore, different users develop different conceptions or mental
models about their experiences, and they acquire and retain information in different ways.
Cultural and national variations also play a role.
The Machine
When we say ‘Computer,’ we are referring to a wide range of technology, from desktop computers to large-scale computer systems. If we were talking about the creation of a Website, for example, the Website would be referred to as the machine. Mobile phones and VCRs are examples of devices that can also be considered computers.
Interaction
Humans and machines have distinct characteristics. Despite this, HCI makes every effort to ensure that they get along and communicate effectively. You must apply what you know about humans and computers to build a functional framework, and you must communicate with potential users during the design process. In real-world projects, the schedule and budget are also critical constraints.
The Goals of HCI
The aim of HCI is to create systems that are both useful and safe, as well as functional. To create
usable computer systems, developers must try to understand the elements that influence how
people use technology, develop tools and approaches to aid in the development of appropriate
systems, and achieve efficient, effective, and safe interaction by putting people first. Underlying
the whole theme of Human computer interaction is the belief that people using a computer
system should come first. Their needs, capabilities, and preferences for conducting various tasks
should direct developers in the way that they design systems. People should not have to change
the way that they use a system to fit in with it. Instead, the system should be designed to match
their requirements.
Much of the research in the field of Human Computer Interaction (HCI) takes an interest in:
Methods for designing new computer interfaces, thereby optimizing a design for a desired
property, such as learnability, findability, or efficiency of use.
Methods for implementing interfaces, e.g., by means of software libraries.
Methods for evaluating and comparing interfaces with respect to their usability and other
desirable properties.
Methods for studying human computer use and its sociocultural implications more broadly.
Methods for determining whether the user is human or computer.
Models and theories of human computer use as well as conceptual frameworks for the design
of computer interfaces, such as cognitivist user models, Activity Theory or ethno-methodological
accounts of human computer use.
Perspectives that critically reflect upon the values that underlie computational design, computer
use and HCI research practice.
Visions of what researchers in the field seek to achieve might vary. When pursuing a cognitivist
perspective, researchers of HCI may seek to align computer interfaces with the mental model
that humans have of their activities. When pursuing a post-cognitivist perspective, researchers of
HCI may seek to align computer interfaces with existing social practices or existing sociocultural
values.
Researchers in HCI are interested in developing design methodologies, experimenting with
devices, prototyping software, and hardware systems, exploring interaction paradigms, and
developing models and theories of interaction.
Importance of HCI
Below are a few benefits of Human-Computer Interaction (HCI).
User-Friendly Applications The most significant benefit of implementing human computer
interaction is the creation of more user-friendly applications. You can make computers and
systems more responsive to the user’s needs, resulting in a better user experience. This goal-
oriented design makes it easier to attain your objectives. That, in turn, will lead to higher
company success, which is the most important benefit of HCI.
Increased Customer Acquisition A strong user experience helps attract and retain customers. Greater customer acquisition builds trust, which in turn helps retain customers over the longer period.
Optimize Resources, Development Time and Costs A well-designed application or Website serves end users for a longer period and helps optimize resources. Better resource utilization, in turn, reduces development time and cost. A poorly designed application, on the other hand, leads to repeated rework, which increases development cost and time.
Increased Productivity HCI supports the development of effective, user-friendly, and easy-to-use interfaces. This increases productivity and, in turn, benefits the organization’s business. By reducing errors, human-computer interaction promotes a smoother workflow for employees. A practical tip in this regard is to use light colors and highlight relevant content so that users can see the important information at a glance, helping them focus on the most pertinent information without getting distracted.
Software Success Human-Computer Interaction principles are important not only for the end user but are also an extremely high priority for software development companies. If a software product is unusable and causes frustration, no one will use the program by choice, and as a result sales will be negatively affected.
Improved Accessibility Many people live with different disabilities. HCI has helped in the development of software that can be accessed not only by non-disabled users but also by users with disabilities.
use the Website or app, they will not use the product, or they will overwhelm technical support, ballooning costs.
Connector and a Separator The interface should act as both a connector and a separator: a
connector in that it links the user to the computer’s control, and a separator in that it reduces the
risk of the participants hurting one another.
Clarity in Design Any interface’s first and most critical task is to provide clarity. People must
be able to understand an interface you have built to use it effectively. Clarity inspires confidence
and encourages continued use. One hundred uncluttered screens are superior to one cluttered
screen.
Interfaces Exist to Enable Interaction Interaction between humans and our world is facilitated by interfaces. They can help us explain, illuminate, allow, display relationships, bring us together, separate us, manage expectations, and provide access to services. The best user interfaces encourage a good connection to the world.
Consistency Screen elements do not appear to be consistent with one another unless they
behave in the same way. Elements that behave in the same way should have the same
appearance. However, unlike elements must appear unlike (inconsistent) just as much as like
elements must appear consistent. Reusing code helps maintain consistency.
Visual Order and Viewer Focus The essential and significant elements of the display must be drawn to the user’s attention at the appropriate time. It must be obvious that these items can be selected, as well as how to select them.
Effective Visual Contrast To achieve this goal, effective visual contrast between different
components of the screen is used. Sound and animation are also used to attract focus. The user
must also be given feedback.
Importance of User Interface (UI)
User Interface (UI) design is the link between users and application which includes the basic
design elements. These design elements need to be present to help the end user to navigate
through the application. It makes the relationship of the end user with the application strong and
friendly. A good UI thus enhances communication, visibility, and productivity. A user
interface is that portion of an interactive computer system that communicates with the user.
Design of the user interface includes any aspect of the system that is visible to the user. Because
the user interface design includes everything that is visible to the user, it is deeply ingrained in
the overall design of the interactive system. A good user interface cannot be added to a system
after it has been developed; it must be designed from the start. A well-designed user interface
can significantly reduce training time and improve performance. The design of a user interface
can have a significant impact on training time, performance speed, mistake rates, user happiness,
and the user’s long-term retention of operations knowledge. In the past, shoddy designs gave
way to sophisticated systems.
Design of a User Interface (UI)
Design of a User Interface or UI begins with task analysis—an understanding of the user’s
underlying tasks and the problem domain. The user interface should be designed in terms of the
user’s terminology and conception of his or her job, rather than the programmer’s. There are
several levels of design which are explained below.
Design Levels of a User Interface (UI)
It is helpful to think about the user interface at many levels of abstraction and come up with a
design and implementation for each one. This breaks down the developer’s workload into
smaller chunks, making it easier to manage. The User Interface (UI) is basically divided into the following levels:
Conceptual Level: The conceptual level describes the basic entities underlying the user’s view
of the system and the actions possible upon them.
Semantic Level: The semantic level describes the functions performed by the system. This
corresponds to a description of the functional specifications of the system, but it does not address
how the user will invoke the functions.
Syntactic Level: The syntactic level describes the input and output sequences required to
invoke the described functions. A related method is the syntactic semantic object-action model,
which separates the task and computer concepts (i.e., the semantics in the previous paragraph)
from the task syntax.
Lexical Level: The lexical level determines how raw hardware operations are transformed into
inputs and outputs.
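To make these levels more concrete, here is a minimal Python sketch (an illustration only, not part of any real system) that separates a hypothetical “delete file” action into semantic, syntactic, and lexical layers; the conceptual level (the user’s idea of files that can be removed) is left implicit, and all names are invented for the example.

# Illustrative sketch only: separating design levels for a hypothetical
# "delete file" action. All names here are invented for the example.

# Semantic level: the function the system performs, independent of how
# the user invokes it.
def delete_file(path: str) -> None:
    print(f"(semantic) deleting {path}")

# Syntactic level: the input sequence that invokes the function,
# here a typed command such as "rm notes.txt".
COMMANDS = {"rm": delete_file}

def run_command(line: str) -> None:
    verb, *args = line.split()
    COMMANDS[verb](*args)

# Lexical level: how raw hardware events (individual keystrokes) are
# assembled into the tokens that the syntactic level consumes.
def keystrokes_to_line(keys: list) -> str:
    return "".join(keys)

run_command(keystrokes_to_line(list("rm notes.txt")))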
HCI improves the user-computer experience by making machines more accessible and responsive to the user’s needs. HCI is important because it is necessary for products to be more effective, healthy, and useful, and it makes the user’s experience more pleasurable in the long run. As a result, having someone with HCI skills involved in all phases of any product or device creation is critical. HCI is often crucial to prevent products or programmes from failing entirely.
Interaction technique
An interaction technique or user interface technique is a combination of input and output
consisting of hardware and software elements that provides a way for computer users to
accomplish a simple task. For example, one can go back to the previously visited page on a Web
browser by either clicking a button, hitting a key, performing a mouse gesture or uttering a
speech command. From the computing perspective, an interaction technique involves one or several physical input devices, a piece of code that interprets user input into higher-level commands (possibly producing user feedback), and one or several physical output devices. Consider, for example, the process of deleting a file using a
contextual menu. This first requires a mouse and a screen (input/output devices). Then, a piece of
code needs to paint the contextual menu on the screen and animate the selection when the mouse
moves (user feedback). The software also needs to send a command to the file system when the
user clicks on the "delete" item (interpretation).
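As a rough sketch of those pieces working together (input device, interpreting code, feedback, output device), the following Python/tkinter example attaches a contextual menu with a “Delete” item to a right-click; the file name and the delete handler are placeholders, not a real file-system command.

# Minimal sketch of the contextual-menu interaction technique described above.
# The file name and the delete handler are placeholders.
import tkinter as tk

def delete_file() -> None:
    # Interpretation: translate the menu selection into a higher-level command.
    print("delete command would be sent to the file system here")

root = tk.Tk()
label = tk.Label(root, text="report.txt")   # drawn on the screen (output device)
label.pack(padx=40, pady=40)

menu = tk.Menu(root, tearoff=0)             # the painted menu is the user feedback
menu.add_command(label="Delete", command=delete_file)

def show_menu(event):
    # The mouse is the input device; a right-click opens the contextual menu.
    menu.tk_popup(event.x_root, event.y_root)

label.bind("<Button-3>", show_menu)         # Button-3 is the right mouse button
root.mainloop()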
CHAPTER TWO
CONCEPTUALIZE INTERACTION
In Human-Computer Interaction (HCI), a problem space refers to the complete set of goals,
tasks, processes, constraints, and requirements that need to be considered in order to design an
effective interactive system.
It includes:
Why is it Important?
Example:
Problem Space: Users want to browse food menus, place orders, and make payments
easily using their smartphones within minutes, with secure payment options and reliable
delivery tracking.
A conceptual model is a high-level description of how a system is organized and operates from
the user’s perspective. It bridges the gap between system functionalities and user expectations.
These models are structured around the tasks or operations users need to perform.
Key Features:
Task-centered
Supports workflows and sequences of actions
Focuses on how users accomplish goals
Examples:
Benefits:
These models focus on the data or entities that the system manipulates, rather than the actions
performed.
Key Features:
Object-oriented
Supports data management and relationships
Emphasizes interaction with components
Examples:
Benefits:
A. Interface Metaphors
An interface metaphor uses familiar concepts or objects from the real world to help users
understand and navigate a digital interface.
Purpose:
Simplifies user learning
Provides predictability in interactions
Enhances usability through familiarity
Common Types:
Benefits:
Limitations:
B. Interaction Paradigms
An interaction paradigm defines a general model or approach to how users interact with a
computer system. It sets the tone for designing interaction strategies and interface elements.
Choosing a Paradigm:
Influenced by device type (desktop vs. mobile)
Informed by design goals (efficiency vs. ease of use)
CHAPTER THREE
COGNITION IN HCI
Cognition refers to the mental processes involved in acquiring knowledge and understanding
through thought, experience, and the senses. These processes include attention, perception,
memory, reasoning, problem-solving, and decision-making.
In HCI, cognition plays a central role in how users interact with technology. An understanding of
cognitive processes allows designers to create systems that support rather than hinder human
thought and behavior.
Mental Models
Information Processing
External Cognition
A. MENTAL MODELS
A mental model is the user’s internal representation or understanding of how a system works.
These models are formed through experience, instruction, and interaction with similar systems.
📌 Key Characteristics:
Example:
When using a microwave, users may think it works like an oven (heat comes from coils), when
in fact it uses electromagnetic waves. Their mental model affects how they set time and power
levels.
Relevance to HCI:
Interfaces should align with users’ mental models to reduce learning curves.
Consistency in layout and function reinforces accurate mental models.
Unexpected behavior (e.g., unclear error messages) can lead to confusion and frustration.
Design Implications:
B. INFORMATION PROCESSING
This framework likens the human brain to a computer, processing input data into usable output through a series of cognitive stages.
Stage – Description
Sensory Memory – Briefly stores raw sensory input (visual, auditory, etc.)
Perception – Filters and interprets sensory data into meaningful patterns
Working Memory (STM) – Temporarily holds and manipulates information for current tasks
Long-Term Memory (LTM) – Stores knowledge and experiences for future retrieval
Response Execution – Executes actions based on decisions and input
Example in HCI:
Design Tips:
C. EXTERNAL COGNITION
External cognition refers to how people use the environment—especially external representations
such as tools, notes, diagrams, or screens—to support and extend their thinking.
Tool/Medium – Function
Notes or To-Do Lists – Offload memory, organize thoughts
Diagrams/Flowcharts – Visualize relationships and processes
Maps – Support spatial reasoning
GUIs & Dashboards – Help users monitor and interact with data
Example:
Using a calendar app to track deadlines allows users to externalize their scheduling rather than
remember each date.
Benefits:
Relevance in HCI:
Tools and interfaces should enhance users' ability to think and reason.
Use visual hierarchies and layout to make information digestible.
Encourage users to externalize tasks (e.g., reminders, bookmarks, breadcrumbs).
Design Considerations:
Conceptual frameworks help to organize our understanding of how users think and interact with
systems. In HCI, the three primary cognitive frameworks are:
A. Mental Models
B. Information Processing
C. External Cognition
A. Mental Models
Mental models are internal representations of how users believe a system works. These models
are built from prior experiences, observation, and instruction. They allow users to predict the
outcomes of their actions when interacting with systems.
Example:
If a user believes the "Trash Bin" on a desktop permanently deletes files (instead of moving them
for later recovery), they may avoid using it, based on their mental model.
Design Implications:
B. Information Processing
The Information Processing Model describes human cognition as a sequence of steps through
which information flows — similar to how a computer processes data.
Stage – Description
Sensory Memory – Captures raw sensory input briefly
Perception – Interprets the sensory input into meaningful information
Working Memory – Temporarily holds information for immediate tasks
Long-Term Memory – Stores experiences and knowledge for future use
Response Execution – User takes action based on processed information
Example:
A user reads a confirmation dialog box. They perceive the text, hold the information in working
memory, compare it with their goal, then click “OK” or “Cancel.”
🧠 Design Implications:
C. External Cognition
External cognition describes how users utilize physical tools, symbols, and representations
outside the mind to support their thinking.
Design Implications:
Informal design is the process of applying theoretical knowledge — especially from cognitive
psychology — into real-world system development without strict adherence to formal models or
engineering processes.
In HCI, understanding cognition allows designers to apply design principles that intuitively
support users’ mental and perceptual processes.
User personas and scenarios (to model mental models)
Sketches and wireframes (to externalize cognitive processing)
Heuristic evaluations (based on cognitive heuristics like visibility, feedback, consistency)
📌 Example:
Designers building a mobile banking app consider that users may forget their transaction history
(limited memory). So, they add a “Recent Transactions” panel, aligning with both information
processing theory and the need for external cognition.
CHAPTER FOUR
COLLABORATION AND COMMUNICATION
Human interaction with computers increasingly involves not just individual use, but also
collaborative work. Social mechanisms are the frameworks, norms, and behaviors that allow
people to interact and coordinate effectively.
1. Turn-Taking
This refers to the structured pattern in which people take turns while speaking or interacting.
Design Implication: In collaborative software (e.g., Zoom, Google Docs), clear visual cues
(like a raised-hand icon) support orderly interaction.
2. Awareness
The knowledge of who is doing what in a shared system.
Design Implication: Presence indicators and real-time updates (e.g., “John is editing cell B3”) in collaborative tools; a minimal sketch of broadcasting such an update appears after this list.
3. Shared Context
Participants need a mutual understanding of the task and environment.
Design Implication: Interfaces should provide shared workspaces or annotations that
maintain a common frame of reference.
4. Grounding
The process of establishing shared knowledge or agreement during communication.
Design Implication: Chat confirmations, color-coded comments, and notifications that help
users reach agreement.
5. Coordination
The management of interdependent tasks, timing, and resources.
Design Implication: Features like task assignment, shared calendars, and synchronized
editing.
6. Example in Technology:
Slack or Microsoft Teams supports turn-taking, grounding (via threads), and shared context
(via channels and file sharing).
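As a minimal sketch of the awareness mechanism in point 2, the Python code below broadcasts “who is editing what” events to every subscribed collaborator through a simple in-memory publish/subscribe channel; the event fields and class names are assumptions made for illustration.

# Minimal in-memory sketch of presence/awareness updates in a collaborative tool.
# The event structure and names are illustrative assumptions.
from typing import Callable, Dict, List

Subscriber = Callable[[Dict[str, str]], None]

class PresenceChannel:
    """Broadcasts awareness events such as "John is editing cell B3"."""

    def __init__(self) -> None:
        self.subscribers: List[Subscriber] = []

    def subscribe(self, callback: Subscriber) -> None:
        self.subscribers.append(callback)

    def publish(self, event: Dict[str, str]) -> None:
        for callback in self.subscribers:
            callback(event)

def render_presence(event: Dict[str, str]) -> None:
    # In a real tool this would update a presence indicator in the interface.
    print(f'{event["user"]} is editing {event["location"]}')

channel = PresenceChannel()
channel.subscribe(render_presence)
channel.publish({"user": "John", "location": "cell B3"})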
Ethnography in HCI is a qualitative research method used to study how people use technology in
their natural settings. It provides deep insight into the social and organizational context of users.
Key Ethnographic Issues Include:
1. Context of Use
– Understanding the environment in which collaboration occurs (e.g., office, factory,
hospital).
– Ethnographers observe how people really work, not how they say they work.
2. Unspoken Practices
– Many collaborative behaviors are implicit or informal.
– Ethnography helps uncover these practices (e.g., body language, shared jokes, rituals).
3. Workarounds
– Users often adapt or modify systems to fit their needs.
– Studying these practices reveals design flaws or opportunities.
4. Communication Patterns
– Who communicates with whom, when, and how.
– Ethnography tracks verbal and non-verbal exchanges to understand collaboration flow.
Example:
An ethnographic study in a hospital might observe how nurses share information during shift
changes—possibly revealing that much coordination is done informally, leading to
improvements in handoff software.
Language Framework
Language is central to human interaction. In HCI, understanding the structure and function of
language helps improve command languages, user interfaces, and error messages.
Concepts:
Natural Language: Speech or text as used in daily communication (e.g., voice assistants
like Siri).
Command Language: Syntax and rules used to issue instructions to the system (e.g.,
terminal commands).
Controlled Vocabulary: A restricted set of terms used to minimize ambiguity (e.g., in
menu selections).
Design Implication:
Interfaces should align with users' linguistic expectations. For instance, a voice-enabled ATM
should recognize common banking terms like "balance" or "transfer."
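A minimal sketch of the controlled-vocabulary idea applied to the voice-enabled ATM example: a handful of banking terms and synonyms are mapped to canonical intents. The specific vocabulary and intent names are assumptions for illustration.

# Minimal sketch: a small controlled vocabulary mapping banking terms
# (and common synonyms) to canonical intents. Terms are illustrative.
CONTROLLED_VOCABULARY = {
    "balance": "CHECK_BALANCE",
    "how much": "CHECK_BALANCE",
    "transfer": "TRANSFER_FUNDS",
    "send money": "TRANSFER_FUNDS",
    "withdraw": "WITHDRAW_CASH",
}

def interpret(utterance: str) -> str:
    """Return the canonical intent for an utterance, or UNKNOWN."""
    text = utterance.lower()
    for term, intent in CONTROLLED_VOCABULARY.items():
        if term in text:
            return intent
    return "UNKNOWN"

print(interpret("What is my balance?"))    # CHECK_BALANCE
print(interpret("I want to send money"))   # TRANSFER_FUNDS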
B. Distributed Cognition
Distributed cognition is the theory that cognitive processes are not confined to an individual’s
mind but are spread across people, tools, artifacts, and environments.
Example:
Design Implications:
CHAPTER FIVE
AFFECTIVE AND EXPRESSIVE INTERFACE
Affective and expressive interfaces are elements of user interface design that go beyond mere
functionality to consider the emotional responses and feelings of users. The term "affective"
pertains to emotions and moods, while "expressive" relates to how those emotions are conveyed
or perceived through the interface. These interfaces are crucial in enhancing user engagement,
satisfaction, and overall user experience.
Affective interfaces aim to detect, interpret, and respond to users’ emotional states. This can be
done using a variety of technologies such as emotion recognition software, facial expression
analysis, voice tone detection, and physiological sensors. For example, an e-learning application
might detect when a user is frustrated or confused based on their facial expressions or time taken
on a question and adapt its responses accordingly—perhaps offering hints, encouragement, or
changing the difficulty level.
Expressive interfaces, on the other hand, focus on the system’s ability to project emotions or
personality traits. These are often implemented using avatars, emotive icons, animations, sound
effects, or tone of voice in speech interfaces. Expressive elements make interactions more
human-like and relatable, helping users to form a more natural and engaging relationship with
the system. For instance, a virtual assistant that uses cheerful language and animations can make
an application feel more friendly and welcoming.
Designing affective and expressive interfaces involves understanding the psychological and
social dimensions of human emotion. It also requires careful balancing to avoid unintended
reactions. Overly expressive systems may come across as annoying or distracting, while poorly
implemented affective responses may seem insincere or robotic. Therefore, emotional design
must be based on thorough research, testing, and an understanding of the cultural and individual
variability in emotional expression.
One of the key benefits of anthropomorphism in interaction design is its ability to reduce the
learning curve associated with new technologies. When systems behave in familiar, human-like
ways, users find it easier to predict their responses and understand how to interact with them.
This is particularly beneficial in contexts where users may have low technical skills, such as
elderly users or children.
Anthropomorphic design can also increase emotional engagement and trust. For instance, users
may feel more comfortable and supported when interacting with a healthcare robot that exhibits
empathetic behaviors, such as nodding, smiling, or using a soothing tone. Similarly, customer
service chatbots that mimic polite human dialogue can lead to more positive user experiences.
Virtual characters and agents are computer-generated entities designed to simulate human-like
interaction within a digital environment. They are widely used in gaming, education, customer
service, training simulations, and virtual reality (VR) environments.
A virtual character is typically an animated figure that may represent a human, animal, or
fictional creature. These characters can display gestures, facial expressions, and voice outputs to
simulate realistic communication. They often function as avatars, guides, or participants within a
digital space, helping to create immersive and interactive user experiences. For example, in a
language learning application, a virtual tutor might provide instructions, feedback, and
encouragement to learners.
A virtual agent, on the other hand, is an autonomous software entity capable of perceiving its
environment, making decisions, and taking actions to achieve specific goals. Virtual agents are
often embedded with artificial intelligence (AI) and natural language processing (NLP)
capabilities that allow them to understand user input and generate appropriate responses. Virtual
agents may or may not have a visual representation (i.e., they can be embodied or disembodied).
For instance, a virtual customer support agent on a website may answer user queries through a
chat interface without having a visible avatar.
These characters and agents can be designed with affective and expressive capabilities to further
humanize interactions. For example, an agent in a virtual therapy app might be programmed to
recognize signs of user distress and respond with empathetic language and tone. Similarly,
virtual characters in educational games can use facial expressions and body language to maintain
learner engagement and convey emotions like excitement or disappointment.
Virtual characters and agents are digital entities that interact with users in human-like or animal-
like ways within software applications, websites, simulations, video games, educational
environments, and artificial intelligence (AI) systems. They are often powered by AI
technologies and designed to perform tasks, assist users, or create immersive environments by
simulating behavior, speech, and emotional expression.
They play a key role in Human-Computer Interaction (HCI), making interactions more engaging,
personal, and natural.
There are two primary kinds of virtual characters commonly used in interaction design and
intelligent systems:
1. Synthetic Characters
Synthetic characters are computer-generated personas that simulate lifelike human or animal
behavior. They may include facial expressions, gestures, emotions, and conversational skills.
These characters are often embedded in simulations, video games, virtual environments, or social
robots.
Characteristics:
Often have human-like features such as faces, voices, and body language.
May use Natural Language Processing (NLP) to converse with users.
Designed to appear intelligent and emotionally responsive.
Can be embodied (physically represented, e.g., in robots) or disembodied (on screens or
in virtual spaces).
Examples:
Benefits:
Challenges:
2. Animated Agents
Animated agents are computer-generated characters with movement and graphical animations
that perform specific roles in software environments. They can move, point, talk, express
emotions, or demonstrate tasks using pre-programmed or AI-driven animation sequences.
Characteristics:
Examples:
Benefits:
Challenges:
Emotional Agents
Embodied Conversational Interface Agents (ECIAs)
Both types aim to enhance user experience by making computer systems more intuitive,
interactive, and emotionally responsive.
Emotional Agents
Emotional agents are intelligent software entities or virtual characters that are capable of
recognizing, processing, simulating, and sometimes responding to emotional states. Their
purpose is to simulate emotions or emotional responses to create more natural, empathetic, and
human-like interactions.
Key Characteristics:
1. Emotion Modeling: Emotional agents often include internal models of emotion (e.g.,
using psychological theories like the OCC model—Ortony, Clore, Collins) to generate or
simulate emotional responses to environmental stimuli or user input.
2. Affective Computing: These agents use affective computing techniques to detect users' emotions via facial recognition, voice tone, or physiological signals and adjust their behavior accordingly (a small sketch of this idea follows this list).
3. Expressiveness: Emotional agents are designed to express emotions through facial
expressions, tone of voice, body language, or language choice.
4. Purpose: Emotional agents are particularly useful in environments where empathy,
engagement, or user motivation is important (e.g., education, therapy, customer service,
gaming).
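As a toy sketch of the affective-computing idea in point 2 (not an implementation of the OCC model), the code below maps an already-detected emotional state to an adapted response; the state labels and responses are assumptions made for illustration.

# Toy sketch only: an emotional agent adapting its response to a detected
# user state. Detecting the state (face, voice, sensors) is out of scope here.
RESPONSES = {
    "frustrated": "It looks like this step is tricky. Would you like a hint?",
    "confused": "Let me rephrase the instructions more simply.",
    "engaged": "Great progress! Here is a slightly harder challenge.",
}

def respond_to(detected_state: str) -> str:
    # Fall back to a neutral reply when the state is unrecognized.
    return RESPONSES.get(detected_state, "How can I help you?")

print(respond_to("frustrated"))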
Applications:
Benefits:
Challenges:
Embodied Conversational Interface Agents are virtual agents with a visual presence (a digital
body or avatar) that can engage in interactive dialogue with users through natural language,
gestures, facial expressions, and body movements. These agents combine conversational
capabilities with physical embodiment, making interactions more natural and engaging.
Key Characteristics:
1. Embodiment: Unlike disembodied voice assistants (e.g., Siri, Alexa), ECIAs have a
visible form—either 2D or 3D avatars—that simulate human-like appearance and
behavior.
2. Multimodal Communication: ECIAs use a combination of text, voice, facial expressions,
gestures, and body language to communicate.
3. Dialogue Management: They possess conversational engines that allow for multi-turn,
context-sensitive dialogues.
4. Personality & Presence: These agents often have defined personalities and behavioral
traits to enhance believability and user connection.
5. Contextual Awareness: ECIAs may be equipped with sensors or data inputs to recognize
environmental or user context, enhancing relevance in interactions.
Applications:
Benefits:
Challenges:
User frustration refers to the negative emotional response that occurs when users encounter
obstacles while interacting with a system or application. These obstacles might be technical (e.g.,
software errors), cognitive (e.g., complex interfaces), or emotional (e.g., feeling ignored or
confused). Frustration can lead to decreased productivity, user dissatisfaction, abandonment of
the system, or negative reviews. Understanding and mitigating user frustration is a key goal of
effective interaction design.
Common Causes of User Frustration
Designers, developers, and UX professionals can take several steps to minimize user frustration
and improve user experience:
Empathize with users through research, interviews, surveys, and usability testing.
Involve users early in the design process to understand their goals, preferences, and pain
points.
Create personas and scenarios to guide design decisions based on real-world needs.
Ensure that interfaces are intuitive, with clear visual hierarchy and consistent design
patterns.
Use meaningful labels, familiar icons, and logical groupings to help users easily find and
understand features.
Minimize cognitive load by simplifying tasks and eliminating unnecessary steps.
Always inform users of system status (e.g., loading indicators, progress bars).
Provide confirmations for successful actions and clear, constructive error messages when
something goes wrong.
Offer visual or auditory cues when users interact with elements (e.g., button highlights,
sound effects).
Design helpful and non-technical error messages that suggest corrective actions (e.g.,
"Please check your internet connection" instead of "Error 503").
Implement undo and redo features so users can recover from mistakes (see the sketch after this list).
Use preventative design by disabling invalid inputs or providing suggestions (e.g.,
autocomplete, dropdowns).
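A minimal sketch of the undo/redo recommendation above, using two stacks of reversible actions; the (do, undo) action format is an assumption chosen for the example.

# Minimal undo/redo sketch using two stacks of reversible actions.
# Each action is a (do, undo) pair of callables; the format is illustrative.
from typing import Callable, List, Tuple

Action = Tuple[Callable[[], None], Callable[[], None]]

undo_stack: List[Action] = []
redo_stack: List[Action] = []

def perform(action: Action) -> None:
    action[0]()                 # carry out the action
    undo_stack.append(action)
    redo_stack.clear()          # a new action invalidates the redo history

def undo() -> None:
    if undo_stack:
        action = undo_stack.pop()
        action[1]()             # run the compensating step
        redo_stack.append(action)

def redo() -> None:
    if redo_stack:
        action = redo_stack.pop()
        action[0]()
        undo_stack.append(action)

# Example: "typing" a word and recovering from it.
perform((lambda: print("type 'hello'"), lambda: print("remove 'hello'")))
undo()
redo()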
5. Enhance Performance
6. Maintain Consistency
Use consistent layout, color schemes, navigation structures, and language throughout the
interface.
Ensure that similar actions produce similar results to avoid confusion.
Provide onboarding tutorials, tooltips, and help documentation to assist new users.
Offer in-app guidance or chat support for real-time help.
Use FAQs and forums to empower users to solve common problems independently.
1. Improves Emotional Engagement and Trust
One of the most compelling justifications for using anthropomorphism is that it makes systems
appear more "human-like," thus improving emotional connection. When users interact with
systems that smile, respond politely, or speak in a conversational tone, they tend to feel more at
ease and develop a level of trust, which can encourage continued use and loyalty.
Example:
Virtual assistants like Apple’s Siri or Amazon’s Alexa use human-like voices, tones, and
personalities. These traits make interactions more pleasant and relatable, fostering a sense
of companionship or helpfulness.
Anthropomorphic elements help users understand how a system functions by tapping into
familiar social behaviors. When systems behave like humans—providing feedback, making eye
contact (in the case of robots or avatars), or even using gestures—users can draw on their
existing social knowledge to navigate the interface more intuitively.
Example:
A chatbot that uses friendly greetings and conversational turn-taking makes it easier for
users to understand how to interact with the system, even without prior training.
Interacting with machines can be intimidating, especially for less tech-savvy users. By giving the
system a human face, voice, or behavior, anthropomorphism reduces the psychological distance
between humans and technology. This can decrease user anxiety, especially in high-stress
environments such as customer service, healthcare, or educational platforms.
Example:
Anthropomorphism enables the design of systems that support natural language processing,
gesture recognition, and expressive communication. This allows users to interact with the system
as they would with another person, removing the need for complex commands or technical
jargon.
Example:
Voice-enabled systems like Google Assistant allow users to ask questions or give
commands in everyday language rather than typing structured queries or navigating
menus.
Example:
A system that encounters an error might say: “Oops, something went wrong. Let me try
that again for you,” which is more comforting than a cold technical error code.
6. Increases Accessibility
For users with disabilities, anthropomorphic features such as speech interfaces, animated avatars,
or virtual agents can provide a more accessible and inclusive user experience. These features can
substitute for traditional text-based interfaces and cater to users with visual, motor, or cognitive
impairments.
Example:
An assistive robot with facial expressions and voice output can help guide elderly users
through medication reminders or emergency instructions more effectively than plain text
notifications.
Anthropomorphism can influence users to exhibit socially desirable behaviors such as politeness,
patience, and attention to feedback. These behaviors, in turn, can enhance the quality and
efficiency of human-computer interactions.
Example:
Users are more likely to say “please” and “thank you” when interacting with voice
assistants, creating a respectful and pleasant interaction loop.
Avoid deception: Users should not be misled into believing the system has real emotions
or consciousness.
Maintain functionality: Aesthetic and emotional appeal should not come at the cost of
system performance or efficiency.
Respect user preferences: Not all users appreciate anthropomorphism—especially in
professional or utilitarian contexts.
Designing virtual characters goes beyond technical implementation; it involves ensuring that the
characters evoke trust, relatability, and utility from users. A poorly designed character can
diminish user engagement, while a well-crafted one can enhance interaction, learning, or
entertainment. The four major design concerns are:
1. Believability
Believability refers to how convincingly a virtual character mimics human or lifelike qualities in
its interaction, expression, and responsiveness. A believable character is one that users perceive
as consistent, emotionally aware, and contextually appropriate—even if they know it’s artificial.
Consistency: The character should behave consistently with its personality, role, and
context. Erratic or unpredictable behavior breaks the illusion.
Intentionality: The character’s actions should appear purposeful rather than random or
robotic.
Emotional Responsiveness: The character should respond appropriately to user input or
environmental changes (e.g., smiling in response to praise).
Personality: Embedding distinct traits and conversational tone helps users relate to the
character and feel more immersed in the interaction.
Suspension of Disbelief: The goal is not necessarily to convince the user that the
character is human, but to make them emotionally and cognitively engage with it as if it
were.
Challenges:
Avoiding the “Uncanny Valley,” where characters appear almost human but elicit
discomfort.
Balancing realism with computational limitations.
2. Appearance
Appearance deals with the visual design and representation of virtual characters. This includes
their form, attire, facial expressions, and overall aesthetic styling. Appearance is often the first
thing users notice and plays a major role in shaping expectations.
Key Considerations:
Stylization vs. Realism: Some applications benefit from cartoon-like avatars (e.g.,
education, games), while others may need realistic avatars (e.g., medical simulations).
Cultural Sensitivity: Appearance should reflect or respect the cultural background and
expectations of the user audience.
Expressiveness: Facial features and body postures must be capable of conveying a wide
range of emotions and reactions.
Gender and Diversity: Characters should be inclusive, avoid stereotypes, and represent
diverse user groups when necessary.
Adaptive Appearance: In some cases, allowing users to customize the character's look
enhances identification and comfort.
Challenges:
3. Behavior
Behavior is how the virtual character acts and reacts in response to stimuli, including user
interaction, environmental events, or internal programming. This includes verbal
communication, gestures, emotions, and task execution.
Key Components:
Social Norms: Characters should respect social cues such as turn-taking in conversation,
personal space, and polite expressions.
Context Awareness: Behavior should adapt to different situations (e.g., professional tone
in business settings, friendly tone in games).
Animation and Motion: Movements should be smooth and meaningful. Sudden or
unnatural motion can break immersion.
Goal-Oriented Actions: Characters should exhibit behavior aligned with their intended
function (e.g., a teaching agent patiently guides; a security agent strictly enforces rules).
Feedback Mechanisms: Behaviors that show understanding or confirmation of user input
(e.g., nodding, saying “I understand”) improve communication.
Challenges:
4. Mode of Interaction
Mode of interaction refers to how users communicate or engage with the virtual character. It
defines the channels of input/output and the responsiveness of the character.
Types of Interaction Modes:
Text-based Interaction: Characters interact via written text in chat windows—suitable for
constrained environments.
Voice Interaction: Allows for natural and hands-free interaction, ideal for accessibility
and realism.
Gesture-based Interaction: Using body movement or hand gestures, often in VR/AR
environments.
Touch or Click Interfaces: Users interact through selecting options or dragging/dropping
on-screen items.
Multimodal Interaction: Combining multiple modes (e.g., voice + gesture + facial
recognition) for rich user experiences.
Design Considerations:
Accessibility: Interaction modes should accommodate users with different abilities (e.g.,
hearing or vision impairments).
Simplicity: Avoid overcomplicating the interface—interactions should be intuitive and
user-friendly.
Latency and Responsiveness: The system should respond quickly to user actions to avoid
confusion or frustration.
Consistency: The interaction pattern should be predictable and align with user
expectations based on real-world experiences.
Challenges:
CHAPTER SIX
PROCESS OF INTERACTION DESIGN
1. Waterfall Model
The Waterfall Model is one of the earliest process models introduced in software engineering. It
is a linear sequential model that divides software development into distinct phases.
Phases include:
Requirement Analysis
System Design
Implementation (coding)
Testing
Deployment
Maintenance
In the context of interaction design, this model assumes that user requirements can be fully
captured upfront, and the design process flows in a one-directional manner—like a waterfall.
Advantages:
Disadvantages:
2. Iterative Model
The Iterative Model focuses on the cyclic nature of design, development, and evaluation. It
emphasizes refining and improving the product through successive versions.
Key Characteristics:
Advantages:
Disadvantages:
3. Spiral Model
The Spiral Model combines the ideas of the iterative approach with risk analysis. It is best suited
for large, high-risk projects. Each loop in the spiral represents a development phase and passes through four quadrants:
Planning
Risk Analysis
Engineering (design & development)
Evaluation
Key Features:
Advantages:
Disadvantages:
Life cycle models guide the process of software and interface development. While software
engineering models focus on technical aspects, HCI life cycle models center on user involvement
and usability.
Key Life Cycle Models:
Waterfall Model
V-Model (Verification and Validation)
Agile Development Model
Spiral Model
Incremental Model
HCI life cycles include steps to understand users and tasks, design solutions, and evaluate
usability.
Common Phases:
Differences:
User testing is a process of evaluating a product by testing it with real users. It helps uncover
usability problems and areas of improvement.
a) Usability Testing:
b) A/B Testing:
Comparing two versions (Version A vs. Version B) to see which performs better based
on specific metrics.
c) Remote Testing:
Goals:
Benefits:
These are techniques used to evaluate the design and performance of an interactive system.
Formative Evaluation
Formative evaluation is a crucial part of the design and development process, especially in
human-computer interaction (HCI) and software development. It is conducted during the early or
middle phases of a project to gather feedback that helps improve the design before the final
product is completed. Unlike summative evaluation, which measures the effectiveness of a
finished product, formative evaluation focuses on identifying usability problems and design
flaws while there is still flexibility to make changes.
This type of evaluation often involves real users interacting with prototypes, mockups, or early
versions of the system. The feedback collected is typically qualitative, focusing on understanding
where users struggle, what confuses them, and how intuitive the interface is. Common
techniques used in formative evaluation include think-aloud protocols, where users verbalize
their thoughts while using the system, heuristic evaluations conducted by usability experts, and
cognitive walkthroughs that simulate user problem-solving processes.
Formative evaluation encourages iterative design, where each cycle of testing informs
subsequent revisions. This continuous feedback loop helps designers create more user-friendly
interfaces and ensures that the system aligns with user needs and expectations. Because
formative evaluation is flexible and informal, it can be adapted to various stages of development
and different types of systems. Ultimately, it helps to prevent costly redesigns later by catching
problems early.
Methods:
Think-aloud protocol
Paper prototyping
Heuristic evaluation
Field Study
Field studies provide an in-depth understanding of how users interact with technology in their
natural environments. Instead of controlling or simulating the environment, field studies observe
real users performing their tasks where the system will actually be used—whether that be in a
workplace, home, or public space. This contextual approach allows designers and researchers to
gain insights into how environmental factors, social dynamics, and physical conditions affect
user behavior.
One of the major strengths of field studies is their ability to reveal the complexities and nuances
of real-world usage that laboratory tests may miss. For instance, a user might behave differently
when under time pressure, when distracted by coworkers, or when using the system alongside
other tools. Field studies often involve ethnographic methods such as participant observation,
interviews, diary studies, or video recordings that document natural interactions over time.
Because of their immersive nature, field studies can uncover unexpected needs and challenges,
making them invaluable for designing systems that fit seamlessly into users’ workflows and
lifestyles. They also provide insights into user attitudes, motivations, and the broader social and
organizational context, which can significantly impact technology adoption and success.
Example:
Benefits:
Controlled Experiment
Controlled experiments are a fundamental scientific method used in HCI to rigorously test
hypotheses about design choices, interface features, or system performance. Unlike formative
evaluation and field studies, controlled experiments take place in a carefully managed
environment where variables can be manipulated, and external factors minimized. This
controlled setting allows researchers to isolate specific factors and measure their direct impact on
user performance or satisfaction.
This approach yields quantitative data that can be statistically analyzed to determine which design alternatives
perform better and why.
Controlled experiments are especially useful when precise, objective comparisons are needed—
such as deciding between two competing interface layouts or testing the effectiveness of a new
feature. The results provide strong evidence for making design decisions and are often used to
validate findings from more exploratory methods like formative evaluations and field studies.
However, because experiments are conducted in artificial settings, their findings may sometimes
lack ecological validity; that is, they might not fully capture how users behave in real-world
contexts.
Example:
Testing two menu designs to determine which leads to faster task completion.
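Continuing the menu example, one common analysis (assuming a between-subjects design and roughly normal completion times) is an independent-samples t-test; the timing values below are made up purely for illustration.

# Illustrative analysis of the menu-design example: compare task completion
# times (in seconds) for two designs. The numbers are made up for illustration.
from scipy import stats

menu_a = [12.1, 10.8, 13.4, 11.9, 12.6, 10.2, 13.0, 11.5]
menu_b = [14.3, 15.1, 13.8, 14.9, 16.0, 13.5, 15.4, 14.2]

t_stat, p_value = stats.ttest_ind(menu_a, menu_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference in completion time is statistically significant.")
else:
    print("No significant difference detected at the 0.05 level.")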
Benefits:
When designing and testing tasks within interactive systems, several fundamental issues must be
carefully considered to ensure that the tasks are realistic, meaningful, and useful for evaluating
the system’s usability. These issues revolve around defining tasks that truly represent user goals,
ensuring tasks are measurable, and creating test environments that yield valid and reliable data.
Here are some of the key issues:
1. Task Representativeness
One of the first and most important issues in task design is ensuring that the tasks chosen for
testing are representative of the real-world activities users will perform with the system. Tasks
should reflect actual user goals, workflows, and contexts rather than artificial or trivial actions
that may not reveal meaningful usability insights.
If tasks do not closely mirror real usage, the results of testing may not generalize well to actual
user experience. For example, testing a word processor with simple text entry tasks may miss
critical issues related to formatting or collaboration that users encounter daily. Designers must
therefore analyze user requirements and domain activities carefully to define representative tasks
that capture the diversity and complexity of real-world usage.
Balancing the complexity and scope of test tasks is another essential concern. Tasks that are too
simple may fail to reveal meaningful usability problems because they don’t challenge the system
or the user enough. On the other hand, tasks that are too complex can overwhelm users or
introduce confounding factors, making it difficult to isolate specific usability issues.
Choosing tasks of varying complexity allows evaluators to test different aspects of system
performance, from basic operations to more advanced or combined functions. It is also important
to define clear start and end points for tasks, so performance can be reliably measured and
compared.
Tasks must be clearly and unambiguously defined so that users understand what they are
expected to do during testing. Poorly worded or ambiguous instructions can confuse participants
and skew results. Providing written instructions, demonstration, or training beforehand can help
users focus on task performance rather than figuring out what the task entails.
Moreover, the wording should avoid biasing users toward particular strategies or solutions.
Instructions should be neutral, encouraging natural interaction rather than leading users toward
expected outcomes.
Another crucial issue is deciding how to measure task performance effectively. Common
measures include task completion time, error rates, success/failure rates, and subjective
satisfaction ratings. Selecting appropriate metrics depends on the goals of the evaluation. For
instance, speed may be more critical in time-sensitive applications, while accuracy might be
paramount in safety-critical systems.
In addition, evaluators should consider collecting qualitative data such as user comments,
observations, or think-aloud protocols to supplement quantitative measures. This combined
approach provides a richer understanding of where and why users face difficulties.
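As a small sketch of combining these measures, the function below computes completion rate, mean completion time, and mean error count from a list of logged test sessions; the session fields are assumed for illustration.

# Minimal sketch: summarizing common usability metrics from logged sessions.
# The session fields (completed, seconds, errors) are illustrative assumptions.
from statistics import mean

sessions = [
    {"completed": True,  "seconds": 48.0, "errors": 1},
    {"completed": True,  "seconds": 62.5, "errors": 0},
    {"completed": False, "seconds": 90.0, "errors": 4},
]

def summarize(sessions: list) -> dict:
    completed = [s for s in sessions if s["completed"]]
    return {
        "completion_rate": len(completed) / len(sessions),
        "mean_time_s": mean(s["seconds"] for s in completed) if completed else None,
        "mean_errors": mean(s["errors"] for s in sessions),
    }

print(summarize(sessions))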
The environment in which task testing occurs can significantly affect user performance.
Laboratory settings provide control but may lack ecological validity because they do not
replicate real-world distractions, interruptions, or multitasking conditions. Conversely, field
testing captures natural context but introduces variability that can complicate data interpretation.
Designers must decide the appropriate balance between control and realism, considering factors
such as lighting, noise, device types, and user state (e.g., fatigue or stress) when planning task
tests. Where possible, simulating typical user environments helps ensure findings are applicable
in practice.
Users differ in their experience levels, cognitive abilities, preferences, and goals. Thus, tasks
should be tested with a diverse user population to ensure the system accommodates a broad
range of needs. Additionally, tasks might need to vary in ways that reflect this diversity, such as
offering multiple ways to accomplish the same goal or providing customizable options.
Ignoring user diversity risks designing systems that work well only for a narrow segment,
limiting usability for others. Inclusive task design and testing help create systems that are robust
and accessible.
When multiple tasks are tested in one session, the order in which tasks are presented can
influence performance due to learning or fatigue effects. Users may improve simply because they
become more familiar with the system (learning effect), or their performance may decline due to
tiredness or boredom.
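A common way to manage these ordering effects is to counterbalance task order across participants. The sketch below simply rotates a task list per participant (a Latin-square-style rotation); the task names are placeholders.

# Minimal counterbalancing sketch: rotate the task order for each participant
# so that no single ordering dominates the results. Task names are placeholders.
TASKS = ["search", "compose", "share"]

def order_for(participant_index: int, tasks=TASKS) -> list:
    shift = participant_index % len(tasks)
    return tasks[shift:] + tasks[:shift]

for p in range(4):
    print(f"Participant {p + 1}: {order_for(p)}")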
Finally, ethical considerations such as informed consent, privacy, and minimizing user stress
must be addressed in task design and testing. Tasks should not cause undue frustration or
discomfort. Practical constraints, including time limits, resource availability, and technical
limitations, also impact how tasks are designed and conducted.
Evaluators must strike a balance between thoroughness and feasibility, ensuring testing sessions
are productive yet respectful of participants’ time and well-being.
Domain analysis involves researching and understanding the environment, needs, tasks, tools,
and behaviors of target users. It is a critical first step in user-centered design.
Purpose:
Ensure that the design aligns with the actual work and goals of users.
Reduce assumptions and guesswork in design.
a) User Goals:
What are users trying to accomplish?
Are their goals strategic (long-term) or tactical (short-term)?
b) User Tasks:
c) Work Environment:
Benefits:
Selecting typical users and their corresponding domains is a crucial step in designing and
evaluating interactive systems. The success of user-centered design and usability testing largely
depends on how well the chosen users represent the actual user base and how accurately their
real-world environments and tasks (domains) are understood and incorporated. Several
fundamental issues arise in this selection process:
One of the primary issues is to ensure the users selected for design or testing represent the broad
population of end-users. “Typical users” means those who are most likely to use the system
regularly and whose needs, skills, and behaviors reflect the diversity of the user base.
User Diversity: Users vary widely in experience, expertise, age, cultural background,
disabilities, cognitive abilities, and motivations. Selecting only expert users or those
familiar with technology can bias the results and lead to designs that exclude novices or
other groups.
User Roles: In many systems, different users have different roles (e.g., administrators,
end-users, technicians), each with distinct tasks and requirements. Selecting users across
these roles ensures that the system accommodates various perspectives and needs.
The domain refers to the real-world environment, context, and tasks in which users operate the
system. Accurately capturing this domain is essential for meaningful design and evaluation.
Domain Complexity: Domains may range from simple (e.g., using a calculator) to
highly complex (e.g., air traffic control, medical diagnosis). Understanding domain-
specific constraints, workflows, and goals is vital to selecting users who truly represent
typical domain challenges.
Context of Use: The physical, social, and organizational contexts influence how users
interact with technology. For example, a mobile worker’s domain may involve outdoor
environments, variable lighting, and intermittent connectivity, whereas an office user’s
domain may involve collaborative workflows and multi-tasking.
3. Recruitment Challenges
Finding and recruiting typical users can be difficult and resource-intensive, which poses practical
challenges:
Access and Availability: Some user groups might be hard to reach due to geographic,
institutional, or privacy barriers. For example, recruiting healthcare professionals or
children requires permissions and specialized approaches.
Willingness to Participate: Users may be reluctant to participate due to time constraints,
distrust, or lack of interest, potentially leading to a sample biased toward more motivated
individuals.
Sample Size and Representativeness: Small sample sizes limit the generalizability of
findings. However, very large samples may be impractical. Balancing these factors is
essential to get meaningful yet feasible results.
Self-Selection Bias: When users volunteer themselves, the sample might over-represent
enthusiastic or tech-savvy individuals.
Selection Bias: Choosing users based on convenience (e.g., colleagues, friends) may not
reflect the broader user base.
Stereotyping: Assuming certain users fit a stereotype can lead to ignoring diversity and
unique needs.
Designers must use careful sampling strategies, such as stratified or purposive sampling, to
mitigate bias.
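As a minimal sketch of stratified sampling, the code below draws a fixed number of participants from each expertise stratum of a recruitment pool; the pool, strata, and field names are assumptions for illustration.

# Minimal stratified-sampling sketch: draw an equal number of participants
# from each expertise stratum. The pool and field names are illustrative.
import random
from collections import defaultdict

pool = [
    {"name": "P1", "expertise": "novice"},
    {"name": "P2", "expertise": "novice"},
    {"name": "P3", "expertise": "intermediate"},
    {"name": "P4", "expertise": "intermediate"},
    {"name": "P5", "expertise": "expert"},
    {"name": "P6", "expertise": "expert"},
]

def stratified_sample(pool: list, per_stratum: int) -> list:
    strata = defaultdict(list)
    for person in pool:
        strata[person["expertise"]].append(person)
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, min(per_stratum, len(group))))
    return sample

print(stratified_sample(pool, per_stratum=1))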
Selecting users involves understanding their goals and typical tasks within the domain:
Goal Alignment: Users should be chosen whose goals align with the system’s purpose to
ensure relevance.
Task Variety: Users performing different tasks or using different features provide
comprehensive insights into system usability.
Skill Levels: Including users with varying expertise (novices, intermediates, experts)
helps evaluate how well the system supports learning and expert use.
Users come from diverse cultural backgrounds influencing expectations, interaction styles, and
acceptance of technology.
Localization Needs: User interfaces and workflows may need adaptation based on
language, cultural norms, and societal practices.
Ethical Concerns: Respecting cultural sensitivities and privacy during recruitment and
testing is paramount.
Users and their domains evolve over time due to changes in technology, work practices, or
societal trends.
Future-Proofing: Selecting users who represent anticipated future trends or tasks can
help design adaptable systems.
Continuous Engagement: Ongoing involvement of users in design cycles ensures the
system remains relevant and usable.
Preparing test conditions is a critical phase in usability testing and system evaluation, where the
environment, tasks, tools, and protocols are set up so that user interactions can be observed and
analyzed effectively. However, there are several challenges and issues that arise during this
preparation, which, if not properly addressed, can compromise the validity, reliability, and
usefulness of the test outcomes.
Task Representativeness: The tasks users perform during testing should closely reflect
real-world activities. Designing artificial or trivial tasks may not reveal genuine usability
issues or user behavior.
Task Complexity: Tasks that are too simple may not stress the system or reveal usability
problems; tasks too complex may overwhelm users or introduce confounding variables.
Task Clarity: Tasks must be clearly described and understandable to participants to
avoid confusion and inconsistent performance.
Balancing Standardization and Flexibility: Tests need consistent tasks for comparison
across users, yet should allow some flexibility to capture natural user behaviors.
Physical Environment: The testing environment should resemble the actual usage
context as much as possible. For example, testing a mobile app in a quiet lab room differs
from testing it outdoors on a noisy street.
Technological Setup: Hardware, software, network conditions, and peripherals should
match the real-life scenarios users experience to prevent artificial results.
Distraction and Interruptions: Real environments often have interruptions; overly
controlled settings may ignore these factors that affect usability.
Ergonomics: Seating, lighting, screen size, and input devices can affect user comfort and
performance, and thus need careful consideration.
Reliable Data Collection Tools: Tools for recording screen actions, mouse clicks,
keystrokes, eye tracking, or verbal protocols must be tested beforehand to ensure accurate
data capture.
Backup Plans: Equipment failure or software crashes can disrupt tests; having backups
and contingency plans is essential.
Calibration: Some equipment, like eye trackers or biometric sensors, require precise
calibration to work correctly.
5. Timing and Scheduling
Test Duration: Ensuring that the test is neither too long (which may cause fatigue) nor
too short (which may miss important behaviors).
Scheduling Conflicts: Coordinating test times that suit participants and testers, while
avoiding rushed or tired users.
Breaks and Rest: Planning for breaks during longer sessions to maintain user focus and
comfort.
Minimizing Bias: Testers must avoid influencing users with verbal or non-verbal cues.
Instructions should be neutral.
Consistency Across Sessions: Keeping conditions consistent across different test
participants to ensure comparable results.
Randomization: If multiple tasks or conditions are tested, randomizing order to avoid
learning effects or fatigue biases.
User Consent: Users must be informed about what data will be collected and how it will
be used.
Data Confidentiality: Ensuring personal data and test recordings are securely stored and
anonymized if necessary.
User Comfort: Avoiding situations that cause stress or discomfort during testing.
Handling User Errors: Designing test conditions that anticipate users making mistakes
without invalidating the test.
Dealing with Technical Problems: Having procedures to pause, restart, or reschedule
tests if technical issues arise.
Adaptability: Being ready to adjust tasks or conditions if initial plans prove ineffective.