Unity University
Department of Computer Science
HCI - Assignment
Group Members #ID
1. Yohannes Taye……………. UU89555R
2. Zelalem Tegene…………… UU92267R
3. Michael Solomon…………. UU91768R
4. Yonathan Wendesen………. UU90052R
5. Kibrom Hailemariam……… UU90077R
1/22/2025
Addis Ababa, Ethiopia.
7.1 Evaluation Techniques
Evaluation in Human-Computer Interaction (HCI) is a crucial phase in the design and
development of interactive systems. It involves assessing a system’s usability, functionality,
and overall user experience to ensure that it meets the needs and expectations of users. The
goal is to identify usability issues, improve design, and enhance user satisfaction. Evaluation
techniques in HCI can be broadly categorized into formative evaluation (conducted during
design and development) and summative evaluation (conducted after system
implementation).
Types of HCI Evaluation Techniques
HCI evaluation methods can collect qualitative or quantitative data, and are commonly
grouped by who or what performs the assessment: real users, experts, or automated tools.
Each approach has different goals and data-collection methods.
a) User-Based Evaluation (Empirical Evaluation)
This involves real users interacting with the system, and their performance, feedback, and
behaviours are observed and analysed.
1. Usability Testing
o Observes users performing specific tasks with the system.
o Metrics collected: task completion time, error rate, satisfaction levels, and
efficiency.
o Methods include think-aloud protocols (users verbalize thoughts while using
the system).
o Tools used: eye-tracking systems, usability labs.
o Example: Evaluating an e-commerce website by asking users to search for and
purchase a product.
2. Field Studies
o Conducted in the user's natural environment rather than in a controlled setting.
o Helps understand real-world usage patterns and contextual influences.
o Techniques: direct observation, shadowing, or diary studies.
o Example: Observing how office workers use collaborative tools in their daily
workflow.
3. A/B Testing
o Compares two or more design variations to determine which one performs
better based on user interaction data.
o Metrics analysed: click-through rates, conversion rates, and task efficiency.
o Example: Testing two different versions of a landing page to see which
attracts more sign-ups (see the analysis sketch after this list).
4. Interviews and Surveys
o Collect subjective user opinions and experiences through structured or semi-
structured questions.
o Surveys gather responses from a large sample size, while interviews provide
in-depth insights.
o Example: Conducting a survey to assess user satisfaction after using a mobile
banking app.
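To make the A/B-testing analysis concrete, here is a minimal Python sketch that compares
the sign-up (conversion) rates of two landing-page variants with a two-proportion z-test.
The visitor and conversion counts are hypothetical, and a real study would also plan
sample sizes in advance.

```python
import math

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test comparing the conversion rates of variants A and B."""
    p_a = conversions_a / visitors_a            # conversion rate, variant A
    p_b = conversions_b / visitors_b            # conversion rate, variant B
    # Pooled rate under the null hypothesis that the variants do not differ
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical sign-up counts for two landing-page variants
rate_a, rate_b, z, p = ab_test(conversions_a=120, visitors_a=2400,
                               conversions_b=158, visitors_b=2380)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}  p = {p:.4f}")
```

With these invented counts, variant B converts at about 6.6% against 5.0% for A, and
p ≈ 0.015 would usually be taken as a statistically significant difference at the 5% level.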
b) Expert-Based Evaluation (Analytical Evaluation)
These evaluations do not require end users but instead rely on experts who assess the
interface based on established usability principles and guidelines.
1. Heuristic Evaluation
o Usability experts evaluate the system using a set of predefined usability
heuristics (e.g., Nielsen’s heuristics).
o Helps identify usability issues early in the design phase.
o Example heuristics include consistency, error prevention, and visibility of
system status.
o Example: Reviewing an e-learning platform to ensure compliance with
usability heuristics.
2. Cognitive Walkthrough
o Experts simulate a user’s thought process to determine whether a new user can
complete tasks without training.
o Focuses on learnability and ease of use for first-time users.
o Example: Evaluating the sign-up process of a healthcare portal for new users.
3. Guidelines and Standards Inspection
o Checks the system against established design guidelines and accessibility
standards (e.g., WCAG for accessibility).
o Example: Assessing a website for compliance with accessibility standards for
visually impaired users (see the contrast-ratio sketch after this list).
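As a concrete instance of guidelines inspection, the sketch below implements the WCAG 2.x
colour-contrast check: it computes the relative luminance of two sRGB colours and tests
their contrast ratio against the 4.5:1 level-AA threshold for normal-sized text. The
example colours are arbitrary; #767676 on white sits almost exactly at the AA limit.

```python
def relative_luminance(hex_colour):
    """Relative luminance of an sRGB colour, per the WCAG 2.x definition."""
    r, g, b = (int(hex_colour.lstrip('#')[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearise(c):
        # Undo the sRGB gamma encoding for each channel
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)

def contrast_ratio(foreground, background):
    """WCAG contrast ratio, always expressed lighter-to-darker (>= 1)."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio('#767676', '#FFFFFF')    # grey text on a white page
print(f"Contrast {ratio:.2f}:1, AA normal text: {'pass' if ratio >= 4.5 else 'fail'}")
```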
c) Automated Evaluation
Tools and software analyse the interface for usability issues without human intervention.
1. Automated Usability Testing Tools
o Software tools analyse aspects such as layout, colour contrast, and page
loading speed (see the timing sketch after this list).
o Example tools: Google Lighthouse for web usability testing.
o Example: Evaluating the accessibility of a government website using
automated tools.
2. Eye-Tracking and Biometric Analysis
o Monitors user eye movements, facial expressions, or physiological responses
to measure cognitive load and attention.
o Example: Using eye-tracking software to see how users scan a product page.
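As a minimal illustration of automated measurement, the sketch below times how long a
page takes to fetch, using only the Python standard library. This captures network
transfer only, not rendering or interactivity; full audits of layout, contrast, and
performance are better left to dedicated tools such as Google Lighthouse. The URL is a
placeholder.

```python
import time
import urllib.request

def average_load_time(url, samples=3):
    """Average time in seconds to download a page, a crude proxy for loading speed."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()                      # fetch the full body
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

print(f"Average load time: {average_load_time('https://example.com'):.2f} s")
```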
d) Longitudinal Studies
1. Beta Testing
o A group of real users use the system for an extended period and provide
feedback.
o Helps in identifying long-term usability issues and patterns.
o Example: Conducting beta testing for a new social media platform before its
public launch.
2. Analytics and Logs Analysis
o Analysing user interaction data over time through log files, heatmaps, and
usage reports.
o Example: Tracking how frequently users access a feature in a mobile app (see
the log-analysis sketch below).
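The sketch below shows this kind of log analysis, assuming a hypothetical CSV interaction
log with a 'feature' column; the file name and column name are invented for the example
and should be adapted to the actual logging format.

```python
import csv
from collections import Counter

def feature_usage(log_path):
    """Count how often each feature appears in a CSV interaction log."""
    with open(log_path, newline='') as f:
        return Counter(row['feature'] for row in csv.DictReader(f))

counts = feature_usage('interaction_log.csv')   # hypothetical log file
for feature, n in counts.most_common(5):        # five most-used features
    print(f"{feature}: {n} uses")
```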
Evaluation Metrics in HCI
When conducting evaluations, certain metrics help measure usability and performance
(a short computation sketch follows this list), including:
1. Efficiency – Time taken to complete a task.
2. Effectiveness – Accuracy and completeness of task completion.
3. Satisfaction – User perceptions of ease of use and experience.
4. Error Rate – Frequency and types of errors users encounter.
5. Learnability – How easily a new user can become proficient.
6. Retention – How often users return to use the system.
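The sketch below shows how several of these metrics can be computed from raw
usability-test records; the session data are invented for illustration.

```python
from statistics import mean

# Hypothetical usability-test records: one dict per task attempt
sessions = [
    {"completed": True,  "time_s": 42.0, "errors": 1, "satisfaction": 4},
    {"completed": True,  "time_s": 35.5, "errors": 0, "satisfaction": 5},
    {"completed": False, "time_s": 90.0, "errors": 3, "satisfaction": 2},
    {"completed": True,  "time_s": 51.2, "errors": 2, "satisfaction": 3},
]

effectiveness = mean(s["completed"] for s in sessions)              # completion rate
efficiency = mean(s["time_s"] for s in sessions if s["completed"])  # time on completed tasks
error_rate = mean(s["errors"] for s in sessions)                    # errors per attempt
satisfaction = mean(s["satisfaction"] for s in sessions)            # mean 1-5 rating

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Efficiency:    {efficiency:.1f} s per completed task")
print(f"Error rate:    {error_rate:.2f} errors per attempt")
print(f"Satisfaction:  {satisfaction:.1f} / 5")
```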
Choosing the Right Evaluation Technique
The selection of evaluation techniques depends on several factors, such as:
• Stage of Development:
o Early stages: Heuristic evaluation and cognitive walkthrough.
o Later stages: Usability testing and field studies.
• Resources Available:
o Budget and time constraints influence whether expert-based or user-based
methods are used.
• Target Audience:
o The choice of methods should align with the intended user group and their
technical proficiency.
7.2 What Is Evaluation?
Evaluation is the systematic process of assessing something to determine its value,
effectiveness, or quality based on specific criteria or standards. It involves gathering and
analyzing information to make informed judgments, improve performance, or guide decision-
making. Evaluation is used in various fields, such as education, business, healthcare, and
software development, to assess programs, projects, systems, or products.
Purposes of Evaluation:
• To measure success or effectiveness.
• To identify areas for improvement.
• To inform future decisions or actions.
Types of Evaluation:
• Formative Evaluation: Conducted during development to improve the process.
• Summative Evaluation: Conducted at the end to assess overall effectiveness or impact.
• Process Evaluation: Focuses on how something is being implemented.
• Outcome Evaluation: Measures the results and impact achieved.
8.1 User Support
User support in HCI refers to the assistance provided to users to help them effectively interact
with a system or application. It ensures that users can achieve their goals efficiently, even
when they face difficulties or unfamiliarity with the system. Good user support enhances
usability, satisfaction, and overall user experience (UX).
Types of User Support:
1. Proactive Support: Anticipates user difficulties and offers help before problems
occur, for example onboarding tours, tooltips, and contextual hints.
2. Reactive Support: Responds to problems after users encounter them, for example
help desks, FAQs, and live-chat assistance.
3. Embedded Support: Help built directly into the interface, such as inline
instructions, wizards, and context-sensitive help.
4. External Support: Assistance provided outside the system itself, such as printed
manuals, online documentation, training, and community forums.