
SEN 103: USABILITY ENGINEERING

Software that is highly usable can perform maximally on a given task in terms of response time (how fast the software can complete a task). According to Smith and Williams (2011), performance is the degree to which a system meets its objectives for timeliness; it is the capability of a given application to respond quickly to a given task (Rory, 2015; Beekums, 2017). For routine tasks, good performance depends on the efficient execution of the task to yield quality results. This is reflected in task completion time, which is an attribute of usability. In essence, performance ensures that information is available on demand and accessible without delay.

Although performance depends on the efficient execution of a sequence of processes that produces the desired result (that is, a service that fulfils its basic purpose), it does not capture the difficulty users experience in achieving that performance. Tan (2009) opined that usability can be discussed in contexts such as execution time, performance, user satisfaction and ease of learning. Performance, in this sense, refers to the extent to which a software system can be used by its users to achieve specified goals without hindrance, and with satisfaction, in a specified context of use. Thus, performance describes attributes for measuring the quality of interactive software, which is commonly known as usability. Usability was coined to replace the term user friendly (Bevan, Kirakowski & Maissel, 1991) and has long been recognized as an important quality attribute for interactive systems. The usability of a system is the ease with which the system is used by its users (Simoes-Marques and Nunes, 2012), and an interactive system should be easy and pleasant to use (Cockton, 2013). Thus, usability is a feature of the interaction between the system and the user.

In corroboration of the above, any software that supports users' tasks should provide users with whatever assistance they need to perform their functions (services), be easy and efficient in accomplishing tasks, and be pleasing to use. In other words, software is usable when users can perform and complete a task successfully and without frustration. This has led software engineers to develop software that meets user expectations and is user-friendly (Constantine & Lockwood, 1999). ISO 9241-11 and researchers such as Nielsen (1993) and Shackel (1991) have developed a variety of models, standards and guidelines to help measure and evaluate the quality of software systems. These standards are applicable at every stage of a system's lifecycle.

The International Organization for Standardization (ISO 9241-11, 1998) defined usability as the "extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". ISO 9241-11 (1998) emphasizes that the usability of a system depends on its context of use, which comprises the users, the equipment (software and hardware), the tasks, and the physical and social environment. Nielsen (2012) described usability in terms of five quality components: learnability, efficiency, memorability, errors and subjective satisfaction. Usability and utility are both important, and together they determine whether a software product is useful. A software product that is difficult to use is unlikely to be used even if it fits its purpose, and a software product that cannot fulfil its users' requirements will hardly be used even if it is easy to use.

Central to the above definitions is how well software fits its context of use, supports interaction and satisfies users. The quality in use of software can be measured as the outcome of interaction with a computer system: whether the intended goals of the software are achieved with effectiveness and with an appropriate expenditure of resources (such as time and mental effort) in a way that the user finds acceptable (satisfactory). This can be determined through usability evaluation.

Consequently, usability evaluation methods are grouped into usability inspection methods, usability testing with users, assessment of the use of existing software, and questionnaires and surveys (Usability Professionals Association, 2012). Blecken, Bruggemann & Marx (2010) categorized usability evaluation methods into user-based methods (which require a user to test the software and consist mainly of usability tests and questionnaires) and expert-based methods (empirical evaluation applied when the system is already in use, with the goal of determining the overall usability of the system). When usability is evaluated, the focus is on improving the user interaction while the context of use is treated as a given; this implies that the level of usability achieved depends on the quality of the product. However, when quality in use is evaluated, any component of the context of use may be subject to modification (Seffah, Donyaee, Kline & Padda, 2015). In this regard, quality is measured as the outcome of the interaction with the software, including whether the intended goals of the system are achieved in a way the user finds satisfactory. The purpose is to establish the extent to which a system meets its predefined goals and to identify usability problems from the user's perspective. Usability evaluation can also be performed by real users doing real work to assure good usability once the system moves into its real context of use. Hence, it is paramount to evaluate the usability of software products.

Usability (also known as quality in use) is one of the important attributes to consider when dealing with products that are, or will be, widely used (Tijani, 2014). Usability has been defined in different ways by researchers in Human Computer Interaction (HCI) and software engineering. Different models have been developed for quantifying and assessing the usability of systems and human-computer interaction. Some models have a similar structure, while others established their own parameters using terms that are peculiar to them but have the same or similar meanings. The models and standards used to assess or measure the usability of application systems are discussed below.

Shackel’s Model of Usability

Shackel's model was developed by Brian Shackel in 1991. According to Shackel's model, usability is an attribute that describes a system's acceptability. For a system to be acceptable, it must be functional, suitable for the user and balanced in terms of its cost. This means that the degree of acceptability is directly related to the levels of system utility, usability, likability and cost. Shackel described usability as the capability (in human functional terms) to be used easily and effectively by specified users, given specified training and user support, to fulfil specified tasks within a specified range of environmental scenarios. This definition emphasizes the human aspects of interaction, which are determined by the context of use. Four dimensions are considered in Shackel's model: effectiveness, learnability, flexibility and attitude.

According to Shackel (1991), a decision about the acceptability of a system is reached by considering utility, the ability to provide the needed functionality (will the system do what is needed functionally?); usability (will the user actually work with it successfully?); likability (will the users feel it is suitable?); and cost (what are the capital and running costs, and what are the social and organizational consequences?).

Shackel also specified metrics for measuring each of the four dimensions of usability: speed and freedom from errors (effectiveness), in terms of performance; the time required to learn and to retain what has been learned (learnability), that is, performing tasks within a specified time from installation, training and user support; adaptation to tasks and environments, suitability for intended users and whether the system can be customized (flexibility); and likability (attitude), that is, staying within acceptable levels of human cost in terms of tiredness, discomfort, frustration and personal effort (Madan & Dubey, 2012; Weichbroth, 2018). Shackel described how each dimension can be measured; the measurement involves a number of human factors relating to performance and attitude (Leventhal & Barnes, 2008; Madan & Dubey, 2012). For instance, to measure learnability, a user would have to learn a set of defined skills within some specified time after installation and training; if users become competent with the system within a given time, the system is said to be easy to learn. Shackel's measurement is thus based on human factors related to performance and attitude rather than on weighting the dimensions in quantifiable or measurable terms, which may differ from one project to another (Leventhal & Barnes, 2008; Madan & Dubey, 2012). Weichbroth (2018) argued that flexibility is difficult to specify, communicate and test in a real software system environment.

Figure 1: Shackel's Model of Usability (acceptance depends on utility, usability, likability and cost; usability comprises effectiveness (speed, errors), learnability (time to learn, retention), flexibility and attitude). Source: Shackel (1991)
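
To make Shackel's operational criteria concrete, the short Python sketch below checks hypothetical measured values against example thresholds for speed, errors, time to learn and attitude. The field names and threshold values are illustrative assumptions, not values prescribed by Shackel (1991).

# Illustrative check of measured values against Shackel-style operational
# usability criteria. All thresholds and field names are hypothetical.

criteria = {
    "task_time_s":     ("<=", 120),   # effectiveness: speed of task performance
    "errors_per_task": ("<=", 1),     # effectiveness: errors
    "time_to_learn_h": ("<=", 4),     # learnability: time to reach competence
    "attitude_rating": (">=", 4.0),   # attitude: rating on a 1-5 satisfaction scale
}

measured = {
    "task_time_s": 95,
    "errors_per_task": 0.6,
    "time_to_learn_h": 5.5,
    "attitude_rating": 4.2,
}

def meets(value, op, target):
    # Return True when the measured value satisfies the criterion.
    return value <= target if op == "<=" else value >= target

for name, (op, target) in criteria.items():
    ok = meets(measured[name], op, target)
    print(f"{name:16s} measured={measured[name]:<6} target {op} {target:<6} -> {'pass' if ok else 'fail'}")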

Nielsen's Model of Usability

Jakob Nielsen developed a usability model, known as Nielsen's model of usability, in 1993. The model considers usability an integral part of system usefulness. It divides system acceptability into practical and social acceptability, both of which contribute to overall acceptability. Practical acceptability is further subdivided into reliability, cost, compatibility and usefulness, and these factors collectively contribute to system acceptability. Usability combined with the utility of a system determines its usefulness: while utility describes whether the functionality of the system can do what is needed, usability describes how well users can exploit that functionality. By implication, any system that does not meet its users' needs and requirements is not useful, whether or not it is usable, because users will not accept it.

Nielsen's model identifies five important characteristics of usability: easy to learn (learnability), efficient to use (efficiency), easy to remember (memorability), few errors (a low error rate) and subjectively pleasing (satisfaction). The characteristics are defined as follows:

➢ Learnability: the system must be easy to learn, so that inexperienced users can work with it satisfactorily.

➢ Efficiency of use: the system must perform efficiently, allowing high productivity in terms of the resources spent to achieve goals with accuracy and completeness.

➢ Memorability: the system must be easy to remember after a period of non-use.

➢ Error frequency: the accuracy and completeness with which users achieve specific objectives. It is a measure of usage that reflects how well users can perform their tasks, for instance the physical or cognitive skills necessary to achieve objectives from a set of actions.

➢ Satisfaction: the attitude of users toward the system, involving a desirable, positive attitude and an absence of discomfort. It measures the degree to which each user enjoys interacting with the system.

Nielsen also suggested how each characteristic can be measured. For instance, to measure learnability, an evaluator should select new or novice users and measure how long it takes them to reach a proficiency level with the system (Nielsen, 1993). Like Shackel's model, Nielsen's model does not weight the dimensions (characteristics) in quantifiable or measurable terms; rather, it recognizes the importance of each dimension, which may differ from one project to another (Leventhal & Barnes, 2008; Madan & Dubey, 2012). System acceptability is related to both usability and utility; by implication, a system can be usable even if it has no utility, or it can meet the users' requirements yet not be usable.
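
A minimal sketch of how learnability could be quantified along the lines Nielsen suggests is given below: for a group of novice users, record the time each takes to reach a predefined proficiency criterion and summarize it. The session data and the target values are hypothetical.

# Illustrative measurement of learnability as the time novice users need to
# reach a proficiency criterion (e.g. completing a benchmark task unaided).
# All figures below are hypothetical.

from statistics import mean, median

# Minutes of use before each novice first completed the benchmark task unaided.
time_to_proficiency_min = [35, 48, 27, 52, 41, 33, 60, 38]

print(f"Mean time to proficiency:   {mean(time_to_proficiency_min):.1f} min")
print(f"Median time to proficiency: {median(time_to_proficiency_min):.1f} min")

# A project-specific learnability target (hypothetical): 80% of novices should
# reach proficiency within 45 minutes.
target_min, target_share = 45, 0.80
share = sum(t <= target_min for t in time_to_proficiency_min) / len(time_to_proficiency_min)
print(f"Share reaching proficiency within {target_min} min: {share:.0%} "
      f"({'meets' if share >= target_share else 'misses'} the {target_share:.0%} target)")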

Figure 2: Nielsen's Model of Usability (system acceptability comprises social and practical acceptability; practical acceptability covers usefulness, cost, compatibility and reliability; usefulness combines utility and usability; usability comprises learnability, efficiency, memorability, errors and satisfaction). Source: Nielsen (1993)

ISO 9241-11 Model

Part 11 of ISO 9241 was originally titled Ergonomic requirements for office work with visual display terminals (VDTs); the series title was later changed to ergonomics of human-system interaction in 2006 (ISO 9241-11, 1998). The 1998 ISO 9241-11 model treats usability as a factor that is further subdivided into three sub-factors: effectiveness, efficiency and satisfaction. Usability is the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. The model considers the user (the person who interacts with the product), the goal (the intended outcome) and the context of use (which includes the users, tasks, equipment, that is, hardware, software and materials, and the physical and social environments in which a product is used). These factors may have an impact on the overall design of the product and, in particular, will affect how the user interacts with the system (Harrison, Flood, & Duce, 2013).

The sub-factors are defined as follows:

➢ Effectiveness is the accuracy and completeness with which users achieve specified

goals.

➢ Efficiency is the resources expended in relation to the accuracy and completeness with

which users achieve goals.

➢ Satisfaction is freedom from discomfort, and positive attitudes towards the use of the product.

ISO's view is concerned with the outcome of using the product, and it is deliberately broad because it is intended to be used for procurement, design, development, communication and evaluation. This implies that both products intended for general application and specific products can be evaluated with this model. The ISO 9241-11 model is centred on performance, which involves effectiveness and efficiency in system usage, and on satisfaction, in a specified context of use. The context of use includes users, tasks, equipment (hardware and software) and the physical and social environment. The model also identifies the usability aspects and the components of the context of use to be taken into consideration during usability evaluation, among other activities (such as specification and design).

Unlike the usability models of Shackel (1991), Nielsen (1993) and Constantine and Lockwood (1999), the ISO 9241-11 (1998) standard does not consider learnability, memorability and errors to be attributes of a product's usability, although it could be argued that they are included within the definitions of effectiveness, efficiency and satisfaction. To measure usability, it is necessary to identify the goals and to decompose effectiveness, efficiency and satisfaction, and the components of the context of use, into subcomponents with measurable and verifiable attributes (Aspiazu, 2013).

Figure 3: ISO 9241-11 Usability Model (usability is subdivided into effectiveness, efficiency and satisfaction). Source: ISO 9241-11 (1998)

Measures of effectiveness, efficiency and satisfaction can be specified for the overall goals of a system (ISO 9241-11, 1998).

Table 1: Overall usability measures from ISO 9241-11

Usability objective: overall usability
Effectiveness measures: percentage of goals achieved; percentage of users who successfully completed the task; average accuracy of completed tasks
Efficiency measures: time to complete a task; tasks completed per unit time; monetary cost of performing the task
Satisfaction measures: rating scale for satisfaction; frequency of discretionary use; frequency of complaints

Source: ISO 9241-11 (1998)
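
As an illustration of how the overall measures in Table 1 might be computed from raw test-session data, the sketch below derives effectiveness, efficiency and satisfaction figures from a handful of hypothetical sessions; the data, field layout and rating scale are assumptions, not requirements of ISO 9241-11.

# Illustrative computation of ISO 9241-11 style overall usability measures
# from hypothetical test-session records.

sessions = [
    # (task goal achieved?, time to complete in seconds, satisfaction rating 1-5)
    (True, 210, 4), (True, 185, 5), (False, 300, 2), (True, 240, 4), (True, 195, 3),
]

achieved = [s for s in sessions if s[0]]

effectiveness = len(achieved) / len(sessions)                 # percentage of goals achieved
efficiency    = sum(s[1] for s in achieved) / len(achieved)   # mean time to complete a task
satisfaction  = sum(s[2] for s in sessions) / len(sessions)   # mean rating on the scale used

print(f"Effectiveness: {effectiveness:.0%} of goals achieved")
print(f"Efficiency:    {efficiency:.0f} s mean completion time")
print(f"Satisfaction:  {satisfaction:.1f} / 5 mean rating")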

Additional measures that may be required for particular usability objectives (ISO 9241-11, 1998) are presented in Table 2 below:

Table 2: ISO 9241-11 usability measures for particular objectives

Satisfaction of needs of trained users
  Effectiveness measures: number of power tasks performed; percentage of relevant functions used
  Efficiency measures: relative efficiency compared with an expert user
  Satisfaction measures: rating scale for satisfaction with power features

Satisfaction of needs to walk up and use
  Effectiveness measures: percentage of tasks completed successfully on first attempt
  Efficiency measures: time taken on first attempt; relative efficiency on first attempt
  Satisfaction measures: rate of voluntary use

Satisfaction of needs for infrequent or intermittent use
  Effectiveness measures: percentage of tasks completed successfully after a specified period of non-use
  Efficiency measures: time spent re-learning functions; number of persistent errors
  Satisfaction measures: frequency of use

Minimization of support requirements
  Effectiveness measures: number of references to documentation; number of calls to support; number of accesses to help
  Efficiency measures: productive time; time to learn to criterion
  Satisfaction measures: rating scale for satisfaction with support facilities

Learnability
  Effectiveness measures: number of functions learned; percentage of users who manage to learn to criterion
  Efficiency measures: time to learn to criterion; time to re-learn to criterion; relative efficiency while learning
  Satisfaction measures: rating scale for ease of learning

Error tolerance
  Effectiveness measures: percentage of errors corrected or reported by the system; number of user errors tolerated
  Efficiency measures: time spent on correcting errors
  Satisfaction measures: rating scale for error handling

Legibility
  Effectiveness measures: percentage of words read correctly at normal viewing distance
  Efficiency measures: time to correctly read a specified number of characters
  Satisfaction measures: rating scale for effort

Source: ISO 9241-11 (1998)

ISO/IEC 9126 (2001) is a multi-part software product quality model titled Software engineering – Product quality. The standard has four parts: the quality model, ISO/IEC 9126-1 (Part 1); external metrics, ISO/IEC 9126-2 (Part 2); internal metrics, ISO/IEC 9126-3 (Part 3); and quality-in-use metrics, ISO/IEC 9126-4 (Part 4).

ISO/IEC 9126-1 Model

The ISO/IEC 9126-1 (2001) quality model describes a two-part model for software quality: internal and external quality, and quality in use. The internal and external quality model describes six characteristics of a quality product: functionality, reliability, efficiency, maintainability, portability and usability. Usability (ease of use), according to ISO/IEC 9126-1 (2001), is a characteristic that is further subdivided into understandability, learnability, operability, attractiveness and usability compliance. Understandability refers to the capability of the software to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use; learnability is the capability of the software to enable the user to learn its application; operability refers to the capability of the software to enable the user to operate and control it; attractiveness is the capability of the software to be attractive to the user; and usability compliance is the capability of the software to adhere to standards, conventions, style guides or regulations relating to usability. ISO/IEC 9126-1 (ISO/IEC, 2001) defines and identifies the software quality characteristics and their sub-characteristics, but does not describe how these sub-characteristics can be measured.

Figure 4: ISO/IEC 9126-1 Quality Model (internal and external quality comprises functionality, reliability, efficiency, usability, maintainability and portability; usability is subdivided into understandability, learnability, operability, attractiveness and usability compliance). Source: ISO/IEC (2001)

Although ISO/IEC 9126-1 (ISO/IEC, 2001) does not describe how any of the usability sub-characteristics can be measured, other parts of ISO/IEC 9126 define internal and external metrics that can be applied to measure the characteristics and sub-characteristics. Internal metrics measure the actual software, external metrics measure the behaviour of the computer-based system that includes the software, and quality-in-use metrics measure the effects of using the software in a specific context of use. Internal metrics are static measures that do not rely on software execution, whereas external metrics are applicable to running software (dynamic measures), as pointed out by Bucur (2006). Quality-in-use metrics are only applicable when the final product is used in real conditions (Colin et al., 2008). Internal metrics may be applied to a non-executable software product (such as a request for proposal, requirements definition, design specification or source code) during its development. Internal metrics provide users with the opportunity to measure the quality of intermediate deliverables and thereby predict the quality of the final product. This allows users to identify quality issues and take corrective action as early as possible in the development life cycle.

External metrics may be used to measure the quality of the software product by measuring the behaviour of the system of which it is a part. External metrics can only be used during the testing stages of the life cycle process and during any operational stages. Although these metrics emphasize measurement of software system behaviour, external metrics can also be used to measure users' interaction with the software system. The measurement is performed when executing the software product in the system environment in which it is intended to operate. Parts 2 and 3 of ISO/IEC 9126 contain examples of metrics for the characteristics, which can be used to specify and evaluate usability criteria. Table 3 shows some examples of metrics that are applicable to the usability characteristic of the internal and external quality model.

Table 3: Internal and external metrics

Understandability
  Internal metric: completeness of description (what proportion of functions is described in the product description?) - number of functions described in the product description divided by total number of functions
  External metric: completeness of description (what proportion of functions is understood after reading the product description?) - number of functions understood divided by total number of functions

Learnability
  Internal metric: completeness of user documentation and/or help facility (what proportion of functions is described in the user documentation and/or help facility?) - number of functions described divided by total number of functions provided
  External metric: help frequency (how frequently does a user have to access help to learn operation to complete his/her work task?) - number of accesses to help until a user completes his/her task

Operability
  Internal metric: user operation undoability (what proportion of functions can be undone?) - number of implemented functions that can be undone by the user divided by number of functions
  External metric: default value availability in use (can users easily select parameter values for convenient operation?) - number of times users fail to establish or to select parameter values divided by total number of times that users attempt to establish or to select parameter values

Attractiveness
  Internal metric: user interface appearance customizability (what proportion of user interface elements can be customized in appearance?) - number of types of interface elements that can be customized divided by total number of types of interface elements
  External metric: interface appearance customizability (what proportion of the appearance of interface elements can be customized to the user's satisfaction?) - number of interface elements whose appearance is customized to the user's satisfaction divided by number of interface elements that the user wished to customize

Usability compliance
  Internal metric: usability compliance (how compliant is the product with applicable regulations, standards and conventions for usability?) - number of correctly implemented items related to usability compliance confirmed in the evaluation divided by total number of compliance items
  External metric: usability compliance (how completely does the software adhere to standards, conventions, style guides or regulations relating to usability?) - number of specified usability compliance items that have not been implemented during testing divided by total number of specified usability compliance items

Source: ISO/IEC (2003)
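
Most of the metrics in Table 3 reduce to simple ratios of counted events. The sketch below shows two of them with hypothetical counts; it is an illustration in the spirit of ISO/IEC 9126, not an official calculation procedure.

# Illustrative ratio metrics in the style of Table 3. The counts are hypothetical.

# External understandability: completeness of description.
functions_understood = 42       # functions understood after reading the product description
functions_total = 50            # total number of functions
completeness_of_description = functions_understood / functions_total

# External operability: default value availability in use.
failed_parameter_selections = 7     # times users failed to establish or select parameter values
attempted_parameter_selections = 120
default_value_availability = failed_parameter_selections / attempted_parameter_selections

print(f"Completeness of description: {completeness_of_description:.2f}")
print(f"Default value availability (failure ratio, lower is better): {default_value_availability:.3f}")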

ISO/IEC 9126-4 Quality in Use Model

The quality-in-use part of ISO/IEC 9126 describes usability in terms of the quality in use of a system, with four characteristics: effectiveness, productivity, safety and satisfaction. The model defines these characteristics as follows:

➢ Effectiveness: the capability of software to enable users to achieve specified goals with

accuracy and completeness in a specified context of use.

➢ Productivity: the capability of software to enable users to expend appropriate amounts

of resources in relation to the effectiveness achieved in a specified context of use.

➢ Safety: the capability of software to achieve acceptable levels of risk of harm to people

or the environment in a specified context of use.

➢ Satisfaction: the capability of software to satisfy users in a specified context of use.

Figure 5: ISO/IEC 9126 Quality in Use Model (quality in use comprises effectiveness, productivity, safety and satisfaction). Source: ISO/IEC (2001)

ISO/IEC 9126-4 defines quality-in-use metrics for measuring these characteristics and sub-characteristics. Table 4 shows examples of metrics that are applicable to quality in use.
Table 4: ISO/IEC 9126-4 quality-in-use metrics

Effectiveness
  Metric: task completion (what proportion of the tasks are completed?)
  Measurement: number of tasks completed divided by total number of tasks attempted

Productivity
  Metric: productive proportion (what proportion of the time is the user performing productive actions?)
  Measurement: productive time divided by task time, where productive time = task time - help time - error time - search time

Safety
  Metric: safety of people affected by system use (what is the ratio of risk to people affected by system use?)
  Measurement: number of people put at risk divided by total number of people potentially affected by the system

Satisfaction
  Metric: satisfaction scale (how satisfied is the user?)
  Measurement: questionnaire producing psychometric scales divided by the population average

Source: ISO/IEC 9126-4 (2004)
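
The quality-in-use metrics in Table 4 can likewise be computed directly from logged session data. The sketch below uses hypothetical figures and follows the formulas given in the table: the task completion ratio, and the productive proportion with productive time = task time - help time - error time - search time.

# Illustrative quality-in-use metrics following the formulas in Table 4.
# All session figures are hypothetical.

# Effectiveness: task completion.
tasks_completed, tasks_attempted = 17, 20
task_completion = tasks_completed / tasks_attempted

# Productivity: productive proportion.
task_time_s = 600.0
help_time_s = 60.0
error_time_s = 45.0
search_time_s = 30.0
productive_time_s = task_time_s - help_time_s - error_time_s - search_time_s
productive_proportion = productive_time_s / task_time_s

print(f"Task completion:       {task_completion:.0%}")
print(f"Productive proportion: {productive_proportion:.0%} "
      f"({productive_time_s:.0f} s productive out of {task_time_s:.0f} s)")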

According to ISO/IEC 9126-4 (2004), the difference between usability and quality in use lies in the context of use. When usability is evaluated, the focus is on improving the user interface while the context of use is treated as a given; this implies that the degree to which usability is achieved depends on the specific circumstances in which the product is used. On the other hand, when quality in use is evaluated, any component of the context of use may be altered or modified.

ISO/IEC 25010 Quality model

The ISO/IEC 25010 standard (ISO/IEC, 2011), also known as the quality model, originated from ISO/IEC 9126-1 (ISO/IEC, 2001). The standard describes a two-part model for software and system Quality Requirements and Evaluation (SQuaRE), comprising a product quality model and a quality-in-use model. The product quality model consists of eight characteristics: functional suitability, reliability, performance efficiency, usability, security, compatibility, maintainability and portability. These relate to the static properties of the software and the dynamic properties of the computer system. Each characteristic is further subdivided into a set of related sub-characteristics that can be measured by internal or external measures.

In his explanation of the change from ISO/IEC 9126-1 to ISO/IEC 25010, Bevan (2009) stated that a Common Industry Format (CIF) for usability test reports was adopted by ISO as part of the revised Software product Quality Requirements and Evaluation (SQuaRE) set of standards. With the higher profile of usability in industry, there was pressure to align the SQuaRE definition with the CIF, which made it possible to define usability as a characteristic of quality in use, with sub-characteristics of effectiveness, efficiency and satisfaction. The alignment establishes operability as a characteristic, subdivided into appropriateness recognisability, learnability, ease of use, attractiveness, technical accessibility and operability compliance. The characteristic and its sub-characteristics are described as follows:

Operability is the degree to which the product has attributes that enable it to be understood, learned and used, and to be attractive to the user, when used under specified conditions. Users may include operators, end users and indirect users who are under the influence of, or dependent on, the use of the software. Appropriateness recognisability is the degree to which the product provides information that enables users to recognize whether the software is appropriate for their needs; learnability is the degree to which the product enables users to learn its application; ease of use is the degree to which users find the product easy to operate and control; attractiveness is the degree to which the product is attractive to the user; technical accessibility is the degree to which users with specified disabilities can operate the product; and operability compliance is the degree to which the product adheres to standards, conventions, style guides or regulations in laws and similar prescriptions relating to operability.

The second part of the model is the quality-in-use model. Quality in use is the degree to which a product can be used by specific users to meet their needs, that is, to achieve specific goals with effectiveness, efficiency, freedom from risk and satisfaction in specific contexts of use. It encompasses effectiveness, efficiency, flexibility, safety and satisfaction in a specific context of use. The standard identifies three quality-in-use characteristics: usability, flexibility and safety. The sub-characteristics of usability are effectiveness, efficiency, usability compliance and satisfaction. Effectiveness is defined as the accuracy and completeness with which users achieve specified goals; efficiency is the resources expended in relation to the accuracy and completeness with which users achieve goals; usability compliance is the degree to which the product adheres to standards or conventions relating to usability; and satisfaction is the extent to which users are satisfied in a specific context of use. Satisfaction is further subdivided into likability (cognitive satisfaction), pleasure (emotional satisfaction), comfort (physical satisfaction) and trust (satisfaction with security).

Figure 6: ISO/IEC 25010 Quality in Use Model (quality in use comprises usability, flexibility and safety; usability is subdivided into effectiveness, efficiency, satisfaction and usability compliance; satisfaction is subdivided into likability, pleasure, comfort and trust). Source: ISO/IEC 25010 (2011)

Constantine and Lockwood Model (1999)

In addition to the above models, Constantine and Lockwood (1999) argued that software that is usable is easy to learn and use, enabling users to work with it and improving their productivity. The Constantine and Lockwood (1999) model identifies five usability attributes of a software system: efficiency in use, learnability, rememberability, reliability in use and user satisfaction. The authors further explained that software that is reliable in use allows users to make fewer mistakes and promotes reliable human performance. Reliability in use is related to user interface design, which contributes to the usability of software (Weichbroth, 2018).

Table 5: A summary of usability models

Shackel's model
  Effectiveness - criteria: speed of performance, errors; metric: the percentage of the specified range of tasks completed by the user in terms of speed and errors
  Learnability - criteria: time to learn, retention; metric: time required for a user to learn how to accomplish a task on a system and to retain that ability over time
  Flexibility - criteria: adaptation
  Attitude - criteria: likeability; metric: rating scale for satisfaction

Nielsen's model
  Learnability - criteria: ease of learning; metric: success rate, whether users can perform a task within a specified minimum time
  Efficiency - criteria: efficiency of use; metric: the time it takes to perform a task or set of tasks
  Memorability - criteria: easy to remember after a period of non-use (interregnum); metric: user performance and memory test
  Error frequency - criteria: error tolerance; metric: number of errors a user makes while attempting to accomplish a task, and the time it takes a user to recover from an error while attempting to accomplish a task
  Satisfaction - criteria: positive or desirable attitude while interacting with the system; metric: rating scale for satisfaction

ISO 9241-11 model
  Effectiveness - criteria: accuracy and completeness; metric: percentage of goals achieved
  Efficiency - criteria: resources expended in relation to the accuracy and completeness with which users achieve goals; metric: time to complete a task (or tasks)
  Satisfaction - criteria: comfort and attitudes; metric: rating scale for satisfaction

ISO/IEC 9126-1 model
  Learnability - criteria: help frequency; metric: number of accesses to help until a user completes his/her task
  Understandability - criteria: completeness of description (what proportion of functions is understood after reading the product description); metric: number of functions understood divided by total number of functions
  Operability - criteria: default value availability in use; metric: number of times users fail to establish or to select parameter values divided by total number of times that users attempt to establish or to select parameter values
  Attractiveness - criteria: appealing appearance; metric: number of interface elements whose appearance is customized to the user's satisfaction divided by number of interface elements that the user wished to customize
  Usability compliance - criteria: standards, conventions, guides; metric: number of specified usability compliance items that have not been implemented during testing divided by total number of specified usability compliance items

ISO/IEC 9126-4 model
  Effectiveness - criteria: accuracy and completeness; metric: number of tasks completed divided by total number of tasks attempted
  Productivity - criteria: resources expended in relation to the effectiveness achieved; metric: productive time divided by task time, where productive time = task time - help time - error time - search time
  Safety - criteria: risk acceptance; metric: number of people put at risk divided by total number of people potentially affected by the system
  Satisfaction - criteria: comfort and attitudes; metric: questionnaire producing psychometric scales divided by the population average

ISO/IEC 25010 model (quality in use)
  Effectiveness - criteria: accuracy and completeness; metric: number of tasks completed divided by total number of tasks attempted
  Efficiency - criteria: resources expended in relation to the accuracy and completeness; metric: time to complete a task (or tasks)
  Satisfaction - criteria: likability, pleasure, comfort, trust; metric: rating scale for satisfaction
  Usability compliance - criteria: adherence to standards, conventions, guides; metric: number of specified usability compliance items that have not been implemented during testing divided by total number of specified usability compliance items

The word software was first coined and used by John W. Tukey in 1958 in an article published in the American Mathematical Monthly (Agrwal, 2014). Early software was called computer programs and code; programs were installed on the computer during its configuration and were difficult to change, delete, uninstall or reinstall. This means that software was initially part of the computer and was not available separately. In 2003, Ceruzzi defined software as a single entity, separate from the computer's hardware, that works with the hardware to solve a given problem. According to Imo and Igbo (2011), software is a program designed to perform specific functions. Central to these definitions is the capability of software to run on computer hardware to carry out specific tasks (such as those of the library) effectively. Software is an electronic program that allows hardware to perform a set of functions (Chauhan, 2010). This suggests that instructions are needed to accomplish a given task.

Broadly, software is classified into system software and application software. System software consists of one or more programs designed to control the operation of a computer. These include operating systems, which control the overall performance of the computer, as well as any programs that support application software, and utility programs. Creating a file, controlling input/output devices, executing other programs and managing memory are handled by the operating system; DOS, Windows, Mac, UNIX and Linux are examples of system software (Edem, 2016). Application software is a computer program designed to help users perform specific tasks. Such systems are typically menu driven, and common examples include relational database management software, Microsoft Office, word processing, database management and spreadsheet packages. Software cannot achieve the purpose for which it was designed until it runs on hardware and produces the required result; a computer therefore works in response to the instructions provided.

Depending on the nature of the source code, software packages can be divided into two distinct categories: closed source software and open source software. Closed source software is commercial (proprietary) software developed and supported by for-profit organisations that sell licences for the use of their software, and it is driven by maximizing profit. Open/free software is maintained by communities of developers who contribute modifications to improve the product continually and decide on the course of the software based on the needs of the community. Open source software is free and distributed at no cost under a licensing agreement that allows the source code to be shared, viewed and modified by users (Gauri and Shipra, 2016; Marshall, 2017; Saltis, 2017). The implication is that the program can be read, so that users can improve on it over time.

Source code is a program written in a programming language in a form that is readable by humans. Access to source code is important so that users can improve its features to suit their purposes, because no software has all the features a particular user needs. Libraries, especially academic libraries whose wealth of resources consists of print and electronic materials, can adopt open source software in order to modify or improve it to suit their purposes.

Usability of Software Product

The first effort to define usability was made by Miller in 1971, in terms of measures for ease of use (Weichbroth, 2018). Software and hardware that were easy to use were said to be user friendly, but the term did not prevail because the efficiency of computer programs was being debated at the same time and several researchers came to consider the issues together (Weichbroth, 2018). Usability is a significant factor in the success of software, websites and other products (Baguma, Kiprono & Kirui, 2016; Dubey & Saxena, 2013; Hayat, Lock & Murray, 2015; Nielsen, 2012; Pratas, 2014). Usability experts have attempted to define usability, from the simplified notion of ease of use to the more complex concept that describes the successful completion of a task by the user. Simoes-Marques and Nunes (2012) defined usability as the ease with which the system is used by its users.

Similarly, Nielsen (2012) described usability as a quality attribute that assesses how easy user interfaces are to use, and defined the term by five quality components: learnability, efficiency, memorability, errors and subjective satisfaction. Learnability is emphasized as the fundamental attribute of usability, since software needs to be learnt before it is used; the first experience people have with a new software product is learning to use it. Nielsen also relates this to a novice's ability to reach a reasonable level of proficiency, which indicates a direct relationship between learnability and efficiency. Thus, the user interface should be easy to learn so that real users (library staff and patrons) are able to complete a given task successfully.

In addition, Nielsen's model does not consider utility to be part of usability but a separate attribute of a system; both utility and usability combine to determine whether a system is useful. According to Nielsen, utility is the ability of a system to meet the needs of the user. If a product fails to provide utility, it does not offer the features and functions required, and the usability of the system becomes superfluous as it will not allow users to achieve their goals. When usability is reduced to ease of use, it does not provide adequate information to guide user-centred design tasks toward the goal of usable products (Quesenbery, 2001). Usability goes beyond ease of use.

Bevan, Kirakowski & Maissel (1991) described the usability of any system as a function of the particular users being studied, the tasks they perform and the environment in which they work. This view encompasses the product-oriented perspective, in which usability is measured in terms of the ergonomic attributes of the product; the user-oriented perspective, in which it is measured in terms of the mental effort and attitudes of the user; and the user-performance perspective, in which it is determined by how users interact with the system in terms of ease of use (whether the software can easily be used) and/or acceptability (whether the software will be used in the real world). Users can therefore be observed to see how quickly and easily they can use the system to accomplish desired tasks.

According to Shackel (1991), usability is the capability, in human functional terms, to be used easily and effectively by specified users, given specified training and user support, to fulfil specified tasks within a specified range of scenarios. Shackel's definition emphasizes interaction and that usability is largely determined by the context in which a system is supposed to operate, not by the presence or absence of certain features. Thus, a system that is deemed usable in one context might prove to be less usable in another where different users or tasks come into play. His idea of an acceptable system is one that satisfies its users' requirements for utility, usability and cost.

ISO 9241-11 (1998), along with Bevan (1995), considers effectiveness, efficiency and satisfaction to be the measures of usability. These views regard usability as a high-level quality objective, reflected in the definition that usability is the extent to which a system can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. ISO 9241 stresses that usability is not an attribute of the system in isolation; the attributes of the product contribute to the product being usable in a particular context of use (ISO, 2018). By implication, a system should be used for its intended purpose in a real work environment. ISO 9241-11 also emphasizes that usability is dependent on the context of use (users, tasks and environments). Thus, quality of use (measured as satisfaction, efficiency and effectiveness) is the outcome of the interaction between the user and the system while performing a task in a physical, social and organizational environment (ISO, 2018; Bevan, 1995). Bevan and Macleod (1995) posited that quality can be measured as the outcome of interactions with a software system, including whether the goals of the software system are achieved (effectiveness) with an appropriate expenditure of resources (time, effort) in a way the user finds acceptable (satisfaction).

Usability is characterized by good interaction between users and tools that are suitable and satisfactory to use (Fontdevila, Genero and Oliveros, 2017). It is also determined by the degree to which the software can be successfully integrated into the tasks performed in the work environment (Mifsud, 2011). Bell and Morgan (2015) asserted that designers develop software for hardware devices that requires users to spend time learning how to use it, believing the software to be easy to use even when it is not convenient. However, users want to get tasks done quickly without spending much time learning (Inostroza, Collazos, Roccagliolo, & Rusu, 2016). In this regard, software products have to be user friendly, since they are designed for end users who have little or no knowledge of the software; yet Haklay and Tobón (2010) opined that a lot of software requires significant knowledge to operate. For such software to be well utilized, it must be simple. Simplicity in software allows a user to accomplish tasks with ease. Lee, Kim and Yi (2015) noted that to achieve simplicity the application must contain only essential features, structured in a way that is logical to the user and forming a coherent set of simple tasks. This calls for thoughtfully designed software for users who need to navigate through its modules and fields with ease.

Usable software increases productivity and reduces costs while satisfying users; however, advances in software technology have enabled a wide range of applications to be developed that are difficult to use and tend to waste users' time, cause frustration and discourage users from further use (Nielsen, 2012; Hall, 2015). Applications with interactive interfaces are often complex and frequently face usability issues such as information overload, lack of adequate task support, screen clutter and limited interaction mechanisms (Bahruddin, Singh and Razali, 2013). Usability is not a functional evaluation of software features but rather a determination of software operability (Hall, 2015).

Usability engineering is the process of determining the usability goals of a particular piece of software by applying a set of methods and techniques. Usability engineering exists to improve the user interface of a targeted system (Lecerof and Paterno, 1998; Nielsen, 1993); this implies that it aims to identify the usability issues of software. The process involves specifying usability criteria and then assessing the software against those criteria (Preece, Rogers & Sharp, 2015; Dix, Finlay, Abowd & Beale, 2003). Enhancing usability therefore requires that human-software interaction play a major role in attaining the goal of improving user performance and satisfaction (Sung & Mayer, 2012). Thus, a software system with good human interaction enables users to use products effectively to accomplish tasks with satisfaction and ease of use.

Usability metrics are used to measure the usability aspects of a system (Dix, Finlay, Abowd & Beale, 2003). The ISO 9241-11 standard (1998) identifies criteria and metrics for assessing usability, for instance effectiveness (accuracy and completeness), efficiency (time) and satisfaction; these can be used to determine the level of usability of a software system. Although Dix, Finlay, Abowd & Beale (2003) suggested that usability assessment should be done during the development stage in order to judge the final product against predefined usability criteria, these systems are designed to be used in real work environments and therefore require the end users who interact with the software system, such as library staff and patrons, to assess their usability. Usability criteria and metric values differ from one software system to another, so a higher-priority usability aspect of one system may become a lower-priority aspect in another (Lecerof & Paterno, 1998).

Usability Evaluation of Software Products

Usability evaluation is an important activity that focuses on assessing the usability of software products. It is the process of assessing the extent to which software products enable users to achieve their goals, how fast those goals can be achieved, how easy the software is to learn and how satisfied users are when using it. The Usability Professionals Association (2012) defined usability evaluation as the process of assessing the usability of a system with the purpose of identifying usability problems and/or obtaining usability measures. It determines how well users are able to use a system to meet their expectations. Bourque (2014) asserted that usability evaluation assesses how easy it is for end users to learn and to use a particular piece of software.

The goal of usability evaluation is to identify usability problems and improve the product, thereby helping developers to fulfil the users' requirements (Riihiaho, 2015); this can be achieved through formative and summative evaluation. Formative evaluation involves monitoring the development process and product and gathering user feedback for use in modification and further development (Riihiaho, 2015; Bourque, 2014). Its purpose is to identify and eliminate usability problems during the development process, so feedback is provided to software developers concerning usability problems in order to improve the software (Rubin, 2008). Summative evaluation assesses the extent to which usability objectives have been achieved (Riihiaho, 2015); the aim is to evaluate the usability of a completed product under realistic (real-world) conditions to determine whether the product meets specific measurable performance and satisfaction goals, or to establish usability benchmarks and make comparisons (Sauro, 2010). The Usability Professionals Association (2012) identified three usability evaluation methods: usability inspection methods, usability testing with users, and questionnaires and surveys, among others.

Usability inspection approach – this approach is commonly used by experienced evaluators (usability specialists/experts, software developers and experienced professionals) to examine usability-related issues of a user interface. Evaluators such as research librarians can inspect a library management software system based on guidelines and their own judgement to identify usability problems and possibly obtain quantitative measures about them (Bligard, 2013). Nielsen (1993) opines that the methods in this approach are easy to learn, inexpensive, fast to apply and do not require special equipment; however, they are performed by software developers and experienced or expert evaluators because they require participants with usability knowledge. This means that a user who is not a software developer might find it difficult to use these methods to evaluate a software product. The approach consists of evaluation methods such as heuristic evaluation, the pluralistic walkthrough and the cognitive walkthrough (Usability Professionals Association, 2012).

In heuristic evaluation, usability experts compare a software, hardware or documentation product against a list of design principles (heuristics) and identify where the product does not follow those recognized principles (User Experience Professional, 2010). According to Muniz (2016), experts evaluate the user interface of a product against accepted usability principles. Nielsen (1994) developed ten heuristic principles to serve as guidelines for usability evaluation, which are commonly used by experienced or expert researchers to discover system usability problems. The set of heuristic principles for user interface design by Nielsen (1995) comprises: visibility of system status; match between the system and the real world; user control and freedom; consistency and standards; error prevention; recognition rather than recall; flexibility and efficiency of use; aesthetic and minimalist design; help users recognize, diagnose and recover from errors; and help and documentation. Each evaluator examines each dialogue element several times, comparing it with the set of guidelines.

It has been observed in heuristic evaluations that single evaluators miss most of the problems, although different evaluators find different problems (Lizano and 2014). Better results are therefore obtained by combining the findings of several evaluators. In a study conducted by Nielsen (1992), an interface was subjected to heuristic evaluation by three different groups of evaluators: (i) novices (those who have knowledge about computers but no usability expertise), (ii) single experts (usability specialists not specialized in the domain of the interface) and (iii) double experts (with expertise in both usability and the domain of the interface being evaluated). The study revealed that novices detected 22% of the problems in the interface, single experts 41% and double experts 60%. The study concluded that the best results are obtained by using double experts as evaluators, but recommended the use of single experts for practical purposes and double experts when optimal performance is necessary. The usability inspection approach can also be carried out through the cognitive and pluralistic walkthrough methods.
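
The observation that different evaluators find different problems can be made concrete by merging the sets of problems each evaluator reports. The sketch below uses hypothetical problem identifiers to show how the combined list grows as evaluators are added.

# Illustrative aggregation of heuristic-evaluation findings from several
# evaluators. Problem identifiers are hypothetical.

evaluator_findings = {
    "evaluator_1": {"P1", "P3", "P4"},
    "evaluator_2": {"P2", "P3", "P7"},
    "evaluator_3": {"P1", "P5", "P7", "P8"},
}

combined = set()
for name, problems in evaluator_findings.items():
    combined |= problems  # union of all problems reported so far
    print(f"After {name}: {len(problems)} found individually, "
          f"{len(combined)} distinct problems in total")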

Cognitive walkthrough method: this is an inspection method used for evaluating the usability of a system's user interface (Bligard and Osvalder, 2013), with the emphasis on tasks (Wharton, Rieman, Lewis & Polson, 1994). According to Ghalibaf, Jangi, Habibi, Zangouei and Khajover (2018), the cognitive walkthrough is a task-based, expert-centred, analytical usability evaluation method that tries to identify problems by simulating end users' cognitive processes. The expert(s) assess the degree of difficulty users may experience while learning to operate an application to perform a given task. The idea is to identify users' goals and how they attempt to achieve those goals using the system. During a cognitive walkthrough, evaluators inspect an interface in the context of specified tasks by adopting the role of the targeted end user and considering each action necessary to accomplish the task.

The evaluator first determines the exact sequence of actions for correct task performance, and then estimates how users would follow that sequence (Lewis, 1997). For each step in the task performance, the evaluators consider four important questions intended to simulate the user's interaction with the system:

1. Will the users try to achieve the right effect? For instance, if the task is to open a new document, then the first thing the user must do is open the word processing program.

2. Will the user notice that the correct action is available? If the action is to select from a visible menu, is the action legible and located in an easily viewable location, or a location where the user expects it to be? If the word processing icon is hidden or buried under many menu layers, the user may never see it as a possible action.

3. Will the user associate the correct action with the effect they are trying to achieve? If there is a menu option that says "word processor", the user should have little difficulty associating the option with the goal; if the menu option is not so obvious, the user may have difficulty.

4. If the correct action is performed, will the user see that progress is being made toward solving the task? If, after the user selects the word processing program, the system provides a dialog that states "word processor opening", the user will understand that the action was initiated successfully. Confusion may ensue when there is no feedback.

While attempting to answer each of the above questions, evaluators document the usability problems encountered and, where possible, the reasons for those problems (Lewis, 1997).
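As a practical illustration, the outcome of a cognitive walkthrough can be captured in a very simple record: one entry per action in the correct sequence, with the evaluator's answer to each of the four questions and any problem noted. The Python sketch below is a hypothetical format for such records (the task, step names and field names are invented for illustration), not a prescribed notation.

```python
# Sketch of a cognitive-walkthrough record: one entry per step in the
# correct action sequence, answering the four walkthrough questions.
from dataclasses import dataclass, field

@dataclass
class WalkthroughStep:
    action: str                  # the correct action at this step
    right_effect: bool           # Q1: will the user try to achieve the right effect?
    action_visible: bool         # Q2: will the user notice the correct action is available?
    action_associated: bool      # Q3: will the user associate the action with the effect?
    progress_visible: bool       # Q4: will the user see progress toward the task goal?
    problems: list[str] = field(default_factory=list)  # documented usability problems

def failed_steps(steps: list[WalkthroughStep]) -> list[WalkthroughStep]:
    """Return the steps where any of the four questions was answered 'no'."""
    return [s for s in steps
            if not (s.right_effect and s.action_visible
                    and s.action_associated and s.progress_visible)]

# Hypothetical walkthrough of "open a new document in a word processor"
steps = [
    WalkthroughStep("Open the word processing program", True, False, True, True,
                    ["Program icon is buried under several menu layers"]),
    WalkthroughStep("Choose 'New document' from the File menu", True, True, True, True),
]
for step in failed_steps(steps):
    print(step.action, "->", step.problems)
```

Recording the walkthrough in this way makes it easy to list only the steps where the simulated user is expected to fail, together with the documented reasons.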

Pluralistic walkthrough method: this is a group activity based on participation and is characterized as an inspection method in which a group of stakeholders with varying competence, such as users, management and developers, collaborate to review a product (Thorvald, Lindblom and Schmitz, 2015). In this method, a systematic group evaluation of a system is conducted in which usability experts, serving as walkthrough administrators, guide users through tasks simulated on hard copy and facilitate feedback about those tasks, while developers and other members of the product team address concerns or questions about the interface (Usability Professionals Association, 2010).

Usability testing with users: this approach is based on the participation of users in order to provide useful feedback related to their experiences while using the software. It is a process whereby the intended users of a system perform predetermined tasks on the system while being observed and recorded by researchers (evaluators) (Tullis, 2002). Thus, representative users work on typical tasks using the system, while the evaluators use the results to assess how usable the user interface is and how well it supports the users in doing their tasks.

Rubin and Chisnell (2008) described usability testing as a process that employs participants who are representative of the target audience to evaluate the degree to which a product meets specific usability criteria. The participants should represent real users and should perform the tasks that real users perform; only then will the test give the developers meaningful results (Dumas and Redish, 1999). Usability testing involves activity that focuses on observing users working with a product, performing tasks that are real to them (Barnum, 2011), although this last definition does not explicitly require the participants to be the real users of the system.

The classic method of ‘usability testing in the laboratory’ is normally considered the clearest example of these methods. Here, the test is conducted by evaluation staff comprising a test moderator and observers. The users perform several usability tasks by following the test moderator's instructions. During the session, each user follows a specific protocol (normally the ‘thinking aloud’ protocol) (Nielsen, 2012) in order to provide feedback to the evaluation staff regarding their experiences with the software. This feedback is systematically collected and analyzed in order to produce a list of usability problems (Tullis et al., 2002; Rubin & Chisnell, 2008).

Usability testing is an effective way of evaluating the usability of an application, and it can be performed throughout the development lifecycle, starting from the early stages of the product (Petrie & Bevan, 2009; Rubin & Chisnell, 2008). During the development phase, developers get to know to what extent users are satisfied with the product, and the results obtained can be used to improve the usability of the system. This approach enables developers to detect and fix usability issues as soon as they appear, thus improving the overall usability of the system (Sauro, 2014). Usability testing can also be carried out using comparative, remote and think-aloud testing methods.

Comparative usability testing: this is a method that compares two or more different existing systems or products. The method evaluates similar features and allows the evaluator to discover which features and interaction designs work better (Loranger, 2014). The focus is on testing in which the pros and cons of two or more systems or prototypes are evaluated based on users' experience (Ross, 2017; Loranger, 2014). Comparative usability testing compares existing systems (user interfaces) with each other, using quantitative metrics such as task completion and error rates (Ross, 2017). Ross further stated that participants perform the test tasks without interruption and without thinking aloud, while the evaluator observes how each person interacts with the competing systems. Various aspects of navigation, interaction, visual presentation and textual information are analyzed, with emphasis on critical aspects of the users' goal achievement. With this method, a product can be tested against several competitors' products.
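Because comparative testing relies on quantitative metrics, the analysis itself is straightforward. The Python sketch below (the system names and figures are invented purely for illustration) computes the task completion rate and error rate per system, the kind of metrics mentioned above.

```python
# Sketch: comparing two systems on task completion rate and error rate.
# All figures are invented for illustration only.
results = {
    "System A": {"attempts": 20, "completions": 17, "errors": 9},
    "System B": {"attempts": 20, "completions": 13, "errors": 21},
}

for name, r in results.items():
    completion_rate = r["completions"] / r["attempts"]
    errors_per_attempt = r["errors"] / r["attempts"]
    print(f"{name}: completion rate {completion_rate:.0%}, "
          f"{errors_per_attempt:.1f} errors per task attempt")
```

In this invented example, System A would be judged the stronger design on both metrics, although a real comparison would also consider statistical significance and qualitative observations.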

Remote usability testing: in remote usability testing, the researcher conducts the usability test with participants in their natural environment by using screen-sharing software. Remote usability testing is a method that exploits the user's own environment (home or office) and transforms it into a usability laboratory where users are observed with screen-sharing applications (Usability Testing, 2012). Thus, during remote usability testing, participants and researchers are located separately (Barnum, 2011). The aim of remote testing is to interact with or reach out to users around the world without necessarily being present (Baker, 2014).

Remote usability testing can be categorized into moderated and unmoderated testing (Baker, 2014). Moderated remote usability testing involves a moderator who instructs and guides the participants remotely throughout the test session (Baker, 2014; Barnum, 2011; Schade, 2013). This method is also called synchronous because the data are collected in real time, although the facilitator and the participant are physically separated (Barnum, 2011). Remote moderated usability testing allows more flexibility, as the moderator can alter the process, thereby allowing better task control and helping to obtain insightful data.

On the other hand, unmoderated remote usability testing requires participants to accomplish a predetermined set of tasks without a moderator present (Albert & Tullis, 2013; Baker, 2014). Unmoderated testing is also known as automated testing, owing to the data being presented and collected through a software tool (Barnum, 2011; Soucy, 2010). This kind of test does not require real-time human interaction; therefore, the recorded test data are examined later by usability professionals (Schade, 2013a). With this method, a significant number of participants can complete test sessions concurrently (Albert & Tullis, 2013; Baker, 2014); indeed, remote unmoderated usability testing makes it possible to test hundreds of participants simultaneously (Soucy, 2010).

Think-aloud testing: this is also called the think-aloud protocol. It is a method that involves users vocalizing their thoughts while performing the required tasks (Bergstrom & Olmsted-Hawala, 2012; Chisnell & Rubin, 2008; Nielsen, 2012). It is one of the most common usability testing methods used by usability experts (usability practitioners) due to its low-cost implementation, flexibility and simplicity. Nielsen (1993) recognized it as the most valuable method for enabling the researcher (usability expert) to reveal what users actually have in mind while interacting with a product (a software or hardware system). In a think-aloud test, the researcher (usability expert) asks test participants to use the software (or system) continuously while encouraging them to speak out loud, verbalizing their thoughts, feelings and what they are doing as they interact with the user interface (Nielsen, 2012). Think-aloud testing assists developers in understanding the way end-users think and in getting feedback directly from end-users (Nielsen, 2012).

Think aloud can be conducted in two ways, either concurrently or retrospectively. During a concurrent think-aloud test, participants ask questions and describe their thoughts and feelings as they work (Rubin & Chisnell, 2008; Bergstrom & Olmsted-Hawala, 2012). Although some participants might fall silent, researchers should encourage such participants to continue talking (Nielsen, 2012). Haak, Jong and Schellens (2003) noted that concurrent thinking aloud can affect performance and thereby increase the time taken to complete tasks, but McDonald and Petrie (2013) disagreed with this position: their study revealed that concurrent thinking aloud has no influence on performance but rather increases frustration and effort on tasks.

On the other hand, retrospective think aloud requires participants to interact quietly with the system and to verbalize their thoughts after the performance is over (Bergstrom & Olmsted-Hawala, 2012; Elling, Lentz & Jong, 2011). Retrospective think aloud prolongs the testing session, because participants complete the test tasks before reviewing and reporting on their performance (Isbister & Schaffer, 2008; Rubin & Chisnell, 2008). In addition, participants may forget information or reconstruct it, which may lead to a different interpretation of the performed actions (Haak et al., 2003, p. 341; Rubin & Chisnell, 2008). Thus, the concurrent think-aloud protocol is more reliable than retrospective think aloud.

Questionnaires and survey method: another method used to assess the usability of software is the use of questionnaires and surveys, which facilitate the collection of feedback from participants during and after usability testing (Barnum, 2011). Questionnaires can be open-ended or closed-ended. While open-ended questionnaires allow participants to respond and express their views in their own words, closed-ended questionnaires limit the participants to the options provided (Sauro & Lewis, 2012). An example is the System Usability Scale (SUS) method. This method provides a set of statements related to a particular topic (Albert & Tullis, 2013; Usability Professionals Association, 2012) and enables participants to express their extent of agreement or disagreement with each statement using a five-point scale. The data obtained are later quantified in order to analyze the usability status of the software (Brooke, 1996).
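For example, a completed SUS questionnaire is converted to a score between 0 and 100: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum is multiplied by 2.5 (Brooke, 1996). The Python sketch below assumes one respondent's answers are recorded on the 1–5 scale in questionnaire order; the sample responses are invented.

```python
# Sketch: converting one respondent's ten SUS answers (1-5 scale, in
# questionnaire order) into the standard 0-100 SUS score (Brooke, 1996).

def sus_score(responses: list[int]) -> float:
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)  # odd items positive, even negative
    return total * 2.5

# Hypothetical respondent
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # -> 85.0
```

Scores from all respondents are then averaged to give an overall usability score for the software.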

Table 6: Usability attributes by different models and standards

Effectiveness: effectiveness in ISO 9241-11 (1998), Shackel (1991), ISO/IEC 9126-2 (2001) and ISO/IEC 25010 quality in use (2011); proposed dimension: effectiveness.

Learnability: learnability in Shackel (1991), Nielsen (1993), Constantine and Lockwood (1999) and ISO/IEC 9126 (2001); proposed dimension: learnability.

Efficiency: efficiency in ISO 9241-11 (1998), Nielsen (1993) and ISO/IEC 25010 (2011); effectiveness (speed) in Shackel (1991); efficiency in use in Constantine and Lockwood (1999); productivity in ISO/IEC 9126-2 (2001); proposed dimension: efficiency.

Retention: learnability (retention) in Shackel (1991); memorability in Nielsen (1993); rememberability in Constantine and Lockwood (1999); proposed dimension: retention.

Errors: effectiveness (errors) in Shackel (1991); error frequency in Nielsen (1993); reliability in use in Constantine and Lockwood (1999).

Satisfaction: satisfaction in ISO 9241-11 (1998), Nielsen (1993), ISO/IEC 9126-2 (2001) and ISO/IEC 25010 (2011); attitude in Shackel (1991); user satisfaction in Constantine and Lockwood (1999); attractiveness in ISO/IEC 9126-1 (2001); proposed dimension: satisfaction.

Other attributes: flexibility (Shackel, 1991); understandability and operability (ISO/IEC 9126-1, 2001); safety; usability compliance (ISO/IEC 9126-1, 2001; ISO/IEC 25010, 2011).

In Table 6 above, the factors from the different models and standards are merged based on their corresponding views, and each merged dimension is considered in turn below.

Effectiveness as used by Shackel (1991) is concerned with speed and freedom from errors when users perform tasks. Nielsen (1993), on the other hand, views error frequency as being concerned with the accuracy and completeness with which users achieve specific objectives; in other words, how well users can perform their tasks. The ISO/IEC 9126-2 (2001) and ISO/IEC 25010 quality in use (2011) definitions correspond to the ISO 9241-11 (1998) view of effectiveness, which states the capability of the software to enable users to achieve specified tasks with accuracy and completeness. Central to these views is the accuracy and completeness with which users perform and achieve specific objectives with few or no errors. Thus, effectiveness will be considered a dimension of usability in the proposed model.

Learnability is not included in the ISO 9241-11 standard, but no matter how simple or easy a software product is, users must learn to operate it when the product is first acquired before they can perform the required tasks with it. In view of this, Shackel (1991), Nielsen (1993) and ISO/IEC 9126-2 (2001) identified learnability as a criterion of usability. According to these models and standards, learnability is the ease with which users learn to use the software and become proficient with it.

Productivity and efficiency as defined by Nielsen (1993), ISO/IEC 25010 quality in use (2011) and ISO/IEC 9126-2 (2001) have the same meaning. Productivity is the level of effectiveness achieved relative to the resources (that is, time to complete tasks, user effort, materials or financial cost of usage) consumed by the users and the software. The speed criterion used in Shackel's model (under effectiveness) is not different from the efficiency described above. Thus, efficiency is the capability of the software to allow users to expend appropriate amounts of resources in relation to the effectiveness achieved in a specified context of use; it is the speed (with accuracy) with which users accomplish tasks. An efficient task is an accurate and complete task, which matches the definition of productivity, and this is the basis for grouping productivity and efficiency together. Therefore, efficiency describes how quickly a task can be completed.
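As a simple worked illustration (all figures are invented), effectiveness is often reported as the proportion of tasks completed successfully, while a basic time-based view of efficiency relates the successfully completed tasks to the time spent:

```python
# Sketch: effectiveness as task completion rate and a simple time-based
# efficiency (successfully completed tasks per minute). Figures are illustrative.
tasks_attempted = 8
tasks_completed = 6
total_minutes = 24.0

effectiveness = tasks_completed / tasks_attempted   # 6/8 = 0.75, i.e. 75%
efficiency = tasks_completed / total_minutes        # 0.25 successful tasks per minute

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Efficiency: {efficiency:.2f} successful tasks per minute")
```

The same completion data thus feed both measures: effectiveness captures how completely the tasks were achieved, and efficiency captures how quickly.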

Although error frequency is not included in ISO 9241-11 (1998), Nielsen (1993) identified error frequency as an attribute, while Shackel (1991) discussed it as a sub-attribute of effectiveness, naming it "free from errors". Of interest is the view of Constantine and Lockwood (1999), who use "reliability in use" to denote error rate (depicted in Table 6). A software system that is reliable in use has a low error rate (Ferre, Juristo, Windl and Constantine, 2001). Hence, all these views focus on the number of errors users make while performing a task.

Shackel's model (1991) identified retention as a sub-characteristic of learnability, and Nielsen's memorability refers to the ease with which use of the system is remembered. Similarly, Constantine and Lockwood (1999) considered rememberability an attribute of software usability. All these views center on how easily users can recollect and recognize the processes learnt to accomplish or perform a task. Retention describes whether a user who has not used the software for a period of time can retain the procedures needed to perform tasks when the user wants to accomplish a given task. This study considers retention since users (librarians) can be deployed from one department to another and library patrons sometimes may not use the system for a period of time.

Satisfaction: the views of Nielsen (1993), ISO/IEC 9126-2 (2001), Constantine and Lockwood (1999) and ISO/IEC 25010 quality in use (2011) on satisfaction correspond to ISO 9241-11 (1998), but ISO/IEC 9126-1 (2001) used the term attractiveness to connote satisfaction. Nielsen (1993) described satisfaction as the attitude of users toward a system, Shackel (1991) referred to it as attitude, ISO/IEC 9126-2 (2001) views it as the capability of the software to satisfy users in a specified context of use, and ISO/IEC 25010 (2011) defined satisfaction as the extent to which users are satisfied in a specific context of use. Satisfaction is subdivided into likability, pleasure, comfort and trust (security) (ISO/IEC 25010). These sub-attributes are embedded in the ISO 9241-11 standard, which described satisfaction as freedom from discomfort and positive attitudes towards product use.

Usability compliance in ISO/IEC 9126-1 and usability compliance in ISO/IEC 25010 have the same definition, but while ISO/IEC 9126-1 describes usability compliance under both internal and external quality, ISO/IEC 25010 describes it as a quality-in-use attribute. They are merged together because of their common definition and can be used as an internal and external attribute associated with usability or quality in use.
