Empirical Software Engineering Guide
Introduction
As the size and complexity of software increase, software organizations face the pressure of delivering high-quality software within a specific time, budget, and available resources. The software development life cycle consists of a series of phases, including requirements analysis, design, implementation, testing, integration, and maintenance. Software professionals want to know which tools to use at each phase of software development and desire effective allocation of available resources. The software planning team attempts to estimate the cost and duration of software development, the software testers want to identify the fault-prone modules, and the software managers seek to know which tools and techniques can be used to reduce the delivery time and best utilize the manpower. In addition, the software managers also desire to improve the software processes so that the quality of the software can be enhanced. Traditionally, software engineers have made decisions based on their intuition or individual expertise, without any scientific evidence of the benefits of a tool or a technique.
Empirical studies are verified by observation or experiment and can provide powerful evidence for testing a given hypothesis (Aggarwal et al. 2009). Like other disciplines, software engineering has to adopt empirical methods that will help to plan, evaluate, assess, monitor, control, predict, manage, and improve the way in which software products are produced. An empirical study of real systems can help software organizations assess large software systems quickly, at low cost. The application of empirical techniques is especially beneficial for large-scale systems, where software professionals need to focus their attention and resources on various activities of the system under development. For example, developing a model for predicting faulty modules allows software organizations to identify faulty portions of source code so that testing activities can be planned more effectively. Empirical studies such as surveys, systematic reviews, and experimental studies help software practitioners to scientifically assess and validate the tools and techniques used in software development.
In this chapter, an overview and the types of empirical studies are provided, the phases of the experimental process are described, and the ethics involved in empirical research in software engineering are summarized. Further, this chapter also discusses the key concepts used in the book.
Software engineering is concerned with developing high-quality software within a specified time and budget. Fritz Bauer coined the term software engineering in 1968 at the first conference on software engineering and defined it as (Naur and Randell 1969):

The establishment and use of sound engineering principles in order to obtain economically developed software that is reliable and works efficiently on real machines.
The software engineering discipline facilitates delivery of good-quality software to the customer by following a systematic and scientific approach. Empirical methods can be used in software engineering to provide scientific evidence about the use of tools and techniques.
Harman et al. (2012a) defined “empirical” as:
“Empirical” is typically used to define any statement about the world that is related to
observation or experience.
Empirical software engineering (ESE) is an area of research that emphasizes the use of empirical methods in the field of software engineering. It involves methods for evaluating, assessing, predicting, monitoring, and controlling the existing artifacts of software development. ESE applies quantitative methods to software engineering phenomena to understand software development better. ESE has been gaining importance over the past few decades because of the availability of vast data sets from open source repositories that contain information about software requirements, bugs, and changes (Meyer et al. 2013).
Empirical studies are important in the area of software engineering as they allow software professionals to evaluate and assess new concepts, technologies, tools, and techniques in a scientific and proven manner. They also allow improving, managing, and controlling existing processes and techniques by using the evidence obtained from empirical analysis. The empirical information can help software management in decision making
and improving software processes. Empirical studies involve the following steps (Figure 1.1): formulating research questions, hypothesis formation, data collection, data analysis, model development and validation, and concluding results.

FIGURE 1.1
Steps in empirical studies.
Empirical studies allow researchers to gather evidence that can be used to support claims about the efficiency of a given technique or technology. Thus, empirical studies help in building a body of knowledge so that the processes and products are improved, resulting in high-quality software.

Empirical studies are of many types, including surveys, systematic reviews, experiments, and case studies.
In qualitative research, the researchers study human behavior, preferences, and nature. Qualitative research provides an in-depth analysis of the concept under investigation and thus uses focused data for research. Understanding a new process or technique in software engineering is an example of qualitative research. Qualitative research provides textual descriptions or pictures related to human beliefs or behavior. It can be extended to other studies with similar populations, but generalization of a particular phenomenon may be difficult. Qualitative research involves methods such as observations, interviews, and group discussions. This method is widely used in case studies.

Qualitative research can be used to analyze and interpret the meaning of results produced by quantitative research. Quantitative research generates numerical data for analysis, whereas qualitative research generates non-numerical data (Creswell 1994). The data of qualitative research is quite rich as compared to quantitative data. Table 1.1 summarizes the key differences between quantitative and qualitative research.

The empirical studies can be further categorized as experimental, case study, systematic review, survey, and post-mortem analysis. These categories are explained in the next section. Figure 1.2 presents the quantitative and qualitative types of empirical studies.
1.3.1 Experiment
An experimental study tests an established hypothesis by finding the effect of variables of interest (independent variables) on the outcome variable (dependent variable) using statistical analysis. If the experiment is carried out correctly, the hypothesis is either accepted or rejected. For example, if one group uses technique A and the other group uses technique B, which technique is more effective in detecting a larger number of defects? The researcher may apply statistical tests to answer such questions. According to Kitchenham et al. (1995), experiments are small scale and must be controlled. The experiment must also control the confounding variables, which may affect the accuracy of the results produced by the experiment. Experiments are carried out in a controlled environment and are often referred to as controlled experiments (Wohlin 2012).
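To make this concrete, the following sketch shows how such a statistical test could be applied; it assumes SciPy is available, and the defect counts are invented purely for illustration. A nonparametric Mann-Whitney U test checks whether the defect counts of the two groups differ:

# Sketch: comparing defect counts from two groups using different techniques.
# The counts below are invented for illustration only.
from scipy.stats import mannwhitneyu

defects_technique_a = [12, 15, 9, 14, 11, 13, 16, 10]  # group using technique A
defects_technique_b = [8, 7, 11, 6, 9, 10, 7, 8]       # group using technique B

# Two-sided test of the null hypothesis that both samples come from the
# same distribution of detected defect counts.
statistic, p_value = mannwhitneyu(defects_technique_a, defects_technique_b,
                                  alternative="two-sided")
print(f"U = {statistic}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the techniques differ.")
else:
    print("Fail to reject the null hypothesis.")

A nonparametric test is used in this sketch because defect counts from small groups rarely satisfy the normality assumptions of parametric tests.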
The key factors involved in experiments are independent variables, dependent variables, hypotheses, and statistical techniques. The basic steps followed in experimental research are shown in Figure 1.3.

TABLE 1.1
Comparison of Quantitative and Qualitative Research

FIGURE 1.2
Types of empirical studies.

FIGURE 1.3
Steps in experimental research: experiment definition, experiment design, experiment conduct and analysis, result interpretation, and reporting.

The same steps are followed in any empirical study
process; however, the content varies according to the specific study being carried out. In the first phase, the experiment is defined. The next phase involves determining the experiment design. In the third phase, the experiment is executed as per the experiment design. Then, the results are interpreted. Finally, the results are presented in the form of an experiment report. To carry out an empirical study, a replicated study (repeating a study with similar settings or methods but different data sets or subjects), or a survey of existing empirical studies, the research methodology followed in these studies needs to be formulated and described.
A controlled experiment involves varying one or more variables while keeping everything else constant, and is usually conducted in a small or laboratory setting (Conradi and Wang 2003). Comparing two methods for defect detection is an example of a controlled experiment in the software engineering context.
FIGURE 1.4
Case study phases.
Besides establishing that an independent variable affects the outcome variable, a researcher may want to explain why the independent variable affects the outcome variable.
The purpose of a systematic review is to summarize the existing research and provide future guidelines for research by identifying gaps in the existing literature. A systematic review involves a series of well-defined steps. Systematic reviews are performed in three phases: planning the review, conducting the review, and reporting the results of the review. Figure 1.5 presents a summary of the phases involved in systematic reviews.
FIGURE 1.5
Phases of systematic review.

In the planning stage, the review protocol is developed. This includes the following steps: identification of research questions, development of the review protocol, and evaluation of the review protocol. During the development of the review protocol, the basic processes in the review are planned. The research questions are formed to address the issues to be answered in the systematic literature review. The development of the review protocol involves
planning a series of steps—search strategy design, study selection criteria, study quality
assessment, data extraction process, and data synthesis process. In the first step, the search
strategy is described that includes identification of search terms and selection of sources to
be searched to identify the primary studies. The second step determines the inclusion and
exclusion criteria for each primary study. In the next step, the quality assessment criterion
is identified by forming the quality assessment questionnaire to analyze and assess the
studies. The second to last step involves the design of data extraction forms to collect the required information to answer the research questions, and, in the last step, the data synthesis process is defined. This series of steps is then executed during the conducting phase of the review. In the final phase, the results are documented. Chapter 2 provides details of systematic reviews.
FIGURE 1.6
Empirical study phases: study definition (scope, purpose, motivation, context), experiment design (research questions, hypothesis formulation, defining variables, data collection, selection of data analysis methods), research conduct and analysis (descriptive statistics, attribute reduction, statistical analysis, model prediction and validation, hypothesis testing, validity threats), results interpretation (theoretical and practical significance of results, limitations of the work), and reporting (presenting the results).
The scope of the empirical study defines the extent of the investigation. It involves listing the specific goals and objectives of the experiment. The purpose of the study may be to find the effect of a set of variables on the outcome variable or to prove that technique A is superior to technique B. It also involves identifying the underlying hypothesis that is formulated at later stages. The motivation of the experiment describes the reason for conducting the study. For example, the motivation of an empirical study may be to analyze and assess the capability of a technique or method. The object of the study is the entity being examined in the study. The entity in the study may be a process, product, or technique. Perspective defines the view from which the study is conducted. For example, if the study is conducted from the tester's point of view, then the tester will be interested in planning and allocating resources to test faulty portions of the source code. Two important domains in the study are programmers and programs (Basili et al. 1986).
1. Research questions: The first step is to formulate the research problem. This step states
the problem in the form of questions and identifies the main concepts and relations
to be explored. For example, the following questions may be addressed in empirical
studies to find the relationship between software metrics and quality attributes:
a. What will be the effect of software metrics on quality attributes (such as fault
proneness/testing effort/maintenance effort) of a class?
b. Are machine-learning methods adaptable to object-oriented systems for predicting quality attributes?
c. What will be the effect of software metrics on fault proneness when severity of
faults is taken into account?
2. Independent and dependent variables: To analyze relationships, the next step is to define the dependent and the independent variables. The outcome variable predicted by the independent variables is called the dependent variable. For instance, the dependent variables of the models chosen for analysis may be fault proneness, testing effort, and maintenance effort. A variable used to predict or estimate a dependent variable is called the independent (explanatory) variable.
3. Hypothesis formulation: The researcher should carefully state the hypothesis to be tested in the study. The hypothesis is tested on the sample data. On the basis of the result from the sample, a decision concerning the validity of the hypothesis (acceptance or rejection) is made.

Consider an example where a hypothesis is to be formed for comparing a number of methods for predicting fault-prone classes. For each method, M, the hypothesis in a given study is the following (the relevant null hypothesis is given in parentheses), where the capital H indicates "hypothesis." For example:

H-M: M outperforms the compared methods for predicting fault-prone software classes (null hypothesis: M does not outperform the compared methods for predicting fault-prone software classes).
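As a hedged illustration of testing H-M, the sketch below applies a paired, nonparametric Wilcoxon signed-rank test to the accuracy of method M and a compared method on the same data sets; all accuracy values are invented for the example:

# Sketch: testing whether method M outperforms a compared method across
# ten data sets. The paired accuracy values are invented for illustration.
from scipy.stats import wilcoxon

accuracy_m        = [0.81, 0.77, 0.85, 0.79, 0.88, 0.74, 0.82, 0.80, 0.76, 0.83]
accuracy_compared = [0.78, 0.75, 0.80, 0.79, 0.84, 0.70, 0.79, 0.77, 0.74, 0.80]

# One-sided alternative: M outperforms the compared method.
statistic, p_value = wilcoxon(accuracy_m, accuracy_compared, alternative="greater")
print(f"W = {statistic}, p = {p_value:.4f}")
# A small p-value rejects the null hypothesis that M does not outperform
# the compared method for predicting fault-prone software classes.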
4. Empirical data collection: The researcher decides the sources from which the data is to be collected. The literature shows that data is collected from university/academic systems, commercial systems, or open source software. The researcher should state the environment in which the study is performed, the programming language in which the systems are developed, the size of the systems to be analyzed (lines of code [LOC] and number of classes), and the duration for which the system has been developed.
5. Empirical methods: The data analysis techniques are selected based on the type of the dependent variables used. An appropriate data analysis technique should be selected by identifying its strengths and weaknesses. For example, a number of techniques are available for developing models to predict and analyze software quality attributes. These techniques may be statistical, like linear regression and logistic regression, or machine-learning techniques, like decision trees, support vector machines, and so on. Apart from these, there is a newer set of techniques, like particle swarm optimization, gene expression programming, and so on, that are called search-based techniques. The details of these techniques can be found in Chapter 7.
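A minimal sketch of assembling such candidate techniques, using scikit-learn as one possible toolkit (the particular estimators and settings are illustrative assumptions, not recommendations):

# Sketch: statistical and machine-learning candidates for model development.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

candidate_techniques = {
    "logistic regression": LogisticRegression(max_iter=1000),  # statistical
    "decision tree": DecisionTreeClassifier(random_state=42),  # machine learning
    "support vector machine": SVC(kernel="rbf"),               # machine learning
}

for name, model in candidate_techniques.items():
    print(f"{name}: {type(model).__name__}")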
In the empirical study, the data is analyzed corresponding to the details given in the experimental design. Thus, the experimental design phase must be carefully planned and executed so that the analysis phase is clear and unambiguous. If the design phase does not match the analysis part, then it is most likely that the results produced will be incorrect.
1. Descriptive statistics: The data is validated for correctness before carrying out the analysis. The first step in the analysis is descriptive statistics. The research data must be suitably reduced so that it can be read easily and used for further analysis. Descriptive statistics concern the development of certain indices or measures to summarize the data. The important statistical measures used for comparing different case studies include mean, median, and standard deviation. The data analysis methods are selected based on the type of the dependent variable being used. Statistical tests can be applied to accept or refute a hypothesis. Significance tests are performed for comparing the predicted performance of a method with other sets of methods. Moreover, effective data assessment should also identify outliers (Aggarwal et al. 2009).
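The sketch below computes these descriptive statistics and flags outliers with the common interquartile-range rule; the metric values are invented, and the 1.5 x IQR factor is a conventional choice rather than a requirement:

# Sketch: descriptive statistics and IQR-based outlier detection for one metric.
import numpy as np

loc_per_class = np.array([120, 85, 240, 95, 110, 1250, 130, 105, 90, 115])

print("mean  :", loc_per_class.mean())
print("median:", np.median(loc_per_class))
print("std   :", loc_per_class.std(ddof=1))  # sample standard deviation

# Values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are flagged as potential outliers.
q1, q3 = np.percentile(loc_per_class, [25, 75])
iqr = q3 - q1
outliers = loc_per_class[(loc_per_class < q1 - 1.5 * iqr) |
                         (loc_per_class > q3 + 1.5 * iqr)]
print("potential outliers:", outliers)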
2. Attribute reduction: Feature subset selection is an important step that identifies and removes as much of the irrelevant and redundant information as possible. Reducing the dimensionality of the data reduces the size of the hypothesis space and allows the methods to operate faster and more effectively (Hall 2000).
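Hall (2000) describes correlation-based feature selection (CFS); scikit-learn does not ship CFS, so the sketch below uses a univariate mutual-information filter only to illustrate the idea of attribute reduction on synthetic data:

# Sketch: attribute reduction by keeping the k most informative features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=42)

selector = SelectKBest(score_func=mutual_info_classif, k=5)
X_reduced = selector.fit_transform(X, y)

print("original attributes:", X.shape[1])
print("retained attributes:", X_reduced.shape[1])
print("selected indices   :", selector.get_support(indices=True))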
3. Statistical analysis: The data collected can be analyzed statistically by following the steps below.
a. Model prediction: Multivariate analysis is used for model prediction. Multivariate analysis finds the combined effect of the independent variables on the dependent variable. Based on the results of performance measures, the performance of the predicted models is evaluated and the results are interpreted. Chapter 7 describes these performance measures.

b. Model validation: In systems where models are independently constructed from the training data (such as in data mining), the process of constructing the model is called training. The subsamples of data that are used to validate the initial analysis (by acting as "blind" data) are called validation data or test data. The validation data is used for validating the model predicted in the previous step.

c. Hypothesis testing: It determines whether the null hypothesis can be rejected at a specified significance level. The significance level is determined by the researcher and is usually 0.01 or 0.05 (refer to Section 4.7 for details).
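The prediction and validation steps can be sketched end to end as follows; synthetic data stands in for real fault data, and the hold-out split ratio is an illustrative assumption:

# Sketch: model prediction on training data, validation on "blind" data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=300, n_features=10, random_state=42)

# The training data builds the model; the held-out validation data assesses it.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = model.predict(X_valid)
print("validation accuracy:", accuracy_score(y_valid, predictions))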
1.4.5 Reporting

Finally, after the empirical study has been conducted and interpreted, the study is reported in the desired format. The results of the study can be disseminated in the form of a conference article, a journal paper, or a technical report.

The results are to be reported from the reader's perspective. Thus, the background, motivation, analysis, design, results, and the discussion of the results must be clearly documented. The audience may want to replicate or repeat the results of a study in a similar context. The experiment settings, data-collection methods, and design processes must therefore be reported in a significant level of detail. For example, the descriptive statistics, statistical tools, and parameter settings of techniques must be provided. In addition, graphical representation should be used to present the results. The results may be graphically presented using pie charts, line graphs, box plots, and scatter plots.
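For instance, a box plot comparing two techniques across several runs could be produced with the sketch below (Matplotlib assumed; the accuracy values are invented):

# Sketch: presenting results graphically with a box plot, one box per technique.
import matplotlib.pyplot as plt

results = {
    "Technique A": [0.78, 0.81, 0.74, 0.80, 0.77],
    "Technique B": [0.70, 0.73, 0.69, 0.75, 0.71],
}

plt.boxplot(list(results.values()), labels=list(results.keys()))
plt.ylabel("Prediction accuracy")
plt.title("Comparison of techniques across runs")
plt.savefig("results_boxplot.png")  # illustrative output file name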
4. Valid: The experiment conclusions should be valid for a wide range of the population.

5. Unbiased: The researcher performing the study should not influence the results to satisfy the hypothesis. The research may contain some bias because of experimental error. Bias may be introduced when the researcher selects participants such that they generate the desired results. Measurement bias may occur during data collection.

6. Control: The experiment design should be able to control the independent variables so that the confounding effects (interaction effects) of variables can be reduced.

7. Replicable: Replication involves repeating the experiment with different data under the same experimental conditions. If the replication is successful, then this indicates generalizability and validity of the results.

8. Repeatable: The experimenter should be able to reproduce the results of the study under similar settings.
TABLE 1.2
Examples of Unethical Research

1. Employees misleading the manager to protect themselves, with the knowledge of the researcher
2. Nonconformance to a mandatory process
3. Revealing the identity of a participant or organization
4. A manager unexpectedly joining a group interview or discussion with the participant
5. An experiment revealing the identity of the participants of a nonperforming department in an organization
6. Experiment outcomes being used in employee ratings
7. Participants providing information off the record, that is, after the interview or discussion is over
The ethical threats presented in Table 1.2 can be reduced by (1) presenting data and results such that no information about the participant and the organization is revealed, (2) presenting different reports to different stakeholders, (3) providing findings to the participants and giving them the right to withdraw at any time during the research, and (4) providing publications to the companies for review before they are published. Singer and Vinson (2001) observed that general engineering and science ethics may not directly apply to empirical research in software engineering. They provided four ethical principles: informed consent, scientific value, confidentiality, and beneficence. Informed consent is typically obtained through a consent form, which includes the following sections:
1. Research title: The title of the project must be included in the consent form.
2. Contact details: The contact details (including ethics contact) will provide the
participant information about whom to contact to clarify any questions or issues
or complaints.
3. Consent and comprehension: In this section, the participant actually gives consent, stating that they have understood the requirements of the research.
4. Withdrawal: This section states that the participants can withdraw from the
research without any penalty.
5. Confidentiality: It states the confidentiality related to the research study.
6. Risks and benefits: This section states the risks and benefits of the research to the
participants.
7. Clarification: The participants can ask for any further clarification at any time
during the research.
8. Signature: Finally, the participant signs the consent form with the date.
1.5.3 Confidentiality

The information shared by the participants should be kept confidential. The researcher should hide the identity of the organization and the participants. Vinson and Singer (2008) identified three features of confidentiality—data privacy, participant anonymity, and data anonymity. The data collected must be protected by password, and only the people involved in the research should have access to it. The data should not reveal information about the participants, and the researchers should not collect personal information about them. For example, a participant identifier must be used instead of the participant's name. Participant anonymity is achieved by hiding information from colleagues, professors, and the general public. Hiding information from the manager is particularly essential, as it may affect the career of the participants. The information must also be hidden from the organization's competitors.
1.5.4 Beneficence

The participants must benefit from the research. Hence, methods that protect the interests of the participants and do not harm them must be adopted. The research must not pose a threat to the participants' jobs, for example, by creating an employee-ranking framework. Revealing an organization's sensitive information may also bring loss to the company in terms of reputation and clients. For example, if the names of companies are revealed in the publication, the comparison between the processes followed in the companies or potential flaws in those processes may affect obtaining contracts from clients. If the research involves analyzing the process of the organization, the outcome of the research or facts revealed from the research can harm the participants to a significant level. The foremost duty of the researcher must be to protect the interests of the participants so that they are protected from any harm. Becker-Kornstaedt (2001) suggests that participant interests can be protected by using techniques such as manipulating data, providing different reports to different stakeholders, and providing the right to withdraw to the participants.
Finally, feedback on the research results must be provided to the participants. The participants must also be asked for their opinion about the validity of the results. This will help in increasing the trust between the researcher and the participants.
The predictive models constructed in ESE can be applied to future, similar industrial applications. Empirical research enables software practitioners to use the results of the experiment and ascertain that a set of good processes and procedures are followed during software development. Thus, the empirical study can guide toward determining the quality of the resultant software products and processes. For example, a new technique or technology can be evaluated and assessed. The empirical study can help software professionals in effectively planning and allocating resources in the initial phases of the software development life cycle.
1.6.2 Academicians

While studying or conducting research, academicians are always curious to answer questions that are foremost in their minds. As the academicians dig deeper into their subject or research, the questions tend to become more complex. Empirical research empowers them with a great tool to find answers by asking or interviewing different stakeholders.
1.6.3 Researchers

From the researchers' point of view, the results can be used to provide insight about existing trends and guidelines regarding future research. The empirical study can be repeated or replicated by the researcher in order to establish generalizability of the results to new subjects or data sets.
FIGURE 1.7
Elements of empirical research: purpose, participants, process, and product.
Process lays down the way in which the research will be conducted. It defines the sequence of steps taken to conduct the research and provides details about the techniques, methodologies, and procedures to be used. The data-collection steps, variables involved, techniques applied, and limitations of the study are defined in this step. The process should be followed systematically to produce successful research.

Participants are the subjects involved in the research. The participants may be interviewed or closely observed to obtain the research results. While dealing with participants, ethical issues in ESE must be considered so that the participants are not harmed in any way.

Product is the outcome produced by the research. The final outcome provides the answers to the research questions in the empirical research. A new technique developed or a methodology produced can also be considered a product of the research. Journal papers, conference articles, technical reports, theses, and book chapters are products of the research.
The typical evolution process is depicted in Figure 1.8. The figure shows that a change in the project is requested by a stakeholder (anyone who is involved in the project). The second step requires analyzing the cost of implementing the change and the impact of the change on the related modules or components. It is the responsibility of an expert group known as the change control board (CCB) to determine whether the change must be implemented or not. On the basis of the outcome of the analysis, the CCB approves or disapproves the change. If the change is approved, then the developers implement it. Finally, the change and the portions affected by the change are tested, and a new version of the software is released. The process of continuously changing the software may decrease the quality of the software.
The main concerns during the evolution phase are maintaining the flexibility and quality of the software. Predicting defects, changes, effort, and costs in the evolution phase is thus of great practical importance (Figure 1.9).
FIGURE 1.8
Software evolution cycle: request change, analyze change, approve/deny, implement change, and test change.
FIGURE 1.9
Prediction during the evolution phase: defect prediction (What are the defect-prone portions in the maintenance phase?) and change prediction.
1. Functionality
2. Usability
3. Testability
4. Reliability
5. Maintainability
6. Adaptability
The attribute domains can be further divided into attributes that are related to software
quality and are given in Figure 1.10. The details of software quality attributes are given in
Table 1.3.
FIGURE 1.10
Software quality attributes: (1) functionality (completeness, correctness, security, traceability, efficiency); (2) usability (learnability, operability, user-friendliness, installability, satisfaction); (3) testability (verifiability, validatable); (4) reliability (robustness, recoverability); (5) maintainability (agility, modifiability, readability, flexibility); and (6) adaptability (portability, interoperability).
TABLE 1.3
Software Quality Attributes

Functionality: The degree to which the purpose of the software is satisfied
1 Completeness: The degree to which the software is complete
2 Correctness: The degree to which the software is correct
3 Security: The degree to which the software is able to prevent unauthorized access to the program data
4 Traceability: The degree to which a requirement is traceable to software design and source code
5 Efficiency: The degree to which the software requires resources to perform a software function

Testability: The ease with which the software can be tested to demonstrate the faults
1 Verifiability: The degree to which the software deliverable meets the specified standards, procedures, and processes
2 Validatable: The ease with which the software can be executed to demonstrate whether the established testing criteria are met

Maintainability: The ease with which faults can be located and fixed, the quality of the software can be improved, or the software can be modified in the maintenance phase
1 Agility: The degree to which the software is quick to change or modify
2 Modifiability: The degree to which the software is easy to implement, modify, and test in the maintenance phase
3 Readability: The degree to which the software documents and programs are easy to understand so that faults can be easily located and fixed in the maintenance phase
4 Flexibility: The ease with which changes can be made in the software in the maintenance phase

Adaptability: The degree to which the software is adaptable to different technologies and platforms
1 Portability: The ease with which the software can be transferred from one platform to another
2 Interoperability: The degree to which the system is compatible with other systems
For example, a measure is the number of failures experienced during testing. Measurement is the way of recording such failures. A software metric may be the average number of failures experienced per hour during testing.

Fenton and Pfleeger (1996) have defined measurement as:

It is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules.
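The distinction between a measure, measurement, and a metric can be made concrete with a small sketch; the failure timestamps are invented for illustration:

# Sketch: failures recorded during testing (the measures) and the derived
# metric, average failures per hour. All values are invented.
failure_times_hours = [0.5, 1.2, 1.9, 3.4, 4.0, 4.6, 5.8, 7.1]

testing_duration_hours = 8.0
failures_per_hour = len(failure_times_hours) / testing_duration_hours  # the metric

print(f"failures observed: {len(failure_times_hours)}")
print(f"failures per hour: {failures_per_hour:.2f}")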
FIGURE 1.11
Example of classification process: a class with coupling <8 is classified as low, and a class with coupling >8 is classified as high.
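The single decision node of Figure 1.11 can be written directly as code; the threshold of 8 comes from the figure, while the sample coupling values are invented:

# Sketch: the classification rule of Figure 1.11.
def classify_coupling(coupling: int) -> str:
    """Label a class as having low or high coupling using a fixed threshold."""
    return "High" if coupling > 8 else "Low"

for coupling in [3, 8, 12]:
    print(f"coupling = {coupling:2d} -> {classify_coupling(coupling)}")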
FIGURE 1.12
Steps in classification process (the trained model is checked on validation data and then predicts outcomes for new data).
Qualitative data can be categorized by identifying patterns in the textual information. This can be achieved by reading and analyzing texts and deriving logical categories, which helps organize the data in the form of categories. For example, answers to open-ended interview questions can be presented in the form of categories. Text mining is another way to process qualitative data into a useful form that can be used for further analysis.
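As a small illustration of this idea, the sketch below counts term frequencies in invented interview answers, a first step toward deriving categories from text:

# Sketch: counting term frequencies in qualitative (textual) data so that
# recurring themes can be grouped into categories. The answers are invented.
from collections import Counter
import re

answers = [
    "Testing the module took longer than planned",
    "The design of the module was unclear",
    "Unclear requirements made testing difficult",
]

tokens = []
for answer in answers:
    tokens.extend(re.findall(r"[a-z]+", answer.lower()))

print(Counter(tokens).most_common(5))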
FIGURE 1.13
Independent and dependent variables: the causes (independent variables) feed into the experiment process, which produces the effect (dependent variable).
Independent variables are input variables that are manipulated or controlled by the researcher to measure the response of the dependent variable.
The dependent variable (or response variable) is the output produced by analyzing the effect of the independent variables. The dependent variables are presumed to be influenced by the independent variables: the independent variables are the causes, and the dependent variable is the effect. Usually, there is only one dependent variable in the research. Figure 1.13 depicts that the independent variables are used to predict the outcome variable following a systematic experimental process.

Examples of independent variables are lines of source code, number of methods, and number of attributes. Dependent variables are usually measures of software quality attributes. Examples of dependent variables are effort, cost, faults, and productivity. Consider the following research question:

Do software metrics have an effect on the change proneness of a module?

Here, software metrics are the independent variables and change proneness is the dependent variable.

Apart from the independent variables, unknown variables or confounding variables (extraneous variables) may affect the outcome (dependent) variable. Randomization can nullify the effect of confounding variables. In randomization, many replications of the experiment are executed and the results are averaged over multiple runs, which may cancel out the effect of extraneous variables in the long run.
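The sketch below illustrates such averaging over randomized replications using synthetic data; the number of replications and the estimator are illustrative choices:

# Sketch: averaging accuracy over randomized replications so that the influence
# of extraneous factors (here, the random train/test split) tends to cancel out.
from statistics import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

accuracies = []
for seed in range(30):  # each replication randomizes the split differently
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=seed)
    model = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
    accuracies.append(model.score(X_te, y_te))

print(f"mean accuracy over {len(accuracies)} replications: {mean(accuracies):.3f}")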
Open source software is usually freely available software, developed by many developers from different places in a collaborative manner. Examples include Google Chrome, the Android operating system, and the Linux operating system.
FIGURE 1.14
(a) Within-company versus (b) cross-company prediction.
The GQM (goal, question, metric) method comprises three levels:

1. Goal
2. Question
3. Metric
In the GQM method, measurement is goal oriented. Thus, first the goals that can be measured during software development need to be defined. The GQM method defines goals that are transformed into questions and metrics. These questions are answered later to determine whether the goals have been satisfied. Hence, the GQM method follows a top-down approach for dividing goals into questions and mapping questions to metrics, and follows a bottom-up approach by interpreting the measurement to verify whether the goals have been satisfied. Figure 1.15 presents the hierarchical view of the GQM framework. The figure shows that the same metric can be used to answer multiple questions.
For example, suppose a developer wants to improve the defect-correction rate during the maintenance phase. The goal, question, and associated metrics are derived from this aim. The goals are defined in terms of purpose, object, and viewpoint (Basili et al. 1994). In the above example, the purpose is "to improve," the object is "defects," and the viewpoint is "project manager."
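The hierarchy for this example can be captured in a plain data structure; the question and metric wordings below are assumptions added for illustration, not taken from the GQM literature:

# Sketch: the GQM hierarchy for the defect-correction example.
gqm = {
    "goal": {
        "purpose": "to improve",
        "object": "defects",
        "viewpoint": "project manager",
    },
    "questions": {
        "Q1: What is the current defect-correction rate?":
            ["defects corrected per week", "open defect count"],
        "Q2: Is the defect-correction rate improving?":
            ["defects corrected per week"],  # one metric can serve two questions
    },
}

for question, metrics in gqm["questions"].items():
    print(question, "->", metrics)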
TABLE 1.4
Difference between Parametric and Nonparametric Tests

FIGURE 1.15
Framework of GQM.
FIGURE 1.16
Phases of GQM: planning, definition, data collection, and interpretation.
Figure 1.16 presents the phases of the GQM method. The GQM method has the following
four phases:
• Planning: In the first phase, the project plan is produced by recognizing the basic
requirements.
• Definition: In this phase, goals, questions, and relevant metrics are defined.
• Data collection: In this phase, actual measurement data is collected.
• Interpretation: In the final phase, the answers to the questions are provided and
the goal’s attainment is verified.
Exercises
1.1 What is empirical software engineering? What is the purpose of empirical soft-
ware engineering?
1.2 What is the importance of empirical studies in software engineering?
1.3 Describe the characteristics of empirical studies.
1.4 What are the five types of empirical studies?
1.5 What is the importance of replicated and repeated studies in empirical software
engineering?
1.6 Explain the difference between an experiment and a case study.
1.7 Differentiate between quantitative and qualitative research.
1.8 What are the steps involved in an experiment? What are characteristics of a good
experiment?
1.9 What are ethics involved in a research? Give examples of unethical research.
1.10 Discuss the following terms:
a. Hypothesis testing
b. Ethics
c. Empirical research
d. Software quality
1.11 What are systematic reviews? Explain the steps in systematic review.
1.12 What are the key issues involved in empirical research?
1.13 Compare and contrast classification and prediction process.
1.14 What is GQM method? Explain the phases of GQM method.
1.15 List the importance of empirical research from the perspective of software indus-
tries, academicians, and researchers.
1.16 Differentiate between the following:
a. Parametric and nonparametric tests
b. Independent, dependent and confounding variables
c. Quantitative and qualitative data
d. Within-company and cross-company analysis
e. Proprietary and open source software
Further Readings
Kitchenham et al. provide effective guidelines for empirical research in software engineering in:
Juristo and Moreno explain many of the key concepts of empirical software engineering in:
N. Mays, and C. Pope, “Qualitative research: Rigour and qualitative research,” British
Medical Journal, vol. 311, no. 6997, pp. 109–112, 1995.
A. Strauss, and J. Corbin, Basics of Qualitative Research: Techniques and Procedures for
Developing Grounded Theory, Sage Publications, Thousand Oaks, CA, 1998.
The detail about ethical issues for empirical software engineering is presented in:
Authors present detailed practical guidelines on the preparation, conduct, design, and
reporting of case studies of software engineering in:
The following research paper provides detailed explanations about software quality
attributes:
A. J. Albrecht, and J. E. Gaffney, “Software function, source lines of code, and devel-
opment effort prediction: A software science validation,” IEEE Transactions on
Software Engineering, vol. 6, pp. 639–648, 1983.
The following research papers provide a brief knowledge of quantitative and qualitative
data in software engineering:
A. Bryman, and B. Burgess, Analyzing Qualitative Data, Routledge, New York, 2002.
Basili explains the major role of controlled experiments in the software engineering field in:
The concepts of proprietary, open source, and university software are well explained in the following research paper:
The book by Solingen and Berghout is a classic and very useful reference, and it gives a detailed discussion of the GQM method:
R. Prieto-Díaz, “Status report: Software reusability,” IEEE Software, vol. 10, pp. 61–66,
1993.
2
Systematic Literature Reviews
Review of existing literature is an essential step before beginning any new research. Systematic reviews (SRs) synthesize the existing research work in such a manner that it can be analyzed, assessed, and interpreted to draw meaningful conclusions. The aim of conducting an SR is to gather and interpret empirical evidence from the available research with respect to the formed research questions. The benefit of conducting an SR is to summarize the existing trends in the available research, identify gaps in the current research, and provide future guidelines for conducting new research. SRs also provide empirical evidence in support or opposition of a given hypothesis. Hence, the author of an SR must make every effort to provide evidence that supports or refutes a given research hypothesis. In this chapter, guidelines for conducting SRs are given for software engineering researchers and practitioners. The steps to be followed while conducting an SR, including the planning, conducting, and reporting phases, are described. Existing high-quality reviews in the areas of software engineering are also presented in this chapter.
TABLE 2.1
Comparison of Systematic Reviews and Literature Survey

1. Systematic review: The goal is to identify best practices, strengths, and weaknesses of specific techniques, procedures, tools, or methods by combining information from various studies. Literature survey: The goal is to classify or categorize existing literature.
2. Systematic review: Focused on research questions that assess the techniques under investigation. Literature survey: Provides an introduction to each paper in the literature based on the identified area.
3. Systematic review: Provides a detailed review of existing literature. Literature survey: Provides a brief overview of existing literature.
4. Systematic review: Extracts technical and useful metadata from the contents. Literature survey: Extracts general research trends from the studies.
5. Systematic review: Search process is more stringent, such that it involves searching references or contacting researchers in the field. Literature survey: Search process is less stringent.
6. Systematic review: Strong assessment of quality is necessary. Literature survey: Strong assessment of quality is not necessary.
7. Systematic review: Results are based on high-quality evidence with the aim to answer research questions. Literature survey: Results only provide a summary of existing literature.
8. Systematic review: Often uses statistics to analyze the results. Literature survey: Does not use statistics to analyze the results.
SRs summarize high-quality research on a specific area. They provide the best available evidence on a particular technique or technology and produce conclusions that can be used by software practitioners and researchers to select the best available techniques or methodologies. The studies included in the review are known as primary studies, and the SRs themselves are known as secondary studies. Table 2.1 presents a summary of the differences between an SR and a literature survey.
1. It selects high-quality research papers and studies that are relevant, important,
and essential, which are summarized in the form of one review paper.
2. It performs a systematic search by forming a search strategy to identify primary
studies from the digital libraries. The search strategy is documented so that the
readers can analyze the completeness of the process and repeat the same.
3. It forms a valid review protocol and research questions that address the issues to
be answered in the SR.
4. It clearly summarizes the characteristics of each selected study, including aims,
techniques, and methods used in the studies.
5. It consists of a justified quality assessment criteria for inclusion and exclusion of
the studies in the SR so that the effectiveness of each study can be determined.
6. It uses a number of presentation tools for reporting the findings and results of the
selected studies to be included in the SR.
7. It identifies gaps in the current findings and highlights future directions.
The procedure followed in performing the SR is given by Kitchenham et al. (2007). The
process is depicted in Figure 2.1. In the first step, the need for the SR is examined and in the
second step the research questions are formed that address the issues to be answered in
the review. Thereafter, the review protocol is developed that includes the following steps:
search strategy design, study selection criteria, study quality assessment criteria, data
extraction process, and data synthesis process.
The formation of the review protocol consists of a series of stages. In the first step, the search strategy is formed, including identification of search terms and selection of sources to be searched to identify the primary studies. The next step involves determination of relevant studies by setting the inclusion and exclusion criteria for selecting review studies. Thereafter, quality assessment criteria are identified by forming the quality assessment questionnaire to analyze and assess the studies. The second to last stage involves the design of data extraction forms to collect the required information to answer the research questions, and in the final stage, methods for data synthesis are devised. Development of the review protocol is an important step in an SR, as it reduces the possibility and risk of research bias. Finally, in the planning stage, the review protocol is evaluated.
The steps planned in the first stage are actually performed in the conducting stage, which includes the actual collection of relevant studies by applying first the search strategy and then the inclusion and exclusion criteria. Each selected study is ranked according to the quality assessment criteria, and the data extraction and data synthesis steps are applied only to the selected high-quality primary studies. In the final phase, the results of the SR are reported. This step further involves examining, presenting, and verifying the results.
FIGURE 2.1
Systematic review process: planning the review (identifying research questions and developing the review protocol), conducting the review (study quality assessment, data extraction, and data synthesis), and reporting the review.
The above stages defined in the SR are iterative and not sequential. For example, the criteria
for inclusion and exclusion of primary studies must be developed prior to collecting the
studies. The criteria may be refined in the later stages.
Before undertaking a new SR, questions such as the following should be answered:

1. How many primary studies are available in the software engineering context?
2. What are the strengths and weaknesses of the existing SR (if any) in the software engineering context?
3. What is the practical relevance of the proposed SR?
4. How will the proposed SR guide practitioners and researchers?
5. How can the quality of the proposed SR be evaluated?
A checklist is the most common mechanism used for reviewing the quality of an existing SR in the same area. It may also identify the flaws in the existing SR. A checklist may consist of a list of questions to determine the effectiveness of the existing SR. Table 2.2 shows an example of a checklist to assess the quality of an SR. The checklist consists of questions pertaining to the procedures and processes followed during an SR. The existing studies may be rated on a scale of 1–12 so that the quality of each study can be determined.
TABLE 2.2
Checklist for Evaluating an Existing SR
We may establish a threshold value to identify the quality level of the study. If the rating of an existing SR falls below the established threshold value, the quality of the study may be considered unacceptable, and a new SR on the same topic may be conducted.
Thus, if an SR in the same domain with similar aims is located but it was conducted a
long time ago, then a new SR adding current studies may be justified. However, if the exist-
ing SR is still relevant and is of high quality, then a new SR may not be required.
• Which areas have already been explored in the existing reviews (if any)?
• Which areas are relevant and need to be explored/answered during the
proposed SR?
• Are the questions important to the researchers and software practitioners?
• Will the questions assess any similarities in the trends or identify any deviation
from the existing trends?
TABLE 2.3
Research Questions for SRML Case Study (Malhotra 2015)

RQ1: Which ML techniques have been used for SFP? (Motivation: Identify the ML techniques commonly being used in SFP.)
RQ2: What kind of empirical validation for predicting faults is found using the ML techniques found in RQ1? (Motivation: Assess the empirical evidence obtained.)
RQ2.1: Which techniques are used for subselecting metrics for SFP? (Motivation: Identify techniques reported to be appropriate for selecting relevant metrics.)
RQ2.2: Which metrics are found useful for SFP? (Motivation: Identify metrics reported to be appropriate for SFP.)
RQ2.3: Which metrics are found not useful for SFP? (Motivation: Identify metrics reported to be inappropriate for SFP.)
RQ2.4: Which data sets are used for SFP? (Motivation: Identify data sets reported to be appropriate for SFP.)
RQ2.5: Which performance measures are used for SFP? (Motivation: Identify the measures that can be used for assessing the performance of the ML techniques for SFP.)
RQ3: What is the overall performance of the ML techniques for SFP? (Motivation: Investigate the performance of the ML techniques for SFP.)
RQ4: Is the performance of the ML techniques better than that of statistical techniques? (Motivation: Compare the performance of the ML techniques with statistical techniques for SFP.)
RQ5: Are there any ML techniques that significantly outperform other ML techniques? (Motivation: Assess the performance of the ML techniques against other ML techniques for SFP.)
RQ6: What are the strengths and weaknesses of the ML techniques? (Motivation: Determine the conditions that favor the use of ML techniques.)
The research questions in Table 2.3 address various issues related to an SR on the use of the ML techniques for SFP, along with the motivation for each question in the SRML case study. While forming the research questions, the interest of the researchers must be kept in mind. For example, for Masters and PhD theses, it is necessary to identify the research relevant to the proposed work so that the current body of knowledge can be formed and the proposed work can be positioned within it.
FIGURE 2.2
Steps involved in a review protocol: development of the search strategy (search terms and digital libraries), construction of quality assessment checklists, development of data extraction forms, and identification of study synthesis techniques.
In this step, the planning of the search strategy, study selection criteria, quality assessment criteria, data extraction, and data synthesis is carried out. The purpose of the review must state the options researchers have when deciding which technique or method to adopt in practice. The review protocol is established by frequently holding meetings and group discussions in a group comprising, preferably, senior members having experience in the area. Hence, this step is iterative and is defined and refined in various iterations. Figure 2.2 shows the steps involved in the development of the review protocol.
The first step involves formation of search terms, selection of the digital libraries that must be searched, and refinement of search terms. This step allows identification of the primary studies that will address the research questions. The initial search terms may be identified from the research questions and from terms used in relevant papers, and then refined to form the best-suited search string.
Thereafter, the sophisticated search terms are formed by incorporating alternative terms and synonyms using the Boolean operator "OR" and combining main search terms using "AND." The following general search terms were used for identification of primary studies in the SRML case study:
Software AND (fault OR defect OR error) AND (proneness OR prone OR prediction OR
probability) AND (regression OR ML OR soft computing OR data mining OR classifica-
tion OR Bayesian network OR neural network [NN] OR decision tree OR support vector
machine OR genetic algorithms OR random forest [RF]).
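A sketch of applying this search string mechanically to candidate titles is given below; the matcher mirrors the AND/OR structure of the terms (only a subset of the alternatives is encoded), and the titles are invented:

# Sketch: checking whether a candidate title satisfies the boolean search string.
def matches_search(title: str) -> bool:
    t = title.lower()
    has_software  = "software" in t
    has_fault     = any(w in t for w in ("fault", "defect", "error"))
    has_predict   = any(w in t for w in ("proneness", "prone", "prediction",
                                         "probability"))
    has_technique = any(w in t for w in ("regression", "machine learning",
                                         "data mining", "neural network",
                                         "decision tree", "support vector",
                                         "random forest"))
    return has_software and has_fault and has_predict and has_technique

titles = [
    "Software fault prediction using neural networks",
    "A survey of agile practices",
]
for title in titles:
    print(matches_search(title), "-", title)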
After identifying the search terms, the relevant and important digital portals are to be selected. The portals publishing the journal articles are the right place to search for the relevant studies. The bibliographic databases are also common places to search, as they provide the title, abstract, and publication source of each study. The selection of digital libraries/portals is essential, as the number of studies found depends on it. Generally, several libraries must be searched to find all the relevant studies that cover the research questions. The selection must not be restricted by the availability of digital portals at the home universities. For example, the following seven electronic digital libraries may be searched for the identification of primary studies:
1. IEEE Xplore
2. ScienceDirect
3. ACM Digital Library
4. Wiley Online Library
5. Google Scholar
6. SpringerLink
7. Web of Science
The reference sections of the relevant studies must also be examined/scanned to identify other relevant studies. External experts in the area may also be contacted in this regard.
The next step is to establish the inclusion and exclusion criteria for the SR. The inclusion and exclusion criteria allow the researchers to decide whether to include or exclude a study in the SR, and they are based on the research questions. For example, studies that use data collected from university software developed by student programmers, or experiments conducted by students, may be excluded from the SR. Similarly, studies that do not perform any empirical analysis of the techniques and technologies being examined in the SR may be excluded. Hence, the inclusion criteria may be specific to the type of tool, technique, or technology being explored in the SR. The data on which the study was conducted or the type of empirical data being used (academia or industry; small, medium, or large sized) may also affect the inclusion criteria. The following inclusion and exclusion criteria were formed in the SRML review:
Inclusion criteria:
• Empirical studies using the ML techniques for SFP.
• Empirical studies combining the ML and non-ML techniques.
• Empirical studies comparing the ML and statistical techniques.
Exclusion criteria:
• Studies without empirical analysis or results of use of the ML techniques for SFP.
• Studies based on fault count as dependent variable.
• Studies using the ML techniques in context other than SFP.
• Similar studies, that is, studies by the same author published in a conference as well as in an extended journal version. However, if the results were different in the two studies, both were retained.
• Studies that only use statistical techniques for SFP.
• Review studies.
The above inclusion and exclusion criteria were applied to each relevant study by two researchers independently, who reached a common decision after detailed discussion. In case of any doubt, the full text of a study was reviewed and a final decision regarding the inclusion/exclusion of the study was made. Hence, more than one reviewer should check the relevance of a study based on the inclusion and exclusion criteria before a final decision for inclusion or exclusion of a study is made.
The third step in the development of a review protocol is to form the quality questionnaire for assessing the relevance and strength of the primary studies. The quality assessment is necessary to investigate and analyze the quality and determine the strength of the studies to be included in the final synthesis. It is necessary to limit the bias in the SR and provide guidelines for interpretation of the results.

The assessment criteria must be based on the relevance of a particular study to the research questions and the quality of the processes and methods used in the study. In addition, quality assessment questions must focus on experimental design, applicability of results, and interpretation of results. Some studies may meet the inclusion criteria but may not be relevant with respect to the research design or the way in which data is collected, or may not justify the use of the various techniques analyzed. For example, a study on fault proneness may not perform a comparative analysis of ML and non-ML techniques.
The quality questionnaire must be constructed by weighting the studies with numerical values. Table 2.4 presents the quality assessment questions for any SR. The studies are rated on each question and given a score of 1 (yes) if it is satisfactory, 0.5 (partly) if it is moderately satisfactory, and 0 (no) if it is unsatisfactory. The final score is obtained by adding the values assigned to each question. A study can have a maximum score of 10 and a minimum score of 0 when ranked on the basis of the quality assessment questions formed in Table 2.4. The studies with low quality scores may be excluded from the SR or the final list of primary studies.
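The scoring scheme described above is straightforward to mechanize. The following is a minimal sketch in Python (the answers shown are hypothetical; the score-to-level mapping follows Table 2.6):

# Each answer is scored 1 (yes), 0.5 (partly), or 0 (no).
SCORES = {"yes": 1.0, "partly": 0.5, "no": 0.0}

def quality_level(score):
    # Mapping taken from Table 2.6.
    if score >= 9:
        return "Very high"
    if score >= 6:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

# Hypothetical answers of one study to ten quality assessment questions.
answers = ["yes", "yes", "partly", "yes", "no", "yes", "yes", "partly", "yes", "yes"]
score = sum(SCORES[a] for a in answers)
print(score, quality_level(score))  # 8.0 High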
In addition to the questions given in Table 2.4, the following four additional questions
were formed in SRML review (see Table 2.5). Hence, a researcher may create specific qual-
ity assessment questions with respect to the SR.
The quality score along with the level assigned to the study in the example case study
SRML taken in this chapter is given in Table 2.6. The reviewers must decide a threshold
value for excluding a study from the SR. For example, studies with quality score >9 were
considered for further data extraction and synthesis in SRML review.
TABLE 2.4
Quality Assessment Questions
Q# Quality Questions Yes Partly No
TABLE 2.5
Additional Quality Assessment Questions for SRML Review
Q# Quality Questions Yes Partly No
TABLE 2.6
Quality Scores for Quality Assessment Questions Given in Table 2.4
Quality Score      Level
9 ≤ score ≤ 10     Very high
6 ≤ score ≤ 8      High
4 ≤ score ≤ 5      Medium
0 ≤ score ≤ 3      Low
The next step is to construct data extraction forms that will help to summarize the information extracted from the primary studies in view of the research questions. The data extraction form also records which specific research questions are answered by each primary study. Hence, one of the aims of data extraction is to determine which primary study addresses which research question. In many cases, the
data extraction forms will extract the numeric data from the primary studies that will
help to analyze the results obtained from these primary studies. The first part of the data
extraction card summarizes the author name, title of the primary study, and publishing
details, and the second part of the data extraction form contains answers to the research
questions extracted from a given primary study. For example, the data set details, indepen-
dent variables (metrics), and the ML techniques are summarized for the SRML case study
(see Figure 2.3).
A team of researchers must collect the information from the primary studies. However,
because of the time and resource constraints at least two researchers must evaluate the
primary studies to obtain useful information to be included in the data extraction card.
The results from these two researchers must then be matched and if there is any disagree-
ment between them, then other researchers may be consulted to resolve these disagree-
ments. The researchers must clearly understand the research questions and the review
protocol before collecting the information from the primary studies. In case of Masters
and PhD students, their supervisors may collect information from the primary studies and
then match their results with those obtained by the students.
The last step involves identification of data synthesis tools and techniques to summarize
and interpret the information obtained from the primary studies. The basic objective while
synthesizing data is to accumulate and combine facts and figures obtained from the selected
primary studies to formulate a response to the research questions. Tables and charts may be
used to highlight the similarities and differences between the primary studies.
Section I
Reviewer name
Author name
Title of publication
Year of publication
Journal/conference name
Type of study
Section II
Data set used
Independent variables
Feature subselection methods
ML techniques used
Performance measures used
Values of accuracy measures
Strengths of ML techniques
Weaknesses of ML techniques
FIGURE 2.3
Data extraction form.
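The card shown in Figure 2.3 maps naturally onto a simple record type, which makes it easy to collect and compare extractions from many studies. A minimal sketch in Python (the field names follow Figure 2.3; the example values are hypothetical):

from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionCard:
    # Section I: publication details
    reviewer_name: str
    author_name: str
    title: str
    year: int
    venue: str
    study_type: str
    # Section II: answers extracted for the research questions
    data_sets: List[str] = field(default_factory=list)
    independent_variables: List[str] = field(default_factory=list)
    feature_subselection: List[str] = field(default_factory=list)
    ml_techniques: List[str] = field(default_factory=list)
    performance_measures: List[str] = field(default_factory=list)
    strengths: List[str] = field(default_factory=list)
    weaknesses: List[str] = field(default_factory=list)

card = ExtractionCard(
    reviewer_name="R. Reviewer", author_name="A. Author",
    title="An ML-based fault prediction study", year=2013,
    venue="Some Journal", study_type="Research paper",
    ml_techniques=["Naive Bayes", "Random Forest"],
)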
A number of steps need to be followed before deciding the tools and methods to be used for depicting the results of the research questions.
The effects of the results (performance measures) obtained from the primary studies may
be analyzed using statistical measures such as mean, median, and standard deviation (SD).
In addition, the outliers present in the results may be identified and removed using
various methods such as box plots. We must also use various tools such as bar charts,
scatter plots, forest plots, funnel plots, and line charts to visually present the results of
the primary studies in the SR. The aggregation of the results from various studies will
allow researchers to provide strong and well-acceptable conclusions and may give strong
support in proving a point. The data obtained from these studies may be quantitative
(expressed in the form of numerical measures) or qualitative (expressed in the form of
descriptive information/texts). For example, the values of performance measures are
quantitative in nature, and the strengths and weaknesses of the ML techniques are quali-
tative in nature.
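As a small illustration of this kind of synthesis, the following Python sketch computes the mean, median, and SD of a set of hypothetical accuracy values and flags outliers with the usual 1.5 × interquartile range (IQR) box-plot rule:

import statistics

# Hypothetical accuracy values extracted from primary studies.
values = [0.71, 0.74, 0.78, 0.80, 0.81, 0.83, 0.97]

mean = statistics.mean(values)
median = statistics.median(values)
sd = statistics.stdev(values)

# Box-plot rule: points beyond 1.5 * IQR from the quartiles are outliers.
q1, _, q3 = statistics.quantiles(values, n=4)
iqr = q3 - q1
outliers = [v for v in values if v < q1 - 1.5 * iqr or v > q3 + 1.5 * iqr]
print(mean, median, sd, outliers)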
A detailed description of the methods and techniques that are identified to represent
answers to the established research questions in the SRML case study for SFP using the
ML techniques are stated as follows:
• To summarize the number of ML techniques used in the primary studies, the SRML case study will use a visualization technique, that is, a line graph, to depict the number of studies pertaining to the ML techniques in each year, and will present a classification taxonomy of various ML techniques with their major categories and subcategories.
The case study will also present a bar chart that shows the total number of studies conducted for each main category of ML technique and pie charts that depict the distribution of the selected studies into subcategories for each ML category.
• The case study will use the counting method to find the feature subselection techniques, the useful and not useful metrics, and the commonly used data sets for SFP. These subparts will be further aided by graphs and pie charts that show the distribution of the selected primary studies by metrics usage and data set usage. Performance measures will be summarized with the help of a table and a graph.
• The comparison of the results of the primary studies will be shown with the help of a table that compares six performance measures for each ML technique. Box plots will be constructed to identify extreme values corresponding to each performance measure.
• A bar chart will be created to depict and analyze the comparison between the
performance of the statistical and ML techniques.
• The strengths and weaknesses of different ML techniques for SFP will be sum-
marized in tabular format.
Finally, the review protocol document may consist of the following sections:
1. Development of appropriate search strings that are derived from research questions
2. Adequacy of inclusion and exclusion criteria
3. Completeness of quality assessment questionnaire
4. Design of data extraction forms that address various research questions
5. Appropriateness of data analysis procedures
Masters and PhD students must present the review protocol to their supervisors for comments and analysis.
TABLE 2.7
Contingency Table for Binary Variable
           Outcome Present   Outcome Absent
Group 1    a11               a12
Group 2    a21               a22
i. Risk ratio (RR): It is the ratio of the risks of the outcome in the two groups, where Risk 1 = a11/(a11 + a12) and Risk 2 = a21/(a21 + a22). The RR is defined as:

RR = Risk 1/Risk 2

ii. Odds ratio (OR): It measures the strength of the presence or absence of an event. It is the ratio of the odds of an outcome in the two groups. It is desired that the value is greater than one. The OR is defined as:

Odds1 = a11/a12,  Odds2 = a21/a22

OR = Odds1/Odds2
TABLE 2.8
Example Contingency Table for Binary
Variable
Faulty Not Faulty Total
Coupled 31 4 35
Not coupled 4 99 103
Total 35 103 138
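The effect measures for a binary outcome follow directly from the cell counts of such a table. A minimal Python sketch, applied to the counts of Table 2.8 (the mapping of rows to groups follows Table 2.7):

def effects(a11, a12, a21, a22):
    # Row 1 = (outcome present, outcome absent) for group 1; row 2 for group 2.
    risk1 = a11 / (a11 + a12)
    risk2 = a21 / (a21 + a22)
    odds1, odds2 = a11 / a12, a21 / a22
    return {"RR": risk1 / risk2,
            "OR": odds1 / odds2,
            "risk difference": risk1 - risk2}

# Counts from Table 2.8: coupled versus not coupled classes.
print(effects(31, 4, 4, 99))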
b. For continuous variables (variables that do not have any specified range), the
following commonly used effects are of interest:
i. Mean difference: This measure is used when a study reports the same type of outcome and measures it on the same scale. It is also known as "difference of means." It represents the difference between the mean values of the two groups (Kitchenham 2007). Let X̄g1 and X̄g2 be the means of two groups (say g1 and g2); the mean difference is defined as:

Mean difference = X̄g1 − X̄g2
ii. Standardized mean difference: It is used when a study reports the same
type of outcome measure but measures it in different ways. For example,
the size of a program may be measured by function points or lines of code.
Standardized mean difference is defined as the ratio of difference between
the means in two groups to the SD of the pooled outcome. Let SDpooled be
the SD pooled across groups, SDg1 be the SD of one group, SDg2 be the SD of
another group, and ng1 and ng2 be the sizes of the two groups. The formula
for standardized mean difference is given below:
Standardized mean difference = (X̄g1 − X̄g2) / SDpooled

where

SDpooled = √(((ng1 − 1) × SDg1² + (ng2 − 1) × SDg2²) / (ng1 + ng2 − 2))

For example, let X̄g1 = 110, X̄g2 = 100, SDg1 = 5, SDg2 = 4, and ng1 = ng2 = 20 for a sample population. Then,
SDpooled = √(((20 − 1) × 5² + (20 − 1) × 4²) / (20 + 20 − 2)) = 4.527

Standardized mean difference = (110 − 100)/4.527 = 2.209
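The same computation in code form, using the example values above (a sketch; the helper name is hypothetical):

import math

def standardized_mean_difference(m1, m2, sd1, sd2, n1, n2):
    # Pooled SD across the two groups, then the standardized difference.
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

print(standardized_mean_difference(110, 100, 5, 4, 20, 20))  # approximately 2.209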
Example 2.1
Consider the following data (refer Table 2.9) consisting of an attribute data class that can
have binary values true or false, where true represents that the class is data intensive
(number of declared variables is high) and false represents that the class is not data
intensive (number of declared variables is low). The outcome variable is change that
contains “yes” and “no,” where “yes” represents presence of change and “no” repre-
sents absence of change.
Calculate RR, OR, and risk difference.
Solution
The 2 × 2 contingency table is given in Table 2.10.
Risk 1 = 6/(6 + 2) = 0.75,  Risk 2 = 1/(1 + 6) = 0.142
TABLE 2.9
Sample Data
Data Class Change
False No
False No
True Yes
False Yes
True Yes
False No
False No
True Yes
True No
False No
False No
True Yes
True No
True Yes
True Yes
TABLE 2.10
Contingency Table for Example Data Given in Table 2.9
Data Class Change Present Change Not Present Total
True 6 2 8
False 1 6 7
Total 7 8 15
RR = 0.75/0.142 = 5.282

Odds1 = 6/2 = 3,  Odds2 = 1/6 = 0.17

OR = 3/0.17 = 17.647
Risk difference = 0.75 − 0.142 = 0.608
TABLE 2.11
Results of Five Studies
Study   AUC   Standard Error   95% CI
In the fixed effects model, the effect is assumed to be the same in all the studies, whereas in the random effects model the effects vary across the studies. When heterogeneity is found in the effects, the random effects model is preferred.
Table 2.11 presents the AUC computed from the ROC analysis, standard error, and upper
bound and lower bound of CI. Figure 2.4 depicts the forest plot for five studies using AUC
and standard error. Each line represents each study in the SR. The boxes (black-filled
squares) depict the weight assigned to each study. The weight is represented as the inverse
of the standard error. The smaller the standard error, the more weight is assigned to the study. Hence, in general, weights can be based on the standard error and sample size. The
CI is represented through length of lines. The diamond represents summary of combined
effects of all the studies, and the edges of the diamond represent the overall effect. The
results show the presence of heterogeneity; hence, the random effects model is used to analyze the overall accuracy in terms of AUC, which ranges from 0.69 to 0.85.
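The combined effect shown by the diamond is typically an inverse-variance weighted average of the study effects; one common convention is weight = 1/SE², so that studies with smaller standard errors count more. A minimal sketch of the fixed-effect form of this computation (the AUC and standard-error values are hypothetical, not those of Table 2.11):

# Inverse-variance (fixed-effect) combination of study effects.
aucs = [0.72, 0.78, 0.69, 0.85, 0.80]  # hypothetical AUCs of five studies
ses = [0.05, 0.04, 0.08, 0.03, 0.06]   # hypothetical standard errors

weights = [1 / se**2 for se in ses]    # smaller SE -> larger weight
combined = sum(w * e for w, e in zip(weights, aucs)) / sum(weights)
print(round(combined, 3))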
FIGURE 2.4
Forest plot of the five studies (Study 1 through Study 5).
FIGURE 2.5
Funnel plot (area under the curve plotted against standard error).
FIGURE 2.6
Search process (a basic search of digital portals such as IEEE Xplore, ScienceDirect, and the ACM Digital Library yields the initial studies, from which the relevant studies are selected).
The studies found by scanning the reference sections of the relevant papers must also be included. Multiple copies of the same publication must be removed, and the collected publications must be stored in a reference management system, such as Mendeley or JabRef. The list of journals
and conferences in which the primary studies have been published must be created.
Table 2.12 shows some popular journals and conferences in software engineering.
TABLE 2.12
Popular Journals and Conferences on Software Engineering
Publication Name Type
Table 2.13 shows an example of a data extraction form completed for the SRML case study using the research results given by Dejaeger et al. (2013). A similar form can be made for all the primary studies.
TABLE 2.13
Example Data Extraction Form
Section I
Reviewer name Ruchika Malhotra
Author name Karel Dejaeger, Thomas Verbraken, and Bart Baesens
Title of publication Toward Comprehensible Software Fault Prediction Models Using
Bayesian Network Classifiers
Year of publication 2013
Journal/conference name IEEE Transactions on Software Engineering
Type of the study Research paper
Section II
Data set used NASA data sets (JM1, MC1, KC1, PC1, PC2, PC3, PC4, PC5), Eclipse
Independent variables Static code measures (Halstead and McCabe)
Feature subselection method Markov Blanket
ML techniques used Naïve Bayes, Random Forest
Performance measures used AUC, H-measure
Values of accuracy measures (AUC) Data RF NB
JM1 0.74 0.74 0.69 0.69
KC1 0.82 0.8 0.8 0.81
MC1 0.92 0.92 0.81 0.79
PC1 0.84 0.81 0.77 0.85
PC2 0.73 0.66 0.81 0.79
PC3 0.82 0.78 0.77 0.78
PC4 0.93 0.89 0.79 0.8
PC5 0.97 0.97 0.95 0.95
Ecl 2.0a 0.82 0.82 0.8 0.79
Ecl 2.1a 0.75 0.73 0.74 0.74
Ecl 3.0a 0.77 0.77 0.76 0.86
Strengths (Naïve Bayes) It is easy to interpret and construct
Computationally efficient
Weaknesses (Naïve Bayes) Performance of model is dependent on attribute selection
technique used
Unable to discard irrelevant attributes
TABLE 2.14
Distribution of Studies Across ML Techniques
Based on Classification
Method # of Studies Percent
FIGURE 2.7
Primary study distribution according to the metrics used (procedural 47%, OO 31%, miscellaneous 15%, hybrid 7%).
the results section by using visual diagrams and tables. For example, Table 2.14 pres-
ents the number of studies covering various ML techniques. There are various ML tech-
niques available in the literature such as decision tree, NNs, support vector machine,
and Bayesian learning. The table shows that 31 studies analyzed decision tree techniques,
17 studies analyzed NN techniques, 18 studies examined support vector machines, and
so on. Similarly, the software metrics are divided into various categories in the SRML case
study—OO, procedural, hybrid, and miscellaneous. Figure 2.7 depicts the percentage of
studies examining each category of metrics, such as 31% of studies examine OO metrics.
The pie chart shows that the procedural metrics are most commonly used metrics with
47% of the total primary studies.
The results of the ML techniques that were assessed in at least 5 of the 64 selected primary studies are reported using the performance measures most frequently used in those studies. The results showed that accuracy, F-measure, precision, recall,
and AUC are the most frequently used performance measures in the selected primary
studies. Tables 2.15 and 2.16 present the minimum, maximum, mean, median, and SD
values for the selected performance measures. The results are shown for RF and NN
techniques.
TABLE 2.15
Results of RF Technique
RF Accuracy Precision Recall AUC Specificity
TABLE 2.16
Results of NN Technique
MLP Accuracy Precision Recall ROC Specificity
The results of an SR may be reported in any of the following forums:
• Journals or conferences
• Technical reports
• PhD theses
The detailed reporting of the results of the SR is important and critical so that academicians can judge the quality of the study. The detailed report consists of the review protocol, the inclusion/exclusion criteria, the list of primary studies, the list of rejected studies, the quality scores assigned to the studies, and the raw data pertaining to the primary studies (for example, the number of research questions addressed by each primary study). The review results are generally longer than a normal original study; however, journals may not permit publication of a long SR. Hence, the details may be kept in an appendix and stored in electronic form. The details may also be published online in the form of a technical report.
Table 2.17 presents the format and contents of the SR. The table provides the contents
along with its detailed description. The strengths and limitations of the SR must also be
discussed along with the explanation of its effect on the findings.
TABLE 2.17
Format of an SR Report
Section          Subsections   Description                                                 Comments
Title            –             The title should be short and informative.                 –
Author details   –             –                                                           –
Abstract         Background    What is the relevance and importance of the SR?            It allows the researchers to gain insight about the importance, addressed areas, and main findings of the study.
                 Method        What are the tools and techniques used to perform the SR?
                 Results       What are the major findings obtained by the SR?
                 Conclusions   What are the main implications of the results and guidelines for the future research?
(Continued)
TABLE 2.18
Systematic Reviews in Software Engineering
Authors   Year   Research Topics   Study Size   QA Used   Data Synthesis Methods   Conclusions
Kitchenham 2007 Cost estimation 10 Yes Tables • Strict quality control on
et al. models, data collection is not
cross-company sufficient to ensure that a
data, within- cross-company model
company data performs as well as a
within-company model.
• Studies where within-
company predictions were
better than cross-company
predictions employed
smaller within-company
data sets, smaller number
of projects in the cross-
company models, and
smaller databases.
Jørgensen and 2007 Cost estimation 304 No Tables • Increase the breadth of the
Shepperd search for relevant studies.
• Search manually for
relevant papers within a
carefully selected set of
journals.
• Conduct more studies on
the estimation methods
commonly used by the
software industry.
• Increase the awareness of
how properties of the data
sets impact the results
when evaluating
estimation methods.
Stol et al. 2009 Open source 63 No Pie charts, • Most research is done on
software bar OSS communities.
(OSS)–related charts, • Most studies investigate
empirical tables projects in the “system”
research and “internet” categories.
• Among research methods
used, case study, survey,
and quantitative analysis
are the most popular.
(Continued)
Exercises
2.1 What is an SR? Why do we really need to perform SR?
2.2 a. Discuss the advantages of SRs.
b. Differentiate between a survey and an SR.
2.3 Explain the characteristics and importance of SRs.
TABLE 2.12.1
Contingency Table from Study on Change Prediction
              Change Prone   Not Change Prone   Total
Coupled       14             12                 26
Not coupled   16             22                 38
Total         30             34                 64
2.4 a. What are the search strategies available for selecting primary studies? How will
you select the digital portals for searching primary studies?
b. What are the criteria for forming a search string?
2.5 What are the criteria for determining the number of researchers for conducting the same steps in an SR?
2.6 What is the purpose of quality assessment criteria? How will you construct the
quality assessment questions?
2.7 Why is the identification of the need for an SR considered the most important step in planning the review?
2.8 How will you decide on the tools and techniques to be used during the data
synthesis?
2.9 What is publication bias? Explain the purpose of funnel plots in detecting publication bias.
2.10 Explain the steps in SRs with the help of an example case study.
2.11 Define the following terms:
a. RR
b. OR
c. Risk difference
d. Standardized mean difference
e. Mean difference
2.12 Given the contingency table for all classes that are coupled or not coupled in a
software with respect to a dichotomous variable change proneness, calculate the
RR, OR, and risk difference (Table 2.12.1).
Further Readings
A classic study that describes empirical results in software engineering is given by:
A detailed survey that summarizes approaches that mine software repositories in the
context of software evolution is given in:
The guidelines for preparing the review protocols are given in:
For further understanding on forest and funnel plots, see the following publications:
M. Shepperd, D. Bowes, and T. Hall, “Researcher bias: The use of machine learning
in software defect prediction,” IEEE Transactions on Software Engineering, vol. 40,
no. 6, pp. 603–616, 2014.
3
Software Metrics
Software metrics are used to assess the quality of the product or process used to build it.
The metrics allow project managers to gain insight about the progress of software and
assess the quality of the various artifacts produced during software development. The
software analysts can check whether the requirements are verifiable or not. The metrics
allow management to obtain an estimate of cost and time for software development. The
metrics can also be used to measure customer satisfaction. The software testers can measure the faults corrected in the system, which helps in deciding when to stop testing.
Hence, the software metrics are required to capture various software attributes at differ-
ent phases of the software development. Object-oriented (OO) concepts such as coupling,
cohesion, inheritance, and polymorphism can be measured using software metrics. In this
chapter, we describe the measurement basics, software quality metrics, OO metrics, and
dynamic metrics. We also provide practical applications of metrics so that good-quality
systems can be developed.
3.1 Introduction
Software metrics can be used to adequately measure various elements of the software
development life cycle. The metrics can be used to provide feedback on a process or tech-
nique so that better or improved strategies can be developed for future projects. The qual-
ity of the software can be improved using the measurements collected by analyzing and
assessing the processes and techniques being used.
The metrics can be used to answer the following questions during software development:
The above questions can be addressed by gathering information using metrics. The infor-
mation will allow software developer, project manager, or management to assess, improve,
and control software processes and products during the software development life cycle.
The above definition provides all the relevant details. Software metrics should be collected
from the initial phases of software development to measure the cost, size, and effort of the
project. Software metrics can be used to ascertain and monitor the progress of the soft-
ware throughout the software development life cycle.
making effective decisions. The effective application of metrics can improve the quality
of the software and produce software within the budget and on time. The contributions of
software metrics in building good-quality system are provided in Section 3.9.1.
FIGURE 3.1
Steps in software measurement.
determines the numerical relations corresponding to the empirical relations. In the next step, real-world entities are mapped to numbers, and in the last step, we determine whether the numeric relations preserve the empirical relations.
1. Process: The process is defined as the way in which the product is developed.
2. Product: The final outcome of following a given process or a set of processes is
known as a product. The product includes documents, source codes, or artifacts
that are produced during the software development life cycle.
The process uses the product produced by an activity, and a process produces products that
can be used by another activity. For example, the software design document is an artifact
produced from the design phase, and it serves as an input to the implementation phase. The
effectiveness of the processes followed during software development is measured using the
process metrics. The metrics related to products are known as product metrics. The effi-
ciency of the products is measured using the product metrics.
The process metrics can be used to
For example, the effectiveness of the inspection activity can be measured by computing
costs and resources spent on it and the number of defects detected during the inspection
activity. By assessing whether the number of faults found outweighs the costs incurred
during the inspection activity or not, the project managers can decide about the effective-
ness of the inspection activity.
The product metrics are used to measure the effectiveness of deliverables produced dur-
ing the software development life cycle. For example, size, cost, and effort of the deliver-
ables can be measured. Similarly, documents produced during the software development
(SRS, test plans, user guides) can be assessed for readability, usability, understandability,
and maintainability.
The process and product metrics can further be classified as internal or external attributes. An internal attribute concerns the internal structure of the process or product. The common internal attributes are size, coupling, and complexity. The external attributes concern the behavioral aspects of the process or product. External attributes such as testability, understandability, maintainability, and reliability can be measured using the process or product metrics.
The difference between attributes and metrics is that metrics are used to measure a
given attribute. For example, size is an attribute that can be measured through lines of
source code (LOC) metric.
The internal attributes of a process or product can be measured without executing the
source code. For instance, the examples of internal attributes are number of paths, number
of branches, coupling, and cohesion. External attributes include quality attributes of the
system. They can be measured by executing the source code such as the number of failures,
response time, and navigation easiness of an item. Figure 3.2 presents the categories of software metrics with examples at the lowest level in the hierarchy.

FIGURE 3.2
Categories of software metrics (process metrics, e.g., failure rate found in reviews, number of issues, effectiveness of a method; product metrics, e.g., size, inheritance, coupling, and external quality attributes such as reliability, maintainability, usability).
TABLE 3.1
Example of Metrics Having Continuous Scale
Class# LOC Added LOC Deleted
A 34 5
B 42 10
C 17 9
heavier than B with weight 100 pounds. Simple counts are represented by absolute scale. Examples of simple counts are the number of faults, LOC, and the number of methods. For the absolute type of scale, descriptive statistics such as mean, median, and standard deviation can be applied to summarize the data.
Nonmetric type of data can be measured on nominal or ordinal scales. Nominal scale
divides metric into classes, categories, or levels without considering any order or rank
between these classes. For example, Change is either present or not present in a class.
Another example of nominal scale is programming languages that are used as labels for dif-
ferent categories. In ordinal scale, one category can be compared with the other category in
terms of “higher than,” “greater than,” or “lower than” relationship. For example, the overall
navigational capability of a web page can be ranked into various categories as shown below:
What is the overall navigational capability of a webpage?
1 = excellent, 2 = good, 3 = medium, 4 = bad, 5 = worst
Table 3.2 summarizes the differences between measurement scales with examples.
TABLE 3.2
Summary of Measurement Scales
Measurement Scale   Characteristics   Statistics   Operations   Transformation   Examples
Example 3.1
Consider the count of number of faults detected during inspection activity:
1. What is the measurement scale for this definition?
2. What is the measurement scale if number of faults is classified between 1 and
5, where 1 means very high, 2 means high, 3 means medium, 4 means low, and
5 means very low?
Solution:
1. The measurement scale of the number of faults is absolute as it is a simple count
of values.
2. Now, the measurement scale is ordinal since the variable has been converted
to be categorical (consists of classes), involving ranking or ordering among
categories.
A line of code is any line of program text that is not a comment or blank line, regardless
of the number of statements or fragments of statements on the line. This specifically
includes all lines containing program headers, declarations, and executable and non-
executable statements.
In OO software development, the size of software can be calculated in terms of classes and
the attributes and functions included in the classes. The details of OO size metrics can be
found in Section 3.5.6.
FIGURE 3.3
Operation to find greatest among three numbers.
Defect density = Number of defects / KLOC
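For illustration, with hypothetical counts: a component in which 15 defects are detected in 3,000 lines of code has a defect density of 15/3 = 5 defects per KLOC.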
The number of defects measure counts the defects detected during testing or by using any
verification technique.
Defect rate can be measured as the defects encountered over a period of time, for instance
per month. The defect rate may be useful in predicting the cost and resources that will be
utilized in the maintenance phase of software development. Defect density during testing
is another effective metric that can be used during formal testing. It measures the defect
density during the formal testing after completion of the source code and addition of the
source code to the software library. If the value of defect density metric during testing is
high, then the tester should ask the following questions:
• Does the software contain a large number of defects because it was poorly designed or coded?
• Is the testing process highly effective, detecting most of the defects present?
If the reason for the high number of defects is the first one, then the software should be thoroughly tested to detect the large number of defects. However, if the reason is the second one, it implies that the quality of the system is good because of the presence of fewer remaining defects.
For a given phase in the software development life cycle, latent defects are not known.
Thus, they are calculated as the estimation of the sum of defects removed during a phase
and defects detected later. The higher the value of the DRE, the more efficient and effec-
tive is the process followed in a particular phase. The ideal value of DRE is 1. The DRE of
a product can also be calculated by:
DRE = DB / (DB + DA)
where:
DB depicts the defects encountered before software delivery
DA depicts the defects encountered after software delivery
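For illustration, with hypothetical counts: if 80 defects are encountered before delivery and 20 after delivery, then DRE = 80/(80 + 20) = 0.8.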
Quantity and quality measures are expressed in percentages. For example, consider a
problem of proofreading an eight-page document. Quantity is defined as the percentage of
proofread words, and quality is defined as the percentage of the correctly proofread docu-
ment. Suppose quantity is 90% and quality is 70%; then task effectiveness is 0.9 × 0.7 = 63%.
The other measures of usability defined in MUSIC project are (Bevan 1995):
Temporal efficiency = Effectiveness / Task time

Productive period = ((Task time − Unproductive time) / Task time) × 100

Relative user efficiency = (User efficiency / Expert efficiency) × 100
There are various measures that can be used to measure the usability aspect of the system
and are defined below:
• How easily can the user learn the interface paths of a webpage?
• Are the interface titles understandable?
• Can the topics be found easily in the 'help'?
The charts, such as bar charts, pie charts, scatter plots, and line charts, can be used to
depict and assess the satisfaction level of the customer. The satisfaction level of the cus-
tomer must be continuously monitored over time.
NASA developed a test focus (TF) metric defined as the ratio of the amount of effort spent
in finding and removing “real” faults in the software to the total number of faults reported
in the software. The TF metric is given as (Stark et al. 1992):
On the basis of the above direct measures, additional testing-related metrics can be computed to derive more useful information from the basic metrics.
3.5 OO Metrics
Because of growing size and complexity of software systems in the market, OO analysis
and design principles are being used by organizations to produce better designed, high–
quality, and maintainable software. As the systems are being developed using OO soft-
ware engineering principles, the need for measuring various OO constructs is increasing.
Features of OO paradigm (programming languages, tools, methods, and processes) pro-
vide support for many quality attributes. The key concepts of OO paradigm are: classes,
objects, attributes, methods, modularity, encapsulation, inheritance, and polymorphism
(Malhotra 2009). An object is made up of three basic components: an identity, a state, and a
behavior (Booch 1994). The identity distinguishes two objects with same state and behav-
ior. The state of the object represents the different possible internal conditions that the
object may experience during its lifetime. The behavior of the object is the way the object
will respond to a set of received messages.
A class is a template consisting of a number of attributes and methods. Every object
is the instance of a class. The attributes in a class define the possible states in which an
instance of that class may be. The behavior of an object depends on the class methods and
the state of the object as methods may respond differently to input messages depending on
the current state. Attributes and methods are said to be encapsulated into a single entity.
Encapsulation and data hiding are key features of OO languages.
The main advantage of encapsulation is that the values of attributes remain private,
unless the methods are written to pass that information outside of the object. The internal
working of each object is decoupled from the other parts of the software thus achieving
modularity. Once a class has been written and tested, it can be distributed to other pro-
grammers for reuse in their own software. This is known as reusability. The objects can
be maintained separately leading to easier location and fixation of errors. This process is
called maintainability.
The most powerful technique associated with OO methods is the inheritance relationship. If a class B is derived from a class A, then class A is said to be a base (or super) class and class B is said to be a derived (or sub) class. A derived class inherits all the behavior of its base class and is allowed to add its own behavior.
Polymorphism (another useful OO concept) describes multiple possible states for a
single property. Polymorphism allows programs to be written based only on the abstract
interfaces of the objects, which will be manipulated. This means that future extension
in the form of new types of objects is easy, if the new objects conform to the original
interface.
TABLE 3.3
Chidamber and Kemerer Metric Suites
Metric Definition Construct Being Measured
CBO It counts the number of other classes to which a class is linked. Coupling
WMC It counts the number of methods weighted by complexity in a class. Size
RFC It counts the number of external and internal methods in a class. Coupling
LCOM Lack of cohesion in methods Cohesion
NOC It counts the number of immediate subclasses of a given class. Inheritance
DIT It counts the number of steps from the leaf to the root node. Inheritance
TABLE 3.4
Li and Henry Metric Suites
Metric Definition Construct Being Measured
TABLE 3.5
Lorenz and Kidd Metric Suites for measuring Inheritance
Metric Definition
NOP It counts the number of immediate parents of a given class.
NOD It counts the number of indirect and direct subclasses of a given class.
NMO It counts the number of methods overridden in a class.
NMI It counts the number of methods inherited in a class.
NMA It counts the number of new methods added in a class.
SIX Specialization index
TABLE 3.6
Briand et al. Metric Suites
IFCAIC These coupling metrics count the number of interactions between classes.
ACAIC These metrics distinguish the relationship between the classes (friendship, inheritance,
OCAIC none), different types of interactions, and the locus of impact of the interaction.
FCAEC The acronyms for the metrics indicates what interactions are counted:
DCAEC
• The first or first two characters indicate the type of coupling relationship between
OCAEC
classes:
IFCMIC A: ancestor, D: descendents, F: friend classes, IF: inverse friends (classes that declare
ACMIC a given class A as their friend), O: others, implies none of the other relationships.
DCMIC • The next two characters indicate the type of interaction:
FCMEC CA: There is a class–attribute interaction if class x has an attribute of type class y.
DCMEC CM: There is a class–method interaction if class x consist of a method that has
OCMEC parameter of type class y.
IFMMIC MM: There is a method–method interaction if class x calls method of another class y,
AMMIC or class x has a method of class y as a parameter.
OMMIC • The last two characters indicate the locus of impact:
FMMEC IC: Import coupling, counts the number of other classes called by class x.
DMMEC EC: Export coupling, count number of other classes using class y.
OMMEC
TABLE 3.7
Lee et al. Metric Suites
Metric Definition Construct Being Measured
Yap and Henderson-Sellers (1993) have proposed a suite of metrics to measure cohesion
and reuse in OO systems. Aggarwal et al. (2005) defined two reusability metrics namely
function template factor (FTF) and class template factor (CTF) that are used to mea-
sure reuse in OO systems. The relevant metrics summarized in tables are explained in
subsequent sections.
TABLE 3.8
Benlarbi and Melo Polymorphism Metrics
Metric Definition
SPA It measures static polymorphism in ancestors.
DPA It measures dynamic polymorphism in ancestors.
SP It is the sum of SPA and SPD metrics.
DP It is the sum of DPA and DPD metrics.
NIP It measures polymorphism in noninheritance relations.
OVO It measures overloading in stand-alone classes.
SPD It measures static polymorphism in descendants.
DPD It measures dynamic polymorphism in descendants.
Figure 3.4 depicts the values of the fan-in and fan-out metrics for classes A, B, C, D, E, and F of an example system. The value of fan-out should be as low as possible, because a high fan-out increases the complexity and reduces the maintainability of the software.
FIGURE 3.4
Fan-in and fan-out metrics (e.g., class A has fan-out = 4; class F has fan-in = 2 and fan-out = 0).
This definition also includes coupling based on inheritance. Chidamber and Kemerer
(1994) defined coupling between objects (CBO) as “the count of number of other classes
to which a class is coupled.” The CBO definition given in 1994 includes inheritance-based
coupling. For example, consider Figure 3.5: three variables of other classes (class B, class C, and class D) are used in class A; hence, the value of CBO for class A is 3. Similarly, classes
D, F, G, and H have the value of CBO metric as zero.
Li and Henry (1993) used data abstraction technique for defining coupling. Data abstrac-
tion provides the ability to create user-defined data types called abstract data types (ADTs).
Li and Henry defined data abstraction coupling (DAC) as:
DAC = number of ADTs defined in a class
In Figure 3.5, class A has three ADTs (i.e., three nonsimple attributes). Li and Henry defined
another coupling metric known as message passing coupling (MPC) as “number of unique
send statements in a class.” Hence, if three different methods in class B access the same
method in class A, then MPC is 3 for class B, as shown in Figure 3.6.
Chidamber and Kemerer (1994) defined the response for a class (RFC) metric as the set of methods defined in a class together with the methods called by the class. It is given by RFC = |RS|, where RS, the response set of the class, is given by:

RS = {M} ∪ (∪ over all i) {Ri}

where:
M = set of all methods in the class
Ri = set of methods called by method Mi
FIGURE 3.5
Values of CBO metric for a small program (class A: fan-out = 3, CBO = 3; class B: fan-out = 2, CBO = 2; class C: fan-out = 1, CBO = 1).
FIGURE 3.6
Example of MPC metric (three different methods of class B call the same method of class A).

FIGURE 3.7
Example of RFC metric (class A: MethodA1(), MethodA2(), MethodA3(); class B: MethodB1(), MethodB2(); class C: MethodC1(), MethodC2()).
For example, in Figure 3.7, the RFC value for class A is 6, as class A has three methods of its own and calls two methods of class B and one method of class C.
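The RFC computation is easy to express in code once the call sets are known. A minimal Python sketch mirroring Figure 3.7 (the call structure is an assumption made for illustration):

def rfc(own_methods, called_methods):
    # RFC = |RS|: the class's own methods plus the methods it calls elsewhere.
    return len(set(own_methods) | set(called_methods))

own = {"MethodA1", "MethodA2", "MethodA3"}
called = {"B.MethodB1", "B.MethodB2", "C.MethodC1"}  # assumed calls made by class A
print(rfc(own, called))  # 6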
A number of coupling metrics with respect to OO software have been proposed by
Briand et al. (1997). These metrics take into account the different OO design mechanisms
provided by the C++ language: friendship, specialization, and aggregation. These
metrics may be used to guide software developers about which type of coupling affects
the maintenance cost and reduces reusability. Briand et al. (1997) observed that the cou-
pling between classes could be divided into different facets:
The metrics for CM interaction type are IFCMIC, ACMIC, OCMIC, FCMEC, DCMEC, and
OCMEC. In these metrics, the first one/two letters denote the type of relationship (IF denotes
inverse friendship, A denotes ancestors, D denotes descendant, F denotes friendship, and O
denotes others). The next two letters denote the type of interaction (CA, CM, MM) between
classes. Finally, the last two letters denote the type of coupling (IC or EC).
Lee et al. (1995) acknowledged the need to differentiate between inheritance-based and
noninheritance-based coupling by proposing the corresponding measures: noninheritance
information flow-based coupling (NIH-ICP) and information flow-based inheritance coupling
(IH-ICP). Information flow-based coupling (ICP) metric is defined as the sum of NIH-ICP and
IH-ICP metrics and is based on method invocations, taking polymorphism into account.
class B; // forward declarations (A refers to B and C before they are defined)
class C;

class A
{
    B* B1; // Nonsimple attributes: class–attribute (CA) interaction with class B
    C* C1; // class–attribute interaction with class C
public:
    void M1(B& b) // parameter of type class B: class–method (CM) interaction
    {
    }
};

class B
{
public:
    void M2(); // defined below, after class A is complete
};

class C
{
    void M3(B& b) // object of class B passed as parameter
    {
    }
};

void B::M2()
{
    A A1;
    B b;
    A1.M1(b); // method of class A called: method–method (MM) interaction
}
FIGURE 3.8
Example for computing type of interaction.
LCOM1 = ((1/n) Σ (i=1 to n) μ(Di) − m) / (1 − m)

where n is the number of attributes Di in the class, μ(Di) is the number of methods that access attribute Di, and m is the number of methods in the class.
FIGURE 3.9
Example of LCOM metric.
FIGURE 3.10
Stack class (attributes: top : Integer, a : Integer; methods: push(a, n), pop(), getsize(), empty(), display()).
The approach by Bieman and Kang (1995) to measure cohesion was based on that of
Chidamber and Kemerer (1994). They proposed two cohesion measures—TCC and LCC.
TCC metric is defined as the percentage of pairs of directly connected public methods
of the class with common attribute usage. LCC is the same as TCC, except that it also
considers indirectly connected methods. A method M1 is indirectly connected with
method M3, if method M1 is connected to method M2 and method M2 is connected
to method M3. Hence, transitive closure of directly connected methods is represented by
indirectly connected methods. Consider the class stack shown in Figure 3.10.
Figure 3.11 shows the attribute usage of methods. The pair of public functions with com-
mon attribute usage is given below:
{(empty, push), (empty, pop), (empty, display), (getsize, push), (getsize, pop), (push, pop),
(push, display), (pop, display)}
TCC(Stack) = (8/10) × 100 = 80%
The methods “empty” and “getsize” are indirectly connected, since “empty” is connected
to “push” and “getsize” is also connected to “push.” Thus, by transitivity, “empty” is con-
nected to “getsize.” Similarly “getsize” is indirectly connected to “display.”
LCC for the stack class is given below:

LCC(Stack) = (10/10) × 100 = 100%
FIGURE 3.11
Attribute usage of methods of class stack.
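TCC and LCC can be computed directly from the attribute usage of each method. A minimal Python sketch (the attribute sets are assumptions chosen to reproduce the connectivity described above for Figure 3.11):

from itertools import combinations

# Assumed attribute usage per public method of the stack class.
usage = {
    "push": {"top", "a"},
    "pop": {"top", "a"},
    "getsize": {"top"},
    "empty": {"a"},
    "display": {"a"},
}

methods = sorted(usage)
pairs = list(combinations(methods, 2))
direct = {p for p in pairs if usage[p[0]] & usage[p[1]]}

# Transitive closure of the "directly connected" relation, needed for LCC.
connected = {m: {m} for m in methods}
changed = True
while changed:
    changed = False
    for a, b in direct:
        union = connected[a] | connected[b]
        if connected[a] != union or connected[b] != union:
            for m in union:
                connected[m] |= union
            changed = True

indirect = {p for p in pairs if p[1] in connected[p[0]]}
print(100 * len(direct) / len(pairs))    # TCC = 80.0
print(100 * len(indirect) / len(pairs))  # LCC = 100.0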
Lee et al. (1995) proposed information flow-based cohesion (ICH) metric. ICH for a class is
defined as the weighted sum of the number of invocations of other methods of the same
class, weighted by the number of parameters of the invoked method.
AID = (Σ depth of each class) / (Total number of classes)
FIGURE 3.12
Inheritance hierarchy of classes A, B, C, D, E, F, and G.
parent class. NMA counts the number of new methods (neither overridden nor inherited)
added in a class. NMI counts number of methods inherited by a class from its parent class.
Finally, Lorenz and Kidd (1994) defined specialization index (SIX) using DIT, NMO, NMA,
and NMI metrics as given below:
SIX = (NMO × DIT) / (NMO + NMA + NMI)
Consider the class diagram given in Figure 3.13. The class Employee inherits from the class Person. The class Employee overrides two functions, addDetails() and display(). Thus, the value of the NMO metric for class Employee is 2. Two new methods are added in this class (getSalary() and compSalary()). Hence, the value of the NMA metric is 2.
Thus, for class Employee, the value of NMO is 2, NMA is 2, and NMI is 1 (getEmail()).
For the class Employee, the value of SIX is:
SIX = (2 × 1) / (2 + 2 + 1) = 2/5 = 0.4
The maximum number of levels in the inheritance hierarchy that are below the class are
measured through class to leaf depth (CLD). The value of CLD for class Person is 1.
Person
name: char
phone: integer
addr: integer
email: char
addDetails()
display()
getEmail()
Employee
Emp_id: char
basic: integer
da: real
hra: real
addDetails()
display()
getSalary()
compSalary()
FIGURE 3.13
Example of inheritance relationship.
requirements. Yap and Henderson-Sellers (1993) discuss two measures designed to evaluate
the level of reuse possible within classes. The reuse ratio (U) is defined as:

U = Number of superclasses / Total number of classes

Consider Figure 3.13: the value of U is 1/2. Another metric is the specialization ratio (S), given as:

S = Number of subclasses / Number of superclasses

In Figure 3.13, Employee is the subclass and Person is the parent class. Thus, S = 1.
Aggarwal et al. (2005) proposed another set of metrics for measuring reuse by using
generic programming in the form of templates. The metric FTF is defined as ratio of num-
ber of functions using function templates to the total number of functions, as shown below:

FTF = Σ (i=1 to n) uses_FT(Fi) / Σ (i=1 to n) Fi

where:
F1,…,Fn are the functions in the system, and uses_FT(Fi) = 1 if the function Fi uses a function template and 0 otherwise
void method1(){
.........}
template<class U>
void method2(U &a, U &b){
.........}
void method3(){
........}
FIGURE 3.14
Source code for calculation of FTF metric.
In Figure 3.14, only method2 uses a function template; hence, the value of the metric FTF = 1/3.
class X{
.....};
template<class U, int size>
class Y{
U ar1[size];
....};
FIGURE 3.15
Source code for calculating metric CTF.
CTF = Σ (i=1 to n) uses_CT(Ci) / Σ (i=1 to n) Ci

where:
C1,…,Cn are the classes in the system, and uses_CT(Ci) = 1 if the class Ci uses a class template and 0 otherwise
In Figure 3.15, the value of metric CTF = 1/2.
Chidamber and Kemerer (1994) defined the weighted methods per class (WMC) metric as the sum of the complexities of all methods defined in a class:

WMC = Σ (i=1 to n) Ci

where:
M1,…,Mn are the methods defined in class K1
C1,…,Cn are the complexities of the methods
Lorenz and Kidd defined the number of attributes (NOA) metric, given as the sum of the number of instance variables and the number of class variables. Li and Henry (1993) defined number of methods (NOM) as the number of local methods defined in a given class. They also defined two other size metrics, namely, SIZE1 and SIZE2, defined below:
SIZE1 = number of semicolons in a class
SIZE2 = number of attributes + number of local methods
TABLE 3.9
Difference between Static and Dynamic Metrics
S. No. Static Metrics Dynamic Metrics
TABLE 3.10
Mitchell and Power Dynamic Coupling Metric Suite
Metric Definition
Dynamic coupling between objects This metric is same as Chidamber and Kemerer’s
CBO metric, but defined at runtime.
Degree of dynamic coupling between two classes It is the percentage of ratio of number of times a class A
at runtime accesses the methods or instance variables of another
class B to the total number of accesses of class A.
Degree of dynamic coupling within a given set The metric extends the concept given by the above
of classes metric to indicate the level of dynamic coupling within
a given set of classes.
Runtime import coupling between objects Number of classes assessed by a given class at runtime.
Runtime export coupling between objects Number of classes that access a given class at runtime.
Runtime import degree of coupling Ratio of number of classes assessed by a given class at
runtime to the total number of accesses made.
Runtime export degree of coupling Ratio of number of classes that access a given class at
runtime to the total number of accesses made.
• LOC added: Sum total of all the lines of code added to a file for all of its revisions
in the repository
• Max LOC added: Maximum number of lines of code added to a file for all of its
revisions in the repository
• Average LOC added: Average number of lines of code added to a file for all of its
revisions in the repository
• LOC deleted: Sum total of all the lines of code deleted from a file for all of its revi-
sions in the repository
• Max LOC deleted: Maximum number of lines of code deleted from a file for all of
its revisions in the repository
• Average LOC deleted: Average number of lines of code deleted from a file for all of
its revisions in the repository
3.7.4 Miscellaneous
The other related evolution metrics are:
• Max change set: Maximum number of files that are committed or checked in
together in a repository
• Average change set: Average number of files that are committed or checked in
together in a repository
• Age: Age of repository file, measured in weeks by counting backward from a given
release of a software system
• Weighted Age: Weighted Age of a repository file is given as:
Weighted Age = Σ (i=1 to N) Age(i) × LOC added(i) / Σ (i=1 to N) LOC added(i)
where:
i is a revision of a repository file and N is the total number of revisions for that
file
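A minimal Python sketch of the weighted age computation, assuming the revision history of a file is available as (age in weeks, LOC added) pairs (the values are hypothetical):

# Each tuple is (age_in_weeks, loc_added) for one revision of a file.
revisions = [(40, 120), (25, 60), (10, 20)]

weighted_age = (sum(age * loc for age, loc in revisions)
                / sum(loc for _, loc in revisions))
print(round(weighted_age, 2))  # LOC-weighted age of the file, in weeks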
Property 1: There exist programs/classes P and Q for which the measure yields different values:

(∃P)(∃Q) such that μ(P) ≠ μ(Q)

It ensures that no measure rates all programs/classes to be of the same metric value.
Property 2: Let c be a nonnegative number. Then, there is a finite number of programs/classes with metric value c. This property ensures that there is sufficient resolution in the measurement scale to be useful.
Property 3: There are distinct programs/classes P and Q such that μ(P) = μ(Q).
Property 4: For an OO system, two programs/classes having the same functionality could have different metric values:

(∃P)(∃Q) such that P ≡ Q and μ(P) ≠ μ(Q)
Property 5: When two programs/classes are concatenated, their metric should be
greater than the metrics of each of the parts.
To monitor and control quality, it is very important to understand how the quality can be measured.
Software metrics are widely used for measuring, monitoring, and evaluating the quality
of a project. Various software metrics have been proposed in the literature to assess the
software quality attributes such as change proneness, fault proneness, maintainability of
a class or module, and so on. A large portion of empirical research has been involved with
the development and evaluation of the quality models for procedural and OO software.
Software metrics have found a wide range of applications in various fields of software engi-
neering. As discussed, some of the familiar and common uses of software metrics are sched-
uling the time required by a project, estimating the budget or cost of a project, estimating the
size of the project, and so on. These parameters can be estimated at the early phases of soft-
ware development life cycle, and thus help software managers to make judicious allocation
of resources. For example, once the schedule and budget has been decided upon, managers
can plan in advance the amount of person-hours (effort) required. Besides this, the design of
software can be assessed in the industry by identifying the out of range values of the software
metrics. One way to improve the quality of the system is to relate structural attribute mea-
sures intended to quantify important concepts of a given software, such as the following:
• Encapsulation
• Coupling
• Cohesion
• Inheritance
• Polymorphism
• Fault proneness
• Maintainability
• Testing effort
• Rework effort
• Reusability
• Development effort
The ability to assess quality of software in the early phases of the software life cycle is the
main aim of researchers so that structural attribute measures can be used for predicting exter-
nal attribute measures. This would greatly facilitate technology assessment and comparisons.
Researchers are working hard to investigate the properties of software measures to understand the effectiveness and applicability of the underlying measures. Hence, we need to understand what these measures are really capturing, whether they are really different, and whether they are useful indicators of the quality attributes of interest. This will build a body of evidence, and present commonalities and differences across various studies.
Finally, these empirical studies will contribute largely in building good quality systems.
the popular and widely used software metrics suites available to measure the constructs are identified from the literature. Finally, a decision on the selection of software metrics must be made. The criteria that can be used to select software metrics are that the selected
software metrics must capture all the constructs, be widely used in the literature, easily
understood, fast to compute, and computationally less expensive. The choice of metric
suite heavily depends on the goals of the research. For instance, in quality model pre-
diction, OO metrics proposed by Chidamber and Kemerer (1994) are widely used in the
empirical studies.
In cases where multiple software metrics are used, the attribute reduction techniques
given in Section 6.2 must be applied to reduce them, if model prediction is being conducted.
Methodologies have been proposed in the literature to identify the threshold values of various OO metrics. Besides this, Shatnawi et al. (2010) also investigated the use of the receiver operating characteristic (ROC) curve method to identify threshold values. A detailed explanation of these two methodologies is provided in the subsections below (Shatnawi 2006). Malhotra and Bansal (2014a) evaluated the threshold approach proposed by Bender (1999) for fault prediction.
VARL = (1/β) (ln(Po / (1 − Po)) − α)
where:
α is a constant
β is the estimated coefficient
Po is the acceptable risk level
In this formula, α and β are obtained using the standard logistic regression formula
(refer Section 7.2.1). This formula will be used for each metric individually to find its
threshold value.
For example, consider the following data set (Table A.8 in Appendix I) consisting of
the metrics (independent variables): LOC, DIT, NOC, CBO, LCOM, WMC, and RFC. The
dependent variable is fault proneness. We calculate the threshold values of all the metrics
using the following steps:
Step 1: Identify the metrics that are significant in predicting fault proneness. Univariate logistic regression is applied to each metric, with the probability of fault proneness given as:

P = e^g(x) / (1 + e^g(x)),  where g(x) = α + βx
where:
x is the independent variable, that is, an OO metric
α is the Y-intercept or constant
β is the slope or estimated coefficient
Table 3.11 shows the statistical significance (sig.) for each metric. The “sig.” parame-
ter provides the association between each metric and fault proneness. If the “sig.”
TABLE 3.11
Statistical Significance of Metrics
Metric Significance
WMC 0.013
CBO 0.01
RFC 0.003
LOC 0.001
DIT 0.296
NOC 0.779
LCOM 0.026
value is below or at the significance threshold of 0.05, then the metric is said to
be significant in predicting fault proneness (shown in bold). Only for significant
metrics, we calculate the threshold values. It can be observed from Table 3.11 that
DIT and NOC metrics are insignificant, and thus are not considered for further
analysis.
Step 2: Calculate the values of constant and coefficient for significant metrics.
For significant metrics, the values of constant (α) and coefficient (β) using univariate
logistic regression are calculated. These values of constant and coefficient will be
used in the computation of threshold values. The coefficient shows the impact of
the independent variable, and its sign shows whether the impact is positive or
negative. Table 3.12 shows the values of constant (α) and coefficient (β) of all the
significant metrics.
Step 3: Computation of threshold values.
We have calculated the threshold values (VARL) for the metrics that are found to be
significant using the formula given above. The VARL values are calculated for
different values of Po, that is, at different levels of risks (between Po = 0.01 and
Po = 0.1). The threshold values at different values of Po (0.01, 0.05, 0.08, and 0.1)
for all the significant metrics are shown in Table 3.13. It can be observed that the
threshold values of all the metrics change significantly as Po changes. This shows
that Po plays a significant role in calculating threshold values. Table 3.13 shows
that at risk level 0.01 and 0.05, VARL values are out of range (i.e., negative values)
for all of the metrics. At Po = 0.1, the threshold values are within the observation
range of all the metrics. Hence, in this example, we say that Po = 0.1 is the appropriate risk level, and the threshold values (at Po = 0.1) of WMC, CBO, RFC, LOC, and LCOM are 17.99, 14.46, 52.37, 423.44, and 176.94, respectively.
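Once α and β are available from the univariate logistic regression, the VARL computation itself is a one-liner. A minimal Python sketch (the α and β values below are hypothetical placeholders, not the coefficients of Table 3.12):

import math

def varl(alpha, beta, p0):
    # VARL = (1/beta) * (ln(p0 / (1 - p0)) - alpha), following Bender (1999).
    return (math.log(p0 / (1 - p0)) - alpha) / beta

# Hypothetical coefficients for one metric, thresholds at several risk levels.
alpha, beta = -2.5, 0.12
for p0 in (0.01, 0.05, 0.08, 0.1):
    print(p0, round(varl(alpha, beta, p0), 2))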
TABLE 3.12
Constant (α) and Coefficient (β) of Significant Metrics
Metric Coefficient (β) Constant (α)
TABLE 3.13
Threshold Values on the basis of Logistic Regression Method
Metrics VARL at 0.01 VARL at 0.05 VARL at 0.08 VARL at 0.1
WMC −42.69 −15.17 −6.81 17.99
CBO −17.48 −2.99 1.41 14.46
RFC −61.41 −9.83 5.86 52.37
LOC −486.78 −74.11 51.41 423.44
LCOM −733.28 −320.61 −195.09 176.94
TABLE 3.14
Threshold Values on the basis of
ROC Curve Method
Metric Threshold Value
WMC 7.5
DIT 1.5
NOC 0.5
CBO 8.5
RFC 43
LCOM 20.5
LOC 304.5
1. Using software metrics, the researcher can identify change/fault-prone classes. This
a. Enables software developers to take focused preventive actions that can reduce
maintenance costs and improve quality.
b. Helps software managers to allocate resources more effectively. For example, if
we have 26% of the testing resources, then we can use these resources in testing
the top 26% of classes predicted to be faulty/change prone.
2. Among a large set of software metrics (independent variables), we can find a suit-
able subset of metrics using various techniques such as correlation-based feature
selection, univariate analysis, and so on. These techniques help in reducing the
number of independent variables (termed "data dimensionality reduction").
Only the metrics that are significant in predicting the dependent variable are
considered. Once the metrics that are significant in detecting faulty/change-prone
classes are identified, software developers can use them in the early phases of
software development to measure the quality of the system (a small illustrative
sketch follows this list).
3. Another important application is that once the metrics captured by the models
are known, such metrics can be used as quality benchmarks to assess and compare
products.
4. Metrics also provide an insight into the software, as well as the processes used to
develop and maintain it.
5. There are various metrics that calculate the complexity of a program. For exam-
ple, the McCabe metric helps in assessing code complexity, the Halstead metrics
help in calculating different measurable properties of software (programming
effort, program vocabulary, program length, etc.), fan-in and fan-out metrics
estimate maintenance complexity, and so on. Once the complexity is known, more
complex programs can be given focused attention.
6. As explained in Section 3.9.3, we can calculate the threshold values of different
software metrics. By using threshold values of the metrics, we can identify and
focus on the classes that fall outside the acceptable risk level. Hence, during the
project development and progress, we can scrutinize the classes and prepare
alternative design structures wherever necessary.
7. Evolutionary algorithms such as genetic algorithms help in solving the optimiza-
tion problems and require the fitness function to be defined. Software metrics help
in defining the fitness function (Harman and Clark 2004) in these algorithms.
8. Last, but not the least, some new software metrics that help to improve the quality
of the system in some way can be defined in addition to the metrics proposed in
the literature.
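As a companion to point 2 in the list above, the following minimal sketch illustrates univariate feature selection with scikit-learn; the data set is synthetic, and the choice of keeping four metrics is an arbitrary assumption.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Hypothetical data: 200 classes, 7 metrics, binary fault labels.
X, y = make_classification(n_samples=200, n_features=7, n_informative=4,
                           random_state=1)
# Keep the 4 metrics most associated with the dependent variable.
selector = SelectKBest(score_func=f_classif, k=4).fit(X, y)
print("Selected metric indices:", selector.get_support(indices=True))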
Exercises
3.1 What are software metrics? Discuss the various applications of metrics.
3.2 Discuss categories of software metrics with the help of examples of each category.
3.3 What are categorical metric scales? Differentiate between nominal scale and ordinal
scale in the measurements and also discuss both the concepts with examples.
3.4 What is the role and significance of Weyuker's properties in software metrics?
3.5 Define the role of fan-in and fan-out in information flow metrics.
3.6 What are various software quality metrics? Discuss them with examples.
3.7 Define usability. What are the various usability metrics? What is the role of cus-
tomer satisfaction?
3.8 Define the following metrics:
a. Statement coverage metric
b. Defect density
c. FCMs
3.9 Define coupling. Explain Chidamber and Kemerer metrics with examples.
3.10 Define cohesion. Explain some cohesion metrics with examples.
3.11 How do we measure inheritance? Explain inheritance metrics with examples.
3.12 Define the following metrics:
a. CLD
b. AID
c. NOC
d. DIT
e. NOD
f. NOA
g. NOP
h. SIX
3.13 What is the purpose and significance of computing the threshold of software
metrics?
3.14 How can metrics be used to improve software quality?
3.15 Consider that the threshold value of the CBO metric is 8 and that of the WMC metric
is 15. What do these values signify? What corrective actions, in your view, must a
developer take if the values of CBO and WMC exceed these thresholds?
3.16 What are the practical applications of software metrics? How can the metrics be
helpful to software organizations?
3.17 What are the five measurement scales? Explain their properties with the help of
examples.
3.18 How are the external and internal attributes related to process and product metrics?
3.19 What is the difference between process and product metrics?
3.20 What is the relevance of software metrics in research?
Further Readings
An in-depth study of eighteen different categories of software complexity metrics was
provided by Zuse, where he tried to give a basic definition for the metrics in each category:
Fenton’s book on software metrics is a classic and useful reference as it provides in-depth
discussions on measurement and key concepts related to metrics:
N. Fenton, and S. Pfleeger, Software Metrics: A Rigorous & Practical Approach, PWS
Publishing Company, Boston, MA, 1997.
The traditional Software Science metrics proposed by Halstead are listed in:
Chidamber and Kemerer (1991) proposed the first significant OO design metrics. Another
paper by Chidamber and Kemerer defined and validated the OO metrics suite in 1994.
This metrics suite is widely used and has received the widest attention in empirical
studies:
The following paper explains various OO metric suites with real-life examples:
www.acis.pamplin.vt.edu/faculty/tegarden/wrk-pap/ooMETBIB.PDF
After the problem is defined, the experimental design process begins. The study must be
carefully planned and designed to draw useful conclusions from it. The formation of a
research question (RQ), selection of variables, hypothesis formation, data collection, and
selection of data analysis techniques are important steps that must be carefully carried
out to produce meaningful and generalized conclusions. This would also facilitate the
opportunities for repeated and replicated studies.
The empirical study involves creation of a hypothesis that is tested using statistical
techniques based on the data collected. The model may be developed using multivariate
statistical techniques or machine learning techniques. The steps involved in the experi-
mental design are presented to ensure that proper steps are followed for conducting an
empirical study. In the absence of a planned analysis, a researcher may not be able to draw
well-formed and valid conclusions. All the activities involved in empirical design are
explained in detail in this chapter.
FIGURE 4.1
Steps in experimental design: identify goals, hypothesis formulation, variable selection,
creating a solution to the problem, and empirical data collection.
of fault and has been published in Singh et al. (2010). Hereafter, the study will be referred
to as fault prediction system (FPS). The objective, motivation, and context of the study are
described below.
4.2.2 Motivation
The study predicts an important quality attribute, fault proneness during the early phases
of software development. Software metrics are used for predicting fault proneness. The
important contribution of this study is that it takes into account the severity of faults
during fault prediction. The value of severity quantifies the impact of the fault on the
software operation. The IEEE standard (1044–1993, IEEE 1994) states, “Identifying the severity
of an anomaly is a mandatory category as is identifying the project schedule, and project
cost impacts of any possible solution for the anomaly.” Not all failures are of the same
type; they may vary in the impact that they cause. For example, a failure caused by a
fault may lead to a whole system crash or an inability to open a file (El Emam et al. 1999;
Aggarwal et al. 2009). In this example, it can be seen that the former failure is more severe
than the latter. Lack of determination of severity of faults is one of the main criticisms of
the approaches to fault prediction in the study by Fenton and Neil (1999). Therefore, there
is a need to develop prediction models that can be used to identify classes that are prone to
have serious faults. The software practitioners can use the model predicted with respect to
high severity of faults to focus the testing on those parts of the system that are likely to cause
serious failures. In this study, the faults are categorized with respect to all the severity levels
given in the NASA data set to improve the effectiveness of the categorization and provide
meaningful, correct, and detailed analysis of fault data. Categorizing the faults according to
different severity levels helps prioritize the fixing of faults (Afzal 2007). Thus, the software
practitioners can deal with the faults that are at higher priority first, before dealing with the
faults that are comparatively of lower priority. This would allow the resources to be judi-
ciously allocated based on the different severity levels of faults. In this work, the faults are
categorized into three levels: high severity, medium severity, and low severity.
Several regression (such as linear and logistic regression [LR]) and machine learning
techniques (such as decision tree [DT] and artificial neural network [ANN]) have been pro-
posed in the literature. Few studies use machine learning techniques for fault prediction
based on OO metrics; most of the prediction models in the literature are built using
statistical techniques. There are many machine learning techniques, and there is a need
to compare their results, as the results differ across techniques. ANN and DT methods
have seen an explosion of interest over the years and
are being successfully applied across a wide range of problem domains such as finance,
medicine, engineering, geology, and physics. Indeed, these methods are being introduced
to solve the problems of prediction, classification, or control (Porter 1990; Eftekhar 2005;
Duman 2006; Marini 2008). It is natural for software practitioners and potential users to
wonder, “Which classification technique is best?,” or more realistically, “What methods
tend to work well for a given type of data set?” More data-based empirical studies, which
are capable of being verified by observation, or experiments are needed. Today, the evi-
dence gathered through these empirical studies is considered to be the most powerful
support possible for testing a given hypothesis (Aggarwal et al. 2009). Hence, conduct-
ing empirical studies to compare regression and machine learning techniques is necessary
to build an adequate body of knowledge to draw strong conclusions leading to widely
accepted and well-formed theories.
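As an illustration of such a comparison, the sketch below contrasts LR, DT, and an ANN using 10-fold cross-validation; the data set is a synthetic stand-in for OO metric data, and accuracy is used only as an example performance measure.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a set of OO metrics with binary fault labels.
X, y = make_classification(n_samples=500, n_features=10, random_state=7)

techniques = {"LR": LogisticRegression(max_iter=1000),
              "DT": DecisionTreeClassifier(random_state=7),
              "ANN": MLPClassifier(max_iter=2000, random_state=7)}
for name, clf in techniques.items():
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")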
4.2.4 Results
The results show that the area under the curve (measured from the ROC analysis) of mod-
els predicted using high-severity faults is low compared with the area under the curve of
the model predicted with respect to medium- and low-severity faults.
Hence, the RQs must fill the gap between the existing literature and the current work and
must give some new perspective to the problem. Figure 4.2 depicts the context of the RQs.
The RQ may be formed according to the type of research being undertaken.
FIGURE 4.2
Context of research questions.
4.3.2 Characteristics of an RQ
The following are the characteristics of a good RQ:
1. Clear: The reader who may not be an expert in the given topic should understand
the RQs. The questions should be clearly defined.
2. Unambiguous: The use of vague statements that can be interpreted in multiple ways
should be avoided while framing RQs. For example, consider the following RQ:
Are OO metrics significant in predicting various quality attributes?
The above statement is very vague and can lead to multiple interpretations. This is
because a number of quality attributes are present in the literature. It is not clear
which quality attribute one wants to consider. Thus, the above vague statement
can be redefined in the following way. In addition, the OO metrics can also be
specified.
Are OO metrics significant in predicting fault proneness?
3. Empirical focus: This property requires generating data to answer the RQs.
4. Important: This characteristic requires that answering an RQ adds significant
contribution to the research and that there will be beneficiaries.
Finally, the research problem must be stated in either a declarative or interrogative form.
The examples of both the forms are given below:
Declarative form: The present study focuses on predicting change-prone parts of the
software at the early stages of the software development life cycle. Early prediction
of change-prone classes will save significant resources in terms of money, man-
power, and time. For this, we consider the well-known Chidamber and Kemerer
metrics suite and determine the relationship between the metrics and change
proneness.
Interrogative form: What are the consequences of predicting the change-prone parts
of the software at the early stages of software development life cycle? What is the
relationship between Chidamber and Kemerer metrics and change proneness?
• RQ1: Which OO metrics are related to fault proneness of classes with regard to
high-severity faults?
• RQ2: Which OO metrics are related to fault proneness of classes with regard to
medium-severity faults?
• RQ3: Which OO metrics are related to fault proneness of classes with regard to
low-severity faults?
• RQ4: Is the performance of machine learning techniques better than the LR method?
The main aim of the research is to contribute toward a better understanding of the con-
cerned field. A literature review analyzes a body of literature related to a research topic
to have a clear understanding of the topic, what has already been done on the topic, and
what are the key issues that need to be addressed. It provides a complete overview of the
existing work in the field. Figure 4.3 depicts various questions that can be answered while
conducting a literature review.
The literature review involves collection of research publications (articles, conference
papers, technical reports, book chapters, journal papers) on a particular topic. The aim
is to gather ideas, views, information, and evidence on the topic under investigation.
FIGURE 4.3
Key questions while conducting a review: What are the key theories, concepts, and ideas?
What are the key areas where knowledge gaps exist?
The purpose of the literature review is to effectively perform analysis and evaluation of
literature in relation to the area being explored. The major benefit of the literature review
is that the researcher becomes familiar with the current research before commencing his/
her own research in the same area.
The literature review can be carried out from two aspects. In the first, research students
perform the review to gain an idea of the relevant material related to their research so
that they can identify the areas where more work is required; the aim is to examine
whether the research area being explored is worthwhile or not. For example, search-based
techniques have shown predictive capability in various areas where the classification
problem is of a complex nature, but to date mostly statistical techniques have been
explored for software engineering-related problems; thus, it may be worthwhile to explore
the performance capability of search-based techniques on such problems. The second
aspect concerns searching and analyzing the literature after selecting a research topic,
and it is this aspect to which the literature review carried out as a part of the experimental
design relates. The aim here is to assess the current work being carried out by the
researcher, that is, whether it creates new knowledge and adds value to the existing
research. This type of literature review supports the following claims made by the
researcher:
1. Increase in familiarity with the previous relevant research and prevention from
duplication of the work that has already been done.
2. Critical evaluation of the work.
3. Facilitation of development of new ideas and thoughts.
4. Highlighting key findings, proposed methodologies, and research techniques.
5. Identification of inconsistencies, gaps, and contradictions in the literature.
6. Extraction of areas where attention is required.
c. ScienceDirect/Elsevier
d. Wiley
e. ACM
f. Google Scholar
Before searching in digital portals, the researchers need to identify the most
credible research journals in the related areas. For example, in the area of soft-
ware engineering, some of the important journals in which search can be done
are: Software: Practice and Experience, Software Quality Journal, IEEE Transactions on
Software Engineering, Information and Software Technology, Journal of Computer Science
and Technology, ACM Transactions on Software Engineering and Methodology, Empirical
Software Engineering, IEEE Software, Journal of Systems and Software, and Journal of
Software Maintenance and Evolution.
Besides searching the journals and portals, various educational books, scientific
monograms, government documents and publications, dissertations, gray litera-
ture, and so on that are relevant to the concerned topic or area of research should
be explored. Most importantly, the bibliographies and reference lists of the materi-
als that are read need to be searched. These will give pointers to more articles
and can also provide a good estimate of how much has been read on the selected
topic of research.
After the digital portals and Internet resources have been identified, the next step
is to form the search string. The search string is formed by using the key terms
from the selected research topic and is then used to search the literature in the
digital portals (an illustrative search string is shown after this list).
2. Conduct the search: This step involves searching the identified sources by using
the formed search string. The abstracts and/or full texts of the research papers
should be obtained for reading and analysis.
3. Analyze the literature: Once the research papers relevant to the research topic
have been obtained, the abstract should be read, followed by the introduction
and conclusion sections. The relevant sections can be identified and read by their
section headings. In the case of books, the index must be scanned to obtain an idea
about the relevant topics. The materials that are highly relevant in terms of mak-
ing the greatest contribution in the related research or the material that seems the
most convincing can be separated. Finally, a decision about reading the necessary
content must be made.
The strengths, drawbacks, and omissions in the literature review must be iden-
tified on the basis of the evidence present in the papers. After thoroughly and
critically analyzing the literature, the differences of the proposed work from the
literature must be highlighted.
4. Use the results: The results obtained from the literature review must then be
summarized for later comparison with the results obtained from the current
work.
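For example, for a study on the relationship between OO metrics and fault proneness, an illustrative (hypothetical) search string combining the key terms with Boolean operators would be:

("object-oriented metrics" OR "CK metrics") AND ("fault proneness" OR "fault prediction")

The string is then adapted to the search syntax of each digital portal.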
of the subject under concern. It discusses the kind of work that is done on the concerned
topic of research, along with any controversies that may have been encountered by
different authors. The “body” contains and focuses on the main idea behind each paper in
the review. The relevance of the papers cited should be clearly stated in this section of the
review. It is not important to simply restate what the other authors have said, but instead
our main aim should be to critically evaluate each paper. Then, the conclusion should be
provided that summarizes what the literature says. The conclusion summarizes all the
evidence presented and shows its significance. If the review is an introduction to our own
research, it indicates how the previous research has led to our own research, focusing on
and highlighting the gaps in the previous research (Bell 2005). The following points must be
covered while writing a literature review:
• Identify the topics that are similar in multiple papers to compare and contrast
different authors’ view.
• Group authors who draw similar conclusions.
• Group authors who are in disagreement with each other on certain topics.
• Compare and contrast the methodologies proposed by different authors.
• Show how the study is related to the previous studies in terms of the similarities
and the differences.
• Highlight exemplary studies and gaps in the research.
The above-mentioned points will help to carry out effective and meaningful literature
review.
Basili et al. C++ University environment, C&K metrics, 3 code metrics LR LR Contingency table,
(1996) 180 classes correctness,
completeness
Abreu and Melo C++ University environment, MOOD metrics Pearson Linear least square R2
(1996) UMD: 8 systems correlation
Binkley and C++, Java 4 case studies, CCS: CDM, DIT, NOC, NOD, NCIM, Spearman – –
Schach (1998) 113 classes, 82K SLOC, NSSR, CBO rank
29 classes, 6K SLOC correlation
Harrison et al. C++ University environment, DIT, NOC, NMI, NMO Spearman – –
(1999) SEG1: 16 classes SEG2: rho
22 classes SEG3:
27 classes
Benlarbi and C++ LALO: 85 classes, OVO, SPA, SPD, DPA, DPD, LR LR –
Melo (1999) 40K SLOC CHNL, C&K metrics, part of
coupling metrics
El Emam et al. Java V0.5: 69 classes, V0.6: Coupling metrics, C&K metrics LR LR R2, leave one-out
(2001a) 42 classes cross-validation
El Emam et al. C++ Telecommunication Coupling metrics, DIT LR LR R2
(2001b) framework: 174 classes
Tang et al. C++ System A: 20 classes C&K metrics (without LCOM) LR – –
(1999) System B: 45 classes
System C: 27 classes
Briand et al. C++ University environment, Suite of coupling metrics, LR LR R2, 10 cross-validation,
(2000) UMD: 180 classes 49 metrics correctness,
completeness
Glasberg et al. Java 145 classes NOC, DIT ACAIC, OCAIC, LR LR R2, leave one-out
(2000) DCAEC, OCAEC cross-validation, ROC
curve, cost-saving
model
El Emam et al. C++, Java Telecommunication C&K metrics, NOM, NOA LR – –
(2000a) framework: 174 classes,
83 classes, 69 classes of
Java system
Briand et al. C++ Commercial system, Suite of coupling metrics, OVO, LR LR R2, 10 cross-validation,
(2001) LALO: 90 classes, SPA, SPD, DPA, DPD, NIP, SP, correctness,
40K SLOC DP, 49 metrics completeness
Cartwright and C++ 32 classes, 133K SLOC ATTRIB, STATES, EVENT, Linear Linear regression –
Shepperd (2000) READS, WRITES, DIT, NOC regression
Briand and Java Commercial system, Polymorphism metrics, C&K LR LR, Mars 10 cross-validation,
Wüst (2002) XPOSE & JWRITER: correctness,
144 classes completeness
Yu et al. (2002) Java 123 classes, 34K SLOC C&K metrics, Fan-in, WMC OLS+LDA – –
Subramanyam C++, Java C&K metrics OLS OLS
and Krishnan
(2003)
Gyimothy et al. C++ Mozilla v1.6: C&K metrics, LCOMN, LOC LR, linear LR, linear 10 cross-validation,
(2005) 3,192 classes regression, regression, NN, correctness,
NN, DT DT completeness
Aggarwal et al. Java University environment, Suite of coupling metrics LR LR 10 cross-validation,
(2006a, 2006b) 136 classes sensitivity, specificity
Arisholm and Java XRadar and JHawk C&K metrics LR LR
Yuming and C++ NASA data set, C&K metrics LR, ML LR, ML Correctness,
Di Martino et al. Java Versions 4.0, 4.2, and C&K, NPM, LOC – Combination of Precision, accuracy,
(2011) 4.3 of the jEdit system GA+SVM, LR, recall, F-measure
C4.5, NB, MLP,
KNN, and RF
Azar and Java 8 open source software 22 metrics by Henderson-Sellers – ACO, C4.5, random Accuracy
Vybihal (2011) systems (2007), Barnes and Swim (1993), guessing
Coppick and Cheatham (1992),
C&K
Malhotra and – Open source data set C&K and QMOOD metrics LR LR and ML (ANN, Sensitivity, specificity,
Singh (2011) Arc, 234 classes RF, LB, AB, NB, precision
KStar, Bagging)
Malhotra and Java Apache POI, 422 classes MOOD, QMOOD, C&K LR LR, MLP (RF, Sensitivity, specificity,
Jain (2012) (19 metrics) Bagging, MLP, precision
SVM, genetic
algorithm)
Source: Compiled from multiple sources.
– implies that the feature was not examined.
LR: logistic regression, LDA: linear discriminant analysis, ML: machine learning, OLS: ordinary least square linear regression, PC: principal component analysis, NN: neural network, BPN: back propagation neural network, PPN: probabilistic neural network, DT: decision tree, MLP: multilayer perceptron, SVM: support vector machine, RF: random forest, GA+SVM: combination of genetic algorithm and support vector machine, NB: naïve Bayes, KNN: k-nearest neighbor, C4.5: decision tree, ACO: ant colony optimization, Adtree: alternating decision tree, AB: adaboost, LB: logitboost, CHNL: class hierarchy nesting level, NCIM: number of classes inheriting a method, NSSR: number of subsystem-system relationships, NPM: number of public methods, LCOMN: lack of cohesion in methods allowing negative values.
Related to metrics: C&K: Chidamber and Kemerer, MOOD: metrics for OO design, QMOOD: quality metrics for OO design.
FIGURE 4.4
Relationship between dependent and independent variables: independent variables 1 to N
feed into a process that produces the dependent variable.
TABLE 4.2
Differences between Dependent and Independent Variables

Independent Variable | Dependent Variable
Variable that is varied, changed, or manipulated. | Not manipulated; the response or outcome that is measured when the independent variable is varied.
It is the presumed cause. | It is the presumed effect.
The independent variable is the antecedent. | The dependent variable is the consequent.
Refers to the status of the "cause," which leads to changes in the status of the dependent variable. | Refers to the status of the "outcome" in which the researcher is interested.
Also known as the explanatory or predictor variable. | Also known as the response or target variable.
For example, various metrics that can be used to measure software constructs. | For example, whether a module is faulty or not.
For example, if the researcher wants to find whether a UML tool is better than a traditional
tool and the effectiveness of the tool is measured in terms of productivity of the persons
using the tool, then hypothesis testing can be used directly using the data given in Table 4.3.
Consider another instance where the researcher wants to compare two machine learn-
ing techniques to find the effect of software metrics on probability of occurrence of faults.
In this problem, first the model is predicted using two machine learning techniques. In the
next step, the model is validated and performance is measured in terms of performance
evaluation metrics (refer Chapter 7). Finally, hypothesis testing is applied on the results
obtained in the previous step for verifying whether the performance of one technique is
better than the other technique.
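A minimal sketch of this two-step workflow, assuming scikit-learn and SciPy and using synthetic stand-in data: two models are built and validated with 10-fold cross-validation, and a hypothesis test is then applied to the paired per-fold performance values.

from scipy import stats
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for software metrics and fault occurrence.
X, y = make_classification(n_samples=500, n_features=10, random_state=11)

lr = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)
dt = cross_val_score(DecisionTreeClassifier(random_state=11), X, y, cv=10)

# Finally: hypothesis testing on the paired per-fold performance values.
print(stats.ttest_rel(lr, dt))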
Figure 4.5 shows that the terms independent and dependent variable are used in both
experimental studies and multivariate analysis. In multivariate analysis, the independent
and dependent variables are used in model prediction: the independent variables are used
as predictor variables to predict the dependent variable. In experimental studies, the factors
for a statistical test are also termed independent variables, and they may have one or more
TABLE 4.3
Productivity for Tools
UML Tool Traditional Tool
14 52
67 61
13 14
FIGURE 4.5
Terminology used in experimental studies and multivariate analysis studies. In
experimental studies, the independent variables (or factors) are techniques and methods,
such as machine learning techniques; in multivariate analysis studies, the independent
variables are software metrics, such as fan-in and cyclomatic complexity. The dependent
variable is, for example, accuracy.
levels called treatments or samples as suitable for a specific statistical test. For example, a
researcher may wish to test whether the mean of two samples is equal or not such as in the
case when a researcher wants to explore different software attributes like coupling before
and after a specific treatment like refactoring. Another scenario could be when a researcher
wants to explore the performance of two or more learning algorithms or whether two treat-
ments give uniform results. Thus, the dependent variable in experimental study refers to
the behavior measures of a treatment. In software engineering research, in some cases,
these may be the performance measures. Similarly, one may refer to performances on dif-
ferent data sets as data instances or subjects, which are exposed to these treatments.
In software engineering research, the performance measures on data instances are
termed as the outcome or the dependent variable in case of hypothesis testing in experi-
mental studies. For example, technique A when applied on a data set may give an accuracy
(performance measure, defined as percentage of correct predictions) value of 80%. Here,
technique A is the treatment and the accuracy value of 80% is the outcome or the dependent
variable. However, in multivariate analysis or model prediction, the independent variables
are software metrics and the dependent variable may be, for example, a quality attribute.
To avoid confusion, in this book we use the terminology related to multivariate analysis
unless specifically mentioned otherwise.
Case 1: One factor, one treatment—In this case, there is one technique under obser-
vation. For example, if the distribution of the data needs to be checked for a given
variable, then this design type can be used. Consider a scenario where 25 students
have developed the same program. The cyclomatic complexity values of the pro-
grams can be evaluated using the chi-square test.
Case 2: One factor, two treatments—This type of design may be purely randomized
or paired design. For example, a researcher wants to compare the performance
of two verification techniques such as walkthroughs and inspections. Another
instance is when a researcher wants to compare the performance of two machine
learning techniques, naïve Bayes and DT, on a given data set or over multiple data
sets. In these two examples, there is one factor (verification method or machine
learning technique) but two treatments. The paired t-test or Wilcoxon test can be
used in these cases; Chapter 6 provides examples for these tests, and a small
illustrative sketch follows this set of cases.
TABLE 4.4
Factors and Levels of Example
Factor                   Level 1      Level 2
Paradigm type            Structured   Object oriented
Complexity of software   Difficult    Simple
Case 3: One factor, more than two treatments—In this case, the technique that is to
be analyzed contains multiple values. For example, a researcher wants to compare
multiple search-based techniques such as genetic algorithm, particle swarm opti-
mization, genetic programming, and so on. Friedman test can be used to solve this
example. Section 6.4.13 provides solution for this example.
Case 4: Multiple factors and multiple treatments—In this case, more than one factor
is considered with multiple treatments. For instance, consider an example where
a researcher wants to compare paradigm types, such as the structured paradigm
with the OO paradigm. In conjunction with the paradigm type, the researcher also
wants to consider whether the complexity of the software is difficult or simple.
This example is shown in Table 4.4 along with the factors and levels. The ANOVA
test can be used to solve such examples.
The examples of the above experimental design types are given in Section 6.4. After deter-
mining the appropriate experiment design type, the hypothesis needs to be formed in an
empirical study.
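As a sketch of how Cases 2 and 3 map onto concrete tests (Chapter 6 treats them in detail), the snippet below applies the paired t-test, the Wilcoxon test, and the Friedman test to per-data-set accuracy values; all numbers are hypothetical.

from scipy import stats

# Hypothetical accuracies of three techniques over six data sets.
nb = [0.78, 0.81, 0.74, 0.80, 0.77, 0.83]
dt = [0.80, 0.79, 0.77, 0.84, 0.78, 0.85]
ann = [0.76, 0.82, 0.75, 0.81, 0.79, 0.84]

# Case 2 (one factor, two treatments): paired t-test and Wilcoxon test.
print(stats.ttest_rel(nb, dt))
print(stats.wilcoxon(nb, dt))

# Case 3 (one factor, more than two treatments): Friedman test.
print(stats.friedmanchisquare(nb, dt, ann))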
• It provides the researcher with a relational statement that can be directly tested in
a research study.
FIGURE 4.6
Generation of hypothesis in a research: a primary thought (not fully formed) leads to
research questions and a thought-through, well-formed idea, which yields the research
hypothesis.
TABLE 4.5
Transition from RQ to Hypothesis
RQ Corresponding Hypothesis
Is X related to Y? If X, then Y.
How are X and Y related to Z? If X and Y, then Z.
How is X related to Y and Z? If X, then Y and Z.
How is X related to Y under conditions Z and W? If X, then Y under conditions Z and W.
4. Write down the hypotheses in a format that is testable through scientific research:
There are two types of hypothesis—null and alternative hypotheses. Correct for-
mation of null and alternative hypotheses is the most important step in hypoth-
esis testing. The null hypothesis is also known as hypothesis of no difference and
denoted as H0. The null hypothesis is the proposition that implies that there is no
statistically significant relationship within a given set of parameters. It denotes the
reverse of what the researcher in his experiment would actually expect or predict.
Alternative hypothesis is denoted as Ha. The alternative hypothesis reflects that a
statistically significant relationship does exist within a given set of parameters. It
is the opposite of the null hypothesis and is only reached if H0 is rejected. The detailed
explanation of the null and alternative hypotheses is given in Section 4.7.5.
Table 4.5 presents the hypotheses corresponding to the given RQs.
Some of the examples to show the transition from an RQ to a hypothesis are stated below:
RQ: What is the relation of coupling between classes and maintenance effort?
Hypothesis: Coupling between classes and maintenance effort are positively related
to each other.
Example 4.1:
There are various factors that may have an impact on the amount of effort required to
maintain software. The programming language in which the software is developed
can be one of the factors affecting the maintenance effort. There are various program-
ming languages available, such as Java, C++, C#, C, Python, and so on. There is a
need to identify whether these languages have a positive, negative, or neutral effect
on the maintenance effort. It is believed that programming languages have a positive
impact on the maintenance effort; however, this needs to be tested and confirmed
scientifically.
Solution:
The problem and hypothesis derived from it is given below:
FIGURE 4.7
Steps in hypothesis testing: define the hypothesis (define the null hypothesis and the
alternative hypothesis) and derive conclusions (check the statistical significance of the
results).
The null hypothesis can be written in mathematical form, depending on the particular
descriptive statistic using which the hypothesis is made. For example, if the descriptive
statistic is used as population mean, then the general form of null hypothesis is,
Ho : µ = X
where:
µ is the mean
X is the predefined value
In this example, whether the population mean equals X or not is being tested.
There are two possible scenarios through which the value of X can be derived. This
depends on two different types of RQs. In other words, the population parameter (mean in
the above example) can be assigned a value in two different ways. The first reason is that
the predetermined value is selected for practical or proven reasons. For example, a software
company decides that 7 is its predetermined quality parameter for mean coupling. Hence,
all the departments will be informed that the modules must have a value of <7 for coupling
to ensure less complexity and high maintainability. Similarly, the company may decide
that it will devote all the testing resources to those faults that have a mean rating above 3.
The testers will therefore want to test specifically all those faults that have mean rating >3.
Another situation is where a population under investigation is compared with another
population whose parameter value is known. For example, from the past data it is known
that average productivity of employees is 30 for project A. We want to see whether the
average productivity of employees is also 30 for project B. Thus, we want to make an
inference about whether the unknown average productivity for project B is equal to the
known average productivity for project A.
The general form of alternative hypothesis when the descriptive parameter is taken as
mean (µ) is,
Ha : µ ≠ X
where:
µ is the mean
X is the predefined value
The above hypothesis represents a nondirectional hypothesis as it just denotes that there
will be a difference between the two groups, without discussing how the two groups differ.
The example is stated in terms of two popularly used methods to measure the size of soft-
ware, that is, (1) LOC and (2) function point analysis (FPA). The nondirectional hypothesis
can be stated as, “The size of software as measured by the two techniques is different.”
Whereas, when the hypothesis is used to show the relationship between the two groups
rather than simply comparing the groups, then the hypothesis is known as directional
hypothesis. Comparison terms such as “greater than,” “less than,” and so on are used in
the formulation of hypothesis. In other words, it specifies how the two groups differ. For
example, “The size of software as measured by FPA is more accurate than LOC.” Thus, the
direction of difference is mentioned. The same concept is represented by one-tailed and
two-tailed tests in statistical testing and is explained in Section 6.4.3.
One important point to note is that the potential outcome that a researcher is expecting
from his/her experiment is denoted in terms of alternative hypothesis. What is believed
to be the theoretical expectation or concept is written in terms of alternative hypothesis.
126 Empirical Research in Software Engineering
Thus, sometimes the alternative hypothesis is referred to as the research hypothesis. Now,
if the alternative hypothesis represents the theoretical expectation or concept, then what
is the reason for performing the hypothesis testing? This is done to check whether the
formed or assumed concepts are actually significant or true. Thus, the main aim is to check
the validity of the alternative hypothesis. If the null hypothesis is accepted, it signifies
that the idea or concept of the research is false.
There are various tests available in research for verifying a hypothesis.

FIGURE 4.8
Critical region (region of rejection).
FIGURE 4.9
Significance levels.
TABLE 4.6
A Sample Data Set
CBO for Faulty CBO for Nonfaulty
S. No. Modules Modules
1 45 9
2 56 9
3 34 9
4 71 7
5 23 10
6 9 15
Mean 39.6 9.83
$$H_a: \mu(\mathrm{CBO_{faulty}}) > \mu(\mathrm{CBO_{nonfaulty}}) \ \text{or} \ \mu(\mathrm{CBO_{faulty}}) < \mu(\mathrm{CBO_{nonfaulty}})$$
where:
µ1 is the mean of the first population (faulty modules)
µ2 is the mean of the second population (nonfaulty modules)
$$\sigma_d = \sqrt{\frac{\sum d^2 - \left(\sum d\right)^2/n}{n - 1}}$$
TABLE 4.7
T-Test Calculations
CBO for Faulty Modules   CBO for Nonfaulty Modules   Difference (d)   d²
45                       9                           36               1,296
56                       9                           47               2,209
34                       9                           25               625
71                       7                           64               4,096
23                       10                          13               169
9                        15                          −6               36
where:
n represents number of pairs and not total number of samples
d is the difference between values of two samples
Substituting the values of mean, variance, and sample size in the above formula, the
t-score is obtained as:
$$\sigma_d = \sqrt{\frac{\sum d^2 - \left(\sum d\right)^2/n}{n - 1}} = \sqrt{\frac{8431 - (179)^2/6}{5}} = 24.86$$
$$t = \frac{\mu_1 - \mu_2}{\sigma_d/\sqrt{n}} = \frac{39.66 - 9.83}{24.86/\sqrt{6}} = 2.93$$
As the alternative hypothesis is of the form H1: µ1 > µ2 or µ1 < µ2, the sampling
distribution is nondirectional. Let us take the level of significance (α) for the
two-tailed test as 0.05.
Step 5: Determine the significance value
The p-value at a significance level of 0.05 (two-tailed test) is considered, with df = 5.
From the t-distribution table, it is observed that the p-value is 0.032 (refer to Section
6.4.6 for the computation of p-values).
Step 6: Deriving conclusions
Now, to decide whether to accept or reject the null hypothesis, this p-value is compared
with the level of significance. As the p-value (0.032) is less than the level of significance
(0.05), the H0 is rejected. In other words, the alternative hypothesis is accepted. Thus,
it is concluded that there is statistical difference between the average of coupling
metrics for faulty classes and the average of coupling metrics for nonfaulty classes.
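The hand computation above can be cross-checked programmatically. A minimal sketch with SciPy, using the CBO values from Table 4.6:

import math
from scipy import stats

faulty = [45, 56, 34, 71, 23, 9]
nonfaulty = [9, 9, 9, 7, 10, 15]

d = [f - n for f, n in zip(faulty, nonfaulty)]
n = len(d)
sigma_d = math.sqrt((sum(x * x for x in d) - sum(d) ** 2 / n) / (n - 1))
t = (sum(faulty) / n - sum(nonfaulty) / n) / (sigma_d / math.sqrt(n))
print(f"sigma_d = {sigma_d:.2f}, t = {t:.2f}")  # 24.86 and 2.94 (2.93 in the text after rounding)
print(stats.ttest_rel(faulty, nonfaulty))       # paired t-test; p-value is approximately 0.032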
are tested to compare the performance of regression and machine learning techniques at
different severity levels of faults:
First degree: The researcher is in direct contact or involvement with the subjects
under concern. The researcher or software engineer may collect data in real-time.
For example, under this category, the various methods are brainstorming, inter-
views, questionnaires, think-aloud protocols, and so on. There are various other
methods as depicted in Figure 4.10.
Second degree: There is no direct contact of the researcher with the subjects during
data collection. The researcher collects the raw data without any interaction with
the subjects. For example, observations through video recording and fly on the wall
(participants taping their work) are the two methods that come under this category.
Third degree: There is access only to the work artifacts. In this, already avail-
able and compiled data is used. For example, analysis of various documents
produced from an organization such as the requirement specifications, fail-
ure reports, document change logs, and so on come under this category. There
are various reports that can be generated using different repositories such
as change report, defect report, effort data, and so on. All these reports play
an important role while conducting a research. But the accessibility of these
reports from the industry or any private organization is not an easy task. This
is discussed in the next subsection, and the detailed collection methods are
presented in Chapter 5.
The main advantage of the first and second degree methods is that the researcher has
control over the data to a large extent. Hence, the researcher needs to formulate and decide
on the data-collection methods in the experimental design phase.

FIGURE 4.10
Various data-collection strategies. First degree (direct involvement of software engineers):
inquisitive techniques (brainstorming and focus groups, interviews, questionnaires, and
conceptual modeling) and observational techniques (work diaries, think-aloud protocols,
shadowing and synchronized shadowing, and participant observation, i.e., joining the
team). Second degree (indirect involvement of software engineers): instrumenting
systems, and fly on the wall (participants taping their work).

The methods under these
categories require effort from both the researcher and the subject. For this reason, first
degree methods are more expensive than second or third degree methods. Third degree
methods are the least expensive, but the control over the data is minimal. This compro-
mises the quality of the data, as the correctness of the data is not under the direct control
of the researcher.
Under the first degree category, interviews and questionnaires are the easiest and most
straightforward methods. In interview-based data collection, the researcher prepares
a list of questions about the areas of interest. Then, an interview session takes place
between the researcher and the subject(s), wherein the researcher can ask various
research-related questions. Questions can be either open, inviting a broad range of
answers, or closed, offering a limited set of answers. The drawback of collecting data
through interviews and questionnaires is that they typically produce an incomplete
picture. For example, if one wants to know the number of LOC in a software program,
interviews and questionnaires will only provide general opinions and evidence, not
accurate information. Methods such as think-aloud protocols and work diaries can be
used for this strategy of data collection.
Second degree requires access to the environment in which participants or subject(s)
work, but without having direct contact with the participants. Finally, the third degree
requires access only to work artifacts, such as source code or bugs database or docu-
mentation (Wohlin 2012).
TABLE 4.8
Differences between the Types of Data Sets
S. No. Academic Industrial Open Source
1 Obtained from the projects Obtained from the projects Obtained from the projects
made by the students of developed by experienced and developed by experienced
some university qualified programmers developers located at different
geographical locations
2 Easy to obtain Difficult to obtain Easy to obtain
3 Obtained from data set that is Obtained from data set Obtained from data set
not necessarily maintained maintained over a long period maintained over a long period
over a long period of time of time of time
4 Results are not reliable and Results are highly reliable and Results may be reliable and
acceptable acceptable acceptable
5 It is freely available May or may not be freely It is generally freely available
available
6 Uses ad hoc approach to Uses very well planned Uses well planned and mature
develop projects approach approach
7 Code may be available Code is not available Code is easily available
8 Example: Any software Example: Performance Manage- Example: Android, Apache
developed in university such ment traffic recording (Lindvall Tomcat, Eclipse, Firefox, and
as LALO (Briand et al. 2001), 1998), commercial OO system so on
UMD (Briand et al. 2000), implemented in C++ (Bieman
USIT (Aggarwal et al. 2009) et al. 2003), UIMS (Li and Henry
1993), QUES (Li and Henry 1993)
university systems, industrial or commercial systems, and public or open source soft-
ware. The academic data is the data that is developed by the students of some univer-
sity. Industrial data is the proprietary data belonging to some private organization or a
company. Public data sets are available freely to everyone for use and does not require any
payment from the user. The differences between them are stated in Table 4.8.
It is relatively easy to obtain academic data, as it is free from confidentiality concerns.
However, the accuracy and reliability of academic data is questionable while conducting
research. This is because university software is developed by a small number of
inexperienced programmers and is typically not applicable in real-life scenarios. Besides
the university data sets, there is public or open
source software that is widely used for conducting empirical research in the area of soft-
ware engineering. The use of open source software allows the researchers to access vast
repositories of reasonable quality, large-sized software. The most important type of data is
the proprietary/industrial data that is usually owned by a corporation/organization and
is not publically available.
The usage of open source software has been on the rise, with products such as Android
and Firefox becoming household names. However, the majority of the software developed
across the world, especially high-quality software, still remains proprietary. This is
because, given the voluntary nature of developers in open source software, the attention
of the developers might shift elsewhere, leading to lack of understanding and poor
quality of the end product. For the same reason, there
are also challenges with timeliness of the product development, rigor in testing and
documentation, as well as characteristic lack of usage support and updates. As opposed
to this, the proprietary software is typically developed by an organization with clearly
demarcated manpower for design, development, and testing of the software. This allows
for committed, structured development of software for a well-defined end use, based on
robust requirement gathering. Therefore, it is imperative that the empirical studies in
software engineering be validated over data from proprietary systems, because the devel-
opers of such proprietary software would be the key users of the research. Additionally,
industrial data is better suited for empirical research because the development follows
a structured methodology, and each step in the development is monitored and docu-
mented along with its performance measurement. This leads to development of code that
follows rigorous standards and robustly captures the data sets required by the academia
for conducting their empirical research.
At the same time, access to the proprietary software code is not easily obtained. For most
of the software development organizations, the software constitutes their key intellectual
asset and they undertake multiple steps to guard the privacy of the code. The world’s most
valuable products, such as Microsoft Windows and Google search, are built around their
closely held patented software to guard against competition and safeguard their products
developed with an investment of billions of dollars. Even if there is appreciation of the role
and need of the academia to access the software, the enterprises typically hesitate to share
the data sets, leading to roadblocks in the progress of empirical research.
It is crucial for the industry to appreciate that the needs of the empirical research do not
impinge on their considerations of software security. The data sets required by the academia
are the metrics data or the data from the development/testing process, and does not com-
promise on security of the source code, which is the primary concern of the industry. For
example, assume an organization uses commercial code management system/test manage-
ment system such as HP Quality Center or HP Application Lifecycle Management. Behind
the scenes, a database would be used to store information about all modules, including all
the code and its versions, all development activity in full detail, and the test cases and their
results. In such a scenario, the researcher does not need access to the data/code stored in the
database, which the organization would certainly be unwilling to share, but rather specific
reports corresponding to the problem he wishes to address. As an illustration, for a defect
prediction study, only a list of classes with corresponding metrics and defect count would
be required, which would not compromise the interests of the organization. Therefore, with
mutual dialogue and understanding, appropriate data sets could be shared by the industry,
which would create a win-win situation and lead to betterment of the process. The key chal-
lenge, which needs to be overcome, is to address the fear of the enterprises regarding the
type of data sets required and the potential hazards. A constructive dialogue to identify the
right reports would go a long way towards enabling the partnership because access to the
wider database with source code would certainly be impossible.
Once the agreement with the industry has been reached and the right data sets have been
received, attention can shift to actually conducting the empirical research with the more
appropriate industrial data sets. The benefits of using the industrial database
would be apparent in the thoroughness of the data sets available and the consistency of
the software system. This would lead to more accurate findings for the empirical research.
data, which is implemented in the C++ programming language. Fault data for KC1 is
collected since the beginning of the project (storage management system) but that data
can only be associated back to five years (MDP 2006). This system consists of 145 classes
that comprise 2,107 methods, with 40K LOC. KC1 provides both class-level and method-
level static metrics. At the method level, 21 software product metrics based on product’s
complexity, size, and vocabulary are given. At the class level, values of ten metrics are
computed, including six metrics given by Chidamber and Kemerer (1994). Seven OO
metrics are taken in this study for analysis. In KC1, six files provide associations between
class/method and metric/defect data. In particular, there are four files of interest, the first
representing the association between classes and methods, the second representing asso-
ciation between methods and defects, the third representing association between defects
and severity of faults, and the fourth representing association between defects and specific
reason for closure of the error report.
First, defects are associated with each class according to their severities. The value of
severity quantifies the impact of the defect on the overall environment with 1 being most
severe to 5 being least severe as decided in data set KC1. The defect data from KC1 is
collected from information contained in error reports. An error could originate from the
source code, COTS/OS, or design, or may actually not be a fault. The defects produced from the
source code, COTS/OS, and design are taken into account. The data is further processed
by removing all the faults that had “not a fault” keyword used as the reason for closure of
error report. This reduced the number of faults from 669 to 642. Out of 145 classes, 59 were
faulty classes, that is, classes with at least one fault and the rest were nonfaulty.
In this study, the faults are categorized as high, medium, or low severity. Faults with
severity rating 1 were classified as high-severity faults. Faults with severity rating 2 were
classified as medium-severity faults and faults with severity rating 3, 4, and 5 as low-sever-
ity faults, as at severity rating 4 no class is found to be faulty and at severity rating 5 only
one class is faulty. Faults at severity rating 1 require immediate correction for the system
to continue to operate properly (Zhou and Leung 2006).
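The categorization rule described above is simple enough to state directly in code; a minimal sketch (the sample ratings at the end are hypothetical):

def severity_level(rating):
    # Map a KC1 severity rating (1-5) to the three levels used in the study.
    if rating == 1:
        return "high"
    if rating == 2:
        return "medium"
    return "low"  # ratings 3, 4, and 5

print([severity_level(r) for r in (1, 2, 3, 5)])  # hypothetical ratings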
Table 4.9 summarizes the distribution of faults and faulty classes at high-, medium-, and
low-severity levels in the KC1 NASA data set after preprocessing of faults in the data set.
High-severity faults were distributed in 23 classes (15.56%). There were 48 high-severity
faults (7.47%), 449 medium-severity faults (69.93%), and 145 low-severity faults (22.59%). As
shown in Table 4.9, majority of the classes are faulty at severity rating medium (58 out of
59 faulty classes). Figure 4.11a–c shows the distribution of high-severity faults, medium-
severity faults, and low-severity faults. It can be seen from Figure 4.11a that 22.92% of
classes with high-severity faults contain one fault, 29.17% of classes contain two faults, and
so on. In addition, the maximum number of faults (449 out of 642) is covered at medium
severity (see Figure 4.11b).
TABLE 4.9
Distribution of Faults and Faulty Classes at High-, Medium-, and Low-Severity
Levels
Level of Number of % of Faulty Number of % of Distribution
Severity Faulty Classes Classes Faults of Faults
FIGURE 4.11
Distribution of (a) high-, (b) medium-, and (c) low-severity faults.
1. Diversity in data: The variables or attributes of the data set may belong to different
categories such as discrete, continuous, discrete ordered, counts, and so on. If the
attributes are of many different kinds, then some of the algorithms are preferable
over others as they are easy to apply. For example, among machine learning tech-
niques, support vector machine, neural networks, and nearest neighbor methods
require that the input attributes are numerical and scaled to similar ranges (e.g., to
the [–1,1] interval). Among statistical techniques, linear regression and LR require
the input attributes to be numerical. The machine learning technique that can
handle heterogeneous data is DT. Thus, if the data is heterogeneous, one may
apply DT instead of other machine learning techniques (such as support vector
machine, neural networks, and nearest neighbor methods). A short illustrative
sketch follows this list.

FIGURE 4.12
Selection of data analysis methods based on the type of dependent variable. For a binary
dependent variable: statistical techniques (logistic regression, discriminant analysis) and
machine learning techniques (decision tree, support vector machine, artificial neural
network). For a continuous dependent variable: statistical techniques (linear regression,
ordinary least squares).
2. Redundancy in the data: There may be some independent variables that are redun-
dant, that is, they are highly correlated with other independent variables. It is advis-
able to remove such variables to reduce the number of dimensions in the data set.
Still, sometimes redundant information remains in the data. In this case, the
researcher should select the data analysis method carefully, as some methods will
perform more poorly than others. For example, linear regression, LR, and
distance-based methods suffer from numerical instabilities in the presence of
redundant variables and should therefore be avoided.
3. Type and existence of interactions among variables: If each attribute makes an
independent impact or contribution to the output or dependent variable, then
the techniques based on linear functions (e.g., linear regression, LR, support vec-
tor machines, naïve Bayes) and distance functions (e.g., nearest neighbor meth-
ods, support vector machines with Gaussian kernels) perform well. But if the
interactions among the attributes are numerous and complex, then DT and neural
networks should be used, as these techniques are specifically designed to model
such interactions.
4. Size of the training set: Selection of an appropriate method is based on the
trade-off between bias and variance. The main idea is to simultaneously minimize
bias and variance. Models with high bias result in underfitting (they fail to learn
the relationship between the dependent and independent variables), whereas models
with high variance result in overfitting (they learn the noise in the data).
Therefore, a good learning technique automatically adjusts the bias/variance
trade-off based on the size of the training data set. If the training set is small,
high bias/low variance classifiers should be preferred over low bias/high variance
classifiers. For example, naïve Bayes has high bias/low variance (it is simple and
assumes independence of variables), whereas k-nearest neighbor has low bias/high
variance. But as the size of the training set increases, low bias/high variance
classifiers show better performance (they have a lower asymptotic error) compared
with high bias/low variance classifiers, because high-bias (linear) classifiers are
not powerful enough to provide accurate models; a small illustration follows this list.
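The behavior described in point 4 can be observed empirically. The sketch below is a
minimal illustration, not part of the study reported in this chapter: it uses
scikit-learn's GaussianNB (high bias/low variance) and KNeighborsClassifier (low bias/high
variance) on synthetic data and prints the test accuracy of both classifiers as the
training set grows.

    # Sketch: high bias/low variance (naive Bayes) versus low bias/high
    # variance (k-nearest neighbor) as the training set size increases.
    # Synthetic data; for illustration only.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1000, random_state=0)

    for n in (50, 200, 1000, 4000):  # increasing training-set sizes
        nb = GaussianNB().fit(X_train[:n], y_train[:n])
        knn = KNeighborsClassifier().fit(X_train[:n], y_train[:n])
        print(f"n={n:4d}  naive Bayes={nb.score(X_test, y_test):.3f}  "
              f"kNN={knn.score(X_test, y_test):.3f}")
    # Typically, naive Bayes is competitive for small n, while kNN catches up
    # or overtakes as n grows (lower asymptotic error).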
TABLE 4.10
Data Analysis Methods Corresponding to Machine Learning Tasks

S. No. | Machine Learning Tasks | Data Analysis Methods
Besides the four above-mentioned important aspects, there are some other considerations
that help in making a decision to select the appropriate method. These considerations are
sensitivity to outliers, ability to handle missing values, ability to handle nonvector data,
ability to handle class imbalance, efficacy in high dimensions, and accuracy of class prob-
ability estimates. They should also be taken into account while choosing the best data
analysis method. The procedure for selection of appropriate learning technique is further
described in Section 7.4.3.
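Several of these considerations can be checked mechanically before a method is chosen.
The following sketch is a hypothetical example using pandas; the file name and column
names are assumptions, not part of any data set discussed above. It screens for missing
values, class imbalance, and highly correlated (redundant) attribute pairs, and scales
the attributes to the [-1, 1] interval mentioned in point 1.

    # Sketch: basic data-set diagnostics before selecting an analysis method.
    # The file and column names below are hypothetical.
    import numpy as np
    import pandas as pd

    df = pd.read_csv("metrics.csv")            # one row per class/module
    metrics = df.drop(columns=["faulty"])      # 'faulty' is a binary label

    # Missing values per attribute.
    print(df.isna().sum())

    # Class imbalance: share of faulty versus nonfaulty classes.
    print(df["faulty"].value_counts(normalize=True))

    # Redundancy: attribute pairs with very high absolute correlation.
    corr = metrics.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    print(upper.stack().loc[lambda s: s > 0.9])

    # Scale attributes to the [-1, 1] interval for SVM, neural network,
    # and nearest neighbor methods.
    scaled = 2 * (metrics - metrics.min()) / (metrics.max() - metrics.min()) - 1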
The methods can also be classified into two categories: parametric and nonparametric. This
classification is made on the basis of the assumptions about the population under study.
Parametric methods are those for which the population is approximately normal, or can be
approximated as normal using a normal distribution. These methods are generally more
interpretable and faster, but they are less accurate when their distributional assumptions
do not hold. Some of the parametric methods include LR, linear regression, support vector
machine, principal component analysis, k-means, and so on. Nonparametric methods, in
contrast, are those for which the data has an unknown distribution that cannot be assumed
to be normal. Nonparametric methods are commonly used in statistics to model and analyze
ordinal or nominal data with small sample sizes. The data cannot even be approximated as
normal if the sample size is so small that the central limit theorem cannot be applied.
Nowadays, the usage of nonparametric methods is increasing for a number of reasons. The
main reason is that the researcher is not forced to make assumptions about the population
under study, as is done with a parametric method. Thus, many of the nonparametric methods
are easy to apply, although they are generally less interpretable and slower, and often
more accurate when the parametric assumptions are violated. Some of the nonparametric
methods are DT, nearest neighbor, neural network, random forest, and so on.
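In practice, the choice between a parametric and a nonparametric procedure is often made
by first checking normality. The sketch below is illustrative, with made-up sample values:
it applies the Shapiro-Wilk normality test and then selects either the parametric t-test
or its nonparametric counterpart, the Mann-Whitney U test, from scipy.

    # Sketch: choosing a parametric or nonparametric two-sample test based
    # on a Shapiro-Wilk normality check. Sample values are hypothetical.
    from scipy.stats import mannwhitneyu, shapiro, ttest_ind

    group_a = [12, 15, 14, 10, 18, 11, 13, 16, 14, 12]  # e.g., faults per module
    group_b = [20, 22, 19, 25, 18, 23, 21, 24, 20, 22]

    # Shapiro-Wilk tests the null hypothesis that a sample is drawn from a
    # normal distribution; a small p-value rejects normality.
    normal = all(shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

    if normal:
        stat, p = ttest_ind(group_a, group_b)     # parametric test
    else:
        stat, p = mannwhitneyu(group_a, group_b)  # nonparametric alternative
    print(f"parametric={normal}, statistic={stat:.3f}, p-value={p:.4f}")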
Exercises
4.1. What are the different steps that should be followed while conducting experi-
mental design?
4.2. What is the difference between the null and the alternative hypothesis? What is
the importance of stating the null hypothesis?
4.3. Consider the claim that the average number of LOC in a large-sized software is
at most 1,000 SLOC. Identify the null hypothesis and the alternative hypothesis
for this claim.
4.4. Discuss various experiment design types with examples.
4.5. What is the importance of conducting an extensive literature survey?
4.6. How will you decide which studies to include in a literature survey?
4.7. What is the difference between a systematic literature review and a more general
literature review?
4.8. What is a research problem? What is the necessity of defining a research problem?
4.9. What are independent and dependent variables? Is there any relationship
between them?
4.10. What are the different data-collection strategies? How do they differ from one
another?
4.11. What are the different types of data that can be collected for empirical research?
Why is access to industrial data difficult?
4.12. Based on what criteria can the researcher select the appropriate data analysis
method?
Further Readings
The book provides a thorough and comprehensive overview of the literature review
process:
A. Fink, Conducting Research Literature Reviews: From the Internet to Paper. 2nd edn.
Sage Publications, London, 2005.
A comprehensive reference on statistical hypothesis testing is:
E. L. Lehmann and J. P. Romano, Testing Statistical Hypotheses, 3rd edn., Springer,
Berlin, Germany, 2008.
A classic paper that provides techniques for collecting valid data, which can be used to
gather more information on the development process and to assess software methodologies: