
MILLENNIUM INSTITUTE FOR SCIENCE AND MANAGEMENT STUDIES

(MISAMS)

Self-Study Materials

Course Code: UCCU2105

Course Unit: Research Methodology

By

MR. JOHN TABAN LUKA

SEPTEMBER, 2022
COURSE OUTLINE
Mission statement
Millennium Institute seeks to provide an outstanding holistic education to students from all
backgrounds through applied and theoretical research to generate solutions to national and
regional challenges.
Course Code: UCCU2105
Course Title: Research Methodology (Methods)
Tutor: Mr. John Taban Luka
Email:[email protected]
Tel: +211(0)929766623
Goal
This course aims to equip students with basic knowledge and skills in social and educational
research.
Course description
The purpose of this course is to develop learners’ understanding of the methods of
undertaking research studies with a focus on economic problems in a society. This course
examines different methods of acquiring knowledge, the role of educational research, the
identification of a research problem and the stating of research questions. Also, the review of
literature and the meaning,
purpose and principles of research designs and the measurement design will be examined. It
further acquaints students with the methods of data collection and analysis; descriptive and
inferential statistics, interpretation of data and proposal and research report writing. General
application of the use of APA will be emphasized.
Learning outcomes
By the end of this course, the students should be able to:
1) Define the term research
2) Explain the different methods of acquiring knowledge
3) Describe the main stages in research process
4) Analyse the major research designs
5) Compare the different techniques of probability and non-probability sampling
6) Describe different data collection instruments
7) Discuss the main techniques of analysing data
8) Report and utilize research data
9) Identify theoretical issues and practical issues of relevance to business and educational
research
10) Apply the research process in daily tasks
11) Use the APA version six referencing style in academic writings and research
12) Write a research proposal on a topic of choice
Week one lecture
Definitions of Research, the traditional methods of acquiring knowledge, and classification of
Research by purpose.
Week two lecture
The Scientific Method, Basic elements in Research, Paradigms in Social science research.
Week three lecture
Nature of Research, the Research process.
Week four lecture
Review of related literature, research design, criteria for selecting research designs.
Week five lecture
Quantitative research designs, qualitative research designs.
Week six lecture
Sample and sampling designs, probability sampling techniques, and non-probability sampling
techniques.
Week seven lecture
Data collection instruments: questionnaire, interviews, observation schedule, focused group
discussions.
Week eight lecture
Validity and Reliability.
Week nine lecture
Research proposal writing, quick reference to APA version six.
Week ten lecture
Ethical considerations in social research and Revisions
UNIT ONE
Nature of Educational research
1.1 Introduction
This unit examines the nature of educational research. The learner is exposed to the
concept of research and its applicability in the field of education, the methods of acquiring
knowledge, and the properties of scientific research.
1.2 The meaning of Educational Research
Ogula (2009) defines it as a systematic, controlled and empirical investigation of natural
or social phenomena. Leedy (1989) defines research as a systematic investigation into and study
of materials and sources in order to establish facts and reach new conclusions: an endeavor to
discover new facts or collate old ones.
1.3 Different Methods of Acquiring Knowledge
1.3.1 Authority
One of the most common sources of knowledge is the authorities in different spheres of
knowledge. In many societies, people rely on the wisdom of elders who are recognised as
having better understanding of the world than ordinary members of society. Thus, statements
and pronouncements by experts in various areas of knowledge are seldom challenged or
questioned.
Examples of such people are elderly people in rural areas, heads of religious
organisations and dictators. A major weakness of this method is that authorities in most cases
tend to make false statements in order to justify and preserve their status. However, as Dawes
(1994), quoted by Kerlinger and Lee (2000), has said: "We must take a large body of facts and
information on the basis of authority. Thus, it should not be concluded that the method of
authority is unsound; it is unsound only under certain circumstances" (p. 6).

1.3.2 The Mystical Method


In this method, the correctness of the knowledge is assumed to reside in the supernatural
source. The knowledge producers such as traditional medicine practitioners and diviners are
authorities because they claim that they are able to receive and decipher messages from ancestral
spirits. To convince people that they actually communicate with the spirits of dead ancestors,
they use rituals, ceremonies and unusual language. Like the first method, this approach is based
on faith.

1.3.3 Tenacity (Custom and Tradition)


The third method of knowing is the method of tenacity. Many people tend to believe
things because people in their society regard them as the truth, even when there are clearly
conflicting facts. They even go to the extent of inferring “new” knowledge from beliefs that
may be false. They know things to be true because they hold firmly to what they have always
known to be true. As a result, some people hold to certain things as true because most people
in society assume those things to be true.
1.3.4 Personal Experience
People tend to believe that what is in their minds about a social encounter is generally true.
For example, a person who has been swindled by a policeman believes that most policemen are
dishonest.
Limitations to using Everyday Methods of Knowing as sources of Knowledge
- People generally make no attempt to control any external sources of influence when
trying to explain the cause of an event.
- There is no questioning or testing of information. People tend to accept things simply at
face value.
- The methods are individualistic and subjective. People can be biased due to a natural
inclination to protect their self-esteem.
- They make knowledge static.
- Tradition makes it difficult to accept new knowledge and dampens the desire to question
existing practices.
- Personal experience depends on what we have observed and how we have interpreted it,
but we can and do make mistakes in our observations and interpretations.
- Some authoritative statements are unverifiable.
1.3.5 Reasoning or rationalistic method
Reasoning is the second category of methods used by human beings to understand their
environment. This method is also called the a priori method (Kerlinger and Lee, 2000, p. 7) or the
method of intuition (Graziano and Raulin, 1993). By reasoning is usually meant the ability to
expound one’s thoughts logically and to make conclusions. Rationalists believe that knowledge
is innate in human beings and pure reason is sufficient to produce verifiable knowledge.
Kerlinger and Lee (2000) have criticised this method as a way of acquiring knowledge
because of the assertion that a priori propositions "agree with reason and not necessarily with
experience." Whose reason? (p. 7). There are three types of reasoning: deductive reasoning,
inductive reasoning and inductive-deductive reasoning.
Inductive Reasoning
Inductive reasoning involves formulation of generalisations based on observation of a
limited number of specific events. In inductive reasoning, the researcher begins by observing
particular instances of a situation. He/she then draws conclusions from what he/she has
observed.
Inductive reasoning makes it possible to make a conclusion or probabilistic explanation
concerning the whole class, e.g. private secondary schools, based on observations of a number of
the elements of that class. The grounds for conclusion are provided by the possession of certain
identifiable features by the whole class.
5

Examples
Every national school in the sample is headed by a principal. Therefore, all national
schools are headed by principals. Nerima, Kerubo, Mutua, Njeri and Onyango who teach
mathematics studied mathematics at college. Therefore, all mathematics teachers studied
mathematics at college.
The major limitation of inductive explanations, in comparison to universal laws, is that certain
conclusions cannot be drawn about specific cases (Nachmias and Nachmias, 1992, p. 11).
Deductive Reasoning
This type of reasoning involves arriving at specific conclusions based on an a priori or self-
evident proposition.
Examples
If Polong teaches mathematics, then he studied mathematics at college.
Polong teaches mathematics. Therefore, Polong studied mathematics at college (Did he?).
Human beings reproduce. Amina is a human being. Therefore, Amina reproduces (Does she?). In
deductive reasoning the premises lead to the conclusion; the conclusion is guaranteed to be true
only if the premises are true.
1.4 The scientific method
What is science?
The word science is derived from the Latin noun scientia (meaning knowledge) and the
verb scire (meaning to know).
The scientific method of acquiring knowledge is a systematic process of investigating a research
problem following some principles.
Aims of Science
The basic aim of science is to generate theories. Theories are tentative explanations.
More specifically, the aims of science are to describe, to explain and to predict. The first step in
knowing is the description of the object or situation of the study as accurately as possible.
During the second step, an explanation of why a given event or behaviour has taken place is
given, i.e. the relationship between the described facts as expressed. The explanation of an
event or behaviour should allow the social scientist to make a prediction of some events under
well-defined conditions.
1.4.1 Properties of Scientific Research
The following are the main properties of scientific research:
1. Scientific research is empirical. Only knowledge gained through experience or the senses –
touch, sight, hearing, smell or taste - is acceptable. The empirically oriented social scientist
goes into the social world and makes observations about how people live and behave.
However, Nachmias and Nachmias (1992) cautioned against interpreting empiricism in the
narrow sense of the five senses: touch, smell, taste, hearing and sight.
2. It is systematic and logical. Observations are done systematically one at a time, starting with
description, explanation and finally prediction. In addition, the correct order must be
followed.
3. It is replicable. Since the observation is objective, anyone carrying out a study in the same
circumstances should come up with the same findings.
4. It is self-correcting. It has in-built mechanisms to protect investigators from error as far as is
humanly possible. In addition, research procedures and results are open to public scrutiny by
other researchers.
5. It is question-oriented. It is directed by a research question or problem and several specific
questions. These questions might spring from observation of natural or social phenomena, a
practical concern or gaps in what is reported in previous research studies and other scholarly
literature.
6. It is public. Because findings from scientific research may be used to make decisions that affect
people and society at large, scientific research must be open to public scrutiny, and
examination and criticism by other scholars.
7. It is cyclical. It proceeds in stages, starting with the research problem, followed by research
design, measurement design, data collection, data analysis and generalisations or tentative
answers and starts all over again by asking new questions for further research.
8. It is self-critical. It critically examines its strengths, limitations and weaknesses, and discovers
and reports its validity and reliability.
9. Researchers strive to overcome their personal biases as much as possible. They do this by
clearly defining the phenomena being studied and using research procedures to study those
phenomena that other scholars will agree to as accurate.
10. It is objective. Empirical evidence is assumed to exist outside of scientists themselves.
However, in the usual sense of the term (to mean observation that is free from emotion,
conjecture, or personal bias), objectivity is rarely, if ever, possible (Singleton and Straits,
1999)
11. Quantitative researchers strive to make their findings generalisable to the target population.
Generalisability is achieved through the selection of representative samples.
12. One major goal of conducting scientific research is to accumulate evidence over time that can
be used to validate or disapprove commonly held notions about social reality. For these
reasons, researchers attempt to design their studies in such a way that other researchers can
replicate their research findings. Unlike everyday methods of knowing, the entire process of
inquiry can be reproduced by other researchers.
1.4.2 The role of research in an education programme
a- Educational research aims at providing general explanations to "Why?" questions. For instance,
educational researchers ask for an explanation of why a given behaviour has taken place.
b- Educational research gives an accurate account of the characteristics of a particular
programme.
c- Research involves the systematic collection of information to describe, predict, control or to
explain the phenomena involved.
d- It identifies constraints to the implementation of an education programme.
e- Research can be used to determine actions and innovations that have an impact on the target
group.
f- Research provides useful data for programme planning.
g- All research is conducted under a general ethical code. Major professional associations have
specific codes of ethics. These codes provide the basic rules for scholarship in a given field.
UNIT TWO
APPROACHES IN EDUCATIONAL RESEARCH
1- Introduction
This unit examines the research approaches in education. The learner will be exposed to various
methods applied in classification of educational research. It sheds light on the major and
competing research paradigms namely qualitative, quantitative and mixed methods.
2.1 Classification of research by purpose


According to purpose, research has been divided into two types: basic research and applied
research.
• Basic research is directed towards an increase in knowledge. When successful, basic research
results in a fuller understanding of the subject matter under study and the generation of theories.
In basic research, the primary aim of the investigator is not to produce data for practical use, but
to enhance understanding of the subject matter under study.
• Applied research is directed towards practical applications of knowledge and, when
successful, results in directives for the development of blueprints.
In applied research, the primary aim of the investigator is to generate knowledge which is of
immediate practical utility.
The following are variants of applied research: action research, operations research, and
research and development.
Paradigms in Social Science Research
Cryer (1996) defined a research paradigm as “a school of thought or a way of thinking about the
nature of truth as it can be realised from a piece of research”. Bogdan and Biklen (1998) defined
a paradigm as “a loose collection of logically related assumptions, concepts, or propositions that
orient thinking and research”. According to Kuhn (1962), “A paradigm is a dominant way of
conceptualising a phenomenon, of approaching it methodologically, and of looking for solutions
to research problems".
A paradigm is thus a way of conducting research by following certain systematic procedures, from
the way the problem is stated to how the design is chosen and the data are collected, analysed
and the report is written.
There are quantitative paradigms and qualitative paradigms: The quantitative researcher in social
science borrows the methods used by natural scientists to conduct research. There is a belief that
the social world has a definite pattern that can be discovered by a scientist through research
where the researcher is far removed from the phenomenon being studied.
The qualitative researcher believes that the social world is complex and to study it there is a need
to be involved. In this paradigm the researcher is part of what is being studied. The researcher is
actually the instrument.
Differences between Qualitative and Quantitative research
Purpose. Qualitative: to understand & interpret social interactions. Quantitative: to test
hypotheses, look at cause & effect, & make predictions.
Group studied. Qualitative: smaller & not randomly selected. Quantitative: larger & randomly
selected.
Type of data collected. Qualitative: words, images, or objects. Quantitative: numbers and
statistics.
Type of data analysis. Qualitative: identify patterns, features, themes. Quantitative: identify
statistical relationships.
Objectivity and subjectivity. Qualitative: subjectivity is expected. Quantitative: objectivity is
critical.
Role of researcher. Qualitative: the researcher & their biases may be known to participants in the
study, & participant characteristics may be known to the researcher. Quantitative: the researcher
& their biases are not known to participants in the study, & participant characteristics are
deliberately hidden from the researcher (double-blind studies).
Results. Qualitative: particular or specialised findings that are less generalizable. Quantitative:
generalizable findings that can be applied to other populations.
Scientific method. Qualitative: exploratory or bottom-up; the researcher generates a new
hypothesis and theory from the data collected. Quantitative: confirmatory or top-down; the
researcher tests the hypothesis and theory with the data.
View of human behaviour. Qualitative: dynamic, situational, social, & personal. Quantitative:
regular & predictable.
Most common research objectives. Qualitative: explore, discover, & construct. Quantitative:
describe, explain, & predict.
Focus. Qualitative: wide-angle lens; examines the breadth & depth of phenomena. Quantitative:
narrow-angle lens; tests specific hypotheses.
Nature of observation. Qualitative: study behaviour in a natural environment. Quantitative: study
behaviour under controlled conditions; isolate causal effects.
Nature of reality. Qualitative: multiple realities; subjective. Quantitative: single reality; objective.
Final report. Qualitative: narrative report with contextual description & direct quotations from
research participants. Quantitative: statistical report with correlations, comparisons of means, &
statistical significance of findings.

Mixed methods research Paradigm


Mixed methods studies are products of the pragmatist paradigm and combine the qualitative and
quantitative approaches within different phases of the research process (Tashakkori & Teddlie,
2008). Mixed methods research combines both quantitative and qualitative designs and methods,
uses sequential, concurrent and transformative inquiry strategies, and data can be collected
simultaneously or sequentially, depending upon the design.
Designs under mixed methods paradigm
Creswell (2009) identified the following designs:
Sequential:
Creswell (2009) asserts that sequential mixed methods procedures are those in which the
researcher seeks to elaborate on or expand on the findings of one method with another method.
This may involve beginning with a qualitative interview for exploratory purposes and following
up with a quantitative survey method with a large sample so that the researcher can generalise
the results and vice versa.
Concurrent mixed method procedures
These are those in which the researcher converges or merges quantitative and qualitative data in
order to provide a comprehensive analysis of a research problem. In this design, the investigator
collects both forms of data at the same time and then integrates the information in the
interpretation of the overall results. Also in this design the researcher may embed one smaller
form of data within another larger data collection in order to analyse different types of questions.
Transformative mixed methods
These are those in which the researcher uses a theoretical lens as an overarching perspective
within a design that contains both quantitative and qualitative data. This lens provides a
framework for topics of interest, methods for collecting data and outcomes or changes anticipated
by the study. Within this lens could be data collection methods that involve a sequential or
concurrent approach.
How methods can be mixed
Types of mixing and comments:
- Two types of research question: one fitting a quantitative approach and the other qualitative.
- The manner in which the research questions are developed: preplanned (quantitative) versus
participatory/emergent (qualitative).
- Two types of sampling procedure: probability versus non-probability.
- Two types of data collection procedures: survey or questionnaires (quantitative) versus focus
groups (qualitative).
- Two types of data: numerical versus textual (or visual).
- Two types of data analysis: statistical versus thematic.
- Two types of conclusions: objective versus subjective interpretations.

Planning mixed methods procedures


Timing: no sequence (concurrent); sequential, qualitative first; or sequential, quantitative first.
Weighting: equal, qualitative, or quantitative.
Mixing: integrating, connecting, or embedding.

Required Researcher Skills


Knowledge of various research methods; an understanding of the assumptions underlying each
research method; a working knowledge of analytic procedures and tools related to both
quantitative and qualitative research; and the ability to understand and interpret results from the
different methods.
QUESTIONS FOR DISCUSSION
1. Describe four everyday methods of knowing.
2. Discuss the disadvantages of the mystical source of knowledge.
3. Describe three methods of reasoning.
4. What is science? Describe the aims of science.
5. Describe four characteristics of scientific research.
6. Describe the major phases in the scientific method.
7. What is mixed methods research?
8. What are the advantages of adopting mixed methods research?
UNIT THREE
BASIC ELEMENTS IN RESEARCH
3.0 Introduction
This unit introduces the learners to the basic elements in research. The learners should be able to
describe concepts, different variables and the stages in a research process.
3.1 Concepts
A concept is an abstraction formed by generalisation from particulars (Kerlinger, 1993). For
example, achievement, intelligence, learning and attitude are common concepts in education.
3.2 Variables
A variable is a type of quantity that may take on more than one value. A variable can be
measured and can assume different values or scores for different people. Examples of important
variables in social sciences are sex, level of education, religious affiliation, social class,
intelligence and achievement. For example, “sex” is a variable because it can be differentiated
by two distinct values, male and female. Similarly, level of education can be differentiated by
five distinct values, no schooling, some primary education, some secondary education,
completed secondary education, and college or university education.
3.3 Nominal and Ordered variables
a) Nominal variables are variables that merely indicate whether an attribute is present or absent.
They do not indicate the magnitude of the attribute.
Examples: Gender, Race, Ethnic group, Political affiliation
b) Ordered variables indicate how much of an attribute is present.
Examples: Class rank, Age, Height, Weight, Temperature, Income.
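As a minimal sketch of the distinction (assuming the pandas library is available; the values below are illustrative), a nominal variable can be stored as an unordered category and an ordered variable as an ordered category:

import pandas as pd

# Nominal variable: categories have no inherent order or magnitude
gender = pd.Categorical(["male", "female", "female", "male"],
                        categories=["male", "female"], ordered=False)

# Ordered variable: categories carry a meaningful rank
education = pd.Categorical(
    ["no schooling", "some primary education", "completed secondary education"],
    categories=["no schooling", "some primary education", "some secondary education",
                "completed secondary education", "college or university education"],
    ordered=True)

print(gender.categories)                      # no ordering implied
print(education.min(), "<", education.max())  # ordering is meaningful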
3.4 TYPES OF VARIABLES
3.4.1 Independent and Dependent Variables
An independent variable has been defined as the presumed cause of the dependent (outcome)
variable; thus the dependent variable is the expected outcome of the independent variable. The
independent variable is also called the predictor variable. E.g. the effect of alcohol consumption
on household conditions, or the effect of distance on price.
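As a rough sketch of the second example (the distance and price figures are hypothetical, and the example assumes the SciPy library is available), distance is treated as the independent variable and price as the dependent variable:

from scipy import stats

# Hypothetical data: distance to the market (km) and transport price (currency units)
distance = [2, 5, 8, 12, 15, 20]      # independent (predictor) variable
price = [50, 80, 110, 160, 190, 240]  # dependent (outcome) variable

result = stats.linregress(distance, price)
print(f"Each extra kilometre is associated with a price change of {result.slope:.2f}")
print(f"Strength of the linear relationship: r = {result.rvalue:.3f}")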
3.4.2. Extraneous Variables
These are variables that can influence the effect of the independent variables but are unknown or
not controlled by the researcher. For example, a researcher might be interested in finding out the
effect of a textbook on student achievement. The teachers might have a negative attitude towards
the book. This variable may be controlled by selecting subjects at random, assigning subjects to
groups at random and assigning treatments to groups at random. Three requirements are
necessary for one to say that an independent variable caused a dependent variable.
1. The independent and dependent variables should co-vary.
2. There should be a specific sequence of cause and effect.
3. The independent variable should be the real cause of the dependent variable.
3.4.3 Intervening or mediating variables


An intervening or mediating variable is an unobservable attribute or characteristic that influences
the dependent variable apart from the independent variable. Examples are hostility and anxiety.
3.5 Stages in a research process
Research in education is a process which consists of interrelated activities. The following are the
main stages in the research process:-
3.5.1 Selection and definition of the research problem
 Selecting the topic for research and identification of the research problem
 Reviewing related literature
 Formulating the research problem
 Formulating specific research questions and hypotheses
 Defining concepts
3.5.2 Selection of the research design and methodology
 Choice of research design
 Description of the sample and sampling procedures
a- Measurement design
 Identification of data collection procedures
 Construction of research instruments
3.5.3 Data Collection
 Collection of data using a variety of research instruments
3.5.4 Data analysis
 Data organisation
 Data processing
 Statistical and qualitative data analysis
3.5.5 Generalisation
 Interpretation of data
 Conclusions and recommendations
 Production of the research report
3.6 Deciding on the topic
• A potential research situation arises when the following three conditions exist:
there is an intellectual challenge which can be addressed through the collection and analysis of
data, or a perceived discrepancy exists between what is and what should be; there is a need to
understand, explain, predict or control phenomena; and a review of literature shows that the
research problem has not been addressed. Therefore, there is a need to conduct research on the
question.
A good topic should have both independent and dependent variables; it should exclude phrases
such as "investigation on", "examination of" and "analysis of"; and it should not exceed 20 words.
Kerlinger (2000) defines a research problem as a sentence that interrogates the relationship
between two or more variables. The answer you give provides what is being sought in the
research, which requires a thorough investigation. In short, a research problem is an area of
concern where there is a gap in the knowledge base needed for professional practice. It is a
question of interest and a challenge which can be answered through the collection, analysis and
interpretation of data (Ogula, 2009).
3.7 Sources of a research problem
Previous research findings in professional journals, reports, seminars and conferences; personal
observations, experiences and the media; existing problems in the work place; technological and
scientific advancements; replication; and discussion with experts.
3.8 Characteristics of a research problem
A research problem should be interesting to you; it should have practical value to you, your work
place and your community; it should not be over-researched; it should be within your experience
and expertise; it should be one that can be finished within the allocated time; and it should not
carry legal or moral impediments.
3.9 Review of Related Literature
A review of literature is a broad, comprehensive, in-depth, systematic and critical review of
scholarly publications, including unpublished scholarly print materials, audiovisuals and
personal communications. The literature review is what is read before a study is conducted to
ensure that it is based on a gap in knowledge to be filled by the study.
3.9. 1 Purpose of literature review
To find out the research gap; to become familiar with authors in the field; to select research
studies in my field; to identify the most appropriate research methods to be used in my study; and
to avoid duplicating the study: by reading what others have done, I will be able to avoid repeating
them.
3.9.2 Stages in developing literature review
• Stage 1: Identify a research topic.
• Stage 2: Review secondary sources to get an overview of the topic.
• Stage 3: Identify primary sources to search.
• Stage 4: Conduct searches.
• Stage 5: Organise information.
• Stage 6: Evaluate the research reports.
• Stage 7: Write the literature review.
3.9.3 Materials to include in literature review
Mention the problem being addressed; state the central purpose of that research; briefly state
information about the population, sample, sampling techniques and data collection instruments;
include findings, conclusions and recommendations; and provide your own interpretation of the
other researchers' findings as well as a summary. While the literature review presents other
people's ideas, your voice should remain visible. When paraphrasing a source that is not your
own, be sure to represent the author's opinions accurately and in your own words.
In short, each reviewed study should be summarised in terms of the problem, the research
questions and hypotheses if applicable, the research design and methodology, a description of
the population and sample, the instruments used, the method(s) of data analysis, and the
findings, conclusions and recommendations.
3.9.4 Example Literature Review.
After you have identified a publication for use in your proposed study, you should abstract the
following information on a piece of paper. Name(s) of author(s), date of publication, name of
book, journal etc. in which the reviewed publication appears, place of publication and name of
publisher. For example: Bless, C. and Achola, P. (1988). Fundamentals of Social Research
Methods: An African Perspective. Lusaka: Government Printing Department.
Example two
Juma (1994) conducted a study of secondary school students’ achievement in Kiswahili. The
researcher administered a forty-item achievement test to a sample of two thousand form four
students from fifty secondary schools in Kenya. He analysed data by the use of percentages,
mean scores, standard deviations and a simple analysis of variance. Results showed a
significant relationship between teachers’ attitudes towards Kiswahili and students’ achievement.
3.9.6 Types of Literature Review
Historical reviews break the literature down into phases or stages of development. Thematic
reviews are structured around different themes. Theoretical reviews trace theoretical
developments in a particular study area. Empirical reviews attempt to summarise the findings of
research studies.
3.9. 7 Challenges facing Literature review
Copying (plagiarism), poor citation, use of very old literature, citing irrelevant literature, and
criticising rather than critiquing.
3.9. 8 Research Questions
Research questions refer to questions which the researcher would like answered by undertaking
the study (Mugenda & Mugenda, 2003). The difference between research questions and
objectives is that a research question is stated in question form while an objective is a statement
form. In order to include both objectives and research question in a research proposal, the
objectives should be broader and the research question more specific. Research questions are
the ones that guide the study.
3. 9. 9 Hypothesis
A hypothesis is a tentative answer to a research problem, written in declarative sentence form.
There are two types of hypothesis:
a- The null hypothesis, which states that no relationship or difference exists (e.g. "There is no
significant relationship between teachers' attitudes towards Kiswahili and students'
achievement").
b- The alternative hypothesis, which states that a relationship or difference does exist.
3.9.10 Theoretical Framework
Every study must be anchored within a theory in the discipline, and students are expected to
review theories that are related to the study they are undertaking. It is suggested that in the
theoretical framework section, which is in chapter one, the student reviews a theory related to
his/her study. There is a tendency for students to review even other theories which are in the
discipline but not related to the study.
3.10 Conceptual Framework
It is a framework usually developed by the researcher to demonstrate the interrelationships
between variables in the study. The relationship is usually presented graphically or
diagrammatically and is usually supported by an explanation. The purpose of the conceptual
framework is to help the reader to quickly see the proposed relationship.
UNIT FOUR
RESEARCH DESIGN AND STRUCTURE OF AN EDUCATIONAL RESEARCH
4.1 Introduction
This unit examines the design and the structure of educational research. It explores the major
research designs namely: experimental and non-experimental designs
4.2 Research Designs
According to Ogula (2009), a research design is a strategy for planning and conducting a study.
It is a blueprint that guides the planning and implementation of the research (i.e. data
collection and analysis). According to Kangete, Wakahiu and Karanja (2016), a research design
is the fundamental framework that holds the research venture together.
• In quantitative research, research designs are specified before conducting research and
cannot be changed once research has started.
• However, in qualitative research, designs are more flexible.
• Researchers are free to change the research design while the research is being carried out.
4.3 Designs under the quantitative research paradigm
4.3.1 EXPERIMENTAL DESIGNS
Experimental designs are used to study cause and effect relationships among two or more
variables. The main difference between a true experimental design and other designs is the fact
that research units are assigned to the treatment and control conditions at random.
The main purpose of experimental research is to study causal links; to determine whether a given
variable x has an effect on another variable y, or whether changes in one variable produce
changes in another variable.
The three essential elements of an experimental design are:
1. Randomisation – The researcher assigns participants to different groups.
2. Manipulation – The researcher does something at least to some of the participants in the
research.
3. Control – The researcher introduces one or more controls over the experimental situation.
Experimental research deals with manipulation of variables to see if changes result in other
variables. In experimental studies, one group, the experimental or treatment group receives the
treatment; the other group, a control group, receives a neutral treatment. The two groups are
compared before the treatment using a pre-test. After the treatment, a post-test is administered to
the two groups.
This situation is presented diagrammatically below.
According to Frey, Botan, Friedman & Kreps (1991, p.156) three requirements are necessary for
establishing a causal relationship between an independent and dependent variable. All three
requirements are necessary for inferring causality; none is sufficient in and of itself. These are:
1. The independent variable must precede the dependent variable.
2. The independent and dependent variable must be shown to co-vary, or go together.
3. The changes observed in the dependent variable must be the results of changes in the
independent variable and not some other unknown variable.
4.3.2 True Experimental Designs
The true experimental designs are the most exact method of establishing cause and effect
relationships. They are called true experimental designs because they provide adequate controls
for all sources of internal invalidity. The experimental method was developed in the natural
sciences where consistent causal relationships are easy to establish. However, the accumulated
experience shows that in the social sciences, causal relationships are difficult to measure and true
experimental designs are rarely used.
The true experimental design is considered the most useful design to demonstrate programme
impact if conditions of randomisation in selection of participating units and in the assignment of
treatment and control conditions at random are met. Research units can be individuals (students,
teachers, parents etc.), groups of individuals, institutions, regions etc.
There are three true experimental designs:
i) the pre-test-post-test control group design
ii) the post-test-only control group design; and
iii) the Solomon four-group design
4.3.3 The Pre-Test-Post-Test Control Group Design
The subjects are randomly divided into two groups. One of these groups receives the treatment.
The other does not. In this design a pre-test is administered to the experimental group before it
receives the treatment. The same pre-test is administered to the control group. At the end of the
treatment, a post test is administered to both groups. The design is shown below:

For example, if a researcher wants to determine the effectiveness of a technique of teaching


Christian religious education to secondary school students (form 2) he should randomly assign
students to two groups; an experimental group and a control group. Both groups should be pre-
tested, the new technique used to teach the experimental group for say a month, and both groups
post-tested. Both the pre-test means and post-test means can then be compared by using the t-
test to determine if there is any statistically significant difference between them.
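A minimal sketch of that comparison (the post-test scores are hypothetical, and the example assumes the SciPy library is available) is given below:

from scipy import stats

# Hypothetical post-test scores after one month of teaching
experimental = [68, 72, 75, 70, 66, 74, 71, 69]  # taught with the new technique
control = [60, 64, 58, 62, 65, 61, 59, 63]       # taught with the usual technique

t_stat, p_value = stats.ttest_ind(experimental, control)  # independent-samples t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference between the post-test means is statistically significant.")
else:
    print("No statistically significant difference between the two groups.")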
The weaknesses of this design are:
 influence of exposure to books, radio and television programmes;
 in the case of a long period of time between pre and post-test, the students could have
naturally matured; and
 The use of the same test can influence the results.
You should counteract those influences by keeping the time between tests short, monitoring
outside influences, and withholding scores until after the post-test.
4.3.4 The Post-Test-Only Control Group Design
There are circumstances under which a pre-measurement period is not practical. The post-test-
only control group design omits the pre-test altogether. The design is shown below:
In this design, students and teachers in the pilot and control groups are measured with respect to
the dependent variable during or after the introduction of curriculum materials (the independent
variable). For example, say an investigator wants to determine what the effects are on students’
knowledge of population concepts of integrating population and family life education in social
studies. He/she can administer a post-test to a random sample of students, who have been
exposed to a social studies course with population and family life education and also to a random
sample of students at the same grade level who have been exposed only to social studies (control
group). He/she should then compare the mean post-test scores of the two groups to find out if
there is any statistically significant difference between them.
STRENGTHS
1. Assignment of subjects to experimental and control groups is done randomly. This controls
for selection and mortality.
2. No pre-test is given in order to control for simple testing effects.
4.3.5 Solomon four-group design
This design combines the pre-test-post-test control group and the post-test-only control group
designs. This is one of the most powerful designs available for controlling threats to the validity
of cause and effect inferences.
Four groups are randomly selected from the population. Two of these groups receive the
treatment and two groups do not (control groups). One group receiving the treatment and one of
the groups not receiving the treatment are pre-tested. After the treatment all four groups are
post-tested.
4.3.6 Quasi-experimental designs
Quasi-experimental designs have been defined as “experiments that have treatments, outcome
measures and experimental units, but do not use random assignment” of subjects. The purpose of
a quasi-experimental design is to approximate a true experimental design. In a quasi-
experimental design, subjects are not assigned randomly to conditions, although the independent
variable(s) may be manipulated.
It is not usually possible in social science research to apply true experimental designs because of
the difficulty of obtaining equivalent groups or achieving random assignments of subjects to the
two groups. Even when equivalent groups are selected at the beginning of the research project,
differential dropouts of subjects will result in non-equivalent groups during the research project.
Besides, in educational research, it is sometimes not feasible to divide intact classes to
provide for random samples.
For situations where it is not feasible or desirable to apply true experimental designs, quasi-
experimental designs are normally used. The distinguishing feature of quasi-experimental design
is that the subjects are not randomly selected and assigned to treatment and control groups. The
sample of participants in quasi-experimental designs comes from intact groups, such as students
in classrooms.
4.3.7 Types of quasi-experimental designs
There are three basic designs for quasi-experiments. These are:
1. A non-equivalent control group design.
2. Interrupted time-series designs.
3. Multiple time-series design
Each of these designs is described briefly below.
1. Non-Equivalent Control Group Design
Non-equivalent control group design retains the idea of a treatment group and a control group,
without random assignment.
PROCEDURE

1. Assign subjects to the treatment and control groups.


2. Administer a pre-test to subjects in the two groups.
3. Apply the intervention (treatment) to the experimental group.
4. Administer post-test to the two groups.
STRENGTH
It may be possible to use an existing sample group as an experiment and control group.
LIMITATIONS
Since subjects are not assigned to the experimental conditions, the two groups are not equivalent.
It is difficult to identify whether or not causes other than the treatment may have been
responsible for differences in the changes of the two groups.
4.3.8 Correlation design
These studies are used to describe in quantitative terms the degree to which two or more
variables are related. Correlational studies are used to determine whether or not and to what
extent an association exists between two or more paired variables. It involves the collection of
data on two or more quantifiable variables on the same group of subjects and computing a
correlation coefficient.
If two variables are highly related, scores on one variable can be used to predict scores on the
other variable.
Correlational research explains how characteristics vary together and provides rigorous and
replicable procedures for understanding relationships. It finally determines to what degree a
relationship exists between the quantifiable variables.
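For illustration (the paired scores are hypothetical, and the example assumes the SciPy library is available), a correlation coefficient for two quantifiable variables measured on the same group can be computed as follows:

from scipy import stats

# Hypothetical paired scores for the same group of students
hours_of_study = [2, 4, 5, 7, 8, 10]
exam_score = [45, 52, 60, 68, 74, 85]

r, p_value = stats.pearsonr(hours_of_study, exam_score)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
# A high positive r means scores on one variable can be used to predict scores
# on the other, but correlation alone does not imply causation.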

Advantages
- This method permits one to analyse inter-relationships among a large number of variables
in a single study.
- It allows one to analyse how several variables either singly or in a combination might
affect a particular phenomenon.
- The design provides information concerning the degree of relationship between variables
being studied.
Disadvantages
- Correlation between two variables does not necessarily imply causation.
- A correlation coefficient is an index and therefore any two variables will always show a
relationship even when common sense dictates that such variables are not related.

UNIT FIVE
5.0 VARIETIES OF QUALITATIVE RESEARCH
According to Creswell (2012) Qualitative research can be classified into five main approaches or
traditions, these are:
1. Biography
2. Phenomenology
3. Grounded theory
4. Ethnography
5. Case study
5.1 CASE STUDY METHOD
A case study is an intensive study of all relevant materials about a social object in its natural
context. The social object could be an individual, event, school, group or community. The
purpose of a case study is to obtain insights and explanations about the object of study. A case
study involves detailed, in-depth collection of data using multiple sources of information
including observations, interviews, documents and audio-visual material. Case studies describe
rather than predict a phenomenon. It is an intensive and holistic analysis of a single entity.
It uses a smaller sample for in-depth analysis.
The following are characteristics of case studies:
1. Case studies focus on contemporary phenomena. This distinguishes them from historical
research which focuses on past phenomena.
2. They study phenomena in their total context.
The main advantage of the case study is that it provides in-depth information about particular
small groups and geographic area.
Some limitations of the case study are:
 The results of the case study method do not allow clear-cut generalisations.
 The case study method requires considerable expertise.
Yin (1994) identifies five steps to designing a case study. These are:-
i- Develop the research question
ii- Identify the propositions for the study if any
iii- Specify the units of analysis
iv- Establish the logic linking the data to the propositions
v- Explain the criteria for the interpretation of the findings
5.2 BIOGRAPHY
Denzin (1989) defines the biographical method as the “studied use and collection of life
documents that describe turning point moments in an individual’s life.” From this definition, it is
clear that a biography is a study of a person’s life, his or her experiences and achievements
(written by somebody else).
The following techniques are used:
 The person being studied is asked to put forth an account of his or her own life. This is called
an autobiography.
 Direct observation of the subject’s action patterns in various settings.
 Interviewing people who have known the subject in different situations.
 Observing the subject in contrived problem or conflict situations.
5.3 HISTORICAL DESIGN
According to Oso and Onen (2011), this design explores, explains and seeks to understand past
phenomena from data already available, with the purpose of arriving at conclusions about the
causes, trends and effects of the past in order to explain the present and predict the future. It is
useful where primary data cannot be collected. Historical research involves the discovery and
analysis of previous events, the interpretation of trends in the attitudes and events of the past, and
generalisations from past events to help guide present or future behaviour.
5.4 A PHENOMENOLOGICAL STUDY
Phenomenology is concerned with individuals and how changes in people’s thoughts and
knowledge can become clearer. Phenomenologists make deductions about inner experiences
from external indicators.
A phenomenological study describes the direct or lived experiences for several individuals about
a phenomenon or concept. Creswell (1998) has identified the following as the main stages of a
phenomenological study:
Stage 1: The researcher needs to understand the philosophical perspectives behind this
approach, particularly aspects dealing with how people experience a
phenomenon.
Stage 2: The investigator writes research questions that explore the meaning of the lived
experience for individuals and asks some individuals to describe their everyday
lived experiences.
Stage 3: The investigator then collects data from individuals who have experienced the
phenomenon under investigation.
Stage 4: The investigator analyses data.
Stage 5: Report writing.
5.5 A GROUNDED THEORY STUDY


In grounded theory, a theory is generated from data gathered by observing people in
the real world. The researcher does not begin with a preconceived theory in mind; rather, one
emerges through the systematic collection and analysis of data relevant to the area of study.
The goal of a grounded theory study is to generate or discover a theory. Proponents of this
method hold that theories should be "grounded" in data from the field. The researcher typically
collects primarily interview data, analyses the interview data and generates a theory.
5.6 ETHNOGRAPHIC METHOD
The term ethnography is derived from two Greek words ethnos (a tribe, race or nation) and
graphos (something written down). Hence, ethnography is a written report about a group of
people.
Gephart (1988) describes ethnography as follows:
"Ethnography is the use of direct observation and extended field research to
produce a thick naturalistic description of a people and their culture" (p. 16). A
people's culture is the sum total of their way of life, including their behaviours,
language and the things they make and use (artefacts). The goals of ethnographic
research are to describe and interpret a cultural or social group or system.
Criteria for selecting research Designs
i- The research problem
ii- Personal experiences
iii- Audience
UNIT SIX
POPULATION, SAMPLE AND SAMPLING TECHNIQUES
6.0 INTRODUCTION
This unit introduces the learner to the concept of population, sample and will examine
probability and non-probability sampling techniques.
6.1 TARGET POPULATION
Population is the total number of items or subjects that a researcher wants to study, e.g. all
counties, all schools, all teachers, all head teachers, all students or pupils.
6.2 SAMPLING DESIGN
6.2.1 Sample
A sample is a number of individuals or things selected from a population; it is a subset of the
population. Suppose the mean score of all students who sat the South Sudan Certificate of
Secondary Education examination is 42; then 42 is a population parameter. On the other hand, if
the mean score of provincial schools in the examination is 42, we are using 42 as a sample
statistic.
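A small numeric sketch of the distinction (the marks are hypothetical; only the Python standard library is used):

import random
import statistics

# Hypothetical examination marks for the whole population of candidates
population = [38, 45, 51, 42, 39, 47, 40, 44, 36, 48]
parameter = statistics.mean(population)   # population parameter

sample = random.sample(population, 4)     # a subset of the population
statistic = statistics.mean(sample)       # sample statistic

print("Population mean (parameter):", parameter)
print("Sample mean (statistic):", statistic)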
6.2.2 Population
A population is the group of interest to the researcher, the group to which the researcher would
like to generalize the results of the study.
6.2.3 Sampling
Sampling is the process of choosing a small group of people or things from the population.
6.2.4 Purpose of Sampling
Sampling is undertaken because resources often do not permit researchers to study all members
of the target population. Sampling results in a detailed study of a small group rather than the
whole of a population. This in turn leads to reduced costs associated with collecting and
analysing data, and to greater accuracy.
6.2.5 Sample Units
Possible units include:
a) An individual person
b) A household
c) A school
d) A village
e) A social group, such as a youth group, a club or women’s group
f) An administrative unit
The selected unit must be precisely defined.
6.2.6 Sampling Frame
This is a complete list of the membership of a population or universe from which subjects for a
sample can be selected. It is prepared in the form of a physical list of population elements.
6.2.7 Sample Size
A researcher normally faces the problem of determining the size of the sample necessary to
achieve the objectives of the planned study. It is recommended that researchers use the largest
sample possible because statistics calculated from large samples are more accurate, other things
being equal, than those from small samples. The larger the sample, the more likely are its mean
and standard deviation to be representative of the mean and standard deviation of the target
population.
The following are some of the factors that the researcher should consider when deciding on the
sample size.
1. Availability of resources and time. In most research projects, financial and time
restrictions limit the number of subjects that can be studied. However, it is generally
desirable to have a minimum of 30 cases.
2. Larger samples are necessary when groups must be broken into subgroups. For example,
the researcher may be interested in comparing the attitudes of different categories of
teachers towards environmental education. This would require dividing teachers by
professional category; such comparisons can only be carried out if the sample is large
enough so that, after the teachers have been divided into groups, each subgroup has
sufficient numbers of cases to permit a statistical analysis.
3. When high attrition is expected. Some subjects or schools drop out of the project for
one reason or another; it is therefore necessary for the researcher to use a large sample.
4. When the target population is very heterogeneous, a large sample must be used in order
that persons with different characteristics will be satisfactorily represented.
5. When there is political instability in the country, it may not be possible to include pilot
schools in some parts of the country in the sample.
In studies that involve hypothesis testing, the sample should be big enough that it has a
sufficient probability of declaring a relationship or difference of a pre-defined magnitude
statistically significant if one exists. A small sample has low power and a high likelihood of a
Type II error, β, that is, the probability of declaring a relationship or difference not significant
when such a difference exists (i.e. it is not a product of chance).
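As an illustrative sketch of this point (the effect size and power targets below are assumptions, and the example assumes the statsmodels package is available), the required sample size per group can be estimated before data collection:

from statsmodels.stats.power import TTestIndPower

# Illustrative targets: medium effect size, 5% significance level, 80% power
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"About {round(n_per_group)} subjects are needed in each group.")
# A smaller sample lowers power and raises the risk of a Type II error,
# i.e. failing to detect a real difference.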
There is an incorrect supposition that the desirable sample size is a percentage of the size of the
population. It should be noted that the sample size is a function of the absolute number of sample
units, not the sampling fraction.
As Best and Kahn (1989) have noted: “The ideal sample is large enough to serve as an adequate
representation of the population about which the researcher wishes to generalize and small
enough to be selected economically – in terms of subject availability, expense in both time and
money, and complexity of data analysis. There is no fixed number or percentage of subjects that
determines the size of an adequate sample" (pp. 16-17).
They have made the following practical observations about sample size.
1. The larger the sample, the smaller the magnitude of sampling error.
2. Survey-type studies probably should have larger samples than needed in experimental
studies.
3. When sample groups are to be subdivided into smaller groups to be compared, the
researcher initially should elect large enough samples so that the subgroups are of
adequate size for his or her purposes.
4. In mailed questionnaire studies, because the percentage of responses may be as low as 20
to 30 percent, a larger initial mailing is indicated.
5. Subject availability and cost factors are legitimate considerations in determining an
appropriate sample size.
The sample size required is a function of the variability of the characteristics measured, and the
degree of the precision required. (Casley and Lury 1982, P. 75).
The optimum sample size is directly related to the type of research you are undertaking. For
different types of research, "rules of thumb" can be used to determine the appropriate sample size.

6.7 RULES OF THUMB


Quantitative research rules of thumb
Borg and Gall (1989) recommend the following sample sizes for different kinds of research:
• Correlational research: about 30 observations
• Multiple regression: at least 15 observations per variable
• Survey research: 100 observations for each major subgroup; 20 to 50 for minor subgroups
• Causal-comparative, experimental or quasi-experimental research: about 15 observations per group
6.8 Representativeness of the Sample
The sample must be as representative as possible of the population from which it is drawn. A sample is often described as being representative if the known percentage frequency distributions of certain characteristics within the sample are similar to the corresponding distributions within the whole population. Such characteristics, often called marker variables, include those associated with subjects (sex, age, socio-economic status, level of education, marital status, place of abode, religious denomination) and those associated with schools (type of school, school location, school size, etc.).
The most popular marker variables in the social sciences are: sex, age, level of education, marital status, socio-economic status, place of abode, and religious denomination.
6.9 Probability Sampling


Probability sampling is a method of drawing a portion of a population so that each member of the
target population has a known and non-zero chance of being selected into the sample. There are
many ways in which a probability sample may be drawn from a population. Some of these ways
are described below.
6.9.1 Simple random sampling
Simple random sampling is a process of selection from the population that gives every possible sample of a given size an equal probability of being selected. Several techniques can be used to
derive a simple random sample. Two of these are:
a) Make a list of all members of the target population (e.g. standard 7 pupils in pilot
schools) and assign a number to each pupil. Then use a table of random numbers to draw
a sample from the list. To use the random numbers table, the researcher randomly selects
a row or column as a starting point, then selects all the numbers that follow in that row.
He should proceed to the next row or column if more numbers are needed.
b) If a small population is used, a researcher can obtain the sample by placing a slip of paper with the name or identification number of each individual in the population in a container, mixing the slips thoroughly, and then drawing the required number of names or identification numbers. The same result can also be obtained with a computer, as sketched below.
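The following is a minimal sketch (not from the original notes) of drawing a simple random sample with a computer rather than a random-number table; the pupil list, its size and the seed are invented for illustration, and Python's standard random module is assumed.

```python
import random

# A hypothetical list of all 500 members of the target population
pupils = [f"Pupil_{i:03d}" for i in range(1, 501)]

random.seed(42)                     # fixed seed so the draw can be reproduced
sample = random.sample(pupils, 50)  # each pupil has an equal chance of being selected
print(sample[:5])
```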
6.9.2 Systematic random sampling
This technique consists of selecting every kth sampling unit from a list of all members of the population. Suppose the researcher wants to select a sample of 50 students from a list of 500 students. To use this method, he or she first divides the population size by the number needed for the sample (500 ÷ 50 = 10) to obtain the sampling interval. He or she then randomly selects a starting number between 1 and 10. Then, starting with that number (e.g. 6), he or she selects every tenth name from the list of students.
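The 500-student example above can be sketched in code as follows. This is an assumed illustration: the student list and the random seed are invented, and the sampling interval k = 10 comes from the figures in the paragraph.

```python
import random

population = [f"Student_{i:03d}" for i in range(1, 501)]  # list of all 500 students (hypothetical)
sample_size = 50
k = len(population) // sample_size        # sampling interval: 500 / 50 = 10

random.seed(1)
start = random.randint(1, k)              # random starting point between 1 and k
sample = population[start - 1::k]         # every k-th student from the starting point
print(len(sample), sample[:3])
```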
6.9.3 The multi-stage random sampling
Another procedure is multi-stage random sampling. A typical example of multi-stage sampling
would be:
a) Randomly select a given number of states, provinces or districts from the list of all states, provinces or districts.
b) Randomly select, from within each chosen state, province or district, schools from the list of all schools of the defined type.
c) Randomly select, from within each chosen school, individuals from the list of all individuals of the defined type.
A brief sketch of the first two stages is given below.
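The sketch below is a hypothetical two-stage illustration (districts, then schools within each chosen district); the district and school names are invented, and simple random sampling is applied at each stage.

```python
import random

random.seed(7)
districts = {                                 # hypothetical sampling frame
    "District_A": ["School_A1", "School_A2", "School_A3", "School_A4"],
    "District_B": ["School_B1", "School_B2", "School_B3"],
    "District_C": ["School_C1", "School_C2", "School_C3", "School_C4", "School_C5"],
}

chosen_districts = random.sample(list(districts), 2)     # stage 1: sample districts
chosen_schools = {d: random.sample(districts[d], 2)      # stage 2: sample schools in each district
                  for d in chosen_districts}
print(chosen_schools)
```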
6.9.4 Stratified sampling
Stratification ensures that different groups of the population are represented in the sample. The
population is divided into strata such as boys and girls, rural and urban etc. from which random
samples are drawn. In this procedure, the target population is first stratified into a number of categories. Thus, the strata may be based upon grades, e.g. 200 grade 5 pupils, 100 grade 6 pupils, and 50 grade 7 pupils. From each of these grades, pupils are selected at random. The number of pupils selected from each grade should be proportional to the number of pupils in that grade.
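As an illustration of proportional allocation, the sketch below uses the grade sizes from the example above; the overall sample size of 70 and the pupil identifiers are assumptions added for the example.

```python
import random

random.seed(3)
strata = {"grade_5": 200, "grade_6": 100, "grade_7": 50}   # stratum sizes from the example
total = sum(strata.values())
sample_size = 70                                           # assumed overall sample size

for grade, size in strata.items():
    n_stratum = round(sample_size * size / total)          # proportional allocation
    pupils = [f"{grade}_pupil_{i}" for i in range(1, size + 1)]
    drawn = random.sample(pupils, n_stratum)               # simple random sample within the stratum
    print(grade, n_stratum, drawn[:2])
```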
6.9.5 Cluster sampling
In this technique, the unit of sampling is not the individual but rather a naturally occurring group
of individuals. The technique consists of determining a number of clusters and selecting units
from each unit randomly. For example, a population of secondary school students may be
grouped into a number of classrooms, or it may be grouped into a number of schools. A cluster
sample of students may then be selected from this population by selecting clusters of students as
classroom or school groups rather than individually.
Suppose that a researcher wishes to administer a test to a random sample of 2000 pupils. In cluster
sampling, the researcher would draw a random sample of 5 classrooms from a list of all
classrooms in the target population. Then he would administer the test to every pupil in each of
the 5 classrooms.
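A compact sketch of the classroom example follows; the number and size of the classrooms are invented, and every pupil in a selected classroom (cluster) enters the sample.

```python
import random

random.seed(5)
# 10 hypothetical classrooms of 40 pupils each
classrooms = {f"Class_{c}": [f"Class_{c}_pupil_{i}" for i in range(1, 41)] for c in "ABCDEFGHIJ"}

chosen_classes = random.sample(list(classrooms), 5)             # draw 5 whole clusters
sample = [pupil for c in chosen_classes for pupil in classrooms[c]]
print(len(sample), "pupils drawn from classrooms:", chosen_classes)
```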
6.10 Non-Probability sampling
Non-probability sampling does not use random sampling. Elements in the target population have
an unknown chance of being selected into the sample. A non-probability sample is based on
subjective judgement and is biased in the sense that some members of the target population have
more chance of being selected than others. There are three main types of non-probability
samples: Quota, Judgement, and Convenience.
6.10.1 Quota sampling
Quota sampling derives its name from the practice of assigning quotas or proportions of kinds of
people to interviewers. In this technique, sample members are drawn from various target
population strata e.g. untrained teachers, graduate teachers, and grade 1 teachers.
6.10.2 Purposive or Judgement sample
The second type of non-probability sampling used by educationists is judgement or purposive
sampling. In this procedure, the choice of sampling units depends on the subjective judgement of
the researcher. However, the researcher should ensure that the sample he selects is representative
of the population.
6.10.3 Convenience sampling
Convenience sampling consists of units that are convenient to the researcher. In most cases, the schools selected are those which are easy to reach and are willing to take part in the study.
6.10.4 Network sample (Snowball sampling)
This is a non-random sample in which subjects are asked to refer the researcher to other people
who could serve as subjects.
6.10.5 Volunteer sampling
A non-random sample in which subjects themselves choose to take part in the study.
Discussion questions
1. Distinguish between probability sampling and non-probability sampling.
2. Describe the major types of probability sampling.
3. Define the following terms;
a) Target population
b) Sample
4. Explain briefly each of the following concepts
a) Multi-stage sampling
b) Judgement sample
c) Snowball sample
5. The following is a summary of a classroom study.
The purpose of the study was to evaluate a science project designed to improve science achievement among a group of standard six pupils. A comparison group of standard six pupils using a different programme was employed. Subjects were randomly assigned to the two groups. Equal numbers of boys and girls were used in each group. A science achievement test was administered as both a pre-test and a post-test measure.
Identify the steps in a research process outlined in the summary.
6. What are the most important distinctions between survey and experimental research
designs?
UNIT SEVEN

STRUCTURE OF A RESEARCH PROPOSAL


7.0 Introduction
This unit explores different items in a research proposal, in preparation for proposal writing.

7.1 Research Proposal (preliminary pages)

a) Title page  b) Declaration  c) Abstract  d) Acknowledgements  e) Table of Contents  f) List of Tables  g) List of Figures

7.1.2 Chapter One: i) Background to the Problem  ii) Statement of the Problem  iii) Research Questions  iv) Hypotheses (if any)  v) Significance of the Study  vi) Scope and Delimitations of the Study  vii) Theoretical Framework  viii) Conceptual Framework  ix) Operational Definitions of Terms

7.1.3 The topic:

A research topic should be a concise statement identifying the variables and theoretical issues under investigation and the relationship between them. It should contain the independent and dependent variables. The recommended length of a research topic is no more than 12 words, and it should be fully self-explanatory when standing alone (APA, 2010, p. 23), e.g. “Effects of transformed letters on reading speed”.

7.1.4 Background and statement of the problem

The introduction should answer questions like: Why is the problem important? How does this study relate to previous work done? How does it differ from or build on earlier studies? Research is driven by a desire to resolve issues. Discuss and summarise the findings and conclusions of relevant scholarship at global, regional and local levels, and demonstrate the logical continuity between previous work and the present work.

• How will this study contribute to solutions of social and economic problems?

• Who values your study? Who needs and is likely to use the information?

The statement of the problem should include a brief review of research in the field and the questions previous research studies left unanswered and, finally, a statement of the main question to be addressed.

7.2.1 Chapter Two: Review of related literature
a) Introduction  b) Review of theories  c) Review of empirical studies  d) Summary and knowledge gap
7.2.3 Review of theories


In reviewing theories, you select at least the two theories most relevant to your study, and for each indicate the proponents of the theory, what it says, its relevance to your study, and the strengths and limitations of the theory.

In reviewing empirical studies, you review the most recent research studies conducted on your topic or area of study, indicating the author, date, location, purpose, design, target population, sampling techniques, instruments and data collection procedures used, and the findings, recommendations and conclusions. At least 4 research studies less than ten years old are encouraged.

The summary and knowledge gap should identify any gaps, such as in the methodology used, the location, the target population and the conclusions reached.

7.3 Chapter Three: Research design and methodology
a) Research Design  b) Target Population  c) Description of the Sample and Sampling Procedures  d) Description of Research Instruments  e) Data Collection Procedures  f) Data Analysis Procedures  g) Ethical Considerations  h) References  i) Appendices: Questionnaire, interview guides and map of the study location

7.4 Trustworthiness in qualitative studies

The following techniques are used to increase the trustworthiness of qualitative studies: triangulation, peer debriefing, member checking, prolonged engagement, the audit trail and thick, rich description.

7.4.1 Triangulation
Triangulation involves the use of numerous methods of data collection and sources. There are
five types of triangulation: methodological triangulation, source triangulation, investigator
triangulation, theory triangulation and environmental triangulation. Methodological triangulation
is applied when the researcher uses two or more methods of data collection to measure variables.
For example, a questionnaire could be supplemented with in-depth interview, observation and
existing records.

Source triangulation is utilised when comparing the information which is given by the source at
different times and in different situations. For example, the information given by the head
teacher during the informal conversation could be compared with the information he or she gives
in the actual interview.

Investigator triangulation is applied when two or more researchers are used to study the same
subject. During the study, the researchers regularly exchange opinions of what they see or hear.

Theory triangulation involves the use of multiple professional perspectives to interpret a single set of data.
Environmental Triangulation involves the use of different locations and other settings related to
the environment in which the study took place.

7.4.2 Peer Debriefing
Peer debriefing involves testing findings with disinterested parties and individuals. The role of
the peer debriefer is that of a “devil’s advocate”, a person who listens to the researcher’s findings
and asks hard questions about the methods used and interpretations. The peer debriefer should be
someone who is familiar with the research or phenomenon being explored. “A peer reviewer
provides support, plays devil’s advocate, challenges the researcher’s assumptions, pushes the
researchers to the next step methodologically, and asks hard questions about methods and
interpretations.” (Lincoln and Guba, 1985, Creswell and Miller, 2000).

7.4.3 Member checking


Member checking consists of taking data and interpretations back to the participants in the study
so that they can confirm the credibility of the information and narrative account. (Lincoln and
Guba 1985). One procedure to facilitate this process is for the researcher to convene a focus
group or a meeting of participants.

7.4.4 Prolonged Engagement in the Field


In this validity procedure, the researcher stays in the research site for a prolonged period of time.
Creswell and Miller (2000) contended that: “being in the field over time solidifies evidence because researchers can check out the data and their hunches and compare interview data with observational data. In practice, prolonged engagement in the field has no set duration, but ethnographers recommend six months to a year at a site” (p. 128).

7.4.5 The Audit Trail


In this procedure, the research report is given to individuals external to the study who “examine
the narrative account and attest to its credibility. It is a systematic procedure in that the reviewer
writes an analysis after carefully studying the documentation provided by the researcher.”
(Creswell and Miller 2000 p. 129). The credibility of the research is established by another
person who reads a narrative and makes decisions about the applicability of the findings to
similar contexts or other settings.

7.4.6 Thick, Rich Description


According to Denzin (1989), “Thick descriptions are deep, dense, detailed accounts…thin…
descriptions, by contrast lack detail, and simply report facts” (p. 83). The researcher should
present in-depth descriptions of the participants’ thoughts and beliefs in their own words.
UNIT 8
INSTRUMENTS OF DATA COLLECTION

8.1 Introduction
This unit examines the different data collection instruments used in collecting quantitative and qualitative data. It will discuss the questionnaire, the interview schedule, content analysis, focus group discussions and observation techniques.

After the target groups have been selected, instruments have to be developed to assist in data
collection. Instruments associated with the quantitative paradigm are:

a) Questionnaire
b) Attitude scale
c) Test items
d) Interview schedule
8.2 Factors Affecting Rate of Return
a) Appearance  b) Length  c) Complexity and difficulty  d) Perceived importance  e) Follow-up  f) Endorsement  g) Design  h) Timing  i) Accuracy of the address  j) Return deadline  k) Incentives for return

8.2 QUESTIONNAIRE
A questionnaire is a carefully designed instrument (written, typed or printed) for collecting data
directly from people. A typical questionnaire consists of questions and statements. Two types of
questions are normally asked: closed-ended questions and open-ended questions. Closed-ended
questions are structured in such a way that the respondent is provided with a list of responses
from which to select an appropriate answer.
The following is an example of a closed-ended question.

1. (a) Have you ever attended an in-service course on methods of teaching craft education?
[ ] Yes
[ ] No
(b) If yes, about how many times have you received in-service training?
[ ] Once
[ ] Twice
[ ] Thrice
[ ] Four times
[ ] Five times
[ ] More than five times
Open-ended questions are those that require the respondent to provide her or his own answer to
the question. For example, you can ask the respondent the question: What problems have you
experienced when teaching craft education?

Some of the disadvantages of open-ended questions are:-

• Some respondents give irrelevant answers.


• There is possibility of researcher bias.
• The responses must be coded manually before they can be processed for computer analysis.
8.2.1 WRITING QUESTIONNAIRE ITEMS
After you have defined the problem of investigation precisely, construct questions or items to
deal with each aspect in turn. Below are some of the rules that should be observed when
constructing questionnaire items.

1. The questions must be clearly worded so that they can be comprehended by the respondent.
2. Items should be short because short items are easier to understand.
3. Avoid double-barrelled items which require the subjects to respond to two separate ideas.
4. Avoid leading questions which suggest that one response may be more appropriate than the
other.
5. Do not use words that some respondents may not understand
6. Avoid biased or leading questions.
7. Do not ask a question that assumes a fact not necessarily in evidence e.g. Have you stopped
giving birth? How does a woman who has never given birth respond?
8. Avoid touchy questions to which the respondent might not respond honestly.

8.2.2 DEVELOPMENT OF QUESTIONS


The following is a likely sequence for the development of the questions to be included in a
questionnaire.

• Defining the information that is required from the questions


• Formulating draft questions
• Discussing the questions with other members of the research team and other experts
• Preparation of the first draft of the questionnaire for pilot testing
• Pilot testing of the questionnaire on a small sample of respondents
• Analysis of the data collected and the experience in the pilot survey
• Reformulating and finalisation of the questionnaire, and
• Preparation of an interviewers’ manual
8.2.3 QUESTIONNAIRE FORMAT
1. A questionnaire should have a suitable title.
2. Instructions should be clear and unambiguous.
3. The questionnaire should be attractive and brief and as easy to respond to as possible.
4. Arrange the questionnaire in content subsections, e.g. background data, objectives,
implementation of the curriculum, etc.
5. No item should be included which does not directly relate to the objectives of the study.
6. Divide the questionnaire into meaningful components.
7. The first section is personal data. The major considerations are variables that influence what
you are investigating. The question you should ask is: what are the major variables that are
likely to influence people’s responses?
8. Include brief, clear instructions printed in bold type.
9. The questionnaire should start with simple factual questions so that the person completing it gets off to a good start.
10. Avoid negative items.
11. Avoid biased items and terms.
12. Do not use one item to select multiple answers.
13. Open-ended general questions should be at the end to allow expression of points which the
respondent thinks important.
14. Questionnaires may include attitude scales, rating scales and checklists, provided that they are brief and straightforward.
8.2.4 FORMAT

1. Titles

Write the title of the study followed by the title of the questionnaire.

Example

Evaluation of the effectiveness of Schools Broadcasting in Kenya

Questionnaire for Teachers

2. Purpose

State the purpose of the study.

a) Instructions: Give brief general instructions, including instructions for the return of the instrument. Do not give instructions for the completion of individual sections here; those should appear at the beginning of each section.
b) Cover Letter: Seek permission from the respondent for participation in the study. Informed consent may be included in the cover letter of the questionnaire.
c) Organise the questionnaire in sections according to the type of questions asked. Normally
section one consists of demographic questions.
8.3 PILOT-TESTING THE QUESTIONS
After developing the questionnaire, you should administer it to people who are already in the
target population or who are similar to those in the population to be studied.
The following are some of the issues clarified during the pilot study.
1. Whether there are flaws and ambiguities.
2. The feasibility of the proposed procedure for coding responses.
3. Whether or not the prospective respondents are available and accessible.
4. Whether the intended respondents possess the information being sought and are willing to
participate in the study.
5. Whether the intended respondents will understand the questions.
6. Whether the procedures for administering the questionnaire are appropriate.
Pre-test subjects should be encouraged to make comments and suggestions concerning specific
items, instructions and recording procedures.

8.4 ADVANTAGES AND DISADVANTAGES OF THE MAIL QUESTIONNAIRE


Compared to other data collection techniques, questionnaires have some advantages and
disadvantages.

8.4.1 Advantages of Questionnaires

• Questionnaires can reach a large group of respondents within a short time and at little cost.
• The biases which might result from the personal characteristics of interviewers are avoided or reduced.
• Since the respondents do not indicate their names, they tend to give honest answers. The
absence of an interviewer also makes respondents give honest answers without fear of giving
answers that they think the interviewer may not want to hear.
• Respondents have adequate time to consult documents or other people if questions require
doing so, and
• Respondents have enough time to reflect before answering questions.
8.4.2 Disadvantages of Questionnaires

• The researcher has no control over the person who fills out the questionnaire
• There is no opportunity for the respondent to see and obtain clarification about ambiguous
questions. Similarly, there is no opportunity for probing beyond the answers given by the
respondents
• It is normally difficult to obtain an adequate response rate
• Questionnaires cannot be filled out by illiterate people and people who do not understand the
language in which the questions are written.
• There is a tendency for the respondents to skip questions they consider difficult, sensitive or
controversial.
• There is no assurance that the intended respondent understands the questions, and
• Some people tend to ignore them if they find them unimportant
8.4.3 TYPES OF QUESTIONS IN A QUESTIONNAIRE
A typical questionnaire has four kinds of questions.
• Demographic questions
• Opinion and attitude questions
• Self-perception questions
• Informational questions.
8.4.4 Demographic Questions
These are questions which seek background information about the respondent, for example sex,
age, level of education, marital status, occupation, religious affiliation, place of residence, etc.

Examples

How old are you? What is your occupation?

8.4.5 Opinion and Attitude Questions

These types of questions solicit information about respondent’s attitudes, beliefs, feelings and
misconceptions relating to an area of inquiry.

Examples

Which of the following subjects do you like most?


8.4.6 Information questions

These questions seek to find out the respondent’s knowledge of an area of concern to the
evaluator.

Example

1. How often do you use teaching aids?


2. Are teachers in your district conversant with authentic assessment methods?

RESPONSE FORMATS
1. UNSTRUCTURED FORMAT
In an open-ended, free or unstructured response format, the respondent has complete freedom to
answer as he/she chooses. One disadvantage of open- ended responses is that they are difficult to
code or score and some information may be lost during coding. They also take extra time to
complete and code.

Example

Qualitative open-ended question: What do you think is best about your job?
Quantitative open-ended question: What is your highest qualification level?

The most common open-ended items are qualitative. Qualitative open-ended questions ask for a
non-numerical response, while quantitative open-ended questions ask for a numerical response.

Semi-Structured Format

This format consists of fill-in response items; the response is open-ended but only a short answer is expected. They are easier to code than unrestricted free items.

STRUCTURED RESPONSE FORMAT


These questions give the subject choice from which an answer is selected. There are four
categories of structured response items: checklist, inventory type, ranking type and scaling or
rating type.

Checklists
Checklists present a number of options for which the respondent is expected to check or tick the
most relevant response or all the suitable responses.

Example

The following are some of the reasons students have given for studying education. Tick any one
of them, which you think represents the most important reason why you are studying education.

i)The course is easy [ ]


ii)Teachers are marketable [ ]
iii)Teachers are very well paid [ ]
iv)It was difficult to get admission to another course [ ]
v)My parents wanted me to be a teacher [ ]

8.5 INTERVIEW SCHEDULE


An interview is a conversation in which one person, the interviewer, seeks responses for a
particular purpose from another person, the interviewee.
The interview is one of the most used techniques of obtaining information. It is a way of
obtaining data about a person by asking him/her rather than by watching him/her behave. A
personal interview helps the evaluator to measure what a person knows (knowledge) and what
he/she likes and dislikes (values and preferences). The information obtained can be transformed
into a number of quantitative data by using attitude scaling or rating scaling techniques. Example
Interviewer’s explanation to the respondent: “We are interested in finding out your involvement
in the education of your child. We have a checklist here of some of the kinds of things parents
do. Think about your own situation and indicate your response.”
                                                                   Never   Sometimes   Often   Very Often
Do you encourage your child to do her/his homework assignments?    [ ]       [ ]       [ ]       [ ]
Do you assist your child to do his/her assignments?                [ ]       [ ]       [ ]       [ ]
Do you check your child’s class work?                              [ ]       [ ]       [ ]       [ ]

As we can see, in a schedule-structured interview, the questions and their sequences are fixed
and identical for every respondent.
8.5.1 Advantages of Interview Schedule
• It is flexible and adaptable to individual situation.
• It allows a glimpse of the respondent’s gestures, tone of voice, etc., and thus reveals his/her feelings.
• It permits the investigator to pursue leads and to ask for elaboration of points that the
respondent has not made clear.
• It permits the establishment of rapport between the investigator and the respondent. This
stimulates the respondents to give more complete and valid answers.
• It makes it possible for information to be obtained from illiterate respondents or respondents
who are reluctant to put things in writing.
• It promotes a higher percentage rate of return.
• It permits the interviewer to help the respondent clarify his/her thinking on a given point.
• It enables the investigator to pursue leads in order to gain insight into the problem.
8.5.2 Weaknesses of Interview Schedule
• It is costly in time and personnel.
• The interviewer is likely to influence the responses he/she receives.
• Interviewing requires skilled personnel.
8.6 CONTENT ANALYSIS


Holsti (1968) defines content analysis as any technique for making inferences by systematically
and objectively identifying specified characteristics of messages. Content analysis is the
systematic assessment of the manifest content of communications. It involves the use of
standardised procedures for the analysis of different types of communication- oral, written and
verbal. For example, if you wish to determine the opinions of the general public towards a new
curriculum innovation, you can assess the mass media by reading the articles dedicated to the
new curriculum and noting those that are against the innovation and those that support the
innovation.

Instead of interviewing or observing respondents, the evaluators take the communications that
people have produced and ask questions about them. Content analysis focuses on examining and
studying events that occurred before the investigation. The document analysis method is used to
gather information from project documents, public documents, institutional publications,
historical documents, educational records, documents in the mass media, archival records, public
records, i.e. political and judicial records, government documents and private records, namely
autobiographies and diaries.

8.6.1 ADVANTAGES

• Respondents are not aware that they are being studied.


• Data are normally obtained from reliable document sources.
8.6.2 LIMITATION/DISADVANTAGES
• The records used might contain institutional biases.
• Records may not be made available to the researchers.
• There is lack of information on the way the recorded data have been collected.
• Compared to other methods, document analysis is cumbersome, difficult and time
consuming.
• It does not have a high status as a research method due to the fact that it involves mainly
library research.
• Documents are not systematically available.

8.7. FOCUS GROUP DISCUSSION


Obeng-Quaidoo (1991) defines a focus group discussion as a carefully planned discussion
designed to obtain perceptions on a defined area of interest in a permissive, non-threatening
environment. The main characteristics of focus group discussions are:
a) There are several respondents acting together.
b) There is a skilled moderator to guide the discussion.
c) There is a discussion guide or a written list of topics to be covered and a questionnaire.
Focus groups are frequently used to gather information on existing educational programmes or activities. Focus groups can be used to:
• Obtain data for developing relevant evaluation questions by exploring, in greater depth, the
problem to be investigated.
• Identify the needs of the target group.
• Identify appropriate content.
• Explore controversial issues.
• Supplement information from other sources.


• Obtain information that may be employed in the future development of a questionnaire.
8.7.1 POINTS TO CONSIDER WHEN PLANNING FOR A FOCUS GROUP
DISCUSSION
1. A focus group should be composed of 6 to 12 individuals.
2. Each group should have a trained moderator. This should be a person in whom group
members have confidence.
3. A focus group discussion guide consisting of guidelines on how to conduct the focus group
discussions and a series of open-ended questions should be prepared.
4. A written list of topics to be covered should be prepared.
5. Respondents should sit in a circle. This will facilitate communication.
6. Plan to conduct a focus group discussion for an hour and a half or less.
8.7.2 ADVANTAGES

• Focus groups are flexible and may allow for probing


• They are useful to obtain detailed information about personal and group feelings,
perceptions and opinions.
• They can save time and money compared to individual interviews.
• They can provide a broader range of information.
• They offer the opportunity to seek clarification.
8.7.3 WEAKNESSES

• A dominant individual in the group may override and intimidate other group members.
• An unskilled moderator may allow the discussion to drift off track.
• Particular weakness of a focus group is the possibility that the members may not express
their honest and personal opinions about the topic at hand.
• They may be hesitant to express their thoughts, especially when their thoughts oppose the
views of another participant
• Compared with surveys and questionnaires, focus groups are much more expensive to
execute
• Moderators may intentionally or inadvertently, inject their personal biases into the
participants' exchange of ideas
8.7.4 HOW TO PREPARE FOR A FOCUS GROUP DISCUSSION
The following are the main stages in preparing and conducting a focus group discussion.

PREPARING A FOCUS GROUP DISCUSSION


a) Define the study problem.
b) Formulate evaluation objectives/questions.
c) Define the target population and the sample, and identify the sampling methods.
d) Develop the focus group discussion guide.
The questions for the focus group should be unstructured or open-ended. These types of
questions allow the moderator to probe the respondents. Some guidelines for the development of questions are listed below.
• Ask simple, straight-forward questions.


• Avoid dichotomous questions, or questions that can be answered by a simple ‘yes’ or ‘no’.
• Avoid asking questions that start with ‘why’ because it looks like an examination
question.
• Avoid double questions.
8.7.5 FORMAT
• Interesting questions should be placed at the beginning of the guide.
• Number all questions.
• Arrange questions in a logical sequence, i.e. from the general to the specific.
8.7.6 HOW TO CONDUCT A FOCUS GROUP DISCUSSION
1. Choosing the Group

Focus groups are made up of homogeneous groups of people, i.e. people who are similar to each
other in such characteristics as occupation, age, religion, sex, level of education, attitudes
towards an issue, etc. Experience has shown that in many settings, educational homogeneity of members is very important. Whenever possible, the group’s members should be total strangers.
2. The Moderator

The moderator should be a stranger and not a relative or a friend of any of the participants.
He/she should be someone who has the ability to stimulate and guide the group. He should be a
good listener, enthusiastic, friendly, knowledgeable, a good communicator and have an excellent
memory and sense of timing. His/her appearance should be like that of participants. If the
discussion is gender sensitive, a male moderator should be used for male groups and female
moderator for female groups.
3. During the focus group discussion, one person should serve as a moderator and the other as a
recorder.
Steps in Conducting a Focus Group Discussion

Step 1: Introduce the participants, the recorder and the moderator.


Step 2: Introduce the session.
Step 3: Conduct the discussion using the guide. During this stage, the moderator should:

(a) Encourage discussion.
(b) Encourage participation.
(c) Guide the discussion, and detect any contradictions and additions.
(d) Build rapport.
(e) Transcribe and synthesise what people say.
Step 4: Summarise the discussion, check for agreement and thank the participants.
8.8 OBSERVATION TECHNIQUES
One way of obtaining information about the progress and outcomes of a programme or project is to directly observe selected aspects of its development and implementation as they occur.
8.8.1 SITUATIONS THAT MAY BE SERVED BY OBSERVATIONAL DATA
1. Measuring classroom process variables
• How the lesson is divided into a variety of activities.
• Students’ participation.
• Does the lesson arouse the interest of the pupils?
• Use of teaching-learning resources.
• Unexpected outcome.
2. Measurement of attainment of programme objectives e.g. using tools, properly performing
experiments, etc.
3. Measuring programme implementation
How instructions are being carried out. Not all teachers and even members of the
programme carry out programme instructions in the way intended by the programme
director.

4. Identifying difficulties in programme use


Difficulties may emerge as a result of misunderstanding of some content element by the
teacher. Programme staff and students also encounter difficulties.

5. Identifying changes introduced by the programme staff and teachers.


6. Identifying unintended outcomes.
7. Identifying support for data from other source.
It is necessary to have an observer who takes notes.

8.8.2 SHORTCOMINGS OF OBSERVATION


1. It is a more expensive way of collecting information than questionnaires.
2. It is time consuming.
3. Coverage is limited by the number of trained personnel available.
4. It cannot be applied to many aspects of social life. For example, one cannot observe
attitudes and beliefs.
5. There are many biases due to subjectivity of the observer.
6. Observation tells what happens but not why it happened.
7. It is highly subjective when it comes to analysing data and arriving at conclusions.
8.8.3 CATEGORIES OF OBSERVATIONAL METHODS
Observational methods are often broken down into two categories: participant observation and
structured observation.

8.8.3.1 Participant Observation


In this method, the data gatherer becomes a participant in the setting under study, and observes and records
information. Since the data gatherer is a member of the group, the other members tend to act
naturally and he/she is able to obtain information that he/she could not have obtained if he/she
had used an interview schedule or a questionnaire or other methods.

Advantages

It provides reliable information.

Disadvantages

It is expensive and time consuming.


It could be subject to bias.

8.8.3.2 Direct observation

Direct observation is an excellent method of obtaining information about classroom interaction


and the quality and quantity of physical facilities and resources. For example, if an evaluator
wants to find out whether or not teachers are using recommended teaching methods, he/she
should visit their classrooms and observe their lessons. Observation is normally carried out by several trained observers who look for specific behaviours and characteristics in order to understand teacher/pupil interaction.

The following procedure is normally followed when designing an observation instrument.

• Specify the behaviour of interest.


• Specify actors of interest, e.g. teachers
• Define and break down the behaviours of interest so that scoring is reliable.
• Field test and revise the format.
• Seek support for the data from other sources such as tests, interviews and questionnaires.
• Identify unintended outcomes.

8.8.3.3 PROJECTIVE TECHNIQUES


The purpose of projective techniques is to get the respondent to unknowingly reveal his beliefs,
values, attitudes and biases as he/she responds to seemingly unrelated stimulus situations.
Below are examples of projective test items:
1. Draw a picture of an unhappy family.
2. Ask “Given this picture (a picture of a woman with many children) what advice should the
woman give to unmarried girls?”
3. Explain what happened before the situation the woman is in took place.
8.8.3.4 UNOBTRUSIVE MEASURES
Many subjects alter their behaviour when they know information is being collected about or from
them. To get around this reaction, unobtrusive measures are used. Subjects are not aware that
data for research are being collected. These measures are useful for indirectly measuring
attitudes and values. There are two kinds of unobtrusive measures:
1.Observation of activities in normal school setting, and
2.Observation of students in contrived situations.
8.8.3.5 OBSERVATION OF ACTIVITIES IN THE NORMAL SCHOOL SETTINGS
It is possible to obtain useful information about students’ attitudes towards certain issues by
observing their participation in activities related to those issues. Below are some measures:

• Attendance rate
• books and other materials taken from the library
• participation in extra-curricular activities
• leisure activities
• Volunteering to participate in community projects


• Courses selected.
8.8.3.6 OBSERVATION OF STUDENTS IN CONTRIVED SITUATIONS
Students can be observed in situations created by the teacher to test students’ reaction. Some
examples of these activities are:

• attendance at optional environmental education public lectures,


• volunteering to answer questionnaires of fictitious surveys, and
• Comments heard in response to an environmental problem created on the ground.
8.8.4 GUIDELINES FOR DEVELOPMENT
1. Make a list of unobtrusive data that are likely to yield useful information.
2. Establish contrived situations.
3. Devise means of collecting and recording data from unobtrusive measures, such as record sheets.
UNIT 9

VALIDITY AND RELIABILITY OF MEASURES


Introduction

It is important for researchers to show their audience that the measures used were valid and
reliable. Unless that is done, some people are likely to doubt the reliability and validity of your
findings and conclusions.

9.1 VALIDITY
Validity is “the degree to which the data support the inferences that are made from the measurement” (Kelly, 1999, p. 13). According to Gronlund (1976), when using the term
validity, in relation to testing and evaluation, there are a number of cautions to be borne in mind.

1. Validity pertains to the results of a test, or evaluation instrument, and not the instrument
itself. We sometimes speak of the validity of a test for the sake of convenience, but it is
more appropriate to speak of the validity of the test results or more specifically, of the
validity of the interpretations to be made from the results.
2. Validity is a matter of degree. Validity is best considered in terms of categories that specify
degree, such as high validity, moderate validity and low validity.
3. The prime requisite of a research instrument is that it must be valid. If research instruments
are not valid, the study is worthless. An instrument is considered valid only in terms of a
specific group. There are three categories of validity.
• Content Validity
• Criterion-related Validity
• Construct Validity
9.1.1 CONTENT VALIDITY
Content validity refers to the degree to which an instrument measures the subject matter and behaviours the researcher wishes to measure. Content validity can be expressed as an index. There are two types of content validity: face validity and sampling validity.
a) Face Validity
Face validity is concerned with “the extent to which an instrument measures what it appears to
measure according to the researcher’s subjective assessment.” (Nachmias and Nachmias, 1992,
p.158). To evaluate the face validity of your instrument, you should review each item in the
instrument to assess the extent to which it is related to what you wish to measure. After this, it
may be necessary for you to consult experts.

b) Sampling Validity
Sampling validity refers to the degree to which a measure adequately samples the subject matter
content and the behaviour changes under consideration. Sampling validity is commonly used in
evaluating achievement tests. The setter must ensure that the test includes questions on all the
material covered in the course.

Validation Procedure

Content validation is judgemental. It is done by asking a panel of experts in the field of study,
such as teachers, inspectors of school and curriculum developers to critically examine the items
for their representativeness of the content of the property being measured. Another method of
determining the content validity of an instrument is by having a group of experts in the field rate
each questionnaire or test item in terms of its relevance to the research questions, for example,
on a five-point scale.

Where:

1 = Not relevant

2 = Somewhat relevant

3 = Quite relevant

4 = Very relevant

Two different assessments of the same content can be correlated using Pearson’s product
moment correlation coefficient. Below is some guidance for interpreting the resulting coefficient.

High validity: r = 0.75 and above
Acceptable:    r = 0.50 – 0.74
Uncertain:     r = below 0.50

Items with coefficients of 0.5 and below can either be reworded to increase their validity or
substituted.
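As a rough illustration of this rating procedure, the sketch below (ratings invented, Python with scipy assumed) correlates two experts' item-by-item relevance ratings using Pearson's product-moment coefficient.

```python
from scipy.stats import pearsonr

# Hypothetical relevance ratings of the same ten items by two experts (1 = not relevant, 4 = very relevant)
expert_1 = [4, 3, 4, 2, 4, 3, 1, 4, 3, 2]
expert_2 = [4, 3, 3, 2, 4, 4, 2, 4, 3, 1]

r, _ = pearsonr(expert_1, expert_2)
print(f"r = {r:.2f}")   # r of 0.75 or above would suggest high agreement on relevance
```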
9.1.2 CRITERION-RELATED VALIDITY (EMPIRICAL VALIDITY)
This is the degree to which a measure is related to some other standard or criterion that is known
to indicate the construct accurately (Durrheim and Blanche, 1999, p. 83). Criterion-related validity is established by comparing an instrument with another instrument that is known to be an accurate measure of the same construct. Suppose that a researcher wants to find out
the attitudes of teachers towards the teaching profession. To evaluate the criterion-related
validity of the attitude scale, the researcher should find out whether the new instrument is related
to other measures of attitudes towards the teaching profession. If the new measure correlates
with other measures of attitudes towards the teaching profession, the researcher can conclude
that the new instrument is valid.

a) Concurrent Validity
Concurrent validity refers to the extent to which a new measuring instrument is related to a pre-existing, already validated measure of the same construct administered at about the same time.
Procedure
In order to determine the concurrent validity of a test, you should administer the test to a sample
of students and then correlate the results with several criterion scores. For example, if a teacher
wants to establish whether the results of students’ scores on an achievement test are valid, he/she
should correlate the results with pupils’ scores on other tests.
b) Predictive Validity
Predictive validity refers to the extent to which a measuring instrument predicts future outcomes that are logically related to the construct. The predictive validity of a measuring instrument such as a test is evaluated by checking scores on the instrument against the subject’s future performance.
Validation Procedure
Suppose you want to determine the predictive validity of the South Sudan Certificate of Primary Education (SSCPE) examination. You should obtain the SSCPE results of a sample of pupils and then carry out a follow-up study of the performance of those pupils in the South Sudan Certificate of Secondary Education (SSCSE) examination. You should then find the correlation between their SSCPE scores and SSCSE scores. If the correlation is high, you can conclude that the SSCPE has high predictive validity.
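A minimal sketch of this validation procedure is shown below; the SSCPE and SSCSE scores are invented for illustration, and scipy's pearsonr function is assumed to be available.

```python
from scipy.stats import pearsonr

sscpe = [62, 71, 55, 80, 68, 74, 59, 85, 66, 49]   # hypothetical primary-leaving scores
sscse = [58, 75, 52, 83, 70, 71, 61, 88, 64, 45]   # the same pupils' later secondary scores

r, _ = pearsonr(sscpe, sscse)
print(f"Predictive validity estimate: r = {r:.2f}")  # a high r supports predictive validity
```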

9.1.3 CONSTRUCT VALIDITY


A construct is a psychological quality which we assume exists in individuals and helps us to
explain aspects of their behaviour. (Gronlund, 1976, p. 93). Examples of psychological
constructs normally measured in social sciences are intelligence, academic achievement,
liberalism, conservatism, comprehension and reasoning.

Construct validity is concerned with the extent to which performance on an instrument can be interpreted in terms of the theoretical traits or psychological constructs it purports to measure. Examples of constructs are attitude, motivation, self-esteem, honesty, critical thinking, study skills and scientific attitude.

Construct validation involves correlating the measuring instrument to measures of other


constructs with which the construct is theoretically associated. For example, if you were
studying ethnic attitudes and from your literature review, you knew that tribalism is theoretically
related to conservatism, you can ask participants to complete measures of conservatism and
tribalism and correlate the two measures.

It is important to distinguish between criterion-related validations and construct validation.


Construct validation involves determining the relationship between different theoretically
associated constructs, while criterion-related validation involves determining relationship


between two different measures of the same construct (Durrheim and Blanche 2002, p. 87).

There are two main method of determining the construct validity of a research instrument:
Convergent validity and discriminant validity.

a) Convergent Validity
This method attempts to determine whether scores from different measures of the same construct, or of theoretically associated constructs, are related to one another. For example, a test of driving ability should relate well to other tests measuring driving ability.

b) Discriminant validity

This method is derived from the idea that measures of constructs that are theoretically unrelated
to each other should not be correlated with each other. If strong correlations are found, the
measures are said to lack construct validity.

9.2 RELIABILITY
“Reliability is the accuracy or precision of a measuring instrument” (Kerlinger, 1973, p. 443). Reliability refers to the consistency with which a measuring instrument yields the same results for an individual to whom the instrument is administered several times.

Reliability is the degree to which an instrument yields the same results on repeat trials. When
repeated measures of the same thing give similar results, the instrument is said to be reliable.
For example, if a teacher gives the same test on different occasions and gets different results, the
instrument contains measurement errors. Measurement errors may be due to inaccurate qualities
in the measuring instrument or to disturbances in performance on the measure. Reliability refers to
the results obtained with a research instrument and not to the instrument itself (Gronlund, 1976,
p.107). This is because the results are dependent on sample characteristics, the context and the
time the instrument was administered.

Qualitative researchers use the term dependability, i.e. the extent to which the reader can be
convinced that the results did occur as the researcher says they did.
PROCEDURES OF ESTIMATING RELIABILITY

9.2.1 Types of Reliability

There are two main methods of estimating reliability: repeated measurements and internal consistency. Both involve correlating two sets of scores: the closer the agreement between the two sets of scores, the greater the reliability. The reliability coefficient varies on a scale from 0.00 (indicating total unreliability) to 1.00 (indicating perfect reliability).

9.2.2 Repeated Measurement

This is concerned with the ability of the instrument to measure the same thing at different times.
Three methods are used:
• Test-Retest Method
• Alternate Form Method
• Parallel Form Method
9.2.3 TEST-RETEST METHOD (Measure of stability)
This type of reliability is estimated by measuring individuals on the same instrument on different
occasions and correlating the scores obtained by the same persons on the two administrations.
Symbolically,

r_xx1 = S_t^2 / S_x^2

Where:
x     = performance on the first measurement
x1    = performance on the second measurement
r_xx1 = the correlation coefficient between x and x1 (the reliability estimate)
S_t^2 = the estimated variance of the true scores
S_x^2 = the calculated variance of the observed scores

Source: Nachmias and Nachmias (1992, p. 164)

The test - retest procedure has been criticised:

i) Some subjects may memorise answers and, if the second administration is given too soon, do well during the second administration. To some extent this problem can be overcome by using a longer interval between test administrations to give memory a chance to fade (Kubiszyn and Borich, 2000, p. 312).

ii) The scores of the individuals may be affected by the mood of the individual.

iii) If too long a time elapses before the second administration, learning takes place and the second recorded result will be different from the first.
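In practice, the test-retest estimate is obtained by correlating the scores from the two administrations, as in the brief sketch below (scores invented, scipy assumed).

```python
from scipy.stats import pearsonr

first_administration  = [55, 62, 70, 48, 81, 66, 73, 59, 90, 64]   # hypothetical first scores
second_administration = [57, 60, 72, 50, 78, 68, 75, 58, 88, 63]   # the same pupils retested later

r, _ = pearsonr(first_administration, second_administration)
print(f"Test-retest reliability estimate: r = {r:.2f}")
```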
9.2.4 ALTERNATIVE FORM METHOD

This method is similar to the test-retest method except that the questions on the second
instrument are renumbered to create an alternate form of the first instrument (Kelly, 1999, p.
130).

9.2.5 PARALLEL-FORM METHODS


In this method, measures which are exactly equivalent to each other are administered to the same
group of individuals on the same occasion. The two sets of scores are correlated to obtain an
estimate of reliability. The main limitation of this method is the difficulty in making parallel or
equivalent forms of the same instrument.
9.2.6 INTERNAL CONSISTENCY FORMS OF RELIABILITY

Internal consistency is estimated by determining the degree to which each item in a scale
correlates with each other item. Internal consistency reliability is based on a single
administration of a measure.

Internal consistency reliability indicates the degree of homogeneity among the items in an
instrument. There are three types of internal consistency reliability: (1) Split-half, (2) Kuder-Richardson, and (3) Cronbach alpha.

9.3.1 SPLIT-HALF RELIABILITY
This is another method of estimating reliability. A measure is split into two halves, commonly the odd-numbered and the even-numbered items. Each half is treated as a separate scale and scored accordingly. Reliability is then estimated by correlating the scores on the two half-scales: individuals who score high on one half should also score high on the other. The Spearman-Brown prophecy formula is then used to estimate the reliability of the full-length instrument.

r_xx1 = 2 r_oe / (1 + r_oe)

Where:
r_xx1 = the reliability of the original (full-length) test
r_oe  = the reliability coefficient obtained by correlating the scores on the odd statements with the scores on the even statements

Source: Nachmias and Nachmias (1992, p. 165)

The Pearson product-moment correlation coefficient is normally used if the measures are at the interval or ratio level. One weakness of this method is that the correlation between the two halves depends upon the way the researcher divides the items.
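A compact sketch of the split-half procedure with the Spearman-Brown correction is given below; the item scores are invented and numpy is assumed to be available.

```python
import numpy as np

# Hypothetical data: 6 respondents (rows) answering 8 items scored 0/1 (columns)
items = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 0, 0],
])

odd_half  = items[:, 0::2].sum(axis=1)   # total score on items 1, 3, 5, 7
even_half = items[:, 1::2].sum(axis=1)   # total score on items 2, 4, 6, 8

r_oe = np.corrcoef(odd_half, even_half)[0, 1]
r_full = 2 * r_oe / (1 + r_oe)           # Spearman-Brown prophecy formula
print(f"half-test r = {r_oe:.2f}, estimated full-test reliability = {r_full:.2f}")
```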

9.3.2 THE KUDER-RICHARDSON METHOD


This method is used in tests for which there is a right and wrong answer for each item.
(McMillan, 1992, p. 107).

Reliability estimate:

KR21 = (n / (n − 1)) × [1 − m(n − m) / (n × SD^2)]

Where:
n  = the number of items on the test
m  = the mean (arithmetic average) of the test scores
SD = the standard deviation of the test scores
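Because KR-21 needs only the number of items, the mean and the standard deviation of the total scores, it can be computed directly, as in the illustrative sketch below (the scores and the 20-item test are invented).

```python
import statistics

scores = [12, 15, 9, 18, 14, 11, 16, 13, 10, 17]   # hypothetical total scores on a 20-item test
n = 20                                             # number of items

m = statistics.mean(scores)
sd = statistics.pstdev(scores)                     # population SD; use stdev() for the sample SD

kr21 = (n / (n - 1)) * (1 - (m * (n - m)) / (n * sd ** 2))
print(f"KR-21 = {kr21:.2f}")
```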


9.3.4 CRONBACH-ALPHA

The Cronbach-alpha method is used with instruments in which there is no right or wrong answer
to each item, such as an attitude scale.

Formula:

α = N r / (1 + (N − 1) r)

Where:
N = the number of items
r = the average inter-item correlation among the items

The following may help you to assess the reliability of a research instrument.

High:         .80 – .90
Acceptable:   .60 – .80
Unacceptable: below .50
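The alpha formula above, which is based on the average inter-item correlation, can be computed as in the sketch below; the attitude-item responses are invented and numpy is assumed to be available.

```python
import numpy as np

# Hypothetical responses: 8 respondents (rows) to 4 attitude items on a 1-5 scale (columns)
items = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
    [3, 4, 3, 3],
    [5, 4, 5, 5],
])

N = items.shape[1]
corr = np.corrcoef(items, rowvar=False)          # inter-item correlation matrix
r_bar = corr[np.triu_indices(N, k=1)].mean()     # average of the off-diagonal correlations

alpha = (N * r_bar) / (1 + (N - 1) * r_bar)      # formula given above
print(f"Cronbach's alpha = {alpha:.2f}")
```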

9.3.5 THE RELATIONSHIP BETWEEN RELIABILITY AND VALIDITY

A good measuring instrument is both valid and reliable. However, an instrument can be reliable
but not valid, but it cannot be valid without being reliable. A common analogy is a person firing a rifle at a target: shots that cluster tightly but away from the bull’s-eye are like a measure that is reliable but not valid, whereas shots that cluster tightly on the bull’s-eye are like a measure that is both reliable and valid.

9.4 IMPORTANT CHARACTERISTICS OF AN EVALUATION INSTRUMENT


In designing evaluation instruments, you should ensure that they have high validity and reliability. Validity refers to the extent to which an evaluation instrument measures what it is designed to measure. The type of validity most often examined by educationalists is content validity.
Face validity and sampling validity
Face validity of a research instrument is the extent to which it appears relevant and appropriate
according to the evaluator’s subjective assessment.

Sampling validity is concerned with the extent to which the content of a measuring instrument
adequately samples the content of the property being measured. One commonly used method of
determining the face validity of a research instrument is to have a group of experts such as
teachers, curriculum specialists and teacher educators determine the extent to which the
instrument measures what it is supposed to measure. Items that do not have validity are either
replaced or reworded.
Reliability refers to the accuracy or precision of a measuring instrument. A minimal requirement
for an evaluation instrument should be that the respondent gives the same answer to the same
question if the circumstances have not changed.
UNIT 10

ETHICAL CONSIDERATIONS IN SOCIAL RESEARCH

10.1 Informed Consent


The researcher should obtain the informed consent of research participants. Hesse-Biber (2016) states that informed consent involves implementing a range of procedures when using humans as subjects. The term informed consent explicitly emphasises that the subjects of the research must have adequate knowledge (Faden & Beauchamp, 1986; Israel & Hay, 2006), meaning that it is the responsibility of researchers to ensure that they give adequate information about the research project to their subjects. Informed consent is a cornerstone of ethical standards which must be observed at all times when dealing with human subjects.
In some cases people mistakenly think that informed consent is the same as authorisation. Authorisation is a written approval from an individual permitting the disclosure and/or use of his/her data for research purposes only. Research ethics is a system of accepted behaviour that researchers ought to display when designing and conducting research. The researcher is expected to pay careful attention to the following ethical issues.

10.2 Anonymity
Anonymity refers to a situation whereby the researcher cannot link the information a participant gave to that particular participant. Research subjects are more likely to be frank and to provide accurate data if they believe that no one can identify them or link them to their answers. Anonymity is most easily achieved by not collecting names in the first place; many surveys are conducted anonymously, and participants are sometimes explicitly instructed not to sign the questionnaire or to write anything on it, such as their name, that could identify them.

In some cases, however, the researcher may want to preserve anonymity while keeping access to the same participants over a prolonged period, so as to ask them the same questions and see whether there are any changes after a few months or even years. This can be achieved by assigning participants code names or personal ID numbers and instructing them to use these aliases whenever the survey is conducted (Withrow, 2013).

Information obtained anonymously helps to safeguard the privacy of participants. Researchers sometimes pledge anonymity in the cover letter or by word of mouth. At the same time, it is often necessary for participants to be identifiable, for instance when follow-ups or reminders have to be sent to those who have not responded or who will be needed in a second round of the study.
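
As a minimal sketch of the coding approach described above, assuming hypothetical participant names, a hypothetical key file and an arbitrary code format, the snippet below assigns each participant a random ID code and stores the name-to-code key separately from the survey data.

```python
# Hypothetical sketch: assigning pseudonymous ID codes to participants so that
# follow-up rounds can be matched without storing names alongside the data.
import csv
import secrets

participants = ["Participant A", "Participant B", "Participant C"]  # hypothetical names

# Generate a random, non-identifying code for each participant.
key = {name: f"P-{secrets.token_hex(3).upper()}" for name in participants}

# The key file links codes to identities; in practice it should be kept
# secure and separate from the survey responses themselves.
with open("id_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "code"])
    for name, code in key.items():
        writer.writerow([name, code])

# Only the codes travel with the survey data.
print(list(key.values()))
```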

10.3 Confidentiality
The researcher has an obligation to "protect the anonymity of the research participants and the secrecy of their disclosures unless they consent to the release of personal information". Confidentiality can be threatened when third parties are involved in the study, for instance a sponsor of the study or a court seeking to identify research participants. Interference by a sponsor is comparatively easier to avert than a court order.

Nevertheless, when entering into a research agreement with a sponsoring organisation or agency, the researcher should state in the agreement that no individual identities will be disclosed under any circumstances; if the sponsor objects, the researcher should decline the agreement before carrying out the study. The researcher promises participants that all information will be handled confidentially and that the privacy of data sources will be respected by not revealing or divulging information without the participants' permission. When reporting the findings, it is the researcher's duty to ensure that information about any participant is not linked to his or her identity under any circumstance.

There are several steps researchers need to follow when dealing with the confidentiality of participants. Among them: agree on the parameters of confidentiality by informing research subjects how their data will be used and what will be done with research materials such as documents, pictures, and audio and video footage, and record their consent to these arrangements.

10.4 Protection from harm


When conducting social research, researchers should not deliberately distress or hurt research subjects, irrespective of whether they volunteered to take part in the study or not. Perhaps the clearest instance of harm to participants is the revelation of sensitive information that can embarrass or jeopardise them, directly or indirectly, in relation to their friendships, homes, jobs, and so on. The researcher must watch for such direct or indirect threats and guard against them, since they may harm participants psychologically in the course of the study.

Participants are often asked to disclose attitudes they feel are unpopular, or demeaning personal details such as receipt of welfare payments or low income; divulging such information usually makes them feel threatened or uncomfortable (Kumar, 2011). Researchers should try by all means to ensure that research participants are protected from unjustifiable interference, anguish, disgrace, physical discomfort, personal humiliation, emotional harm and any other form of harm (Stevens, 2013). The researcher should do everything possible to leave participants feeling as they were before taking part in the study.

10.5 Competence:
Only persons who have the mental capacity to provide consent should participate in the study.

10.6 Reporting Results:


Researchers should not manipulate or falsify data to establish predetermined results. Participants
should not be harmed as a result of participating in research. Stacks and Hocking (1991) have
suggested the following rules for ethical scholarship.

1. Give appropriate credit for one’s ideas. The most fundamental rule of all scholarship is to
credit other people’s ideas and statements, whether a direct quotation from a source or the
paraphrasing of ideas.
2. Report conflicting evidence and all that you find.
3. Describe the flaws in your research.
4. Use primary sources whenever possible.

5. Honesty and integrity go hand in hand.


UNIT 11

QUICK REFERENCE TO APA VERSION SIX

11.1 What is APA?


APA is one of many referencing styles used in academic writing.

APA stands for American Psychological Association. The Association outlines the style in the
Publication manual of the American Psychological Association (APA, 6th ed.).

11.2 Other types of referencing styles


MLA. MLA is most often used in the arts and humanities, particularly in the USA.

Harvard. Harvard is very similar to APA. Harvard referencing is the most widely used referencing style in the UK and Australia.

Vancouver. The Vancouver system is mainly used in medical and scientific papers.

Chicago and Turabian. These are two separate styles but are very similar, just like Harvard and
APA.

11.3 Why reference?


When you reference, you use a standardised style to acknowledge the source of information used in your assignment. It is important, both morally and legally, to acknowledge someone else's ideas or words that you have used (intellectual property ownership, or copyright). Academic writing encourages paraphrasing information you have researched and read. Paraphrasing means re-wording something you have read into your own words. If you use someone else's words or work and fail to acknowledge them, you may be accused of plagiarism and of infringing copyright. Referencing correctly enables the marker or reader to locate the source of the information.

Readers can then verify the information or read further on the topic. Referencing also allows you to retrace your steps and locate information you have used for assignments, and to discover further views or ideas discussed by the author. Everything you cite in text appears in your reference list and, likewise, everything that appears in your reference list will have been cited in text. Check the reference list before handing in your assignment.

11.4 Two main parts to referencing


The first part is indicating, within the text of your assignment, the sources of the information you have used to write it. This demonstrates support for your ideas, arguments and views, and is sometimes referred to as citing in text, in-text citations or text citations. The second part is the construction of a reference list. The reference list shows the complete details of everything you cited; it should appear as an alphabetical list on a separate page at the end of your assignment.

11.5 How to reference? In-text citations

Even though you have put someone else's ideas or information in your own words (i.e. paraphrased), you still need to show where the original idea or information came from. This is all part of the academic writing process. When citing in text within an assignment, use the author's (or editor's) last name followed by the year of publication.

Citing at the end of a sentence: Water is a necessary part of every person's diet and, of all the nutrients a body needs to function, it requires more water each day than any other nutrient (Whitney & Rolfes, 2011).

Citing within a sentence: Whitney and Rolfes (2011) state that the body requires many nutrients to function but highlight that water is of greater importance than any other nutrient. Water is an essential element of anyone's diet, and Whitney and Rolfes (2011) emphasise that it is more important than any other nutrient.
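
The two citation forms shown above follow a simple pattern: author surname(s) and year in parentheses at the end of a sentence, or the surnames in the sentence followed by the year in parentheses. The small sketch below merely illustrates that pattern for the two-author case; the helper names are made up and the logic is deliberately simplified (it does not cover works with three or more authors).

```python
# Hypothetical sketch: building the two in-text citation forms shown above,
# simplified to the two-author case.
def citation_end_of_sentence(authors, year):
    """Parenthetical form, e.g. (Whitney & Rolfes, 2011)."""
    return f"({' & '.join(authors)}, {year})"

def citation_in_sentence(authors, year):
    """Narrative form, e.g. Whitney and Rolfes (2011)."""
    return f"{' and '.join(authors)} ({year})"

print(citation_end_of_sentence(["Whitney", "Rolfes"], 2011))  # (Whitney & Rolfes, 2011)
print(citation_in_sentence(["Whitney", "Rolfes"], 2011))      # Whitney and Rolfes (2011)
```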

11.6 Reference list

Whitney, E., & Rolfes, S. (2011). Understanding nutrition (12th ed.). Australia: Wadsworth
Cengage Learning.

11.7 Basic Rules

1. The reference list is arranged in alphabetical order of the authors' last names.

2. If there is more than one work by the same author, order them by publication date – oldest to newest.

3. If there is no author, the title moves to the author position and the entry is alphabetised by the first significant word. Exclude words such as "A" or "The". If the title is long, it may be shortened when citing in text.

4. Use "&" instead of "and" when listing multiple authors of a source.

5. The first line of the reference list entry is left-hand justified, while all subsequent lines are consistently indented.

6. Capitalise only the first word of the title and of the subtitle, if there is one, plus any proper names.

7. Italicise the title of the book, the title of the journal/serial and the title of the web document.

8. Do not create separate lists for each type of information source. Books, articles, web documents, brochures, etc. are all arranged alphabetically in one list.
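
Rules 1 and 2, alphabetical order by author surname and then oldest to newest for the same author, can be illustrated with a short sketch. The entries below are shortened versions of works cited in this document; the sketch only demonstrates the ordering logic, not full APA formatting.

```python
# Hypothetical sketch: ordering reference-list entries by author surname,
# then by publication year (oldest to newest) for the same author.
entries = [
    {"author": "Whitney, E., & Rolfes, S.", "year": 2011, "title": "Understanding nutrition"},
    {"author": "Kumar, R.", "year": 2011, "title": "Research methodology"},
    {"author": "Creswell, J. W.", "year": 2017, "title": "Designing and conducting mixed methods research"},
    {"author": "Creswell, J. W.", "year": 2012, "title": "Qualitative inquiry and research design"},
]

ordered = sorted(entries, key=lambda e: (e["author"].lower(), e["year"]))

for e in ordered:
    print(f'{e["author"]} ({e["year"]}). {e["title"]}.')
```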

Personal communication

This refers to letters, including email, interviews, telephone conversations and discussions on
placement or work experience. Personal communications are cited in text only and are NOT
included in the reference list. Refer to the APA manual (2010, p. 179).

In text citation:

No-tillage technologies have revolutionised the way arable farmers manage their farming
operation and practices (W.R. Ritchie, personal communication, September 30, 2014).

Citing Journals

Gore. (2003). Research for beginners. Journal of Social Science Research, 34(1), 39-50.

Do not include the word "volume". When citing an online journal article, add a retrieval statement, for example: Retrieved from http://www.jlz.org

REFERENCES AND FURTHER READING


Bogdan, R., & Biklen, S. K. (1998). Qualitative research for education: An introduction to theory and methods (3rd ed.). Boston: Allyn & Bacon.

Creswell, J. W. (2012). Qualitative inquiry and research design: Choosing among five approaches. California: Sage Publications.

Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches. California: Sage Publications.

Creswell, J. W. (2017). Designing and conducting mixed methods research. California: Sage Publications.

Denzin, N. K. (2017). The Sage handbook of qualitative research (5th ed.). London: Sage Publications.

Faden, R. R., & Beauchamp, T. L. (1986). A history and theory of informed consent. New York: Oxford University Press.

Hesse-Biber, S. N. (2016). The practice of qualitative research: Engaging students in the research process. California: Sage Publications.

Israel, M., & Hay, I. (2006). Research ethics for social sciences. California: Sage Publications.

Kangethe, S. N., Wakahiu, J., & Karanja, M. (2016). Fundamentals of research methods in education: A students' handbook. Nairobi: CUEA Press.

Kerlinger, F. N. (2000). Foundations of educational research. New York: Holt, Rinehart and Winston.

Kothari, C. R., & Garg, G. (2014). Research methodology: Methods and techniques (3rd ed.). Delhi: New Age International Publishers.

Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.

Kumar, R. (2011). Research methodology: A step by step guide for beginners (3rd ed.). London: Sage Publications.

Mugenda, O. M., & Mugenda, A. G. (2003). Research methods: Quantitative and qualitative approaches. Nairobi: African Centre for Technology Studies.

Ogula, P. A. (2008). A handbook in education research. Nairobi: Kermit Publishers.

Oso, W. Y., & Onen, D. (2011). A general guide to writing research proposal and report: A handbook for beginning researchers. Nairobi: Jomo Kenyatta Foundation.

Singleton, R. A., & Straits, B. C. (1999). Approaches to social research (3rd ed.). New York: Oxford University Press.

Stacks, D. W., & Hocking, J. E. (2001). Essentials of communication research. USA: Allyn & Bacon.

Stevens, M. (2013). Ethical issues in qualitative research. London: King's College London, Social Care Workforce Research Unit.

Walliman, N. (2006). Social research methods. London: Sage Publications.
