Research Methods Self Study Notes
(MISAMS)
Self-Study Materials
By
SEPTEMBER, 2022
COURSE OUTLINE
Mission statement
Millennium Institute seeks to provide an outstanding holistic education to students from all
backgrounds through applied and theoretical research to generate solutions to national and
regional challenges.
Course Code: UCCU2105
Course Title: Research Methodology (Methods)
Tutor: Mr. John Taban Luka
Email:[email protected]
Tel: +211(0)929766623
Goal
This course aims to equip students with basic knowledge and skills in social and educational
research.
Course description
The purpose of this course is to develop learners’ understanding of the methods of undertaking research studies, with a focus on economic problems in society. The course examines the different methods of acquiring knowledge, the role of educational research, the identification of a research problem and the stating of research questions. It also covers the review of literature; the meaning, purpose and principles of research designs; and measurement design. It further acquaints students with the methods of data collection and analysis (descriptive and inferential statistics), the interpretation of data, and proposal and research report writing. General application of the APA referencing style will be emphasized.
Learning outcomes
By the end of this course the students should be able to:
1) Define the term research
2) Explain the different methods of acquiring knowledge
3) Describe the main stages in the research process
4) Analyse the major research designs
5) Compare the different techniques of probability and non-probability sampling
6) Describe different data collection instruments
7) Discuss the main techniques of analysing data
8) Report and utilize research data
9) Identify theoretical issues and practical issues of relevance to business and educational
research
10) Apply the research process in daily tasks
11) Use the APA (6th edition) referencing style in academic writing and research
12) Write a research proposal on a topic of choice
Week one lecture
Definitions of Research, the traditional methods of acquiring knowledge, and classification of
Research by purpose.
Week two lecture
The Scientific Method, Basic elements in Research, Paradigms in Social science research.
Week three lecture
Nature of Research, the Research process.
Week four lecture
3
Review of related literature, research design, criteria for selecting research designs.
Week five lecture
Quantitative research designs, qualitative research designs.
Week six lecture
Sample and sampling designs, probability sampling techniques, and non-probability sampling
techniques.
Week seven lecture
Data collection instruments: questionnaire, interviews, observation schedule, focus group discussions.
Week eight lecture
Validity and Reliability.
Week nine lecture
Research proposal writing, quick reference to APA version six.
Week ten lecture
Ethical considerations in social research and Revisions
UNIT ONE
Nature of Educational research
1.1 Introduction
This unit examines the nature of educational research. The learner is exposed to the concept of research and its applicability in the field of education, the methods of acquiring knowledge, and the properties of scientific research.
1.2 The meaning of Educational Research
Ogula (2009) defines it as a systematic, controlled and empirical investigation of natural or social phenomena. Leedy (1989) defines research as a systematic investigation into and study of materials, sources, etc. in order to establish facts and reach new conclusions; an endeavor to discover new facts or collate old ones.
1.3 Different Methods of Acquiring Knowledge
1.3.1 Authority
One of the most common sources of knowledge is the authorities in different spheres of
knowledge. In many societies, people rely on the wisdom of elders who are recognised as
having better understanding of the world than ordinary members of society. Thus, statements
and pronouncements by experts in various areas of knowledge are seldom challenged or
questioned.
Examples of such people are elderly people in rural areas, heads of religious
organisations and dictators. A major weakness of this method is that authorities may make false statements in order to justify and preserve their status. However, as Dawes (1994), quoted by Kerlinger and Lee (2000), has said: “We must take a large body of facts and information on the basis of authority. Thus, it should not be concluded that the method of authority is unsound; it is unsound only under certain circumstances.” (p. 6)
they use rituals, ceremonies and unusual language. Like the first method, this approach is based
on faith.
Inductive Reasoning
This type of reasoning involves arriving at a general conclusion on the basis of a number of specific observations.
Examples
Every national school in the sample is headed by a principal. Therefore, all national
schools are headed by principals. Nerima, Kerubo, Mutua, Njeri and Onyango who teach
mathematics studied mathematics at college. Therefore, all mathematics teachers studied
mathematics at college.
The major limitation of inductive explanations, in comparison with universal laws, is that certain conclusions cannot be drawn about specific cases (Nachmias and Nachmias, 1992, p. 11).
Deductive Reasoning
This type of reasoning involves arriving at specific conclusions based on an a priori or self-
evident proposition.
Examples
If Polong teaches mathematics, then he studied mathematics at college.
Polong teaches mathematics. Therefore, Polong studied mathematics at college (Did he?).
Human beings reproduce. Amina is a human being. Therefore, Amina reproduces (Does she?). In deductive reasoning the premises lead to the conclusion: if the premises are true, then the conclusion must be true.
1.4 The scientific method
What is science?
The word science is derived from the Latin noun scientia (meaning knowledge) and the verb scire (meaning to know).
The scientific method of acquiring knowledge is a systematic process of investigating a research
problem following some principles.
Aims of Science
The basic aim of science is to generate theories. Theories are tentative explanations.
More specifically, the aims of science are to describe, to explain and to predict. The first step in
knowing is the description of the object or situation of the study as accurately as possible.
During the second step, an explanation of why a given event or behaviour has taken place is
given, i.e. the relationship between the described facts as expressed. The explanation of an
event or behaviour should allow the social scientist to make a prediction of some events under
well-defined conditions.
1.4.1 Properties of Scientific Research
The following are the main properties of scientific research:
1. Scientific research is empirical. Only knowledge gained through experience or the senses –
touch, sight, hearing, smell or taste - is acceptable. The empirically oriented social scientist
goes into the social world and makes observations about how people live and behave.
However, Nachmias and Nachmias (1992) cautioned against interpreting empiricism narrowly, in terms of the five senses alone.
2. It is systematic and logical. Observations are done systematically one at a time, starting with
description, explanation and finally prediction. In addition, the correct order must be
followed.
3. It is replicable. Since the observation is objective, anyone carrying out a study in the same
circumstances should come up with the same findings.
4. It is self-correcting. It has in-built mechanisms to protect investigators from error as far as is
humanly possible. In addition, research procedures and results are open to public scrutiny by
other researchers.
Concurrent mixed methods
These are those in which the researcher collects both quantitative and qualitative data at the same time and then integrates the information in the interpretation of the overall results. Also, in this design the researcher may embed one smaller form of data within another, larger data collection in order to analyse different types of questions.
Transformative mixed methods
These are those in which the researcher uses a theoretical lens as an overarching perspective within a design that contains both quantitative and qualitative data. This lens provides a framework for the topics of interest, the methods for collecting data, and the outcomes or changes anticipated by the study. Within this lens, the data collection methods could involve either a sequential or a concurrent approach.
How methods can be mixed
• Two types of research question: one fitting a quantitative approach and the other a qualitative approach.
• The manner in which the research questions are developed: preplanned (quantitative) versus participatory/emergent (qualitative).
• Two types of sampling procedure: probability versus non-probability.
• Two types of data collection procedure: surveys or questionnaires (quantitative) versus focus groups (qualitative).
• Two types of data: numerical versus textual (or visual).
• Two types of data analysis: statistical versus thematic.
• Two types of conclusions: objective versus subjective interpretations.
Sources of research problems
Previous research findings in professional journals, reports, seminars and conferences; personal observations, experiences and the media; existing problems in the work place; technological and scientific advancements; replication; and discussions with experts.
3.8 Characteristics of a research problem
• It should be interesting to you.
• It should have practical value to you, your work place and your community.
• It should not be over-researched.
• It should be within your experience and expertise.
• It can be finished within the allocated time.
• It should not carry legal or moral impediments.
3.9 Review of Related Literature
A review of literature is a broad, comprehensive, in-depth, systematic and critical review of scholarly publications, including unpublished scholarly print materials, audiovisuals and personal communications. The literature is read before a study is conducted to ensure that the study is based on a gap in knowledge to be filled by the study.
3.9. 1 Purpose of literature review
• To find out the research gap.
• To be familiar with authors in the field.
• To select research studies in my field.
• To identify the most appropriate research methods to be used in my study.
• To avoid duplicating the study: by reading what others have done, I will be able to avoid repeating them.
3.9.2 Stages in developing literature review
• Stage 1: Identify a research topic.
• Stage 2: Review secondary sources to get an overview of the topic.
• Stage 3: Identify primary sources to search.
• Stage 4: Conduct searches.
• Stage 5: Organise information.
• Stage 6: Evaluate the research reports.
• Stage 7: Write the literature review.
3.9.3 Materials to include in literature review
Mention the problem being addressed; state the central purpose of that research; briefly state information about the population, sample, sampling techniques and data collection instruments; include the findings, conclusions and recommendations; and provide your own interpretation of the other researchers’ findings as well as a summary. While the literature review presents other people’s ideas, your voice should remain visible. When paraphrasing a source that is not your own, be sure to represent the author’s opinions accurately and in your own words.
For each study reviewed, give a summary of the problem, the research questions and hypotheses (if applicable), the research design and methodology, a description of the population and sample, the instruments used and the method(s) of data analysis, and the findings, conclusions and recommendations.
3.9.4 Example Literature Review.
After you have identified a publication for use in your proposed study, you should abstract the
following information on a piece of paper. Name(s) of author(s), date of publication, name of
book, journal etc. in which the reviewed publication appears, place of publication and name of
publisher. For example: Bless, C. and Achola, P. (1988). Fundamentals of Social Research Methods: An African Perspective. Lusaka: Government Printing Department.
Example two
13
Juma (1994) conducted a study of secondary school students’ achievement in Kiswahili. The
researcher administered a forty-item achievement test to a sample of two thousand form four
students from fifty secondary schools in Kenya. He analysed the data using percentages, mean scores, standard deviations and a simple analysis of variance. The results showed a significant relationship between teachers’ attitudes towards Kiswahili and students’ achievement.
3.9.6 Types of Literature Review
Historical reviews: break the literature down into phases or stages of development. Thematic reviews: structured around different themes. Theoretical reviews: trace theoretical developments in a particular study area. Empirical reviews: attempt to summarise the findings of research studies.
3.9. 7 Challenges facing Literature review
Copying (plagiarism), poor citation, use of very old literature, citing irrelevant literature, and criticising rather than critiquing.
3.9. 8 Research Questions
Research questions refer to questions which the researcher would like answered by undertaking
the study (Mugenda & Mugenda, 2003). The difference between research questions and
objectives is that a research question is stated in question form while an objective is stated in statement form. Where both objectives and research questions are included in a research proposal, the objectives should be broader and the research questions more specific. Research questions are the ones that guide the study.
3. 9. 9 Hypothesis
A hypothesis is a tentative answer to a research problem, written in declarative sentence form. There are two types of hypothesis:
a- Null hypothesis b- Alternative hypothesis
3.9.10 Theoretical Framework
Every study must be anchored within a theory in the discipline, and students are expected to review theories that are related to the study they are undertaking. It is suggested that in the theoretical framework section, which is in chapter one, the student reviews a theory related to his/her study. There is a tendency for students to review other theories in the discipline that are not related to the study.
3.10 Conceptual Framework
It is a framework usually developed by the researcher to demonstrate the interrelationships
between variables in the study. The relationship is usually presented graphically or
diagrammatically and is usually supported by an explanation. The purpose of the conceptual
framework is to help the reader to quickly see the proposed relationship.
UNIT FOUR
RESEARCH DESIGN AND STRUCTURE OF AN EDUCATIONAL RESEARCH
4.1 Introduction
This unit examines the design and the structure of educational research. It explores the major research designs, namely experimental and non-experimental designs.
4.2 Research Designs
According to Ogula (2009) a research design is a strategy for planning and conducting a study.
It is a blueprint that guides the planning and implementation of the research (i.e. data collection and analysis). According to Kangete, Wakahiu and Karanja (2016), a research design is the fundamental framework that holds the research venture together.
• In quantitative research, research designs are specified before conducting research and cannot be changed once research has started.
• In qualitative research, however, designs are more flexible.
• Researchers are free to change the research design while the research is being carried out.
4.3 Designs under the quantitative research paradigm
4.3.1 EXPERIMENTAL DESIGNS
Experimental designs are used to study cause and effect relationships among two or more
variables. The main difference between a true experimental design and other designs is the fact
that research units are assigned to the treatment and control conditions at random.
The main purpose of experimental research is to study causal links; to determine whether a given
variable x has an effect on another variable y, or whether changes in one variable produce
changes in another variable.
The three essential elements of an experimental design are:
1. Randomisation – The researcher assigns participants to different groups.
2. Manipulation – The researcher does something at least to some of the participants in the
research.
3. Control – The researcher introduces one or more controls over the experimental situation.
Experimental research deals with the manipulation of variables to see whether changes in one variable result in changes in other variables. In experimental studies, one group, the experimental or treatment group, receives the
treatment; the other group, a control group, receives a neutral treatment. The two groups are
compared before the treatment using a pre-test. After the treatment, a post-test is administered to
the two groups.
This situation is usually presented diagrammatically, with pre-test and post-test measurements shown for both the treatment and control groups.
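The randomisation step can be sketched in code. The example below is not from the course text; the pupil names are hypothetical and merely illustrate how research units might be assigned to the two conditions at random.
Example (Python):
import random

subjects = [f"Pupil_{i}" for i in range(1, 41)]  # hypothetical list of 40 research units

random.seed(2022)          # fixed seed so the illustration is reproducible
random.shuffle(subjects)   # randomisation: each subject has an equal chance of either condition

midpoint = len(subjects) // 2
treatment_group = subjects[:midpoint]   # receives the treatment
control_group = subjects[midpoint:]     # receives the neutral (control) condition

print(len(treatment_group), len(control_group))  # 20 20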
According to Frey, Botan, Friedman & Kreps (1991, p.156) three requirements are necessary for
establishing a causal relationship between an independent and dependent variable. All three
requirements are necessary for inferring causality; none is sufficient in and of itself. These are:
1. The independent variable must precede the dependent variable.
2. The independent and dependent variable must be shown to co-vary, or go together.
3. The changes observed in the dependent variable must be the results of changes in the
independent variable and not some other unknown variable.
4.3.2 True Experimental Designs
The true experimental designs are the most exact method of establishing cause and effect
relationships. They are called true experimental designs because they provide adequate controls
for all sources of internal invalidity. The experimental method was developed in the natural
sciences where consistent causal relationships are easy to establish. However, the accumulated
experience shows that in the social sciences, causal relationships are difficult to measure and true
experimental designs are rarely used.
The true experimental design is considered the most useful design to demonstrate programme
impact if conditions of randomisation in selection of participating units and in the assignment of
treatment and control conditions at random are met. Research units can be individuals (students,
teachers, parents etc.), groups of individuals, institutions, regions etc.
There are three true experimental designs: the pre-test post-test control group design, the post-test only control group design, and the Solomon four-group design. In the Solomon four-group design, only some of the groups are pre-tested; after the treatment, all four groups are post-tested.
4.3.5 Quasi-experimental designs
Quasi-experimental designs have been defined as “experiments that have treatments, outcome
measures and experimental units, but do not use random assignment” of subjects. The purpose of
a quasi-experimental design is to approximate a true experimental design. In a quasi-
experimental design, subjects are not assigned randomly to conditions, although the independent
variable(s) may be manipulated.
It is not usually possible in social science research to apply true experimental designs because of
the difficulty of obtaining equivalent groups or achieving random assignments of subjects to the
two groups. Even when equivalent groups are selected at the beginning of the research project,
differential dropouts of subjects will result in non-equivalent groups during the research project.
Besides, in educational research, it is sometimes not feasible to divide intact classes to provide random samples.
For situations where it is not feasible or desirable to apply true experimental designs, quasi-
experimental designs are normally used. The distinguishing feature of quasi-experimental design
is that the subjects are not randomly selected and assigned to treatment and control groups. The
sample of participants in quasi-experimental designs comes from intact groups, such as students
in classrooms.
4.3.6 Types of quasi-experimental designs
There are three basic designs for quasi-experiments. These are:
1. A non-equivalent control group design.
2. Interrupted time-series designs.
3. Multiple time-series design
Each of these designs is described briefly below.
1. Non-Equivalent Control Group Design
Non-equivalent control group design retains the idea of a treatment group and a control group,
without random assignment.
PROCEDURE
Correlational Research Design
Correlational research involves collecting data on two or more quantifiable variables on the same group of subjects and computing a correlation coefficient. If two variables are highly related, scores on one variable can be used to predict scores on the other variable. The design explains how characteristics vary together and provides rigorous and replicable procedures for understanding relationships. It finally determines to what degree a relationship exists between the quantifiable variables.
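As an illustration of the basic computation, the sketch below uses hypothetical data (not drawn from the notes) to collect scores on two quantifiable variables from the same group of subjects and compute a Pearson correlation coefficient.
Example (Python):
import numpy as np

study_hours = np.array([2, 4, 5, 7, 8, 10, 12, 13])       # variable x (hypothetical)
achievement = np.array([41, 50, 48, 62, 66, 71, 80, 84])  # variable y (hypothetical)

r = np.corrcoef(study_hours, achievement)[0, 1]  # Pearson product-moment coefficient
print(round(r, 2))  # a value close to +1 indicates a strong positive relationship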
Advantages
- This method permits one to analyse inter-relationships among a large number of variables
in a single study.
- It allows one to analyse how several variables either singly or in a combination might
affect a particular phenomenon.
- The design provides information concerning the degree of relationship between variables
being studied.
Disadvantages
- Correlation between two variables does not necessarily imply causation.
- A correlation coefficient is an index and therefore any two variables will always show a
relationship even when common sense dictates that such variables are not related.
UNIT FIVE
5.5 VARIETIES OF QUALITATIVE RESEARCH
According to Creswell (2012), qualitative research can be classified into five main approaches or traditions. These are:
1. Biography
2. Phenomenology
3. Grounded theory
4. Ethnography
5. Case study
5.1 CASE STUDY METHOD
A case study is an intensive study of all relevant materials about a social object in its natural
context. The social object could be an individual, event, school, group or community. The
purpose of a case study is to obtain insights and explanations about the object of study. A case
study involves detailed, in-depth collection of data using multiple sources of information
including observations, interviews, documents and audio-visual material. Case studies describe
rather than predict a phenomenon. It is an intensive and holistic analysis of a single entity.
It uses a smaller sample for in-depth analysis.
The following are characteristics of case studies:
1. Case studies focus on contemporary phenomena. This distinguishes them from historical
research which focuses on past phenomena.
2. They study phenomena in their total context.
The main advantage of the case study is that it provides in-depth information about particular small groups and geographic areas.
Some limitations of the case study are:
The results of the case study method do not allow clear-cut generalisations.
Sampling involves studying a portion rather than the whole of a population. This in turn leads to reduced costs associated with collecting and analysing data, and to greater accuracy.
6.2.5 Sample Units
Possible units include:
a) An individual person
b) A household
c) A school
d) A village
e) A social group, such as a youth group, a club or women’s group
f) An administrative unit
The selected unit must be precisely defined.
6.2.6 Sampling Frame
This is a complete list of the members of a population or universe from which subjects for a sample can be selected. It is prepared in the form of a physical list of population elements.
6.2.7 Sample Size
A researcher normally faces the problem of determining the size of the sample necessary to
achieve the objectives of the planned study. It is recommended that researchers use the largest
sample possible because statistics calculated from large samples are more accurate, other things
being equal, than those from small samples. The larger the sample, the more likely are its mean
and standard deviation to be representative of the mean and standard deviation of the target
population.
The following are some of the factors that the researcher should consider when deciding on the
sample size.
1. Availability of resources and time. In most research projects, financial and time
restrictions limit the number of subjects that can be studied. However, it is generally
desirable to have a minimum of 30 cases.
2. Larger samples are necessary when groups must be broken into subgroups. For example,
the researcher may be interested in comparing the attitudes of different categories of
teachers towards environmental education. This would require dividing teachers by professional category. Such comparisons can be carried out only if the sample is large enough so that, after the teachers have been divided into groups, each subgroup has a sufficient number of cases to permit a statistical analysis.
3. When high attrition is expected, i.e. when some subjects or schools may drop out of the project for one reason or another, it is necessary for the researcher to use a large sample.
4. When the target population is very heterogeneous, a large sample must be used in order
that persons with different characteristics will be satisfactorily represented.
5. When there is political instability in the country, it may not be possible to include pilot
schools in some parts of the country in the sample.
In studies that involve hypothesis testing, the sample should be big enough to have a sufficient probability (power) of declaring a relationship or difference of a pre-defined magnitude statistically significant if such a relationship or difference exists. A small sample has low power and a high likelihood of a Type II error (β), that is, the probability of declaring a relationship or difference not significant when such a difference does exist (i.e. is not a product of chance).
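To illustrate how power and the risk of a Type II error drive the required sample size, the sketch below applies the standard normal-approximation formula for comparing two means; the effect sizes and the helper function are illustrative assumptions, not figures from the notes.
Example (Python):
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided comparison of two means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the chosen significance level
    z_beta = norm.ppf(power)            # value controlling the Type II error (beta = 1 - power)
    return 2 * ((z_alpha + z_beta) / d) ** 2

print(round(n_per_group(d=0.5)))   # roughly 63 subjects per group for a medium effect
print(round(n_per_group(d=0.2)))   # roughly 392 per group for a small effect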
There is an incorrect supposition that the desirable sample size is a percentage of the size of the
population. It should be noted that the sample size is a function of the absolute number of sample
units, not the sampling fraction.
As Best and Kahn (1989) have noted: “The ideal sample is large enough to serve as an adequate
representation of the population about which the researcher wishes to generalize and small
enough to be selected economically – in terms of subject availability, expense in both time and
money, and complexity of data analysis. There is no fixed number or percentage of subjects that determines the size of an adequate sample” (pp. 16-17).
They have made the following practical observations about sample size.
1. The larger the sample, the smaller the magnitude of sampling error.
2. Survey-type studies probably should have larger samples than needed in experimental
studies.
3. When sample groups are to be subdivided into smaller groups to be compared, the
researcher initially should elect large enough samples so that the subgroups are of
adequate size for his or her purposes.
4. In mailed questionnaire studies, because the percentage of responses may be as low as 20 to 30 percent, a larger initial sample mailing is indicated.
5. Subject availability and cost factors are legitimate considerations in determining appropriate sample size.
The sample size required is a function of the variability of the characteristics measured, and the
degree of the precision required. (Casley and Lury 1982, P. 75).
The optimum sample size is directly related to the type of research you are undertaking. For different types of research, “rules of thumb” can be used to determine the appropriate sample size.
Cluster Sampling
In cluster sampling, the sample is selected in groups or clusters rather than as individuals. A sample of students may, for example, be selected from a population by selecting clusters of students, such as classroom or school groups, rather than individual students.
Suppose that a researcher wishes to administer a test to a random sample of 2000 pupils. In cluster sampling, the researcher would draw a random sample of 5 classrooms from a list of all classrooms in the target population. Then he would administer the test to every pupil in each of the 5 classrooms.
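The two stages of this procedure can be sketched as follows; the classroom and pupil identifiers are hypothetical and simply illustrate drawing a random sample of clusters and then including every pupil in each selected cluster.
Example (Python):
import random

# Hypothetical sampling frame: 100 classrooms, each containing 40 pupils.
classrooms = {f"Class_{c}": [f"Pupil_{c}_{p}" for p in range(1, 41)] for c in range(1, 101)}

random.seed(1)
selected_classes = random.sample(list(classrooms), k=5)  # stage 1: random sample of 5 clusters

# stage 2: every pupil in each selected classroom enters the sample
sample = [pupil for cls in selected_classes for pupil in classrooms[cls]]
print(selected_classes)
print(len(sample))   # 5 classrooms x 40 pupils = 200 pupils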
6.10 Non-Probability sampling
Non-probability sampling does not use random sampling. Elements in the target population have
an unknown chance of being selected into the sample. A non-probability sample is based on
subjective judgement and is biased in the sense that some members of the target population have
more chance of being selected than others. The main types of non-probability samples include quota, judgement (purposive), convenience, snowball and volunteer sampling.
6.10.1 Quota sampling
Quota sampling derives its name from the practice of assigning quotas or proportions of kinds of
people to interviewers. In this technique, sample members are drawn from various target
population strata e.g. untrained teachers, graduate teachers, and grade 1 teachers.
6.10.2 Purposive or Judgement sample
The second type of non-probability sampling used by educationists is judgement or purposive
sampling. In this procedure, the choice of sampling units depends on the subjective judgement of
the researcher. However, the researcher should ensure that the sample he selects is representative
of the population.
6.10.3 Convenience sampling
Convenience sampling consists of units that are convenient to the researcher. In most cases, the schools selected are those which are easy to reach and are willing to take part in the study.
6.10.4 Network sample (Snow ball sampling)
This is a non-random sample in which subjects are asked to refer the researcher to other people
who could serve as subjects.
6.10.5 Volunteer sampling
A non-random sample in which subjects themselves choose to participate in the study.
Discussion questions
1. Distinguish between probability sampling and non-probability sampling.
2. Describe the major types of probability sampling.
3. Define the following terms;
a) Target population
b) Sample
4. Explain briefly each of the following concepts
a) Multi-stage sampling
b) Judgement sample
c) Snowball sample
5. The following is a summary of a classroom study.
The purpose of the study was to evaluate a science project designed to improve science
achievement among a group of standard six pupils. A comparison group of standard six pupils using a different programme was employed. Subjects were randomly assigned to
the two groups. Equal numbers of boys and girls were used in each group. A science
achievement test was administered both as pre-test and post-test measures.
Identify the steps in a research process outlined in the summary.
6. What are the most important distinctions between survey and experimental research
designs?
UNIT SEVEN
7.1.2 Chapter One: i- Background to the Problem; ii- Statement of the Problem; iii- Research Questions; iv- Hypotheses (if any); v- Significance of the Study; vi- Scope and Delimitations of the Study; vii- Theoretical Framework; viii- Conceptual Framework; ix- Operational Definitions of Terms
A research topic should be a concise statement identifying the variables and theoretical issues under investigation and the relationship between them. It should contain the independent and dependent variables. The recommended length of a research topic is no more than 12 words, and it should be fully self-explanatory when standing alone (APA, 2010, p. 23), e.g. “Effects of transformed letters on reading speed”.
The introduction should answer questions such as: Why is the problem important? How does this study relate to previous work done? How does it differ from or build on earlier studies? Research is driven by a desire to resolve issues. Discuss and summarise the findings and conclusions of relevant scholarship from the global, regional and local levels, and demonstrate the logical continuity between previous work and the present work.
• How will this study contribute to solutions of social and economic problems?
• Who values your study? Who needs and is likely to use the information?
The statement of the problem should include a brief review of research in the field and the questions previous research studies left unanswered, and finally a statement of the main question to be addressed.
In reviewing empirical studies, you review the most recent studies conducted on your topic or area of study, indicating the author, date, location, purpose, design, target population, sampling techniques, instruments and data collection procedures used, and the findings, recommendations and conclusions. At least four research studies less than ten years old are encouraged.
The summary and knowledge gap should indicate any gaps identified, such as gaps in the methodology used, location, target population and conclusions reached.
7.3 Chapter Three: Research design and methodology: a- Research Design; b- Target Population; c- Description of the Sample and Sampling Procedures; d- Description of Research Instruments; e- Data Collection Procedures; f- Data Analysis Procedures; g- Ethical Considerations; h- References; i- Appendices: Questionnaire, interview guides and a map of the study location.
7.4.1 Triangulation
Triangulation involves the use of numerous methods of data collection and sources. There are
five types of triangulation: methodological triangulation, source triangulation, investigator
triangulation, theory triangulation and environmental triangulation. Methodological triangulation
is applied when the researcher uses two or more methods of data collection to measure variables.
For example, a questionnaire could be supplemented with in-depth interview, observation and
existing records.
Source triangulation is utilised when comparing the information which is given by the source at
different times and in different situations. For example, the information given by the head
teacher during the informal conversation could be compared with the information he or she gives
in the actual interview.
Investigator triangulation is applied when two or more researchers are used to study the same
subject. During the study, the researchers regularly exchange opinions of what they see or hear.
Theory triangulation involves the use of multiple professional perspectives to interpret a single set of data.
Environmental Triangulation involves the use of different locations and other settings related to
the environment in which the study took place.
7.4.2 Peer Debriefing
Peer debriefing involves testing findings with disinterested parties and individuals. The role of
the peer debriefer is that of a “devil’s advocate”, a person who listens to the researcher’s findings
and asks hard questions about the methods used and interpretations. The peer debriefer should be
someone who is familiar with the research or phenomenon being explored. “A peer reviewer
provides support, plays devil’s advocate, challenges the researcher’s assumptions, pushes the
researchers to the next step methodologically, and asks hard questions about methods and
interpretations.” (Lincoln and Guba, 1985, Creswell and Miller, 2000).
8.1 Introduction
This unit examines the different data collection instruments used in collecting quantitative and qualitative data. It discusses the questionnaire, the interview schedule, content analysis and focus group discussions.
After the target groups have been selected, instruments have to be developed to assist in data
collection. Instruments associated with the quantitative paradigm are:
b- Questionnaire
c- Attitude scale
d- Test items
e- Interview schedule
8.2 QUESTIONNAIRE
A questionnaire is a carefully designed instrument (written, typed or printed) for collecting data
directly from people. A typical questionnaire consists of questions and statements. Two types of
questions are normally asked: closed-ended questions and open-ended questions. Closed-ended
questions are structured in such a way that the respondent is provided with a list of responses
from which to select an appropriate answer.
The following is an example of a closed-ended question.
1. (a) Have you ever attended an in-service course on methods of teaching craft education?
[ ] Yes
[ ] No
(b) If yes, about how many times have you received in-service training?
[ ] Once
[ ] Twice
[ ] Thrice
[ ] Four times
[ ] Five times
[ ] More than five times
Open-ended questions are those that require the respondent to provide her or his own answer to
the question. For example, you can ask the respondent the question: What problems have you
experienced when teaching craft education?
The following guidelines should be observed when constructing questionnaire items:
1. The questions must be clearly worded so that they can be comprehended by the respondent.
2. Items should be short because short items are easier to understand.
3. Avoid double-barrelled items which require the subjects to respond to two separate ideas.
4. Avoid leading questions which suggest that one response may be more appropriate than the
other.
5. Do not use words that some respondents may not understand
6. Avoid biased or leading questions.
7. Do not ask a question that assumes a fact not necessarily in evidence e.g. Have you stopped
giving birth? How does a woman who has never given birth respond?
8. Avoid touchy questions to which the respondent might not respond honestly.
8.2.4 FORMAT
1. Titles
Write the title of the study followed by the title of the questionnaire.
Example
2. Purpose
a) Instructions: Give brief general instructions, including instructions for the return of the instrument. Do not give instructions for the completion of individual sections here; such instructions should appear at the beginning of each section.
b) Cover Letter: Seek permission from the respondent for participation in the study. Informed consent may be included in the cover letter of the questionnaire.
c) Organise the questionnaire in sections according to the type of questions asked. Normally
section one consists of demographic questions.
8.3 PILOT-TESTING THE QUESTIONS
After developing the questionnaire, you should administer it to people who are already in the
target population or who are similar to those in the population to be studied.
The following are some of the issues clarified during the pilot study.
1. Whether there are flaws and ambiguities.
2. The feasibility of the proposed procedure for coding responses.
3. Whether or not the prospective respondents are available and accessible.
4. Whether the intended respondents possess the information being sought and are willing to
participate in the study.
5. Whether the intended respondents will understand the questions.
6. Whether the procedures for administering the questionnaire are appropriate.
Pre-test subjects should be encouraged to make comments and suggestions concerning specific
items, instructions and recording procedures.
8.4.1 Advantages of Questionnaires
• Questionnaires can reach a large group of respondents within a short time and at little cost.
• The biases which might result from the personal characteristics of interviewers are avoided or reduced.
• Since the respondents do not indicate their names, they tend to give honest answers. The
absence of an interviewer also makes respondents give honest answers without fear of giving
answers that they think the interviewer may not want to hear.
• Respondents have adequate time to consult documents or other people if questions require
doing so, and
• Respondents have enough time to reflect before answering questions.
8.4.2 Disadvantages of Questionnaires
• The researcher has no control over the person who fills out the questionnaire
• There is no opportunity for the respondent to see and obtain clarification about ambiguous
questions. Similarly, there is no opportunity for probing beyond the answers given by the
respondents
• It is normally difficult to obtain an adequate response rate
• Questionnaires cannot be filled out by illiterate people and people who do not understand the
language in which the questions are written.
• There is a tendency for the respondents to skip questions they consider difficult, sensitive or
controversial.
• There is no assurance that the intended respondent understands the questions, and
• Some people tend to ignore them if they find them unimportant
8.4.3 TYPES OF QUESTIONS IN A QUESTIONNAIRE
A typical questionnaire has four kinds of questions.
• Demographic questions
• Opinion and attitude questions
• Self-perception questions
• Informational questions.
8.4.4 Demographic Questions
These are questions which seek background information about the respondent, for example sex,
age, level of education, marital status, occupation, religious affiliation, place of residence, etc.
Examples
Opinion and Attitude Questions
These types of questions solicit information about the respondent’s attitudes, beliefs, feelings and misconceptions relating to an area of inquiry.
Examples
Informational Questions
These questions seek to find out the respondent’s knowledge of an area of concern to the evaluator.
Example
RESPONSE FORMATS
1. UNSTRUCTURED FORMAT
In an open-ended, free or unstructured response format, the respondent has complete freedom to
answer as he/she chooses. One disadvantage of open- ended responses is that they are difficult to
code or score and some information may be lost during coding. They also take extra time to
complete and code.
Example
Qualitative open-ended question: What do you like best about your job?
Quantitative open-ended question: What is your highest qualification level?
The most common open-ended items are qualitative. Qualitative open-ended questions ask for a non-numerical response, while quantitative open-ended questions ask for a numerical response.
Semi-Structured Format
This format consists of fill-in response items; the response is open-ended but only a short answer is expected. Such items are easier to code than unrestricted free-response items.
Checklists
A checklist presents a number of options from which the respondent is expected to check or tick the most relevant response or all the suitable responses.
Example
The following are some of the reasons students have given for studying education. Tick any one
of them, which you think represents the most important reason why you are studying education.
8.5 INTERVIEW SCHEDULE
In a schedule-structured interview, the questions and their sequence are fixed and identical for every respondent.
8.5.1 Advantages of Interview Schedule
• It is flexible and adaptable to individual situation.
• It allows a glimpse of the respondent’s gestures, tone of voice, etc., and thus reveals his/her feelings.
• It permits the investigator to pursue leads and to ask for elaboration of points that the
respondent has not made clear.
• It permits the establishment of rapport between the investigator and the respondent. This
stimulates the respondents to give more complete and valid answers.
• It makes it possible for information to be obtained from illiterate respondents or respondents
who are reluctant to put things in writing.
• It promotes a higher percentage rate of return.
• It permits the interviewer to help the respondent clarify his/her thinking on a given point.
• It enables the investigator to pursue leads in order to gain insight into the problem.
8.5.2 Weaknesses of Interview Schedule
• It is costly in time and personnel.
• The interviewer is likely to influence the responses he/she receives.
• Interviewing requires skilled personnel.
8.6 CONTENT ANALYSIS (DOCUMENT ANALYSIS)
Instead of interviewing or observing respondents, the evaluators take the communications that
people have produced and ask questions about them. Content analysis focuses on examining and
studying events that occurred before the investigation. The document analysis method is used to
gather information from project documents, public documents, institutional publications,
historical documents, educational records, documents in the mass media, archival records, public
records, i.e. political and judicial records, government documents and private records, namely
autobiographies and diaries.
8.7.1 DISADVANTAGES OF FOCUS GROUP DISCUSSIONS
• A dominant individual in the group may override and intimidate other group members.
• An unskilled moderator may allow the discussion to drift off track.
• Particular weakness of a focus group is the possibility that the members may not express
their honest and personal opinions about the topic at hand.
• They may be hesitant to express their thoughts, especially when their thoughts oppose the
views of another participant
• Compared with surveys and questionnaires, focus groups are much more expensive to
execute
• Moderators may intentionally or inadvertently, inject their personal biases into the
participants' exchange of ideas
8.7.2 HOW TO PREPARE FOR A FOCUS GROUP DISCUSSION
The following are the main stages in preparing and conducting a focus group discussion.
1. The Participants
Focus groups are made up of homogeneous groups of people, i.e. people who are similar to each other in such characteristics as occupation, age, religion, sex, level of education, attitudes towards an issue, etc. Experience has shown that in many settings, educational homogeneity of members is very important. Whenever possible, the group members should be total strangers.
2. The Moderator
The moderator should be a stranger and not a relative or a friend of any of the participants.
He/she should be someone who has the ability to stimulate and guide the group. He should be a
good listener, enthusiastic, friendly, knowledgeable, a good communicator and have an excellent
memory and sense of timing. His/her appearance should be like that of participants. If the
discussion is gender sensitive, a male moderator should be used for male groups and female
moderator for female groups.
3. During the focus group discussion, one person should serve as a moderator and the other as a
recorder.
Steps in Conducting a Focus Group Discussion
(a) Encourage discussion.
(b) Encourage participation.
(c) Guide the discussion; detect any contradictions and additions.
(d) Build rapport.
(e) Transcribe and synthesise what people say.
Step 4: Summarise the discussion, check for agreement and thank the participants.
8.8 OBSERVATION TECHNIQUES
One way of obtaining information about the progress and outcomes of a programme or project is to observe directly selected aspects of its development and implementation as they occur.
8.8.1 SITUATIONS THAT MAY BE SERVED BY OBSERVATIONAL DATA
1. Measuring classroom process variables
• How the lesson is divided into a variety of activities.
• Students’ participation.
• Does the lesson arouse the interest of the pupils?
• Use of teaching-learning resources.
• Unexpected outcome.
2. Measurement of attainment of programme objectives e.g. using tools, properly performing
experiments, etc.
3. Measuring programme implementation
How instructions are being carried out. Not all teachers, or even other members of the programme, carry out programme instructions in the way intended by the programme director.
Advantages
Disadvantages
• Attendance rate
• books and other materials taken from the library
• participation in extra-curricular activities
• leisure activities
It is important for researchers to show their audience that the measures used were valid and
reliable. Unless that is done, some people are likely to doubt the reliability and validity of your
findings and conclusions.
9.1 VALIDITY
Validity is “the degree to which the data support the inferences that are made from the measurement” (Kelly, 1999, p. 13). According to Gronlund (1976), when using the term
validity, in relation to testing and evaluation, there are a number of cautions to be borne in mind.
1. Validity pertains to the results of a test, or evaluation instrument, and not the instrument
itself. We sometimes speak of the validity of a test for the sake of convenience, but it is
more appropriate to speak of the validity of the test results or more specifically, of the
validity of the interpretations to be made from the results.
2. Validity is a matter of degree. Validity is best considered in terms of categories that specify
degree, such as high validity, moderate validity and low validity.
3. The prime requisite of a research instrument is that it must be valid. If research instruments
are not valid, the study is worthless. An instrument is considered valid only in terms of a
specific group. There are three categories of validity.
• Content Validity
• Criterion-related Validity
• Construct Validity
9.1.1 CONTENT VALIDITY
Content validity refers to the degree to which an instrument measures the subject matter and
behaviours the researcher wishes to measure. Content validity has an index. There are two types
of content validity: face validity and sampling validity.
a) Face Validity
Face validity is concerned with “the extent to which an instrument measures what it appears to
measure according to the researcher’s subjective assessment.” (Nachmias and Nachmias, 1992,
p.158). To evaluate the face validity of your instrument, you should review each item in the
instrument to assess the extent to which it is related to what you wish to measure. After this, it
may be necessary for you to consult experts.
b) Sampling Validity
Sampling validity refers to the degree to which a measure adequately samples the subject matter
content and the behaviour changes under consideration. Sampling validity is commonly used in
evaluating achievement tests. The setter must ensure that the test includes questions on all the
material covered in the course.
Validation Procedure
Content validation is judgemental. It is done by asking a panel of experts in the field of study,
such as teachers, inspectors of school and curriculum developers to critically examine the items
for their representativeness of the content of the property being measured. Another method of
determining the content validity of an instrument is by having a group of experts in the field rate
each questionnaire or test item in terms of its relevance to the research questions, for example,
on a five-point scale.
Where:
1 = Not relevant
2 = Somewhat relevant
3 = Quite relevant
4 = Very relevant
Two different assessments of the same content can be correlated using Pearson’s product-moment coefficient. Below is some guidance on assessing the information.
Items with coefficients of 0.5 and below can either be reworded to increase their validity or
substituted.
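A minimal sketch of this validation step is given below; the relevance ratings are hypothetical and simply show how two experts’ ratings of the same items could be correlated using Pearson’s product-moment coefficient.
Example (Python):
from scipy.stats import pearsonr

expert_a = [4, 3, 4, 2, 1, 4, 3, 2, 4, 3]   # relevance ratings for ten items (hypothetical)
expert_b = [4, 3, 3, 2, 1, 4, 4, 2, 4, 2]

r, p_value = pearsonr(expert_a, expert_b)
print(round(r, 2))   # a high coefficient suggests the experts agree on item relevance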
9.1.2 CRITERION-RELATED VALIDITY (EMPIRICAL VALIDITY)
This is the degree to which a measure is related to some other standard or criterion that is known to indicate the construct accurately (Durrheim and Blanche, 1999, p. 83). Criterion-related validity is established by comparing an instrument with another instrument that is known to be an accurate measure of the same construct. Suppose that a researcher wants to find out
the attitudes of teachers towards the teaching profession. To evaluate the criterion-related
validity of the attitude scale, the researcher should find out whether the new instrument is related
to other measures of attitudes towards the teaching profession. If the new measure correlates
with other measures of attitudes towards the teaching profession, the researcher can conclude
that the new instrument is valid.
a) Concurrent Validity
Concurrent validity refers to the extent to which a new measuring instrument is related to a pre-existing measure of the same construct.
Procedure
In order to determine the concurrent validity of a test, you should administer the test to a sample
of students and then correlate the results with several criterion scores. For example, if a teacher
wants to establish whether the results of students’ scores on an achievement test are valid, he/she
should correlate the results with pupils’ scores on other tests.
b) Predictive Validity
Predictive validity refers to the extent to which a measuring instrument predicts future outcomes
that are logically related to the construct. The predictive validity of a measuring instrument such
as a test is evaluated by checking scores on the instrument against the subject’s future
performance.
Validation Procedure
Suppose you want to determine the predictive validity of the South Sudan Certificate of Primary Education (SSCPE) examination. You should obtain the SSCPE results of a sample of students and then carry out a follow-up study of the performance of the same pupils in the South Sudan Certificate of Secondary Education (SSCSE) examination. You should then find the correlation between their SSCPE scores and their SSCSE scores. If the correlation is high, you can conclude that the SSCPE has high predictive validity.
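The follow-up correlation can be sketched as below; the SSCPE and SSCSE scores are hypothetical and serve only to illustrate the computation.
Example (Python):
import numpy as np

sscpe = np.array([310, 355, 402, 280, 430, 390, 345, 300])  # primary examination scores (hypothetical)
sscse = np.array([52, 61, 70, 48, 78, 66, 58, 50])          # later secondary examination scores (hypothetical)

r = np.corrcoef(sscpe, sscse)[0, 1]
print(round(r, 2))   # a high positive correlation would indicate high predictive validity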
9.1.3 CONSTRUCT VALIDITY
Construct validity is concerned with the extent to which performance on an instrument can be interpreted in terms of the theoretical traits, psychological constructs or traits which it purports to measure. Examples of constructs are attitude, motivation, self-esteem, honesty, critical thinking, study skills and scientific attitude.
There are two main methods of determining the construct validity of a research instrument: convergent validity and discriminant validity.
a) Convergent Validity
This method attempts to determine whether scores from different measures of theoretically associated constructs are related to one another. For example, a test of driving
ability should relate well to other tests measuring driving ability.
b) Discriminant validity
This method is derived from the idea that measures of constructs that are theoretically unrelated
to each other should not be correlated with each other. If strong correlations are found, the
measures are said to lack construct validity.
9.2 RELIABILITY
“Reliability is the accuracy or precision of a measuring instrument” (Kerlinger, 1973, p. 443). Reliability refers to the consistency with which a measuring instrument yields the same results for an individual to whom the instrument is administered several times.
Reliability is the degree to which an instrument yields the same results on repeat trials. When
repeated measures of the same thing give similar results, the instrument is said to be reliable.
For example, if a teacher gives the same test on different occasions and gets different results, the
instrument contains measurement errors. Measurement errors may be due to inaccurate qualities
in the measuring instrument or to disturbances in performance on the measure. Reliability refers to
the results obtained with a research instrument and not to the instrument itself (Gronlund, 1976,
p.107). This is because the results are dependent on sample characteristics, the context and the
time the instrument was administered.
Qualitative researchers use the term dependability, i.e. the extent to which the reader can be convinced that the results occurred as the researcher says they did.
PROCEDURES OF ESTIMATING RELIABILITY
There are two main methods of estimating reliability: repeated measurements and internal consistency. Both involve the procedure of correlating two sets of scores; the closer the agreement between the two sets of scores, the greater the reliability. The reliability measure varies on a scale from 0.00 (indicating total unreliability) to 1.00 (indicating perfect reliability).
Repeated Measurements
This is concerned with the ability of the instrument to measure the same thing at different times.
Three methods are used:
• Test-Retest Method
• Alternate Form Method
• Parallel Form Method
9.2.3 TEST-RETEST METHOD (Measure of stability)
This type of reliability is estimated by measuring individuals on the same instrument on different
occasions and correlating the scores obtained by the same persons on the two administrations.
Symbolically, r_xx = S_t² / S_x², where S_t² is the true-score variance and S_x² is the observed-score variance of the measure.
The test-retest method has the following weaknesses:
i) Some subjects may memorise answers and, if the second administration is given too soon, do well during the second administration. To some extent this problem can be overcome by using a longer interval between test administrations to give memory a chance to fade (Kubiszn and Borich, 2000, p. 312).
ii) The scores of individuals may be affected by the mood of the individual on the day of testing.
iii) If too long a time elapses before the second administration, learning takes place and the second recorded result will be different from the first.
9.2.4 ALTERNATIVE FORM METHOD
This method is similar to the test-retest method except that the questions on the second
instrument are renumbered to create an alternate form of the first instrument (Kelly, 1999, p.
130).
9.3 INTERNAL CONSISTENCY METHODS
Internal consistency is estimated by determining the degree to which each item in a scale correlates with each other item. Internal consistency reliability is based on a single administration of a measure. It indicates the degree of homogeneity among the items in an instrument. There are three types of internal consistency reliability: (1) Split-half, (2) Kuder-Richardson, and (3) Cronbach-alpha.
9.3.1 SPLIT-HALF RELIABILITY
This is another method of estimating reliability. A measure is split into two parts. Each of them is treated as a separate scale and scored accordingly. Reliability is then estimated by correlating scores on the two scales. Individuals who score high on one scale should also score high on the other scale. The Spearman-Brown prophecy formula is used to estimate reliability:
r_xx = 2 r_oe / (1 + r_oe)
where r_oe = the reliability coefficient obtained by correlating the scores on the odd-numbered statements with the scores on the even-numbered statements.
Source: Nachmias and Nachmias (1992, p. 165)
The Pearson product-moment correlation coefficient is normally used if the measures are at the interval or ratio level. One weakness of this method is that the correlation between the two halves depends on how the researcher divides the items into halves.
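The following Python sketch (an illustration added to these notes, with hypothetical item scores) shows the split-half procedure: the items are divided into odd- and even-numbered halves, the halves are correlated, and the Spearman-Brown prophecy formula corrects the half-test correlation to estimate the reliability of the full instrument.

# Hypothetical illustration: split-half reliability with the
# Spearman-Brown prophecy formula.
import numpy as np

# Rows are respondents, columns are items scored 1 (correct) or 0 (incorrect).
items = np.array([
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 0, 1, 1],
])

odd_half = items[:, 0::2].sum(axis=1)    # scores on items 1, 3, 5
even_half = items[:, 1::2].sum(axis=1)   # scores on items 2, 4, 6

r_oe = np.corrcoef(odd_half, even_half)[0, 1]   # correlation between the halves
r_xx = (2 * r_oe) / (1 + r_oe)                  # Spearman-Brown correction
print(f"Half-test correlation: {r_oe:.2f}, estimated reliability: {r_xx:.2f}")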
The Kuder-Richardson formula 21 (KR-21) gives a reliability estimate from a single administration of a test whose items are scored right or wrong:
KR21 = (n / (n − 1)) × (1 − m(n − m) / (n × SD²))
where n = the number of items, m = the mean of the test scores, and SD = the standard deviation of the test scores.
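A minimal Python sketch of the KR-21 computation is given below; the number of items and the total scores are hypothetical values chosen purely for illustration.

# Hypothetical illustration: KR-21 reliability estimate from the number of
# items, the mean and the standard deviation of the total test scores.
import numpy as np

total_scores = np.array([14, 17, 12, 19, 15, 16, 13, 18])  # hypothetical totals
n = 20                              # number of items on the test
m = total_scores.mean()             # mean total score
sd = total_scores.std(ddof=1)       # standard deviation of total scores

kr21 = (n / (n - 1)) * (1 - (m * (n - m)) / (n * sd ** 2))
print(f"KR-21 reliability estimate: {kr21:.2f}")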
9.3.4 CRONBACH-ALPHA
The Cronbach-alpha method is used with instruments in which there is no right or wrong answer
to each item, such as an attitude scale.
Formula:
α = Nr / (1 + (N − 1)r)
N = number of items
r = the average correlation among the items
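The Python sketch below (added for illustration, with hypothetical Likert-type responses) computes the standardised Cronbach-alpha from the average inter-item correlation, following the formula above.

# Hypothetical illustration: standardised Cronbach-alpha from the average
# inter-item correlation of an attitude scale.
import numpy as np

# Rows are respondents, columns are Likert-type items (scored 1-5).
scale = np.array([
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
])

N = scale.shape[1]                              # number of items
corr = np.corrcoef(scale, rowvar=False)         # item-by-item correlation matrix
r_bar = corr[np.triu_indices(N, k=1)].mean()    # average inter-item correlation

alpha = (N * r_bar) / (1 + (N - 1) * r_bar)
print(f"Cronbach's alpha (standardised): {alpha:.2f}")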
The following may help you to assess the reliability and validity of a research instrument. A good measuring instrument is both valid and reliable. However, an instrument can be reliable but not valid, whereas it cannot be valid without being reliable. A useful analogy is a person firing a rifle at a target: if the shots cluster tightly but away from the bull's-eye, the rifle is reliable but not valid; if they are scattered all over the target, it is neither reliable nor valid; only when they cluster tightly around the bull's-eye is the rifle both reliable and valid.
Sampling validity is concerned with the extent to which the content of a measuring instrument
adequately samples the content of the property being measured. One commonly used method of
determining the face validity of a research instrument is to have a group of experts such as
teachers, curriculum specialists and teacher educators determine the extent to which the
instrument measures what it is supposed to measure. Items that do not have validity are either
replaced or reworded.
Reliability refers to the accuracy or precision of a measuring instrument. A minimal requirement
for an evaluation instrument should be that the respondent gives the same answer to the same
question if the circumstances have not changed.
UNIT 10
10.2 Anonymity
Anonymity refers to a situation whereby the researcher cannot link the information a participant gave to that particular research participant. Research subjects are more likely to be frank or to provide accurate data if they believe that no one will be able to identify them or link them to their answers. Anonymity is best achieved by asking research participants not to give their names in the first place. For instance, many surveys are conducted anonymously, and research participants are sometimes explicitly instructed not to sign or write anything which identifies them, such as their names, on the questionnaires.
In some cases, however, the researcher may want to ensure anonymity but still retain access to the participants for a prolonged time, so as to ask them the same questions and see whether there are any changes after a couple of months or even years. This can be achieved by assigning them code names or personal ID numbers and instructing them to use these aliases whenever the survey is conducted (Withrow, 2013).
When conducting research, collecting information anonymously helps to ensure that the privacy of the participants is safeguarded. Researchers sometimes pledge anonymity to the participants in the cover letter or by word of mouth. It is often necessary, however, for participants to be identifiable, for instance when follow-ups or reminders have to be sent to participants who have not responded or who will be needed in the second round of the study.
10.3 Confidentiality
The researcher has the obligation to “protect the anonymity of the research participants
and the secrecy of their disclosures unless they consent to the release of personal information”.
Confidentiality can be threatened when third parties are involved in the study, for instance when someone is sponsoring the study or a court seeks to identify the research participants. In some cases there might be interference by a sponsor, but this is comparatively easier to avert than a court order.
10.5 Competence:
Only persons who have the mental capacity to provide consent should participate in the study.
The following principles guide honest reporting of research:
1. Give appropriate credit for others' ideas. The most fundamental rule of all scholarship is to credit other people's ideas and statements, whether a direct quotation from a source or the paraphrasing of ideas.
2. Report conflicting evidence and everything that you find, not only the results that support your expectations.
3. Describe the flaws in your research.
4. Use primary sources whenever possible.
APA stands for American Psychological Association. The Association outlines the style in the Publication Manual of the American Psychological Association (APA, 6th ed.). Other commonly used referencing styles include the following:
Harvard. Harvard is very similar to APA and is the most widely used referencing style in the UK and Australia.
Vancouver. The Vancouver system is mainly used in medical and scientific papers.
Chicago and Turabian. These are two separate styles but are very similar, just as Harvard and APA are.
Referencing allows readers to verify the information or read further on the topic. It also allows you to retrace your steps and locate information you have used for assignments, and to discover further views or ideas discussed by the author. Everything you have cited in text appears in your reference list and, likewise, everything that appears in your reference list will have been cited in text. Check the reference list prior to handing in your assignment.
Even though you have put someone else's ideas or information in your own words (i.e. paraphrased), you still need to show where the original idea or information came from.
This is all part of the academic writing process. When citing in text within an assignment, use the author's (or editor's) last name followed by the year of publication.
Citing at the end of a sentence: Water is a necessary part of every person's diet and of all the nutrients a body needs to function, it requires more water each day than any other nutrient (Whitney & Rolfes, 2011).
Citing within a sentence: Whitney and Rolfes (2011) state that the body requires many nutrients to function but highlight that water is of greater importance than any other nutrient. Alternatively: Water is an essential element of anyone's diet, and Whitney and Rolfes (2011) emphasise that it is more important than any other nutrient.
Whitney, E., & Rolfes, S. (2011). Understanding nutrition (12th ed.). Australia: Wadsworth
Cengage Learning.
Exclude words such as “A” or “The”. If the title is long, it may be shortened when citing in text.
5. The first line of the reference list entry is left-hand justified, while all subsequent lines are
consistently indented.
6. Capitalise only the first word of the title and of the subtitle, if there is one, plus any proper
names
7. Italicise the title of the book, the title of the journal/serial and the title of the web document.
8. Do not create separate lists for each type of information source. Books, articles, web
documents, brochures, etc. are all arranged alphabetically in one list.
Personal communication
This refers to letters, including email, interviews, telephone conversations and discussions on
placement or work experience. Personal communications are cited in text only and are NOT
included in the reference list. Refer to APA manual, 2010, p.179
In text citation:
No-tillage technologies have revolutionised the way arable farmers manage their farming
operation and practices (W.R. Ritchie, personal communication, September 30, 2014).
Citing journals
Gore. (2003). Research for beginners. Journal of Social Science Research, 34(1), 39-50.
Avoid including the word "volume". When citing online journals, add a retrieval statement, for example: Retrieved from http://www.jlz.org
REFERENCES
Creswell, J. W. (2012). Qualitative inquiry and research design: Choosing among five approaches. California: SAGE Publications.
Creswell, J. W. (2017). Designing and conducting mixed methods research. California: SAGE Publications.
Denzin, N. K. (2017). The SAGE handbook of qualitative research (5th ed.). London: SAGE Publications.
Faden, R. R., & Beauchamp, T. L. (1986). A history and theory of informed consent. New York: Oxford University Press.
Israel, M., & Hay, I. (2006). Research ethics for social scientists. California: SAGE Publications.
Kerlinger, F. N. (2000). Foundations of educational research. New York: Holt, Rinehart and Winston.
Kothari, C. R., & Garg, G. (2014). Research methodology: Methods and techniques (3rd ed.). New Delhi: New Age International Publishers.
Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.
Kumar, R. (2011). Research methodology: A step-by-step guide for beginners (3rd ed.). London: SAGE Publications.
Oso, W. Y., & Onen, D. (2011). A general guide to writing research proposal and report: A handbook for beginning researchers. Nairobi: Jomo Kenyatta Foundation.
Singleton, R. A., & Straits, B. C. (1999). Approaches to social research (3rd ed.). New York: Oxford University Press.
Stacks, D. W., & Hocking, J. E. (2001). Essentials of communication research. USA: Allyn & Bacon.
Stevens, M. (2013). Ethical issues in qualitative research. London: King's College London, Social Care Workforce Research Unit.