Chapter 1: Introduction to qualitative research

Rules that apply to conducting scientific research: the study is theoretically
informed, it uses a systematic procedure, approved methods and techniques are used,
and the study is documented in a way that allows others to assess the findings.

Fundamental research: aims to gain knowledge.

Applied research: aims at the use of knowledge to change or improve situations.

Paradigm = a framework for thinking about research design, measurement, analysis
and personal involvement that is shared by members of a speciality area. Paradigms
reflect issues related to the nature of social reality and knowledge. The nature of
social reality (ontology): whether the social world is regarded as something
external to social actors or as something that people are in the process of
fashioning. The nature of knowledge (epistemology): whether there is one single
route to truth or whether diverse methods are needed to grasp the meaning of social
experience.

Qualitative research: individuals have an active role in the construction of social
reality, and research methods that can capture this process are required.
Epistemological stance: interpretivism. Use theoretical sampling: seek comparative
cases to expand, confirm or deepen assertions. The choice of new cases depends on
the theoretical needs of the researcher.

Qualitative research = describe and understand social phenomena in terms of the
meaning people bring to them. Research questions are studied through flexible
methods enabling contact with the people involved to an extent that is necessary to
grasp what is going on in the field. The methods produce rich, descriptive data
that need to be interpreted through the identification and coding of themes and
categories leading to findings that can contribute to theoretical knowledge and
practical use.

Chapter 2: Research design


Researchers face the difficult task of finding a balance between their
preparations, resulting in a research plan and conducting the research in practice.
The research plan contains the research questions, the research purpose, an ethical
paragraph, a plan for disseminating the findings and an outline of the overall
research strategy as well as the specific methods, techniques and instruments to be
used. A plan provides structure, but it should not interfere with flexibility. A
plan provides certainty, but should not block other promising options.

Planning a research project

Maxwell (2004) describes the research proposal as an argument which should
convincingly demonstrate why this research should be done, what activities it will
consist of, and to which results it will lead.

Literature review

At one time it was considered inappropriate to read about a research topic before
embarking on a qualitative undertaking. The argument was that if researchers were to
read in advance, they would have formed an opinion by the time they reached the
field.

An argument for reading other people’s research is that scientific knowledge has to
accumulate. If no one takes notice of previous work the wheel keeps on getting re-
invented. The problem statement then came to be seen as a preliminary guideline for
the research instead of a fixed starting point that determined the entire research
procedure. It was acknowledged that the research problem and the research questions
are generated at the start of the study and based on available but limited
knowledge. Therefore it is permitted to adjust the research questions during the
research if there are good arguments to do so. Researchers try to put the knowledge
they extracted from the literature aside in order to approach their field work with
an open mind. Bracketing is the common term for this process. First of all,
literature helps you to come up with a research topic. Not only does previous
research provide numerous topics in your area of interest, it also shows which
answers have already been given to certain questions.

Consequently, it allows you to identify a gap in the existing knowledge and to
delineate your own research. Definitive concepts have a fixed content that is
reflected by their measures. In contrast, sensitizing concepts start out with a broad
and general description and as such they can function as the researcher’s lens
through which to view the field of research. A research proposal starts with a
research problem.

Research question

The research question is the central question that the researcher wants to answer by
doing the research project. The research problem must be sufficiently focussed and
defined in order to formulate clear research questions. Qualitative research can
deal with so-called descriptive questions as well as with explanatory questions.
Descriptive questions deal with the ‘what’ of social phenomena, while explanatory
questions deal with the ‘why’ of these phenomena. The research question is often
broad and encompassing, and is usually divided into multiple sub-questions that
further structure the research. Formulating sub-questions is difficult for a number
of reasons:

They must use the terminology that fits the chosen approach or tradition.
They need to fall under the umbrella of the overall research question, confining
the research to topics described in the overall question and allowing you to make a
contribution to the knowledge in a particular scientific area.
Research questions must match one another and follow logically one after the other.
By answering the related research questions you build an argument in the research
report.
The questions need to be answerable by means of the proposed research. This implies
that abstract concepts may only be used when they can be defined and translated
into operational terms. Only then will they become researchable, which means that
empirical data can be collected in relation to the operational terms in order to
observe the concepts.

Research purpose

Two distinctions can be made with regard to the research purpose. The first is
between research mainly aimed at description and research mainly aimed at
understanding or explanation. The second distinction is between fundamental and
applied research. Description is considered more limited than explanation: it is
possible to describe without explaining, however it is not really possible to
explain without describing. Roughly speaking, research serves to gain scientific
knowledge purely to extend what is known on a certain topic. This type of research
adds to the existing, theoretical knowledge in a certain area of interest and is
also known as ‘fundamental’ or ‘basic’ research. Research can also provide
knowledge to facilitate change in problematic situations. This type of research is
referred to as ‘applied’ or ‘policy-oriented’ research.

Legitimizing the choice for qualitative research

The most salient reasons to account for qualitative methods:


Exploration: when a study has an explorative nature, you need methods with a
maximum of explorative power. Qualitative methods do live up to this because of
their flexible approach.
Description: qualitative methods offer the opportunity for participants to describe
the subject of study in their own words and to do so largely on their own
conditions.
Explanation: qualitative methods can lead to an interpretive rendering of the
studied phenomenon. Through the constant comparison of data with the emerging
ideas, a more abstract and conceptual model can be generated that is grounded in
the data.
Change: some subjects change rapidly as they gain momentum. Since qualitative
methods are flexible and cyclical, they can be adjusted to the field and capture
possibly important decisions and subtle activities that could have major
consequences.
Use: qualitative methods hold the promise to yield findings that reflect the
participants’ perspective and that fit the substantive field.
Sensitivity: qualitative researchers often choose to examine other people’s
experiences and emotions. It is assumed that these topics can be more easily
captured in research that leaves much of the control to the participants, although
with well-defined limits.
In a qualitative research proposal you have to argue that qualitative methods have
the potential to produce the findings that are going to realize your goals.

Sampling, recruitment and access

When the research question and purpose have been formulated, the next step is to
find a setting in which to conduct the research. Usually it is not possible to
study all aspects of your chosen subject; therefore you have to take a sample, that
is, to select cases. In selecting a setting, Morse and Field (1996) use the
principle of maximization. This means that a location should be determined where
the topic of study manifests itself most strongly. A sample consists of the cases
(units or elements) that will be examined and are selected from a defined research
population. Two types of purposive sampling can typically be distinguished in
qualitative research. One form of purposive sampling is suitable for qualitative
research, which is informed a priori by an existing body of social theory on which
the research questions are based. The other form is theoretical sampling, designed
to generate theory which is grounded in the data, rather than established in
advance of the fieldwork. Theoretical sampling is defined as the process of data
collection for generating theory whereby the analyst jointly collects, codes and
analyses his data and decides which data to collect next and where to find them, in
order to develop his theory as it emerges. When can you cease data collection and
stop sampling new units? This happens when a point of saturation has been reached.
Once again, this procedure is shaped in grounded theory, and it means that
researchers may stop collecting data when analysis of the newly selected cases
yields no further information with regard to the selected research topics. There is
one other important question left with regard to sampling: How do researchers gain
access to the field that they would like to investigate and how do they locate
participants? Methods of recruitment are very diverse: see page 40.
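
To make the saturation stopping rule concrete, here is a minimal sketch (not from
the book; the cases and code labels are invented) in which each analysed case is
reduced to the set of codes it yields, and sampling stops once a new case adds
nothing:

    # Minimal sketch of the saturation stopping rule in theoretical sampling.
    # Assumption (ours, not the book's): each case is represented by the set
    # of code labels its analysis yields.
    cases = [
        {"coping", "support"},      # codes generated by case 1
        {"support", "stigma"},      # case 2 still adds a new code
        {"coping", "stigma"},       # case 3 adds nothing new
    ]

    known_codes = set()
    for i, new_codes in enumerate(cases, start=1):
        if known_codes and new_codes <= known_codes:
            print(f"Saturation reached at case {i}: no new codes.")
            break
        known_codes |= new_codes
        print(f"After case {i}, coding scheme: {sorted(known_codes)}")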

Chapter 4: Data collection


Data

In qualitative research, there are many ways in which data may be collected.
Which method is preferred depends on the research questions and the project’s
purposes, and the practical possibilities and limitations. The stance taken in this
book is that qualitative data reflect people’s experiences of daily life and that
by studying these data social scientists are able to understand aspects of the
social world. However, qualitative data are not exact representations of life
experiences. There are two reasons for this. First of all, solicited qualitative
data are the result of an interaction between the participant and the researcher in
one way or another. Data are produced in a specific context with a specific aim,
and this will colour them in some way. A second reason that data are not similar to
the experience itself is that data depend on the participants’ ability to
reflectively distinguish aspects of their own thoughts, ideas, observations and
experiences and to effectively communicate what they perceive through language.

Participant observation

A definition of participant observation is: The process in which an investigator
establishes and sustains a many-sided and relatively long-term relationship with a
human association in its natural setting for the purpose of developing a scientific
understanding of that association. It is an approach to research which takes place
in everyday situations rather than in laboratory conditions. This is why the method
is also known as ‘field work’. It is particularly useful when:

Little is known about the phenomenon
Views of insiders and outsiders are opposed or stereotyped
The phenomenon is somehow hidden from the view of outsiders.
Participant observation is an umbrella term that covers several methods and
techniques. The method is challenging for researchers as it taxes their social
skills as well as their memories. What is it that participant observers discern?

Three elements can be distinguished:

What people do: cultural behaviour such as events and interactions
What people know: cultural knowledge and opinions
What people create and use: cultural artefacts such as art objects, clothing,
buildings and tools, as well as symbolic marks
The literature provides a lot of clues for the making of field notes, for example:

Identification: who says what
Write everything down literally
Take concrete notes, not abstract ones
Gatekeepers have the power to help researchers understand and establish
relationships within the research population. They also have the power to negotiate
conditions that are acceptable to those they serve. Gaining entrance and building
and maintaining trust with participants is one of the key issues in participant
observation.

The qualitative interview

In interviews the researcher is the main instrument, just as in participant
observation. A definition of interview is: A form of conversation in which one
person – the interviewer – restricts oneself to posing questions concerning
behaviours, ideas, attitudes, and experiences with regard to social phenomena, to
one or more others – the interviewees – who mainly limit themselves to providing
answers to these questions. ‘Rapport’ holds that both partners have a genuine
interest in the asking, answering and listening during an interview. The more the
interview is planned beforehand, the more the interviewer determines the direction
of the interview. A preliminary structure determines:

Which questions will be posed
The formulation of the questions posed
The sequence in which questions will be posed
The answering options for the participants
Since qualitative researchers are often looking for a true understanding of what is
happening, the interviews are usually not entirely pre-structured with respect to
content, formulation, sequence and answers. Neither are they left entirely open (the
open or qualitative interview); the result is the semi-structured or half-structured
interview. (The figure referred to here is not reproduced in these notes.)

‘Probing’ refers to verbal or non-verbal behaviour of the interviewer when the
interviewee’s reply to the question is not relevant, clear or complete, and can
consist of posing questions, keeping silent or giving non-specified encouragement.
Sometimes probes are distinguished from prompts, the latter referring to issues
that the interviewer wants to direct attention to. Crucial for a successful
interview is that the questions fit the interviewee’s frame of reference. One
aspect of this is that the questions match the research topic exactly as it was
introduced by the interviewer and to which the participant has agreed to
participate. Another aspect of it is that the topic needs to be of concern to the
participant and the questions need to be posed in a language that is
understandable.

Focus groups

Qualitative interviews with more participants at a time are referred to as group
interviews. Focus groups represent a specific set of group interviews that
particularly emphasize the interactive patterns among group members and how they
come to generate mutual understandings and ideas. A definition: a focus group is a
group interview – centred on a specific topic and facilitated and co-ordinated by
a moderator or facilitator – which seeks to generate primarily qualitative data, by
capitalising on the interaction that occurs within the group setting. Focus groups
use interviewing techniques with discussion taking place under the guidance of a
moderator. An assistant moderator usually does not actively take part in the
discussion, but takes notes, observes group interactions and supervises recording
equipment. Censoring and conforming can take place. Censoring means that the
dominant norm of the group is not to mention or talk about a certain topic or to
express a certain view on it. Conforming means that people agree with the dominant
opinion of the group, sometimes even going further in voicing their own opinions.
Sometimes focus group data are considered to be of less value than individual
interview data: the contribution of its members consists of what they chose to
share in a particular group; therefore it is unlikely that in another mix of group
members the exact same information would be collected. Remember that this is
exactly why focus groups are organized: because we want to simulate how a group
progresses when brought together.

Visual data

Visual data refer to the recording, analysis and communication of social life
through photographs, film and video. Here we focus on research-generated visual
data. The quality of scientific research using visuals as data is of concern. With
manipulation of images by software packages being so easy, one aspect of the debate
is whether the picture is true in the sense that it is a real image of reality at
that moment. Visual materials can also be used in combination with other methods to
generate data, for instance when photographs are shown to a participant in an
interview or a film to members of a focus group: the photo-elicitation interview. In
market research, photo-elicitation is a well-known technique to examine
associations that people are not explicitly aware of themselves.

Instruments used

To develop a measuring instrument, the research questions are converted into topics
which may be raised in the interview and into questions which may be posed. Hence,
the topics and the questions will correspond with the main research questions. The
development of a topic list involves linking the research questions to the topics
and vice versa, checking the legitimacy of every single question on the topic list.
When most of the topics have been decided, the next step is to group them into
blocks of questions that belong together and to organize those blocks into a
logical sequence that will facilitate the participant as much as possible. So how
does the topic list function during interviewing? First of all, it reminds you of
the topics and questions that matter. The topic list reduces the amount of instant
improvisation and acts as a prompt if you lose track.

Second, the instrument focuses data collection. Third, the experiences of previous
interviews are processed in the emergent topic list. Questions that fall short are
removed and missing questions added. Fourth, the instrument is a forerunner for
what you are after and for what you think will be of interest later on.
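
As an illustration only (the book prescribes no software, and all names here are
invented), the link between research questions and the topic list can be kept
explicit so that every interview question remains traceable:

    # Hypothetical sketch: each block of interview questions records which
    # research question legitimizes it, supporting the legitimacy check.
    topic_list = {
        "RQ1: How do nurses experience night shifts?": {
            "block": "Work experience",
            "questions": [
                "Can you describe a typical night shift?",
                "Which moments stand out for you?",
            ],
        },
        "RQ2: How do nurses cope with fatigue?": {
            "block": "Coping",
            "questions": ["What do you do when you feel exhausted at work?"],
        },
    }

    # Legitimacy check: every question maps back to a research question.
    for rq, block in topic_list.items():
        for q in block["questions"]:
            print(f"{q!r}  ->  {rq}")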

Writing memos

Memos can function as a link between thinking and doing. Memos form a kind of
project log: a chronological overview of the decisions made and how they guided
future actions of the researchers. In the literature several types of memos or
notes are distinguished, for instance between observational memos, methodological
memos and theoretical memos. Observational memos are also known as field notes. The
researcher uses them to describe observations made in the field. Theoretical memos
reflect how findings are derived from the data.

Methodological memos contain all thoughts relevant to the methods used. Memos have
a special function in tracking the evolution of a research project. Memos are not
often documented in research reports.
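
To show how memos can double as a chronological project log, here is a small sketch
(our illustration, not the book's procedure; the entries are invented):

    # Hypothetical memo log: timestamped, typed entries form a chronological
    # record of observations, methodological decisions and emerging ideas.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Memo:
        when: date
        kind: str   # "observational" (field note), "methodological" or "theoretical"
        text: str

    log = [
        Memo(date(2024, 3, 1), "observational", "Nurses double-check each other's charts."),
        Memo(date(2024, 3, 2), "methodological", "Decided to shadow one nurse per shift."),
        Memo(date(2024, 3, 9), "theoretical", "Double-checking may signal distrust of handovers."),
    ]

    for m in sorted(log, key=lambda m: m.when):
        print(m.when, m.kind.upper(), "-", m.text)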

Preparing for data analysis

Data as they are originally generated are also known as ‘raw’ data. Getting these
raw data ready for analysis requires preparation. Preparing the data is part of
‘data management’, a term which refers to the systematic, adequate storage and
retrieval of data and of preliminary analyses. One aspect of data preparation is
the organization of the storage of different data files so that they can be easily
retrieved. A second aspect of data preparation is the transcription of audio and
visual sources. During the transcription of recordings, the data are altered. Non-
verbal behaviour such as facial expression, posture, tone, rhythm and intonation is
lost. In order to compensate for the loss of information, researchers can create
elaborate memos, as discussed above, in which they describe impressions,
observations and oddities. A third element of data preparation is taking out all
information that can identify participants and violate the promise of
confidentiality. This means that with a view to ethics, the informants will not be
identified on transcripts of interviews, notes or memos. A fourth and final part of
data preparation is the manipulation of data that might be necessary for processing
qualitative data analysis with the computer.
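
The anonymization step can be sketched as follows (a toy example with an invented
name list; real de-identification must also cover places, dates and indirect
identifiers):

    # Minimal sketch of removing identifying information from a transcript.
    import re

    participants = {"Anna de Vries": "P01", "John Smit": "P02"}  # invented
    transcript = "Anna de Vries said she had warned John Smit about the incident."

    anonymized = transcript
    for name, label in participants.items():
        anonymized = re.sub(re.escape(name), label, anonymized)

    print(anonymized)  # -> "P01 said she had warned P02 about the incident."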

Chapter 5: Principles of qualitative analysis


What is analysis?

Segmenting and reassembling are considered the chief activities of qualitative data
analysis. Reassembling refers to looking for patterns, searching for relationships
between the distinguished parts, and finding explanations for what is observed. The
aim of reassembling is to make sense of the data from a theoretical perspective.
Definition: Qualitative analysis is the segmenting of data into relevant categories
and the naming of these categories with codes while simultaneously generating the
categories from the data. In the reassembling phase the categories are related to
one another to generate theoretical understanding of the social phenomenon under
study in terms of the research questions.

Segmenting data

Researchers segment data in what are thought to be relevant and meaningful parts.
Pieces of data are compared in order to determine their similarities and
differences and whether they should be grouped together or not. Segmenting is
considered to be the first modification of the data after data preparation.

Reassembling data

Reassembling of data is commonly described by a variety of terms, such as
synthesizing, structuring, integrating, putting together, recombining and
modelling. Before considering the relationships between the building blocks of the
research, the building blocks themselves have to be fairly clear. Reassembly
requires continuous consideration of the data, of the evolving relationships
between the categories, and of the credibility of those relationships. Researchers
must first look at their data in order to discern what to look for in their data.

In the literature, three basic procedures are frequently used when qualitative data
analysis is discussed: constant comparison, analytic induction and theoretical
sensitivity.

Constant comparison

Constant comparison is the main component of the analytical process in the grounded
theory approach. Constant comparison and theoretical selection go hand in hand.
Their purpose is to describe the variation that is found within a certain
phenomenon. The method of constant comparison arose from the idea that phenomena
will manifest themselves in different ways when circumstances differ. Wester has
developed the method of constant comparison into a strategy for qualitative
research. His approach consists of four phases:

Exploration: the discovery of concepts
Specification: the development of the concepts
Reduction: determining the core concept
Integration: developing the final theory
In the exploration phase, researchers explore the field and try to depict it
accurately in a number of codes. In the specification phase, the researcher selects
a number of key codes from all the codes that have been developed so far, and
searches for differences and similarities in the fragments that have been awarded
that code. In the reduction phase, the goal of the analysis is to describe the core
concept and the relationship this concept has with other concepts.

Finally, in the integration phase a theory is developed, and constant comparison is
used to search for cases with which the theory is then tested. Furthermore, the
cases are assessed to see whether they can be included in the theoretical
framework, and if so where.

Analytic induction

Analytic induction: researchers try to find the best fitting theoretical structure
for their research material. The strategy can be used to develop a definition of a
phenomenon. In that case, the researcher systematically determines which
characteristics are present every time a phenomenon occurs. However, analytic
induction is most commonly used to develop a theory about the causes of certain
behaviour. Hypotheses are continually tested on new material by explicitly
searching for instances that do not support the hypotheses. Whenever such an
inconsistent or deviant case is found, either the description of the phenomenon is
changed in order to exclude the case, or the hypothesis is restated. Maso and
Smaling have developed an application of analytic induction. This application is
appropriate for research aimed at developing theories. There are four phases to the
application:

Incubation
Confrontation
Generation
Closure
In the incubation phase, a theoretical framework is developed based on the
literature. This framework consists of concepts, propositions and hypotheses. A
proposition, in this case, is an idea that is still so vague it cannot yet be
formulated as a hypothesis. A hypothesis is a proposition that has been redefined
to a conditional statement, an ‘if … then’ relationship. In the confrontation
phase, the theoretical framework is pitted against the information that was derived
from the observations immediately after the first round of data collection has been
completed. In the generation phase, the new material plays the leading role: new
ideas and hypotheses are proposed as a result of the new material. These new
propositions and questions are then also compared to other new material and so
forth. The last two phases are then repeated again and again. In the closure phase,
an initial answer to the research questions is formulated, and includes clear
evidence of the source of the analysed data so that the value of this initial
conclusion is clear.

Theoretical sensitivity

Theoretical sensitivity is the researcher’s ability to develop creative ideas from
research data, by viewing the data through a certain theoretical lens. Analysis
requires an ability to interpret research data, and to derive ideas and themes from
them. The word ‘theoretical’ indicates an ambition not merely to describe but also
to theorize. Glaser attaches great importance to the researcher’s theoretical
sensitivity in the analysis of gathered data which he believes to be attained
through immersion in the data. He has developed a large number of moulds, which he
called ‘coding families’, to stimulate the researcher’s thinking. Strauss and
Corbin propagated the use of a single mould, which they called the ‘coding
paradigm’ and which they claimed is suitable for all data. The mould consists of
several parts, namely conditions, context, interactions/strategies and
consequences. These four elements should be sought for in the data. In Glaser’s
opinion, the application of Strauss and Corbin’s mould leads to forced modelling
instead of allowing the model to emerge from the data.

A stepwise approach: the spiral of analysis

A thorough review of literature and social theory precedes the empirical part of a
qualitative research project.
The approach to analysis that we will be using is based upon the principle of
constant comparison as developed in the grounded theory approach and the stages
devised by Wester.
Coding is seen as the most important aid in conducting an analysis. Coding is used
to segment and reassemble the data.
For a research project to be successful, structuring of the analytical process is
crucial. A step-by-step approach is needed.
Analysis forces the researcher to engage in two activities: thinking and doing.
Doing stimulates thinking, and vice versa.
(The diagram referred to here is not reproduced in these notes.)
The left-hand side of the diagram shows the role of data in the analysis, which is
considered the input of the analysis. The middle column contains the analytical
activities of the researcher. The right-hand side describes the output of the
analysis, that is, the interim results of the research. The diagram shows three
separate rounds of data collection. There may be more or less, depending on the
project.

Chapter 6: Doing qualitative analysis


Introduction to coding

Findings can consist of descriptions that are more or less theoretical as well as
interpretative explanations of the research subject. Findings of qualitative
research always include interpretations of the empirical data. It is a mistake to
consider raw data as findings. We can distinguish two types of analyses: one is
oriented towards the themes or categories present in the data, and the other is
oriented towards the cases, such as organizations, activities, events, situations
or participants. A code is a label that depicts the core topic of a segment. While
coding, a researcher is looking for descriptions and sometimes for theoretical
statements that go beyond the concrete observations in the specific sample.

Important terms in qualitative analysis:

Analytic – conceptual (expressed in concepts), or theoretical
Category – a group or cluster used to sort parts of the data during analysis
Code – a word or string of words used as a name for a category generated during
analysis
Concept – a term referring to a category and used as a building block in a theory
Interpretation – an explanation of the meaning of what is observed in empirical
data
Pattern – an orderly sequence consisting of a number of repeated or complementary
elements
Theme – the matter with which the data are mainly concerned
Theory – a coherent framework that attempts to describe, understand and explain
aspects of social life

Open coding

Open coding is the process of breaking down, examining, comparing, conceptualizing
and categorizing data. Open coding usually takes place at the beginning of the
research project and starts during the collection of the first round of data.
Little to no selection is made in terms of relevance of the research material,
because it is still largely unpredictable what will be of value and what will not.
Exploration of the data by open coding constitutes the start of conceptualization
of the field of research. Open coding encourages a thematic approach since it
forces the analyst to break up the text into pieces, to compare them and to assign
them to groups that address the same theme. Open coding contributes to a clear
organization of the data as well, since it results in an indexing system that fits
the researcher’s analytical needs. The process of open coding involves the
following steps:

Read the whole document
Re-read the text line by line and determine the beginning and end of a fragment
Determine why this fragment is a meaningful whole
Judge whether the fragment is relevant to the research
Make up an appropriate name for the fragment, a code
Assign this code to the text fragment
Read the entire document and code all relevant fragments
Compare the different fragments, because it is likely that multiple fragments in a
text address the same topic and they should therefore receive the same code.
The result of open coding is a list of codes, also referred to as a coding scheme.
Within a software program codes can be listed or organized in different ways. In
open coding, doing (actually assigning a code) and thinking (coming up with good
questions and codes) converge. Specific codes derived from the participants’
terminology are known as ‘in vivo’ codes or field-related concepts. In vivo codes
are not just catchy words; rather they pinpoint exactly what is happening or what
the meaning of a certain experience or event is. Charmaz described in vivo codes
as:

General terms everyone knows that flag condensed and important meanings
Terms made up by participants that capture meanings or their experiences
Insider shorthand terms specific to a particular group that reflect their
perspective
Another source of codes is the concepts derived from social theories that
researchers have come across during disciplinary education and while reading the
literature. These kinds of codes are commonly known as ‘theoretical concepts’ or
‘constructed codes’. Charmaz particularly advises coding for action and process.
This is made easier by coding with gerunds. So you should keep your codes active
and close to the data. In open coding, researchers should not globally examine the
text but do so in detail. However, a coding scheme which is too detailed is not
recommended either. When every distinguishable fragment in a text receives a
different code, the researcher goes beyond comparison and categorization. In this
phase of a research project, it is recommended that researchers work in a group
instead of on their own. Having others to confer with contributes to a well-
developed coding system, thereby ensuring that certain fragments are systematically
awarded the ‘correct’ code. This is known as ‘inter-rater reliability’. The phase
of open coding can be ended if no new codes are necessary. The entire process may
repeat itself until the second or third round of data collection. Every time new
observations provide a reason for generating a new code, open coding is resumed.
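
As a toy illustration of the open coding steps above (fragments and codes invented;
the book does not prescribe any software), coded fragments can be grouped per code
so that the comparison step becomes straightforward:

    # Toy sketch of open coding output: fragments paired with code labels;
    # the set of labels used is the emerging coding scheme.
    from collections import defaultdict

    coded_fragments = [
        ("I never tell colleagues how tired I am", "concealing fatigue"),
        ("My team leader checks on us at night", "supervision"),
        ("I just keep it to myself", "concealing fatigue"),  # same topic, same code
    ]

    coding_scheme = sorted({code for _, code in coded_fragments})
    print("Coding scheme:", coding_scheme)

    # Grouping fragments per code supports comparing similar fragments.
    by_code = defaultdict(list)
    for fragment, code in coded_fragments:
        by_code[code].append(fragment)
    for code, fragments in by_code.items():
        print(code, "->", fragments)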

Axial coding

The term axial coding refers to a set of procedures whereby data are put back
together in new ways after open coding, by making connections between categories.
Axial coding is a more abstract process and consists of coding around several
single categories or axes. Axial coding relates categories to subcategories,
specifies the properties and dimensions of a category, and reassembles the data you
have fractured during initial coding to give coherence to the emerging analysis.
When researchers employ axial coding, the reasoning moves predominantly from codes
to data, whereas in open coding the reasoning moves in the opposite direction, from
data to codes.

Axial coding involves the following steps:

Determine whether the codes developed thus far cover the data sufficiently and
create new ones when the data provide incentives to do so.
Check whether each fragment has been coded properly, or if it should be assigned a
different code.
Decide which code is most suitable if synonyms have been used to create two equal
categories, and merge the categories.
Look at the overview of fragments assigned to a certain code. Consider their
similarities and differences.
Subdivide categories if necessary.
Look for evidence for distinguishing main codes and sub-codes and assign the sub-
codes to the main code.
See whether a sufficiently detailed description of a category can be derived from
the assigned fragments and if not, decide to collect new data to fill the gap.
Keep thinking about the data and the coding.
The primary purpose of axial coding is to determine which elements in the research
are the dominant ones and which are the less important ones. The second purpose of
axial coding is to reduce and reorganize the data set: synonyms are crossed out,
redundant codes are removed and the best representative codes are selected.
Sensitizing concepts come into play again during the phase of axial coding. The
sensitizing concepts can be filled in during the analytic phase of axial coding to
get their definitive content that fits in with how they are used by the
participants in the field. In the case of analytical use of codes, thinking about
inter-rater reliability is important. When the codes are used in a more practical
or descriptive way, the value of inter-rater reliability diminishes and can even
become a hindrance. In the axial coding phase, both the definition and the
properties of a category become clear. Axial coding can be finalized when the
distinction between main codes and sub-codes is clearly established and the
contents of the categories are known.
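
Two of the axial coding moves listed above, merging synonymous codes and grouping
sub-codes under main codes, can be sketched like this (all labels are invented):

    # Hypothetical sketch of axial coding operations on a coding scheme.
    fragments = {
        "f1": "hiding tiredness",
        "f2": "concealing fatigue",   # synonym of "hiding tiredness"
        "f3": "supervision",
        "f4": "peer support",
    }

    # 1) Merge synonyms: pick the most suitable code and recode the fragments.
    synonyms = {"hiding tiredness": "concealing fatigue"}
    fragments = {fid: synonyms.get(code, code) for fid, code in fragments.items()}

    # 2) Distinguish main codes and assign sub-codes to them.
    hierarchy = {
        "coping": ["concealing fatigue"],
        "work context": ["supervision", "peer support"],
    }

    for main, subs in hierarchy.items():
        for sub in subs:
            fids = [fid for fid, code in fragments.items() if code == sub]
            print(f"{main} / {sub}: fragments {fids}")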

Selective coding

Selective coding refers to looking for connections between the categories in order
to make sense of what is happening in the field. Selective coding is aimed at
integrating the loose pieces of your earlier coding efforts and can be considered a
logical step after the segmenting of the data. Often, the findings of a qualitative
piece of work will aim at theory development. When this is the case, it is in the
process of selective coding that certain categories are adopted as theoretical
concepts. Theoretical coding as an alternative term for selective coding is
therefore a rather good choice for this sophisticated level of coding. The term
‘core category’ originally stems from the grounded theory approach and refers to a
category that is central to the integration of the theory. The core category or
core concept is a construction of the researcher which does not magically emerge
from the data.

Since the phase of selective coding marks the end of the analysis phase, it is
useful to answer the questions listed below. When an answer can be given, it shows
that you understand how the pieces of data fit together:

Which themes have turned up repeatedly in the observations?
What is the main message that the participants have tried to bring across?
How are the various relevant themes related?
What is important for the description and the understanding of the participant’s
perspective and behaviour?

Chapter 8: Findings

Results of qualitative research

Findings in qualitative research are conceptualized differently from findings in
quantitative research. The link between data and interpretations complicates what
can be thought of as findings. In quantitative research, interpretations of
numerical analysis results are commonly put into the discussion section of the
formal report. In qualitative research, however, it is the researcher’s
interpretations that constitute the results. Box 8.1 Mistakes in qualitative
research:

Data are not findings
Analyses are not findings
Under-analysis
Lack of thick description
Forcing a framework
Over-generalizations
(The figure showing the types of findings is not reproduced in these notes.) The
ordering criterion is the degree of transformation of the data. On the left of the
figure are findings that represent the least transformation of data, and on the
right are those that are most transformed.

Description of the different types of findings:

No findings – In a no-finding report the data are presented as findings. The
researchers simply reproduce the data, with almost no interpretation.
Topical surveys – In topical surveys the data are minimally transformed. The topics
used are often preselected as a framework for data collection, and structured
instruments are used. The system for ordering the topics into groups is based on
simple descriptions, sometimes influenced by the number of occurrences rather than
by interpretations.
Thematic survey – In the thematic survey a greater degree of transformation is
found. More effort is put into exploring themes, and describing and interpreting
them instead of merely listing topics. More attention is paid to the nuances and
subtleties and to the language used and the way participants express themselves.
Conceptual/thematic description – Reports with a conceptual/thematic description
contain findings rendered in the form of one or more themes or concepts either
developed from the data or imported from existing theories or literature. These
themes represent a pattern in the data that the researcher, by analysing the data,
had to find and extract. Concepts and themes from existing literature are used to
transform the data and to provide new insights instead of merely organizing the
data.
Interpretive explanation – The explanations are composed of linkages between
different categories that represent the studied phenomenon in a new way. The core
of this type is an explanation that fully covers variations in both sample and
data.
Not all studies aim for the same goals, and as such they do not demand the same
analytical efforts.

Meta-synthesis of qualitative studies

In this chapter we are particularly interested in the nature of qualitative
findings, in their origins, and in the factors that influence the ultimate
findings. Qualitative meta-synthesis is defined as ‘a kind of research integration
study in which the findings of completed qualitative studies are combined’. Four
stages of meta-synthesis:

Meta-theory involves examination of the theories that led researchers to identify
relevant research topics, frame research questions in certain ways and determine
such factors as inclusion criteria, angle of vision and interpretive lens.
Meta-method involves thoughtful examination of the manner in which the
methodological approach that was used to gather and interpret data shapes the
findings that emerge from a particular study
Meta-data-analysis involves reinterpretation of the actual findings from the
original qualitative studies in light of data and findings from other studies
Meta-synthesis refers to testing the degree to which overarching conceptualizations
and the theories that derive from them surmount the analytic challenges that have
been detected within the body of research
Meta-data-analysis implies that the entire results section of a qualitative report
is examined. For qualitative meta-data-analysis to take place it should be clear
exactly what the researchers have found out about their topic. Synthesizing
qualitative research is no easy job since it involves, among other things:

Discerning the influence of method on findings
Deciding whether a study is worthy of being included in an integration project
Preserving the integrity of each study
Choosing the right approach to combining the findings
Deciding what integration means in qualitative research

Muddling qualitative methods

Slippages between methods do not automatically make the findings worthless. Morse
argues that different research methods are suitable for answering different
research questions, and that different research methods lead to different outcomes.
Possible problems arise when the canons of a tradition are compromised through
intentional or unintentional muddling of methods. ‘Muddling’ refers to the
incorporation of methods and techniques that did not originate in the tradition and
that at first sight do not fit in. Some scientists argue that researchers need to
make a clear-cut choice between the different traditions and approaches.

Mixed method research

Mixed methods can be broadly defined as ‘research in which the investigator
collects and analyses data, integrates the findings, and draws inferences using
both qualitative and quantitative approaches or methods in a single study or a
program of inquiry’. The legitimacy of combinations of qualitative and quantitative
research has been questioned mainly within the debate about the paradigm-method
fit. When research methods are chosen because of philosophical considerations, this
is called a paradigmatic choice. This means, for instance, that for social
scientists holding a constructivist view, it would be unjust to study human beings
as if they were passive respondents. Conversely, it would be incorrect for
researchers holding an objectivistic view to bias the respondents’ answers by
introducing the researcher personally as is done in qualitative research. Mixed
method designs are commonly distinguished on two criteria. The first is whether any
of the two branches is prioritized or whether both have equal weight. The second is
whether the studies take place simultaneously or sequentially. In a sequential
explanatory design, qualitative research is mainly chosen for its explanatory
power, and in the sequential exploratory design it is chosen mostly for its
exploratory capacities. In concurrent triangulation the choice is mainly motivated
by the fact that qualitative research complements quantitative research by
providing different types of knowledge.

Sequential explanatory (QUAN → qual)

Sometimes quantitative research provides unexpected outcomes, contradicting
outcomes or outcomes that are not very well understood. Unexpected or unexplained
outcomes can then be followed up with qualitative research. The challenge in such a
design is to integrate both parts in a way that the qualitative research answers
the questions that puzzle the researchers after finishing the quantitative part.

Sequential exploratory (QUAL → quan)

This is chosen, among other reasons, when researchers aim at measuring phenomena
with standardized and valid instruments in an area that still needs exploration.
The next step of this approach has a predominantly quantitative character, in most
cases the generation and administering of a survey. The challenge is to translate
the qualitative findings into appropriate items to create a valid and reliable
measuring instrument.

Concurrent triangulation (QUAN + QUAL)

In this design, quantitative and qualitative data are generated simultaneously to examine
a particular phenomenon from different angles, also referred to as concurrent
triangulation design. The purpose of such a design is often to generalize findings
from the quantitative research to the entire population by means of a large sample
and to understand the mechanisms that underlie the outcomes at the local or micro-
interactional level by means of the qualitative research. The difficulty with
triangulation is the question whether both methods measure the same phenomenon and
how evidence resulting from both methods should be interpreted.

Practical use

It is still uncertain if and how the findings are going to be used. On one hand
this is due to the nature of research, and on the other hand it is due to the
nature of policy making. Utilization of research is often conceived as its
instrumental use. Instrumental use means that policymakers act on the findings –
implications and advice – of specific research and adjust policies or interventions
accordingly.

However, findings seldom offer clear-cut indications or advice on how to implement
them in practice. Evaluation studies are often conducted as quantitative
experiments and preferably as randomized controlled trials since these designs are
considered the best choice for determining causal relationships between
intervention and effect. An obstacle for generalizing the outcomes of these
experiments is that they are often conducted in highly controlled laboratory
settings and that the outcomes cannot automatically be considered evidence for
practice in real-life situations. It is acknowledged that qualitative research can
help bridge the gap between research and practice. Its strength lies in its
flexibility, enabling researchers to make a detailed examination of specific
phenomena in the field with the required instruments to reveal aspects of human
experience with treatment, therapy, programmes or intervention. However, it is not
as simple as it seems. A solution for one problem may work in specific conditions
and not work in others.

Chapter 9: Quality of the research


Thinking about quality

Judgement of the quality of research implies an assessment of the accuracy of the
insights gained as a result of the research. A judgement takes into account whether
the findings and conclusions convincingly represent the social phenomenon that was
claimed to be examined. To arrive at a judgement, it is necessary to track down how
these findings and conclusions were realized. A judgement of the legitimacy of the
conclusions is the ultimate criterion by which the findings of a research project
may be added to the already existing body of knowledge on a certain social
scientific area. Quality of research is also referred to as objectivity (but not in
this book). ‘The making of’ the findings needs to be transparent and open to review
at all times. Reliability is referred to as the consistency of the measures used in
social research. When the same phenomenon is repeatedly measured using the same
instrument it should lead to the same outcomes. Reliability is often determined by
calculating internal consistency and stability over time. The reliability of
observations may be increased by standardization of data collection methods,
the argument being that accidental errors and thus inconsistencies occur less
frequently when all procedures are specified. Additionally, researchers hope that unsystematic
errors even out when sufficient observations have been made in a study. Validity
concerns whether you assess what you set out to assess. This depends on the use of
the correct measures. Measurement validity or construct validity refers to whether
the measure that is formulated for a particular concept really does reflect the
concept that it is supposed to measure. Face validity means that others have a look
at the way in which a construct is laid out into elements in an attempt to measure
the construct itself. Content validity involves a panel of experts who view the
instrument and give their approval, often in a few rounds of exchanging opinions
and views. Internal validity means that we can be confident that researchers
describe and/or explain what they had set out to describe and explain. Many things
can undermine or threaten the internal validity of a study. For example, when a
segment of the selected population cannot be reached or does not want to cooperate.
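
The inter-rater reliability mentioned in the coding chapters can be quantified. A
common statistic for this is Cohen’s kappa, which is not named in these notes, so
the following is our sketch with invented codings:

    # Agreement between two coders on the same fragments, corrected for chance.
    from collections import Counter

    coder_a = ["coping", "support", "coping", "stigma", "coping"]
    coder_b = ["coping", "support", "stigma", "stigma", "coping"]

    n = len(coder_a)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Chance agreement from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in freq_a.keys() | freq_b.keys())

    kappa = (p_observed - p_chance) / (1 - p_chance)
    print(f"observed agreement = {p_observed:.2f}, kappa = {kappa:.2f}")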

Glaser and Strauss developed four quality criteria:

Fitness: the theory must closely fit the substantive area in which it will be used
Understanding: the theory must be readily understandable by laymen concerned with
this area
Generality: the theory must be sufficiently general to be applicable to a multitude
of diverse daily situations within the substantive area, not just to a specific
type of situation
Control: the theory must allow the user partial control over the structure and
process of daily situations as they change through time

Perspectives on quality

The first perspective: all scientific research must strive to arrive at the truth
of objective reality and should be neutral, value free, impartial and distanced. In
this view on social science research, much value is attached to reliability by the
standardization of measurement instruments and by its replication. Qualitative
research is only partially able to meet the demands that this perspective places on
the research practice. The second perspective: social scientists holding this
perspective break with the traditional conceptions of quality being validity and
reliability. They formulate other demands for research quality, such as
confirmability, dependability, transferability and credibility. Criticisms: this
perspective disengages itself from mainstream social science, and the newly formed
criteria are merely derived from the original ones. A third point of criticism is
made by those who
feel that political or humanistic criteria should not have such a large place in
social scientific research. The third perspective: the relationship between
reliability and validity is sometimes viewed differently from the way it is viewed
in quantitative research.

‘Doing’ quality

Methodological accountability means that the researchers accurately document what
they have done, how it was done and why it was done. By including a proper account
of all activities, others can judge whether the outcomes can be trusted, and they
can repeat the whole investigation if desired. To retrace what the researcher has
actually done, the topic list can be added to a publication as well as an account
of the researcher’s perspective. Replication of a research remains difficult, even
if the steps taken during the research are carefully described in the report.
Therefore methodological transparency seems an alternative, because it enables at
least virtual replication: it primarily serves to assess the justification of the
researcher’s choices and the lines of reasoning and it facilitates possible
replication and comparative studies.

Reflection on the researcher’s role: one effect of the researcher’s involvement is
that
people have the tendency to change their behaviour when they know they are being
studied. This phenomenon, called ‘reactivity’, has a negative influence on
validity. People are believed to get used to the presence of researchers, however,
when they are in the field for a prolonged period. When researchers are in the
field, it is possible that they too change during the course of the research:
‘going native’. How can researchers reflect on their own role and influence in the
field? Patton advises staying sensitive to shifts in one’s own perspective by
systematically recording it at various times throughout the field work.

Triangulation refers to the examination of a social phenomenon from different
angles. First and foremost, triangulation entails the use of more than one method
or source of data in a research endeavour. Researcher triangulation and theoretical
triangulation follow from here. Theoretical triangulation requires that more than
one theory is applied to interpret the data. Recently critical reviews of
triangulation have appeared, especially when qualitative and quantitative research
are combined: how does different evidence need to be weighed against each other?
Patton argues that a common misunderstanding about triangulation is that the point
is to demonstrate that different data sources or research methods yield essentially
the same result; the point is really to test for such consistency.

Member validation is also known as feedback to participants or member checks. If
members of the social world that was studied confirm that the researcher has
correctly understood that social world, this adds to the credibility of the results
and ultimately to their acceptance by others.

Multiple researchers: Using multiple as opposed to singular researchers is often
referred to as researcher or analyst triangulation. Teams are better at
standardizing coding and improving the accuracy of the coding process, as well as at
avoiding bias, because they check each other’s interpretations.
Peer debriefing is thought of as a special type of researcher triangulation, with
peers or colleagues that are not part of the research team providing a fresh
perspective on the analysis process and exploring explanations the researcher(s)
may have overlooked.

Checklist for asserting quality of the analysis

In order to find out whether a certain study meets a number of essential criteria,
several checklists have been generated. Lofland and Lofland organized the questions
to ask about a field study report in four categories:

Basic organization: How well is the report presented?
Data and methods: What is the quality of the data collection, analysis and
presentation?
Analysis: What is the quality of the analytic effort?
Overall evaluation: What, overall, is the value of this report?

External validity or generalizability

External validity or generalizability is one of the most difficult subjects in
qualitative research. For research to be externally valid it is of crucial
importance how the sample is selected, and it is exactly this that causes doubts
about the possibilities of generalizing qualitative findings. Smaling distinguishes
three types of inductive generalization, namely statistical generalization,
theoretical generalization and variation-based generalization. Statistical
generalization is based on random sampling so that each case has an equal chance of
becoming part of the sample. Theoretical generalization is based on the principle
of replication. The researcher theorizes on the basis of a certain sample, then
tests the provisional findings and conjectures with new sample cases. The third
type is variation-based generalization. This type is applicable when sampling takes
place by means of a purposive descriptive form that is non-theory directed. The
researcher may then attempt some form of generalization of the research results
by focussing on describing the variations in which a phenomenon occurs. ‘Analogical
generalization’ means that the researcher considers the relevant similarities and
differences between the cases studied and not studied to argue the possibility that
the findings are applicable for the non-researched cases as well.
