1-1a: Types of Studies: Foundations
Time is an important element of any research design, and here I want to introduce one of the most fundamental distinctions in research design nomenclature: cross-sectional versus longitudinal studies. A cross-sectional study is one that takes place at a single point in time. In effect, you are taking a slice or cross-section of whatever it is you're observing or measuring. A longitudinal study is one that takes place over time; you have at least two (and often more) waves (distinct times when observations are made) of measurement in a longitudinal design. A further distinction is made between two types of longitudinal designs: repeated measures and time series. There is no universally agreed-upon rule for distinguishing between these two terms, but in general, if you have two or a few waves of measurement, you are using a repeated measures design. If you have many waves of measurement over time, you have a time series. How many is many? Usually, you wouldn't use the term time series unless you had at least twenty waves of measurement, and often far more. Sometimes the way you distinguish between these is by the analysis methods you would use. Time series analysis requires that you have at least twenty or so observations over time. Repeated measures analyses aren't often used with as many as twenty waves of measurement.

A relationship refers to the correspondence between two variables (see the section on variables later in this chapter). When you talk about types of relationships, you can mean that in at least two ways: the nature of the relationship or the pattern of it.

The Nature of a Relationship

Although all relationships tell about the correspondence between two variables, one special type of relationship holds that the two variables are not only in correspondence, but that one causes the other. This is the key distinction between a simple correlational relationship and a causal relationship. A correlational relationship simply says that two things perform in a synchronized manner. For instance, economists often talk of a correlation between inflation and unemployment. When inflation is high, unemployment also tends to be high. When inflation is low, unemployment also tends to be low. The two variables are correlated; but knowing that two variables are correlated does not tell you whether one causes the other. It is documented, for instance, that there is a correlation between the number of roads built in
Europe and the number of children born in the United States. Does that mean that if fewer children are desired in the United States there should be a cessation of road building in Europe? Or does it mean that if there aren't enough roads in Europe, U.S. citizens should be encouraged to have more babies? Of course not. (At least, I hope not.) While there is a relationship between the number of roads built and the number of babies, it's not likely that the relationship is a causal one. This leads to consideration of what is often termed the third-variable problem. In this example, it may be that a third variable is causing both the building of roads and the birthrate, and causing the correlation that is observed. For instance, perhaps the general world economy is responsible for both. When the economy is good, more roads are built in Europe and more children are born in the United States. The key lesson here is that you have to be careful when you interpret correlations. If you observe a correlation between the number of hours students use the computer to study and their grade point averages (with high computer users getting higher grades), you cannot assume that the relationship is causal, that is, that computer use improves grades. In this case, the third variable might be socioeconomic status: richer students, who have greater resources at their disposal, tend to both use computers and make better grades. Resources drive both use and grades; computer use doesn't cause the change in the grade point averages.
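To make the third-variable problem concrete, here is a minimal simulation sketch in Python; the variable names and coefficients are invented for illustration. A lurking variable drives two others, producing a strong correlation with no causal link between them:

```python
# Hypothetical illustration: a hidden third variable z ("the economy")
# drives both x ("roads built") and y ("babies born"), so x and y end up
# correlated even though neither causes the other.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

z = rng.normal(size=n)            # the lurking third variable
x = 2.0 * z + rng.normal(size=n)  # driven by z, not by y
y = 1.5 * z + rng.normal(size=n)  # driven by z, not by x

r = np.corrcoef(x, y)[0, 1]
print(f"correlation between x and y: {r:.2f}")  # strong, yet not causal
```

Breaking the z link in either equation drives the correlation to zero, which is exactly what "the economy is responsible for both" means.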
Patterns of Relationships

Several terms describe the major different types of patterns one might find in a relationship. First, there is the case of no relationship at all. If you know the values on one variable, you don't know anything about the values on the other. For instance, I suspect that there is no relationship between the length of the lifeline on your hand and your grade point average. If I know your GPA, I don't have any idea how long your lifeline is. Then there is the positive relationship. In a positive relationship, high values on one variable are associated with high values on the other, and low values on one are associated with low values on the other. Figure 1.1b shows an idealized positive relationship between years of education and the salary one might expect to be making. On the other hand, a negative relationship implies that high values on one variable are associated with low values on the other. This is also sometimes termed an inverse relationship. Figure 1.1c shows an idealized negative relationship between a measure of self-esteem and a measure of paranoia in psychiatric patients. These are the simplest types of relationships that might typically be estimated in research. However, the pattern of a relationship can be more complex. For instance, Figure 1.1d shows a relationship that changes over the range of both variables, a curvilinear relationship. In this example, the horizontal axis represents dosage of a drug for an illness and the vertical axis represents a severity-of-illness measure. As the dosage rises, the severity of illness goes down; but at some point, the patient begins to experience negative side effects associated with too high a dosage, and the severity of illness begins to increase again.

You won't be able to do much in research unless you know how to talk about variables. A variable is any entity that can take on different values. Okay, so what does that mean? Anything that can vary can be considered a variable. For instance, age can be considered a variable because age can take different values for different people or for the same person at different times. Similarly, country can be
considered a variable because a person's country can be assigned a value. Variables aren't always quantitative or numerical. The variable gender consists of two text values, male and female, which we would naturally think of as a qualitative variable because we are distinguishing between qualities rather than quantities. If it is useful, quantitative values can be assigned in place of the text values, but it's not necessary to assign numbers for something to be a variable. It's also important to realize that variables aren't only things that are measured in the traditional sense. For instance, in much social research and in program evaluation, the treatment or program is considered to consist of one or more variables. (That is, the cause can be considered a variable.) An educational program can have varying amounts of time on task, classroom settings, student-teacher ratios, and so on. Therefore, even the program can be considered a variable, which can be made up of a number of subvariables. An attribute is a specific value on a variable. For instance, the variable sex or gender has two attributes: male and female. Or, the variable agreement might be defined as having five attributes:

1 = strongly disagree
2 = disagree
3 = neutral
4 = agree
5 = strongly agree

Another important distinction having to do with the term variable is the distinction between an
independent and a dependent variable. This distinction is particularly relevant when you are investigating cause-effect relationships. It took me the longest time to learn this distinction. (Of course, I'm someone who gets confused about the signs for arrivals and departures at airports: do I go to arrivals because I'm arriving at the airport, or does the person I'm picking up go to arrivals because they're arriving on the plane?) I originally thought that an independent variable was one that would be free to vary or respond to some program or treatment, and that a dependent variable must be one that depends on my efforts (that is, it's the treatment). However, this is entirely backward! In fact, the independent variable is what you (or nature) manipulates: a treatment, program, or cause. The dependent variable is what you presume to be affected by the independent variable: your effects or outcomes. For example, if you are studying the effects of a new educational program on student achievement, the program is the independent variable and your measures of achievement are the dependent ones. Finally, there are two traits of variables that should always be achieved. Each variable should be exhaustive, meaning that it should include all possible answerable responses. For instance, if the variable is religion and the only options are Protestant, Jewish, and Muslim, there are quite a few religions that haven't been included. The list does not exhaust all possibilities. On the other hand, if
you exhaust all the possibilities with some variables (religion being one of them), you would simply have too many responses. The way to deal with this is to explicitly list the most common attributes and then use a general category like Other to account for all remaining ones. In addition to being exhaustive, the attributes of a variable should be mutually exclusive, meaning that no respondent should be able to have two attributes simultaneously. While this might seem obvious, it is often rather tricky in practice. For instance, you might be tempted to represent the variable Employment Status with the two attributes employed and unemployed. However, these attributes are not necessarily mutually exclusive: a person who is looking for a second job while employed might legitimately be able to check both attributes! But don't researchers often use questions on surveys that ask the respondent to check all that apply and then list a series of categories? Yes, but technically speaking, each of the categories in a question like that is its own variable and is treated dichotomously as either checked or unchecked, as attributes that are mutually exclusive.
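As a small illustration of that last point, here is a sketch in Python (the categories are made up) of how a check-all-that-apply item can be recoded as one dichotomous variable per category, each with the mutually exclusive attributes checked and unchecked:

```python
# Hypothetical survey item: "Check all that apply." Each category becomes
# its own 0/1 variable rather than one multi-valued variable.
respondent_checked = {"employed", "student"}  # boxes this respondent ticked
categories = ["employed", "unemployed", "student", "retired"]

encoded = {category: int(category in respondent_checked)
           for category in categories}
print(encoded)  # {'employed': 1, 'unemployed': 0, 'student': 1, 'retired': 0}
```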
A hypothesis is a specific statement of prediction. It describes in concrete (rather than theoretical) terms what you expect to happen in your study. Not all studies have hypotheses. Sometimes a study is designed to be exploratory (see Section 1-2b, Deduction and Induction, later in this chapter). There is no formal hypothesis, and perhaps the purpose of the study is to explore some area more thoroughly to develop some specific hypothesis or prediction that can be tested in future research. A single study may have one or many hypotheses. Actually, whenever I talk about a hypothesis, I am really thinking simultaneously about two hypotheses. Let's say that you predict that there will be a relationship between two variables in your study. The way to set up the hypothesis test is to formulate two hypothesis statements: one that describes your prediction and one that describes all the other possible outcomes with respect to the hypothesized relationship. Your prediction is that variable A and variable B will be related. (You don't care whether it's a positive or negative relationship.) Then the only other possible outcome would be that variable A and variable B are not related. Usually, the hypothesis that you support (your prediction) is called the alternative hypothesis, and the hypothesis that describes the remaining possible outcomes is termed the null hypothesis. Sometimes a notation such as HA or H1 is used to represent the alternative hypothesis (your prediction), and HO or H0 to represent the null case. You have to be careful here, though. In some studies, your prediction might well be that there will be no difference or change. In this case, you are essentially trying to find support for the null hypothesis and you are opposed to the alternative. If your prediction specifies a direction, the null hypothesis includes both the no-difference prediction and the prediction of the opposite direction. This is called a one-tailed hypothesis. For instance, let's imagine that you are investigating the effects of a new employee-training program and that you believe one of the outcomes will be that there will be less employee absenteeism. Your two hypotheses might be stated something like this: The null hypothesis for this study is HO: As a result of the XYZ company employee-training
program, there will either be no significant difference in employee absenteeism or there will be a significant increase,
which is tested against the alternative hypothesis: HA: As a result of the XYZ company employee-training program, there will be a significant decrease in employee
absenteeism.
In Figure 1.2, this situation is illustrated graphically. The alternative hypothesis (your prediction that the program will decrease absenteeism) is shown there. The null must account for the other two possible conditions: no difference, or an increase in absenteeism. The figure shows a hypothetical distribution of absenteeism differences. The term one-tailed refers to the tail of the distribution on the outcome variable.
Figure 1.2. A one-tailed hypothesis
When your prediction does not specify a direction, you have a two-tailed hypothesis. For instance, let's assume you are studying a new drug treatment for depression. The drug has gone through some initial animal trials but has not yet been tested on humans. You believe (based on theory and previous research) that the drug will have an effect, but you are not confident enough to hypothesize a direction and say the drug will reduce depression. (After all, you've seen more than enough promising drug treatments come along that eventually were shown to have severe side effects that actually worsened symptoms.) In this case, you might state the two hypotheses like this: The null hypothesis for this study is: HO: As a result of 300 mg/day of the ABC drug, there will be
no significant difference in depression,
which is tested against the alternative hypothesis: HA: As a result of 300 mg/day of the ABC drug, there will be
a significant difference in depression.
Figure 1.3 illustrates this two-tailed prediction for this case. Again, notice that the term two-tailed refers to the tails of the distribution for your outcome variable.
Figure 1.3. A two-tailed hypothesis
The important thing to remember about stating hypotheses is that you formulate your prediction (directional or not), and then you formulate a second hypothesis that is mutually exclusive of the first and incorporates all possible alternative outcomes for that case. When your study analysis is completed, the idea is that you will have to choose between the two hypotheses. If your prediction was correct, you would (usually) reject the null hypothesis and accept the alternative. If your original prediction was not supported in the data, you will accept the null hypothesis and reject the alternative. The logic of hypothesis testing is based on these two basic principles:
Two mutually exclusive hypothesis statements that, together, exhaust all possible outcomes need to be developed.
The hypotheses must be tested so that one is necessarily accepted and the other rejected.
Okay, I know it's a convoluted, awkward, and formalistic way to ask research questions, but it encompasses a long tradition in statistics called the hypothetico-deductive model, and sometimes things are just done because they're traditions. And anyway, if all of this hypothesis testing were easy enough that anybody could understand it, how do you think statisticians and methodologists would stay employed?
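For readers who want to see the mechanics, here is a sketch of the one-tailed and two-tailed tests described above using SciPy (version 1.6 or later for the alternative argument); the absenteeism numbers are fabricated for illustration:

```python
# Hypothetical absenteeism data (days absent per year) for a control group
# and a group that completed the XYZ training program.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=8.0, scale=2.0, size=50)
trained = rng.normal(loc=6.5, scale=2.0, size=50)

# One-tailed: H0 = no difference or an increase; HA = a decrease.
t_one, p_one = stats.ttest_ind(trained, control, alternative="less")
print(f"one-tailed p = {p_one:.4f}")  # small p -> reject H0, accept HA

# Two-tailed: H0 = no difference; HA = a difference in either direction.
t_two, p_two = stats.ttest_ind(trained, control, alternative="two-sided")
print(f"two-tailed p = {p_two:.4f}")
```

Note how the choice of alternative in the test call mirrors the choice between the one-tailed and two-tailed hypothesis statements: the test and the hypotheses must match.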
An important distinction is made between two types of data: qualitative and quantitative. Typically, data is called quantitative if it is in numerical form and qualitative if it is not. Notice that qualitative data could be much more than just words or text. Photographs, videos, sound recordings, and so on can be considered qualitative data. Personally, while I find the distinction between qualitative and quantitative data to have some utility, I think most people draw too hard a distinction, and that can lead to all sorts of confusion. In some areas of social research, the qualitative-quantitative distinction has led to protracted arguments, with the proponents of each arguing the superiority of their kind of data over the other. The quantitative types argue that their data is hard, rigorous, credible, and scientific. The qualitative proponents counter that their data is sensitive, nuanced, detailed, and contextual. For many of us in social research, this kind of polarized debate has become less than productive. In addition, it obscures the fact that qualitative and quantitative data are intimately related to each other. All quantitative data is based upon qualitative judgments, and all qualitative data can be described and manipulated numerically. For instance, think about a common quantitative measure in social research: a self-esteem scale where the respondent rates a set of self-esteem statements on a 1-to-5 scale. Even though the result is a quantitative score, think of how qualitative such an instrument is. The researchers who developed such instruments had to make countless judgments in constructing them: how to define self-esteem; how to distinguish it from other related concepts; how to word potential scale items; how to make sure the items would be understandable to the intended respondents; what kinds of contexts they could be used in; what kinds of cultural and language constraints might be present; and so on. Researchers who decide to use such a scale in their studies have to make another set of judgments: how well the scale measures the intended concept; how reliable or consistent it is; how appropriate it is for the research context and intended respondents; and so on. Believe it or not, even the respondents make many judgments when filling out such a scale: what various terms and phrases mean; why the researcher is giving this scale to them; how much energy and effort they want to expend to complete it; and so on. Even the consumers and readers of the research make judgments about the self-esteem measure and its appropriateness in that research context. What may look like a simple, straightforward, cut-and-dried quantitative measure is actually based on lots of qualitative judgments made by many different people. On the other hand, all qualitative information can be easily converted into quantitative, and many times doing so would add considerable value to your research. The simplest way to do this is to divide the qualitative information into categories and number them! I know that sounds trivial, but even that simple nominal enumeration can enable you to organize and process qualitative information more efficiently. As an example, you might take text information (say, excerpts from transcripts) and sort these excerpts into piles of similar statements. When you perform something as easy as this simple grouping or piling task, you can describe the results quantitatively.
For instance, Figure 1.4 shows that if you had ten statements and grouped these into five piles, you could describe the piles using a 10 × 10 table of 0s and 1s. If two statements were placed together in the same pile, you would put a 1 in their row-column juncture.
Figure 1.4. Example of how you can convert qualitative sorting information into quantitative data.
If two statements were placed in different piles, you would use a 0. The resulting matrix or table describes the grouping of the ten statements in terms of their similarity. Even though the data in this example consists of qualitative statements (one per card), the result of this simple qualitative procedure (grouping similar excerpts into the same piles) is quantitative in nature. "So what?" you ask. Once you have the data in numerical form, you can manipulate it numerically. For instance, you could have five different judges sort the ten excerpts and obtain a 0-1 matrix like this for each judge. Then you could average the five matrices into a single one that shows the proportions of judges who grouped each pair together. This proportion could be considered an estimate of the similarity (across independent judges) of the excerpts. While this might not seem too exciting or useful, it is exactly this kind of procedure that is used as an integral part of the process of developing concept maps of ideas for groups of people (something that is useful). Concept mapping is described later in this chapter.
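Here is a sketch of that pile-sort procedure in Python; the pile assignments for the five judges are invented for illustration:

```python
# Each judge assigns each of ten statements to a pile; two statements get
# a 1 in the 10 x 10 matrix when they share a pile, 0 otherwise. Averaging
# the matrices across judges estimates pairwise similarity.
import numpy as np

def cooccurrence(piles):
    """piles[i] is the pile label a judge gave statement i."""
    piles = np.asarray(piles)
    return (piles[:, None] == piles[None, :]).astype(float)

judges = [  # one hypothetical sort per judge, statements 0..9
    [0, 0, 1, 1, 2, 2, 3, 3, 4, 4],
    [0, 0, 0, 1, 2, 2, 3, 3, 4, 4],
    [0, 1, 1, 1, 2, 2, 3, 4, 4, 4],
    [0, 0, 1, 2, 2, 2, 3, 3, 4, 4],
    [0, 0, 1, 1, 1, 2, 3, 3, 3, 4],
]

similarity = np.mean([cooccurrence(j) for j in judges], axis=0)
print(similarity[0, 1])  # proportion of judges who grouped statements 0 and 1
```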
One of the most important ideas in a research project is the unit of analysis. The unit of analysis is the major entity that you are analyzing in your study. For instance, any of the following could be a unit of analysis in a study:
Individuals
Groups
Artifacts (books, photos, newspapers)
Geographical units (town, census tract, state)
Social interactions (dyadic relations, divorces, arrests)
Why is it called the unit of analysis and not something else (like the unit of sampling)? Because it is the analysis you do in your study that determines what the unit is. For instance, if you are comparing the children in two classrooms on achievement test scores, the unit is the individual child because you have a score for each child. On the other hand, if you are comparing the two classes on classroom climate, your unit of analysis is the group, in this case the classroom, because you have a classroom climate score only for the class as a whole and not for each individual student. For different analyses in the same study, you may have different units of analysis. If you decide to base an analysis on student scores, the individual is the unit. However, you might decide to compare average classroom performance. In this case, since the data that goes into the analysis is the average itself (and not the individuals' scores), the unit of analysis is actually the group. Even though you had data at the student level, you use aggregates in the analysis. In many areas of social research, these hierarchies of analysis units have become particularly important and have spawned a whole area of statistical analysis sometimes referred to as hierarchical modeling. This is true in education, for instance, where a researcher might compare classroom performance data but collect achievement data at the individual student level. A fallacy is an error in reasoning, usually based on mistaken assumptions. Researchers are familiar with all the ways they could go wrong and the fallacies they are susceptible to. Here, I discuss two of the most important. The ecological fallacy occurs when you make conclusions about individuals based only on analyses of group data. For instance, assume that you measured the math scores of a particular classroom and found that they had the highest average score in the district. Later
(probably at the mall) you run into one of the kids from that class and you think to yourself, "She must be a math whiz." Aha! Fallacy! Just because she comes from the class with the highest average doesn't mean that she is automatically a high-scorer in math. She could be the lowest math scorer in a class that otherwise consists of math geniuses. An exception fallacy is sort of the reverse of the ecological fallacy. It occurs when you reach a group conclusion on the basis of exceptional cases. This kind of fallacious reasoning is at the core of a lot of sexism and racism. The stereotype is of the guy who sees a woman make a driving error and concludes that women are terrible drivers. Wrong! Fallacy! Both of these fallacies point to some of the traps that exist in research and in everyday reasoning. They also point out how important it is to do research. It is important to determine empirically how individuals perform, rather than simply rely on group averages. Similarly, it is important to look at whether there are correlations between certain behaviors and certain groups.
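A toy numerical example (all scores invented) makes both the unit-of-analysis distinction and the ecological fallacy concrete: the class with the highest average can still contain the lowest individual scorer.

```python
# Hypothetical math scores for two classes.
import statistics

scores = {
    "class_A": [95, 98, 99, 97, 42],  # four high scorers plus one low one
    "class_B": [80, 82, 78, 81, 79],
}

# Group as the unit of analysis: compare class means.
means = {c: statistics.mean(s) for c, s in scores.items()}
print(means)  # class_A has the higher average (86.2 vs 80.0)

# Individual as the unit of analysis: the lowest scorer is in class_A.
lowest_score, lowest_class = min((v, c) for c, s in scores.items() for v in s)
print(lowest_score, lowest_class)  # 42 class_A
```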
Once the basic data is collected, the researcher begins trying to understand it, usually by analyzing it in a variety of ways. Even for a single hypothesis, there are a number of analyses a researcher might typically conduct. At this point, the researcher begins to formulate some initial conclusions about what happened as a result of the computerized math program. Finally, the researcher often attempts to address the original broad question of interest by generalizing from the results of this specific study to other related situations. For instance, on the basis of strong results indicating that the math program had a positive effect on student performance, the researcher might conclude that other school districts similar to the one in the study might expect similar results.

Components of a Study

What are the basic components or parts of a research study? Here, I'll describe the basic components involved in a causal study. Because causal studies presuppose descriptive and relational questions, many of the components of causal studies will also be found in descriptive and relational studies. Most social research originates from some general problem or question. You might, for instance, be interested in which programs enable the unemployed to get jobs. Usually, the problem is broad enough that you could not hope to address it adequately in a single research study. Consequently, the problem is typically narrowed down to a more specific research question that can be addressed. Social research is theoretical, meaning that much of it is concerned with developing, exploring, or testing the theories or ideas that social researchers have about how the world operates. The research question is often stated in the context of one or more theories that have been advanced to address the problem. For instance, you might have the theory that ongoing support services are needed to assure that the newly employed remain
employed. The research question is the central issue being addressed in the study and is often phrased in the language of theory. For instance, a research question might be:
Is a program of supported employment more effective (than no program at all) at keeping newly employed persons on the job?
The problem with such a question is that it is still too general to be studied directly. Consequently, in most research, an even more specific statement, called a hypothesis, is developed that describes in operational terms exactly what you think will happen in the study (see Section 1-1e, Hypotheses). For instance, the hypothesis for your employment study might be something like the following:
The Metropolitan Supported Employment Program will significantly increase rates of employment after six months for persons who are newly employed (after being out of work for at least 1 year) compared with persons who receive no comparable program.
Notice that this hypothesis is specific enough that a reader can understand quite well what the study is trying to assess. In causal studies, there are at least two major variables of interest: the cause and the effect. Usually the cause is some type of event, program, or treatment. A distinction is made between causes that the researcher can control (such as a program) versus causes that occur naturally or outside the researcher's influence (such as a change in interest rates, or the occurrence of an earthquake). The effect is the outcome that you wish to study. For both the cause and the effect, a distinction is made between the idea of them (the construct) and how they are actually manifested in reality. For instance, when you think about what a program of support services for the newly employed might be, you are thinking of the construct. On the other hand, the real world is not always what you think it is. In research, a distinction is made between your view of an entity (the construct) and the entity as it exists (the operationalization). Ideally, the two should agree. Social research is always conducted in a social context. Researchers ask people questions, observe families interacting, or measure the opinions of people in a city. The units that participate in the project are important components of any research project. Units are directly related to the question of sampling. In most projects, it's impossible to involve all of the people it would be desirable to involve. For instance, in studying a program of support services for the newly employed, you can't possibly include in your study everyone in the world, or even in the country, who is newly employed. Instead, you have to try to obtain a representative sample of such people. When sampling, a distinction is made between the theoretical population of interest and the final sample that is actually included in the study. Usually the term units refers to the people who are sampled and from whom information is gathered, but for some projects the units are organizations, groups, or geographical entities like cities or towns. Sometimes the sampling strategy is multilevel: a number of cities are selected, and within them, families are sampled.
In causal studies, the interest is in the effects of some cause on one or more outcomes. The outcomes are directly related to the research problem; usually the greatest interest is in outcomes that are most reflective of the problem. In the hypothetical supported-employment study, you would probably be most interested in measures of employment: is the person currently employed, or what is his or her rate of absenteeism? Finally, in a causal study, the effects of the cause of interest (for example, the program) are usually compared to other conditions (for example, another program or no program at all). Thus, a key component in a causal study concerns how you decide which units (people) receive the program and which are placed in an alternative condition. This issue is directly related to the research design that you use in the study. One of the central themes in research design is determining how people wind up in or are placed in various programs or treatments that you are comparing. These, then, are the major components in a causal study:
The research problem.
The research question.
The program (cause).
The units.
The outcomes (effect).
The design.
Inductive reasoning works the other way, moving from specific observations to broader generalizations and theories (see Figure 1.7). Informally, this is sometimes called a bottom-up approach. (Please note that it's bottom up and not bottoms up, which is the kind of thing the bartender says to customers when he's trying to close for the night!) In inductive reasoning, you begin with specific observations and measures, begin detecting patterns and regularities, formulate some tentative hypotheses that you can explore, and finally end up developing some general conclusions or theories.
Figure 1.7. A schematic representation of inductive reasoning
Deductive and inductive reasoning correspond to other ideas that have been around for a long time: nomothetic, which denotes laws or rules that pertain to the
general case (nomos in Greek); and idiographic, which refers to laws or rules that relate to individuals. In any event, the point here is that most social research is concerned with the nomothetic (the general case) rather than the individual. Individuals are often studied, but usually there is interest in generalizing to more than just the individual. These two methods of reasoning have a different feel to them when you're conducting research. Inductive reasoning, by its nature, is more open-ended and exploratory, especially at the beginning. Deductive reasoning is narrower in nature and is concerned with testing or confirming hypotheses. Even though a particular study may look like it's purely deductive (for example, an experiment designed to test the hypothesized effects of some treatment on some outcome), most social research involves both inductive and deductive reasoning processes at some time in the project. In fact, it doesn't take a rocket scientist to see that you could assemble the two graphs from Figures 1.6 and 1.7 into a single circular one that continually cycles from theories down to observations and back up again to theories. Even in the most constrained experiment, the researchers might observe patterns in the data that lead them to develop new theories. Let's start this brief discussion of philosophy of science with a simple distinction between epistemology and methodology. The term epistemology comes from the Greek word epistēmē, their term for knowledge. In simple terms, epistemology is the philosophy of knowledge or of how you come to know. Methodology is also concerned with how you come to know, but is much more practical in nature. Methodology is focused on the specific ways, the methods, that you can use to try to understand the world better. Epistemology and methodology are intimately related: the former involves the philosophy of how you come to know the world and the latter involves the practice. When most people in society think about science, they think about someone in a white lab coat working at a lab bench mixing up chemicals. They think of science as boring and cut-and-dried, and they think of the scientist as narrow-minded and esoteric (the ultimate nerd; think of the humorous but nonetheless mad scientist in the Back to the Future movies, for instance). Many of the stereotypes about science come from a period when science was
dominated by a particular philosophy, positivism, that tended to support some of these views. Here, I want to suggest (no matter what the movie industry may think) that science has moved on in its thinking into an era of post-positivism, where many of those stereotypes of the scientist no longer hold up. Let's begin by considering what positivism is. In its broadest sense, positivism is a rejection of metaphysics (I leave it to you to look up that term if you're not familiar with it). Positivism holds that the goal of knowledge is simply to describe the phenomena that are experienced. The purpose of science is simply to stick to what can be observed and measured. Knowledge of anything beyond that, a positivist would hold, is impossible. When I think of positivism (and the related philosophy of logical positivism), I think of the behaviorists in mid-20th-century psychology. These were the mythical rat runners who believed that psychology could study only what could be directly observed and measured. Since emotions, thoughts, and so on can't be directly observed (although it may be possible to measure some of the physical and physiological accompaniments), these were not legitimate topics for a scientific psychology. B. F. Skinner argued that psychology needed to concentrate only on the positive and negative reinforcers of behavior to predict how people will behave; everything else in between (like what the person is thinking) is irrelevant because it can't be measured. In a positivist view of the world, science was seen as the way to get at
truth, to understand the world well enough to predict and control it. The world and the universe were deterministic; they operated by laws of cause and effect that scientists could discern if they applied the unique approach of the scientific method. Science was largely a mechanistic or mechanical affair. Scientists use deductive reasoning to postulate theories that they can test. Based on the results of their studies, they may learn that their theory doesn't fit the facts well and so they need to revise their theory to better predict reality. The positivist believed in empiricism, the idea that observation and measurement were the core of the scientific endeavor. The key approach of the scientific method is the experiment, the attempt to discern natural laws through direct manipulation and observation. Okay, I am exaggerating the positivist position (although you may be amazed at how close to this some of them actually came) to make a point. Things have changed in the typical views of science since the middle part of the 20th century. Probably the most important change has been the shift away from positivism into what is termed post-positivism. By post-positivism, I don't mean a slight adjustment to or revision of the positivist position; post-positivism is a wholesale rejection of the central tenets of positivism. A post-positivist might begin by recognizing that the way scientists think and work and the way you think in your everyday life are not distinctly different. Scientific reasoning and common sense reasoning are essentially the same
process. There is no essential difference between the two, only a difference in degree. Scientists, for example, follow specific procedures to assure that observations are verifiable, accurate, and consistent. In everyday reasoning, you don't always proceed so carefully. (Although, if you think about it, when the stakes are high, even in everyday life you become much more cautious about measurement. Think of the way most responsible parents keep continuous watch over their infants, noticing details that nonparents would never detect.) In a post-positivist view of science, certainty is no longer regarded as attainable. Thus, much contemporary social research is probabilistic, or based on probabilities. The inferences made in social research have probabilities associated with them; they are seldom meant to be considered as covering laws that pertain to all cases. Part of the reason statistics has become so dominant in social research is that it enables the estimation of the probabilities for the situations being studied. One of the most common forms of post-positivism is a philosophy called critical realism. A critical realist believes that there is a reality independent of a person's thinking about it that science can study. (This is in contrast with a subjectivist, who would hold that there is no external reality; each of us is making this all up.) Positivists were also realists. The difference is that the post-positivist critical realist recognizes that all observation is fallible and has error and that all theory is revisable. In other words,
the critical realist is critical of a person's ability to know reality with certainty. Whereas the positivist believed that the goal of science was to uncover the truth, the post-positivist critical realist believes that the goal of science is to hold steadfastly to the goal of getting it right about reality, even though this goal can never be perfectly achieved. Because all measurement is fallible, the post-positivist emphasizes the importance of multiple measures and observations, each of which may possess different types of error, and the need to use triangulation across these multiple error sources to try to get a better bead on what's happening in reality. The post-positivist also believes that all observations are theory-laden and that scientists (and everyone else, for that matter) are inherently biased by their cultural experiences, worldviews, and so on. This is not cause to despair, however. Just because I have my worldview based on my experiences and you have yours doesn't mean that it is impossible to translate each other's experiences or understand each other. That is, post-positivism rejects the relativist idea of the incommensurability of different perspectives, the idea that people can never understand each other because they come from different experiences and cultures. Most post-positivists are constructivists who believe that you construct your view of the world based on your perceptions of it. Because perception and observation are fallible, all constructions must be imperfect. So what is meant by objectivity in a
post-positivist world? Positivists believed that objectivity was a characteristic that resided in the individual scientist. Scientists are responsible for putting aside their biases and beliefs and seeing the world as it really is. Post-positivists reject the idea that any individual can see the world perfectly as it really is. Everyone is biased and all observations are affected (theory-laden). The best hope for achieving objectivity is to triangulate across multiple fallible perspectives. Thus, objectivity is not the characteristic of an individual; it is inherently a social phenomenon. It is what multiple individuals are trying to achieve when they criticize each other's work. Objectivity is never achieved perfectly, but it can be approached. The best way to improve objectivity is to work publicly within the context of a broader contentious community of truth-seekers (including other scientists) who criticize each other's work. The theories that survive such intense scrutiny are a bit like the species that survive in the evolutionary struggle. (This view is sometimes called evolutionary epistemology or the natural selection theory of knowledge and holds that ideas have survival value and that knowledge evolves through a process of variation, selection, and retention.) These theories have adaptive value and are probably as close as the human species can come to being objective and understanding reality. Clearly, all of this stuff is not for the faint of heart. I've seen many a graduate student get lost in the maze of philosophical assumptions that contemporary philosophers of
science argue about. Don't think that I believe this is not important stuff; but, in the end, I tend to turn pragmatist on these matters. Philosophers have been debating these issues for thousands of years, and there is every reason to believe that they will continue to debate them for thousands of years more. Practicing researchers should check in on this debate from time to time. (Perhaps every hundred years or so would be about right.) Researchers should think about the assumptions they make about the world when they conduct research; but in the meantime, they can't wait for the philosophers to settle the matter. After all, they do have their own work to do.
Validity refers to the conclusions researchers reach about the quality of different parts of their research methodology. Validity is typically subdivided into four types. Each type addresses a specific methodological question. To understand the types of validity, you have to know something about how researchers investigate a research question. Because all four validity types are really only operative when studying causal questions, I will use a causal study to set the context. Figure 1.8 shows that two realms are involved in research. The first, on the top, is the land of theory. It is what goes on inside your head. It is where you keep your theories about how the world operates. The second, on the bottom, is the land of observations. It is the real world into which you translate your ideas: your programs, treatments, measures, and observations. When you conduct research, you are continually flitting back and forth between these two realms, between what you think about the world and what is going on in it. When you are investigating a cause-effect relationship, you have a theory (implicit or otherwise) of what the cause is (the cause construct). For instance, if you are testing a new educational program, you have an idea of what it would look like ideally. Similarly, on the effect side, you have an idea of what you are ideally trying to affect and measure (the effect construct). But each of these (the cause and the effect) has to be translated into real things, into a program or treatment and a measure or observational method. The term operationalization is used to describe the act of translating a construct into its manifestation. In effect, you take your idea and describe it as a series of operations or procedures. Now, instead of it being only an idea in your mind, it becomes a public entity that others can look at and examine for themselves. It is one thing, for instance, for you to say that you would like to measure self-esteem (a construct). But when you show a ten-item paper-and-pencil self-esteem measure that you developed for that purpose, others can look at it and understand more clearly what you intend by the term self-esteem.
Figure 1.8. The major realms and components of research
Now, back to explaining the four validity types. They build on one another, with two of them (conclusion and internal) referring to the land of observation on the bottom of Figure 1.8, one of them (construct) emphasizing the linkages between the bottom and the top, and the last (external) being primarily concerned about the range of the theory on the top. Imagine that you want to examine whether use of a World Wide Web virtual classroom improves student understanding of course material. Assume that you took these two constructs, the cause construct (the Web site) and the effect construct (understanding), and operationalized them, turned them into realities by constructing the Web site and a measure of knowledge of the course material. Here are the four validity types and the question each addresses:
Conclusion Validity: In this study, is there a relationship between the two variables? In the context of the example, the question might be worded: in this study, is there a relationship between the Web site and knowledge of course material? There are several conclusions or inferences you might draw to answer such a question. You could, for example, conclude that there is a relationship. You might conclude that there is a positive relationship. You might infer that there is no relationship. You can assess the conclusion validity of each of these conclusions or inferences.

Internal Validity: Assuming that there is a relationship in this study, is the relationship a causal one? Just because you find that use of the Web site and knowledge are correlated, you can't necessarily assume that Web site use causes the knowledge. Both could, for example, be caused by the same factor. For instance, it may be that wealthier students, who have greater resources, would be more likely to have access to a Web site and would excel on objective tests. When you want to make a claim that your program or treatment caused the outcomes in your study, you can consider the internal validity of your causal claim.

Construct Validity: Assuming that there is a causal relationship in this study, can you claim that the program reflected your construct of the program well and that your measure reflected well your idea of the construct of the measure? In simpler terms, did you implement the program you intended to implement, and did you measure the outcome you wanted to measure? In yet other terms, did you operationalize well the ideas of the cause and the effect? When your research is over, you would like to be able to conclude that you did a credible job of operationalizing your constructs; you can assess the construct validity of this conclusion.

External Validity: Assuming that there is a causal relationship in this study between the constructs of the cause and the effect, can you generalize this effect to other persons, places, or times? You are likely to make some claims that your research findings have implications for other groups and individuals in other settings and at other times. When you do, you can examine the external validity of these claims.
Notice how the question that each validity type addresses presupposes an affirmative answer to the previous one. This is what I mean when I say that the validity types build on one another. Figure 1.9 shows the idea of cumulativeness as a staircase, along with the key question for each validity type.
Figure 1.9. The validity staircase, showing the major question for each type of validity
For any inference or conclusion, there are always possible threats to validity, reasons the conclusion or inference might be wrong. Ideally, you try to reduce the plausibility of the most likely threats to validity, thereby leaving as most plausible the conclusion reached in the study. For instance, imagine a study examining whether there is a relationship between the amount of training in a specific
technology and subsequent rates of use of that technology. Because the interest is in a relationship, it is considered an issue of conclusion validity. Assume that the study is completed and no significant correlation between amount of training and adoption rates is found. On this basis, it is concluded that there is no relationship between the two. How could this conclusion be wrong; that is, what are the threats to validity? For one, it's possible that there isn't sufficient statistical power to detect a relationship even if it exists. Perhaps the sample size is too small or the measure of amount of training is unreliable. Or maybe the assumptions of the correlational test are violated given the variables used. Perhaps there were random irrelevancies in the study setting or random heterogeneity in the respondents that increased the variability in the data and made it harder to see the relationship of interest. The inference that there is no relationship will be stronger (have greater conclusion validity) if you can show that these alternative explanations are not credible. The distributions might be examined to see whether they conform with the assumptions of the statistical test, or analyses conducted to determine whether there is sufficient statistical power. The theory of validity and the many lists of specific threats provide a useful scheme for assessing the quality of research conclusions. The theory is general in scope and applicability, well-articulated in its philosophical suppositions, and virtually impossible to explain adequately in a few minutes. As a framework for judging the quality of evaluations, it is indispensable and well worth understanding.
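The statistical-power threat can be made concrete with a small simulation sketch (parameters invented): even when a modest true correlation exists, a small sample will often fail to detect it, inviting a mistaken no-relationship conclusion.

```python
# Simulate many small studies in which a true correlation of 0.3 exists,
# and count how often a significance test at alpha = 0.05 detects it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_r, n, alpha, trials = 0.3, 20, 0.05, 2_000

detections = 0
for _ in range(trials):
    x = rng.normal(size=n)
    y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
    if stats.pearsonr(x, y)[1] < alpha:  # p-value below alpha
        detections += 1

print(f"estimated power: {detections / trials:.2f}")  # well below 0.80
```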
This is a time of profound change in the understanding of the ethics of applied social research. From the time immediately after World War II until the early 1990s, there was a gradually developing consensus about the key ethical principles that should underlie the research endeavor. Two marker events stand out (among many others) as symbolic of this consensus. The Nuremberg War Crimes Trial following World War II brought to public view the ways German scientists had used captive human subjects in often gruesome experiments. In the 1950s and 1960s, the Tuskegee Syphilis Study involved the withholding of known effective treatment for syphilis from African-American participants who were infected. Events like these forced the reexamination of ethical standards and the gradual development of a consensus that potential human subjects needed to be protected from being used as guinea pigs in scientific research. By the 1990s, the dynamics of the situation had changed. Cancer patients and persons with acquired immunodeficiency syndrome (AIDS) fought publicly with the medical research establishment about the length of time needed to get approval for and complete research into potential cures for fatal diseases. In many cases, it is the ethical
assumptions of the previous thirty years that drive this go-slow mentality. According to previous thinking, it is better to risk denying treatment for a while until there is enough confidence in a treatment, than risk harming innocent people (as in the Nuremberg and Tuskegee events). Recently, however, people threatened with fatal illness have been saying to the research establishment that they want to be test subjects, even under experimental conditions of considerable risk. Several vocal and articulate patient groups who wanted to be experimented on came up against an ethical review system designed to protect them from being the subjects of experiments! Although the past few years in the ethics of research have been tumultuous ones, a new consensus is beginning to evolve that involves the stakeholder groups most affected by a problem participating more actively in the formulation of guidelines for research. Although, at present, it's not entirely clear what the new consensus will be, it is almost certain that it will not fall at either extreme: protecting against human experimentation at all costs versus allowing anyone who is willing to be the subject of an experiment.
As in every other aspect of research, the area of ethics has its own vocabulary. In this section, I present some of the most important language regarding ethics in research. The principle of voluntary participation requires that people not be coerced into participating in research. This is especially relevant where researchers had previously relied on captive audiences for their subjects: prisons, universities, and places like that. Closely related to the notion of voluntary participation is the requirement of informed consent. Essentially, this means that prospective research participants must be fully informed about the procedures and risks involved in research and must give their consent to participate. Ethical standards also require that researchers not put participants in a situation where they might be at risk of harm as a result of their participation. Harm can be defined as both physical and psychological.
Two standards are applied to help protect the privacy of research participants. Almost all research guarantees the participants confidentiality; they are assured that identifying information will not be made available to anyone who is not directly involved in the study. The stricter standard is the principle of anonymity, which essentially means that the participant will remain anonymous throughout the study, even to the researchers themselves. Clearly, the anonymity standard is a stronger guarantee of privacy, but it is sometimes difficult to accomplish, especially in situations where participants have to be measured at multiple time points (for example, in a pre-post study). Increasingly, researchers have had to deal with the ethical issue of a person's right to service. Good research practice often requires the use of a no-treatment control group, a group of participants who do not get the treatment or program that is being studied. But when that treatment or program may have beneficial effects, persons assigned to the no-treatment control may feel their rights to equal access to services are being curtailed. Even when clear ethical standards and principles exist, at times the need to do accurate research runs up against the rights of potential participants. No set of standards can possibly anticipate every ethical circumstance. Furthermore, there needs to be a procedure to assure that researchers will consider all relevant ethical issues in formulating research plans. To address such needs, most institutions and organizations have formulated an Institutional Review Board (IRB), a panel of persons who review grant proposals with respect to ethical implications and decide whether additional actions need to be taken to assure the safety and rights of participants. By reviewing proposals for research, IRBs also help protect the organization and the researcher against the potential legal implications of neglecting to address important ethical issues of participants.
1-4: Conceptualizing
One of the most difficult aspects of research, and one of the least discussed, is how to develop the idea for the research project in the first place. In training students, most faculty members simply assume that if students read enough of the research in an area of interest, they will somehow magically be able to produce sensible ideas for further research. Now, that may be true. And heaven knows that's the way researchers have been doing this higher education thing for some time now; but it troubles me that they haven't been able to do a better job of helping their students learn how to formulate good research problems. One thing they can do (and some texts at least cover this at a surface level) is give students a better idea of how professional researchers typically generate research ideas. Some of this is introduced in the discussion of problem formulation that follows. But maybe researchers can do even better than that. Why can't they turn some of their expertise in developing methods into methods that students and researchers can use to help them formulate ideas for research? I've been working on that area intensively for over a decade now, and I came up with a structured approach that groups can use to map out their ideas on any topic. This approach, called concept mapping (see Section 1-4b, Concept Mapping), can be used by research teams to help them clarify and map out the key research issues in an area, to help them operationalize the programs or interventions or the outcome measures for their study. The concept-mapping method isn't the only method around that might help researchers formulate good research problems and projects. Virtually any method that's used to help individuals and groups think more effectively would probably be useful in research formulation; but concept mapping is a good example of a structured approach and will introduce you to the idea of conceptualizing research in a more formalized way.
Where Research Topics Come From
So how do researchers come up with the idea for a research project? Probably one of the most common sources of research ideas is the experience of practical problems in the field. Many researchers are directly engaged in social, health,
or human service program implementation and come up with their ideas based on what they see happening around them. Others aren't directly involved in service contexts but work with (or survey) people to learn what needs to be better understood. Many of these ideas would strike an outsider as silly or worse. For instance, in health services areas, there is great interest in the problem of back injuries among nursing staff. It's not necessarily the thing that comes first to mind when you think about the health care field; but if you reflect on it for a minute longer, it should be obvious that nurses and nursing staff do an awful lot of lifting while performing their jobs. They lift and push heavy equipment, and they lift and push heavy patients! If 5 or 10 of every 100 nursing staff strain their backs in a given year, the costs are enormous, and that's pretty much what's happening. Even minor injuries can result in increased absenteeism. Major ones can result in lost jobs and expensive medical bills. The nursing industry figures this problem costs tens of millions of dollars annually in increased health care. In addition, the health-care industry has developed a number of approaches, many of them educational, to try to reduce the scope and cost of the problem. So, even though it might seem silly at first, many of these practical problems that arise in practice can lead to extensive research efforts. Another source of research ideas is the literature in your specific field. Certainly, many researchers get ideas for research by reading the literature and thinking of ways to extend or refine previous research. Another type of literature that acts as a source of good research ideas is the Requests For Proposals (RFPs) published by government agencies and some companies. These RFPs describe some problem that the agency would like researchers to address; they are virtually handing the researcher an idea. Typically, the RFP describes the problem that needs addressing, the contexts in which it operates, the approach the agency would like you to take in investigating the problem, and the amount it would be willing to pay for such research. Clearly, there's nothing like potential research funding to get researchers to focus on a particular research topic. Finally, let's not forget that many researchers simply think up their research topic on their own. Of course, no one lives in a vacuum, so you would expect that the ideas you come up with on your own are influenced by your background, culture, education, and experiences.
Feasibility
Soon after you get an idea for a study, reality begins to kick in and you begin to think about whether the study is feasible at all. Several major considerations come into play, and many of them involve making trade-offs between rigor and practicality. Performing a scientific study may force you to do things you wouldn't do normally. You might want to ask everyone who used an agency in the past year to fill out your evaluation survey, only to find that there were thousands of people and it would be prohibitively expensive. Or, you might want to conduct an in-depth interview on your subject of interest, only to learn that the typical participant in your study won't willingly give the hour that your interview requires. If you had unlimited resources and unbridled control over the circumstances, you would always be able to do the best quality research; but those ideal circumstances seldom exist, and researchers are almost always forced to look for the best trade-offs they can find to get the rigor they desire. When you are determining a project's feasibility, you almost always need to bear in mind several practical considerations. First, you have to think about how long the research will take to accomplish. Second, you have to question whether any important ethical constraints require consideration. Third, you must determine whether you can acquire the cooperation needed to take the project to its successful conclusion. And finally, you must determine the degree to which the costs will be manageable. Failure to consider any of these factors can mean disaster later.
The Literature Review
One of the most important early steps in a research project is conducting the literature review. This is also one of the most humbling experiences you're likely to have. Why? Because you're likely to find out that just about any worthwhile idea you have has been thought of before, at least to some degree. I frequently have students who come to me complaining that they couldn't find anything in the literature related to their topic. And virtually every time, I was able to show them that this was true only because they had looked only for articles that matched their research topic exactly. A literature review is designed to identify related research and to set the current research project within a conceptual and theoretical context. When looked at that way, almost no topic is so new or unique that you can't locate relevant and informative related
research. Here are some tips for conducting the literature review. First, concentrate your efforts on the scientific literature. Try to determine which are the most credible research journals in your topical area and start with those. Put the greatest emphasis on research journals that use a blind or juried review system. In a blind or juried review, authors submit potential articles to a journal editor, who solicits several reviewers who agree to give a critical review of the paper. The paper is sent to these reviewers with no identification of the author so that there will be no personal bias (either for or against the author). Based on the reviewers' recommendations, the editor can accept the article, reject it, or recommend that the author revise and resubmit it. Articles in journals with blind review processes are likely to have a fairly high level of credibility. Second, do the review early in the research process. You are likely to learn a lot in the literature review that will help you determine what the necessary trade-offs are. After all, previous researchers also had to face trade-off decisions. What should you look for in the literature review? First, you might be able to find a study that is quite similar to the one you are thinking of doing. Since all credible research studies have to review the literature themselves, you can check their literature reviews to get a quick start on your own. Second, prior research will help ensure that you include all of the major relevant constructs in your study. You may find that other similar studies routinely look at an outcome that you might not have included. Your study would not be judged credible if it ignored a major construct. Third, the literature review will help you find and select appropriate measurement instruments. You will readily see which measurement instruments other researchers have used in contexts similar to yours. Finally, the literature review will help you anticipate common problems in your research context. You can use the prior experiences of others to avoid common traps and pitfalls.
1-4b: Concept Mapping
Social scientists have developed a number of methods and processes that might help you formulate a research project. I would include among these at least the following: brainstorming, brainwriting, nominal group
techniques, focus groups, affinity mapping, Delphi techniques, facet theory, and qualitative text analysis. Here, I'll show you a method that I have developed, called concept mapping (Kane and Trochim, 2007), which is especially useful for research problem formulation and illustrates some of the advantages of applying social-science methods to conceptualizing research problems. Concept mapping is a general method that can be used to help any individual or group describe ideas about some topic in pictorial form. Several methods currently go by names such as concept mapping, mental mapping, or concept webbing. All of them are similar in that they result in a picture of someone's ideas, but the kind of concept mapping I want to describe here is different in a number of important ways. First, it is primarily a group process, so it is especially well suited for situations where teams or groups of researchers have to work together. The other methods work primarily with individuals. Second, it uses a structured, facilitated approach: a trained facilitator follows specific steps in helping a group articulate its ideas and understand them more clearly. Third, the core of concept mapping consists of several state-of-the-art multivariate statistical methods that analyze the input from all of the individuals and yield an aggregate group product. Finally, the method requires the use of specialized computer programs that can handle the data from this type of process and accomplish the correct analysis and mapping procedures. Although concept mapping is a general method, it is particularly useful for helping social researchers and research teams develop and detail ideas for research. It is especially valuable when researchers want to involve relevant stakeholder groups in the act of creating the research project. Although concept mapping is used for many purposes (strategic planning, product development, market analysis, decision making, measurement development), I concentrate here on its potential for helping researchers formulate their projects. So what is concept mapping? Essentially, concept mapping is a structured process, focused on a topic or construct of interest, involving input from one or more participants, that produces an interpretable pictorial view (concept map) of their ideas and concepts and how these are interrelated. Concept mapping helps people to think more effectively as a group without losing their individuality. It helps groups capture complex ideas without trivializing them or losing detail (see Figure 1.10).
Figure 1.10. The steps in the concept-mapping process
A concept-mapping process involves six steps that can take place in a single day or can be spread out over weeks or months, depending on the situation. The process can be accomplished with everyone sitting around a table in the same room or with the participants distributed across the world using the Internet. The steps are as follows:
Preparation: Step one accomplishes three things. First, the facilitator of the mapping process works with the initiator(s) (those who requested the process initially) to identify who the participants will be. A mapping process can have hundreds or even thousands of stakeholders participating, although usually a relatively small group of between ten and twenty stakeholders is involved. Second, the initiator works with the stakeholders to develop the focus for the project. For instance, the group might decide to focus on defining a program or treatment, or it might choose to map all of the expected outcomes. Finally, the group decides on an appropriate schedule for the mapping.
Generation: The stakeholders develop a large set of statements that address the focus. For instance, they might generate statements describing all of the specific activities that will constitute a specific social program, or statements describing specific outcomes that could result from participating in a program. A variety of methods can be used to accomplish this, including traditional brainstorming, brainwriting, nominal group techniques, focus groups, qualitative text analysis, and so on. The group can generate hundreds of statements in a concept-mapping project, but in most situations around 100
statements is the practical limit in terms of what participants can reasonably handle.
Structuring: The participants do two things during structuring. First, each participant sorts the statements into piles of similar statements. They often do this by sorting a deck of cards that has one statement on each card, but they can also do it directly on a computer by dragging the statements into piles that they create. They can have as few or as many piles as they want. Each participant names each pile with a short descriptive label. Then each participant rates each of the statements on some scale. Usually the statements are rated on a 1 to 5 scale for their relative importance, where a 1 means the statement is relatively unimportant compared to all the rest, a 3 means that it is moderately important, and a 5 means that it is extremely important.
Representation: This is where the analysis is done: the sort and rating input is taken and represented in map form. Two major statistical analyses are used (see the code sketch following these steps). The first, multidimensional scaling, takes the sort data across all participants and develops the basic map, where each statement is a point on the map and statements that were piled together by more people are closer to each other. The second analysis, cluster analysis, takes the output of the multidimensional scaling (the point map) and partitions the map into groups of statements or ideas, called clusters. If the statements describe program activities, the clusters show how to group them into logical sets of activities. If the statements are specific outcomes, the clusters might be viewed as outcome constructs or concepts.
Interpretation: The facilitator works with the stakeholder group to help it develop its own labels and interpretations for the various maps.
Utilization: The stakeholders use the maps to help address the original focus. On the program side, stakeholders use the maps as a visual framework for operationalizing the program; on the outcome side, the maps can be used as the basis for developing measures and displaying results.
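To make the Representation step concrete, here is a minimal sketch in Python of the two analyses, under some simplifying assumptions: each participant's sort is stored as a list of piles of statement indices, the ratings as a participants-by-statements array, and scikit-learn's MDS and agglomerative clustering stand in for the specific multivariate routines that dedicated concept-mapping software would use. All of the data shown are hypothetical.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering

# Hypothetical sort data: 3 participants sorting 6 statements into piles
# of similar statements (each inner list is one participant's piles).
sorts = [
    [[0, 1, 2], [3, 4, 5]],
    [[0, 1], [2, 3], [4, 5]],
    [[0, 1, 2, 3], [4, 5]],
]
n_statements = 6

# Co-occurrence similarity: entry (i, j) counts how many participants
# placed statements i and j in the same pile.
similarity = np.zeros((n_statements, n_statements))
for piles in sorts:
    for pile in piles:
        for i in pile:
            for j in pile:
                similarity[i, j] += 1

# Convert similarities to dissimilarities for scaling.
dissimilarity = len(sorts) - similarity

# Multidimensional scaling: each statement becomes a point on a 2-D map;
# statements piled together by more people land closer to each other.
points = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Cluster analysis on the point map partitions the statements into clusters.
cluster_ids = AgglomerativeClustering(n_clusters=2).fit_predict(points)

# Hypothetical 1-to-5 importance ratings (participants x statements),
# averaged per statement as in the Structuring step.
ratings = np.array([[5, 4, 3, 2, 4, 5],
                    [4, 5, 3, 3, 5, 4],
                    [5, 5, 2, 2, 4, 5]])
mean_importance = ratings.mean(axis=0)

for i in range(n_statements):
    print(f"statement {i}: cluster {cluster_ids[i]}, "
          f"importance {mean_importance[i]:.1f}")
```

Dedicated concept-mapping programs add a great deal more (support for labeling clusters, overlaying ratings on the map, and diagnostics), but the core sort-to-map logic the Representation step describes is essentially this.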
The concept-mapping process described here is a structured approach to conceptualizing. However, even researchers who do not appear to be following a structured approach are likely to be using similar steps informally. For instance, all researchers probably go through an internal exercise that is analogous to the brainstorming step described previously. They may not actually brainstorm and write their ideas down, but they probably do something like that informally. After they've generated their ideas, they structure or organize them in some way. For each step in the formalized concept-mapping process, you can probably think of
analogous ways that researchers accomplish the same task, even if they don't follow such formal approaches. More formalized methods like concept mapping have benefits over the typical informal approach. For instance, with concept mapping there is an objective record of what was done in each step, so researchers can be both more public and more accountable. A structured process also opens up new possibilities. With concept mapping, it is possible to imagine more effective conceptualization by multiple researchers and the involvement of other stakeholder groups such as program developers, funders, and clients.
Another method of conceptualizing research uses graphics to express the basic idea of what is supposed to happen in a program. This graphic representation can then be used to guide researchers in identifying indicators or measures of the components of the graphic model. The idea is straightforward: identify the components of the program in terms of specific inputs (what goes into the program), relationships (how the components should be related to each other), and outputs (what should happen as a result of the program). A nice basic example of such a model was produced by the Kellogg Foundation in its Logic Model Development Guide (2004). This example is shown in Figure 1.11. Notice that the steps are shown in a left-to-right order that reflects the step-by-step logic used in planning the program, with the arrows suggesting a causal sequence of influences. A second example is shown in Figure 1.12. This model was developed to illustrate how a program could identify the needs of children and families, then provide certain kinds of services, resulting in positive outcomes for program participants. In this model, both the program components and some indicators (measures) are shown. The figure illustrates something very general about social research, too: we must think at two levels, the abstract idea and the observable indicator of the idea. Of course, life and research are not this simple. If you would like to read more about logic models, particularly with regard to translating the idea into real life, I recommend Renger and Hurley's paper on the topic (2006).
Figure 1.11. A basic logic model
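To show how the pieces of such a model hang together, here is a minimal, hypothetical sketch in Python representing a logic model's inputs and outputs, each paired with observable indicators (the two levels of thinking just described), plus the causal arrows. All of the names are invented for illustration; none are taken from the Kellogg guide or the figures.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Component:
    idea: str                                            # the abstract idea
    indicators: List[str] = field(default_factory=list)  # observable measures

@dataclass
class LogicModel:
    inputs: List[Component]               # what goes into the program
    outputs: List[Component]              # what should happen as a result
    relationships: List[Tuple[str, str]]  # arrows: (cause idea, effect idea)

# A hypothetical family-services program sketched as a logic model.
model = LogicModel(
    inputs=[Component("family needs assessment",
                      ["intake interview scores"])],
    outputs=[Component("improved child well-being",
                       ["standardized well-being scale"])],
    relationships=[("family needs assessment", "improved child well-being")],
)

# The left-to-right ordering in Figure 1.11 corresponds to these arrows.
for cause, effect in model.relationships:
    print(f"{cause} -> {effect}")
```

Keeping the abstract idea and its indicators together in one structure is the point: the model forces you to say, for every component, how you would actually observe it.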