- Let Us Count the Ways: Strategies for Doing Qualitative Research
Last time around, dear friends, we laid the groundwork for the qualitative approach. Specifically, we talked about the general purpose of qualitative research, and how it differs from the more traditional quantitative procedures.
Now let's get a bit more specific! We'll start by trying to match qualitative procedures to more general "families" of research questions. Along with this design match-up, we'll identify some ways of collecting qualitative data. (Bet there are more of these than you probably thought possible!)
Hopefully, this will start you thinking along the lines of your own specific research question or problem. What do you want to investigate qualitatively, and what specific procedure(s) will you choose to apply to do so? In this way, we'll get started on your eventual "end product" of this course: a mini-qualitative investigation.
So -- let's get the bird's-eye view!
The "Master Plan": Matching the Approach (Design and Analysis) to the Need to Know (Problem Statement, Research Question)
As we've discussed in our Intro to Research and Research Design classes, the heart and soul of any investigation is the research question. This is what we desire to investigate using the systematic, scientific procedures which constitute "research." It therefore drives all the other parts of the research process: design methodology and procedures, sampling, instrumentation, data analysis, and so forth.
From the first module, you've probably gotten the idea that qualitative procedures are especially useful and appropriate when we desire some "in-depth" knowledge about a situation, setting, subject/group of subjects, and the like. That is because of their potential to give us the "rich, thick description" (a wonderful phrase popularized by Clifford Geertz to characterize qualitative inquiry!) to answer such questions. This would imply exploratory, descriptive (what is/what are/identify) types of studies.
And you'd be right! Ah, but this is only the beginning!
The following chart, developed by Catherine Marshall and Gretchen Rossman, is the best I've found to date for "matching up" research questions, designs and qualitative data collection procedures. What I especially like about it is that:
- It keeps you focused on the overall goal: answering your question(s).
- And then, finding the best way (design and procedures) to go about doing that!
Matching Research Questions with Strategy
| Purpose of the study | Typical research questions | Strategies & data collection techniques |
| --- | --- | --- |
| Exploratory: to investigate little-understood phenomena; to identify/discover important variables; to generate hypotheses for further research | What is happening in this social program? What are the salient themes, patterns, & categories of meaning for participants? How are these patterns linked with one another (to provide the overall picture of what I'm trying to understand)? | In-depth interviewing; elite interviewing |
| Explanatory: to explain the forces causing the phenomenon in question; to identify plausible causal networks shaping the phenomenon | What events, beliefs, attitudes, & policies are shaping this phenomenon? How do these forces interact to result in this phenomenon? | Field study; case study; in-depth interviewing; document analysis; unobtrusive measures |
| Descriptive: to document the phenomenon of interest | What are the salient structures & processes occurring with regard to my phenomenon of interest? | Field study; case study; in-depth interviewing; document analysis; unobtrusive measures |
| Predictive: to predict the outcome of the phenomenon; to forecast the events & behaviors resulting from the phenomenon | What will occur as a result of this phenomenon? Who will be affected? In what ways? | Survey (large sample); content analysis |
Adapted from Marshall & Rossman (1989).
That's quite a menu of choices, isn't it?! If your initial reaction is anything like mine, you've just realized the horizons for applying qualitative procedures are much wider than you might have thought at first glance! Let's briefly focus on some of the possibilities...
Design Families/Related Types of Questions
(Columns 1 and 2)
If you glance down at the main headings of the first column, in particular, you'll realize that we've now expanded and refined our definition of "what is/what are" descriptive types of designs. If you ask me, those four families fall along the following continuum:
Mary D's "Take" on the Four Design Families in
Table 1, above:
A Continuum in Terms of
Extent of Researcher's Knowledge about the Phenomenon
1. Exploratory research can indeed be characterized as "mucking around!" Something like this: "I don't yet know what it is, exactly, that I want to investigate (for my dissertation or otherwise). But I do get a general feel that there's a problem in this school (clinic, vocational training program, etc.). So -- to help me crystallize what 'it' (the cause; phenomenon) is, I'm going to spend some time soaking up this setting. I'll go to the school, talk with some people, maybe get permission to sit in on some classes and meetings, hang out in the teachers' lounge, and the like. At the end of this 'mucking around,' I'll take my notes and see if I can close in on what the key phenomenon (source of the problem, causal factor) is."
Now ... just as a reminder, the designs listed in Column #1 and elaborated above represent variations on "how much we already know about our target phenomenon." However, Intro to Research and Research Design friends will recognize that this list is by no means exhaustive!
So -- you might think of it as 'sort of backwards:' the end result of this one would hopefully be the beginning of a problem statement, research question, etc.! You go in 'a blank slate' but hope to key in on at least the major variable or variables ("it seems to be interpersonal communication!" or "patterns of decision-making") that in turn will be more intensively investigated in some/all of the following design families.
For that reason, this one would be a good "pre-dissertation study" to do if you don't know for sure what you want to do! You can in essence do a purely exploratory study in the hopes of ending up with your eventual dissertation research topic or question!
2/3. Explanatory/descriptive research: Unlike Marshall and Rossman, I really don't see a distinction here -- I think of them as synonymous. Here, we know the key variables we want to study and then proceed to gather data to identify more information about them.
In other words, Intro to Research and Research Design fans: this would be our classic "descriptive" (what is/what are) research question:
"What are the key determinants of school climate?"
"What are the perceived barriers to teacher retention on reservation schools?"
"What are the major motivating factors perceived by peer tutors?"
Think of it this way: you now know the basic constructs, phenomena or broad variables that you want to focus on -- perhaps as a result of first doing a purely exploratory study, as in #1, above. But perhaps due to the lack of much prior similar work on this variable -- or even conflicting findings in the literature -- we really don't yet "know for sure" very much about this construct. Thus, we must of necessity phrase our investigative questions very broadly -- e.g., "what is/what are" -- to begin to understand it.
Open the link to the Descriptive Research lesson from Introduction to Research for a quick review.
4. Predictive research: This one is the other extreme of "prior knowing." Not only has the phenomenon been extensively investigated, but prior theoretical work is extensive and conclusive enough that we know lots about what leads to (precedes, causes) this phenomenon; how it's impacted by possible mediating variables (e.g., does it vary by gender? political party affiliation? level of self-esteem?); and what it, in turn, "causes" or leads to. In other words, for our Research Design fans, we can sketch out quite a "web" of antecedent, mediating/moderating and outcome variables when we start to draw what we already know about this phenomenon. To put it still another way: we essentially already have a more-or-less usable "theory" to understand and explain it. Examples from the social-behavioral sciences and education would include, for instance, motivation, satisfaction, and acquisition of learning.
But we also learned that a theory is never absolutely proven per se! It may be shown to "continue to hold up" across time, similar subjects, settings and situations. We may also find ourselves adding to the body of knowledge by introducing still more variables to "sketch into the theoretical web" of our phenomenon: more antecedent causes; more potential mediating/moderating variables; and yet more outcomes. Finally, all it takes is a single disconfirming case to call the theory (or part of it, anyway) into question. This is why, even at the right-hand extreme of knowledge/prior work regarding a phenomenon, we may continue to choose to investigate it via scholarly study.
Another way of saying that is: qualitative research procedures may also be successfully applied to other types of designs: e.g., correlational, ex post facto and even (as we'll see in the following subsection) as adjuncts to experimental designs! ***: THIS WOULD BE A GOOD TIME TO REVIEW THE RELATED "DESIGN FAMILIES" FROM OUR INTRO TO RESEARCH WEBSITE!!!!
To refresh your memory on the details, open the link below to the Introduction to Research lesson on experimental research.
- Case study. Here, too, I take some slight exception with Marshall and Rossman. Specifically, I see "field studies" (#2) as a special case (oops, bad pun!) of "case studies."
If you love ambiguity, you'll love this definition of a "case" proposed by Sharan Merriam! "A case is whatever you define it to be!"
The sky's the limit:
- a setting/location (this is the usual, sort-of implied meaning): a school, a clinic, a town;
- a program - method of teaching or counseling, for instance;
- a policy - such as how teachers are recruited and rewarded;
- an individual subject or group of subjects - bringing to mind the classic "single-subject" study design for our Intro to Research module fans!
- even a historical time period and events in it, for instance.
Depending on how you define your "case," you'd be doing a "single-site" case study if you are investigating only one instance of it. This could be visiting a single school, gathering documentation on a single recruitment policy, and the like. "Multisite," on the other hand, means that you'll be building in a sort of "replication cross-check" by looking at more than one instance of that case. In essence, you'll be cross-checking to see if the key determinants of "positive school climate" are general ones -- or if they could be confounded with some aspects of that one particular school setting. You would replicate your study in one or more additional schools and then line up your findings at each case site to see which determinants appeared to hold constant across the various sites. These would be, certainly not "proof," but more convincing evidence that these seem to be "real, underlying, general" determinants of your target phenomenon of "positive school climate."
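To make that multisite "cross-check" logic concrete, here is a minimal sketch in Python (the site names and determinants are invented purely for illustration) of lining up findings across case sites and keeping only the determinants that held constant at every site:

```python
# Hypothetical findings: determinants of "positive school climate"
# identified at each of three case-study sites.
findings_by_site = {
    "School A": {"principal support", "small class size", "parent involvement"},
    "School B": {"principal support", "parent involvement", "new building"},
    "School C": {"principal support", "parent involvement", "veteran staff"},
}

# Determinants appearing at every site are the more convincing "general"
# candidates; the rest may be confounded with one particular setting.
common = set.intersection(*findings_by_site.values())
print(sorted(common))  # ['parent involvement', 'principal support']
```

This is bookkeeping rather than proof: the intersection only tells you which candidate determinants survived replication, not why they operate.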
- Field study. As indicated in #1, above, I really consider this to be a special type of case study. For some qualitative researchers, this one involves your spending some actual time and energy at the field site. (As we'll soon see, this may not necessarily involve extensive, decades-long immersion! Or I would hope not, if this happens to be your dissertation - hard to believe at times, I know, but there is life after the "Big D," so I wouldn't plan to spend the rest of my life on it if I were you! Leave that for the megamillions post-doctoral grant -- hey, you're worth it ... !)
- Ethnography. A still more specialized type of field study. This one brings to mind the image of Margaret Mead, spending decades totally immersed in exotic locales, getting a truly close-up in-depth understanding of other cultures and places. In general, it involves an extensive immersion in the case, setting, phenomenon, etc., by the researcher(s).
*** MISTAKEN IDENTITY AND AN ERROR STILL MADE IN SOME PUBLISHED WRITINGS:
"All qualitative research" = "Ethnographic research!"
Not so at all! Ethnography is but one very specific subtype of qualitative research -- and not necessarily the most popular or applicable for a given area of investigation. Yet, early on in the "legitimization" of qualitative procedures as "real research," a decade or so ago, some writers and researchers began to use the two terms synonymously. As we'll see throughout this course, it's entirely possible to do a "legitimate qualitative investigation" by analyzing spoken interview comments from a single, small-group, one-hour interview session. Ethnography? Hardly. Rich and revealing qualitative research? You bet! So -- when it comes to calling all qualitative research "ethnography" -- please don't!
Open the link below to look at a wide range of examples of qualitative research methods, including ethnography:
Off the track ... but a valuable resource. Open the link below to go to a site that offers a variety of free software that can help you with ethnographic field work.
Computer Programs used in Ethnography (Stanford)
- History. Likewise, I consider this to be a special case of the "case study," #1, above. This would involve a reconstruction of a past event, life chronology, or process. In doing so, you would of necessity need to draw on a variety of sources - archival documents, and perhaps interviews with descendants or associates of the target person or groups.
- (True) Experimental/Quasi-Experimental studies. This one may have surprised you! As you'll recall from our Intro to Research module on families of designs, the experimental family is characterized by:
- Tightly controlled conditions. Generally this involves a manipulation or treatment of one such condition, holding others constant.
- Randomization. We learned that random selection of subjects and their random assignment to treatment or control group(s) help ensure a "good mix" on "all other possible contaminating variables" -- e.g., those that we can't as readily control in our experimental study. Both of the preceding features are the hallmarks of a true experimental study. In a quasi-experiment, subjects sort of "come intact" in pre-existing groups -- e.g., classrooms, clinics. The best we can do is "take the groups as is," randomly select those groups (as opposed to individuals within them) and then flip a coin or use some other procedure to randomly assign our treatment. We run the risk that those intact groups may not represent as good and thorough a "mix" on all other factors which might contaminate our study.
- The ability to make "cause/effect" statements. This is due to the preceding control and randomization. As a result, we can come out of our study with stronger statements as to what causes what, what is the effect of what, and the like.
Now -- we have traditionally come to associate such "strong, tight" studies as the experimental family with quantitative data collection and analysis procedures. Indeed, as we've learned in statistics, the preceding randomization, when combined with large numbers of subjects, observations, etc., allows for the application of the inferential hypothesis-testing statistical procedures such as chi square, t-tests, ANOVA, and so forth. The associated p-value of the test statistic therefore tells us "how confidently" we can generalize the results we have found to other such similar populations, settings, etc. This would be the "fox-type generalization" of tight cause-effect relationships that we first examined back in Module # 1.
Here is another great Web-based compendium on various qualitative research methods:
Ways to Conduct Qualitative Research
But -- addition of some qualitative observation procedures to the above 'tight' experiment would add (hedgehog-type) rich, close-up contextual detail as to how and why those 'tight' cause-effect relationships operate! Something as relatively simple as asking a small number of study subjects to journal their thoughts, feelings and perceptions would do it -- and provide some interesting "human-perspective" insight. Or it could consist of the experimenter and his/her associates doing some observation of subjects' behaviors during the experimental trial and recording these perceptions in the form of words (as opposed to, or in addition to, strict "counts" of occurrences of behaviors).
An Overview of
Some Basic Qualitative Data Collection Techniques
We'll sort of bird-walk through these, since there are lots of them! -- probably more than you might have realized at first! -- and then return for a more in-depth look at certain ones later!
- - -
- Observation/participant observation. As mentioned directly above, in our related discussion, this one involves identifying the phenomenon/phenomena you want to look at and then recording it through some sort of observation procedures.
More about participant observation
Usually, it will involve coming up with an operational definition -- in essence, "breaking down" that larger, broader construct/phenomenon -- so that you can readily "tell when it happened" (e.g., observe/record it). For instance, if you are trying to study "teacher reinforcement" in the classroom, you'll need to determine beforehand what constitutes teacher reinforcement. Verbally spoken responses? Praise? The other extreme of negative reinforcement: criticism, censure, etc.? And how about the nonverbal manifestations (an important qualitative area of information to which we'll return very shortly when we talk about proxemics and kinesics!)? Facial expressions of approval/disapproval? And so on and so forth. It's important to do so, so that "you'll know it when you see it" and can record it!
The mode of recording may also vary. You may come up with pre-printed checklists or recording sheets where behaviors are tallied. At the other extreme, you may opt for relatively unstructured "journaling" in the observers' own words of their perceptions of the behaviors, spoken words, body language, etc. This last one is especially useful when you and/or your observers already possess training and experience regarding the phenomenon/phenomena to be recorded. That is: you can be 'trusted' to 'know it when you see it.' Relatively more inexperienced observers may need some training and practice. You may even have them "decode" a mock videotape to help ensure that "they're getting the right objects of observation" and recording them correctly. Research fans will recognize this built-in safety check as a reliability (consistency, accuracy) issue.
And speaking of reliability, you may have multiple observers for the same reason: as a cross-check. This is akin to the cross-check of the multisite case study that we spoke of earlier. It helps guard against bias, mistakes in recording, and the like.
Participant observation is a special case of the more general observation. In the former, the researcher has some "tie-in" him/herself to the study setting, case, situation, etc., and is also functioning as an observer. As you can imagine, extra care must be taken in this situation to help ensure validity and reliability -- e.g., protection against researcher bias. Having multiple observers would be one such way.
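Since the multiple-observer cross-check is at heart a consistency question, it can be quantified. Here is a minimal, hypothetical sketch in Python of computing simple percent agreement between two observers' interval codes (the data are made up, and real studies often prefer a chance-corrected index such as Cohen's kappa):

```python
# Two observers independently coded the same 10 classroom intervals for
# presence ("R") or absence ("-") of teacher reinforcement (made-up data).
observer_1 = ["R", "-", "R", "R", "-", "R", "-", "-", "R", "R"]
observer_2 = ["R", "-", "R", "-", "-", "R", "-", "R", "R", "R"]

# Count intervals where the two codes match, then express as a percentage.
agreements = sum(a == b for a, b in zip(observer_1, observer_2))
percent_agreement = 100 * agreements / len(observer_1)
print(f"{percent_agreement:.0f}% agreement")  # prints "80% agreement"
```

A low figure would be the signal to retrain observers (e.g., with the mock videotape exercise) before trusting the recorded data.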
- Interviewing. There are many variations of this one too:
- Amount of planned structure in the interview questions;
- Number of interview subjects;
- > We'll be spending some extra time on a special case of the group interview - one where we get together a 'small' group (4-12 subjects) and spend about an hour with them asking them for their opinions, attitudes, emotions, etc., in a relatively relaxed setting. This is called a focus group interview and is a particularly rich and revealing way to observe the influences of the group on one's attitudes, opinion shifts, emotional reactions, and the like!
- Number of interview sessions;
- Selection/eligibility criteria of interview subjects;
- > As we'll be discussing, a random sample would be most counter-productive for interviews! We may actually need and want to "pre-target" -- deliberately seek out for interviews -- the "unusual," especially experienced, or particularly articulate subjects! Another way of saying this is: the "middle" or "average" may not be all that interesting or revealing in terms of what we want to learn from those interviews. An elite interview, despite the rather "loaded" nature of that word, is nothing more than what we learned about in Intro to Research as a purposive sample -- subjects who are pre-targeted because they possess some desired selection criteria (e.g., experience, extremity of opinions on some issue, articulateness).
- Questionnaires/surveys. These can include open-ended, fill-in-the-blank items of greater or lesser structure. The former might be, "What specific inservice courses have you participated in to date during the 1993-4 academic year?" Less structure might be exemplified by, "Please respond to the following in your own words: I chose to apply for the Career Ladder Teacher Incentive Program because ... " and even less structure was evident in our first assignment, where I simply invited you to "tell your story!" and let you pick the context, setting, circumstances, events, etc., that you chose to share!
By the way, adding some open-ended items such as these to a more traditionally scaled, quantifiable survey -- such as one with Likert-scaled attitudinal items and/or "check-off" questions on demographic background variables -- is a good way to make the survey "multimethod" in nature. This is because you'd be using that "single" data collection vehicle (survey instrument) to collect your data (responses) in more than one form: quantitative and qualitative! You could ask the same general questions in both forms and then compare the two alternative forms of responses to see if they "converged," or agreed, regarding the phenomenon that you are trying to measure via the survey (e.g., attitudes towards school climate). If they do, you can have greater assurance that there's something real being captured/measured regarding climate, rather than some "fluke" or artifact of the measurement process itself.
For more information on survey research methodology and issues look at the links below.
Types of Survey Research
- Proxemics and kinesics. If the major strength of qualitative research is to provide "rich, thick description" and detail, these are two primary vehicles for doing so.
Proxemics involves documentation and analysis of spatial relationships. For instance, the delivery of a particular teaching method is so much more than the method itself, isn't it?! The arrangement of chairs in the classroom (rows? circles?), proximity of chairs to one another, windows (How many? What size and shape? A distraction, or a welcome source of light?), and any other furnishings or contents of that classroom all ultimately impact the alleged "treatment." Proxemics would involve careful and detailed documentation of all such specifics of the situation. By doing so, the reader is taken into the situation, as it were. He/she "walks the walk" as did the participants, and thereby gains added insight into the way in which the context (surroundings) impacted the process (the "thing" being studied).
Open the link below for an interesting history and reflection on proxemics.
The Body Language of Proxemics
Kinesics, on the other hand, is what we know as body language. This too would be similarly documented and analyzed. How close together did participants stand? What was the nature of their facial expressions? Did they touch one another (or not) during the interaction? And similar nonverbal indicators. Barby Carlile, a former CEE/NAU doctoral candidate, did a fascinating qualitative dissertation on meanings and differences of gestures in different cultures. She did focus group interviews to gather her data and even asked permission to videotape so that she could show some of these gestures in abbreviated form during the defense.
More about Kinesics and Proxemics
- Ethnography. What primarily distinguishes observation from ethnography is the relative duration and extent of immersion of the researcher in the situation being studied. There are no hard-and-fast rules; it is a judgment call. For example, if you arrange to visit a fixed number of class sessions of limited duration - say, five classes that last for 1 1/2 hours apiece - this might be more in the realm of observation. However, if instead you arranged for a transfer to that particular school for, say, a semester, then you'd be immersing yourself more intensively and directly in the culture of that school. You wouldn't be "quickly in and out" as in the case of the five class sessions. Rather, you'd take coffee breaks with the teachers; participate in administrative meetings; probably interact with a wider variety of "key stakeholders," such as staff members, parents, and administrators; and generally have a broader, in-depth view of the culture and setting of that school as well as whatever particular phenomenon you were primarily choosing to observe. This latter situation, I think, would put it in the realm of an ethnography.
In fact, some qualitative researchers have coined the phrase street ethnography for those special cases that concentrate on a particular type of site. William Whyte's classic Street Corner Society, detailing his immersive experience with a Boston street-corner gang in the late 1930s, is a famous case in point that you may have read about. In the broader sense, the "street" in "street ethnography" could again be virtually any site: a drug-treatment clinic; a vocational counseling center; a Native American reservation school dormitory; etc.
An example of a street ethnography
An Urban Ethnography of
- Archival document analysis/content analysis. This one involves collection of existing data and "reanalyzing" them for your own purposes, which may differ from the purposes for which they were initially created. Sources may include photographs, memos, minutes of meetings, diaries, manuscripts, songs, poetry, folklore, and existing questionnaires/surveys. This method of data collection is sometimes associated with historical research studies, although it can conceivably apply to any type of problem statement and study design, whether historical or not. Content analysis is a rather general term to indicate how you will extract your own themes, response patterns, and trends from these existing documents. You might, for instance, be looking for the number and types of mentions of familial themes in journal entries and written poetry.
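As a toy illustration of the counting core of content analysis, here is a short Python sketch that tallies mentions of familial theme words across a few invented journal entries. (Real content analysis is far more nuanced -- synonyms, context, and human judgment all matter -- so treat this only as the mechanical starting point.)

```python
import re
from collections import Counter

# Hypothetical "existing documents": three journal entries.
entries = [
    "My mother and father visited; family dinners matter to me.",
    "Wrote a poem about my sister today.",
    "Exams all week. No time for anything else.",
]

# Theme dictionary: familial terms chosen in advance by the researcher.
family_terms = {"mother", "father", "sister", "brother", "family"}

# Tally every occurrence of a theme word across all entries.
counts = Counter()
for entry in entries:
    for word in re.findall(r"[a-z]+", entry.lower()):
        if word in family_terms:
            counts[word] += 1

print(dict(counts))  # each familial term found, with its frequency
```

You could then compare such tallies across subjects, time periods, or document types -- which is where the interpretive work begins.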
- Unobtrusive measures. This one is a goldmine for the creatively inclined researcher! In great and joyful contrast to the rigidly fixed, controlled recorded observations of, say, quantitative experimental designs, this one says - "The sky's the limit! No matter how wild it seems -- if it's informative, then it's data!"
Example: you are doing observations of a day care center playroom. Perhaps as part of recording the kinesics (# 4, above), you notice that a pile of toys in one particular corner has more dust accumulated on the surface than the others. Indicator - > These toys must not be "as played with," as say, that frayed teddy bear in the other corner whose stuffing has about been hugged out of him!
Example: you are doing similar type 'walk-around observations' in a museum. You notice that the carpet in front of a particular group of paintings is especially scuffed. Indicator - > That must be a particularly popular or interesting painting, to attract comparatively so much more traffic than the others!
See what I mean by "anything goes"?! A good qualitative researcher, in line with the "rich, thick contextual description," will leave virtually no stone unturned in his/her pursuit of any data that might shed light on the phenomenon in question! It helps to think as broadly and creatively as possible in terms of these possibilities. P.S. By the way, in the preceding two scenarios, both observations and conclusions could have (and probably should have) been made with no actual subjects (preschoolers, museum visitors) around! And yet look what valuable clues we got about them from the "physical traces" or "artifacts" that they left behind! This, then, is the general idea behind such unobtrusive measures, and how they differ from the more general class of observations (#1, above), where you would be expected to come into direct contact with the actual subjects of the study.
We've now accomplished an overview of the strategies and procedures for doing qualitative research! As mentioned previously, we'll be revisiting many of these in greater depth in the future. Next time, though, we'll take a similar overview with regard to compiling and reporting qualitative data. What are some basic ways to summarize these data in order to answer our research questions? See you back in our corner of cyberspace soon!!!
EDR 725, Qualitative Research Procedures:
Assignment # 2:
"Let Us Count the Ways:"
Strategies for Doing Qualitative Research
- For the two articles or other pieces of qualitative research that you located for the second part of Assignment # 1, please identify the particular qualitative design(s) and data collection procedure(s) that each article employed. Also please comment on the appropriateness of these choices.
- And -- it's time to start thinking about the "mini-qualitative study" that you will carry out as a part of this course!
For Part 2, I'd like you to indicate what the topic of your investigation will be. Some suggestions along these lines:
- For those who have a general idea of what they will do for their dissertation research, this could be a small subset of your overall problem statement. For instance, you could identify a subproblem that you would like to investigate using qualitative procedures, and then collect and analyze those data for your course requirement. Thus, the output of this course becomes a key part of the input of your own dissertation!
- Another idea that has appealed to "live and in person" qualitative students I've taught is this. How about using this course to qualitatively pilot test your instrumentation for your eventual dissertation? This could be a quantitative Likert-scaled survey, for instance; or an eventual interview question protocol/guide. Probably the easiest, quickest and most convenient way to pilot test any such instrumentation -- regardless of whether the instrumentation is qualitative or quantitative in nature -- is to obtain input from a "group of expert (pilot) judges." These judges would review your instrument draft and make comments and suggestions as to its content, directions, clarity, order of questions, etc., etc. You would then compile and use their aggregate feedback to make revisions to your instrumentation. This would go into your eventual Chapter Three Pilot Test Procedures and Results under your Instrumentation subheading!
Hope the preceding gives you some ideas and possible alternative paths to choosing your eventual project for this course! Also, please keep in mind, if you would like to work with a partner or group on a joint paper or project, that is A-OK! Just let me know who you're choosing to work with so that I'll have it for my records!
Continued Happy Trails in Cyberspace to You!!!
Once you have finished you should:
Go on to Assignment 2
Go back to Research and Design Strategies
E-mail M. Dereshiwsky
Call M. Dereshiwsky
at (520) 523-1892
Copyright © 1999
Northern Arizona University
ALL RIGHTS RESERVED