"Making
Sense of It All:" A Second Look at Qualitative Data Compilation & Analysis
Once again, dear qualitative superstar scholars: we are "back to the future!" In our Module #2 & the related supplementary materials, we began to take a look at some ways to summarize and report qualitative data. Let us now revisit the issue & continue to 'creatively brainstorm' regarding how to summarize and display the qualitative findings and results! This material comes to us courtesy of superstar evaluation guru, Michael Quinn Patton, and his excellent handbook entitled, How to Use Qualitative Methods in Evaluation (1987, Sage Publications, Inc.). As others have found, Patton's perspective on qualitative research is so immensely valuable that it actually transcends evaluation studies per se. Simply put: it's a goldmine for all sorts of qualitative research needs!
We'll start off by taking a look at some 'procedural, housekeeping issues:' namely, how to get started and organize the masses of 'rich, thick' qualitative data in order to be ready to condense these data into a concise analysis report. Next, we'll share some clues on what Miles and Huberman and others have referred to as "making meaning:" that is, applying creative qualitative-type labels to the summarized data that we collect. Then, in a hypothesis-testing sense, we will discuss the issue of "negative cases," or "disconfirming evidence:" how to really subject our findings and conclusions to the test of 'competing evidence,' to see if our results manage to hold up.
I would urge you, at this point, to review and have handy the EDR 725 Qualitative Research Module #2 materials. These 'advanced issues' are intended to build upon the two basic frameworks or paths for deciding how to summarize and report qualitative data. To remind you of these methods, they are:
1. The summary narrative method (condensed write-up of key findings & results, along with judicious 'sprinkling-in' of key illustrative quotes and similar 'raw qualitative data'); and
2. The matrix/table shell method (broadly speaking, any sort of 'vivid visual display' of the qualitative findings and results - not just the tables themselves, but any innovative type of graphic, chart, etc. to 'tell the story to the reader at a glance').
Ah, the process of sequentially closing in on your desired target outcome - of condensing and making meaning out of all of your qualitative data...! We are up to this challenge...!
I. "Getting Your House in Order:" Organizing Your Qualitative Data for Analysis
Patton & other qualitative researchers once again warn us to expect an avalanche here! Qualitative data, by their very nature, tend to be voluminous. I have often scripted focus group interview sessions on my PowerBook notebook computer. It is typical for me to come out of a single one-hour session with 15 pages or so of word-processed notes!
Patton suggests the following plan of action to help get organized & get ready for the analysis:
- Ensure that it is 'all there' and that you have made backup copies. This is certainly important with any kind of database. But it may be especially critical in the case of qualitative data, for if you are planning to do the coding, annotation and summary by hand, you will probably want several 'working copies' of the interview transcripts, written open-ended survey responses and such. Even if you are planning to use one of the newly emerging computerized packages for your qualitative data analysis, the same rule of thumb applies as with any kind of computer document: back up your files faithfully!
In the case of hand-coding, as briefly mentioned in Module #2, different qualitative researchers have different preferences in this regard. You might choose any or all of the following methods to "reread, distill, and summarize:"
- Marking up working copies of interview transcripts with different colored translucent markers, as well as your own comments in the margins, to reflect the general categories into which you are placing the responses;
- Making notes of the comments that fall under these categories on different-colored stacks of index cards - with each 'rubber-banded' colored stack representing a different concept or category;
- The "butcher-paper-on-the-basement-wall" technique that Denise Ehlerman used, as described in Lesson Packet #3, with each sheet representing a different concept or category;
- Literally scissoring up a working copy of the original transcripts, quotes, etc., and then pile-sorting the pieces and rubber-banding them to reflect the 'summary and sorting' of the raw qualitative data under general 'umbrella-type' concepts or categories.
Again, no single procedure can be said to be superior to any other! The same, by the way, is true of the relatively recent proliferation of computerized packages for summarizing and sorting qualitative data. But - this may itself be a plus! Just as we are all different & unique as individuals, it's kind of neat that qualitative data compilation and reporting allows us to take our own preferences and working styles into account this way!!!
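For those of you leaning toward the computerized route, here is a minimal sketch (in Python - with hypothetical category names and made-up excerpts, NOT data from any actual study) of how the index-card 'pile-sorting' idea might translate into an electronic equivalent:

    # A minimal sketch of 'electronic pile-sorting': each category acts like a
    # rubber-banded stack of index cards. All category names and excerpts below
    # are hypothetical placeholders.

    piles = {}  # category label -> list of (source, excerpt) 'cards'

    def sort_excerpt(category, source, excerpt):
        """File one raw excerpt under an umbrella category, starting a new pile if needed."""
        piles.setdefault(category, []).append((source, excerpt))

    sort_excerpt("Teacher morale", "Interview 1, p. 3",
                 "We just don't feel heard at the district level...")
    sort_excerpt("Teacher morale", "Interview 4, p. 7",
                 "Nobody asks us before the decisions come down.")
    sort_excerpt("Parent involvement", "Focus group 2, p. 12",
                 "The open house was packed this year!")

    # 'Spread the piles out on the table': review each category and its cards.
    for category, cards in piles.items():
        print(f"\n=== {category} ({len(cards)} excerpts) ===")
        for source, excerpt in cards:
            print(f"  [{source}] {excerpt}")

Notice how the design mirrors the physical method: the category label is the rubber band, and each (source, excerpt) pair is one 'index card' you can always trace back to its original transcript.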
SIDE NOTE at this point: As with all computer hardware and software, changes happen virtually overnight. Just keeping up with 'what's out there right now' can be a challenge! But there are two outstanding sources where you can at least, in my opinion, get a quick 'grounding' in the various types of qualitative software and how the packages compare:
- I'd like to share a valuable related resource with you entitled, Computer Programs for Qualitative Data Analysis: A Software Sourcebook, by Eben A. Weitzman and Matthew B. Miles, 1995, Sage Publications, Inc.
- If you don't need or want quite this much detail, i.e., an entire book, there is an excellent comparative chart and brief discussion of the types (i.e., Macintosh-, Windows-, and MS-DOS-based) and relative 'tradeoffs' of the more popular and established qualitative software packages. This readable material appears in an appendix entitled, "Choosing Computer Programs for Qualitative Data Analysis," in Qualitative Data Analysis: An Expanded Sourcebook, by Matthew B. Miles and A. Michael Huberman, 2nd ed., 1994, Sage Publications, Inc.
- Next, Patton and others (most notably, case study authority Robert K. Yin) recommend that you also go on to establish and write a case record. This is an artifact that will contain, literally, the essential elements or 'traces' of the steps of your study. You might picture it, for instance, as a series of file folders, organized by, say, type of document; time period; topic area; etc. (with the same to hold true if you are computerizing it - i.e., on the Macintosh, you can literally make 'electronic folders' containing this information!). It might have things in it such as your letters of permission to enter a site & do your study; training manuals; pre- and post-pilot-test copies of instrumentation such as interview protocols; memos and minutes of meetings pertinent to your study; and working copies and final drafts of each successive stage of your summarization, down to the eventual final qualitative data analysis report.
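To make that 'series of file folders' image concrete, here is a minimal sketch (Python again, with a hypothetical study name and folder labels of my own choosing - adjust them to suit your own study) of setting up such an 'electronic case record':

    # A minimal sketch of one possible 'electronic case record.' The folder
    # names below are hypothetical; they simply echo the kinds of artifacts
    # suggested above (permissions, instruments, memos/minutes, drafts...).
    from pathlib import Path

    case_record = Path("my_study_case_record")
    subfolders = [
        "01_permission_letters_site_entry",
        "02_training_manuals",
        "03_instruments_pre_pilot",
        "04_instruments_post_pilot",
        "05_memos_and_meeting_minutes",
        "06_working_drafts",
        "07_final_report",
    ]

    for name in subfolders:
        (case_record / name).mkdir(parents=True, exist_ok=True)

    print("Case record established under:", case_record.resolve())

Numbering the folders keeps the 'traces' of your study in chronological, audit-trail order - which pays off directly in the two benefits below.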
There are at least two distinct benefits of establishing such a case record:
- You will greatly expedite your own compilation and organization of your data analysis by having such key information organized and handy - you just never know when you might need to pull up and re-examine original source documents, early drafts, etc.; and
- In terms of providing a documented, step-by-step archive of what you did, what source documents you used, and how you gathered and summarized the data, you are greatly enhancing reliability in a 'replicability' sense. Again, as we stated at the outset of the course, the concept of 'replication' is interpreted a bit more loosely in the case of situation- and context-specific qualitative research than it is for the more tightly controlled, quantifiable, experimental-type studies. Nonetheless, it is important, in a 'scholarly community, research life cycle' sense, for you to have as complete and verifiable a record as possible of what you did and how (with, of course, anonymity of respondents protected), in case another researcher who is intending to replicate/extend your study should request to see your 'road map.'
Now that your raw data are compiled and organized, the real challenge is at hand: "making meaning" - summarizing concisely yet completely in order to (as with all research, as we've been saying since our EDR 610, Intro to Research, days!) "answer our research questions!" We've talked about ways to do this in Module #2 and also in our related discussion above regarding "annotating, placing data under umbrella concepts or labels." This is probably the most generally established way to 'condense and make meaning.' In that regard, I want to revisit the issue of 'label-making' and introduce two general families of such labels that Patton explicitly and colorfully defines for us!
II. "Telling It Like It Is:" The Concepts of "Indigenous Typologies" vs. "Analyst-Constructed Typologies"
In your quest to identify the over-arching "umbrella-type" concepts, labels, or categories under which the raw qualitative data seem to fit, there are two general ways of going about building this "framework of labels!" I like Patton's two-way division because it really seems to make practical sense! Here is my own 'take' on how they are interpreted:
- Indigenous typologies - are categories that come directly out of the jargon or everyday popular talk of the field in which you are researching! This is akin to the famous notion of "talking their talk," with "they" being the subjects in the field. That is why the framework, or typology, is "indigenous:" it already exists for you.
For this one, then, your job is to learn the underlying jargon of that field - whether it comes from an existing theoretical/conceptual model, or instead from popular practice, as labelled and identified by the subjects (i.e., target population and sample) themselves. You then use the existing terms and labels to try to compile your qualitative data, and you make the judgment as to whether the 'fit' is good between your data and that existing framework.
As just one example, we have a number of EDL doctoral candidates researching various applications and extensions of the "12 Skill Dimensions of Leadership." They would attempt to sort their data according to the 12 dimensions, using the particular terminology - in whole or in part - of each one.
As another example, suppose that you are replicating and extending a study along the lines of the Packard/Dereshiwsky model of the "Factors of Organizational Effectiveness." This one consists of a rocket-type model, in honor of Christa McAuliffe. It has two 'layers:' support and focus factors, both of which, we feel, are essential to consider in terms of 'reaching one's ultimate goal(s),' the tip or pinnacle of the rocket. For a school, this would certainly include "academic achievement." We identify and label two specific families of particular support and focus factors - factors that are sometimes overlooked but can actually 'make or break' our chances of reaching those ultimate 'pinnacle' goals.
Again, in each of the above scenarios, as well as with indigenous typologies generally, your goal in starting out is to in some way "road test" an existing model, or a way that subjects 'naturally talk about' the phenomenon you are studying. Thus, your first pass at summarizing your qualitative data is an exercise in "talking their talk." To be more specific, you use the parts of the model, labels, terminology, etc., that they use, and you see if, indeed, your qualitative raw data do seem to 'fit' under those pre-existing labels.
*** If they do not, this is important information too! Namely, it is akin to 'rejecting a hypothesis:' you appear to be finding that 'the model needs modifying!' Maybe some categories don't quite fit anymore. Or you appear to be discovering other, emergent, new labels or categories under which a good proportion of your qualitative raw data appears to fit far better than under the pre-imposed model or framework of terminology.
*** This is why, even with a pre-existing framework of concepts or terms - an indigenous typology - it is still vitally important to stay open to new ways of compiling your data! To new categories, or even entire new models or frameworks, in a classic 'grounded theory,' emergent sense! Be sure that you use the existing indigenous typology as a guide or starting point - and then see if it appears to 'fit' with your data - or not!!!! Either way, you do have a set of findings!!! (A minimal sketch of this kind of 'fit check' follows below.)
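Here is that promised 'fit check' sketch (Python once more; the indigenous labels and the codings are entirely hypothetical). The point is simply to tally how well your raw excerpts fit a pre-existing typology, while keeping the 'doesn't-fit' residue visible rather than forcing it:

    # A minimal 'fit check': tally coded excerpts against a pre-existing
    # (indigenous) typology. Labels and codings below are hypothetical.
    from collections import Counter

    indigenous_labels = {"Vision", "Communication", "Delegation"}

    # Each excerpt has been tentatively coded; None means 'did not seem to fit.'
    coded_excerpts = [
        ("Interview 1", "Vision"),
        ("Interview 2", "Communication"),
        ("Interview 2", None),
        ("Interview 3", "Delegation"),
        ("Interview 3", None),
        ("Interview 4", "Vision"),
    ]

    tally = Counter(label for _, label in coded_excerpts if label is not None)
    residue = [source for source, label in coded_excerpts if label is None]

    fit_rate = sum(tally.values()) / len(coded_excerpts)
    print(f"Fit under existing typology: {fit_rate:.0%}")
    for label in sorted(indigenous_labels):
        print(f"  {label}: {tally.get(label, 0)} excerpt(s)")
    print("Unclassified (possible emergent categories):", residue)

A low fit percentage, or a bulging 'unclassified' pile, is precisely the signal discussed above that the model may need modifying - or that new, emergent categories are knocking at the door!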
- Analyst-constructed typologies - ah, what a golden opportunity for the creative "True Colors Orange" free spirits and innovative thinkers among you!!! Yes, you: the one who enjoyed writing poetry and creative short stories! Who's got the makings of "The Great American Novel" in his/her 'mental ROM!' Here's your big chance!!! For this one, you, the analyst, try your hand at making up 'vivid, creative, visual labels' under which to compile and report your actual raw qualitative data!
Whether you do this because you find the existing indigenous typology lacking, or because there simply isn't a generally accepted framework - the end result could well be a major, memorable contribution to how we think about this phenomenon - if you get 'colorfully creative' in how you do it and if it appears to 'fit' your data!!!
Michael Quinn Patton provides a neat example from a study originally done in 1977 by Robert L. Wolf and Barbara Tymitz. They were conducting visual observations of visitors to a museum exhibit entitled, "Ice Age Mammals and the Emergence of Man." From documenting and comparing their observational field notes of museum visitors' choices, body language and behavior, Wolf and Tymitz came up with the following creative labels to summarize and characterize the subjects' behaviors:
The Commuter
This is the person who merely uses the hall as a vehicle to get from the entry point to the exit point...
The Nomad
This is a casual visitor - a person who is wandering through the hall, apparently open to becoming interested in something. The Nomad is not really sure why he or she is in the hall and not really sure that s/he is going to find anything interesting in this particular exhibit hall. Occasionally, the Nomad stops, but it does not appear that the nomadic visitor finds any one thing in the hall more interesting than any other thing.
The Cafeteria Type
This is the interested visitor who wants to get interested in something, and so the entire museum and the hall itself is treated as a cafeteria. Thus, the person walks along, hoping to find something of interest, hoping to "put something on his or her tray" and stopping from time to time in the hall. While it appears that there is something in the hall that spontaneously sparks the person's interest, we perceive this visitor has a predilection to becoming interested, and the exhibit provides the many things from which to choose.
The V.I.P. - Very Interested Person
This visitor comes into the hall with some prior interest in the content area. This person may not have come specifically to the hall, but once there, the hall serves to remind the V.I.P.'s that they were, in fact, interested in something in that hall beforehand. The V.I.P. goes through the hall much more carefully, much slower, much more critically - that is, they move from point to point, they stop, they examine aspects of the hall with a greater degree of scrutiny and care. (Wolf & Tymitz, 1977, pp. 10-11; all emphasis - italicized - in original text)
Aren't the preceding labels vivid?! I like to think of them as "narrative metaphors and similes!" They convey, in summary fashion and 'at a glance,' an entire expanded image of the full range of the subjects' behavior, attitudes, choices, etc.!
The Miles and Huberman Qualitative Data Analysis sourcebook also has additional examples of such 'creative label-making' in Chapter 10, "Drawing and Verifying Conclusions."
Perhaps, then, in contrast to "indigenous typologies" and 'road-testing existing models,' as explained above, the most common scenario for "analyst-constructed typologies" is one where no single widely accepted model or framework exists. Thus, the qualitative researcher 'goes with the flow,' soaks in his/her data and lets the creative juices take over regarding how to compile and sort the responses.
But I would again urge even the 'existing/indigenous' researchers to stay open to the fact that the existing model may not fit the current data, in whole or in part. So they, too, may need to switch gears and get 'similarly creative' regarding newly added, better-fitting labels, terms, or even entire models!
So ... now you are in the thick of things as far as not only organizing and compiling, but also beginning to "make meaning of," your raw qualitative data. As you do this, however, a brief comparative analogy to the quantitative side of the fence should serve as a gentle warning!!! Remember the concept of "Type I error," or p-values, in analytic statistics? In a nutshell, this refers to the fact that you can 'do everything right:' i.e., carefully select a random sample from a well-defined population, compile your data without coding error, pick the 'best' statistic(s) and interpret it/them - and still you cannot guarantee "with 100% confidence" that the results you find within that sample will apply for certain to the entire population! Fact is: you have to live with a certain level of risk - i.e., 5% or whatever level you set - that your sample was simply 'flukey' and that, despite your best efforts, the comparable population results would in fact be different! You could, for instance, have randomly drawn "lots of extreme (high/low) scorers" on your particular phenomenon, where in fact, in the population at large, there is more of a 'mix.' As you may recall from your Intro to Statistics class: with analytic statistics you can't guarantee 'how right' you'll be - i.e., 100% certainty from sample to population would be a wonderful goal, but it is simply impractical due to the vagaries/possibility of an 'unlucky sample draw' and associated sampling error! But you can 'bound' 'how wrong' you'll be by pre-selecting that 'risk you can live with' - 5% or 1% most commonly - and then interpreting your calculated test statistic in light of that 'risk of being wrong.'
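If you'd like to see that 5% 'risk of being wrong' in action, here is a small simulation sketch (Python; the population parameters are made up purely for illustration). We repeatedly draw 'perfectly proper' random samples from a population whose true mean really IS 100 - and about 5% of them will still look 'significantly different' at the .05 level:

    # Simulating Type I error: the null hypothesis is TRUE here (mean = 100),
    # so every 'significant' result below is a false alarm from a flukey sample.
    import math
    import random

    random.seed(42)
    true_mean, sigma, n, trials = 100, 15, 30, 10_000
    z_crit = 1.96  # two-tailed critical value at alpha = .05 (sigma known)

    false_alarms = 0
    for _ in range(trials):
        sample = [random.gauss(true_mean, sigma) for _ in range(n)]
        sample_mean = sum(sample) / n
        z = (sample_mean - true_mean) / (sigma / math.sqrt(n))
        if abs(z) > z_crit:  # an 'unlucky draw' that misleads us
            false_alarms += 1

    print(f"Observed Type I error rate: {false_alarms / trials:.3f}")  # close to .050

Notice that no amount of procedural care eliminates those false alarms; pre-selecting alpha merely 'bounds' how often they occur.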
Well - in qualitative research, as indeed in life itself (!!!) - we cannot escape the possibility that 'we may be wrong' in our own interpretation of things. On top of that, in qualitative research we do not have the 'comfort and security' of p-values, Type I errors, etc., to guide us in 'how wrong we may or may not be.' That is: we can't compute a test statistic and then look it up in a table to be able to say, "My calculated value is greater than the critical value, so I can be at least 95% confident that my results will hold up!"
Given this inherent 'fuzziness' and the greater role of human/researcher judgment and interpretation in analyzing our qualitative data, what can we do to help protect against bias and/or error?
One remedy may be summed up in the following popular phrase: "Be ready to play 'Devil's Advocate' with yourself and your findings!" In other words: make certain, in your heart, mind and conscience, that if "contradictory cases or data" exist - in opposition to your own findings and results - you really have made an 'honest attempt' to find them and account for them!!!! And if necessary, modify your original findings and results in light of these "don't-fit-the-pattern" cases! That is the topic of the following discussion!
Copyright © 1999
Northern Arizona University
ALL RIGHTS RESERVED