The Five Dimensions of Participant Observation
As I mentioned in our preview at the outset of this learning module,
I think this section might just shake up some preconceived notions or
images you may have - even subconsciously! - regarding participant observation.
In other words: there are more ways to vary this type of data collection
than you might initially think!
Specifically, here's what I mean. The classic stereotype of such
participant observation is Margaret Mead spending a decade living in
another culture. This is certainly one way to do participant observation.
But as we'll see, it is far from the only way - by the way, did I just
hear a big sigh of relief on the part of some dissertation candidates?
What - you mean you didn't plan to spend 10 years completing
your data collection?!
Seriously, as we learned earlier in this course, such a prolonged,
intensive immersion is also known by the methodological label of ethnographic
research. It is one way, for sure. Yet, as we'll see in the following
diagram from Michael Quinn Patton, it is far from the only
way! Please see Figure 2 below!
Figure 2. Five Dimensions of Participant Observation
(adapted from Michael Quinn Patton, 1986)
Thus, in thinking about these various dimensions of participant observation, we come to recognize that the prolonged, highly immersive, Margaret Mead-style ethnographic studies would constitute:
- The left-hand side of Dimension I; and
- The right-hand side of Dimensions IV and V.
Please note, too, how the right-hand side of Dimension V is ideally suited to purely exploratory, grounded theory work. We may not even have any idea as to what variables or factors we are going in to observe. That is, in fact, our desired outcome: we're hoping that they will 'shake out' as a result of our immersion in the setting.
Also, with regard to the right-hand side of Dimension I, one example might be a focus group interview that is being observed through a one-way mirror from the other side of the room.
Which leads us into a thicket of potential controversy!
It won't surprise you to know that there has been considerable controversy surrounding Dimensions II and III. How much "candor" is desirable? Is too much openness on the part of the researcher likely to pre-bias or 'sensitize' subjects' responses (e.g., the halo effect, the Hawthorne effect, 'we're being watched, so we had better put on a happy face!')? On the other hand, is too much secrecy harmful to these subjects? Issues of trust and credibility are natural concerns. And again, these are especially likely to 'nag at the researcher' if he/she has spent considerable, prolonged, immersive time with these subjects, as opposed to, say, an impersonally mailed open-ended survey instrument. You may have come to the point where you know the names and the faces. You've socialized with them, met their families and friends, and so forth. Are you harmfully betraying their trust by either withholding (covert) or blatantly disguising (overt) information germane to the purpose of your study?
Thorny issues, indeed -- and also not surprisingly, there are no clear-cut guidelines to follow. Gray areas can and do occur, along with the potential of erring in one direction or the other.
However, a couple of procedural safeguards do exist. While, again, these hold no guarantees, they do help to focus your awareness of the key issues:
- Institutional Review Board/Human Subjects Approval Process. Our dissertation-stage
candidates are already aware of this one! IRBs exist at major public
universities. Their mandate is to review all proposals for research to be
conducted under the auspices of the university setting - which
means any faculty member, student, administrator, staff member, etc.,
affiliated with that university must 'clear the IRB' before proceeding
with his/her data collection. The IRB examines and deliberates on the
research proposal from the vantage point of human subjects. Are their
rights adequately protected? Will there be any risk or harm accruing
to them as a result of any phase of this research? In cases where there
might be psychological harm as a result of deception, secrecy, etc.,
the IRB could require the researcher(s) to attend a hearing in which
they are asked additional questions regarding the research. They may
also be given some guidance from the IRB as to how to reduce the risks
to the subjects. I recall one case when I served on the IRB not too
long ago as one of the 2 CEE representatives. The concern over having
a 'secret revealed' and associated distress to the subjects was addressed
in this way. The researcher, a doctoral student, conducted a follow-up
"extensive debriefing/support" focus group interview with the subjects.
This was done in order to probe for any psychological distress they
might have suffered as a result of the 'deception.' In addition, in
the event that such stress did surface, a university professor who is
also a licensed clinical psychologist was present at these debriefing
interviews. His role was to step in as needed at that point, as per
his expert assessment, and assist the subject(s) in resolving any such
psychological stress.
- The IRB is a "must" if you are doing your research as part of the
university setting. There also exists a second, non-binding (except
of course in the moral sense of a professional code of ethics) set
of guidelines. These were established in 1981 by the Joint Committee
on Standards for Educational Evaluation. Among other things, these
professional evaluation standards advise explicit awareness of the rights
of human subjects throughout the research process; sincere respect for
the "dignity and worth" of subjects; and sharing the findings and results
with such subjects after they are compiled (e.g., the "member check"
briefly mentioned in the preceding discussion). This one is akin to
"coming clean with them after the fact," with the opportunity for you
to observe any such signs of distress and do similar debriefings to
those mentioned above if need be. Again, while these are not mandated
in a legal sense, they are similar to "official codes of ethics" developed
and revised by professionals in other fields and treated with commensurate
moral obligation.
- Institutional Review Board for the Protection of Human Subjects at NAU
IV. What to Observe
1. The setting - this one follows the caveats given earlier
regarding detail and descriptive, rather than interpretive,
narrative. Michael Quinn Patton illustrates this very nicely with the
following:
The room was large enough for a three-person couch across
one side, six chairs on the walls next to the couch, and three
chairs by the wall facing the couch, which included the door.
With twenty people in the room, each person had space to fit,
but when everyone was standing there was very little space between
people. Several participants were overheard to say, "This room
is too crowded."
Now: compare the preceding to the following entry: "A crowded room!"
2. The human, social environment - could include, but not
necessarily be limited to, the following:
* Characteristics of the subjects (e.g., gender, ethnicity, approximate
age grouping, style of dress)
* Patterns, frequency, direction of interaction and communication
* Decision-making behaviors - who initiates them, who ultimately makes
the decision, and the type/manner of communication regarding the decision
3. Activities and behaviors - could include, but not necessarily
be limited to, the following:
* Who initiates the activity?
* In what way?
* What were the participants' verbal and non-verbal reactions?
* What happens at each step of the activity?
* Who is involved?
* What is communicated, both verbally and non-verbally?
* What are the 'closure points' or signals that this activity is
about to end?
* Who is present?
* What is communicated regarding such intended closure, both verbally
and non-verbally?
* How is the completion of this particular activity related to other
activities and behaviors which are observed?
4. Informal interactions and unplanned activities - ah, the richest
and most revealing things - in life, too, never mind research! - can
be the 'surprises'! For example, this could refer to "free/unstructured
time" if what you are observing is a planned activity, such as a conference.
What do participants choose to do during this time? With whom? This
is a wonderful opportunity to, as Patton puts it, "stay open to the
data." The researcher might simply mingle with them, as at a coffee
break or refreshment table during a conference break time. Furthermore,
the researcher's involvement could consist of:
- (a) Explicit impromptu questioning/probing: "What did
you think of Speaker X's presentation this morning? How do you think
this presentation fits into the theme of the conference?" etc.
- (b) Simply listening and observing "on the sidelines,"
without such direct intervention as in (a), above.
As Patton astutely points out: "The fact that none of the participants
talk about a session when it is over is data. [More on this 'non-'
type of observation shortly!] The fact that people immediately split
in different directions when a session is over is data. The fact that
people talk about personal interests and share gossip that has nothing
to do with the program is data."
5. The language of program participants - Each field, as
we know, has its own 'jargon,' both formal and informal. It is important
for you, as the participant observer, to be as familiar with such 'jargon'
beforehand as possible. This is so that "when you hear it, you'll know
what they mean by it."
6. Non-verbal communication - we've already discussed this
one in terms of proxemics and kinesics. Body language, facial expressions,
how people choose to arrange themselves in a room for a group session,
customary and accepted ways of greeting one another - all contribute
to your accurately and completely documenting the "true lived experience."
7. Unobtrusive measures - these may be part of thoroughly
documenting the setting, as explained in Point # 1, earlier. We've also
talked about these in terms of the "traces," or physical clues that
are left behind at the scene but which may tell us much about other
variables we are trying to understand. We used the example of a group
of toys with lots of dust gathered on them at a preschool play center.
We also talked about an art exhibit and noticing that the rug is particularly
worn in front of certain paintings or exhibits. Such "archival traces"
may reveal valuable clues as to underlying patterns, behaviors, choices,
etc. even if the subjects themselves are not there when you gather these
observations.
8. Documents - we've briefly discussed these in
earlier lessons, as well. Examples include policy manuals, training
materials, minutes of meetings, memoranda, computerized data files,
etc. These are particularly valuable for checking whether programs and
policies are actually being carried out and working in the field (as per
your on-site observations) as they are intended to work (as per the documents).
9. Observing what does not happen & other surprise findings
- Ah, what a goldmine of information! As pointed out with regard to
Point #4, above, it's the 'surprises' that can be most rich and revealing!
I like to say the following about this point. In a classic, rigidly
controlled experimental design, an unexpected occurrence usually means,
"stop the experiment and start again."
But I ask you, reality-based friends: doesn't life throw us surprises
as a matter of course? And isn't there much to be learned about human
behavior by watching and observing how we and others react to those
"curve balls of life"?
Thus, in qualitative research, we "joyfully keep going" in the
face of the unexpected! Those moments can provide particularly revealing
data about how we deal with surprise events!
An example of the first, or "what does not happen:"
Say that a counseling session for teenage girls with eating disorders
is intended to be "highly participative in nature," at least according
to the training manuals. Aha - but you observe some of these sessions
in action and you consistently notice that the girls do not speak
up, volunteer their own experiences, share their own management of
the disease, etc. Rather, you notice that the counselor seems to be
sort of "top-down" controlling the conversation - doing most of the
talking in the form of directives and suggestions. Noticing and recording
that an intended event (clients freely initiate conversation) did
not happen may be a goldmine discovery to program planners.
They now have information that the program, for whatever reason(s),
is not being implemented as planned. This discovery and mismatch,
in turn, can spur further investigations as to why such intended outcomes
are not occurring.
As an example of the second, surprise occurrences,
I like to share the following hypothetical scenario, drawn from my own
real-life experience as the evaluator of the annual Arizona Leadership Academy.
Suppose that an outdoor volleyball game is planned as one of the events.
It is intended to foster team building, cooperation and similar positive
outcomes. Well, lo and behold, it's monsoon season in Flagstaff. Quite
unexpectedly, the skies open up, the torrential rains start pouring
down, and there goes the volleyball game. Again, in classic experimental
design, this would be a 'failure' and would imply 'start again.' Not
so in qualitative! I, as a participant observer, watch with intense
interest to see how everyone reacts to this. Do they get angry? disappointed?
frustrated? and then what do they decide to do? I may observe that
after an initial 'ain't it awful' gripe session, someone pipes up
with, "Say, I know where I can get my hands on a table tennis set.
How about if we set up the tables indoors in our large conference
meeting room and have a game?" And then another person chimes in,
"OK, that's a great idea!" Before you know it, they've divided up
who will go get the equipment, who will set it up and where, how they
will choose up teams, etc.! Lost opportunity? Hardly! Did we not learn
as much, if not more, about our intended target variables of "team
building, decision making, and cooperation" by continuing to observe
and record the reactions in the face of this surprise development?!
Such is the beauty of qualitative research: many such golden opportunities
to observe and learn!
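For those of you who like to keep your field notes electronically, here is one purely illustrative sketch, written in Python, of how the nine categories just listed might be organized into a single structured record per site visit. The class and field names are hypothetical choices of mine, not a prescribed template; the point is simply to remind yourself to cover each category while you are on site.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ObservationRecord:
        """One illustrative way to structure a single site-visit entry.
        All field names here are hypothetical; adapt them to your own study."""
        setting: str = ""                # concrete, descriptive detail of the physical setting
        social_environment: str = ""     # who is present; interaction and decision-making patterns
        activities: List[str] = field(default_factory=list)            # what happened, step by step
        informal_interactions: List[str] = field(default_factory=list) # unplanned events and 'surprises'
        participant_language: List[str] = field(default_factory=list)  # jargon heard and direct quotations
        nonverbal: str = ""              # proxemics, kinesics, body language
        unobtrusive_traces: str = ""     # physical 'traces' left behind (dust, worn rugs, etc.)
        documents: List[str] = field(default_factory=list)             # manuals, minutes, memoranda consulted
        non_occurrences: str = ""        # what did NOT happen that was expected to happen

    # An entry for the hypothetical rained-out volleyball scenario described above:
    entry = ObservationRecord(
        setting="Large conference meeting room; tables pushed against the walls",
        informal_interactions=["Rainstorm cancels volleyball; group improvises an indoor table tennis game"],
        non_occurrences="The planned outdoor volleyball game did not take place",
    )
    print(entry)

Again, this is only one possible arrangement; a paper notebook works every bit as well, so long as each category gets its moment of deliberate attention.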
VI. Field Notes
You've already got the bottom line on this one:
rich, thick, descriptive (as opposed to oversummarized interpretive)
detail!
Let me close with yet another vivid example from
Patton:
Too General and Interpretive:
The new client was uneasy waiting for her intake interview.
Richer, More Concrete Descriptive Detail:
At first the client sat very stiffly on
the chair next to the receptionist's desk. She (client) picked
up a magazine and let the pages flutter through her fingers
very quickly without really looking at any of the pages. She
then set the magazine down, looked at her watch, tugged at
her skirt, and picked up the magazine again. This time she
didn't look at the magazine. She set it back down, took out
a cigarette, and began smoking. She would watch the receptionist
out of the corner of her eye, and then look back down at the
magazine, and back up at the two or three other people waiting
in the room. Her eyes moved from the people to the magazine
to the cigarette to the people to the magazine in rapid succession.
She avoided eye contact with anyone in the room. When her
name was finally called by the receptionist, she jumped up
like she was startled.
Here, then, is Patton's summary of the key guidelines for doing successful
field work:
1. Be descriptive, rather than interpretive, in taking field
notes.
2. Gather a variety of information from different perspectives.
3. Cross-validate and triangulate by gathering different kinds
of data -- i.e., a multimethod study -- such as observations balanced
with interviews, content analysis of program documents, recordings
and photographs.
4. Use quotations; represent program participants in their own
words to the greatest extent possible. Try to capture participants'
perceptions of their experiences, emotions, etc., in their own words.
5. Select key informants wisely and use them carefully. Draw
upon the wisdom of their "cultural insider" informed perspectives,
but keep in mind that those perspectives could be limited.
6. Be aware of and sensitive to the different stages of fieldwork:
a) Build trust and rapport at the entry stage. Remember that
you, as the evaluator and observer, are also being evaluated.
b) Stay alert and disciplined during the more routine middle-phase
of fieldwork.
c) Focus on pulling together a useful synthesis as your fieldwork
draws to a close.
d) Be disciplined and conscientious in taking detailed field
notes at all stages of fieldwork.
7. Be involved in experiencing the program, situation, subjects,
setting, etc., as fully as possible while maintaining an analytical
perspective grounded in the purpose of your fieldwork: i.e.,
'scientific research designed to answer a research question.'
8. Clearly separate description from interpretation and judgment.
9. Provide formative feedback to the subjects as part of the
verification process of your fieldwork - e.g., "member checks." Time
that feedback carefully. Observe its impact.
10. Include in your field notes your own observations, experiences,
thoughts and feelings. Several prominent qualitative researchers have
referred to this as "memoing." These, too, are field data.
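If it helps to see guidelines 1, 4, 8 and 10 working together, here is one more small, purely hypothetical sketch in Python. The entry names are my own invention; the only point being illustrated is that the descriptive record, the participants' own words, and your interpretive 'memos' each live in a clearly separate, clearly labeled place.

    # A minimal, hypothetical field-note entry kept as a plain dictionary.
    note = {
        "when": "Day 1, 09:40, clinic waiting room",
        "description": (
            "Client sat stiffly next to the receptionist's desk, flipped "
            "magazine pages without reading them, checked her watch, and "
            "avoided eye contact with the other people waiting."
        ),                   # guideline 1: descriptive, not interpretive
        "quotations": [],    # guideline 4: participants' own words, verbatim
        "memo": "My impression: she seemed uneasy about the intake interview.",  # guidelines 8 and 10
    }

    for label, content in note.items():
        print(f"{label.upper()}: {content}")

Whether you keep such entries in a notebook, a word processor, or a little script like this one, the discipline of separating "what I saw" from "what I think it means" is what matters.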
- - -
I highly recommend all of Michael Quinn Patton's
outstanding books to you! The one I've used is from his Program Evaluation
Kit and is entitled How to Use Qualitative Methods in Evaluation
(1986, Sage Publications, Inc.). He also has more recent, equally readable
and vivid, compendia of qualitative evaluation research procedures!
Next time, we'll begin a look at one of the most popular ways
of collecting qualitative data: interviewing. Meanwhile, here's
to rich and revealing observations in every facet of your lives!