
"Believe It or Not:" Evaluating the Credibility of Qualitative Studies

Dear qualitative cyber-scholars! A wonderful book called Writing the Qualitative Dissertation: Understanding by Doing by Judith M. Meloy (1994: Lawrence Erlbaum Associates, Inc.) has a chapter entitled, "The End Is the Beginning." Yet another outstanding qualitative primer, Becoming Qualitative Researchers: An Introduction, by Corrine Glesne and Alan Peshkin (1992: Longman Publishing Group), closes with a final chapter called "The Continuing Odyssey." This is so appropriate, particularly for qualitative research, where the 'challenge of making meaning' is put upon you as the researcher! Where the boundaries, closure, and end-points are not as clear as with some quantitative, experimental-type studies. And where, as with all studies in the "research life cycle" (memories of our first lesson packet, Intro to Research by modem, friends?!), the end of one good study should contain the seeds of the beginning of yet another! In that regard, this may be our "final chapter," but the journey is never really over...!!! (I'd hate to think so! I'd miss you all too much to say "goodbye!")

Please take a minute to open and read the link below, which offers a number of pointers on good practices in presenting your qualitative data.

Presenting Qualitative Data

For this final chapter, we'll revisit the idea of credibility of qualitative studies. How do we subject our work to a sort of critical self-test as to its quality and believability?

Our Intro to Research and Research Design partners will be reminded in this regard of two key related concepts:

  1. Internal validity - i.e., the credibility, believability, and plausibility of findings and results. Another way of looking at this issue is: what are the potential contaminating, uncontrolled, "iceberg" variables that could have crept in and caused our findings and results? We may think or believe the results are due to one or more identified variables, but they may have actually been contaminated, influenced, or caused by variables other than the ones we are explicitly focusing on in our study. As a peek into the future for our Dissertation Seminar partners, any such potential barriers or "threats to internal validity" are discussed in Chapter One under a subheading entitled, "Limitations."

     Check out the tutorial below for an in-depth look at internal validity and its threats in experimental research.

    Internal Validity Tutorial

  2. External validity - i.e., the generalizability or applicability of the study's findings, results and conclusions to other circumstances - i.e., the issues of to whom/to what/to where/to when these findings, results and conclusions may, or may not, necessarily generalize. Another way of looking at this issue is: the preceding circumstances of who/what/where/when serve, in essence, to form the boundaries of our study. What we find and conclude may, or may not, necessarily generalize outside those particular boundaries. With regard to the doctoral dissertation, these boundaries, in terms of potential "threats to external validity," are similarly identified in a Chapter One subheading entitled, "Delimitations."

And here is a tutorial on External Validity, again with emphasis on experimental research.

External Validity Tutorial

As we have noted in preceding topics in this qualitative 'odyssey' of ours, qualitative designs and analysis procedures typically have greater external validity, at the expense of some internal validity. Due to their 'real-life' setting and context, qualitative studies are more 'reality-based' than the more traditional, tightly controlled experimental-type designs. The latter may therefore be said to possess greater internal validity - 'things happened for the reasons we think' - i.e., the outcome can be assumed to be due to the manipulation of the treatment because so many other potential contaminating causes/variables have been randomized, matched, or controlled for. However, this greater internal validity often comes at the expense of external validity, for as we know, 'real life isn't a controlled laboratory' by any means! We must take people and situations as they come - in all their often-somewhat-messy complexity. Thus, there is little or no certainty, in many cases, that what held true in the laboratory will 'happen the same way' in the uncontrolled field setting.

Despite this tradeoff, there still tends to be a sort of 'burden' or 'bias' attributed to qualitative studies at times. In some (more traditionally, experimentally, quantitatively schooled) researchers' eyes, qualitative studies seem to 'lack rigor.' Thus, right or wrong, the burden is often placed on the qualitative researcher to in essence "go the extra mile" in establishing the soundness of his/her study.

This is our goal for this time around! Specifically, we will re-examine the preceding 'soundness/credibility' argument, which many of us have already encountered via the "internal/external validity" issue in Intro to Research and Research Design. We will reframe these qualities of soundness as per a "quality checklist" originally proposed by qualitative research giants Lincoln and Guba, and re-interpreted by Catherine Marshall and Gretchen Rossman.

There is an outstanding, highly readable book by the latter two authors entitled Designing Qualitative Research. It is now in its 2nd edition (1995: Sage Publications, Inc.), but we have several copies of the first edition on reserve in Cline Library for your perusal. You may find them under: 1) EDR 798, Dissertation Seminar, Packard; and 2) EDR 725, Qualitative Research Design and Analysis Procedures, Dereshiwsky. If you do nothing else, I would urge you to spend some time with Chapter 3 of this book: it contains a gold mine of terminology and information pertinent to design (prospectus Chapter 3) issues!

Now -- let's enter the realm of evaluating soundness and credibility of qualitative studies!!!

Lincoln and Guba's Four Criteria for Assessing the Soundness of a Qualitative Study

  1. Credibility - this one is the same general "believability" issue as discussed earlier, but with a bit of a twist as to how we establish it! Remember from our earlier discussion that qualitative/naturalistic studies have a relative advantage over quantitative/experimental studies, in that qualitative work is heavily embedded in real-life situations, settings and circumstances. Therefore, Lincoln and Guba argue, establishing credibility in qualitative work consists of ensuring that "the data speak to the findings." Have you, as the qualitative author, provided enough 'rich, thick description' regarding the setting, program, subjects, procedures, interactions, etc., so that the boundaries and parameters of the study are well specified? If so, then the study will indeed be 'credible' in terms of the preceding discussion regarding external validity.

  2. Transferability - This one is a bit tricky! In Lincoln and Guba's use of the term, "transferability" implies generalizability of the findings and results of the study to other settings, situations, populations, circumstances, etc. In other words, this is the quality we have been calling "external validity" or "generalizability" in Intro to Research and Research Design.

    As pointed out earlier in this course, Robert Yin and others have made a crucial distinction between "statistical generalization" (projection of quantitative findings across broad populations from which an experimental-type sample was randomly drawn) and "analytic generalization" (the more customary qualitative objective of obtaining a greater depth, richness, detail and understanding of some phenomenon).

    This distinction, however, does not mean that the findings and results of a particular qualitative study cannot 'generalize' or apply to another situation, setting, or population.

    Thus, we may in fact have the best of BOTH worlds in a well-designed and executed qualitative study: greater in-depth understanding, and a set of results that may generalize or apply to a great degree outside the specific boundaries of that original study circumstance!

    Here, too, as with "credibility," Lincoln and Guba advise that the key is a thorough description of the specific setting, circumstances, subjects, procedures, etc. However, in the case of transferability, the actual 'burden' of generalizing is placed not upon the original researcher, but upon whoever is considering applying this original work to his/her own circumstances - be it in an applied policy setting or in designing a new study. In other words, it is up to the 'new' researcher or practitioner to determine whether his/her own circumstances are "sufficiently like" those of the first study - with regard to all of the key elements of setting, procedures, subjects, etc. - to warrant 'safe' generalization.

    A second procedure that may be available to establish transferability, applicable to all but the most exploratory of qualitative studies, is to see whether a given theory or model that the qualitative researcher claims to be testing or applying has, in fact, been accurately interpreted and used in the research. This may be interpreted as a check of 'content accuracy.'

    Finally, perhaps the most defensible indicator of transferability is to look for evidence of multimethod procedures in the design and/or analysis of the qualitative study. As we have discussed in earlier contexts, applying different methods and procedures (e.g., both focus group interviewing and open-ended surveys) and then triangulating, or comparing, the different 'paths' to see if they 'converge' upon the same findings and results serves to enhance the believability and robustness of the results - more so than if a single method were used. Just as a refresher: we can 'multimethod it' in our qualitative study by combining two or more such data-collection and/or analysis procedures and triangulating the results they yield.

  3. Dependability - This one, too, is a switch in thinking from the more traditional, tightly controlled experimental design! Experimental designs are focused on control and 'keeping things constant' (except, of course, for the independent variable or 'treatment' being manipulated). However, in the 'real world' that is the naturalistic setting for qualitative research, change is to be expected! This is of course particularly true in more longitudinal-type studies, such as classic ethnographic inquiry. Therefore, in order to assess the degree of dependability, Lincoln and Guba advise us to look for accurate and adequate documentation of changes, surprise occurrences, and the like, in the phenomena being studied. If change is to be expected, has it been thoroughly described? Similarly, have any unexpected but material occurrences which might affect our variables of study been identified and documented in adequate detail?

  4. Confirmability - This quality, according to Lincoln and Guba, is synonymous with objectivity. Evidence for this quality may be established in two ways:

    1. via our more traditional notions of credibility: is there a smooth, logical progression, as evidenced in the research report, from the study's design, to its implementation, to its findings, results, conclusions, and implications?

      This one, then, depends on the 'internal logic' of the study and particularly how thoroughly and skillfully it is substantiated in the narrative of the research report. Is there a 'natural flow,' or a 'Grand Canyon leap of faith?!' Does it "feel real?!"

    2. via some evidence of a lack of researcher bias: for instance, doing 'member checks' and running his/her findings and conclusions past third parties, "key informants" from the same or a similar field setting as the original study, etc. Perhaps this one is 'established in reverse:' that is, do you see anything in the research report to indicate a potential bias on the researcher's part? Premature closure regarding the findings? Unwillingness to thoroughly search out and account for potential 'disconfirming' evidence? And so forth.

Again, to remind you of the overall goal of applying these, or similar, criteria: they serve as an extra cross-check on the overall "logic and soundness" of the qualitative study design, implementation, findings, results, conclusions, and implications, according to Marshall and Rossman (in their interpretation of Lincoln and Guba). Careful attention to whether your own work meets the above criteria will thus provide an additional cross-check on its overall quality.

- - -

In concluding our qualitative 'odyssey,' dear friends, I want to share with you a couple of key quotes from the Judith M. Meloy book cited at the beginning of this lesson. I like to read the last three sentences of the passage below (emphasis mine; italicized) to candidates in a dissertation defense and ask for their reaction, as one of my questions (hint, hint...!):

    Although I was never alone in my graduate research classes, I found that I was always alone as I was collecting and analyzing data for my thesis. I did not have the companionship of an a priori hypothesis or a statistical design to guide and structure me. None of my courses had required the intense interaction between doing and thinking on such sustained and multiple levels. With the general focus of my dissertation taped on the wall in front of my desk, I continuously had to attend to the tangents of analysis, letting them play themselves out in order to understand which paths, if any, were worth pursuing, or if the emerging foci or, indeed, the general one with which I began needed adjusting. I was alone with notes all over the place - organized chaos - and yet never alone, as there were always thoughts sprouting in a brain partially numbed to anything but them. I had no idea what 'doing all this' meant and, at times, if I could do it at all. It was like struggling with a team of wild horses pulling a runaway wagon. (pg. 1)

Finally, lest you think your feelings of confusion and frustration at having to 'make the meaning yourself' out of such masses of "messy" qualitative data are unique, here is a quote from a doctoral candidate as shared with Judith Meloy (emphasis - capitalization - occurs in the original text):

It occurred to me that I have been conditioned -- all through my schooling and even now in graduate school -- to think that the teachers/professors had THE ANSWERS. Even now I have been tempted to want my chair to tell me THE WAY to do it. Old habits die hard...! I keep reminding myself that there is not just ONE WAY, obviously a view inherent in qualitative research. I also realize that completing a dissertation is in part an exercise in learning to make decisions and trust one's own judgment. (pg. 26)

Hear, hear...! Been there, am there still when I plunge into qualitative research!! And you know what: you reach a point where you decide it can be rewarding to live without fences, to make your own fences!!!

Savor your qualitative adventures with confidence and pride!!!



