EDR725

"When It Doesn't Fit the Mold:" Looking Carefully for "Negative Cases"

This is that sometimes painful but necessary step mentioned on the preceding page. Heaven knows, just arriving at an initial set of 'condensed findings' can be lengthy, painful and arduous enough, given those 'voluminous mountains' of raw qualitative data! This is even more true if you have chosen the "analyst-constructed typology" route, out of choice or necessity, and thus you have had to come up with the interpretive framework, on top of the long process of compilation!

The temptation may indeed be great to exclaim, "Whew! OK, now that's done!", breathe a sigh of relief, say, "Now I've got 'the' answer!" and get on with the rest of your life!

T.V. fans - please picture with me Lt. Columbo stubbornly dogging his 'perpetrator,' and then, just when he finally appears to be leaving, he suddenly reappears in the doorway, proclaiming: "Oh ... just one more thing, sir/ma'am...!" Yikes! thinks the perpetrator! For, very often, this 'last-ditch' attempt to re-examine the situation yet again leads us down quite a different path regarding the 'findings and conclusions' (in Columbo's case, pertaining certainly to the guilt or innocence of the other party...!)

The Ultimate Columbo site (Click on "Episode Guides" for a run-down on the plots!!!)

And there you have the idea of a good-faith search for negative or disconfirming cases. It is a key step, absolutely essential, if you are to build a case for the credibility, and thus validity or believability, of your findings and results. This is, again, particularly true if you are in some way 'breaking new ground' in the form of exploratory/descriptive research - a most necessary step in any type of research, so I certainly do not want to discourage you from doing this all-important foundational type of research in the first place! Rather, I am pointing out that you will have 'extra homework' to do in terms of doing this careful search for cases, individuals, situations, etc., that 'don't quite fit the mold' of the findings or model you've proposed to 'account for' or 'explain' them.

Perhaps a second analogy might be appropriate at this point. Suppose that you have described a set of symptoms to a physician, who sees that most of the symptoms you're reporting appear to fit the 'profile' or pattern of an established illness/condition, such as diabetes. The physician may therefore choose to make a preliminary diagnosis - for us as researchers, if you will, this is akin to a working hypothesis - to account for why you are not feeling well. But then, on further discussion or follow-up examination, you also report one or two persistent symptoms that don't fit that particular pattern - i.e., don't appear to consistently 'go with' diabetes. But nonetheless, there they are, interfering with your overall well-being. The physician obviously needs to do some rethinking at this point. Are those symptoms a 'fluke'? Indicative of a second, different underlying disorder? Or are they significant enough to cause him/her to rethink the original diagnosis in the first place? Could there be another disorder entirely, one that also 'contains' some or all of the originally-thought-to-be-diabetes symptoms but which also has these new symptoms as part of its profile? If so, does the original diagnosis need to be scrapped and a whole new disorder diagnosed - one which 'fits' the whole pattern of symptoms better, including the 'new' symptoms?

You have an analogous job to do, if and when you do discover 'disconfirming' evidence! Akin to the decision we make in analytic statistics regarding the Type I error, you need to decide whether:

  1. This disconfirming evidence could be a 'fluke' - i.e., due to sampling error, but not severe enough in and of itself to cause you to scrap your initial findings, results, explanatory model, etc.;
  2. You can modify your findings and results, and/or your models, slightly, so as to accommodate this new evidence without having to abandon most or all of your original findings/model; or
  3. This new disconfirming evidence is so 'severe' in its implications, and/or leads you to discover even more such 'disconfirming' cases, that you have no alternative but to scrap your model and seek a new one.
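Purely as an illustrative sketch, the three-way decision above can be pictured as a small piece of code. Please note that everything here is hypothetical - the function name, the numeric thresholds, and the idea of counting cases at all are my own stand-ins; in real qualitative work this judgment rests on your careful reading of the cases, not on any fixed cutoff!

```python
def assess_disconfirming_cases(n_total, n_disconfirming,
                               fluke_threshold=0.05,
                               revise_threshold=0.20):
    """Classify the severity of disconfirming evidence (toy sketch).

    The two thresholds are purely illustrative placeholders for the
    researcher's own judgment about how 'severe' the misfits are.
    """
    rate = n_disconfirming / n_total
    if rate <= fluke_threshold:
        # Option 1: likely a 'fluke' (sampling error); keep the model.
        return "fluke"
    elif rate <= revise_threshold:
        # Option 2: modify the model slightly to accommodate the cases.
        return "modify"
    else:
        # Option 3: too severe; scrap the model and seek a new one.
        return "scrap"
```

So, for example, one misfit out of fifty cases would fall in the 'fluke' zone under these made-up thresholds, while twenty out of fifty would send us back to the drawing board entirely.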

Perhaps this is the ideal time to remind ourselves of what qualitative researchers have long cautioned us is true: that the process from data collection to data analysis is not lockstep-linear as it sometimes is with quantitative studies! To put it another way, we may (temporarily, we hope?) find ourselves in somewhat of a cycle or loop in which we go back and forth between data collection and data analysis/interpretation (please see the following page):

It may take several 'back to the drawing board' iterations of this procedural cycle before we are reasonably secure that we have found an explanation, pattern, findings, model, etc. that "fits OK"!

There is one other 'remedy' that again parallels the medical example we've discussed. Just as in the case of 'second opinions' and a battery of different diagnostic tests, we have a similar research remedy available to us to analogously 'increase our confidence' in the 'rightness' of our findings and results!

By choosing to do a 'multimethod' study, one employing different data collection and analysis paths for the same research questions/problem statement, we can then compare the separate findings and results of the different procedures we have applied! If they all 'converge,' or appear to lead to the same findings and conclusions, we can have greater confidence in the credibility, soundness, and validity of those findings and conclusions.

Again, just to remind you, the multimethod approach can mean:

  1. Counterbalancing quantitative with qualitative procedures (e.g., interviews with subjects along with Likert-scaled, quantifiable surveys dealing with the same areas being researched, with the qualitative compiled narrative then compared against the inferential statistical tests applied to the Likert scores of the same phenomenon - i.e., attitudes - to see if the two approaches lead to the same findings and conclusions regarding those attitudes); or
  2. Counterbalancing two or more quantitative, or two or more qualitative, approaches (as an example of the latter, employing both face-to-face interviews and anonymously mailed open-ended written surveys).
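Again purely as a sketch, here is one way the 'convergence' check from the first option above might be pictured in code. The function name, the "favorable"/"unfavorable" coding labels, and the simple majority-vote and mean-score rules are all my own hypothetical stand-ins for what is really an interpretive comparison of two sets of findings:

```python
def methods_converge(interview_codes, likert_scores, neutral=3.0):
    """Toy check of whether two data-collection strands 'converge'.

    interview_codes: per-subject attitude labels from the qualitative
        strand, e.g. "favorable" / "unfavorable" (hypothetical scheme).
    likert_scores: per-subject mean scores from a 1-5 Likert survey.
    Returns True when both strands point to the same overall attitude.
    """
    # Qualitative strand: do most interviewees come across as favorable?
    favorable_share = (sum(c == "favorable" for c in interview_codes)
                       / len(interview_codes))
    qual_positive = favorable_share > 0.5

    # Quantitative strand: is the average Likert score above neutral?
    quant_positive = (sum(likert_scores) / len(likert_scores)) > neutral

    # 'Convergence' here simply means the two verdicts agree.
    return qual_positive == quant_positive
```

If the two strands agree, we can (per the discussion above) place greater confidence in the findings; if they diverge, that disagreement is itself a signal to go back and re-examine both strands.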

Below is an example of a mixed-method study.

Mixing Qualitative and Quantitative Methods in Sports Fan Research

To sum it up, I guess, as with most other things in life, awareness is the key on this one! This will help prevent you from reaching 'premature closure' regarding your findings and conclusions! Whether you choose to take the extra step of planning a multimethod research design, or you opt to do a careful 'ex post' search for those disconfirming cases, you will be strengthening the credibility of your findings and conclusions in the process!

- - -

Please remember: even though this process of 'making meaning' may seem like an endless, circular loop or spiral at times - please persist in your joyful pursuit of "qualitative truth" and I assure you, it will eventually be profitable...!

 


Once you have finished you should:

Go back to When the Data Doesn't Fit

E-mail M. Dereshiwsky at statcatmd@aol.com
Call M. Dereshiwsky at (520) 523-1892


NAU

Copyright © 1999 Northern Arizona University
ALL RIGHTS RESERVED