EDR720

Electronic Textbook: Sources of Information - Tradeoffs and Applications

We're bringing it on home, dear cyber-researchers! In this 'wrap-up' module for our Research Design odyssey, we'll consider the following issues:

  1. The relative advantages & disadvantages of the various methods of data collection;
  2. How multimethod research procedures may give you a 'leading edge' in terms of strengthening the validity of your study findings & conclusions.






I. The Relative Tradeoffs of Different Methods of Data Collection
In the Research Design Lesson, we briefly discussed the idea of the Chapter One "Limitations." These, as we also saw early in the course, have to do with "design contaminants": variables or factors that could not be controlled for and therefore could have crept in and influenced our study findings and results. We also learned that a synonymous term for such "Limitations" is "threats to internal validity."

-> A bare-bones way of thinking of limitations, or threats to internal validity, is this: "It happened, but not necessarily for the reasons that I think it did. Here's what else might have caused it, which was beyond my direct control as a researcher."

As with life itself, my friends, there is no 'utopia' in terms of designs and data collection procedures! That is: different methods have differing relative strengths and weaknesses! Thus, we will always, to a greater or lesser degree, be forced to confront this issue of "alternative, rival explanations," "contaminating variables or factors," "threats to internal validity/limitations" as we scrutinize the tradeoffs among our different research designs and procedures.

Bottom line time:

  1. You will list or identify relative threats to internal validity, or weaknesses, of a given design/procedure under Chapter One "Limitations;" and

  2. You will also list or identify relative strengths of a given design/procedure as "mitigating threats to validity" in your overall "Limitations" discussion. To remind you of our related discussion in Module #1, I find that dissertation writers tend to be almost too "self-flagellating" in this regard. They do a thorough job of identifying the weaknesses - i.e., limitations or threats to internal validity. But at the same time, they shy away from mentioning when their particular design/procedure also effectively 'plugs' a given threat to validity - that is, "mitigates" it in a positive sense. Yet both of these factors need to be given "equal time," in my opinion, in your Chapter One!

Let's proceed to a listing of some tradeoffs! For those of you taking Qualitative Research by modem with me, you'll recall that we've already discussed the relative strengths and weaknesses of some specialized qualitative approaches. These were presented to you in tabular format and come to us from the excellent qualitative book by Marshall and Rossman. I urge you to review these at this time, and certainly as you eventually draft your dissertation Chapter One Limitations and Chapter Three Design and Procedures.

For the purposes of our discussion, and the more general types of designs and procedures, I'd like to share with you the outstanding "tradeoff list" compiled by Floyd Fowler. He is an eminent survey design expert and researcher at the University of Massachusetts at Boston. Floyd Fowler also wrote the very first book in the superlative Sage Publications "Applied Social Research Methods Series." It is entitled Survey Research Methods and has gone into a popular 2nd edition (1993). Please note that there is some overlap between and among the various methods/procedures in this list (i.e., "self-administered" survey instruments could be either "mailed" or "dropped off and picked up later"). Nonetheless, I feel it is a valuable "all-in-one" place to start in terms of thinking about the relative tradeoffs of your particular type of design and/or data collection procedure.

Table 1.
Relative Tradeoffs of Various Procedures
(adapted from Floyd Fowler, Sage, 1993)

Personal Interviewing

  Strengths:
  * "Personal touch" helps establish subjects' "buy-in"; can build rapport & confidence
  * Researcher can clarify subjects' questions & help ensure that any complex instructions are correctly followed
  * Researcher can probe for additional clarifying information as needed
  * Researcher can "multimethodly" combine interviewing with his/her own observations (i.e., subjects' body language, visual, non-verbal cues)

  Weaknesses:
  * May be more costly in time and money than some other procedures
  * May necessitate identifying & training assistants on-site
  * Some samples (i.e., those who work outside the home; high-level professionals) may be especially difficult to reach in person

Telephone Interviewing

  Strengths:
  * Less costly than personal interviewing
  * Can use technological time-savers such as "RDD" (random-digit dialing) to access large random samples
  * More convenient access to certain "hard-to-reach" populations than for personal interviews
  * Tends to get better response rates than mailed surveys - again, due to the personal contact (by phone)

  Weaknesses:
  * Possible sampling limitation, since it excludes those without telephones
  * Non-response associated with RDD sampling (see Strengths above) is higher than for personal interviews
  * More limited in terms of 'visual cues': researcher can't observe body language & nonverbal cues; can't show subjects visual aids or other handouts that may be pertinent to the questions being asked
  * May be less successful in cases of research on 'sensitive subjects,' without the face-to-face contact

Self-Administered Data Collection Procedures

  Strengths:
  * Can include visual aids, handouts, etc. for the subject to consider in his/her "package"
  * Can ask questions requiring 'lengthy' responses, prior thought, etc.
  * Convenient for asking "batteries" of similar questions
  * Can add the "privacy/safety" factor: subjects may feel freer to respond more candidly this way than in a 'personal' contact (i.e., face-to-face or telephone interview)

  Weaknesses:
  * Researcher needs to exercise extra care in constructing very clear directions and survey materials
  * Subjects need to possess exceptional reading, writing and thinking skills to 'self-administer' the data collection materials accurately
  * Researcher is not present for "extra quality control," i.e., answering questions, clarifying directions, or following up on ambiguous responses

Group Administration of Data Collection Procedures

  Strengths:
  * Easier to attain "high" cooperation rates, and thus response rates
  * Researcher may find it more efficient to clarify directions, answer questions, etc.
  * Relatively less costly as compared with individual access & data collection procedures

  Weaknesses:
  * May be relatively more difficult to arrange for groups to be accessible at a given place & time (i.e., "juggling schedules" to establish a mutually convenient date and time for the data collection)

Mail Procedures

  Strengths:
  * Relatively low cost as compared with "personal contact" (face-to-face interview & telephone) procedures
  * Far less demanding in terms of locating, hiring & training assistant staff than the above 2 "personal contact" procedures
  * More efficient access to widely dispersed sample subjects, as well as those who are difficult to reach via telephone or in person
  * Allows subjects to take more time in answering, consider their responses, study any supporting visual aids or other handouts, etc.

  Weaknesses:
  * Traditionally lower response rates, on average, as compared with "personal contact" types of data collection (i.e., face-to-face and telephone interviews)
  * Greater likelihood of errors in records of subjects' addresses
  * Lack of researcher opportunity to collect 'visual cue' observational data, to clarify possible confusion regarding directions or questions, or to follow up ambiguous responses if the surveys are returned anonymously

"Drop-Off/Pick-Up-Later" Procedure of Data Collection (i.e., survey dissemination and return to researcher)

  Strengths:
  * The researcher can be on-site, at least at the beginning, to explain the study, provide instructions, & answer subjects' questions
  * Response rates tend to be similar to those of interviews, due to that initial face-to-face contact with the researcher & the potential to build rapport, trust & "buy-in" on the part of the subjects
  * Subjects generally have more opportunity to give thoughtful answers and study supporting materials than for the face-to-face or telephone interview

  Weaknesses:
  * Approximately as costly in time and money as personal interviews
  * Need to hire and train a cadre of "field staff" (albeit probably fewer than with the scripter/assistant moderator in the face-to-face interview situation) to supervise the completion & return of the data to the researcher
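As an aside, the "RDD" (random-digit dialing) time-saver mentioned under Telephone Interviewing above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only - the area codes, the uniform draw over all 7-digit local numbers, and the function name are my own choices; real RDD designs typically restrict dialing to known working exchanges.

```python
# A minimal, hypothetical sketch of random-digit dialing (RDD) sampling.
import random

def rdd_sample(area_codes, n, seed=None):
    """Generate n unique random phone numbers within the given area codes."""
    rng = random.Random(seed)
    sample = set()
    while len(sample) < n:
        area = rng.choice(area_codes)
        # Random 7-digit local number whose first digit is 2-9; real RDD
        # designs restrict dialing to known working exchanges (prefixes)
        # to reduce the share of non-working numbers reached.
        local = rng.randrange(2_000_000, 10_000_000)
        sample.add(f"({area}) {local // 10000:03d}-{local % 10000:04d}")
    return sorted(sample)

# Draw a small illustrative sample from two (arbitrary) area codes
print(rdd_sample(["520", "602"], n=5, seed=1))
```

Note how the random draw reaches unlisted as well as listed numbers - which is precisely why RDD yields large random samples, and also why its non-response runs higher than for personal interviews.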



II. "Seeing It from Many Angles:" The Multimethod Approach

The preceding table clearly indicates the "tradeoff" nature of going with a single approach to research design and procedures. A given procedure will simply 'be better in some aspects and worse in others.'

The natural extension, then, is to say to oneself, "Why not have the best of all possible worlds?! By applying two or more different procedures, one can maximize the positives and hopefully cancel out some of the negatives, particularly if those two or more procedures 'point in the SAME direction' regarding the answer to the research question or hypothesis."

This is precisely the philosophy behind the multimethod approach to research designs and procedures! Brewer & Hunter, in their classic 1989 Sage book on multimethod procedures, characterize them as:

  1. applying two or more methods which have non-overlapping weaknesses but complementary strengths to a given research problem, question or hypothesis; and then:

  2. comparing the 'answer to the (research) question' generated by the two or more separate approaches, to see if they converge, or agree. If this is the case, then we can be more confident that "we've found 'something real' with regard to our results, as opposed to accidentally having it contaminated with a methodologic artifact." This convergence or comparison is also referred to as triangulation, or as Brewer & Hunter define it, sighting in on a phenomenon from various viewpoints.

An example may make these two features of multimethod research a bit clearer. Suppose you are interested in identifying the attitudes of first-year public school teachers in your district towards a proposed peer-coaching professional development and evaluation program. You decide to collect data by means of face-to-face interviews, asking a series of questions of your teacher-interviewees in a standard focus group format and tape-recording the responses. You later produce a transcript of the interview session and identify key recurring response themes from several re-readings and annotations of this interview transcript. From these, you compile a listing of the major attitudes - type and direction - as expressed by those teachers in your face-to-face interview.

Well ... without perhaps realizing it, you have also picked up a second effect: one that can be, to a greater or lesser degree, confounded with the first:

[Figure: two overlapping circles - "real" data (subjects' true attitudes), on the left, overlapping with artifacts of the focus group interview method itself]

Your goal is to get a "clean fix" or sighting on what's inside the circle on the left side - namely, "real" data; in this case, subjects' true attitudes. However, by using only a single method to gather these data - i.e., face-to-face focus group interviews - these data are invariably going to be 'contaminated' to a greater or lesser degree by any "limitations" or "relative weaknesses" associated with that one particular method.

For instance, in the focus group interview setting, it is possible that, even accidentally, some subjects might begin reacting to and reinforcing what they perceive as your own 'biases' on the issue - in effect, "telling you what they think you want to hear," or the halo effect. This can certainly happen despite your own best intentions, training, practice, etc. to not convey any 'directional biases,' since, after all, communication is always a "two-way street!" Another potential methodologic limitation of the focus group interview setting is that some of the teachers might feel inhibited in voicing their 'real' opinions in front of their day-to-day colleagues who are sitting in that same room, responding to your interview questions at the same time. This can, again, happen even if you 'do everything right' and try your best to assure confidentiality, 'there are no right or wrong answers,' etc. There could be factors in the political climate beyond your control or even awareness - i.e., a long-standing pattern of mistrust due to external events.

Thus, the key issue becomes: "How much of what you picked up and analyzed and reported in that interview session represents the 'true, real' answer to your research question (i.e., their genuine attitudes regarding the issues) - and, on the other hand, how much of it is contaminated by those methodologic artifacts such as the two above limitations of this particular means of gathering the data?" That 'overlap' between the two circles or effects can, admittedly, be reduced to some degree with certain careful planning and actions on the part of the researcher - but in most cases, not eliminated entirely, as we've specified in the preceding scenario. (Remember: even in a tightly controlled experimental study, at best only a limited number of factors can be controlled. In most studies dealing with human subjects, you simply cannot be 100% sure that you've identified all of the 'impacting' factors in the first place, let alone adequately controlled for them.)

Now let's expand the tale a bit. The same researcher, recognizing the dangers of confounding inherent in this single method, decides to cross-check against it by also designing a written, paper-and-pencil survey to cover the same general topics (say, with both Likert-scale fixed items and some "anything goes, as-you-walk-the-walk, grounded-theory emergent" open-ended items). This researcher selects another sub-sample of subjects (first-year teachers) from the same general target population and administers the survey to them.

Had the survey alone been used as a single method, the researcher would have faced the same confounding issue - albeit with different "threats to internal validity," or limitations of the survey method. We've discussed these in preceding lesson packets and also have seen them summarized in Table 1 of this lesson packet. To focus on just one of these for the following figure, they might look as follows:

[Figure: two overlapping circles - subjects' true attitudes overlapping with artifacts of the survey method (e.g., misinterpreted instructions or questions)]

No halo effect or peer-group inhibition to contend with this time: subjects may never even have met the researcher, and in addition they are now filling out the survey in the privacy of their home or office. But as we also saw in Table 1, this different (survey) method brings with it its own relative drawbacks - as just one example, an increased possibility of subjects misinterpreting the meaning of the instructions and/or questions, for the very same reason: they are providing the data in a 'private,' away-from-the-researcher setting!

As we've seen from Table 1 and throughout our respective Research Design and Qualitative Research 'odysseys,' each method and procedure 'drags along its own baggage' of relative strengths and weaknesses. Thus, a single method invariably contaminates 'real' findings and results with 'methodologic artifacts' to a greater or lesser degree.

But to cap off our hypothetical scenario, and illustrate the benefits of multimethod designs and procedures, suppose our researcher, as indicated, applies both approaches - face-to-face interview and paper-and-pencil survey - and does the following:

[Figure: findings and results compiled separately from the interview and the survey, then compared ('triangulated') for convergence]

Note the two-phase process inherent in the above approach:
  1. Step One: Separately apply and compile the findings and results from each individual method; and
  2. Step Two: Compare ('triangulate') the findings and results from the various methods to see if they 'converge' or agree.
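The two steps above can even be sketched as a toy calculation. This is a hedged illustration only - the theme labels, the set-overlap (Jaccard) measure, and the 0.5 threshold are my own hypothetical choices, not part of this lesson or of Brewer & Hunter's definition; real triangulation of qualitative themes is a matter of researcher judgment, not a mechanical computation.

```python
# Step One and Step Two of the multimethod process, as a toy calculation.
# Theme labels and the threshold are hypothetical illustrations.

def triangulate(themes_a, themes_b, threshold):
    """Compare theme sets from two methods; report whether they converge."""
    a, b = set(themes_a), set(themes_b)
    shared = a & b
    # Jaccard overlap: shared themes relative to all themes found by either method
    overlap = len(shared) / len(a | b) if (a | b) else 0.0
    return overlap >= threshold, shared, overlap

# Step One: compile the findings from each method separately
interview_themes = {"workload concerns", "trust in peers", "desire for feedback"}
survey_themes = {"workload concerns", "trust in peers", "evaluation anxiety"}

# Step Two: compare ('triangulate') the two sets to see if they converge
converged, shared, overlap = triangulate(interview_themes, survey_themes, threshold=0.5)
print(f"Converged: {converged}; shared: {sorted(shared)}; overlap: {overlap:.2f}")
```

The point of the sketch is simply that the comparison happens only after each method's findings have been compiled on their own - convergence is a judgment made across independent 'roads to the answer.'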

Suppose that this researcher does so and finds, indeed, that the two separate 'roads to the answer' of the interview and survey results, respectively, 'ended up at the same final destination,' or converged with regard to the findings and results. In both very different procedural settings, the same core set of attitudes and perceptions emerged.

While this researcher has not eliminated the limitations or weaknesses of a particular design, nonetheless, he/she can be more confident (i.e., greater credibility or validity of findings and results) that "I've picked up 'something real' here. It can't be 'blamed' on the method itself, because I used two different methods, each with different 'artifacts' - and I still ended up with the same core set of findings and results."

This, then, is the primary benefit of multimethod research designs and procedures. The validity or credibility of the findings and results is enhanced by being able to demonstrate that one can arrive at them with different methodologic approaches - each of which carries different 'biases' or artifacts. Again, it is not an absolute elimination of those individual biases - but rather, a 'strengthening of one's case' by showing that the biases appeared to 'cancel out' in the convergence of the findings and conclusions.

This principle is in evidence when a diagnosis is made via a 2nd (3rd, 4th) medical opinion, or more to the point, via different medical procedures and tests. An individual test alone might have its own unique sources of error or biases. But if they both/all 'point in the same direction' regarding the presence or absence of a medical condition, such as, say, diabetes, then one gradually begins to believe, "So, OK, it is therefore more likely that it is (or isn't) diabetes, rather than a 'quirk' of the medical testing process itself."

Much the same can be said regarding multiple sources of evidence in a courtroom. (I swear I have used this example in 'live' research design and multimethod research classes long before the "O.J. Trial!") As I've said in live classes for several years, "It is not so much a matter of certainty as it is of building a more or less convincing case one way or the other. After all, the only ones who 'know for sure' the defendant's guilt or innocence are the defendant him/herself, and the victim - & the latter may not be around to tell us!" So - it is a case of a 'skillful' attorney, be that attorney prosecutor or defense in orientation, who compiles "multiple convergent evidence," any one of which might not 'convince' the jury (e.g., a witness with an alibi; a piece of evidence left at the crime scene), but when taken together seem to lead to the same path regarding the 'finding and conclusion' - guilt or innocence.

For these reasons - increased credibility and confidence regarding the findings and results - multimethod designs have come into vogue. It is generally felt that rather than "duplicating effort" or "re-inventing the wheel" as far as identifying and applying such multiple procedures, the gain in validity/credibility of study findings and results makes it well worth the 'trouble!'

- - -


Endings -- or BEGINNINGS?!!! I kind of feel that we're at both points! We've journeyed through a series of discussions and related assignments dealing with various aspects of the research process. In that sense, we've "ended" 10 specific such journeys.

At the same time, I feel much more strongly and positively regarding the magnificent beginning you've made! I'd like for you to please review the document What Is A Doctoral Research Proposal right now. I also urge you to reread the lesson regarding the "Dissertation Road Map," and particularly the role of the proposal as the beginning of that three-phase process.

We've cruised through some key aspects of that proposal via the topics we've covered in our 10 modules. You've also tried your hand at composing 'bits and pieces' of the proposal as part of the related assignments.

Now it's time to bring it all together - and also to get a confident start down the glorious road to the "Big D!" That's what I'll be asking you to do in this final assignment.

I want to have a good 'overall look' at however much of the proposal you are ready to pull together for me at this point. This is because I want to be of maximum help to you in hopefully pinning down a good 1/3 of your dissertation journey! I'll review it with the intent of providing you with not only 'corrective' feedback, but also 'next-step suggestions:' i.e., as appropriate, 'where do you go from here' to 'grow it out,' as I like to say, into the 2nd critical phase - your prospectus, or 1st 3 chapters!

My goal is, and always has been, that our Research Design journey is not an 'isolated course hoop-jump' but rather a holistically integrated part of the major self-initiated research activity that you are all facing: the doctoral dissertation. Or to put it another way: that the "end product" of this particular course will directly plug into and be a "beginning product" of that final, summative degree requirement.

It has been my greatest pleasure and privilege getting to know each and every one of you. You are superlative scholars, leaders and researchers - not to mention cherished colleagues and friends!

P.S. I would be honored to continue 'down your dissertation path' with you if you are needing a dissertation chair or member. Just ask and hear me say, "YES!"

Wishing each and every one of you a maximally positive, productive and above all joyful continued research journey!!!


Have a look at the perspective of another who passed this way, Ben Dean. He will give you some good advice as you proceed through writing your dissertation.

http://www.ecoach.com/News/growth.html
The Dissertation as Spiritual Growth


Once you have finished you should:

Go on to Group Assignment 1: Why Use a Multi-Method Approach?
or
Go back to Topic 1: Sources of Information

E-mail M. Dereshiwsky at statcatmd@aol.com
Call M. Dereshiwsky at (520) 523-1892



Copyright © 1999 Northern Arizona University
ALL RIGHTS RESERVED