EDR610: Research Design, Part 1, Lesson 4-1-1
Families of Research Designs - Part I
We've looked at an overall flowchart, or schematic, of the entire research design and analysis process. Next, we spent some time focusing on research questions or problem statements, the "heart and soul" of the whole process (It Starts with a Question). We then focused further by discussing some important components of these research questions/problem statements: namely, variables and hypotheses (Module 3).

Now it's time to move on to the "research design methodology" part of the flowchart. The design methodology (sometimes just called the "design") consists of the label(s) that characterize the general blueprint of the study. As we'll see, usually more than one design label will apply to a particular study. As with research questions or problem statements, these design "buzzwords" come in families. We'll see that many of them link to particular keywords in our problem statements. Some of them also have to do with the form(s) of data we are collecting: numbers (quantitative), words (qualitative), or both (multimethod).
Figure 1 illustrates one basic way to start breaking down these design "families":
Open the link below to view a PowerPoint presentation on quantitative and qualitative research in more detail: The Two Traditions: Qualitative and Quantitative Research

For example, suppose you are doing a study where you will be rating students numerically on their performance of a sensory-motor skill AND also interviewing these students (data in words) to determine how they perceive their own skill levels (one of the doctoral students whose committee I'm chairing is doing such a study!). Then at least one design methodology label that would apply is "multimethod."

Now, some design labels apply only to qualitative studies, while others could apply to a study that's any of the above three possibilities. We'll look at the qualitative labels in a future follow-up lesson. For now, let's look at the second possibility: families of design methodology labels that could apply to any or all of the above three possibilities.
FAMILIES OF DESIGN METHODOLOGY LABELS THAT CORRESPOND TO QUANT/QUAL/MULTIMETHOD STUDIES

Most of these, as we'll see, link to certain keywords in the research question or problem statement!

I. Descriptive Designs

We've already seen these! And yes, they link to descriptive questions/statements! Key characteristics: "what is/what are," "identifying," exploratory-type studies.

Example: This study is to identify the perceived barriers to successful implementation of the Career Ladder Teacher Incentive & Development Program in X School District.

"Identify"/"what is - what are" (the perceived barriers) -> descriptive problem statement AND also descriptive research design methodology!

Two sub-types (additional design methodology labels that could apply to descriptive designs):
You've probably seen such surveys (more than you care to think about, if you've been approached by a needy dissertation-stage doctoral student to participate in his/her study!). They can take many forms: while often these surveys are paper-and-pencil in nature (e.g., you're handed one or receive it in the mail and are asked to fill it out and return it to the researcher), they are sometimes administered orally in a face-to-face or telephone interview (e.g., the researcher records your answers him/herself).
More Information on Interview Studies

There are other variations on survey-type questions; the above are just examples of the most common forms and scaling of such responses. If the responses to our earlier example were collected in the form of a survey -- be it, say, Likert-scaled attitudinal items and/or open-ended questions where the teachers are asked to share the perceived barriers in their own words -- then the study would be characterized as a descriptive survey design methodology.

Take a Break and check out this example of a different sort of survey!

A researcher might want to identify the most frequently occurring type(s) of disruptive behavior in a particular classroom. With clear prior agreement on what constitutes such "disruptive behavior" (operational definitions of our variables are important, remember?! It becomes an issue of "reliability," or verifiability, that "we saw what we saw" rather than "our own bias" about what constitutes this disruptive behavior!), the researcher could develop a listing of such behaviors and then observe and record the number of times each one occurred in a particular observation session in a classroom. (Again, he/she might wish to 'compare notes' with assistants in order to enhance reliability or verifiability -- e.g., as a cross-check for accuracy.)

This type of research would warrant the design methodology label of not only "descriptive" (due to the 'identify/what is - what are [the most frequently occurring ...]?') but also "observational" due to the recording/tallying protocol. (By the way, qualitative-type observations can also be recorded; they don't have to be strictly numeric tallies. Examples that come to mind include case notes of counselors, where they record their perceptions in words.)

More Detail on Observational Studies - Robert Gordon University, UK

II. Correlational Designs

We've seen these too!
Just as in the case of descriptive designs, these link to the keywords of "association," "relationship," and/or "predictive ability" that we've come to associate with correlational research questions or problem statements!

III. Group Comparisons

We've briefly talked about "experiments" generally, in terms of key features such as the following:
Now ... there are actually two sub-types of experimental designs. Plainly put, they have to do with how much control or power you, as the researcher, have to carry out the randomization and grouping described above!
In the preceding scenario, the researcher first randomly selects the subjects and then randomly assigns them to groups. The two levels of randomization help to ensure good control of those pesky contaminating or confounding variables, don't they?! You're more likely to get a "good mix" on all those other factors when you can randomly draw your subjects and also randomly assign them to groups that you, as the researcher, have the power to form!

Ah ... but ivory-tower research is one thing; real life quite another! What if you get the OK to do your research within a school district, but the superintendent says, "Oh no! I can't have you disrupting our organization here by 'making your own 4th-grade classrooms' for your study! That's way too disruptive! No, no, the best you can do is to randomly select INTACT existing 4th-grade classrooms and then go ahead and use all the kids in those randomly drawn GROUPS instead!"

The True Experiment and Quasi-Experiment

Which brings us to the second variant of experimental designs:
Here (for the quasi-experiment), you randomly draw intact groups (e.g., from all the 4th grades in the district, you draw four of them at random) and then flip a coin or use some other random procedure to assign the pre-existing 4th grades to either the "treatment" or "control" conditions. (In our example, Grades A and C "land" in the traditional lecture method (the control group), while Grades B and D end up with the hands-on science instruction (the "treatment" or "experimental" group).)

Do you see how this is different from the "true" experiment? In the "true" experiment, you selected the children themselves (subjects) at random and then "had the power" to, in essence, form your own "4th grades" by assigning the individual kids themselves randomly to either the control or the experimental conditions. Here, though, the best you can do (again, often for practical reasons such as access to sites, permission, etc.) is to draw not individual kids but the GROUPS themselves (pre-existing 4th-grade classrooms) at random and then, in step 2, assign NOT the INDIVIDUAL KIDS but rather the WHOLE GROUPS to either the treatment or control conditions.

Open the link below for more detailed information about Quasi-Experimental design.

P.S. Do you see how this one-step loss of randomization may mean a bit less control over those pesky contaminants?! By forming your own groups, you have a greater likelihood of getting a good mix on all the other "stuff." But here, you've got to live with the existing groups as is. And suppose that in the above scenario, 4th Grades B and D also happen (quite by accident, but welcome to real life!) to have an average I.Q. 15 points higher than Grades A and C! Now we've got a contaminant! Did the kids do better because of the hands-on science lesson, or because of their inherently higher aptitude, intelligence, or whatever?! But at least we still have that last step: random assignment to either the experimental or control conditions! Remember ... again ...
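The contrast between the two randomization schemes can be sketched in a few lines of code. This is a minimal illustration only: the district of four intact classrooms, the pupil names, and the group sizes are all invented for the example.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical district: four INTACT 4th-grade classrooms (all invented).
classrooms = {
    "A": ["pupil%d" % i for i in range(1, 9)],
    "B": ["pupil%d" % i for i in range(9, 17)],
    "C": ["pupil%d" % i for i in range(17, 25)],
    "D": ["pupil%d" % i for i in range(25, 33)],
}

# TRUE experiment: randomly draw individual pupils from the whole pool,
# then randomly assign each pupil to a condition -- the researcher
# "forms" the groups him/herself.
pool = [p for room in classrooms.values() for p in room]
sampled = random.sample(pool, 16)
random.shuffle(sampled)
treatment, control = sampled[:8], sampled[8:]

# QUASI-experiment: randomly draw the intact classrooms themselves,
# then randomly assign the WHOLE classrooms to conditions -- the
# individual kids stay grouped "as is."
drawn = random.sample(list(classrooms), 4)
random.shuffle(drawn)
quasi_treatment, quasi_control = drawn[:2], drawn[2:]

print("treatment classrooms:", quasi_treatment)
print("control classrooms:", quasi_control)
```

Note where the randomization happens in each branch: at the level of individual pupils in the first, but only at the level of pre-existing classrooms in the second, which is exactly the "one-step loss" of control described above.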
Time for another story break. Enjoy this fable about a famous quasi-experiment ... Well, we lose that "random assignment" property in the third "family" of group comparison design methodologies!
Thus, there is no treatment either! There is simply an attempt to see if a grouping that we had no prior control over seems to "make a difference" on some outcome(s). The keyword "difference" (by grouping), together with the absence of a treatment, is the tip-off to an ex post facto or causal-comparative study design.

And, regarding the grouping, maybe this rather silly example will make the point and help you identify whether you are in such a situation of no control over grouping. Suppose you wish to study whether preschoolers from single-parent homes differ from those of two-parent homes in emotional readiness for kindergarten. Now ... you couldn't go to prospective subjects' homes and say, "OK, now you've got to get divorced ... and YOU have to stay married ... 'cuz that's how you came up in the random assignment!" I don't think so!!!

Same thing with gender: you took it "as is" (i.e., those subjects in essence self-selected into their gender grouping). You had no prior control over making them be one gender or the other, but rather took those groups as is and, in a sense, pile-sorted some response(s) by gender to see if it made a difference on some outcome!

Indeed, the literal Latin translation of "ex post facto" is "after the fact." This describes YOUR role in the grouping process as the researcher: you didn't assign the subjects to any group, randomly or otherwise. Instead, you came in after the fact and wished to see if that self-determined grouping made a difference on some outcome(s) that you are studying!

As you can imagine, this brings even bigger problems with contaminating variables, since there is no randomization or control here! Thus the name "causal-comparative" is something of a misnomer. You are indeed comparing two or more pre-formed groups on some outcome(s). But due to that lack of randomization and control, you can't really use this design to study cause/effect types of research questions or problem statements.
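The researcher's whole role in an ex post facto design is that "after the fact" pile-sort: take the grouping as it already exists and compare an outcome across the piles. A minimal sketch of that comparison, using invented readiness scores and the home-structure grouping from the example above:

```python
from statistics import mean

# Hypothetical data: kindergarten-readiness scores pile-sorted "after
# the fact" by a grouping the researcher never controlled. All scores
# and labels are invented for illustration.
children = [
    {"home": "single-parent", "readiness": 71},
    {"home": "two-parent", "readiness": 78},
    {"home": "single-parent", "readiness": 69},
    {"home": "two-parent", "readiness": 74},
    {"home": "single-parent", "readiness": 75},
]

# Sort each child's outcome into the pile for his/her pre-existing group.
groups = {}
for child in children:
    groups.setdefault(child["home"], []).append(child["readiness"])

# Compare the piles on the outcome of interest.
for home, scores in groups.items():
    print(home, mean(scores))
```

Notice that no line of this sketch assigns anyone to a group: the grouping arrived with the data, which is precisely why any difference found here can't support a strong cause/effect claim.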
There are generally too many uncontrolled, unrandomized contaminating variables that may have entered the picture to confidently make 'strong' cause/effect statements! Nonetheless, given the circumstances, this type of design might be "the best you can do." Group differences on some outcome(s) might indeed be interesting to study even though you had little or no "control" in the situation. To summarize, for the "group comparison" family of designs:
Next time we'll look at some terminology for the "qualitative" branch of design families!
Once you have completed this lesson, you should go on to Assignment 1: Identify Design Methodology. Send email to Walt Coker at Walter.Coker@nau.edu
Web site created by the NAU OTLE Faculty Studio
Copyright 1998
Northern Arizona University