This one is oh-so-critical, dissertation/thesis writers!
The #1 reason for rejection of research submissions to refereed professional
journals is failure to address (or incorrect/inadequate treatment of)
evidence that your "instrument" (and for now, I am going
beyond surveys to include qualitative data collection procedures too,
such as interview protocols) possesses the two key qualities of good
measurement!
These qualities (and much more about them shortly!) are
- VALIDITY - does it measure what I think it does? If I say
it is a 'test of depression,' is it really 'picking up' depression?
Or is it accidentally missing the mark and picking up anxiety instead?
Introduction to Validity: take the Validity Tutorial, developed by a student at Cornell.
- RELIABILITY - does it measure stably, consistently, predictably?
Of course, there is some component of 'random noise' or 'variation'
in all human measurement processes. I might score a bit lower on a
test of academic achievement today than I did a couple of days
ago because I happen to be coming down with a cold and am a little
tired and distracted. Nonetheless, a 'reliable' test will show
me scoring in the same 'general' realm (again, with allowance for
a 'slight' margin of error). To give you another example, there'd
be cause for concern about the 'reliability' of an intelligence test
if it placed me in the lowest 25% one time and in the upper 10% a
couple of days later. (Curious how these two qualities look as
numbers? See the little sketch just after this list!)
Overview of types of reliability
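For my 'show me the numbers' readers: here is a minimal Python sketch of how
these two qualities can surface as simple correlations. Every score and name
in it is made up purely for illustration - it is not from any real instrument
or study!

  # Illustrative only: made-up pilot scores, hypothetical variable names.
  import numpy as np
  from scipy.stats import pearsonr

  rng = np.random.default_rng(610)  # arbitrary seed, for reproducibility

  # Pretend 30 pilot subjects took my new 'test of depression' twice,
  # a couple of days apart, plus two established measures.
  true_depression = rng.normal(50, 10, size=30)
  my_test_time1 = true_depression + rng.normal(0, 3, size=30)  # a little 'random noise'
  my_test_time2 = true_depression + rng.normal(0, 3, size=30)
  established_depression = true_depression + rng.normal(0, 4, size=30)
  established_anxiety = rng.normal(50, 10, size=30)  # a different construct entirely

  # RELIABILITY: test-retest correlation - scores from the two sittings
  # should land in the same 'general' realm, so r should be high.
  r_retest, _ = pearsonr(my_test_time1, my_test_time2)
  print(f"test-retest reliability r = {r_retest:.2f}")

  # VALIDITY: my test should correlate strongly with the established
  # depression measure and only weakly with the anxiety measure.
  r_convergent, _ = pearsonr(my_test_time1, established_depression)
  r_discriminant, _ = pearsonr(my_test_time1, established_anxiety)
  print(f"convergent r = {r_convergent:.2f}, discriminant r = {r_discriminant:.2f}")

With these simulated scores, the first two correlations should come out high
and the third near zero - exactly the pattern you'd hope to see from a valid,
reliable measure!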
At this point, please review the Intro to Research Lessons for Module
#6: Properties of Good Measurement.
These contain a thorough discussion of quantitative ways to
assess validity and reliability of instrumentation!
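Just to give you a taste of one such quantitative indicator (my sketch here,
not the Module's!): Cronbach's alpha, the classic index of internal-consistency
reliability, takes only a few lines of Python. The Likert responses below are
made up, and every name is hypothetical.

  import numpy as np

  def cronbach_alpha(items: np.ndarray) -> float:
      """Cronbach's alpha for a respondents-by-items score matrix."""
      k = items.shape[1]                         # number of items
      item_vars = items.var(axis=0, ddof=1)      # variance of each item
      total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
      return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

  # Made-up responses: 25 pilot respondents x 8 Likert items (1-5).
  rng = np.random.default_rng(6)
  trait = rng.normal(3, 0.8, size=(25, 1))  # each respondent's underlying level
  scores = np.clip(np.round(trait + rng.normal(0, 0.6, size=(25, 8))), 1, 5)
  print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")

A rule of thumb often cited is that an alpha of about .70 or higher signals
acceptable internal consistency for research instruments.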
There is also a qualitative way to pilot-test and assess validity and
reliability of instrumentation. It would involve:
- Create an initial draft of your 'instrumentation' (again,
the term is used broadly - be it a paper-and-pencil survey, interview
protocol, etc.). Don't forget, for the "mail-out researchers," to include
a draft of your cover letter too! Remember that you want to
'road-test' the entire package!
- Identify 5-10 'expert judges' to cover the broad spectrum of
'experts' relating to your survey. For instance, you would make
sure you have a couple of practitioners; perhaps an 'end user' or
two (i.e., a holdout-sample subject, such as a student or parent who
'is like' the eventual student or parent sample group to whom you
intend to disseminate the surveys); and a 'survey construction
expert.' (That's the capacity in which I often serve as a pilot judge!
And by the way, I'm allowed to do this for you even if I am your committee
chair or a committee member! It's not considered a conflict of interest!)
- Ask the 'expert judges' to review your survey draft, including
the cover letter and/or any other enclosures, and give you their feedback
on the key criteria: form, content, appearance, clarity, and anything
else you can think of. You can get their input via:
- convening them in a single focus group (most efficient, if you
can pull off the schedule-juggling!)
- mail
- telephone
- in-person individual interviews.
- After you get feedback from each judge, line up their comments,
item by item, and coordinate any desired revisions to your instrumentation.
(You'd be surprised at how often the feedback 'converges,' even across different
'expert judges!' I, as a design expert, might recommend the very same
change as the in-the-field content practitioner! A tiny sketch of this
'lining up' appears right after this list.)
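And for those who like to see the 'lining up' step concretely: here is a tiny
Python sketch of compiling judges' comments item by item. The judges, items,
and comments are all invented for illustration.

  from collections import defaultdict

  # (judge, item number, comment) - hypothetical pilot feedback
  feedback = [
      ("practitioner",      3, "wording is too jargon-heavy"),
      ("survey expert",     3, "simplify the wording"),
      ("end user (parent)", 3, "I didn't understand this one"),
      ("survey expert",     7, "the scale needs a 'not applicable' option"),
  ]

  by_item = defaultdict(list)
  for judge, item, comment in feedback:
      by_item[item].append(f"{judge}: {comment}")

  for item in sorted(by_item):
      print(f"Item {item}:")
      for note in by_item[item]:
          print(f"  - {note}")
      if len(by_item[item]) > 1:
          print("  (multiple judges converged here - a strong cue to revise!)")

Notice how item 3 drew the same basic concern from three different judges -
exactly the kind of convergence I mentioned above.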
This qualitative procedure for piloting is one of the quickest and
easiest procedures to use!
I've written a paper in which I illustrate, in detail, how I developed
a survey containing both closed- and open-ended items, and then piloted
it (by mail, because the judges were scattered throughout the state) with
a panel of expert judges. I show you my original draft of the survey,
how I compiled the pilot judges' comments item by item, and finally
what the final (revised) survey draft looks like. The interesting and
perhaps surprising thing to note is that there were some instances
where I deliberately chose to 'go out on a limb' and not apply
a given piece of pilot advice! I discuss these cases
as well, and give my rationale for why I went my own way!
This paper is entitled, When 'Do It Yourself' Does It Best: The
Power of Teacher-Made Surveys and Tests. You can get this paper
from the ERIC archives.
Bottom-line time: For the proposal (as well as prospectus - stay
tuned, dear Dissertation Seminar cyber-partners!), you will subdivide
your Instrumentation discussion as follows:
Instrumentation
Name of first type (survey,
standardized instrument, interview protocol, etc.)
Provide a brief narrative overview of the nature/content of the items,
how they are scaled (e.g., open-ended, Likert scale) and related information
on either "pilot-test procedures and results" or "evidence of validity
and reliability." (See the preceding discussion on piloting, as well
as the EDR 610 Intro to Research Module #6,
for the various quantitative indicators.)
Name of second type (survey,
standardized instrument, interview protocol, etc.)
Provide a brief narrative overview of the nature/content of the items,
how they are scaled (e.g., open-ended, Likert scale) and related information
on either "pilot-test procedures and results" or "evidence of validity
and reliability." (See the preceding discussion on piloting, as well
as the EDR 610 Intro to Research Module #6,
for the various quantitative indicators.)
Etc., etc. - continue to 'sort' narrative discussion by specific type of
instrument, covering under each:
- a narrative overview of this instrument (what and how measured);
and
- a summary of any related pilot-test procedures, and/or evidence
(quantitative/qualitative/both) of its validity and reliability.
Name of final type (survey,
standardized instrument, interview protocol, etc.)
Provide a brief narrative overview of the nature/content of the items,
how they are scaled (e.g., open-ended, Likert scale) and related information
on either "pilot-test procedures and results" or "evidence of validity
and reliability." (See the preceding discussion on piloting, as well
as the EDR 610 Intro to Research Module #6,
for the various quantitative indicators.)
- - -
That's it for the 'survey trilogy,' dear cyber-scholars! One more lesson
packet to go, in which we'll 'bring it on home' and return to the construction
of the research proposal (the first step on the way to the thesis or dissertation)!