Using Clickers for Whole Group Peer Evaluations in the Large-Enrollment Undergraduate Classroom

By Linda S. Neff

What is a clicker?

Instructors use clickers, also known as "classroom response systems" or "classroom performance systems," as an instructional technology to engage students in large face-to-face classes by collecting and presenting immediate learner feedback.  Much like a TV remote control, a clicker lets each student enter responses to multiple-choice questions; the responses are beamed to a receiver connected to the instructor's computer.  The clicker software automatically tabulates the answers, and the instructor can present the results as a table or chart on the overhead screen.  Instructors use clickers to take attendance, administer tests or quizzes, gauge student understanding, deliver interactive demonstrations, encourage peer instruction, and gather data.  By collecting real-time data, instructors can adjust their in-class activities to address misunderstandings or alternative perspectives.

What are the problems in the large enrollment classroom?

“A growing body of research points to the value of undergraduate learning environments that set high expectations, promote active and interactive learning, and give students personal validation and frequent feedback on their work” (MacGregor et al., 2000, p. 1).  Unfortunately, at many universities across the nation, undergraduate general education classes typically enroll more than 100 students, which narrows the instructional strategies and assessment techniques available to instructors and thereby hampers student learning.  For the most part, instructors, more often than not part-time instructors, deliver the content in a lecture-centered format and assess students' ability to memorize terms and concepts with multiple-choice quizzes or exams.  In this context, instructors do not challenge the students to think critically or discuss the interrelationships between concepts, nor do they assess student understanding at this higher level of thinking.  The physical classroom environment imposed by the university also hinders student learning: the large lecture hall, with its podium located center stage, inhibits the small-group collaborative learning that we know works in smaller classrooms (Marzano et al., 2001).  Even with the most exuberant lecturer, students are still bored, absent, and lost much of the time.

How, then, can we transform the large-enrollment, passive-lecture course into an interactive, fun, exciting learning experience where students take away valuable knowledge and life skills?

How do clickers solve the problem?

Technologies such as clickers encourage collaborative mobile learning and provide instructors with innovative approaches to increase student-centered learning for undergraduates.  Clickers represent a mobile, personal, networked technology that encourages student-teacher, student-student, and student-content interaction.  By adopting a variety of clicker-based strategies, instructors can transform the undergraduate large-enrollment archaeology class into a positive learner-centered experience.

How did I use clickers in my large-enrollment class?

Lost Tribes and Buried Cities (ANT 104) examines the rise and fall of the most spectacular cultures of the ancient past.  We examine the worldwide diversity of archaeological sites to investigate the roots of global inequality and human adaptation to the environment.  More often than not, a part-time instructor or a graduate student instructor (GSI) teaches the course, with section enrollment averaging around 95-100 students.  Most of the instructors who teach the course use a traditional “sage on the stage” lecture format and assess student learning with quizzes, two or three exams, and a final paper.  Grades rest largely on a student’s ability to memorize vocabulary and concepts.  Attendance remains low, anonymity runs high, and student engagement falls off the bottom of the scale.

In ANT 104, I used clickers for the following instructional strategies:

  1. to deliver student-managed quizzes;
  2. to hold teacher-led interactive discussions to measure learner knowledge, attitudes, and skills;
  3. to assess student understanding; and
  4. to increase attendance, student engagement, and student knowledge retention. 

In this case study, I explore an instructional strategy that promotes student-student interaction while steering power away from the sage on the stage and giving students ownership of their own learning process.  I discuss how you can use clickers for a whole-class peer evaluation of a final small-group oral presentation.

The Final Project

For the final project, a small group of 5-6 students worked together to write an informative paper and develop an educational board game on a course-related topic of their choice. 

Students were graded on the following items: 

  1. group contract,
  2. project topic,
  3. informative report,
  4. game style,
  5. game documentation,
  6. game board and pieces,
  7. a final oral presentation, and
  8. individual collaborative work skills. 

While the project had many learning objectives, I will discuss the learning outcomes associated with the students’ ability to deliver a professional presentation to their peers.

When the students conducted a peer review of each group’s final oral presentation, they developed a better understanding of the assessment criteria, process, and standards.  Student participation in the assessment process guides student understanding of the assessment expectations (Price, O'Donovan, & Rust, 2007).

Peer evaluation of the final oral presentation gave the students the opportunity to present to their peers, as well as to experience the pressure and expectations set by their peers.

During the final presentations, all of the students in the audience actively listened to and evaluated the presentation skills of their peers using a paper rubric and their clickers.  The software tallied the results, which I exported to an Excel spreadsheet using the CPS software’s Raw Response Data Export function.  In the spreadsheet, I computed the averages and posted the final score in Bb Vista.

The final oral presentation grade was the average of the whole-class peer evaluation score and my rubric score for the group. For example, if the class average on the peer evaluation was 26 and my rubric score was 28, then the group’s final score for the oral presentation was 27.

The logistics of collecting, tabulating, and analyzing peer evaluations for 100 students typically take an enormous amount of time, easily enough to discourage an instructor from conducting this type of evaluation.  By collecting all the responses digitally, I was able to post the final oral presentation grades in the Bb Vista grade book in less than half an hour.
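For instructors who prefer a script to spreadsheet formulas, the tallying step is a simple average of averages. Here is a minimal sketch in Python; the CSV layout (one row per student response, with "group" and "score" columns) and the function name are illustrative assumptions, not the actual format of the CPS raw data export, so you would adapt the column names to whatever your clicker software produces:

```python
import csv
from collections import defaultdict

def final_presentation_scores(peer_csv_path, instructor_scores):
    """Average each group's peer-evaluation scores, then average that
    result with the instructor's rubric score for the group.

    peer_csv_path: CSV with "group" and "score" columns (a hypothetical
        layout; adapt to your clicker software's raw export).
    instructor_scores: dict mapping group name -> instructor rubric score.
    """
    # Collect every peer score submitted for each group.
    peer_scores = defaultdict(list)
    with open(peer_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            peer_scores[row["group"]].append(float(row["score"]))

    # Final grade = mean of (class peer average, instructor rubric score).
    finals = {}
    for group, scores in peer_scores.items():
        peer_avg = sum(scores) / len(scores)
        finals[group] = (peer_avg + instructor_scores[group]) / 2
    return finals
```

With a class peer average of 26 and an instructor score of 28, the function returns 27 for that group, matching the worked example above.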

How did clickers contribute to student learning?

Student Engagement.  Both the presenters and the evaluators were actively invested in the quality of the presentation.  Student presenters spoke to the audience instead of to the instructor.  They also used a great deal of humor and seemed genuinely excited about their games.  Many groups had the whole class play their games as part of their presentation.

Meaningful and Timely Feedback.  Peer evaluation hones the critical skills of the evaluator as well as the evaluated.  The rubric’s performance criteria informed students about their skills and knowledge of their topic, as well as their ability to deliver a professional oral presentation in the real world.  The clickers made it easy for classmates to provide this invaluable immediate feedback to the presenting group.

Active Community of Learners.  I designed the final project around small groups of five to six students.  They collaborated on a meaningful “applied” archaeological project where they actively engaged with their peers.  Theoretically, a student could take a large-enrollment course and never speak to another student.  By integrating small-group learning opportunities, students “develop active, collaborative learning environments where understanding of course content is shared and constructed” (MacGregor et al., 2000, p. 48).

Would I do it differently next time?

To close the feedback loop, I would build in another session prior to the final oral presentation in which pairs of groups peer-review each other’s presentations.  Each group would then have the option to act on its reviewers’ constructive feedback.  After the presentation, I might conduct a survey asking the students whether they took the feedback seriously and, if so, how they used it and how valuable they found the experience.

As a first attempt, I set high expectations, promoted active learning, and provided invaluable, frequent feedback in a large-enrollment classroom by using clickers for a final-project peer evaluation.  The peer evaluations using the clickers were efficient, increased student engagement, and perhaps improved student learning.  While using rubrics has increased the quality of my students’ written work, rubrics in isolation do not close the feedback loop.  Students need to learn how to evaluate each other’s work using rubrics in order to fully digest the criteria and standards set forth for each assignment.  They also need to evaluate what they did wrong the last time, consider how they can improve on the next assignment, and have the opportunity to make those improvements.

References

MacGregor, J., Cooper, J. L., Smith, K. A., & Robinson, P. (2000). Strategies for energizing large classes: From small groups to learning communities. San Francisco, CA: Jossey-Bass.

Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: ASCD.

Price, M., O'Donovan, B., & Rust, C. (2007). Putting a social-constructivist assessment process model into practice: Building the feedback loop into the assessment process through peer review. Innovations in Education & Teaching International, 44(2), 143.

Sample — Oral Presentation Peer Evaluation Form

Your Group: _______________ Your Name: _____________________________

Peer Name: _______________ Total Score: _____________________________

Instructions
Here is your opportunity to review the work of your peers. For each category, mark the appropriate level for this group’s performance. Add up the levels and enter the total score at the top of the page.

 

Time-Limit
  4: Presentation is 10 minutes in length.
  3: Presentation is 1-2 minutes over or under the 10-minute limit.
  2: Presentation is 3-4 minutes over or under the 10-minute limit.
  1: Presentation is 5 or more minutes over or under the 10-minute limit.

Preparedness
  4: Group is completely prepared and has obviously rehearsed.
  3: Group seems fairly prepared but might have needed a couple more rehearsals.
  2: Group is somewhat prepared, but it is clear that rehearsal was lacking.
  1: Group does not seem at all prepared to present.

Stays on Topic
  4: Stays on topic all (100%) of the time.
  3: Stays on topic most (90-99%) of the time.
  2: Stays on topic some (75-89%) of the time.
  1: It was hard to tell what the topic was.

Volume
  4: Volume is loud enough to be heard by all audience members throughout the presentation.
  3: Volume is loud enough to be heard by all audience members at least 90% of the time.
  2: Volume is loud enough to be heard by all audience members at least 80% of the time.
  1: Volume is often too soft to be heard by all audience members.

Posture and Eye Contact
  4: Stands up straight, looks relaxed and confident, and establishes eye contact with everyone in the room during the presentation.
  3: Stands up straight and establishes eye contact with everyone in the room during the presentation.
  2: Sometimes stands up straight and establishes eye contact.
  1: Slouches and/or does not look at people during the presentation.

Speaks Clearly
  4: Speaks clearly and distinctly all (95-100%) of the time and mispronounces no words.
  3: Speaks clearly and distinctly all (95-100%) of the time but mispronounces one word.
  2: Speaks clearly and distinctly most (85-94%) of the time and mispronounces no more than one word.
  1: Often mumbles or cannot be understood, or mispronounces more than one word.

Content
  4: Shows a full understanding of the board game being presented.
  3: Shows a good understanding of the board game being presented.
  2: Shows a good understanding of parts of the board game being presented.
  1: Does not seem to understand the board game very well.