“Because the research tells me so”:
Best practices in facilitating peer instruction

As a follow-up to last month’s post on research showing that peer discussion helps students learn, I’d like to share several of the messages emerging from the research on clickers and peer instruction – with particularly pertinent implications for instructors.

1. Peer discussion versus instructor explanation of clicker questions

In last month’s post I shared research on why it’s important to have students talk to each other during a clicker question, because this is where much of the essential learning takes place. But what is the role of the instructor’s explanation? The authors of that study set out to answer this question as well, again using matched pairs of questions to assess student learning from clicker discussions; the two questions in each pair tested the same concept but appeared different to the students. All students answered the first question in the pair individually, but then different groups of students experienced different forms of instruction:

  • Group 1 discussed the question with their neighbors, and were given the answer after discussion (but the instructor did NOT explain the answer)
  • Group 2 did NOT discuss the question with their neighbors, but the instructor gave them their own explanation
  • Group 3 had a combination of the two: They discussed the question with their peers, and then heard the instructor give an explanation of the question.

Then, all students answered the second, similar question on their own, which acted as a test of their learning of the concept.

The last, combined approach is how Peer Instruction is designed to work, but this study allows an empirical test of the hypothesis that it helps students learn.

What they found was very interesting.

  • BOTH peer discussion and the instructor explanation helped students learn (i.e., perform better on the second question)
  • However, the combination mode (where peer discussion was followed by an instructor explanation) resulted in the greatest student learning.
  • This combination mode was helpful for students, regardless of their ability level.
  • However, the situation where instructors only offered an explanation (without an opportunity for peer discussion) was particularly UNhelpful for stronger students, seeming to turn them off from the process.
  • In a non-majors’ course, the addition of peer discussion to the instructor explanation did not have as large an effect as in the majors’ course – suggesting that perhaps non-majors do not see their peers as helpful learning resources.

Take-home message: Have students discuss clicker questions with their peers, but give them your explanation as well.

Source: Smith, Wood, Krauter and Knight, “Combining Peer Discussion with Instructor Explanation Increases Student Learning from In-Class Concept Questions,” CBE-Life Sciences Education, 10, 55-63 (2011).

2. The impact of instructor’s cues

The same research team, headed by Jenny Knight, is currently investigating the impact of the cues that instructors give to students when they introduce a clicker question. Their research question is: “Do students engage in higher quality reasoning when instructors provide models of, and reminders to, use reasoning in their discussions?”

For this study, they have recorded, transcribed, and analyzed 72 different student conversations in introductory biology classes. When giving students a clicker question, instructors were told either (a) simply to ask students to discuss the question, or (b) to ask students to discuss the question and to remember to focus on reasoning in their discussion. In the latter group, students also viewed a short video demonstrating what is meant by “quality” reasoning in a conversation.

They found a significant shift in the type of reasoning that students used in their conversations, such that students who got the cue to use reasoning were more likely to use reasoning, more likely to have both partners in the conversation use reasoning, and more likely to use evidence in that reasoning. They were also more likely to ask questions during their conversations to confirm their own reasoning, or express doubt about their partner’s reasoning.

I have also written previously on the importance of framing your use of clickers for your students, so that they know what is expected of them during your use of clickers:
Getting students on-board with clickers and peer discussion.

Take-home message: Remind students to use reasoning in their discussions, to prompt higher-quality conversations.

Source: Jenny Knight and Sarah Wise, University of Colorado. Unpublished work.

3. The impact of course credit

In my workshops, I often counsel instructors to give what I call the “whiff of credit” for participating in clickers – give students participation-only credit for clickers, with perhaps some small amount of extra credit (often offsetting a poor homework score) for the correct answer. I base that recommendation in large part on an interesting study by Shannon Willoughby. One caveat: The findings of this study are a little less clear than the others in this post.

In this study, in an astronomy classroom, clicker questions were graded differently in two different sections, as below:

  • High-stakes: Clicker points given for correct answers (1 point for the correct answer only)
  • Low-stakes: Clicker points given for participation only (1 point for any answer)

Clickers were worth 4% of the student grades. They recorded and analyzed student conversations in these sections, and categorized the nature of the conversations. They also collected data on student performance on the clicker questions and on a conceptual test on the content.

Both groups of students got similar grades in the course, and similar scores on the conceptual test, suggesting that the grading incentive didn’t affect their learning of the material. However, the types of conversations varied quite dramatically based on the type of credit given, such that the low-stakes groups were more likely to:

  • Have longer discussions (i.e., make more statements)
  • Say what they thought the answer was
  • Ask for clarification on the answer
  • Restate the question and ask a new question

So, giving students credit for getting the right answer to a clicker question doesn’t actually serve the purpose that instructors might hope that it would: to prompt student discussion. It appears to have the opposite effect, shutting down conversation to some degree.

Take-home message: Give participation-only credit in order to create a productive atmosphere for frank discussion.

Source: Willoughby and Gustafson, “Technology talks: Clickers and grading incentive in the large lecture hall,” American Journal of Physics, 77 (2) (2009).

Other articles of interest —

Biology-focused overview: Caldwell, “Clickers in the Large Classroom: Current Research and Best-Practice Tips,” CBE-Life Sciences Education, 6 (2007).

General college education and psychology of clickers: Mayer et al., “Clickers in College Classrooms: Fostering learning with questioning methods in large lecture classes,” Contemporary Educational Psychology, 34, 51-57 (2009).

General overview and rationale: Mazur, “Farewell, Lecture?” Science, 323 (2009).