Talk:Multiple Choice Questions (Teaching & Learning)

From UBC Wiki

SoTL Journal Club (March 12, 2013)

Selected article for our discussion:

  • Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). Multiple-choice tests exonerated, at least of some charges: Fostering test-induced learning and avoiding test-induced forgetting. Psychological Science, 23(11), 1337–1344.


Some pointers for discussion:

1. As the introduction notes, MCQs are often criticized as blunt instruments for fostering, or even measuring, learning. Yet one of the most pervasive tools for interactive engagement in large classes, particularly in the sciences, has been the clicker question, which is almost always posed as an MCQ. Is there a contradiction here?

2. The data presented (e.g., in Figure 1) provide evidence that taking a test can promote student learning, not just measure it (see also, for example, the Belluck NYT article referenced in the paper). How well, and how widely, do we communicate such findings to colleagues (and indeed to students)? How might we do it better?

3. The paper presents a complicated and sophisticated experimental design. One issue the authors highlight is the difference in performance on the initial test between students who took an MCQ test and those who took a cued-recall (short-answer) test. This makes it difficult to compare the retrieval effects of interventions with different initial efficacy. One way around this may be to apply negative marking to incorrect MCQ answers, such that random guessing yields an expected score of zero (rather than 25%, on average, for 4-choice MCQs). Does anyone have experience using this method of penalizing incorrect answers as a deterrent to random guessing?
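The arithmetic behind that zero-expected-score penalty can be sketched quickly. A minimal illustration, assuming the standard "formula scoring" scheme (+1 for a correct answer, −1/(k−1) for an incorrect one on a k-choice item); the function name is just for illustration:

```python
def expected_guess_score(k, penalty=None):
    """Expected per-item score for random guessing on a k-choice MCQ.

    penalty defaults to 1/(k-1), the standard formula-scoring penalty
    that makes pure guessing average to zero.
    """
    if penalty is None:
        penalty = 1 / (k - 1)
    p_correct = 1 / k
    # Expected value: P(correct)*(+1) + P(incorrect)*(-penalty)
    return p_correct * 1 - (1 - p_correct) * penalty

# 4-choice item, no penalty: random guessing averages 25%.
print(expected_guess_score(4, penalty=0))  # → 0.25
# 4-choice item with a -1/3 penalty: random guessing averages zero.
print(expected_guess_score(4))             # → 0.0
```

The penalty of 1/(k−1) follows directly from setting the expected value of a guess to zero: (1/k)·1 − ((k−1)/k)·p = 0 gives p = 1/(k−1), i.e. −1/3 for a 4-choice item.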

ShayaGolparian (talk)22:01, 6 March 2013

Thanks. This is handy.

JudyCKChan (talk)16:40, 7 March 2013