A multiple change score comparison of traditional and behavioral college teaching procedures.
Fill-in-only responding beats traditional lectures for college learning, and the idea still works online.
Research in Context
What this study did
Alba et al. (1972) compared two ways to teach college students. One group got traditional lectures. The other used fill-in-only responding: students had to write out their answers rather than pick from multiple-choice options.
The study tracked learning with both fill-in and multiple-choice tests. It wanted to see which method helped students learn more.
What they found
Students who used fill-in-only responding learned more. They scored higher on both types of tests. Even on multiple-choice questions, the fill-in group beat the lecture group.
How this fits with other research
Shaw et al. (2024) extend this work online. They tested interteaching against discussion boards in graduate classes. As in the 1972 study, active student responding won: quiz scores rose even when classes moved to Zoom.
Wong et al. (2009) add a twist. They tried attaching bonus points to interteaching, but the points did not improve exam scores. This suggests the core method matters more than extra rewards.
Perez et al. (2015) offer a caution. Their stimulus equivalence lectures boosted quiz scores right away, but scores dropped by the final exam. This hints that fill-in-only gains may fade without later review.
Why it matters
You can swap lectures for fill-in tasks tomorrow. Ask students to write short answers every few minutes. Use slips of paper or the chat box. This simple change beats talking at them. Revisit the material later to lock it in.
Replace your next five lecture slides with fill-in prompts. Have students write answers on scrap paper or in chat.
At a glance
Original abstract
Seventy-six students in a college-level course in human development were divided into an experimental and a control group of approximately equal size. Both groups were given a pretest composed of fill-in and multiple-choice items. The control group was exposed to conventional educational practices while the experimental group was treated in a manner similar to that described by Johnston and Pennypacker (1971), performing only on fill-in items. Post-test results showed significantly greater changes in the experimental group, regardless of the type of test item, although the difference was greater in the case of the fill-in items. The results are discussed in terms of their implications for both future research and tactics in the development of improved teaching technologies.
Journal of Applied Behavior Analysis, 1972 · doi:10.1901/jaba.1972.5-121