Effects of imposed postfeedback delays in programmed instruction.
Add a 10-second pause after each question in computerized lessons: students use the time to study the material and score higher.
Research in Context
What this study did
College students answered computer-based quiz questions. After each answer the screen froze for 10 seconds before the next item appeared.
The researchers compared scores with the pause on versus off. They used a three-component multiple schedule, so every student experienced each condition.
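The within-subject design can be sketched in a few lines of Python. This is an illustrative reconstruction, not the study's software: the condition names and the `schedule_conditions` helper are assumptions, and it simply rotates every student through all three of Experiment 1's conditions across question sets.

```python
import itertools

# The three conditions from Experiment 1 (names are our own labels).
CONDITIONS = ["no_delay", "noncontingent_delay", "contingent_delay"]

def schedule_conditions(num_sets: int) -> list[str]:
    """Cycle through the conditions so each student sees every one
    repeatedly across their question sets."""
    cycle = itertools.cycle(CONDITIONS)
    return [next(cycle) for _ in range(num_sets)]

print(schedule_conditions(6))
```

Because each student serves as their own control, differences in quiz scores can be attributed to the delay condition rather than to differences between students.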
What they found
Students scored higher when the forced 10-second pause followed each answer.
The extra time let them reread the material, which boosted accuracy on later questions.
How this fits with other research
Schutz et al. (1962) argued for immediate reinforcement, even developing special inks so answers could be marked the moment they were written. The new study flips that idea: a short delay, not speed, helped learning.
Snycerski et al. (2004) found that a 15-second delay to water reinforcement slowed lever-press acquisition. Their delay impaired learning, while the 10-second pause here aided it. The tasks differ (lever pressing for water versus extra study time before quiz items), so the results do not conflict.
Watson et al. (2007) also used computerized lessons and found prompting styles matter. Together the papers show both pause length and prompt type shape how much students learn.
Why it matters
If you run computer lessons, insert a noncontingent 10-second pause after each answer. Learners use the pause to reread the screen, and scores rise while session time grows only slightly.
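A minimal sketch of that recommendation, assuming a simple text-based quiz loop: after feedback on each item, the program holds for a noncontingent delay while the material stays visible. The function name, parameters, and injectable `sleep` hook are our own; they are not from the study.

```python
import time

def grade_and_delay(response: str, answer_key: str,
                    delay_s: float = 10.0, sleep=time.sleep) -> bool:
    """Score one quiz item, show feedback, then impose a noncontingent
    postfeedback pause with the material still on screen.

    `sleep` is injectable so tests can avoid real 10-second waits.
    """
    correct = response.strip().lower() == answer_key.strip().lower()
    print("Correct!" if correct else f"Incorrect. The answer was: {answer_key}")
    sleep(delay_s)  # screen stays put; the learner can reread the frame
    return correct
```

The key design point, per Experiment 2, is that the material must remain visible during the pause: a blank screen during the delay produced no benefit over no delay at all.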
Program a 10-second screen freeze after every answer in your next digital worksheet.
Original abstract
Imposed postfeedback delays promote discrimination training; the present experiments determined whether they also improve performance in programmed instruction. In two experiments, college students completed 45 sets of Holland and Skinner's (1961) programmed text on behavior analysis in a computerized format in a three-component multiple schedule. In Experiment 1, the conditions were (a) no delay between questions, (b) a 10-s delay after each question (noncontingent delay), and (c) a 10-s delay after each question answered incorrectly (contingent delay). Noncontingent delay produced better performance than no delay and contingent delay. To determine whether performance increased in the noncontingent delay condition because subjects studied the material during delay periods, Experiment 2 tested three conditions: (a) no delay between questions, (b) a 10-s delay after each question (noncontingent delay), and (c) a 10-s delay after each question with the screen blank during the delay period. Noncontingent delay produced better performance than no delay, but there was no difference in performance between no delay and noncontingent delay with blank screen. Hence, noncontingent delay improved performance because students used delay periods to study. Furthermore, subjects preferred noncontingent delay to the other conditions, and session time increased only slightly.
Journal of Applied Behavior Analysis, 1994 · doi:10.1901/jaba.1994.27-483