Evaluating the Efficacy of and Preference for Interactive Computer Training with Student-Generated Examples
Students love making their own examples on screen, but the fun alone does not raise test scores.
Research in Context
What this study did
Aquino et al. (2024) tested a new interactive computer lesson built around examples that other students had written, delivered on screen with practice prompts and instant feedback.
The team asked: Do students like this more than plain videos or text? Does it help them score higher on quizzes?
They ran a single-case study with college students learning behavior-analysis terms.
What they found
Students picked the interactive module every time. They said it felt fun and personal.
Yet their quiz scores stayed flat. The fancy format did not beat video or text on learning gains.
How this fits with other research
Herzog et al. (2026) saw the same split. Kids liked computer math games, but the stronger students learned more slowly because the tasks were too easy for them.
Radley et al. (2019) showed that quick group polls match longer 1-to-1 preference tests. Aquino’s team used a short digital poll and got the same clear winner.
Wang et al. (2025) reviewed 15 studies of computerized cognitive training for youth with ASD. They found real skill gains. Aquino’s null score result looks like a contradiction, but the earlier work targeted different skills and added longer practice blocks.
Why it matters
Liking matters. If students hate the format, they drop out. Use interactive, student-made examples to keep them in their seats. Just do not expect higher test marks unless you add more practice and feedback loops. Pair the fun module with brief quizzes or fluency drills to turn preference into performance.
Keep the interactive student-made slides, then add three quick practice questions after each chunk.
Original abstract
Designing effective and preferred teaching practices for undergraduate students are common goals in behavior analytic training programs. A preliminary study by Nava et al. (2019) showed that undergraduate students generally rated peer-generated examples of the principles of behavior analysis as more preferred, relatable, and culturally responsive than traditional textbook examples. However, peer-generated examples did not result in any improvement in performance on concept knowledge assessments. The current study extended the study by Nava et al. by embedding peer-generated examples within interactive computer training (ICT) to provide opportunities for active responding, prompt fading, automated feedback, and practice with examples and nonexamples. Results showed that ICT did not produce reliable improvements in knowledge assessments but were preferred to video examples and textual examples. In addition, students reported that certain interactive features contributed to their preference for ICT. We discuss ways to further improve the efficacy of the preferred ICT package.
Behavior Analysis in Practice, 2024 · doi:10.1007/s40617-024-01007-y