Time to completion of web-based physics problems with tutoring.
Computer time stamps can flag clients who rush, stall, or stay steady before you see errors.
01 Research in Context
What this study did
Warnakulasooriya et al. (2007) watched how college students moved through online physics problems. They used the computer's built-in log to record every click and the time between clicks.
No teaching or reward system was added. The team simply described the speed patterns they saw.
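The kind of log the study relied on is simple to reproduce: record each event with a timestamp and the seconds elapsed since the previous event. A minimal sketch in Python (the `ClickLogger` class and its labels are hypothetical, not the study's actual software):

```python
import time

class ClickLogger:
    """Minimal event log: stores each click with a timestamp
    and the seconds elapsed since the previous click."""

    def __init__(self):
        self.events = []
        self._last = None

    def log(self, label):
        now = time.monotonic()
        gap = None if self._last is None else now - self._last
        self.events.append({"label": label, "t": now, "gap": gap})
        self._last = now

logger = ClickLogger()
logger.log("open_problem")
logger.log("submit_answer")
# logger.events[1]["gap"] now holds the seconds between the two clicks
```

The `gap` field is what the analyses below work from: a session reduces to a list of inter-click intervals.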
What they found
Three clear groups showed up in the logs. One group raced to the end in a straight line, likely copying answers. A second group started fast then stalled, a sign of giving up. The last group took steady, thoughtful steps.
These time trails gave a quick red-flag alert about who needed help or honesty checks.
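The three profiles can be operationalized with two numbers per session: total time and the longest single pause. A sketch under illustrative assumptions (the threshold values and the function name `classify_session` are made up for this example, not taken from the study):

```python
def classify_session(gaps, rush_total=60.0, stall_gap=600.0):
    """Label a session from its inter-click gaps (in seconds).
    Thresholds are illustrative, not from the 2007 study:
    - 'rushed': whole task finished in under rush_total seconds
    - 'stalled': any single gap longer than stall_gap seconds
    - 'steady': everything else
    """
    if sum(gaps) < rush_total:
        return "rushed"
    if max(gaps) > stall_gap:
        return "stalled"
    return "steady"

classify_session([5, 8, 6])      # → 'rushed'
classify_session([30, 900, 40])  # → 'stalled'
classify_session([40, 55, 60])   # → 'steady'
```

In practice the cutoffs would need tuning to the task: a "rush" on a ten-part problem looks very different from a rush on a single flashcard.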
How this fits with other research
Tarifa-Rodriguez et al. (2024) and Tarifa-Rodriguez et al. (2023) both catalog similar log-based measures for college coursework. Their reviews use clicks, time, and text to gauge engagement, showing that the 2007 idea has since grown into a full toolkit.
Kim et al. (2014) used eye-tracker logs instead of click logs and also found slow, fast, and stuck profiles while students read graphs. Same concept, different sensor.
Ben-Yehudah et al. (2019) seems to clash at first: they found that digital text hurts students with ADHD. But they measured comprehension, not speed. Warnakulasooriya looked at time, Ben-Yehudah looked at understanding: two separate outcomes, so there is no real conflict.
Why it matters
You already collect response latency in discrete trials. Add a simple timer to any online task you give clients. A sudden drop or jump in seconds can signal cheating, fatigue, or mastery before the data sheet shows it. One glance at the log can guide your next prompt or honesty reminder without extra tests.
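One way to catch such a "sudden drop or jump" automatically is to compare the last few latencies against the earlier baseline. A minimal sketch; the function name, window size, and z-score cutoff are illustrative choices, not clinical standards:

```python
from statistics import mean, stdev

def latency_flag(latencies, window=5, z=2.0):
    """Flag when the mean of the last `window` response latencies drifts
    more than `z` standard deviations from the earlier baseline.
    Defaults are illustrative, not validated thresholds."""
    if len(latencies) <= window + 1:
        return None  # not enough data for a baseline
    baseline, recent = latencies[:-window], latencies[-window:]
    mu, sd = mean(baseline), stdev(baseline)
    if sd == 0:
        return None  # flat baseline; z-score undefined
    drift = (mean(recent) - mu) / sd
    if drift <= -z:
        return "sudden drop"  # faster: possible cheating or mastery
    if drift >= z:
        return "sudden jump"  # slower: possible fatigue or disengagement
    return None

trial_latencies = [10, 11, 9, 10, 12, 11, 10, 9, 11, 10, 3, 2, 3, 2, 3]
latency_flag(trial_latencies)  # → 'sudden drop'
```

A flag is a prompt to look at the session, not a verdict: mastery and answer-copying both produce a drop, and only context distinguishes them.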
Open the session history, sort by time-to-answer, and re-teach or re-motivate the bottom and top 10% outliers.
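That sort-and-flag step is a one-liner once sessions are tabulated. A sketch with a hypothetical helper (`flag_outliers` and the sample data are invented for illustration, not part of any real platform's API):

```python
def flag_outliers(sessions, frac=0.10):
    """Sort (client, seconds) pairs by time-to-answer and return the
    fastest and slowest `frac` of them for follow-up."""
    ranked = sorted(sessions, key=lambda s: s[1])
    k = max(1, int(len(ranked) * frac))
    return ranked[:k], ranked[-k:]  # (fastest, slowest)

sessions = [("A", 12), ("B", 95), ("C", 40), ("D", 310),
            ("E", 55), ("F", 8), ("G", 61), ("H", 73),
            ("I", 48), ("J", 120)]
fast, slow = flag_outliers(sessions)
# fast → [('F', 8)]; slow → [('D', 310)]
```

The fast tail maps onto the study's suspected answer-copiers, the slow tail onto the stallers; the middle 80% corresponds to the steady central group.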
02 At a glance
03 Original abstract
We studied students performing a complex learning task, that of solving multipart physics problems with interactive tutoring on the web. We extracted the rate of completion and fraction completed as a function of time on task by retrospectively analyzing the log of student-tutor interactions. There was a spontaneous division of students into three groups, the central (and largest) group (about 65% of the students) being those who solved the problem in real time after multiple interactions with the tutorial program (primarily receiving feedback to submitted wrong answers and requesting hints). This group displayed a sigmoidal fraction-completed curve as a function of logarithmic time. The sigmoidal shape is qualitatively flatter for problems that do not include hints and wrong-answer responses. We argue that the group of students who respond quickly (about 10% of the students) is obtaining the answer from some outside source. The third group (about 25% of the students) represents those who interrupt their solution, presumably to work offline or to obtain outside help.
Journal of the Experimental Analysis of Behavior, 2007 · doi:10.1901/jeab.2007.70-06
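The "sigmoidal fraction-completed curve as a function of logarithmic time" in the abstract can be modeled as a logistic function of log time. A minimal sketch; the parameter values below are illustrative, not fitted to the study's data:

```python
import math

def fraction_completed(t, t_half, slope):
    """Logistic fraction-completed curve in log time.
    t_half is the time at which half the group has finished;
    slope sets how steep the S-curve is. Values are illustrative."""
    return 1.0 / (1.0 + math.exp(-slope * (math.log(t) - math.log(t_half))))

# Fraction of the central group finished at 1, 5, and 25 minutes,
# assuming half the group finishes by 5 minutes:
[round(fraction_completed(t, t_half=5.0, slope=2.0), 2) for t in (1.0, 5.0, 25.0)]
# → [0.04, 0.5, 0.96]
```

A flatter curve, as the abstract reports for problems without hints and wrong-answer feedback, corresponds to a smaller `slope`.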