In Dreams Begin Responsibility: Why and How to Measure the Quality of Graduate Training in Applied Behavior Analysis.
Graduate ABA programs should post real-world outcome data so the field can see which schools produce the most effective BCBAs.
Research in Context
What this study did
Critchfield (2015) is a position paper. It argues that graduate ABA programs should track how well their alumni perform in real jobs.
Programs should then publish those results. That would let the field see which schools produce the most effective BCBAs.
What they found
The paper did not collect new data. It argues that consumer trust erodes when poorly trained BCBAs enter the field.
Tracking alumni success, it contends, is the clearest way to show a program is worth its tuition.
How this fits with other research
Najdowski et al. (2021), Levy et al. (2022), and Mathur et al. (2022) all extend the idea. They say programs should also be judged on antiracist and cultural-responsiveness training.
Critchfield (2018) adds another layer: programs should show that alumni can use derived stimulus relations to teach more efficiently.
Britton et al. (2021) push for ethical-behavior metrics. McComas et al. (2025) want anti-ableism scores. Jackson-Perry et al. (2025) say critical autism studies should count too.
Together these papers turn one metric into a dashboard. Field success is still the core, but now it includes equity, ethics, and emergent-learning skill.
Why it matters
If you hire BCBAs or sit on a hiring panel, ask for alumni outcome data. Programs that share it are betting on quality. If you teach, start tracking your own graduates’ client gains and publish them. One page of numbers per cohort is enough. It protects the field and gives future students a reason to choose your program.
Email your alma mater and ask for their latest alumni effectiveness report.
Original abstract
Although no one knows just how effective graduate training may be in creating effective practitioners of applied behavior analysis, there are plenty of logical and historical reasons to think that not all practitioners are equally competent. I detail some of those reasons and explain why practitioner effectiveness may be a more pressing worry now than in the past. Because ineffective practitioners harm the profession, rigorous mechanisms are needed for evaluating graduate training programs in terms of the field effectiveness of their practitioners. Accountability of this nature, while difficult to arrange, would make applied behavior analysis nearly unique among professions, would complement existing quality control processes, and would help to protect the positive reputation and vigorous consumer demand that the profession currently enjoys.
Behavior Analysis in Practice, 2015