Modern statistical practices in the experimental analysis of behavior: An introduction to the special issue
The free Monte Carlo Shiny app lets you compute defensible p-values for ABAB phase data in minutes.
Research in Context
What this study did
Young (2019) wrote the lead paper for a special issue on better stats for single-case work.
He gives readers a free Shiny app that runs Monte Carlo tests right on your ABAB phase data.
The tool spits out p-values that fit how behavior analysts think about level, trend, and overlap.
What they found
The paper is a how-to guide, not an experiment, so no new data are reported.
The takeaway is the app itself: you upload your graph, click run, and get a clear-cut p-value.
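The app's exact algorithm isn't spelled out in this summary, but the core Monte Carlo idea behind tools like it can be sketched in a few lines: shuffle the observed data points across phases many times, and see how rarely the simulated phase difference matches or beats the real one. The sketch below is illustrative Python, not the app's actual code; the `monte_carlo_p` function, the simple mean-difference statistic, and the sample data are all assumptions made for the example.

```python
import random

def monte_carlo_p(baseline, treatment, n_sims=10_000, seed=1):
    """Randomization test for a two-phase comparison: shuffle the pooled
    observations into phases of the original sizes, and count how often
    the simulated mean difference is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(baseline) / len(baseline)
    pooled = list(baseline) + list(treatment)
    extreme = 0
    for _ in range(n_sims):
        rng.shuffle(pooled)
        sim_base = pooled[:len(baseline)]
        sim_treat = pooled[len(baseline):]
        diff = sum(sim_treat) / len(sim_treat) - sum(sim_base) / len(sim_base)
        if diff >= observed:
            extreme += 1
    return extreme / n_sims  # proportion of simulations as extreme as the data

# Hypothetical session data: a clear baseline-to-treatment jump
# yields a small p-value, because few shuffles reproduce the gap.
p = monte_carlo_p([4, 5, 3, 6, 4], [9, 11, 10, 12, 9])
```

A full ABAB analysis would restrict the shuffling to orderings the design could actually have produced (e.g., permuting phase-change points), which is where a dedicated tool earns its keep over a hand-rolled loop like this.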
How this fits with other research
Barnard-Brak et al. (2022) updated the conversation in a later special section. They translate even more fancy stats into plain English, building on Young’s 2019 starter kit.
Ninci (2023) turns the same Monte Carlo idea into day-to-day practice rules. She tells BCBAs exactly how many phase changes you need and what to eyeball first, so you don't misread your own graph.
Manolov et al. (2022) give a flowchart for picking the right effect measure before you start. Use their chart first, then plug your data into Young’s app to test it.
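Manolov et al.'s flowchart itself isn't reproduced here, but as a concrete example of the kind of effect measure such a chart can point you to, here is Nonoverlap of All Pairs (NAP), a widely used nonoverlap index for single-case data. Choosing NAP for this sketch is my illustration, not a recommendation from their chart, and the sample data are made up.

```python
def nap(baseline, treatment):
    """Nonoverlap of All Pairs: the share of all baseline/treatment
    pairs in which the treatment point improves on the baseline point,
    with ties counting half. Assumes higher values mean improvement."""
    pairs = [(b, t) for b in baseline for t in treatment]
    score = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return score / len(pairs)

nap([4, 5, 3, 6, 4], [9, 11, 10, 12, 9])  # complete nonoverlap -> 1.0
```

An effect size like NAP tells you how big the change is; a Monte Carlo p-value tells you how surprising it is, which is why the two are natural companions.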
Tanious et al. (2020) offer two new consistency numbers for ABAB designs. You can run their CONDAP and CONEFF first, then let Young’s Monte Carlo tool give you a p-value for the same data set.
Why it matters
You no longer have to guess if your treatment graph “looks” better. Download the free app, upload your phase data, and get a p-value that reviewers will accept. Pair it with the newer checklists from Ninci (2023) and Manolov et al. (2022) and you have a full statistical shield against visual bias.
Upload your last ABAB graph to the Monte Carlo app and save the p-value output for your next report.
Original abstract
Group-based experimental designs are an outgrowth of the logic of null-hypothesis significance testing and thus, statistical tests are often considered inappropriate for single-case experimental designs. Behavior analysts have recently been more supportive of efforts to include appropriate statistical analysis techniques to evaluate single-case experimental design data. One way that behavior analysts can incorporate statistical analyses into their practices with single-case experimental designs is to use Monte Carlo analyses. These analyses compare experimentally obtained behavioral data to simulated samples of behavioral data to determine the likelihood that the experimentally obtained results occurred due to chance (i.e., a p value). Monte Carlo analyses are more in line with behavior analytic principles than traditional null-hypothesis significance testing. We present an open-source Monte Carlo tool, created in Shiny, for behavior analysts who want to use Monte Carlo analyses as part of their data analysis.
Journal of the Experimental Analysis of Behavior, 2019 · doi:10.1002/jeab.511