MIEBL: Measurement of Individualized, Evidence-Based Learning Criteria Designed for Discrete Trial Training
Use the free MIEBL calculator to set mastery thresholds that fit each learner's starting point instead of default 80-100% rules.
01 Research in Context
What this study did
Ramos (2025) built a free web tool called MIEBL that sets mastery criteria for discrete-trial lessons.
You enter the learner's first-try (baseline) accuracy and the mastery level you want, and the tool returns a custom pass-fail cutoff.
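The paper's exact probability model isn't reproduced in this summary, but the core idea of a baseline-adjusted cutoff can be sketched with a one-sided binomial test: pick the smallest score a learner still responding at baseline accuracy would rarely reach by chance. Everything below (the function names, the 10-trial session, the 30% baseline, the 5% alpha) is illustrative, not MIEBL's actual computation:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def mastery_cutoff(n_trials, baseline, target, alpha=0.05):
    """Smallest correct-trial count k that a learner still responding at
    baseline accuracy would reach in fewer than `alpha` of sessions.
    Also returns the chance a learner at the target mastery level passes."""
    for k in range(n_trials + 1):
        if binom_sf(k, n_trials, baseline) < alpha:
            return k, binom_sf(k, n_trials, target)
    # Baseline is so high that no cutoff separates it from mastery
    return n_trials, binom_sf(n_trials, n_trials, target)

# Hypothetical example: 10-trial DTT session, 30% baseline accuracy,
# 90% true-mastery target -> a 6/10 cutoff, not a blanket 80% rule.
cutoff, pass_rate = mastery_cutoff(10, 0.30, 0.90)
print(cutoff, round(pass_rate, 3))
```

The point of the sketch: the same 80% rule that is trivially easy for a learner starting at 70% baseline is needlessly strict for one starting near 0%, which is why a learner-specific cutoff matters.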
What they found
The paper is a tutorial: it demonstrates the tool and shows its results line up with prior research. It does not test children or report new outcome data.
How this fits with other research
Schaaf et al. (2015) taught braille with plain 100% mastery. MIEBL swaps that fixed rule for a data-driven cutoff.
Tiernan et al. (2022) reviewed thirty years of Precision Teaching. Most studies picked fluency numbers by hand. MIEBL offers a fresh, stats-based way to set those numbers.
Merlo et al. (2023) gave us the BEHAVE app for tracking mand data. MIEBL is another free web app, but it targets mastery criteria rather than ongoing data collection.
Why it matters
Stop using 80% or 90% for every skill. Plug your baseline and target into MIEBL and let the tool give a fair, learner-specific goal. It takes one minute and may stop you from moving too fast or holding a kid back too long.
Try MIEBL with one learner: enter baseline accuracy and desired mastery, then use the new cutoff in your next DTT session.
02 At a glance
03 Original abstract
Informing the selection of a performance criterion for discrete trial training has been the subject of a growing body of empirical research, but an explicit framework has not yet been established. This paper proposes a tool for selecting a performance criterion that uses individualized assessment characteristics and mastery level goals and is grounded on sound probability theory. This tool is demonstrated to provide results that are consistent with existing research outcomes, and its use is advocated to better inform practitioners and researchers on the implications of their performance assessment choices. A tutorial with ready-to-use software is provided. Practitioners can use the tool to evaluate their assessment strategies and outcome expectations. Practitioners can use the tool to help make judgments about whether to continue with a teaching strategy or switch to another one. Researchers can use the tool to account for bias between observed performance and actual mastery level at the end of instruction, which is a confounder to observed performance during maintenance or generalization. Researchers and practitioners can use the tool to make better-informed decisions. The online version contains supplementary material available at 10.1007/s40617-025-01058-9.
Behavior Analysis in Practice, 2025 · doi:10.1007/s40617-025-01058-9