Evaluating the use of exploratory factor analysis in developmental disability psychological research.
Most factor analyses in our journals skip basic quality steps—check the math before you build treatment plans on them.
01 Research in Context
What this study did
Norris et al. (2010) reviewed 66 factor-analysis studies published over a 10-year period in five leading developmental-disability journals. They checked each one against published EFA guidelines: model choice (true EFA vs. principal components analysis), rotation method, subject-to-item ratio, and factor-retention criteria.
The goal was to see how many studies follow the math rules that make factor results trustworthy.
What they found
Most papers broke at least one major EFA guideline. The common slips: running principal components analysis and calling it EFA (only 35% used a true EFA model), and defaulting to orthogonal rotation (59%) even though factors in this field usually correlate.
In short, most published factor solutions in our field stand on shaky ground.
How this fits with other research
Selau et al. (2025) and Huang et al. (2014) show the upside: when you follow the rules, EFA gives clear, useful factors for adaptive behavior and fine-motor checklists.
Liyew et al. (2025) did the same with ATEC ratings. These later studies used bigger samples and reported rotation details—exactly the fixes Norris et al.'s audit calls for.
Perez et al. (2015) also audited motor tests and likewise found weak reliability reporting. Together the reviews say: our assessments can be valid, but only if we publish the boring methodological details.
Why it matters
Before you trust any new factor-based questionnaire, open the method section. Look for subject-to-item ratio, extraction method, and rotation type. If those lines are missing, treat the factors as guesswork, not gospel. Ask authors for the numbers or run your own check. Good factors guide good interventions; bad factors waste time and money.
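As a quick self-check, the method-section review above can be sketched as a small Python function. The function name and inputs are illustrative, not from any real dataset; the 5:1 subject-to-item cutoff follows the threshold used in the audit.

```python
# Quick methods-section audit for a reported factor analysis, following
# the criteria reviewed by Norris et al. (2010). The 5:1 ratio matches
# the paper's cutoff; everything else here is an illustrative sketch.

def audit_efa_report(n_subjects, n_items, extraction=None, rotation=None,
                     retention_criteria=()):
    """Return a list of red flags for a reported factor analysis."""
    flags = []
    ratio = n_subjects / n_items
    if ratio < 5:  # minimum subject:item ratio used in the audit
        flags.append(f"subject:item ratio {ratio:.1f}:1 is below 5:1")
    if extraction is None:
        flags.append("extraction method not reported")
    elif extraction.lower() == "pca":
        flags.append("PCA is a components model, not true EFA")
    if rotation is None:
        flags.append("rotation not reported")
    if len(retention_criteria) < 2:
        flags.append("fewer than two factor-retention criteria reported")
    return flags

# Example: a hypothetical 40-item scale normed on 150 subjects.
print(audit_efa_report(150, 40, extraction="pca", rotation="varimax",
                       retention_criteria=("kaiser",)))
```

An empty list back means the basics are at least reported; any flag means treat the factors as guesswork until the authors supply the numbers.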
Pull the last factor-based scale you used and verify the paper lists extraction method, rotation, and subject:item ratio.
02 At a glance
03 Original abstract
Exploratory factor analysis (EFA) is a widely used but poorly understood statistical procedure. This paper described EFA and its methodological variations. Then, key methodological variations were used to evaluate EFA usage over a 10-year period in five leading developmental disabilities journals. Sixty-six studies were located and evaluated on multiple procedural variations. Only 35% (n = 23) of studies used EFA; principal components analysis was the model used most often (n = 40, 61%). Orthogonal rotation was used most often (n = 39, 59%). A large portion of studies ran analyses with a subject: item ratio larger than 5:1 (n = 49, 74%). Most researchers employed multiple criteria for retaining factors (n = 45, 68%). Overall, results indicated that published recommendations and guidelines for the use of EFA are largely ignored.
Journal of Autism and Developmental Disorders, 2010 · doi:10.1007/s10803-009-0816-2