Further Evaluation of the Validity and Reliability of the Performance Diagnostic Checklist-Human Services
The PDC-HS stays reliable when you score real staff videos, especially if you have behavior-analytic training.
Research in Context
What this study did
Cymbal et al. (2020) tested the PDC-HS with real videos. Twenty-one consultants each scored three clips of actual staff interviews. They rated four problem areas: training, task clarification, resources, and consequences.
The team checked whether scores stayed the same when participants rescored the videos about a month later. They also compared behavior analysts to non-analysts.
What they found
Consultants identified the indicated domains most of the time. Test-retest reliability was good (r = 0.82). Consultants with behavior-analytic backgrounds scored more accurately than those without, but both groups were fairly consistent.
Scores were somewhat lower than in earlier studies that used scripted video vignettes instead of real interviews.
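For readers who want to run this kind of check themselves: test-retest figures like r = 0.82 are typically Pearson correlations between the first and second scoring sessions, and domain-level consistency can be summarized as simple percent agreement. The sketch below uses invented scores and the scipy library; it illustrates the general calculation only and is not the authors' actual analysis or data.

```python
# Minimal sketch: test-retest reliability (Pearson r) and percent
# agreement on flagged domains. All numbers here are invented for
# illustration; they are NOT data from Cymbal et al. (2020).
from scipy.stats import pearsonr

# One rater's hypothetical PDC-HS domain scores, session 1 vs. session 2
session_1 = [4, 1, 3, 0, 2, 4, 1, 3, 2, 0]
session_2 = [4, 2, 3, 0, 2, 3, 1, 3, 2, 1]

r, p = pearsonr(session_1, session_2)
print(f"Test-retest reliability: r = {r:.2f}")

# Agreement on which domain was flagged for intervention in each video
flagged_t1 = ["training", "resources", "training", "consequences"]
flagged_t2 = ["training", "resources", "task clarification", "consequences"]
agreement = sum(a == b for a, b in zip(flagged_t1, flagged_t2)) / len(flagged_t1)
print(f"Domain agreement: {agreement:.0%}")
```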
How this fits with other research
Kaiser et al. (2022) also tested a checklist (the SDQ) and found weak reliability. Cymbal’s tighter training and clear videos may explain why the PDC-HS did better.
Chou et al. (2013) showed the Supports Intensity Scale works for adults with intellectual disabilities. Both studies confirm that well-built checklists can guide service decisions.
Willner (2005) created an earlier self-audit tool for support plans. Cymbal's work extends that idea by focusing on staff performance rather than the quality of written plans.
Why it matters
You can trust the PDC-HS when you watch real staff and take notes. Show your team short clips, then score together. Behavior analysts can train others in one hour. Use the results to pick the right fix: more training, better tools, or clearer consequences.
Try it yourself: film a 3-minute staff interview, score it with the PDC-HS, and compare your scores with a coworker.
Original abstract
The Performance Diagnostic Checklist-Human Services (PDC-HS) is primarily an informant-based assessment tool designed to help consultants identify variables contributing to problematic employee performance and formatted specifically for use in human service settings. In an evaluation of the tool’s validity and reliability, a previous study used a series of video vignettes to simulate a consultant using the PDC-HS to troubleshoot performance problems; participants scored each video twice, providing measures of validity, test–retest reliability, and interrater reliability. Although the results suggested that the PDC-HS was valid and reliable, the vignettes may have lacked external validity. In the current study, we replicated and extended previous research to include video-based scenarios from actual consultant interviews using the PDC-HS. Twenty-one participants scored three videos, each with a different set and number of domains indicated for intervention. Approximately one month later, participants scored the same videos to assess test–retest reliability. Results suggest that participants were largely able to identify the problematic PDC-HS domains and that the tool was generally reliable, although scores were somewhat lower than those reported in previous research. Results also suggest that the tool is most effectively implemented by a consultant with at least some background in behavior analysis.
Journal of Organizational Behavior Management, 2020 · doi:10.1080/01608061.2020.1792027