Improving optimal prompt learning through multilayer fusion and latent dirichlet allocation
A new prompt-engineering method that blends multilayer attention with topic modeling beats current baselines when classifying ABA therapy transcripts in low-data situations.
01Research in Context
What this study did
The team built a smart prompt tool called GAM-LDA. It mixes attention layers with topic modeling.
They tested it on a small pile of ABA therapy transcripts. The goal was to label each line correctly.
Only a few examples were given to the system each time. This is called few-shot learning.
What they found
GAM-LDA beat every baseline it faced. Few-shot accuracy jumped on the ABA dialogue set.
The tool needed less data yet made fewer mistakes.
How this fits with other research
Tincani et al. (2020) showed most SGD studies over-use multiply-controlled mands. Their map of verbal operants gives the labels that GAM-LDA now spots faster.
Kim et al. (2023) proved that trimming trial dose to 12 speeds kid learning. Likewise, GAM-LDA trims the data dose while keeping label accuracy high.
Gwynette et al. (2020) used instructive feedback to spark new words. GAM-LDA could flag where such feedback is missing in a transcript, guiding the next prompt you give.
Why it matters
You can let GAM-LDA pre-code your session notes. It spots mand, tact, or intraverbal frames in seconds. That frees you to design the next teaching move instead of drowning in paperwork. Try feeding it five sample transcripts on Monday and watch it label the rest.
Want CEUs on This Topic?
The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.
Join Free →Export last week’s transcripts, feed five to the free GAM-LDA demo, and paste the auto-labels into your Excel tracker.
02At a glance
03Original abstract
Recent advances in few-shot learning have demonstrated the potential of prompt-based techniques with pre-trained models, eliminating the need for extensive fine-tuning. However, challenges such as obtaining optimal prompts and addressing data scarcity in specialized domains remain challenging. We introduce a novel framework incorporating a Global Attention Mechanism (GAM) that effectively integrates features from multiple layers of pre-trained language models, enhanced by Latent Dirichlet Allocation (LDA) generated topic features for prompt optimization. Extensive experiments on four datasets consistently show that our approach outperforms state of-the-art baselines. The strategic integration of GAM with layer-specific features and LDA topics proves particularly effective in extracting valuable latent information for few-shot learning scenarios, yielding significant improvements in specialized domains, as evidenced by enhanced performance in therapeutic dialogue classification within a Applied Behavior Analysis clinical dataset.
, 2025 · doi:10.3389/frobt.2025.1579990