This guide draws in part from “Research (Design) and Practice: Can't we all just get along?” by Cara Phillips, PhD, BCBA-D (BehaviorLive), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →

Research (Design) and Practice: Can't we all just get along? becomes clinically important the moment a team has to turn good intentions into reliable action inside clinic sessions and day-to-day service delivery. The practical stakes show up in stronger conceptual consistency and better translational decision making, not in abstract discussion alone. As the source material notes, the field of behavior analysis has a well-recognized gap between research and practice. That framing matters because behavior analysts, trainees, researchers, and the clients affected by analytic rigor all experience that gap differently, and the BCBA is often the person expected to organize those perspectives into something observable and workable.

Instead of treating the topic as background reading, a stronger approach is to ask what it changes about assessment, training, communication, or implementation the next time the same pressure point appears in ordinary service delivery. The course emphasizes clarifying the potential side effects of the research/practice divide on the provision of services, identifying three advantages to using research design to evaluate treatment effects in applied settings, and applying those ideas to real cases. In other words, this is not just something to recognize from a training slide or a professional conversation. It asks behavior analysts to tighten case formulation and to discriminate when a familiar routine no longer matches the actual contingencies shaping client outcomes or organizational performance. Cara Phillips's perspective is part of the framing here, which anchors the topic in recognizable professional experience rather than abstract advice.

Clinically, the research-practice gap sits close to the heart of behavior analysis because the field depends on precise observation, good environmental design, and a defensible account of why one action is preferable to another. When teams under-interpret the gap, they often rely on habit, personal tolerance for ambiguity, or the loudest stakeholder in the room. When they over-interpret it, they can bury the relevant response under jargon or unnecessary process. The topic is valuable because it creates a middle path: enough conceptual precision to protect quality, and enough applied focus to keep the skill usable by supervisors, direct staff, and allied partners who do not all think in the same vocabulary. That balance is exactly what makes the material worth studying even for experienced practitioners. A BCBA who understands it well can usually detect problems earlier, explain decisions more clearly, and prevent small implementation errors from growing into larger treatment, systems, or relationship failures. The issue is not just whether the analyst can define the research-practice gap; it is whether the analyst can identify it in the wild, teach others to respond to it appropriately, and document the reasoning in a way that would make sense to another competent professional reviewing the same case.
The background to the research-practice gap is worth tracing because the field did not arrive at this issue by accident. The profession grew faster than the systems around it, which means clinicians inherited workflows, assumptions, and training habits that do not always match current expectations. As the source material notes, the divide is fostered by both sides of the fence, in very different ways. Once that background is visible, the gap stops looking like a niche concern and starts looking like a predictable response to growth, specialization, and higher demands for accountability.

The context also includes how the topic is usually taught. Some practitioners first meet it through short-form staff training, isolated examples, or professional folklore. That can be enough to create confidence, but not enough to produce stable application. The more practice moves into clinic sessions and day-to-day service delivery, the more costly that gap becomes: the work starts to involve real stakeholders, conflicting incentives, time pressure, documentation requirements, and sometimes interdisciplinary communication. Those layers make a shallow understanding unstable even when the underlying principle seems familiar.

Another important background feature is the way the topic's framing shapes interpretation. The presentation discusses the ways in which the requirements for publication impede the dissemination of findings that emerge from clinical settings. That matters because professionals often learn faster when they can see where an issue sits in a broader service system rather than hearing it as a detached principle. If the presentation includes a panel, Q and A, or practitioner discussion, that context is useful in its own right: it exposes the kinds of objections, confusions, and implementation barriers that analytic writing alone can smooth over.

For a BCBA, this background does more than provide orientation; it changes how present-day problems are interpreted. Instead of assuming every difficulty represents staff resistance or family inconsistency, the analyst can ask whether the setting, training sequence, reporting structure, or service model has made good practice harder to execute than it first appeared. That is often the move that turns frustration into a workable plan. Context does not solve the case on its own, but it tells the clinician which variables deserve attention before blame, urgency, or habit take over. Seen this way, the background is not filler; it is part of the functional assessment of why the problem shows up so reliably in practice.
The practical implication of the research-practice gap is not just better language; it is better allocation of attention when the team has to decide what to fix first. In most settings, that means asking for more precise observation, more honest reporting, and a better match between the intervention and the conditions in which it must work. When analysts ignore those implications, treatment or operations can remain superficially intact while the real mechanism of failure sits in workflow, handoff quality, or poorly defined staff behavior.

The topic also changes what should be coached. Supervisors often spend time correcting the most visible error while the more important variable remains untouched. Better supervision usually means identifying which staff action, communication step, or assessment decision is actually exerting leverage over the problem. That may mean teaching technicians to discriminate context more accurately, helping caregivers respond with less drift, or helping leaders redesign a routine that keeps selecting the wrong behavior from staff. Those are practical changes, not philosophical ones.

Another implication involves generalization. A skill or policy can look stable in training and still fail in clinic sessions and day-to-day service delivery because competing contingencies were never analyzed. The course gives BCBAs a reason to think beyond the initial demonstration and to ask whether the response will survive under real pacing, imperfect implementation, and normal stakeholder stress. That perspective improves programming because it makes maintenance and usability part of the design problem from the start instead of rescue work after the fact.

Finally, the course pushes clinicians toward better communication. Analytic quality depends on whether the BCBA can translate the logic into steps that other people can actually follow: how the analyst explains rationale, sets expectations, and documents why a given recommendation is appropriate. When that communication improves, teams typically see cleaner implementation, fewer repeated misunderstandings, and less need to re-litigate the same decision every time conditions become difficult. The most valuable clinical use of the course is a measurable shift in what the team asks for, does, and reviews when the same pressure returns.
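One of the course's stated objectives is identifying advantages of using research design to evaluate treatment effects in applied settings. As a concrete illustration of the kind of quantitative summary that can sit alongside visual analysis, the sketch below computes percentage of non-overlapping data (PND) for a hypothetical A-B comparison. The function name and all session values are invented for illustration; PND is one published non-overlap metric among several, and no single number replaces visual analysis or clinical judgment.

```python
# A minimal sketch, assuming hypothetical data: percentage of
# non-overlapping data (PND), a common single-case effect metric.
# For a behavior targeted for reduction, PND is the share of
# intervention-phase points that fall below the lowest (best)
# baseline point. Numbers are invented for illustration only.

def pnd_for_reduction(baseline, intervention):
    """Percent of intervention sessions below the lowest baseline value."""
    if not baseline or not intervention:
        raise ValueError("Both phases need at least one data point.")
    floor = min(baseline)  # best session observed during baseline
    non_overlapping = sum(1 for x in intervention if x < floor)
    return 100.0 * non_overlapping / len(intervention)

baseline = [14, 12, 15, 13, 14]      # phase A: responses per hour
intervention = [13, 8, 6, 7, 5, 4]   # phase B: first point still overlaps

print(f"PND = {pnd_for_reduction(baseline, intervention):.0f}%")  # PND = 83%
```

The design choice worth noticing is that the metric is anchored to the single best baseline point, so one unusually good baseline session can deflate PND; that sensitivity is one reason visual analysis stays primary in applied evaluation.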
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ethically, the research-practice gap cannot be treated as a neutral technical topic, because the way it is handled changes who is protected, who is informed, and who absorbs the burden when things go poorly. That is also why Codes 1.01, 1.04, and 2.01 belong in the discussion: they keep attention on fit, protection, and accountability rather than letting the team treat the issue as a purely technical exercise. In applied terms, the Code matters here because behavior analysts are expected to do more than mean well. They are expected to provide services that are conceptually sound, understandable to relevant parties, and appropriately tailored to the client's context. When the topic is handled casually, the analyst can drift toward convenience, false certainty, or role confusion without naming it that way.

There is also an ethical question about voice and burden. Behavior analysts, trainees, researchers, and clients do not all bear the consequences of analytic decisions equally, so a BCBA has to ask who is being asked to tolerate the most effort, uncertainty, or social cost. In some cases that concern sits under informed consent and stakeholder involvement; in others it sits under scope, documentation, or the obligation to advocate for the right level of service. Either way, the point is the same: the ethically easier option is not always the one that best protects the client or the integrity of the service.

The topic is especially useful because it helps analysts link ethics to real workflow. It is one thing to say that dignity, privacy, competence, or collaboration matter. It is another to show where those values are won or lost in case notes, team messages, billing narratives, treatment meetings, supervision plans, or referral decisions. Once that connection becomes visible, the ethics discussion becomes more concrete: the analyst can identify what should be documented, what needs clearer consent, what requires consultation, and what should stop being delegated or normalized.

For many BCBAs, the deepest ethical benefit here is humility. The topic can invite strong opinions, but good practice requires a more disciplined question: what course of action best protects the client while staying within competence and making the reasoning reviewable? That question is less glamorous than certainty, but it is usually the one that prevents avoidable harm. Ethical strength in this area is visible when the analyst can explain both the intervention choice and the guardrails that keep the choice humane and defensible.
A useful assessment stance is to ask what information is reliable enough to act on today and what still requires clarification. That first step matters because teams often jump from a title-level problem to a solution-level preference without examining the functional variables in between. A better process is to specify the target behavior, identify the setting events and constraints surrounding it, and determine which part of the current routine can actually be changed.

Data selection is the next issue. Depending on the case, useful information may include direct observation, work samples, graph review, documentation checks, stakeholder interview data, implementation fidelity measures, or evidence that a current system is producing predictable drift. The important point is not to collect everything; it is to collect enough to discriminate between likely explanations. That prevents the analyst from making a polished but weak recommendation based on the most available story rather than the most relevant evidence.

Assessment also has to include feasibility. Even technically strong plans fail when they ignore the conditions under which staff or caregivers must carry them out. That is why the decision process should include workload, training history, language demands, competing reinforcers, and the amount of follow-up support the team can actually sustain. This is where consultation or referral sometimes becomes necessary: if the case exceeds behavioral scope, if medical or legal issues are primary, or if another discipline holds key information, the behavior analyst should widen the team rather than forcing a narrower answer.

Good decision making ends with explicit review rules. The team should know what would count as progress, what would count as drift, and when the current plan should be revised instead of defended, as in the sketch that follows. That is especially important in topics that carry professional identity or organizational pressure, because those pressures can make people protect a plan after it has stopped helping. A BCBA who documents decision rules clearly is better able to explain later why the chosen action was reasonable and how the available data supported it. In short, assessing the problem well means building enough clarity that the next decision can be justified to another competent professional and to the people living with the outcome.
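To make "explicit review rules" concrete, here is a minimal sketch of one way a team might write a decision rule down before the data arrive. Everything in it, the thresholds, the window size, and the session values, is hypothetical and chosen only for illustration; it is not a clinical recommendation.

```python
# A minimal sketch of a pre-committed review rule, assuming a behavior
# targeted for reduction. Thresholds, window size, and data are
# hypothetical. The point is that the rule exists before the review
# meeting, so the team evaluates the plan instead of defending it.
from statistics import median

def review_decision(sessions, goal, window=5):
    """Compare the median of the last `window` sessions to the goal
    and to the median of the preceding window."""
    if len(sessions) < 2 * window:
        return "insufficient data: keep collecting"
    earlier = median(sessions[-2 * window:-window])
    latest = median(sessions[-window:])
    if latest <= goal:
        return "progress: consider thinning supports"
    if latest >= earlier:
        return "drift: revise the plan rather than defend it"
    return "improving but above goal: hold and re-review"

sessions = [14, 12, 15, 13, 14, 11, 13, 9, 8, 7]  # responses per hour
print(review_decision(sessions, goal=5))
# -> improving but above goal: hold and re-review
```

Medians over a fixed window are used here only because they resist single-session outliers; a team could just as reasonably pre-commit to a goal-line or trend rule, as long as the rule is chosen before the data come in.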
In day-to-day practice, this course should lead to concrete changes rather than better-sounding conversations alone. For many BCBAs, the best starting move is to identify one current case or system that already shows the problem it describes. That keeps the material grounded. Whether the pressure point involves reimbursement, privacy, feeding, language, school implementation, burnout, or culture, there is usually a live example in the caseload or organization. Using that example, the analyst can define the next observable adjustment to documentation, prompting, coaching, communication, or environmental arrangement.

It is also worth tightening review routines. Topics like this one often degrade because they are discussed broadly and checked weakly. A better habit is to build one small but recurring review into existing workflow: a graph check, a documentation spot-audit, a school-team debrief, a caregiver feasibility question, a technology verification step, or a supervision feedback loop. Small recurring checks usually do more for maintenance than one dramatic retraining event because they keep the contingency visible after the initial enthusiasm fades.

Another practical shift is to improve translation for the people who need to carry the work forward. Staff and caregivers do not need a lecture on the entire conceptual background each time; they need concise, behaviorally precise expectations tied to the setting they are in. That might mean rewriting a script, narrowing a target, clarifying a response chain, or revising how data are summarized. Those small moves make the material usable because they lower ambiguity at the point of action.

The broader takeaway is that continuing education should change contingencies, not just comprehension. When a BCBA uses this course well, stronger conceptual consistency and better translational decision making become easier to protect because the topic has been turned into a repeatable practice pattern. That is the standard worth holding: not whether the course sounded helpful in the moment, but whether it leaves behind clearer action, cleaner reasoning, and more durable performance in the setting where the learner, family, or team actually needs support. If the material has really been absorbed, the proof will show up in a revised routine and in better outcomes the next time the same challenge appears.
Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.
Research (Design) and Practice: Can't we all just get along? — Cara Phillips · 1 BACB General CEU · $19.99
Take This Course →

We extended this guide with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.
No credit card required. Cancel anytime.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.