Have you ever opened session notes from earlier in the week and found that, even though the program was clear and the RBT solid, the trial counts didn’t quite line up with what was prescribed?
Breakdowns in trial count adherence don’t usually happen because someone forgot the plan. They happen mid-session, when attention is split, time is tight, and a dozen small decisions stack up at once. By the time a supervisor reviews the data, the moment that mattered has already passed.
This leaves clinical leaders stuck in a familiar loop: retraining capable staff, adjusting plans that were already strong, and still wondering why prescribed trial counts don’t hold the way they should.
Your RBTs Are Fighting Cognitive Overload, Not the Plan
Picture a technician starting a session with a learner they haven’t seen in a few days. A couple of sessions happened without them. Some goals were missed. Others were run, but maybe not at the frequency originally planned.
Before the first trial even begins, a quiet set of questions is already running through their mind. What needs to be prioritized today? Which targets can wait? How much repetition was recommended for each goal? Where did we leave off last time?
None of this is written on the wall.
Even with good training and clear note templates, the reality of a live session introduces friction. In those moments, attention gets pulled in too many directions at once. Giving instructions competes with remembering, tracking, and keeping pace. It’s what happens when one person has to hold too much context in their head at once.
Your Clinical Data Only Show What Already Went Wrong
Visibility for ABA supervisors usually comes after the session is over.
You review notes, look at trial counts, and scan for missed goals or uneven pacing. Whether everything looks fine or something’s off, you’re always working from the same position: hindsight.
That can be manageable when you’re overseeing a small team or sitting in on sessions regularly. It gets much harder as teams grow, coverage rotates, or services expand across locations. The distance between you and the session increases, but the responsibility for quality of care is still yours.
So when trial counts drift, the response is often reactive. You address it in supervision, clarify expectations, and document reminders. All reasonable moves. And yet the pattern repeats, because the data may be right, but they reach you too late.
The Part of Treatment Fidelity Most ABA Systems Never Surface
Inside the session, a different problem is unfolding.
The technician isn’t thinking about reports or future supervision. They’re managing a live environment: learner engagement, target priorities, session flow. All while trying to remember what was prescribed and how much repetition was recommended for each goal.
Most ABA systems don’t help with that moment. They record what happened, but they don’t orient the clinician while decisions are being made. Expectations stay abstract. Trial counts live in plans or notes, not in the flow of the session itself. That leaves RBTs relying on memory, habit, and personal judgment to answer the constant question: Am I on track right now?
When expectations aren’t visible during teaching, the drift usually shows up like this:
- Priorities shift toward what feels most immediate or manageable.
- “Enough trials” becomes a judgment call rather than a shared reference point.
- Small deviations accumulate across sessions and staff.
What this reflects is a gap between the treatment plan as prescribed and the plan as applied. And that’s the part that typical ABA systems never surface, because most data collection systems are reliable record keepers but less helpful guides in the moment.
What New Findings on Trial Count Adherence in ABA Are Suggesting
Trial count drift is often seen as a training issue, but the data reveal something deeper: an environmental failure, in which the system doesn’t support the clinician’s cognitive load.
Motivity has a foundational commitment to clinical science. Our NIH-funded study examines trial count adherence under real clinical conditions and explores what changes when expectations are clearer during sessions.
Download the whitepaper to understand what the data suggest.