
As physiotherapists, we need to constantly update our knowledge to deliver the most effective therapeutic interventions. One solid way to do this is to follow new research papers and implement new techniques in our clinical practice.
However, research papers often lack information on the exact intervention applied and its dose. In other words, they lack proper intervention fidelity.
This research paper, titled ‘Intervention Fidelity: A Common Language for Clinicians and Researchers’, published in AJPT, highlights the issue of fidelity. I went through the study to find out why fidelity is important and how it can transform rehabilitation.
What is intervention fidelity?
Fidelity means the accurate and faithful implementation of an intervention. According to the authors, fidelity is already well established in psychology, education and behavioural intervention research. They underscore that fidelity is weak when it comes to rehabilitation.
We will discuss how this affects rehabilitation, but let us first understand the components of fidelity.
Importance of intervention fidelity and its 5 components
According to the study, fidelity is used in 2 ways:
- Fidelity ensures that the intervention is delivered as intended during the trial.
- Strong fidelity improves the credibility of a trial.
Without solid fidelity, a trial's findings remain vague.
There are 5 components of intervention fidelity (a sketch of how they might be recorded follows this list):
- Adherence
- Differentiation
- Quality
- Participant responsiveness
- Dosage
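For illustration only, here is a minimal sketch (not from the paper) of what recording these five components for a single treatment session might look like. Every field name and value below is hypothetical; real trials typically use validated checklists and trained fidelity raters.

```python
# Hypothetical fidelity record for one treatment session.
# Field names and values are invented for illustration.
session_fidelity = {
    "adherence": {"planned_ingredients": 8, "delivered_ingredients": 7},
    "differentiation": "no comparator techniques (e.g. passive stretching) used",
    "quality": "therapist cued self-initiated movement as described in the manual",
    "participant_responsiveness": "child engaged for roughly 20 of 30 minutes",
    "dosage": {"duration_min": 30, "sessions_per_week": 2},
}

# A simple adherence score: how many of the planned key ingredients were delivered?
adherence = session_fidelity["adherence"]
adherence_pct = adherence["delivered_ingredients"] / adherence["planned_ingredients"] * 100
print(f"Adherence this session: {adherence_pct:.0f}%")  # prints 88%
```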
How weak fidelity hampers rehabilitation
1. We Can’t Tell What Actually Works
The most fundamental problem: Without fidelity, we don’t know if we’re testing the intervention we think we’re testing.
To explain this, the authors of the paper cite a study by Taghizadeh and colleagues, who examined 103 rehabilitation research protocols claiming to use motor learning theories. Only 24 explicitly described their active ingredients. Only 3 provided enough detail for replication.¹
That means 100 out of 103 studies—97%—left clinicians guessing about what actually happened.
The consequences?
- Type I errors: We might conclude an intervention works when it doesn’t—because something else in the session (therapist attention, extra time, placebo effects) actually drove the change.
- Type II errors: We might conclude an intervention doesn’t work when it actually could—because it wasn’t delivered properly in the first place.
As the authors note, fidelity ensures “internal validity, confirming that an intervention works under ideal or controlled conditions.” Without it, we’re building evidence on sand.
2. We Can’t Distinguish New Approaches from Old Habits
Let’s consider a research team that develops a novel intervention. They train therapists extensively. They run a clinical trial. They publish impressive results.
Clinicians read the study and think: “This sounds just like what I already do.”
And maybe they’re right. Or maybe they’re wrong—but without fidelity data, no one can tell.
However, if the fidelity component of ‘differentiation’ were measured and reported, it would confirm whether the new intervention actually differs from its comparator. When researchers fail to measure and report differentiation:
- Novel interventions get diluted when implemented clinically
- Old, ineffective practices persist under new names
- The field stagnates because we can’t identify what’s genuinely innovative
The AJPT paper¹ explains this using the Sitting Together and Reaching to Play (START-Play) study, which compared START-Play with usual care physical therapy in early intervention (UC-EI).
You can watch the video to see the differences between the two interventions (adapted from the study paper).
Basically, START-Play focuses on activity-based interventions, while UC-EI focuses on stretching, strengthening and other rehabilitation exercises.
The two interventions take completely different approaches, and an early intervention therapist may need to follow both. Replicating each intervention in clinical settings is possible because of the study's solid fidelity.
3. We Don’t Know If It’s the “What” or the “How Much”
Infants receiving START-Play also continued their usual early intervention services. So they got BOTH different content AND more total therapy.
As the authors explain: “Because of the differing doses, clinicians are unable to determine if the total dose (amount) of therapy or the key ingredients of the respective interventions contributed more substantially to the research findings.”
4. Research Findings Get “Lost in Translation”
Promising interventions fail in the real world not because they don’t work, but because they’re not delivered as designed.
The paper cites Hulleman and Cordray’s study of a motivational education intervention that showed strong efficacy in controlled trials but diminished effectiveness when translated to real classrooms. Why?
Teachers introduced variations based on their individual biases and lost sight of key principles during implementation.
This is the “lost in translation” problem. And it’s rampant in rehabilitation.
5. Unwarranted Practice Variation Thrives
Weak fidelity also widens unwarranted variations in practice.
What is unwarranted variation?
The paper quotes Sutherland and Levesque’s definition: “patient care that differs in ways that are not a direct and proportionate response to available evidence; or to the health care needs and informed choices of patients.”
In plain language: Patients get different treatments not because their needs differ, but because their therapists were trained in different places, learned different habits, or interpret evidence differently.
The authors note that early intervention services “widely vary from region to region, state to state, and even district to district within the same state.” Best practice themes like “family-centred care” or “natural environments” are too vague to constrain this variation. They don’t define what therapists actually DO.¹
Fidelity metrics offer a solution. When interventions are clearly defined—when we know their key ingredients, their differentiators, their optimal dosage—we have a standard to work toward. Variation becomes a matter of informed individualisation rather than random drift.
Without fidelity, variation runs unchecked. And rehabilitation’s credibility suffers.
6. We Miss Opportunities to Refine and Improve
Finally, weak fidelity prevents us from understanding which ingredients matter most.
When fidelity is measured accurately, researchers can do more than confirm that an intervention “works.” They can analyse which specific components drive outcomes.
The paper describes statistical approaches like complier average causal effect (CACE) analysis, which identifies whether participants who received MORE of the key ingredients showed better outcomes than those who received less. This moves beyond “did the group improve?” to “what, exactly, caused the improvement?”
Studies using CACE analysis often find larger treatment effects than standard intent-to-treat analyses—because they isolate the signal from the noise. They identify the active ingredients.
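To make the idea concrete, here is a minimal, hypothetical sketch (not taken from the paper) of why a CACE estimate can exceed the standard intent-to-treat estimate. All numbers are invented, and the simple ratio estimator shown assumes, among other things, that noncompliance occurs only in the treatment group.

```python
# Hypothetical illustration of a complier average causal effect (CACE) estimate.
# All numbers are invented; this simple ratio (ITT effect divided by the
# proportion of compliers) is only one common way of estimating CACE.

# Mean improvement (e.g. points on a motor-function scale) in each randomised group
mean_change_treatment = 6.0
mean_change_control = 4.0

# Intent-to-treat (ITT) effect: compares groups as randomised,
# regardless of whether the intervention was delivered with fidelity
itt_effect = mean_change_treatment - mean_change_control   # 2.0 points

# Fidelity data show only 60% of the treatment group actually received
# the key ingredients at the intended dose (the "compliers")
proportion_compliers = 0.60

# CACE: estimated effect among those who received the intervention as designed
cace = itt_effect / proportion_compliers                    # about 3.3 points

print(f"ITT effect: {itt_effect:.1f} points")
print(f"CACE:       {cace:.1f} points")
```

In this toy example, the estimated effect among compliers (about 3.3 points) is larger than the diluted intent-to-treat estimate (2 points), which mirrors the paper's point that CACE analyses often reveal larger treatment effects.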
But this is only possible when fidelity metrics exist. Without them, we know something worked, but not why; we can't refine it, streamline it, or teach it efficiently.
Conclusion
When I write an article on any issue, let's say frozen shoulder, I make it a practice to scan the research papers. As a clinician and a blogger, I often struggle to implement the intervention (exercises) to achieve the results claimed by the study.
So, fidelity is equally important for researchers and clinicians when translating findings into practice, as highlighted by the authors. A robust fidelity metric may bridge this gap. Fidelity can give us a precise, detailed recipe that turns good intentions into consistent, replicable results.
The authors further add that it may eventually lead to a standardised practice pattern, which is currently scattered. This scattered pattern not only threatens the credibility of physical therapy across practice settings, but also affects patients.





