Should We Trust Our Judgments about the Proficiency of Motivational Interviewing Counselors? A Glimpse at the Impact of Low Inter-rater Reliability

Chris Dunn, Doyanne Darnell, Sheng Kung Michael Yi, Mark Steyvers, Kristin Bumgardner, Sarah Peregrine Lord, Zac Imel, David C. Atkins


Standardized rating systems are often used to evaluate the proficiency of Motivational Interviewing (MI) counselors. The published inter-rater reliability (degree of coder agreement) in studies using these instruments varies a great deal; some studies report MI proficiency scores with only fair inter-rater reliability, while others report scores with excellent reliability. How much can we trust scores with fair versus excellent reliability? Using a Monte Carlo statistical simulation, we compared the impact of fair (0.50) versus excellent (0.90) reliability on the error rates of falsely judging a given counselor as MI proficient or not proficient. We found that improving the inter-rater reliability of any given score from 0.5 to 0.9 would markedly reduce proficiency judgment errors, a reduction that would be critical in some MI evaluation situations. We discuss practical tradeoffs inherent in various MI evaluation situations and offer suggestions for applying findings from formal MI research to problems faced by real-world MI evaluators, to help them minimize the MI proficiency judgment errors that bear the greatest cost.
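To illustrate the kind of comparison the abstract describes, the sketch below simulates how observed proficiency scores with fair (0.50) versus excellent (0.90) inter-rater reliability could lead to false judgments of proficiency. This is not the authors' actual simulation; the scale mean, standard deviation, proficiency cutoff, and sample size are illustrative assumptions, and measurement error is modeled under a simple classical test theory assumption (reliability = true-score variance / observed-score variance).

```python
# Minimal Monte Carlo sketch (hypothetical values, not from the paper):
# compare how often counselors are misjudged as proficient / not proficient
# when observed scores have reliability 0.50 versus 0.90.
import numpy as np

rng = np.random.default_rng(0)

N = 100_000                      # simulated counselors
TRUE_MEAN, TRUE_SD = 3.5, 1.0    # hypothetical MI rating scale values
CUTOFF = 4.0                     # hypothetical proficiency threshold

true_scores = rng.normal(TRUE_MEAN, TRUE_SD, N)
truly_proficient = true_scores >= CUTOFF

for reliability in (0.50, 0.90):
    # Classical test theory: reliability = var(true) / var(observed),
    # so the error SD is TRUE_SD * sqrt((1 - r) / r).
    error_sd = TRUE_SD * np.sqrt((1 - reliability) / reliability)
    observed = true_scores + rng.normal(0.0, error_sd, N)
    judged_proficient = observed >= CUTOFF

    false_pos = np.mean(judged_proficient & ~truly_proficient)
    false_neg = np.mean(~judged_proficient & truly_proficient)
    print(f"reliability={reliability:.2f}  "
          f"false-positive rate={false_pos:.3f}  "
          f"false-negative rate={false_neg:.3f}")
```

Under these assumptions, the lower-reliability condition produces noticeably more counselors misclassified on either side of the cutoff, which is the pattern of judgment error the study examines.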


Keywords: motivational interviewing; inter-rater reliability; proficiency judgments; counselor proficiency; Motivational Interviewing Treatment Integrity


This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.

This journal is operated by the University Library System, University of Pittsburgh as part of its D-Scribe Digital Publishing Program and is cosponsored by the Motivational Interviewing Network of Trainers.

ISSN 2160-584X (online)