Bias may exist in rating of medical trainees

Peter Yeates, M.B.B.S., M.Clin.Ed., of the University of Manchester, United Kingdom, and colleagues conducted a study to examine whether observations of the performance of postgraduate year 1 physicians influence raters' scores of subsequent performances.

"The usefulness of performance assessments within medical education is limited by high interrater score variability, which neither rater training nor changes in scale format have successfully ameliorated. Several factors may explain raters' score variability, including a tendency of raters to make assessments by comparing against other recently viewed learners, rather than by using an absolute standard of competence. This has the potential to result in biased judgments," according to background information in the article.

The study consisted of an internet-based randomized experiment using videos of Mini Clinical Evaluation Exercise (Mini-CEX) assessments of postgraduate year 1 trainees interviewing new patients. Participants were 41 attending physicians from England and Wales experienced with the Mini-CEX, with 20 watching and scoring 3 good trainee performances and 21 watching and scoring 3 poor performances. All then watched and scored the same 3 borderline video performances. The study was completed between July and November 2011.

The researchers found that attending physicians exposed to videos of good medical trainee performances rated subsequent borderline performances lower than those who had been exposed to poor performances, consistent with a contrast bias. The implication is that a rater of a trainee's performance may be unconsciously influenced by the previous trainee, rather than objectively assessing the individual in isolation.

"With the movement toward competency-based models of education, assessment has largely shifted to a system that relies on assessments of performance compared with a fixed standard at which competence is achieved (criterion referencing). Although this makes conceptual sense (with its inherent ability to reassure both the profession and the public that an acceptable standard has been reached), the findings in this study, which are consistent with contrast bias, suggest that raters may not be capable of reliably judging in this way."

More information: JAMA. 2012;308[21]:2226-2232.
