Objectives
Traditionally, a 10% review has been the basis for quality assurance programs in anatomic pathology. The effectiveness of such reviews has been questioned and alternative methodologies suggested. This study investigates the error detection rates of four quality assurance protocols.

Methods
The detection rate for diagnostic errors in surgical pathology was calculated over a one-year period using four different review procedures: (1) random 10% review; (2) correlation of internal and external diagnoses following solicited external expert opinion; (3) correlation of internal diagnoses with outside diagnoses in cases sent for review by a second institution treating the patient; and (4) a focused review of dermatopathology cases over a three-month period. The error rate was expressed as the percentage of reviewed cases in which the initial diagnosis differed from the review diagnosis. Error detection rates were compared among the four methods.

Results
The 10% random review detected 17 errors in 2147 cases (0.8%). Solicited case consultations requested by clinicians or internal pathologists detected 5 diagnostic errors in 70 cases (7.1%). Unsolicited reviews by outside institutions in the course of patient care detected 3 diagnostic errors in 190 cases (1.6%). Focused review of dermatopathology material disclosed 5 diagnostic errors in 59 cases (8.5%).

Conclusions
Focused reviews initiated by the diagnostic concerns of a clinician or pathologist, unsolicited reviews prompted by treatment at another institution, and subspecialty-based reviews appear more effective at detecting diagnostic errors than 10% random review. Quality assurance programs should include focused reviews in addition to the 10% random review to maximize error detection.
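The reported percentages follow directly from the error and case counts, with the error rate defined as errors divided by reviewed cases, expressed as a percentage. A minimal sketch of that arithmetic (counts are taken from the Results; the function name is illustrative):

```python
# Error rate = diagnostic errors / reviewed cases, as a percentage.
# Counts below are taken from the Results section; error_rate is an
# illustrative helper, not part of the study's methodology.

def error_rate(errors: int, cases: int) -> float:
    """Return the error rate as a percentage, rounded to one decimal."""
    return round(100.0 * errors / cases, 1)

methods = {
    "10% random review": (17, 2147),
    "solicited consultation": (5, 70),
    "unsolicited outside review": (3, 190),
    "focused dermatopathology review": (5, 59),
}

for name, (errors, cases) in methods.items():
    print(f"{name}: {errors}/{cases} = {error_rate(errors, cases)}%")
```

Running this reproduces the four rates reported above (0.8%, 7.1%, 1.6%, and 8.5%).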
Layfield, L. J., & Frazier, S. R. (2017). Quality assurance of anatomic pathology diagnoses: Comparison of alternate approaches. Pathology Research and Practice, 213(2), 126–129. https://doi.org/10.1016/j.prp.2016.11.007