Exact inference for the risk ratio with an imperfect diagnostic test

Abstract

The risk ratio quantifies the risk of disease in a study population relative to a reference population. Standard methods of estimation and testing assume a perfect diagnostic test with 100% sensitivity and specificity. This assumption typically does not hold, however, and may invalidate naive estimation and testing for the risk ratio. We propose procedures that adjust for the sensitivity and specificity of the diagnostic test, provided the risks are measured as proportions, as in cross-sectional studies or studies with fixed follow-up times. These procedures provide an exact unconditional test and confidence interval for the true risk ratio. The methods also cover the case in which sensitivity and specificity differ between the two groups (differential misclassification). The resulting test and confidence interval may be useful in epidemiological studies as well as in clinical and vaccine trials. We illustrate the method with real-life examples that demonstrate that ignoring the sensitivity and specificity of the diagnostic test can lead to considerable bias in the estimated risk ratio.
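The point-estimate side of such an adjustment can be illustrated with a short sketch. The Python snippet below is not the authors' exact unconditional test or confidence interval; it only applies the standard Rogan–Gladen correction to back out true proportions from apparent (test-positive) proportions in each group before forming the risk ratio, allowing distinct sensitivity/specificity pairs per group for the differential-misclassification case. Function names and the example figures are illustrative assumptions, not taken from the paper.

def rogan_gladen(apparent_prev: float, se: float, sp: float) -> float:
    """True proportion implied by an apparent proportion, given Se and Sp.

    Rogan-Gladen correction: p = (apparent + Sp - 1) / (Se + Sp - 1).
    """
    if se + sp <= 1.0:
        raise ValueError("Se + Sp must exceed 1 for the correction to be defined")
    # Clamp to [0, 1]: sampling error can push the raw correction outside the range.
    return min(1.0, max(0.0, (apparent_prev + sp - 1.0) / (se + sp - 1.0)))

def corrected_risk_ratio(x1: int, n1: int, x2: int, n2: int,
                         se1: float, sp1: float,
                         se2: float, sp2: float) -> float:
    """Risk ratio (group 1 vs. group 2) after misclassification correction.

    Separate (se, sp) pairs per group cover differential misclassification;
    pass identical values in both groups for the non-differential case.
    """
    p1 = rogan_gladen(x1 / n1, se1, sp1)
    p2 = rogan_gladen(x2 / n2, se2, sp2)
    if p2 == 0.0:
        raise ZeroDivisionError("corrected risk in the reference group is zero")
    return p1 / p2

# Hypothetical example: 40/200 test positives in the study group, 25/250 in
# the reference group, with 90% sensitivity and 95% specificity in both groups.
naive_rr = (40 / 200) / (25 / 250)
adj_rr = corrected_risk_ratio(40, 200, 25, 250, 0.90, 0.95, 0.90, 0.95)
print(f"naive RR = {naive_rr:.2f}, corrected RR = {adj_rr:.2f}")
# naive RR = 2.00, corrected RR = 3.00

Even in this simple non-differential setting, the naive estimate (2.0) and the corrected estimate (3.0) differ substantially, which is the kind of bias the abstract warns about; the paper's contribution is the exact unconditional test and confidence interval built around such corrected proportions.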

Citation
Reiczigel, J., Singer, J., & Lang, Z. (2017). Exact inference for the risk ratio with an imperfect diagnostic test. Epidemiology and Infection, 145(1), 187–193. https://doi.org/10.1017/S0950268816002028
