The National Latino and Asian American Study (NLAAS) is a large-scale survey of psychiatric epidemiology, the most comprehensive survey of its kind. A unique feature of NLAAS is its embedded experiment for estimating the effect of alternative orderings of the interview questions. The findings from the experiment are not completely unexpected, but they are nevertheless alarming. Compared with the results from the widely used traditional ordering, the self-reported psychiatric service-use rates are often doubled or even tripled under a more sensible ordering introduced by NLAAS. These findings explain certain perplexing empirical results in the literature, but at the same time they pose grand challenges. For example, how can one assess racial disparities when different races were surveyed with different instruments that are now known to induce substantial differences? The project documented in this paper is part of an effort to address these questions. It builds models for imputing the responses that respondents under the traditional ordering would have given had they not taken advantage of the skip patterns to shorten the interview, a behavior that increased the rate of incorrect negative responses over the course of the interview. The imputation modeling task is particularly challenging because of the complexity of the questionnaire, the small sample sizes for subgroups of interest, and the need to provide sensible imputations for whatever sub-population a future user might wish to study. As a case study, we report both our findings and our frustrations in dealing with these common real-life complications.
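To make the imputation idea concrete, the sketch below shows one minimal way to multiply impute binary service-use responses when a reported "no" may be an incorrect negative induced by skip-pattern behavior. This is an illustration only, not the authors' model: the paper's actual approach is a far richer Bayesian imputation tailored to the NLAAS questionnaire, and the prior choices, the assumed false-negative mechanism, and the function and variable names here are all hypothetical.

```python
# Minimal sketch (not the NLAAS model): Bayesian multiple imputation of binary
# responses under an assumed false-negative mechanism. A reported "yes" is taken
# at face value; a reported "no" is re-imputed as "yes" with the posterior
# probability implied by a drawn prevalence and false-negative rate.
import numpy as np

rng = np.random.default_rng(0)

def impute_service_use(y_observed, fn_rate_prior=(2.0, 8.0), n_imputations=10):
    """Return an (n_imputations, n) array of imputed 0/1 responses.

    y_observed    : 0/1 array of self-reported service use (0 may be a false negative).
    fn_rate_prior : Beta(a, b) prior on the false-negative rate (illustrative).
    """
    y = np.asarray(y_observed)
    imputations = np.empty((n_imputations, y.size), dtype=int)
    for m in range(n_imputations):
        # Draw a false-negative rate and a true prevalence; a real analysis would
        # update these from the embedded experiment or validation data.
        fn_rate = rng.beta(*fn_rate_prior)
        prevalence = rng.beta(1.0, 1.0)
        # P(true = 1 | observed = 0) by Bayes' rule under this misclassification model.
        p_true_given_no = (prevalence * fn_rate) / (
            prevalence * fn_rate + (1.0 - prevalence)
        )
        y_imp = y.copy()
        zeros = (y == 0)
        y_imp[zeros] = rng.binomial(1, p_true_given_no, size=zeros.sum())
        imputations[m] = y_imp
    return imputations

# Example: 200 respondents, 12% reporting "yes" under the traditional ordering.
observed = rng.binomial(1, 0.12, size=200)
imputed = impute_service_use(observed)
print("observed rate:", observed.mean(), "mean imputed rate:", imputed.mean())
```

Downstream analyses (e.g., disparity comparisons across racial or ethnic subgroups) would then be run on each completed data set and combined with standard multiple-imputation rules, which is what allows a single set of imputations to serve whatever sub-population a future user chooses to study.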
Liu, J., Meng, X.-L., Chen, C.-N., & Alegría, M. (2013). Statistics can lie but can also correct for lies: Reducing response bias in NLAAS via Bayesian imputation. Statistics and Its Interface, 6(3), 387–398. https://doi.org/10.4310/SII.2013.v6.n3.a9