Bootstrap Model-Based Constrained Optimization Tests of Indirect Effects


Abstract

In mediation analysis, the conditions required for commonly recommended tests, including confidence interval (CI)-based tests, to produce an accurate Type I error rate do not generally hold in finite samples or with non-normally distributed model residuals. This is typically a consequence of the complexity of testing a null hypothesis about indirect effects. To remedy these issues, we propose two extensions of the recently developed asymptotic Model-based Constrained Optimization (MBCO) likelihood ratio test (LRT), a promising model comparison method for testing a general function of indirect effects. The proposed tests, the semi-parametric and parametric bootstrap MBCO LRTs, are shown to yield a more accurate Type I error rate in smaller samples and under various degrees of non-normality of the model residuals than the asymptotic MBCO LRT and the CI-based methods. We provide an R script in the Supplemental Materials to perform all three MBCO LRTs.
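To illustrate the general idea of a parametric bootstrap LRT for an indirect effect, the sketch below fits a simple single-mediator model (X → M → Y) by ordinary least squares and bootstraps the LRT under a fitted null model. Note this is a simplified illustration, not the MBCO procedure from the article: it imposes the null by fixing the a-path to zero (one way the indirect effect a·b can be zero), whereas MBCO constrains the function a·b directly via constrained optimization. All function names here are hypothetical.

```python
import numpy as np

def ols_loglik(y, X):
    """OLS fit; returns coefficients, residuals, and the Gaussian
    log-likelihood evaluated at the maximum-likelihood estimates."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = len(y)
    sigma2 = resid @ resid / n  # MLE of the residual variance
    ll = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return beta, resid, ll

def mediation_lrt(x, m, y):
    """LRT statistic comparing the full mediation model (paths a, b free)
    with a constrained model that fixes a = 0.  Since the constraint only
    touches the M-equation, the Y-equation likelihood cancels."""
    n = len(x)
    Xm_full = np.column_stack([np.ones(n), x])   # M ~ 1 + a*X
    _, _, ll_m_full = ols_loglik(m, Xm_full)
    _, _, ll_m_null = ols_loglik(m, np.ones((n, 1)))  # a = 0: M ~ 1
    return 2 * (ll_m_full - ll_m_null)

def parametric_bootstrap_p(x, m, y, B=500, seed=None):
    """Parametric bootstrap: simulate data from the fitted null model,
    recompute the LRT on each bootstrap sample, and return the
    proportion of bootstrap statistics at least as large as observed."""
    rng = np.random.default_rng(seed)
    n = len(x)
    t_obs = mediation_lrt(x, m, y)
    # Generating model under the null: M ~ 1 (a = 0), Y ~ 1 + c'*X + b*M
    bm, rm, _ = ols_loglik(m, np.ones((n, 1)))
    Xy = np.column_stack([np.ones(n), x, m])
    by, ry, _ = ols_loglik(y, Xy)
    sm, sy = rm.std(), ry.std()
    exceed = 0
    for _ in range(B):
        m_b = bm[0] + rng.normal(0.0, sm, n)
        y_b = np.column_stack([np.ones(n), x, m_b]) @ by + rng.normal(0.0, sy, n)
        if mediation_lrt(x, m_b, y_b) >= t_obs:
            exceed += 1
    return (exceed + 1) / (B + 1)  # add-one correction to avoid p = 0
```

A semi-parametric variant would resample the observed residuals with replacement instead of drawing them from a normal distribution, which is what makes it robust to non-normal residuals.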

Citation (APA): Tofighi, D. (2020). Bootstrap Model-Based Constrained Optimization Tests of Indirect Effects. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.02989
