Quantifying model selection uncertainty via bootstrapping and Akaike weights


Abstract

Selecting one ‘winner’ model for studying a phenomenon while discarding the rest implies a confidence that may misrepresent the evidence. Multimodel inference allows researchers to represent their uncertainty about which model is ‘best’ more accurately. Combining multimodel inference with Akaike weights (weights reflecting the relative probability of each candidate model) and bootstrapping also makes it possible to quantify model selection uncertainty, in the form of empirical variation in parameter estimates across models, while minimizing bias from dubious assumptions. This paper describes this approach. Results from a simulation example and an empirical study on the impact of perceived brand environmental responsibility on customer loyalty illustrate and support the proposed approach.
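
The combination of Akaike weights and bootstrapping described in the abstract can be illustrated with a minimal sketch. The Python code below is not the authors' implementation: it assumes simple OLS candidate models fitted to synthetic data, computes Akaike weights w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2) from the models' AIC values on each bootstrap resample, and forms a weighted (model-averaged) estimate of a focal coefficient. The spread of that estimate, and the instability of the ‘winning’ model, across resamples give an empirical picture of model selection uncertainty. All data, model specifications, and variable names here are illustrative assumptions, not taken from the article.

```python
import numpy as np

def aic_ols(y, X):
    """Gaussian AIC for an OLS fit: n*ln(RSS/n) + 2k (additive constants dropped)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * k, beta

def akaike_weights(aics):
    """Akaike weights: w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2), delta_i = AIC_i - min AIC."""
    deltas = np.asarray(aics) - np.min(aics)
    raw = np.exp(-0.5 * deltas)
    return raw / raw.sum()

rng = np.random.default_rng(0)

# Synthetic data: y depends on x1; x2 is a superfluous predictor (illustrative only).
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 + rng.normal(scale=1.0, size=n)

# Candidate design matrices (intercept + subsets of predictors); x1's coefficient is focal.
candidates = [
    np.column_stack([np.ones(n), x1]),        # M1: intercept + x1
    np.column_stack([np.ones(n), x1, x2]),    # M2: intercept + x1 + x2
]

B = 1000
averaged_estimates = np.empty(B)              # model-averaged estimate of the x1 coefficient
winning_model = np.empty(B, dtype=int)        # index of the lowest-AIC model per resample

for b in range(B):
    idx = rng.integers(0, n, size=n)          # resample rows with replacement
    aics, focal_coefs = [], []
    for X in candidates:
        aic, beta = aic_ols(y[idx], X[idx])
        aics.append(aic)
        focal_coefs.append(beta[1])           # coefficient on x1 in every candidate model
    w = akaike_weights(aics)
    averaged_estimates[b] = np.dot(w, focal_coefs)
    winning_model[b] = int(np.argmin(aics))

# The spread of the weighted estimates and the instability of the 'winner'
# across bootstrap samples quantify model selection uncertainty empirically.
print("Model-averaged x1 coefficient:",
      averaged_estimates.mean().round(3), "+/-", averaged_estimates.std().round(3))
print("Share of bootstrap samples won by each model:",
      np.bincount(winning_model, minlength=len(candidates)) / B)
```

In this sketch, a large standard deviation of the model-averaged estimate, or a near-even split in which model ‘wins’ across resamples, signals that conclusions depend materially on the model selected rather than on the data alone.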

Citation (APA)
Rigdon, E., Sarstedt, M., & Moisescu, O. I. (2023). Quantifying model selection uncertainty via bootstrapping and Akaike weights. International Journal of Consumer Studies, 47(4), 1596–1608. https://doi.org/10.1111/ijcs.12906
