Picking a single ‘winner’ model for researching a given phenomenon while discarding the rest implies a confidence that may misrepresent the evidence. Multimodel inference allows researchers to represent more accurately their uncertainty about which model is ‘best’. Combined with Akaike weights (weights reflecting the relative probability of each candidate model) and bootstrapping, multimodel inference can also quantify model selection uncertainty, in the form of empirical variation in parameter estimates across models, while minimizing bias from dubious assumptions. This paper describes this approach. Results from a simulation example and from an empirical study on the impact of perceived brand environmental responsibility on customer loyalty illustrate and support the proposed approach.
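To make the two ingredients named above concrete, the following is a minimal sketch, not the authors' implementation: Akaike weights are computed from candidate models' AIC values, and a nonparametric bootstrap refits every candidate model on each resample and records the weighted average of a focal parameter. The fit_models helper, the toy polynomial candidates, and the Gaussian AIC formula are illustrative assumptions, not drawn from the paper.

import numpy as np

def akaike_weights(aic_values):
    # Akaike weight of model i: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    # where delta_i = AIC_i - min(AIC); the weights sum to 1 and can be read as
    # the relative probability of each candidate model.
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    rel_likelihood = np.exp(-0.5 * delta)
    return rel_likelihood / rel_likelihood.sum()

def bootstrap_across_models(data, fit_models, n_boot=1000, seed=0):
    # For each bootstrap resample, refit every candidate model and record the
    # Akaike-weighted average of a focal parameter estimate; the spread of
    # these draws reflects sampling and model selection uncertainty together.
    rng = np.random.default_rng(seed)
    n = len(data)
    draws = []
    for _ in range(n_boot):
        resample = data[rng.integers(0, n, size=n)]
        aics, estimates = fit_models(resample)  # one AIC and one estimate per model
        draws.append(np.dot(akaike_weights(aics), estimates))
    return np.asarray(draws)

if __name__ == "__main__":
    # Toy illustration on simulated (hypothetical) data: two polynomial
    # candidate models for y as a function of x.
    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 1.0, 200)
    y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, 200)
    data = np.column_stack([x, y])

    def fit_models(d):
        aics, slopes = [], []
        for degree in (1, 2):  # the two candidate models
            coefs = np.polyfit(d[:, 0], d[:, 1], degree)
            rss = np.sum((d[:, 1] - np.polyval(coefs, d[:, 0])) ** 2)
            n_obs, k = len(d), degree + 1
            aics.append(n_obs * np.log(rss / n_obs) + 2 * k)  # Gaussian AIC up to a constant
            slopes.append(coefs[-2])  # coefficient on the linear term
        return np.array(aics), np.array(slopes)

    draws = bootstrap_across_models(data, fit_models, n_boot=500)
    print("model-averaged slope: mean %.3f, bootstrap SD %.3f" % (draws.mean(), draws.std()))

The bootstrap standard deviation of the weighted estimate is the quantity of interest in this sketch: it widens when the candidate models disagree and their Akaike weights are spread out, which is the sense in which it captures model selection uncertainty on top of ordinary sampling error.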
CITATION
Rigdon, E., Sarstedt, M., & Moisescu, O. I. (2023). Quantifying model selection uncertainty via bootstrapping and Akaike weights. International Journal of Consumer Studies, 47(4), 1596–1608. https://doi.org/10.1111/ijcs.12906