Abstract
Bayesian statistical inference under unknown or hard-to-assess likelihood functions is a challenging task. Approximate Bayesian computation (ABC) techniques have emerged as a widely used family of likelihood-free methods, and a vast number of ABC-based approaches have appeared in the literature; however, they all depend strongly on the selection of free parameters, demanding expensive tuning procedures. In this paper, we introduce an automatic kernel learning-based ABC approach, termed AKL-ABC, that automatically computes posterior estimates from a weighting-based inference. To this end, we propose a kernel learning stage that encodes similarities between the simulation and parameter spaces using a centered kernel alignment (CKA) criterion automated via an information-theoretic learning approach. In addition, a local neighborhood selection (LNS) algorithm based on graph theory is used to highlight local dependencies among simulations. Results on synthetic and real-world datasets show that our approach is competitive with non-automatic state-of-the-art ABC techniques.
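As context for the weighting-based inference mentioned above, the following is a minimal sketch of a generic kernel-weighted ABC estimate on a toy Gaussian model. The Gaussian kernel and fixed bandwidth `epsilon` are placeholder assumptions chosen for illustration; in AKL-ABC the similarity between simulations is instead learned automatically via CKA, so this is not the paper's method, only the general weighting scheme it builds on.

```python
import numpy as np

# Sketch of weighting-based (kernel-smoothed) ABC on a toy Gaussian model
# with unknown mean. A fixed-bandwidth Gaussian kernel stands in for the
# learned CKA-based similarity used in AKL-ABC.

rng = np.random.default_rng(0)

# Observed data and its summary statistic (the sample mean).
y_obs = rng.normal(loc=1.5, scale=1.0, size=50)
s_obs = y_obs.mean()

# 1) Draw candidate parameters from the prior, here N(0, 5^2).
n_sim = 5000
theta = rng.normal(loc=0.0, scale=5.0, size=n_sim)

# 2) Simulate data for each candidate and compute the same summary.
s_sim = np.array([rng.normal(loc=t, scale=1.0, size=50).mean() for t in theta])

# 3) Weight each simulation by a kernel on the distance between summaries
#    (epsilon is a hand-tuned bandwidth; AKL-ABC avoids this tuning).
epsilon = 0.1
w = np.exp(-0.5 * ((s_sim - s_obs) / epsilon) ** 2)
w /= w.sum()

# 4) Form weighted posterior estimates, e.g. the posterior mean of theta.
post_mean = np.sum(w * theta)
print(f"ABC posterior mean estimate: {post_mean:.3f}")
```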
Citation
González-Vanegas, W., Álvarez-Meza, A., Hernández-Muriel, J., & Orozco-Gutiérrez, Á. (2019). AKL-ABC: An automatic approximate Bayesian computation approach based on kernel learning. Entropy, 21(10). https://doi.org/10.3390/e21100932