In this work, we demonstrate how differentiable stochastic sampling techniques developed in the context of deep reinforcement learning can be used to perform efficient parameter inference over stochastic, simulation-based forward models. As a particular example, we focus on the problem of estimating parameters of halo occupation distribution (HOD) models that are used to connect galaxies with their dark matter haloes. Using a combination of continuous relaxation and gradient re-parametrization techniques, we can obtain well-defined gradients with respect to HOD parameters through discrete galaxy catalogue realizations. Having access to these gradients allows us to leverage efficient sampling schemes, such as Hamiltonian Monte Carlo, and greatly speed up parameter inference. We demonstrate our technique on a mock galaxy catalogue generated from the Bolshoi simulation using a standard HOD model and find posteriors near-identical to those from standard Markov chain Monte Carlo techniques, with an ∼8× increase in convergence efficiency. Our differentiable HOD model also has broad applications in full forward-model approaches to cosmic structure and cosmological analysis.
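The core idea summarized above, obtaining gradients through discrete galaxy realizations via continuous relaxation and re-parametrization, can be illustrated with a minimal sketch. The JAX snippet below is an assumption-laden illustration rather than the paper's implementation: it relaxes the Bernoulli draw of a central galaxy (using a common erf-based mean occupation form) into a Gumbel-Softmax/Concrete sample, so that gradients with respect to the HOD parameters log M_min and σ_logM flow through the stochastic occupation decision. All function and parameter names (p_central, relaxed_bernoulli, temperature) are illustrative.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import erf

def p_central(log_mhalo, log_mmin, sigma_logm):
    """Mean central occupation <N_cen>(M) for a standard erf-based HOD form."""
    p = 0.5 * (1.0 + erf((log_mhalo - log_mmin) / sigma_logm))
    # Clip away exact 0/1 so the logit and its gradient stay finite.
    return jnp.clip(p, 1e-6, 1.0 - 1e-6)

def relaxed_bernoulli(p, key, temperature=0.5):
    """Continuous (Gumbel-Softmax/Concrete) relaxation of a Bernoulli draw.

    The sample is written as a deterministic, differentiable function of the
    occupation probability p and external logistic noise, so gradients flow
    through the 'discrete' galaxy assignment. As temperature -> 0 the sample
    approaches a hard 0/1 occupation.
    """
    logits = jnp.log(p) - jnp.log1p(-p)
    u = jax.random.uniform(key, shape=p.shape, minval=1e-6, maxval=1.0 - 1e-6)
    logistic_noise = jnp.log(u) - jnp.log1p(-u)
    return jax.nn.sigmoid((logits + logistic_noise) / temperature)

def sample_centrals(log_mhalo, log_mmin, sigma_logm, key, temperature=0.5):
    """Soft per-halo central occupation, differentiable w.r.t. HOD parameters."""
    p = p_central(log_mhalo, log_mmin, sigma_logm)
    return relaxed_bernoulli(p, key, temperature)

# Example: gradient of a summary statistic (here simply the mean occupation
# of a toy halo sample) with respect to the HOD parameters, taken through
# the stochastic draw via re-parametrization.
key = jax.random.PRNGKey(0)
log_mhalo = jnp.linspace(11.0, 15.0, 1000)

def mean_occupation(params):
    log_mmin, sigma_logm = params
    return jnp.mean(sample_centrals(log_mhalo, log_mmin, sigma_logm, key))

print(jax.grad(mean_occupation)(jnp.array([12.5, 0.4])))
```

In a sketch like this, the same gradient path would let the relaxed catalogue feed a clustering statistic and a likelihood, whose gradients could then drive a Hamiltonian Monte Carlo sampler over the HOD parameters.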
Horowitz, B., Hahn, C. H., Lanusse, F., Modi, C., & Ferraro, S. (2024). Differentiable stochastic halo occupation distribution. Monthly Notices of the Royal Astronomical Society, 529(3), 2473–2482. https://doi.org/10.1093/mnras/stae350