Quantile Regression Neural Networks: A Bayesian Approach

Abstract

This article introduces a Bayesian neural network estimation method for quantile regression that assumes an asymmetric Laplace distribution (ALD) for the response variable. It is shown that the posterior distribution for feedforward neural network quantile regression is asymptotically consistent under a misspecified ALD model. The consistency proof embeds the problem in the density estimation domain and uses bounds on the bracketing entropy to derive posterior consistency over Hellinger neighborhoods; the result holds in the setting where the number of hidden nodes grows with the sample size. The Bayesian implementation utilizes the normal-exponential mixture representation of the ALD density, and the algorithm uses a Markov chain Monte Carlo (MCMC) simulation technique: Gibbs sampling coupled with a Metropolis–Hastings step. We address the complexity of this MCMC implementation with regard to chain convergence, the choice of starting values, and step sizes, and we illustrate the proposed method with simulation studies and real data examples.
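As a concrete illustration of the normal-exponential mixture mentioned in the abstract, here is a minimal sketch (not code from the paper; the function name sample_ald is hypothetical). It uses the standard result that if Z ~ Exp(1) and U ~ N(0, 1), then mu + sigma * (theta * Z + tau * sqrt(Z) * U) follows ALD(mu, sigma, p) with theta = (1 - 2p) / (p(1 - p)) and tau^2 = 2 / (p(1 - p)); this latent-variable form is what makes Gibbs sampling tractable for the ALD likelihood.

```python
# Minimal sketch of the normal-exponential mixture representation of the
# asymmetric Laplace distribution (ALD). Not the authors' implementation;
# sample_ald is a hypothetical helper for illustration only.
import numpy as np

def sample_ald(mu, sigma, p, size, rng):
    """Draw from ALD(mu, sigma, p) via its normal-exponential mixture."""
    theta = (1 - 2 * p) / (p * (1 - p))
    tau = np.sqrt(2 / (p * (1 - p)))
    z = rng.exponential(scale=1.0, size=size)  # latent Exp(1) mixing weights
    u = rng.standard_normal(size)              # standard normal noise
    return mu + sigma * (theta * z + tau * np.sqrt(z) * u)

rng = np.random.default_rng(0)
p = 0.25
y = sample_ald(mu=1.0, sigma=0.5, p=p, size=200_000, rng=rng)

# Sanity check: the p-th quantile of ALD(mu, sigma, p) equals mu,
# so the empirical p-quantile should be close to 1.0.
print(np.quantile(y, p))
```

In the Gibbs sampler, conditioning on the latent Z turns the ALD likelihood into a (heteroscedastic) Gaussian one, so the network parameters can be updated against a normal likelihood while Z is refreshed from its conditional distribution.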

Citation (APA)
Jantre, S. R., Bhattacharya, S., & Maiti, T. (2021). Quantile Regression Neural Networks: A Bayesian Approach. Journal of Statistical Theory and Practice, 15(3). https://doi.org/10.1007/s42519-021-00189-w
