Visual scene reconstruction using a Bayesian learning framework

Abstract

In this paper, we focus on constructing a new, flexible, and powerful parametric framework for visual data modeling and reconstruction. In particular, we propose a Bayesian density estimation method based upon mixtures of scaled Dirichlet distributions. Bayesian learning is attractive in several respects: it allows simultaneous parameter estimation and model selection, it takes uncertainty into account by introducing prior information about the parameters, and it helps overcome learning problems related to over- or under-fitting. In this work, we address three key issues in Bayesian mixture learning: the choice of prior distributions, the estimation of the parameters, and the selection of the number of components. Moreover, we develop a principled Metropolis-within-Gibbs sampler for scaled Dirichlet mixtures. Finally, the proposed Bayesian framework is tested on a challenging real-life application, namely visual scene reconstruction.
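The abstract's Metropolis-within-Gibbs scheme can be illustrated with a minimal sketch. The paper's exact priors and proposals are not given here, so this sketch makes assumptions: a symmetric Dirichlet(1, …, 1) prior on the mixing weights, flat priors on the component parameters, and log-normal random-walk proposals for the scaled Dirichlet parameters (α, β). Allocations and mixing weights are sampled exactly (the Gibbs steps); component parameters are updated by Metropolis-Hastings. The scaled Dirichlet log-density used below follows the standard form with shape vector α and scale vector β.

```python
import numpy as np
from scipy.special import gammaln

def log_scaled_dirichlet(x, alpha, beta):
    """Log-density of the scaled Dirichlet distribution at simplex point x."""
    A = alpha.sum()
    return (gammaln(A) - gammaln(alpha).sum()
            + np.sum(alpha * np.log(beta) + (alpha - 1.0) * np.log(x))
            - A * np.log(np.dot(beta, x)))

def gibbs_sweep(X, pis, alphas, betas, rng, prop_scale=0.1):
    """One Metropolis-within-Gibbs sweep over a K-component mixture.

    Assumed priors (not from the paper): Dirichlet(1,...,1) on the
    weights, flat on the component parameters.
    """
    n, K = X.shape[0], len(pis)
    # Gibbs step 1: sample allocations z_i | rest from the exact conditional.
    logp = np.array([[np.log(pis[k]) + log_scaled_dirichlet(x, alphas[k], betas[k])
                      for k in range(K)] for x in X])
    logp -= logp.max(axis=1, keepdims=True)          # stabilize before exp
    probs = np.exp(logp)
    probs /= probs.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=p) for p in probs])
    # Gibbs step 2: sample mixing weights | z (conjugate Dirichlet update).
    counts = np.bincount(z, minlength=K)
    pis = rng.dirichlet(1.0 + counts)
    # Metropolis step: log-normal random walk on each component's (alpha, beta).
    for k in range(K):
        Xk = X[z == k]
        if len(Xk) == 0:
            continue
        cur = sum(log_scaled_dirichlet(x, alphas[k], betas[k]) for x in Xk)
        a_prop = alphas[k] * np.exp(prop_scale * rng.standard_normal(alphas[k].size))
        b_prop = betas[k] * np.exp(prop_scale * rng.standard_normal(betas[k].size))
        new = sum(log_scaled_dirichlet(x, a_prop, b_prop) for x in Xk)
        # Multiplicative proposal is asymmetric: include the Jacobian terms.
        log_ratio = (new - cur
                     + np.sum(np.log(a_prop) - np.log(alphas[k]))
                     + np.sum(np.log(b_prop) - np.log(betas[k])))
        if np.log(rng.uniform()) < log_ratio:
            alphas[k], betas[k] = a_prop, b_prop
    return z, pis, alphas, betas
```

Repeating `gibbs_sweep` many times yields a Markov chain whose draws approximate the posterior; model selection over the number of components K (the third issue raised in the abstract) would sit on top of this inner loop and is not sketched here.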

Citation (APA)

Bourouis, S., Bouguila, N., Li, Y., & Azam, M. (2018). Visual scene reconstruction using a Bayesian learning framework. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10884 LNCS, pp. 225–232). Springer Verlag. https://doi.org/10.1007/978-3-319-94211-7_25
