The hard-cut EM algorithm for mixture of sparse Gaussian processes

Citations: 8 · Mendeley readers: 3

Abstract

The mixture of Gaussian Processes (MGP) is a powerful and rapidly developing machine learning framework. To make its learning more efficient, sparsity constraints have been adopted, yielding the mixture of sparse Gaussian Processes (MSGP). However, existing MGP and MSGP models are rather complicated, and their learning algorithms involve various approximation schemes. In this paper, we refine the MSGP model and develop the hard-cut EM algorithm for MSGP from its original version for MGP. Experiments on both synthetic and real datasets demonstrate that our refined MSGP model and the hard-cut EM algorithm are feasible and can outperform several typical regression algorithms in prediction. Moreover, with the sparsity technique, parameter learning for our proposed MSGP model is much more efficient than for the MGP model.
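The abstract only summarizes the method, but the hard-cut idea itself is easy to sketch: instead of soft posterior responsibilities, each sample is assigned outright to the component whose GP explains it best (the E-step becomes a MAP "cut"), and each component GP is then refit on its assigned samples (the M-step). The toy Python sketch below is my own illustration, not the authors' code: a crude subset-of-data selection stands in for the paper's sparse GP approximation, mixing proportions and hyperparameter learning are omitted, and the threshold initialization is a made-up heuristic.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel on 1-D inputs (unit signal variance assumed)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_fit(x, y, noise=1e-2):
    """Exact GP regression on (x, y); returns the pieces prediction needs."""
    K = rbf(x, x) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return x, L, alpha

def gp_predict(model, xs, noise=1e-2):
    """Predictive mean and (diagonal) predictive variance at test inputs xs."""
    x, L, alpha = model
    Ks = rbf(xs, x)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 + noise - np.sum(v ** 2, axis=0)
    return mu, var

def hard_cut_em(x, y, n_comp=2, m=10, iters=20, noise=1e-2, seed=0):
    """Hard-cut EM for a mixture of 'sparse' GPs (subset-of-data stand-in)."""
    rng = np.random.default_rng(seed)
    z = (y > y.mean()).astype(int)  # crude init by output level (an assumption)
    for _ in range(iters):
        # M-step: refit each component GP on a random subset of its samples.
        models = []
        for k in range(n_comp):
            xk, yk = x[z == k], y[z == k]
            idx = rng.choice(len(xk), size=min(m, len(xk)), replace=False)
            models.append(gp_fit(xk[idx], yk[idx], noise))
        # E-step (hard cut): assign each sample to its best-scoring component.
        logp = np.empty((len(x), n_comp))
        for k, mdl in enumerate(models):
            mu, var = gp_predict(mdl, x, noise)
            logp[:, k] = -0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)
        z_new = logp.argmax(axis=1)
        if np.array_equal(z_new, z):  # assignments stable: converged
            break
        z = z_new
    return z, models

# Toy demo: two interleaved sine curves separated by a vertical offset.
xs = np.linspace(0, 5, 30)
x_all = np.concatenate([xs, xs])
y_all = (np.concatenate([np.sin(xs), np.sin(xs) + 4.0])
         + 0.05 * np.random.default_rng(1).standard_normal(60))
labels, _ = hard_cut_em(x_all, y_all)
```

Because each M-step only inverts an m-by-m kernel matrix instead of one over all assigned samples, the per-iteration cost drops from cubic in the component size to cubic in m, which is the efficiency gain the sparsity technique is after.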

Citation (APA)

Chen, Z., & Ma, J. (2015). The hard-cut EM algorithm for mixture of sparse Gaussian processes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9227, pp. 13–24). Springer Verlag. https://doi.org/10.1007/978-3-319-22053-6_2
