Variational weakly supervised gaussian processes


Abstract

We introduce the first model to perform weakly supervised learning with Gaussian processes on up to millions of instances. The key ingredient to achieve this scalability is to replace the standard multiple instance learning (MIL) assumption, that the bag-level prediction is the maximum of the instance-level estimates, with the accumulated evidence of the instances within a bag. This enables us to devise a novel variational inference scheme that operates solely by closed-form updates. Keeping all but one of its parameters fixed, our model updates the remaining parameter to its global optimum. This virtue leads to charmingly fast convergence, fitting perfectly to large-scale learning setups. In two medical applications, our model performs significantly better than an adaptation of GPMIL to scalable inference and than various scalable MIL algorithms. It also proves very competitive in object classification against state-of-the-art adaptations of deep learning to weakly supervised learning.
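To make the contrast concrete, here is a minimal sketch of the two bag-level aggregation rules the abstract mentions: the standard MIL max assumption versus accumulated evidence. The instance scores and the logistic squashing of the summed evidence are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bag_score_max(instance_scores):
    # Standard MIL assumption: the bag prediction is the maximum
    # of the instance-level estimates.
    return np.max(instance_scores)

def bag_score_accumulated(instance_scores):
    # Accumulated evidence: sum instance-level evidence within the bag,
    # then squash to a probability with a logistic link (an assumption
    # made here for illustration).
    return 1.0 / (1.0 + np.exp(-np.sum(instance_scores)))

# Hypothetical instance-level scores for one bag.
scores = np.array([-0.5, 0.2, 1.5])
print(bag_score_max(scores))          # 1.5
print(bag_score_accumulated(scores))  # sigmoid(1.2) ≈ 0.769
```

The summed form is what makes closed-form variational updates tractable: the bag likelihood factorizes over instances instead of coupling them through a non-smooth max.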

Citation (APA)
Kandemir, M., Haußmann, M., Diego, F., Rajamani, K., van der Laak, J., & Hamprecht, F. A. (2016). Variational weakly supervised Gaussian processes. In British Machine Vision Conference 2016, BMVC 2016 (Vol. 2016-September, pp. 71.1-71.12). British Machine Vision Conference, BMVC. https://doi.org/10.5244/C.30.71
