Dictionary Learning on Grassmann Manifolds

Sparse representations have recently led to notable results in various visual recognition tasks. In a separate line of research, Riemannian manifolds have been shown useful for dealing with features and models that do not lie in Euclidean spaces. With the aim of building a bridge between the two realms, we address the problem of sparse coding and dictionary learning on Grassmann manifolds, i.e., the space of linear subspaces. To this end, we introduce algorithms for sparse coding and dictionary learning by embedding Grassmann manifolds into the space of symmetric matrices. Furthermore, to handle nonlinearity in data, we propose positive definite kernels on Grassmann manifolds and make use of them to perform coding and dictionary learning.
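As a minimal sketch of the kind of embedding the abstract refers to: a standard way to map a Grassmann point (a subspace, represented by an orthonormal basis X) into the space of symmetric matrices is the projection mapping X ↦ XX^T, whose Frobenius inner product induces the projection kernel k(X, Y) = ||X^T Y||_F². The snippet below illustrates these two ingredients only; function names are illustrative, and this is not the authors' full coding or dictionary-learning algorithm.

```python
import numpy as np

def grassmann_point(A):
    """Orthonormal basis for the column space of A, i.e. a point on a
    Grassmann manifold represented by a tall matrix with X^T X = I."""
    Q, _ = np.linalg.qr(A)
    return Q

def embed(X):
    """Projection embedding of the subspace spanned by X into the
    space of symmetric matrices: X X^T (an orthogonal projector)."""
    return X @ X.T

def projection_kernel(X, Y):
    """Projection kernel between two subspaces:
    <X X^T, Y Y^T>_F = ||X^T Y||_F^2 (positive definite)."""
    return np.linalg.norm(X.T @ Y, "fro") ** 2

rng = np.random.default_rng(0)
X = grassmann_point(rng.standard_normal((5, 2)))  # 2-dim subspace of R^5
Y = grassmann_point(rng.standard_normal((5, 2)))

P = embed(X)
# The embedded point is symmetric and idempotent, as a projector should be.
assert np.allclose(P, P.T) and np.allclose(P @ P, P)

# The kernel value agrees with the Frobenius inner product of embeddings.
k = projection_kernel(X, Y)
assert np.isclose(k, np.trace(embed(X) @ embed(Y)))
```

With such an embedding, sparse coding of a query subspace against a dictionary of subspaces can be posed in the (Euclidean) space of symmetric matrices, or kernelized via the projection kernel, which is the general strategy the chapter develops.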




Harandi, M., Hartley, R., Salzmann, M., & Trumpf, J. (2016). Dictionary Learning on Grassmann Manifolds. In Advances in Computer Vision and Pattern Recognition (pp. 145–172). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-319-45026-1_6
