POMDP-based dialogue manager adaptation to extended domains

  • Gasic M, Breslin C, Henderson M, et al.

  • 44 Readers (Mendeley users who have this article in their library)
  • 21 Citations (citations of this article)

Abstract

The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and joint training of deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep architectures. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
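Of the approaches the abstract surveys, the auto-encoder is the simplest to state concretely: an encoder maps the input to a compressed code and a decoder is trained to reconstruct the input from that code, so the code is forced to capture the main factors of variation in the data. The sketch below is a minimal illustration of that idea only, not code from the paper; the data shapes, the tanh nonlinearity, and the learning rate are all arbitrary choices for the example.

    # Illustrative only: a single-hidden-layer auto-encoder trained with
    # plain gradient descent. All hyperparameters are arbitrary choices
    # for this sketch, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 200 samples in 8 dimensions that lie on a 3-dimensional
    # linear subspace, so a 3-unit code can reconstruct them well.
    Z = rng.normal(size=(200, 3))
    M = rng.normal(size=(3, 8))
    X = Z @ M

    n_in, n_hidden = 8, 3
    W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b_enc = np.zeros(n_hidden)
    W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
    b_dec = np.zeros(n_in)
    lr = 0.05

    for epoch in range(2000):
        H = np.tanh(X @ W_enc + b_enc)       # encoder: data -> code
        X_hat = H @ W_dec + b_dec            # decoder: code -> reconstruction

        err = X_hat - X
        loss = np.mean(np.sum(err ** 2, axis=1))  # mean per-sample squared error

        # Backpropagate the reconstruction loss.
        g_out = 2 * err / len(X)
        gW_dec = H.T @ g_out
        gb_dec = g_out.sum(axis=0)
        g_hid = (g_out @ W_dec.T) * (1 - H ** 2)  # tanh'(a) = 1 - tanh(a)^2
        gW_enc = X.T @ g_hid
        gb_enc = g_hid.sum(axis=0)

        # Gradient-descent updates.
        W_dec -= lr * gW_dec
        b_dec -= lr * gb_dec
        W_enc -= lr * gW_enc
        b_enc -= lr * gb_enc

    print(f"reconstruction error after training: {loss:.4f}")

After training, the hidden activations H serve as the learned representation; in the review's terms, a good code is one that disentangles the explanatory factors behind the data rather than merely compressing it.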


Find this document

  • ISBN: 9781937284954
  • SCOPUS: 2-s2.0-84987858757
  • PUI: 612238217
  • SGR: 84987858757

Authors

  • Milica Gasic

  • Catherine Breslin

  • Matthew Henderson

  • Dongho Kim

  • Martin Szummer

  • Blaise Thomson
