Unsupervised neural hidden Markov models

Citations: 39 · Mendeley readers: 173

Abstract

In this work, we present the first results for neuralizing an unsupervised Hidden Markov Model. We evaluate our approach on tag induction. Our approach outperforms existing generative models and is competitive with the state of the art, though with a simpler model that is easily extended to include additional context.
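The abstract describes a neuralized unsupervised HMM trained for tag induction. A minimal sketch of the core idea, under assumptions not stated in the abstract: transition and emission distributions are softmaxes over logits (which a neural network would produce), and the unsupervised training signal is the marginal likelihood p(x), computed with the forward algorithm. All names, shapes, and the random logits here are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V, T = 3, 5, 4  # hidden tags, vocabulary size, sequence length (illustrative)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# "Neural" parameterization: distributions come from logits. Here they are
# random; in a neural HMM they would be outputs of small networks, and
# training would backpropagate through the marginal likelihood below.
pi = softmax(rng.normal(size=K))        # initial tag distribution p(z_1)
A  = softmax(rng.normal(size=(K, K)))   # transitions p(z_t | z_{t-1})
B  = softmax(rng.normal(size=(K, V)))   # emissions   p(x_t | z_t)

def forward_log_likelihood(x):
    """log p(x), summing over all K**len(x) tag sequences (log-space)."""
    alpha = np.log(pi) + np.log(B[:, x[0]])
    for t in range(1, len(x)):
        m = alpha.max()  # logsumexp over the previous tag, for stability
        alpha = m + np.log(np.exp(alpha - m) @ A) + np.log(B[:, x[t]])
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

x = [0, 2, 1, 4]               # a toy word-index sequence
ll = forward_log_likelihood(x)  # a finite negative number
```

Because the forward recursion is built from differentiable softmax parameters, the same computation serves as a loss for gradient-based training, which is what makes the HMM "neuralizable" without changing its probabilistic structure.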


Cited by (Scopus)

Learning neural templates for text generation (129 citations)
CycleNER: An Unsupervised Training Approach for Named Entity Recognition (27 citations)
Semi-supervised structured prediction with neural CRF autoencoder (25 citations)

Citation (APA)

Tran, K., Bisk, Y., Vaswani, A., Marcu, D., & Knight, K. (2016). Unsupervised neural hidden Markov models. In Proceedings of the Workshop on Structured Prediction for Natural Language Processing, NLP 2016 at the Conference on Empirical Methods in Natural Language Processing, EMNLP 2016 (pp. 63–71). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w16-5907

Readers over time: [chart, 2016–2025]

Readers' Seniority

PhD / Post grad / Masters / Doc: 81 (73%)
Researcher: 24 (22%)
Professor / Associate Prof.: 3 (3%)
Lecturer / Post doc: 3 (3%)

Readers' Discipline

Computer Science: 91 (81%)
Engineering: 10 (9%)
Linguistics: 7 (6%)
Business, Management and Accounting: 4 (4%)
