On parameter tying by quantization

Abstract

The maximum likelihood estimator (MLE) is generally asymptotically consistent but is susceptible to overfitting. To combat this problem, regularization methods which reduce the variance at the cost of (slightly) increasing the bias are often employed in practice. In this paper, we present an alternative variance reduction (regularization) technique that quantizes the MLE estimates as a post-processing step, yielding a smoother model with several tied parameters. We provide and prove error bounds for our new technique and demonstrate experimentally that it often yields models having higher test-set log-likelihood than the ones learned using the MLE. We also propose a new importance sampling algorithm for fast approximate inference in models having several tied parameters. Our experiments show that our new inference algorithm is superior to existing approaches such as Gibbs sampling and MC-SAT on models with tied parameters learned using our quantization-based approach.
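As a rough illustration of the post-processing step the abstract describes, the sketch below collapses a vector of learned (MLE) parameter values onto a small set of shared values via 1-D k-means, tying every parameter assigned to the same cluster. The use of k-means, the function name, and the parameters here are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def tie_parameters_by_quantization(mle_params, num_levels, num_iters=100):
    """Collapse MLE parameter estimates onto `num_levels` shared values (1-D k-means)."""
    params = np.asarray(mle_params, dtype=float)
    # Initialize the shared values (cluster centers) at empirical quantiles of the estimates.
    centers = np.quantile(params, np.linspace(0.0, 1.0, num_levels))
    for _ in range(num_iters):
        # Assign each parameter to its nearest shared value.
        assignments = np.argmin(np.abs(params[:, None] - centers[None, :]), axis=1)
        new_centers = centers.copy()
        for k in range(num_levels):
            members = params[assignments == k]
            if members.size:
                new_centers[k] = members.mean()
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # Each original parameter is replaced by its cluster's shared value,
    # so parameters mapped to the same cluster are tied.
    return centers[assignments], assignments

# Example: 1,000 noisy estimates collapsed onto 8 shared values.
mle = np.random.default_rng(1).normal(0.0, 1.0, size=1000)
tied, groups = tie_parameters_by_quantization(mle, num_levels=8)
print(np.unique(tied).size)  # at most 8 distinct (tied) parameter values

The quantized model trades a small increase in bias (each parameter is moved to its cluster's shared value) for a reduction in variance, which is the regularization effect the abstract refers to.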

Citation (APA)

Chou, L., Sarkhel, S., Ruozzi, N., & Gogate, V. (2016). On parameter tying by quantization. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 3241–3247). AAAI Press. https://doi.org/10.1609/aaai.v30i1.10429
