We present a method to jointly learn features and weights directly from distributional data in a log-linear framework. Specifically, we propose a non-parametric Bayesian model for learning phonological markedness constraints directly from the distribution of input-output mappings in an Optimality Theory (OT) setting. The model uses an Indian Buffet Process prior to learn the feature values used in the log-linear method, and is the first algorithm for learning phonological constraints without presupposing constraint structure. The model learns a system of constraints that explains the observed data as well as the phonologically grounded constraints of a standard analysis do, with a violation structure corresponding to the standard constraints. These results suggest an alternative, data-driven source for constraints instead of a fully innate constraint set.
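To make the log-linear framing concrete, the sketch below shows how a MaxEnt-OT-style model scores output candidates: each candidate has a vector of constraint-violation counts, and its probability is proportional to the exponentiated negative weighted sum of those violations. This is an illustrative sketch only; the paper's contribution is learning the violation (feature) structure itself via an Indian Buffet Process prior, which is not shown here, and all names and the toy candidates are hypothetical.

```python
import math

def maxent_ot_probs(candidates, weights):
    """Log-linear (MaxEnt-OT) candidate probabilities.

    candidates: dict mapping each output candidate to its list of
    constraint-violation counts (one count per constraint).
    weights: one non-negative weight per constraint.
    P(candidate) is proportional to exp(-sum_k w_k * violations_k).
    """
    scores = {c: math.exp(-sum(w * v for w, v in zip(weights, viols)))
              for c, viols in candidates.items()}
    z = sum(scores.values())  # normalizing constant over the candidate set
    return {c: s / z for c, s in scores.items()}

# Hypothetical toy input with two candidates and two constraints:
# "tat" violates constraint 1 once; "ta" violates constraint 2 once.
probs = maxent_ot_probs({"ta": [0, 1], "tat": [1, 0]}, weights=[2.0, 0.5])
```

With these weights the heavily weighted constraint penalizes "tat" more, so "ta" receives the larger share of probability mass; in the paper's setting, the violation vectors themselves are latent and inferred from data rather than given.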
CITATION STYLE
Doyle, G., Bicknell, K., & Levy, R. (2014). Nonparametric learning of phonological constraints in optimality theory. In 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference (Vol. 1, pp. 1094–1103). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p14-1103