We propose a new method, based on sparse distributed memory, for studying dependence relations between syntactic parameters in the Principles and Parameters model of syntax. By storing data on the syntactic structures of the world's languages in a Kanerva network and checking the recoverability of corrupted data from the network, we identify two distinct effects: an overall relation between the prevalence of parameters across languages and their degree of recoverability, and a finer effect that makes some parameters more easily recoverable than their prevalence alone would indicate. The latter can be seen as evidence of dependence relations, through which a given parameter can be determined from the remaining uncorrupted data.
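The core mechanism can be illustrated with a minimal autoassociative Kanerva network (sparse distributed memory): random hard locations store counter vectors, a write updates all locations within a Hamming-distance radius of the word, and a read takes a majority vote over the activated counters. This is a generic sketch, not the authors' implementation; the word length, number of hard locations, activation radius, and number of corrupted bits below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64    # bits per stored word (e.g. one vector of syntactic parameters)
M = 2000  # number of hard storage locations
R = 28    # Hamming-distance activation radius

# Random hard-location addresses; all counters start at zero.
addresses = rng.integers(0, 2, size=(M, N), dtype=np.int8)
counters = np.zeros((M, N), dtype=np.int32)

def activated(addr):
    """Indices of hard locations within Hamming radius R of addr."""
    return np.flatnonzero((addresses != addr).sum(axis=1) <= R)

def write(word):
    """Autoassociative write: the word serves as its own address."""
    counters[activated(word)] += np.where(word == 1, 1, -1).astype(np.int32)

def read(addr):
    """Majority vote over the counters of all activated locations."""
    sums = counters[activated(addr)].sum(axis=0)
    return (sums > 0).astype(np.int8)

# Store one pattern, corrupt a few of its bits, and read back
# from the noisy cue to test recoverability.
word = rng.integers(0, 2, size=N, dtype=np.int8)
write(word)
noisy = word.copy()
noisy[rng.choice(N, size=6, replace=False)] ^= 1
recovered = read(noisy)
```

Recoverability experiments of the kind the abstract describes would repeat this with many stored language vectors, corrupting individual parameter bits and measuring how often the network restores them.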
Park, J. J., Boettcher, R., Zhao, A., Mun, A., Yuh, K., Kumar, V., & Marcolli, M. (2017). Prevalence and recoverability of syntactic parameters in sparse distributed memories. In Lecture Notes in Computer Science (Vol. 10589, pp. 265–272). Springer. https://doi.org/10.1007/978-3-319-68445-1_31