Representing languages by learnable rewriting systems

Abstract

Powerful methods and algorithms are known for learning regular languages. Aiming to extend them to more complex grammars, we choose to change the way these languages are represented. Among the formalisms that can be used to define classes of languages, that of string-rewriting systems (SRS) has outstanding properties. Indeed, SRS are expressive enough to define, in a uniform way, a noteworthy and non-trivial class of languages that contains all the regular languages, {a^n b^n : n ≥ 0}, {w ∈ {a, b}* : |w|_a = |w|_b}, the parenthesis languages of Dyck, the language of Łukasiewicz, and many others. Moreover, SRS constitute an efficient (often linear) parsing device for strings, and are thus promising and challenging candidates for forthcoming applications of Grammatical Inference. In this paper, we pioneer the problem of their learnability. We propose a novel and sound algorithm that identifies them in polynomial time. We illustrate the execution of our algorithm on a large number of examples and finally raise some open questions and research directions. © Springer-Verlag Berlin Heidelberg 2004.
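
To make the representation concrete, here is a minimal, self-contained sketch (in Python) of how an SRS can serve as a recognizer: fix a target normal form (the empty string below) and accept exactly the words that rewrite to it. This illustrates the general idea only, not the paper's exact formalism or its learning algorithm; the helper reduces_to and the reduction strategy are assumptions introduced for illustration.

# Minimal sketch: an SRS plus a target normal form represents the language
# of all words that rewrite to that normal form. The two example systems
# below are length-decreasing and confluent, so greedy leftmost reduction
# is enough to decide membership.

def reduces_to(word, rules, target=""):
    """Apply the first applicable rule repeatedly until none applies."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            pos = word.find(lhs)
            if pos != -1:
                word = word[:pos] + rhs + word[pos + len(lhs):]
                changed = True
                break
    return word == target

# Dyck language over one pair of brackets (a = open, b = close):
# the single rule ab -> empty cancels matched pairs.
dyck = [("ab", "")]
print(reduces_to("aabbab", dyck))   # True
print(reduces_to("aab", dyck))      # False

# {w in {a, b}* : |w|_a = |w|_b}: adding ba -> empty as well.
balanced = [("ab", ""), ("ba", "")]
print(reduces_to("bbaaba", balanced))  # True
print(reduces_to("aab", balanced))     # False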

APA

Eyraud, R., De La Higuera, C., & Janodet, J. C. (2004). Representing languages by learnable rewriting systems. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3264, pp. 139–150). Springer Verlag. https://doi.org/10.1007/978-3-540-30195-0_13
