Constraints, linguistic theories, and natural language processing


Abstract

The notion of constraints is widely used in modern linguistics (in particular in syntax and phonology) for representing properties that an object must satisfy (see [4], [15]). Constraints can be general (or universal), valid across different languages, or, at the other extreme, very specific, representing for example the variability of a given language. In all cases, the idea consists in stipulating properties that rule out structures which do not belong to the language. Most linguistic theories now integrate this notion, in particular constraint-based approaches (HPSG being the theory making the most intensive use of it), but also the principles and parameters paradigm (in particular Optimality Theory, see [1]). Even dependency grammars propose a constraint-based version, called Constraint Dependency Grammars (see [11]). However, the interpretation of this notion can differ greatly from one approach to another. It is interesting to note that constraints are also used in computer science (see [10], [17]), and in logic programming in particular. The question addressed here concerns precisely the adequacy of the notion of constraints in linguistics to that in computer science (see [8], [13], [14]). We want to show that, while a superposition of both points of view is possible, the linguistic interpretation does not exploit all the properties of a constraint system. Concretely, several approaches to parsing try to interpret the parsing process as a constraint satisfaction problem. However, in most cases, constraints are used in a passive sense. We show here that this problem stems in particular from the generative interpretation of the relation between grammar and language. More precisely, the derivation relation entails a conception of the parsing process consisting in first building a local structure (usually a tree) and then verifying some properties over it. As a result, the information used to build trees does not have the same status as the other linguistic knowledge.
In such a perspective, constraints are relegated to a secondary role. We propose an approach representing all the information by means of constraints, at the same level, allowing the parsing process to be considered as one of constraint satisfaction. In the first section, we detail some examples illustrating different interpretations (and different uses) of constraints in linguistics. We then describe more precisely how constraints generally work within linguistic theories. Concretely, several characteristics inhibit constraints from playing the same role as in computer science. The second section describes these limits and presents some properties that should be present in the definition of a constraint-based formalism. The third section proposes such a formalism, called Property Grammars, illustrating how constraints can form a system, useful both from a descriptive perspective and for parsing. The last section shows how such a system can be used for implementation.
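To make the idea concrete, here is a minimal sketch, our own illustration rather than the paper's formalism, of parsing as constraint satisfaction in the spirit of Property Grammars: properties such as linear order, requirement, and uniqueness are all stated as constraints at the same level, with no prior tree-building step, and a sequence of categories is licensed exactly when the whole constraint system is satisfied. All grammar names and constraint definitions below are illustrative assumptions.

```python
# Toy sketch (not the paper's formalism): licensing a noun-phrase-like
# sequence purely through a system of constraints over categories.
from itertools import permutations

# Three Property-Grammars-style constraint types, stated directly:
#   linearity:   Det must precede N
#   requirement: if N is present, Det must be present too
#   uniqueness:  at most one Det
def linearity(seq):
    return "Det" not in seq or "N" not in seq or seq.index("Det") < seq.index("N")

def requirement(seq):
    return "N" not in seq or "Det" in seq

def uniqueness(seq):
    return seq.count("Det") <= 1

CONSTRAINTS = [linearity, requirement, uniqueness]

def satisfies(seq):
    """All constraints are evaluated at the same level: none is
    privileged as 'structure-building' over the others."""
    return all(c(seq) for c in CONSTRAINTS)

# Enumerate orderings of a small category set and keep those the
# constraint system licenses.
categories = ["Det", "Adj", "N"]
licensed = sorted(p for p in permutations(categories) if satisfies(p))
```

In this toy system, only the three orderings placing `Det` before `N` survive; adding or relaxing a constraint changes the licensed set without touching any derivation rule, which is the "active" use of constraints the abstract argues for.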

Citation (APA)
Blache, P. (2000). Constraints, linguistic theories, and natural language processing. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 1835, pp. 221–232). Springer Verlag. https://doi.org/10.1007/3-540-45154-4_21
