The computational cost of many NLP tasks increases faster than linearly with the length of a sentence's representation. For parsing, the representation is a sequence of tokens, while for operations on syntax and semantics it is more complex. In this paper we propose a new task, sentence chunking: splitting sentence representations into coherent substructures. Its aim is to make further processing of long sentences more tractable. We investigate this idea experimentally using the Dependency Minimal Recursion Semantics (DMRS) representation.
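As a rough, hypothetical illustration of why chunking can help (this is not the paper's DMRS-based method): if the cost of processing a sentence grows super-linearly in its length, for example cubically as in chart parsing, then splitting the sentence into chunks and processing each chunk separately lowers the total cost. The Python sketch below uses a deliberately naive surface-level chunking rule (splitting at coordinating conjunctions); the function names, the boundary word list, and the cubic cost model are illustrative assumptions, not taken from the paper.

```python
def naive_chunk(tokens, boundaries=("and", "but", "or")):
    """Split a token list into chunks at naive surface-level boundaries.

    Purely illustrative: the paper chunks DMRS graphs, not token strings.
    """
    chunks, current = [], []
    for tok in tokens:
        current.append(tok)
        if tok.lower() in boundaries:
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks


def cubic_cost(n):
    """Stand-in for a super-linear processing cost, e.g. O(n^3) chart parsing."""
    return n ** 3


sentence = ("The dog chased the cat and the cat climbed the tree "
            "but the bird watched from above").split()

chunks = naive_chunk(sentence)
whole_cost = cubic_cost(len(sentence))
chunked_cost = sum(cubic_cost(len(c)) for c in chunks)

print(f"{len(sentence)} tokens as one unit: cost {whole_cost}")
print(f"{len(chunks)} chunks of sizes {[len(c) for c in chunks]}: cost {chunked_cost}")
```

Running this toy example shows the summed per-chunk cost falling well below the cost of processing the whole sentence at once, which is the tractability argument the abstract appeals to; the paper itself investigates chunking over DMRS graph structures rather than this kind of surface split.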
Citation: Muszyńska, E. (2016). Graph- and surface-level sentence chunking. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 93–99). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p16-3014