Most natural language processing tasks depend on the outputs of other tasks, which they therefore incorporate as subtasks. The main problem with this type of pipelined model is that subtasks trained on their own data are not guaranteed to be optimal for the final target task, since they are never optimized with respect to it. As a solution to this problem, this paper proposes a consolidation of subtasks for a target task (CST2). In CST2, all parameters of a target task and its subtasks are optimized to fulfill the objective of the target task. CST2 finds such optimized parameters through a backpropagation algorithm. In experiments in which text chunking is the target task and part-of-speech tagging is its subtask, CST2 outperforms a traditional pipelined text chunker. The experimental results demonstrate the effectiveness of optimizing subtasks with respect to the target task.
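The core idea — letting the target-task loss backpropagate into the subtask's parameters instead of freezing them — can be illustrated with a minimal sketch. This is a hypothetical toy example in numpy, not the paper's actual CST2 architecture: a "subtask" layer (a stand-in for POS tagging) feeds a "target" layer (a stand-in for chunking), and the single target loss updates both parameter sets.

```python
import numpy as np

np.random.seed(0)

# Toy data: 4-dim word features mapped to one of 2 "chunk" labels.
X = np.random.randn(20, 4)
y = (X @ np.random.randn(4, 2)).argmax(axis=1)

# Subtask layer (stand-in for a POS tagger): features -> 3-dim tag representation.
W1 = np.random.randn(4, 3) * 0.1
# Target layer (stand-in for a chunker): tag representation -> chunk scores.
W2 = np.random.randn(3, 2) * 0.1

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
losses = []
for step in range(200):
    # Forward pass through subtask, then target.
    h = np.tanh(X @ W1)          # subtask representation
    p = softmax(h @ W2)          # target-task probabilities
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    losses.append(loss)

    # Backprop: the TARGET loss gradient flows into both W2 and W1,
    # so the subtask parameters are tuned for the target objective.
    dlogits = p.copy()
    dlogits[np.arange(len(y)), y] -= 1
    dlogits /= len(y)
    dW2 = h.T @ dlogits
    dh = dlogits @ W2.T
    dW1 = X.T @ (dh * (1 - h ** 2))   # tanh derivative
    W2 -= lr * dW2
    W1 -= lr * dW1
```

In a frozen pipeline, `dW1` would never be computed; the joint update of `W1` and `W2` from one objective is what distinguishes the consolidated training.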
Son, J. W., Cho, K., Ryu, W., Yoon, H., & Park, S. B. (2014). Consolidation of subtasks for target task in pipelined NLP model. ETRI Journal, 36(5), 704–713. https://doi.org/10.4218/etrij.14.2214.0035