We describe a broadly applicable conservative error-correcting model, N-fold Templated Piped Correction or NTPC ("nitpick"), that consistently improves the accuracy of existing high-accuracy base models. Even under circumstances where most obvious approaches reduce accuracy more than they improve it, NTPC carries little risk of accidentally degrading performance. NTPC is particularly well suited to natural language applications with high-dimensional feature spaces, such as bracketing and disambiguation tasks, since its easily customizable template-driven learner allows efficient search over the kinds of complex feature combinations that have typically eluded base models. We show empirically that NTPC yields small but consistent accuracy gains on top of even high-performing models such as boosting. We also give evidence that the various extreme design parameters in NTPC are indeed necessary for the intended operating range, even though they diverge from usual practice. © Springer-Verlag Berlin Heidelberg 2005.
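The abstract describes NTPC only at a high level: template-instantiated correction rules are piped onto a base model's output, and a rule is kept only when it proves safe across all n folds. The toy Python sketch below illustrates that conservative n-fold filter under stated assumptions; the single-feature "template", the fold data, and all function names are illustrative inventions, not the paper's actual algorithm or feature set.

```python
def propose_rules(examples, base_pred):
    """Instantiate candidate rules from a toy template at each base-model
    error site: (feature value -> corrected label). Examples are
    (feature, gold_label) pairs; base_pred holds the base model's labels."""
    rules = set()
    for (feat, gold), pred in zip(examples, base_pred):
        if pred != gold:
            rules.add((feat, gold))
    return rules

def rule_effect(rule, examples, base_pred):
    """Count, on one fold, how many base-model errors the rule fixes and
    how many correct predictions it would break."""
    feat, new_label = rule
    fixed = broken = 0
    for (f, gold), pred in zip(examples, base_pred):
        if f == feat and pred != new_label:
            if new_label == gold and pred != gold:
                fixed += 1
            elif pred == gold:
                broken += 1
    return fixed, broken

def ntpc_learn(folds):
    """Conservative n-fold filter: keep a candidate rule only if it fixes
    at least one error and breaks nothing on *every* fold."""
    candidates = set()
    for examples, base_pred in folds:
        candidates |= propose_rules(examples, base_pred)
    return {rule for rule in candidates
            if all(fixed >= 1 and broken == 0
                   for fixed, broken in
                   (rule_effect(rule, ex, bp) for ex, bp in folds))}

def apply_rules(rules, examples, base_pred):
    """Pipe the surviving corrections over the base model's output."""
    lut = dict(rules)
    return [lut.get(f, pred) for (f, _), pred in zip(examples, base_pred)]

# Toy data: the base model systematically mislabels feature 'b' in both
# folds, and makes one noise error on 'c' in the second fold only.
folds = [
    ([('a', 0), ('b', 1), ('c', 1)], [0, 0, 1]),
    ([('a', 0), ('b', 1), ('c', 1)], [0, 0, 0]),
]
rules = ntpc_learn(folds)  # only ('b', 1) survives the n-fold filter
print(apply_rules(rules, [('a', 0), ('b', 1), ('c', 1)], [0, 0, 1]))
# prints [0, 1, 1]
```

Note how the noise-driven candidate ('c', 1) is discarded because it is not validated on every fold, while the systematic correction ('b', 1) survives; this is the sense in which such a filter is conservative, trading recall of corrections for near-zero risk of degrading the base model.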
CITATION STYLE
Wu, D., Ngai, G., & Carpuat, M. (2005). NTPC: N-fold templated piped correction. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3248, pp. 476–486). Springer-Verlag. https://doi.org/10.1007/978-3-540-30211-7_50