In this report, we describe our Transformers for Text Classification Baseline (TTCB) submissions to the Shared Task on Implicit and Underspecified Language 2021. We cast the task of predicting revision requirements in collaboratively edited instructions as text classification. We considered Transformer-based models, which are the current state-of-the-art methods for text classification, and explored different training schemes, loss functions, and data augmentations. Our best result of 68.45% test accuracy (68.84% validation accuracy), however, comes from an XLNet model with a linear annealing scheduler and a cross-entropy loss. We do not observe a significant gain on any validation metric from our various design choices, with the exception of MiniLM, which achieves a higher validation F1 score and trains roughly twice as fast, but yields a lower validation accuracy.
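For illustration, the following is a minimal sketch of the kind of setup the abstract describes: fine-tuning XLNet for sequence classification with a linearly annealed learning rate and a cross-entropy loss, using the Hugging Face Transformers library. It is not the authors' released code; the hyperparameters, model checkpoint name, and training-step count are assumed values.

# Hedged sketch: XLNet fine-tuning with a linear LR schedule and
# cross-entropy loss (Hugging Face Transformers + PyTorch).
# Hyperparameters below are illustrative assumptions, not the paper's settings.
import torch
from torch.nn import CrossEntropyLoss
from transformers import (XLNetTokenizer, XLNetForSequenceClassification,
                          get_linear_schedule_with_warmup)

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased",
                                                       num_labels=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # assumed LR
num_training_steps = 1000  # assumed; depends on dataset size and epochs
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=num_training_steps)
loss_fn = CrossEntropyLoss()

def train_step(batch):
    """One update: forward pass, cross-entropy loss, backward, optimizer and
    scheduler step. `batch` is a dict of tensors from a tokenized DataLoader."""
    model.train()
    outputs = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"])
    loss = loss_fn(outputs.logits, batch["labels"])
    loss.backward()
    optimizer.step()
    scheduler.step()        # linearly anneal the learning rate
    optimizer.zero_grad()
    return loss.item()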
CITATION STYLE
Wiriyathammabhum, P. (2021). TTCB System Description to a Shared Task on Implicit and Underspecified Language 2021. In UNIMPLICIT 2021 - 1st Workshop on Understanding Implicit and Underspecified Language, Proceedings of the Workshop (pp. 64–70). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.unimplicit-1.8