Recurrent Neural Networks (RNNs) can handle (textual) inputs of varying lengths and are therefore widely used in software systems and software engineering tasks. RNNs rely on word embeddings, usually pre-trained by third parties, to encode textual inputs as numerical vectors. It is well known that problematic word embeddings can lead to low model accuracy. In this paper, we propose TRADER, a new technique that automatically diagnoses how problematic embeddings impact model performance by comparing model execution traces from correctly and incorrectly classified samples. We then leverage the diagnosis results as guidance to harden/repair the embeddings. Our experiments show that TRADER consistently and effectively improves accuracy for real-world models and datasets by 5.37% on average, a substantial improvement in the context of RNN models.
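To illustrate the general idea of contrasting execution traces, the sketch below records the hidden-state sequence of a small RNN and compares the traces of correctly and incorrectly classified samples to flag divergent state dimensions. This is only a minimal, hypothetical example assuming a PyTorch GRU classifier and synthetic data; it is not the authors' TRADER implementation, and the divergence score used here is an assumption for illustration.

```python
# Illustrative sketch (not the paper's TRADER algorithm): compare hidden-state
# traces of correctly vs. incorrectly classified samples. Model, data, and the
# divergence score are all hypothetical stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, EMB, HID, T, N = 100, 16, 32, 12, 64

class TinyRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.fc = nn.Linear(HID, 2)

    def forward(self, x):
        h_seq, _ = self.rnn(self.emb(x))   # (N, T, HID): the execution trace
        return self.fc(h_seq[:, -1]), h_seq

model = TinyRNN().eval()

# Synthetic batch with synthetic labels, standing in for a real test set.
x = torch.randint(0, VOCAB, (N, T))
y = torch.randint(0, 2, (N,))

with torch.no_grad():
    logits, traces = model(x)
preds = logits.argmax(dim=1)
correct_mask = preds == y
assert correct_mask.any() and (~correct_mask).any(), "need both groups to compare"

# Average trace per group, then an L2 divergence score per hidden dimension.
mean_correct = traces[correct_mask].mean(dim=0)       # (T, HID)
mean_wrong = traces[~correct_mask].mean(dim=0)        # (T, HID)
divergence = (mean_correct - mean_wrong).norm(dim=0)  # (HID,)

top_dims = divergence.topk(5).indices
print("Most divergent hidden dimensions:", top_dims.tolist())
```

In this toy setting the model is untrained, so the output only demonstrates the mechanics; in practice the divergent dimensions would point back to the embedding components that need hardening or repair.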
Citation
Tao, G., Ma, S., Liu, Y., Xu, Q., & Zhang, X. (2020). TRADER: Trace divergence analysis and embedding regulation for debugging recurrent neural networks. In Proceedings of the International Conference on Software Engineering (ICSE) (pp. 986–998). IEEE Computer Society. https://doi.org/10.1145/3377811.3380423