Pre-trained transformer models are the current state of the art for natural language processing. seBERT is one such model: it follows the BERT architecture, but was trained from scratch on software engineering data. We fine-tuned this model for the task of issue type prediction in the NLBSE challenge. Our model outperforms the fastText baseline in both precision and recall for all three issue types and achieves an overall F1-score of 85.7%, an increase of 4.1% over the baseline.
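As a rough illustration of the fine-tuning setup described above, the following is a minimal sketch using the HuggingFace transformers API. The checkpoint path "path/to/seBERT", the CSV file name, the column names, and the three label names are assumptions for illustration, not details from the paper.

```python
# Minimal sketch: fine-tuning a BERT-style encoder for three-class issue
# type classification with HuggingFace transformers and datasets.
# "path/to/seBERT" is a placeholder, not an official release location.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("path/to/seBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "path/to/seBERT", num_labels=3)  # e.g. bug / enhancement / question

def tokenize(batch):
    # Encode issue title and body as a text pair, truncated to the
    # model's maximum sequence length.
    return tokenizer(batch["title"], batch["body"],
                     truncation=True, padding="max_length")

# Hypothetical training data: one row per issue with an integer label.
train = load_dataset("csv", data_files="issues_train.csv")["train"]
train = train.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sebert-issues",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train,
)
trainer.train()
```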
Citation:
Trautsch, A., & Herbold, S. (2022). Predicting Issue Types with seBERT. In Proceedings - 1st International Workshop on Natural Language-Based Software Engineering, NLBSE 2022 (pp. 37–39). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3528588.3528661