Predicting Issue Types with seBERT


Abstract

Pre-trained transformer models are the current state of the art for natural language processing. seBERT is such a model: it is based on the BERT architecture, but was trained from scratch on software engineering data. We fine-tuned this model for the issue type prediction task of the NLBSE challenge. Our model outperforms the fastText baseline for all three issue types in both recall and precision, achieving an overall F1-score of 85.7%, an increase of 4.1% over the baseline.
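
The abstract does not include code; as a rough illustration, fine-tuning a BERT-style checkpoint for issue type classification can be sketched with the Hugging Face transformers and datasets libraries. This is a minimal sketch under stated assumptions, not the authors' implementation: the checkpoint name, the label set (bug, enhancement, question), and the toy data are placeholders.

# Minimal sketch of fine-tuning a BERT-style model for issue type
# classification. Checkpoint name, labels, and data are illustrative
# assumptions; the paper's actual training setup may differ.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

CHECKPOINT = "bert-base-uncased"  # placeholder; substitute the seBERT checkpoint
LABELS = ["bug", "enhancement", "question"]  # assumed NLBSE issue types

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=len(LABELS)
)

# Toy stand-in for the issue corpus: title and body concatenated into a
# single "text" field, label given as an integer index into LABELS.
data = Dataset.from_dict({
    "text": [
        "App crashes on startup. Stack trace attached.",
        "Add dark mode support to the settings page.",
        "How do I configure the proxy for the CLI?",
    ],
    "label": [0, 1, 2],
})

def tokenize(batch):
    # Truncate/pad issue texts to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

# The Trainer adds a classification head loss and handles batching;
# unused columns such as "text" are dropped automatically.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sebert-issue-types",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()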

Cite

APA: Trautsch, A., & Herbold, S. (2022). Predicting Issue Types with seBERT. In Proceedings - 1st International Workshop on Natural Language-Based Software Engineering, NLBSE 2022 (pp. 37–39). Association for Computing Machinery. https://doi.org/10.1145/3528588.3528661
