Abstract
In this study we address automatic summaries generated using modern artificial intelligence techniques. Several mathematical methods exist for evaluating the performance of automatic summarization. Such methods are commonly used because they allow many test cases to be assessed with little human effort, whereas manual assessments are challenging and time consuming. One question is whether the output of such measures matches human perception of summarization quality. We document a study involving the human evaluation of automatic summaries of 22 academic texts. The unique aspect of this study is that our participants were strongly familiar with the texts, having studied them in depth. The results are quite varied but do not suggest unanimous agreement that automatic summaries are of high quality and trusted.
Lotfigolian, M., Papanikolaou, C., Taghizadeh, S., & Sandnes, F. E. (2023). Human Experts’ Perceptions of Auto-Generated Summarization Quality. In ACM International Conference Proceeding Series (pp. 95–98). Association for Computing Machinery. https://doi.org/10.1145/3594806.3594828