Evaluating the selection of content in a summary is important both for human-written summaries, which can be a useful pedagogical tool for reading and writing skills, and machine-generated summaries, which are increasingly being deployed in information management. The pyramid method assesses a summary by aggregating content units from the summaries of a wise crowd (a form of crowdsourcing). It has proven highly reliable but has largely depended on manual annotation. We propose PEAK, the first method to automatically assess summary content using the pyramid method that also generates the pyramid content models. PEAK relies on open information extraction and graph algorithms. The resulting scores correlate well with manually derived pyramid scores on both human and machine summaries, opening up the possibility of widespread use in numerous applications.
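
For readers unfamiliar with pyramid scoring, the sketch below illustrates the core computation: each content unit is weighted by how many wise-crowd summaries express it, and a candidate summary is scored by the weight it recovers relative to an ideally informative summary of the same size. This is a minimal sketch of the classic pyramid score only; the function names are illustrative assumptions, and PEAK's actual contribution (deriving the content units automatically via open information extraction and graph algorithms) is not reproduced here.

# Minimal sketch of pyramid scoring, assuming content units (SCUs) have
# already been extracted and matched. PEAK derives and matches SCUs
# automatically; this sketch only shows the final scoring step.

def pyramid_score(scu_weights, matched_scus):
    """Score a summary against a pyramid of weighted content units.

    scu_weights: dict mapping each SCU id to its weight, i.e. the number
                 of wise-crowd summaries that express it.
    matched_scus: set of SCU ids the candidate summary expresses.
    """
    observed = sum(scu_weights[scu] for scu in matched_scus)
    # Ideal score for a summary expressing the same number of SCUs:
    # the sum of the k largest weights in the pyramid.
    k = len(matched_scus)
    ideal = sum(sorted(scu_weights.values(), reverse=True)[:k])
    return observed / ideal if ideal else 0.0


# Example: a pyramid built from 4 wise-crowd summaries.
weights = {"scu1": 4, "scu2": 3, "scu3": 2, "scu4": 1}
print(pyramid_score(weights, {"scu1", "scu3"}))  # (4 + 2) / (4 + 3) ~ 0.857
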
Citation
Yang, Q., Passonneau, R. J., & De Melo, G. (2016). PEAK: Pyramid evaluation via automated knowledge extraction. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 2673–2679). AAAI Press. https://doi.org/10.1609/aaai.v30i1.10336