Building Real-World Meeting Summarization Systems using Large Language Models: A Practical Perspective


Abstract

This paper studies how to effectively build meeting summarization systems for real-world use with large language models (LLMs). To this end, we conduct an extensive evaluation and comparison of closed-source and open-source LLMs, namely GPT-4, GPT-3.5, PaLM-2, and LLaMA-2. Our findings reveal that closed-source LLMs generally perform better. However, much smaller open-source models like LLaMA-2 (7B and 13B) can still achieve performance comparable to the large closed-source models, even in zero-shot scenarios. Given the privacy concerns raised by closed-source models being accessible only via API, alongside the high cost of using their fine-tuned versions, open-source models that achieve competitive performance are more advantageous for industrial use. Balancing performance against cost and privacy, the LLaMA-2-7B model looks the most promising for industrial usage. In sum, this paper offers practical insights on using LLMs for real-world business meeting summarization, shedding light on the trade-offs between performance and cost.
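The zero-shot setup the abstract describes can be illustrated with a minimal sketch. The prompt template and `summarize_meeting` helper below are assumptions for illustration, not the authors' exact prompt; the `generate` callable stands in for a locally hosted open-source model such as LLaMA-2-7B (e.g., via a Hugging Face `transformers` text-generation pipeline), which is what keeps transcripts on-premise and avoids the API privacy concerns of closed-source models.

```python
# Illustrative zero-shot meeting-summarization sketch (not the authors'
# exact prompt). A locally hosted model such as LLaMA-2-7B would fill
# the `generate` callable, e.g. a transformers text-generation pipeline.

PROMPT_TEMPLATE = (
    "Summarize the following business meeting transcript in a few "
    "concise sentences, covering key decisions and action items.\n\n"
    "Transcript:\n{transcript}\n\nSummary:"
)


def build_prompt(transcript: str) -> str:
    """Construct the zero-shot prompt for a given transcript."""
    return PROMPT_TEMPLATE.format(transcript=transcript.strip())


def summarize_meeting(transcript: str, generate) -> str:
    """Summarize a transcript with any text-generation callable.

    `generate` takes a prompt string and returns the model's completion.
    Swapping in a local LLaMA-2-7B pipeline here keeps meeting data
    in-house instead of sending it to a closed-source API.
    """
    return generate(build_prompt(transcript)).strip()


if __name__ == "__main__":
    # Stub generator for demonstration; a real deployment plugs in a model.
    stub = lambda prompt: "The team agreed to ship the release on Friday."
    print(summarize_meeting("Alice: Can we ship Friday? Bob: Yes.", stub))
```

The model is deliberately passed in as a plain callable so the same code can be benchmarked against closed-source APIs and open-source checkpoints alike, mirroring the paper's comparison.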

Citation (APA)

Laskar, M. T. R., Fu, X. Y., Chen, C., & Bhushan, S. T. N. (2023). Building Real-World Meeting Summarization Systems using Large Language Models: A Practical Perspective. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Industry Track (pp. 343–352). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-industry.33
