arXiv Open Access 2024

Prompting and Fine-Tuning of Small LLMs for Length-Controllable Telephone Call Summarization

David Thulke Yingbo Gao Rricha Jalota Christian Dugast Hermann Ney

Abstract

This paper explores the rapid development of a telephone call summarization system utilizing large language models (LLMs). Our approach involves initial experiments with prompting existing LLMs to generate summaries of telephone conversations, followed by the creation of a tailored synthetic training dataset utilizing stronger frontier models. We place special focus on the diversity of the generated data and on the ability to control the length of the generated summaries to meet various use-case-specific requirements. The effectiveness of our method is evaluated using two state-of-the-art LLM-as-a-judge-based evaluation techniques to ensure the quality and relevance of the summaries. Our results show that our fine-tuned Llama-2-7B-based summarization model performs on par with GPT-4 in terms of factual accuracy, completeness, and conciseness. Our findings demonstrate the potential for quickly bootstrapping a practical and efficient call summarization system.
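The length control described in the abstract could, in its simplest form, be implemented at the prompting stage by stating an explicit word budget in the instruction. The sketch below illustrates that idea only; the bucket thresholds, wording, and function name are assumptions for illustration, not the prompts actually used in the paper:

```python
def build_summary_prompt(transcript: str, target_words: int) -> str:
    """Build a length-controlled summarization prompt for a call transcript.

    NOTE: the length buckets and instruction wording here are illustrative
    assumptions, not the paper's actual prompt templates.
    """
    if target_words <= 30:
        style = "a single concise sentence"
    elif target_words <= 80:
        style = "a short paragraph"
    else:
        style = "a detailed multi-paragraph summary"
    return (
        "Summarize the following telephone call as "
        f"{style} of at most {target_words} words. "
        "Include only facts stated in the call.\n\n"
        f"Call transcript:\n{transcript}"
    )

# Example: request a one-sentence summary of a short exchange.
prompt = build_summary_prompt(
    "Agent: Hello, how can I help? Caller: I'd like to cancel my order.",
    target_words=30,
)
```

The resulting string would be sent as the user message to the summarization model; a fine-tuned model could instead learn the length behavior from synthetic training pairs annotated with such targets.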


Citation Format

Thulke, D., Gao, Y., Jalota, R., Dugast, C., & Ney, H. (2024). Prompting and Fine-Tuning of Small LLMs for Length-Controllable Telephone Call Summarization. https://arxiv.org/abs/2410.18624

Journal Information

Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓