arXiv Open Access 2024

Evaluating Zero-Shot Multilingual Aspect-Based Sentiment Analysis with Large Language Models

Chengyan Wu, Bolei Ma, Zheyu Zhang, Ningyuan Deng, Yanqing He, +1 more

Abstract

Aspect-based sentiment analysis (ABSA), a sequence labeling task, has attracted increasing attention in multilingual contexts. While previous research has focused largely on fine-tuning or training models specifically for ABSA, we evaluate large language models (LLMs) under zero-shot conditions to explore their potential to tackle this challenge with minimal task-specific adaptation. We conduct a comprehensive empirical evaluation of a series of LLMs on multilingual ABSA tasks, investigating various prompting strategies, including vanilla zero-shot, chain-of-thought (CoT), self-improvement, self-debate, and self-consistency, across nine different models. Results indicate that while LLMs show promise in handling multilingual ABSA, they generally fall short of fine-tuned, task-specific models. Notably, simpler zero-shot prompts often outperform more complex strategies, especially in high-resource languages like English. These findings underscore the need for further refinement of LLM-based approaches to effectively address the ABSA task across diverse languages.
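To make the "vanilla zero-shot" setting concrete, the sketch below composes a minimal zero-shot ABSA prompt that asks an LLM to extract (aspect term, sentiment polarity) pairs from a single review sentence. The prompt wording, label set, and function name are illustrative assumptions, not the templates used in the paper.

```python
# Minimal sketch of a vanilla zero-shot ABSA prompt (assumed wording,
# not the paper's actual template).

def build_zero_shot_prompt(sentence: str) -> str:
    """Compose a zero-shot instruction asking an LLM to extract
    (aspect term, sentiment polarity) pairs from one review sentence."""
    return (
        "Extract every aspect term mentioned in the review sentence below "
        "and label its sentiment as positive, negative, or neutral. "
        "Answer as a list of (aspect, sentiment) pairs.\n\n"
        f"Sentence: {sentence}\n"
        "Pairs:"
    )

prompt = build_zero_shot_prompt("The food was great but the service was slow.")
print(prompt)
```

The more elaborate strategies the paper evaluates (CoT, self-improvement, self-debate, self-consistency) would wrap or iterate on a base prompt like this one, e.g. by appending reasoning instructions or sampling multiple completions and taking a majority vote.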


Authors (6)

Chengyan Wu
Bolei Ma
Zheyu Zhang
Ningyuan Deng
Yanqing He
Yun Xue

Citation Format

Wu, C., Ma, B., Zhang, Z., Deng, N., He, Y., & Xue, Y. (2024). Evaluating Zero-Shot Multilingual Aspect-Based Sentiment Analysis with Large Language Models. https://arxiv.org/abs/2412.12564

Journal Information
Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓