arXiv Open Access 2025

ExpliCa: Evaluating Explicit Causal Reasoning in Large Language Models

Martina Miliani, Serena Auriemma, Alessandro Bondielli, Emmanuele Chersoni, Lucia Passaro, +2 others

Abstract

Large Language Models (LLMs) are increasingly used in tasks requiring interpretive and inferential accuracy. In this paper, we introduce ExpliCa, a new dataset for evaluating LLMs in explicit causal reasoning. ExpliCa uniquely integrates both causal and temporal relations presented in different linguistic orders and explicitly expressed by linguistic connectives. The dataset is enriched with crowdsourced human acceptability ratings. We tested LLMs on ExpliCa through prompting and perplexity-based metrics. We assessed seven commercial and open-source LLMs, revealing that even top models struggle to reach 0.80 accuracy. Interestingly, models tend to confound temporal relations with causal ones, and their performance is also strongly influenced by the linguistic order of the events. Finally, perplexity-based scores and prompting performance are differently affected by model size.
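The perplexity-based metrics mentioned in the abstract score how plausible a model finds a sentence: lower perplexity means the model assigns the token sequence higher probability. A minimal sketch of the idea, assuming access to per-token log-probabilities from some model (the numbers and the comparison setup below are illustrative, not taken from the paper's actual pipeline):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean token log-probability.

    A lower value means the model finds the sequence more plausible, so
    comparing the causal and temporal connective variants of the same
    event pair indicates which reading the model prefers.
    """
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

# Hypothetical per-token log-probabilities for two connective variants
# of the same event pair (illustrative values only):
causal   = [-1.2, -0.8, -0.5, -1.0]   # e.g. "... so ..."
temporal = [-1.4, -1.1, -0.9, -1.3]   # e.g. "... then ..."

preferred = "causal" if perplexity(causal) < perplexity(temporal) else "temporal"
```

With these made-up scores the causal variant has the lower perplexity, so `preferred` is `"causal"`; the paper applies this kind of comparison at scale across its annotated sentence pairs.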


Authors (7)

Martina Miliani
Serena Auriemma
Alessandro Bondielli
Emmanuele Chersoni
Lucia Passaro
Irene Sucameli
Alessandro Lenci

Citation

Miliani, M., Auriemma, S., Bondielli, A., Chersoni, E., Passaro, L., Sucameli, I., & Lenci, A. (2025). ExpliCa: Evaluating Explicit Causal Reasoning in Large Language Models. arXiv. https://arxiv.org/abs/2502.15487

Journal Information

Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓