arXiv Open Access 2023

Rethinking Large Language Models in Mental Health Applications

Shaoxiong Ji Tianlin Zhang Kailai Yang Sophia Ananiadou Erik Cambria

Abstract

Large Language Models (LLMs) have become valuable assets in mental health, showing promise in both classification tasks and counseling applications. This paper offers a perspective on using LLMs in mental health applications. It discusses the instability of generative models for prediction and the potential for generating hallucinatory outputs, underscoring the need for ongoing audits and evaluations to maintain their reliability and dependability. The paper also distinguishes between the often interchangeable terms "explainability" and "interpretability", advocating for developing inherently interpretable methods instead of relying on potentially hallucinated self-explanations generated by LLMs. Despite the advancements in LLMs, human counselors' empathetic understanding, nuanced interpretation, and contextual awareness remain irreplaceable in the sensitive and complex realm of mental health counseling. The use of LLMs should be approached with a judicious and considerate mindset, viewing them as tools that complement human expertise rather than seeking to replace it.



Citation

Ji, S., Zhang, T., Yang, K., Ananiadou, S., & Cambria, E. (2023). Rethinking Large Language Models in Mental Health Applications. https://arxiv.org/abs/2311.11267

Journal Information

Publication Year: 2023
Language: en
Source Database: arXiv
Access: Open Access ✓