arXiv Open Access 2024

Harnessing Large Language Models: Fine-tuned BERT for Detecting Charismatic Leadership Tactics in Natural Language

Yasser Saeid, Felix Neubürger, Stefanie Krügl, Helena Hüster, Thomas Kopinski, Ralf Lanwehr

Abstract

This work investigates the identification of Charismatic Leadership Tactics (CLTs) in natural language using a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model. Based on an extensive corpus of CLTs generated and curated for this task, our methodology entails training a machine learning model capable of accurately identifying the presence of these tactics in natural language. A performance evaluation is conducted to assess the effectiveness of the model in detecting CLTs. We find that the total accuracy over the detection of all CLTs is 98.96%. The results of this study have significant implications for research in psychology and management, offering potential methods to simplify the currently elaborate assessment of charisma in texts.


Authors (6)

Yasser Saeid

Felix Neubürger

Stefanie Krügl

Helena Hüster

Thomas Kopinski

Ralf Lanwehr

Citation Format

Saeid, Y., Neubürger, F., Krügl, S., Hüster, H., Kopinski, T., & Lanwehr, R. (2024). Harnessing Large Language Models: Fine-tuned BERT for Detecting Charismatic Leadership Tactics in Natural Language. arXiv:2409.18984. https://arxiv.org/abs/2409.18984

Journal Information
Year Published
2024
Language
en
Source Database
arXiv
Access
Open Access ✓