DOAJ Open Access 2024

Enhancing Misinformation Detection in Spanish Language with Deep Learning: BERT and RoBERTa Transformer Models

Yolanda Blanco-Fernández, Javier Otero-Vizoso, Alberto Gil-Solla, Jorge García-Duque

Abstract

This paper presents an approach to identifying political fake news in Spanish using Transformer architectures. Current methodologies often overlook political news due to the lack of quality datasets, especially in Spanish. To address this, we created a synthetic dataset of 57,231 Spanish political news articles, gathered via automated web scraping and enhanced with generative large language models. This dataset is used for fine-tuning and benchmarking Transformer models like BERT and RoBERTa for fake news detection. Our fine-tuned models showed outstanding performance on this dataset, with accuracy ranging from 97.4% to 98.6%. However, testing with a smaller, independent hand-curated dataset, including statements from political leaders during Spain’s July 2023 electoral debates, revealed a performance drop to 71%. Although this suggests that the model needs additional refinements to handle the complexity and variability of real-world political discourse, achieving over 70% accuracy seems a promising result in the under-explored domain of Spanish political fake news detection.

Authors (4)

Yolanda Blanco-Fernández

Javier Otero-Vizoso

Alberto Gil-Solla

Jorge García-Duque

Citation Format

Blanco-Fernández, Y., Otero-Vizoso, J., Gil-Solla, A., & García-Duque, J. (2024). Enhancing Misinformation Detection in Spanish Language with Deep Learning: BERT and RoBERTa Transformer Models. Applied Sciences. https://doi.org/10.3390/app14219729

Quick Access

PDF not directly available

Check the original source →
View at Source doi.org/10.3390/app14219729
Journal Information
Publication Year
2024
Database Source
DOAJ
DOI
10.3390/app14219729
Access
Open Access ✓