
A multimodal turn in Digital Humanities. Using contrastive machine learning models to explore, enrich, and analyze digital visual historical collections

Thomas Smits, Melvin Wevers

Abstract

Until recently, most research in the Digital Humanities (DH) was monomodal, meaning that the object of analysis was either textual or visual. Seeking to integrate multimodality theory into the DH, this article demonstrates that recently developed multimodal deep learning models, such as Contrastive Language Image Pre-training (CLIP), offer new possibilities to explore and analyze image–text combinations at scale. These models, which are trained on image and text pairs, can be applied to a wide range of text-to-image, image-to-image, and image-to-text prediction tasks. Moreover, multimodal models show high accuracy in zero-shot classification, i.e. predicting unseen categories across heterogeneous datasets. Based on three exploratory case studies, we argue that this zero-shot capability opens the way for a multimodal turn in DH research. Multimodal models also allow scholars to move past the artificial separation of text and images that was dominant in the field and to analyze multimodal meaning at scale. However, we also need to be aware of the specific (historical) bias of multimodal deep learning, which stems from biases in the data used to train these models.
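The zero-shot mechanism the abstract describes — matching an image embedding against the embeddings of candidate text prompts in a shared space — can be sketched with toy vectors. The embeddings below are made up for illustration only; a real application would obtain them from a pretrained CLIP model.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels, temperature=0.01):
    """Rank candidate labels by cosine similarity between an image
    embedding and text-prompt embeddings, as CLIP-style models do."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                 # cosine similarity per prompt
    logits = sims / temperature      # CLIP scales by a (learned) temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()             # softmax over candidate labels
    return dict(zip(labels, probs))

# Toy embeddings (illustrative, not real CLIP outputs)
image = np.array([0.9, 0.1, 0.2])
prompts = np.array([
    [0.8, 0.2, 0.1],   # e.g. "a photo of a newspaper page"
    [0.1, 0.9, 0.3],   # e.g. "a photo of a painting"
])
scores = zero_shot_classify(image, prompts, ["newspaper page", "painting"])
best = max(scores, key=scores.get)
```

Because the candidate labels are supplied as free text at prediction time, the same model can classify heterogeneous historical collections against categories it was never explicitly trained on.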

Authors (2)

Thomas Smits

Melvin Wevers

Citation Format

Smits, T., & Wevers, M. (2023). A multimodal turn in Digital Humanities. Using contrastive machine learning models to explore, enrich, and analyze digital visual historical collections. https://doi.org/10.1093/llc/fqad008

Quick Access

View at Source: doi.org/10.1093/llc/fqad008
Journal Information
Publication Year
2023
Language
en
Total Citations
28×
Source Database
CrossRef
DOI
10.1093/llc/fqad008
Access
Open Access ✓