ABSTRACT
This article investigates the relationship between language and civic identity in two interconnected contexts: Classical Athens and Hellenistic-Roman Delos. Drawing on literary, epigraphic, and archaeological sources, it shows how linguistic competence served as a marker of civic inclusion or exclusion. In Athens, mastery of Attic was central to ideological constructions of citizenship and often used polemically, especially against demagogues. In Delos, by contrast, daily interaction among Greeks, Romans, and Easterners gave rise to multilingual practices and hybrid identity negotiations. Greek emerged as a vehicle for social permeability, while Latin reinforced legal distinctions. Adopting the theoretical framework of ‘metrolingualism’, the article argues that language, space, and material culture operated together in shaping civic identity. Yet, this fluidity unfolded within a stable juridical-institutional background that remained largely untouched: despite the multiplicity of voices and interactions, the boundary between citizens and non-citizens, between inclusion and exclusion, continued to be defined and preserved at the institutional level.
History of the Greco-Roman World, Greek language and literature. Latin language and literature
This paper evaluates the performance of transformer-based language models on split-ergative case alignment in Georgian, a particularly rare system for assigning grammatical cases to mark argument roles. We focus on subject and object marking determined through various permutations of nominative, ergative, and dative noun forms. A treebank-based approach for the generation of minimal pairs using the Grew query language is implemented. We create a dataset of 370 syntactic tests made up of seven tasks containing 50-70 samples each, where three noun forms are tested in any given sample. Five encoder- and two decoder-only models are evaluated with word- and/or sentence-level accuracy metrics. Regardless of the specific syntactic makeup, models performed worst in assigning the ergative case correctly and strongest in assigning the nominative case correctly. Performance correlated with the overall frequency distribution of the three forms (NOM > DAT > ERG). Though data scarcity is a known issue for low-resource languages, we show that the highly specific role of the ergative along with a lack of available training data likely contributes to poor performance on this case. The dataset is made publicly available and the methodology provides an interesting avenue for future syntactic evaluations of languages where benchmarks are limited.
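The minimal-pair idea behind these syntactic tests can be illustrated with a small sketch. The stem and case suffixes below are transliterated placeholders chosen purely for illustration, and the function is hypothetical; the paper derives its pairs from a treebank via Grew queries, which is not reproduced here:

```python
# Hypothetical sketch of minimal-pair generation for case-marking tests.
# Suffixes are illustrative transliterations of the three cases under study.
CASES = {"NOM": "i", "ERG": "ma", "DAT": "s"}

def minimal_pairs(stem, template, correct_case):
    """Yield (grammatical, ungrammatical, wrong_case) sentence variants that
    differ only in the case suffix attached to one noun slot."""
    good = template.format(noun=stem + CASES[correct_case])
    for case, suffix in CASES.items():
        if case != correct_case:
            yield good, template.format(noun=stem + suffix), case

# A one-noun intransitive template ('the man goes'); the correct subject
# case is nominative, so the ergative and dative variants are the foils.
pairs = list(minimal_pairs("kac", "{noun} midis", "NOM"))
```

A model is then scored on whether it prefers the grammatical member of each pair, which is how word-level accuracy per case can be aggregated.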
Recent advancements in Natural Language Processing and Deep Learning have enabled the development of Large Language Models (LLMs), which have significantly advanced the state-of-the-art across a wide range of tasks, including Question Answering (QA). Despite these advancements, research on LLMs has primarily targeted high-resourced languages (e.g., English), and only recently has attention shifted toward multilingual models. However, these models demonstrate a training data bias towards a small number of popular languages or rely on transfer learning from high- to under-resourced languages; this may lead to a misrepresentation of social, cultural, and historical aspects. To address this challenge, monolingual LLMs have been developed for under-resourced languages; however, their effectiveness remains less studied when compared to multilingual counterparts on language-specific tasks. In this study, we address this research gap in Greek QA by contributing: (i) DemosQA, a novel dataset, which is constructed using social media user questions and community-reviewed answers to better capture the Greek social and cultural zeitgeist; (ii) a memory-efficient LLM evaluation framework adaptable to diverse QA datasets and languages; and (iii) an extensive evaluation of 11 monolingual and multilingual LLMs on 6 human-curated Greek QA datasets using 3 different prompting strategies. We release our code and data to facilitate reproducibility.
Jacqueline Rowe, Mateusz Klimaszewski, Liane Guillou
et al.
Large language models increasingly support multiple languages, yet most benchmarks for gender bias remain English-centric. We introduce EuroGEST, a dataset designed to measure gender-stereotypical reasoning in LLMs across English and 29 European languages. EuroGEST builds on an existing expert-informed benchmark covering 16 gender stereotypes, expanded in this work using translation tools, quality estimation metrics, and morphological heuristics. Human evaluations confirm that our data generation method results in high accuracy of both translations and gender labels across languages. We use EuroGEST to evaluate 24 multilingual language models from six model families, demonstrating that the strongest stereotypes in all models across all languages are that women are 'beautiful', 'empathetic' and 'neat' and men are 'leaders', 'strong, tough' and 'professional'. We also show that larger models encode gendered stereotypes more strongly and that instruction finetuning does not consistently reduce gendered stereotypes. Our work highlights the need for more multilingual studies of fairness in LLMs and offers scalable methods and resources to audit gender bias across languages.
Similar to LLMs, the development of vision language models (VLMs) is driven mainly by English datasets and by models trained on English and Chinese, whereas support for other languages, even high-resource ones such as German, remains significantly weaker. In this work we present an analysis of open-weight VLMs on factual knowledge in German and English. We disentangle the image-related aspects from the textual ones by analyzing accuracy with a jury-as-a-judge setup in both prompt languages and with images from German and international contexts. We found that for celebrities and sights, VLMs struggle because they lack visual knowledge of German image contents. For animals and plants, the tested models can often correctly identify the image contents according to the scientific name or English common name but fail in German. Cars and supermarket products were identified equally well in English and German images across both prompt languages.
On the basis of historiographical texts and letters, the article analyses Enea Silvio Piccolomini's attitude towards the family of the Counts of Celje. The author finds that Piccolomini's views of the family changed drastically after the founding of the Mailberg League of the Austrian nobility in 1452, which demanded guardianship over the underage Ladislaus the Posthumous. The author sees the reasons for this change above all in the conduct of Ulrich of Celje, who joined the league despite promises to the contrary and thereby betrayed the ruler's trust. Piccolomini's accounts have little chronological or interpretative value for the history of the Counts of Celje, but they definitively cemented the family's negative image in humanist and early modern historiography.
Greek language and literature. Latin language and literature
The paper detects some ancient transpositions in Seneca’s De beneficiis which caused serious corruptions. Therefore, a new constitutio textus is proposed at Sen. ben. 2.14.5, 2.34.3, 3.23.4, 3.29.5, 4.40.2, 5.1.5, 5.12.4.
Greek language and literature. Latin language and literature, History of the Greco-Roman World
Reference errors, such as citation and quotation errors, are common in scientific papers. Such errors can result in the propagation of inaccurate information, but are difficult and time-consuming to detect, posing a significant challenge to scientific publishing. To support automatic detection of reference errors, this work evaluated the ability of large language models in OpenAI's GPT family to detect quotation errors. Specifically, we prepared an expert-annotated, general-domain dataset of statement-reference pairs from journal articles. Large language models were evaluated in different settings with varying amounts of reference information provided by retrieval augmentation. Our results showed that large language models are able to detect erroneous citations with limited context and without fine-tuning. This study contributes to the growing literature that seeks to utilize artificial intelligence to assist in the writing, reviewing, and publishing of scientific papers. Potential avenues for further improvements in this task are also discussed.
Specific Language Impairment (SLI) is a disorder that affects communication and can impair both comprehension and expression. This study focuses on effectively detecting SLI in children using transcripts of spontaneous narratives from 1063 interviews. A three-stage cascading pipeline is proposed. In the first stage, feature extraction and dimensionality reduction are performed using Random Forest (RF) and Spearman correlation methods. In the second stage, the most predictive variables from the first stage are estimated using logistic regression, which is then used in the last stage to detect SLI from the transcripts of spontaneous narratives with a nearest-neighbor classifier. The results revealed an accuracy of 97.13% in identifying SLI, highlighting aspects such as the length of the responses, the quality of the utterances, and the complexity of the language. This new approach, framed in natural language processing, offers significant benefits to the field of SLI detection by avoiding complex subjective variables and focusing on quantitative metrics directly related to the child's performance.
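A minimal sketch of such a three-stage cascade, using synthetic stand-in features and scikit-learn; the actual linguistic features, Spearman filtering, and hyperparameter tuning are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for transcript-derived features (e.g. response length,
# utterance quality); the real feature set is not specified here.
n, d = 300, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.8 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

# Stage 1: dimensionality reduction via Random Forest feature importances.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[-5:]

# Stage 2: logistic regression on the selected features; its probability
# estimate serves as a compact predictive variable.
lr = LogisticRegression().fit(X[:, top], y)
z = lr.predict_proba(X[:, top])[:, [1]]

# Stage 3: nearest-neighbour classifier on the stage-2 scores.
knn = KNeighborsClassifier(n_neighbors=5).fit(z, y)
acc = knn.score(z, y)
print(f"training accuracy: {acc:.2f}")
```

The cascade's appeal is that each stage compresses the input further, so the final classifier operates on a very low-dimensional, highly predictive signal.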
Rapid advancements in large language model (LLM) technologies have led to the introduction of powerful open-source instruction-tuned LLMs that match the text-generation quality of state-of-the-art counterparts such as GPT-4. While the emergence of such models accelerates the adoption of LLM technologies in sensitive-information environments, the authors of such models do not disclose the training data necessary to replicate the results, making the achievements model-exclusive. Since those open-source models are also multilingual, this in turn reduces the benefits of training language-specific LLMs, as improved inference efficiency becomes the only guaranteed advantage of such a costly procedure. More cost-efficient options, such as vocabulary extension and subsequent continued pre-training, are also inhibited by the lack of access to high-quality instruction-tuning data, since such data is the major factor behind the resulting LLM's task-solving capabilities. To address these limitations and cut the costs of the language-adaptation pipeline, we propose Learned Embedding Propagation (LEP). Unlike existing approaches, our method has lower training-data requirements due to its minimal impact on existing LLM knowledge, which we reinforce using a novel ad-hoc embedding-propagation procedure that allows the instruction-tuning step to be skipped, instead implanting the new language knowledge directly into any existing instruct-tuned variant. We evaluated four Russian vocabulary adaptations for LLaMa-3-8B and Mistral-7B, showing that LEP is competitive with traditional instruction-tuning methods, achieving performance comparable to OpenChat 3.5 and LLaMa-3-8B-Instruct, with further improvements via self-calibration and continued tuning enhancing task-solving capabilities.
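The implantation step can be caricatured with a toy sketch. This is a hypothetical simplification, not the paper's actual LEP procedure: rows for new vocabulary tokens, learned while adapting the base model, are carried into the instruct-tuned model after a mean-drift correction estimated on the shared vocabulary.

```python
import numpy as np

def propagate_embeddings(base_emb, adapted_emb, instruct_emb, new_token_ids):
    """Toy embedding propagation: copy new-token rows from the adapted base
    model into the instruct model, shifted by the average embedding drift
    observed between base and instruct models on the shared vocabulary."""
    new = set(new_token_ids)
    shared = [i for i in range(base_emb.shape[0]) if i not in new]
    drift = (instruct_emb[shared] - base_emb[shared]).mean(axis=0)
    out = instruct_emb.copy()
    for i in new_token_ids:
        out[i] = adapted_emb[i] + drift
    return out

# Toy demo: the instruct model drifted +0.5 from the base; token 3 is "new"
# and its row was learned during vocabulary adaptation.
base = np.arange(8.0).reshape(4, 2)
instruct = base + 0.5
adapted = base.copy()
adapted[3] = [9.0, 9.0]
print(propagate_embeddings(base, adapted, instruct, [3])[3])  # [9.5 9.5]
```

The point of any such scheme is that no instruction tuning is rerun: only the embedding rows change, while the instruct model's transformer weights stay untouched.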
Elmurod Kuriyozov, Ulugbek Salaev, Sanatbek Matlatipov
et al.
Text classification is an important task in Natural Language Processing (NLP), where the goal is to categorize text data into predefined classes. In this study, we analyse the dataset creation steps and evaluation techniques of multi-label news categorisation task as part of text classification. We first present a newly obtained dataset for Uzbek text classification, which was collected from 10 different news and press websites and covers 15 categories of news, press and law texts. We also present a comprehensive evaluation of different models, ranging from traditional bag-of-words models to deep learning architectures, on this newly created dataset. Our experiments show that the Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) based models outperform the rule-based models. The best performance is achieved by the BERTbek model, which is a transformer-based BERT model trained on the Uzbek corpus. Our findings provide a good baseline for further research in Uzbek text classification.
This article examines several passages of Plato's Republic in order to understand some of the presuppositions and consequences of the philosopher's argument on the status of the passions, which culminates in the incisive and radical claim, in Book X, that "the emotions must be dried up" (Rep. 606d). This is because, according to Plato, they lead men to act irrationally and like cowards, especially those citizens who grow accustomed to poetry, above all dramatic poetry, which incites the passions. Against the philosopher's critique of the value of theatre and of expressions of grief (such as lament and weeping), I propose to revalue the role of emotion in the formation of the citizen and in the political strengthening of a group, taking a contemporary case: the funeral ceremony held by the French government on the occasion of the death of the actor Jean-Paul Belmondo, in the context of the SARS-CoV-2 pandemic crisis.
Greek language and literature. Latin language and literature
Scandinavian countries are perceived as role-models when it comes to gender equality. With the advent of pre-trained language models and their widespread usage, we investigate to what extent gender-based harmful and toxic content exist in selected Scandinavian language models. We examine nine models, covering Danish, Swedish, and Norwegian, by manually creating template-based sentences and probing the models for completion. We evaluate the completions using two methods for measuring harmful and toxic completions and provide a thorough analysis of the results. We show that Scandinavian pre-trained language models contain harmful and gender-based stereotypes with similar values across all languages. This finding goes against the general expectations related to gender equality in Scandinavian countries and shows the possible problematic outcomes of using such models in real-world settings.
Benjamin Muller, Deepanshu Gupta, Siddharth Patwardhan
et al.
Multilingual language models (LMs), such as mBERT, XLM-R, mT5, and mBART, have been remarkably successful in enabling natural language tasks in low-resource languages through cross-lingual transfer from high-resource ones. In this work, we try to better understand how such models, specifically mT5, transfer *any* linguistic and semantic knowledge across languages, even though no explicit cross-lingual signals are provided during pre-training. Rather, only unannotated texts from each language are presented to the model separately and independently of one another, and the model appears to implicitly learn cross-lingual connections. This raises several questions that motivate our study, such as: Are the cross-lingual connections between every language pair equally strong? What properties of the source and target language impact the strength of cross-lingual transfer? Can we quantify the impact of those properties on cross-lingual transfer? In our investigation, we analyze a pre-trained mT5 model to discover the attributes of the cross-lingual connections it has learned. Through a statistical interpretation framework over 90 language pairs across three tasks, we show that transfer performance can be modeled by a few linguistic and data-derived features. These observations enable us to interpret the cross-lingual understanding of the mT5 model. Through these observations, one can favorably choose the best source language for a task and anticipate its training-data demands. A key finding of this work is that similarity of syntax, morphology, and phonology are good predictors of cross-lingual transfer, significantly more so than mere lexical similarity of languages. For a given language, we are able to predict zero-shot performance, which increases on a logarithmic scale with the number of few-shot target-language data points.
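The logarithmic relationship between few-shot data size and performance can be sketched as a simple fit; the numbers below are synthetic placeholders, not results from the paper:

```python
import numpy as np

# Hypothetical (data points, accuracy) pairs illustrating roughly
# log-linear growth of task performance with few-shot data size.
n_shots = np.array([10, 100, 1000, 10000])
score = np.array([40.0, 52.0, 63.5, 76.0])

# Fit accuracy as a linear function of log10(data points).
slope, intercept = np.polyfit(np.log10(n_shots), score, 1)

def predict(n):
    """Predicted score for a target-language training set of size n."""
    return slope * np.log10(n) + intercept
```

Under such a model, each tenfold increase in target-language data buys a roughly constant gain in score, which is what "increases on a logarithmic scale" amounts to in practice.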
Accuracy of English-language Question Answering (QA) systems has improved significantly in recent years with the advent of Transformer-based models (e.g., BERT). These models are pre-trained in a self-supervised fashion with a large English text corpus and further fine-tuned with a massive English QA dataset (e.g., SQuAD). However, QA datasets on such a scale are not available for most of the other languages. Multi-lingual BERT-based models (mBERT) are often used to transfer knowledge from high-resource languages to low-resource languages. Since these models are pre-trained with huge text corpora containing multiple languages, they typically learn language-agnostic embeddings for tokens from different languages. However, directly training an mBERT-based QA system for low-resource languages is challenging due to the paucity of training data. In this work, we augment the QA samples of the target language using translation and transliteration into other languages and use the augmented data to fine-tune an mBERT-based QA model, which is already pre-trained in English. Experiments on the Google ChAII dataset show that fine-tuning the mBERT model with translations from the same language family boosts the question-answering performance, whereas the performance degrades in the case of cross-language families. We further show that introducing a contrastive loss between the translated question-context feature pairs during the fine-tuning process, prevents such degradation with cross-lingual family translations and leads to marginal improvement. The code for this work is available at https://github.com/gokulkarthik/mucot.
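A contrastive loss over translated feature pairs can be sketched in an InfoNCE style; this is a generic formulation under the assumption of batch-level negatives, and the paper's exact loss may differ:

```python
import numpy as np

def contrastive_loss(orig, trans, temperature=0.1):
    """InfoNCE-style loss: each sample's features are pulled toward the
    features of its own translation (the diagonal) and pushed away from
    the other samples in the batch. orig and trans are (batch, dim)
    matrices, stand-ins for pooled question-context encodings."""
    orig = orig / np.linalg.norm(orig, axis=1, keepdims=True)
    trans = trans / np.linalg.norm(trans, axis=1, keepdims=True)
    logits = orig @ trans.T / temperature            # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))            # positives on diagonal

e = np.eye(4)
print(contrastive_loss(e, e))  # near zero for perfectly aligned pairs
```

Minimizing this term encourages an mBERT-style encoder to map a question-context pair and its translation to nearby points, which is the intended guard against degradation from cross-family translations.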
In 1971 Louis Holtz gave the editio princeps of a grammatical chapter about interjections, from the ms. Paris, Bibliothèque Nationale, Lat. 7530 (=P), sec. VIIIex. (ff. 220v, 31 Sed illae in quibus exprimitur – 221r, 18 aut uerentis ut pro pudor et reliqua). The paper brings a new witness, the ms. Berlin, Staatsbibliothek, Diez. B Sant. 66 (=B), sec. VIIIex. (pp. 124, 19 +ConLECTIONES VOCUM – 125, 17 ut pro pudor et reliqua), provides a new critical text of the chapter with an Italian translation, analyses connections with the Anonymus ad Cuimnanum in <XXV> De interiectione and examines sources and parallels for the chapter.
Greek language and literature. Latin language and literature, History of the Greco-Roman World
Wav2vec 2.0 is a recently proposed self-supervised framework for speech representation learning. It follows a two-stage training process of pre-training and fine-tuning, and performs well in speech recognition tasks, especially in ultra-low-resource cases. In this work, we attempt to extend the self-supervised framework to speaker verification and language identification. First, we use some preliminary experiments to indicate that wav2vec 2.0 can capture information about the speaker and language. Then we demonstrate the effectiveness of wav2vec 2.0 on the two tasks respectively. For speaker verification, we obtain a new state-of-the-art result, an Equal Error Rate (EER) of 3.61% on the VoxCeleb1 dataset. For language identification, we obtain an EER of 12.02% on the 1-second condition and an EER of 3.47% on the full-length condition of the AP17-OLR dataset. Finally, we utilize one model to achieve unified modeling of the two tasks through multi-task learning.
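The EER reported above is the operating point where the false-acceptance and false-rejection rates coincide. A minimal threshold-sweep sketch:

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Compute EER: the error rate at the threshold where the false accept
    rate (FAR) equals the false reject rate (FRR).

    scores: similarity scores, higher means more likely a target trial.
    labels: 1 for target trials (same speaker/language), 0 for non-target.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_gap, eer = np.inf, 1.0
    for t in np.sort(np.unique(scores)):         # sweep candidate thresholds
        far = np.mean(scores[labels == 0] >= t)  # non-targets accepted
        frr = np.mean(scores[labels == 1] < t)   # targets rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Well-separated scores give an EER of zero on this toy data.
scores = [0.9, 0.8, 0.75, 0.2, 0.3, 0.1]
labels = [1, 1, 1, 0, 0, 0]
print(equal_error_rate(scores, labels))  # 0.0
```

Production toolkits interpolate on the ROC curve rather than sweeping observed scores, but the definition is the same.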
Liqun Shao, Sahitya Mantravadi, Tom Manzini
et al.
In this paper, we detail novel strategies for interpolating personalized language models and methods for handling out-of-vocabulary (OOV) tokens to improve personalized language models. Using publicly available data from Reddit, we demonstrate improvements in offline metrics at the user level by interpolating a global LSTM-based authoring model with a user-personalized n-gram model. By optimizing this approach with a back-off to a uniform OOV penalty and tuning the interpolation coefficient, we observe that over 80% of users receive a lift in perplexity, with an average perplexity lift of 5.2% per user. In doing this research we extend previous work in building NLIs and improve the robustness of metrics for downstream tasks.
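The core interpolation can be sketched as a linear mixture with an OOV back-off; the dictionaries below are hypothetical stand-ins for the LSTM and n-gram models, not the paper's actual implementation:

```python
import math

def interpolated_prob(token, p_global, p_user, lam=0.5, oov_penalty=1e-6):
    """Linear interpolation of a global model with a personalized model.
    Tokens missing from either model back off to a uniform OOV penalty."""
    g = p_global.get(token, oov_penalty)
    u = p_user.get(token, oov_penalty)
    return lam * g + (1 - lam) * u

def perplexity(tokens, p_global, p_user, lam=0.5):
    """Perplexity of a token stream under the interpolated model."""
    logp = sum(math.log(interpolated_prob(t, p_global, p_user, lam))
               for t in tokens)
    return math.exp(-logp / len(tokens))

# Toy user who says "meow" far more often than the global model expects.
p_global = {"the": 0.5, "cat": 0.25, "meow": 0.05}
p_user = {"meow": 0.6, "cat": 0.2}
tokens = ["meow", "cat", "meow"]
print(perplexity(tokens, p_global, p_user, lam=1.0))  # global model only
print(perplexity(tokens, p_global, p_user, lam=0.5))  # interpolated: lower
```

When a user's vocabulary diverges from the global distribution, mixing in the personalized model lowers perplexity on that user's text, which is the per-user "perplexity lift" the abstract reports.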