Results for "Greek language and literature. Latin language and literature"

Showing 20 of ~2,863,454 results · from DOAJ, CrossRef, arXiv, Semantic Scholar

S2 Open Access 2026
Latin literature

Anke Walter

Introducing newcomers to Latin literature and its history is an important aim, which Laurel Fulkerson and Jeffrey Tatum achieve amazingly well in their ambitious history of Latin literature ‘from its beginnings to the age of Augustus’.1 They provide a thoughtful and exciting introduction to the key genres and texts up to and including the Augustan age: the beginnings of Latin literature; Republican drama; oratory and rhetoric; the ‘personal voice’ in satire, Catullus’ poetry, and Cicero’s letters; didactic literature; history and biography; Augustan love poetry; Augustan epic; and the Augustan ‘personal’ poetry of Vergil’s Eclogues, the works of Horace, and Ovid’s exile poetry. The writing is lively and clearly conveys the authors’ passion. I particularly liked the fact that the chapters include discussion of individual lines and phrases, to give readers an idea of the sound, rhythm, and style of the language. Another thread that runs through the volume is the way Latin literature developed through a dialogue with Greek texts, and how later authors kept shaping it in reaction to both their Greek and Latin predecessors. A number of useful ‘sidebars’ (though these appear only at the end of each chapter) provide introductions to basic concepts such as Roman nomenclature, Latin metre, slavery in Rome, Callimachus’ Aetia, or the civil wars. These are followed by recommendations for ‘further readings’, covering both primary texts in English translation and some key secondary literature, and commendably containing a section of a few crucial works in languages other than English. A timeline of historical events and the lives of key Roman authors, maps, and a glossary provide further orientation for readers with no prior knowledge. The only aspect that I thought should have received a bit more attention is the transmission of Latin literature and the role of textual criticism, which would have provided more background, e.g.
for the discussion of an important textual variant in the proem of Ovid’s Metamorphoses. Otherwise, I very much enjoyed this lucid and intriguing account of Latin literature up to the age of Augustus and hope that it will reach many newcomers as well as students of Latin – and that Fulkerson and Tatum, or others, will soon undertake the task of writing a follow-up volume on imperial Latin literature.

S2 Open Access 2026
The language of medicine today: English as the new Latin - benefits and challenges

Aleksandar Vuletic, Natasa Selmic

The language of medicine constitutes a specialized register characterized by precision, distinctive functional elements, and historical continuity. Rooted in Latin and Greek, medical terminology has long served as the foundation of stable cross-linguistic communication. For centuries, Latin functioned as the lingua franca of medical education, scholarship, and clinical practice, before gradually being replaced by vernacular languages. After World War II, English emerged as the dominant language of medicine, supported by the geopolitical influence of Anglophone countries, the rise of international organizations, and the globalization of medical publishing and education. The aim of this paper is to critically examine the establishment of English as the new Latin in global medical communication, highlighting both the benefits and challenges of this phenomenon. The primary benefits include universality of communication, standardized terminology and education, facilitated access to scientific literature, international collaboration, efficiency in crisis situations, as well as increased visibility and impact of scholarly research. Conversely, the challenges entail linguistic inequality, obstacles for non-native speakers, loss of linguistic and cultural diversity, bias in research dissemination, and limited accessibility for patients. Undoubtedly, medical English has become the lingua franca of the international health care community in the 21st century. Yet concerted efforts are required to ensure professional inclusivity, preserve linguistic diversity, and establish a balance between the principles of efficiency and equity in future global medical communication.

S2 Open Access 2026
LINGUISTIC DETERMINANTS OF PROFESSIONAL COMPETENCE OF FUTURE MEDICAL PROFESSIONALS: INTEGRATION OF LATIN AND ENGLISH

N. Hantimurova, I. Vorona

The study is devoted to analyzing the systemic role of Latin and English in the formation of the comprehensive professional competence of higher medical education students (medical doctors, pharmacists, dentists, and paramedics). In the context of the dynamic development of global healthcare and the critical expansion of English-language scientific literature, the professional readiness of medical specialists extends far beyond subject-specific knowledge and requires mastery of medical terminology and intercultural communication skills. It has been established that Latin and medical terminology constitute an indispensable foundation for the development of cognitive competence, ensuring terminological literacy (over 75% of medical terms have Greek and Latin origins) and the international standardization of medical nomenclature. This constitutes the key to understanding approximately 500,000 medical terms and the correct compilation of prescriptions. Meanwhile, English for Specific Purposes serves as the primary tool for developing social and communicative competence by providing access to up-to-date scientific sources, facilitating international interaction with colleagues and patients, and supporting professional identity through the simulation of authentic clinical situations. The authors

S2 Open Access 2025
Does Studying Latin Make Pupils Smarter? Presenting the Field of Classical Language Impact Studies

A. Vereeck, M. Janse, Katja De Herdt et al.

Abstract: The study of Latin and/or Ancient Greek is said to have a wide array of cognitive and non-cognitive benefits, from language aptitude to cultural awareness, from reasoning ability to self-discipline, et cetera. These presumed benefits are frequently mentioned as arguments in favor of studying classical languages in school. What is less well known is the existence of an extensive empirical research literature on this topic. For the first time, we present here the field of classical language impact studies, a part of the history of classical scholarship which has not yet been recognized as such. We take stock of this fascinating field that above all came into being because classicists sought to defend their discipline and its place within education. After a general introduction to classical language impact studies and its characteristics, we devote most of this contribution to American research on cognitive impact, from the very beginning in the early 1900s until the present day. The findings and the methods by which they were arrived at are thoroughly discussed and contextualized, both from a historical and a cognitive-psychological viewpoint. We conclude that more methodologically refined studies will be necessary to answer the field’s pressing research questions.

arXiv Open Access 2025
From Alignment to Advancement: Bootstrapping Audio-Language Alignment with Synthetic Data

Chun-Yi Kuan, Hung-yi Lee

Audio-aware large language models (ALLMs) have recently made great strides in understanding and processing audio inputs. These models are typically adapted from text-based large language models (LLMs) through additional training on audio-related tasks. This adaptation process presents two major limitations. First, ALLMs often suffer from catastrophic forgetting, where crucial textual capabilities like instruction-following are lost after training on audio data. In some cases, models may even hallucinate sounds that are not present in the input audio, raising concerns about reliability. Second, achieving cross-modal alignment between audio and language typically relies on large collections of task-specific question-answer pairs for instruction tuning, making it resource-intensive. To address these issues, previous works have leveraged the backbone LLMs to synthesize general-purpose, caption-style alignment data. In this paper, we propose a data generation framework that produces contrastive-like training data, designed to enhance ALLMs' ability to differentiate between present and absent sounds. We further extend our approach to multi-audio scenarios, enabling the model to either explain differences between audio inputs or produce unified captions that describe all inputs, thereby enhancing audio-language alignment. We refer to the entire ALLM training framework as bootstrapping audio-language alignment via synthetic data generation from backbone LLMs (BALSa). Experimental results indicate that our method effectively mitigates audio hallucinations while reliably maintaining strong performance on audio understanding and reasoning benchmarks, as well as instruction-following skills. Moreover, incorporating multi-audio training further enhances the model's comprehension and reasoning capabilities. Overall, BALSa offers an efficient and scalable approach to developing ALLMs.

en eess.AS, cs.AI
arXiv Open Access 2025
Retrospex: Language Agent Meets Offline Reinforcement Learning Critic

Yufei Xiang, Yiqun Shen, Yeqin Zhang et al.

Large Language Models (LLMs) possess extensive knowledge and commonsense reasoning capabilities, making them valuable for creating powerful agents. However, existing LLM agent frameworks have not fully utilized past experiences for improvement. This work introduces a new LLM-based agent framework called Retrospex, which addresses this challenge by analyzing past experiences in depth. Unlike previous approaches, Retrospex does not directly integrate experiences into the LLM's context. Instead, it combines the LLM's action likelihood with action values estimated by a Reinforcement Learning (RL) Critic, which is trained on past experiences through an offline ''retrospection'' process. Additionally, Retrospex employs a dynamic action rescoring mechanism that increases the importance of experience-based values for tasks that require more interaction with the environment. We evaluate Retrospex in ScienceWorld, ALFWorld and Webshop environments, demonstrating its advantages over strong, contemporary baselines.

en cs.CL, cs.AI
arXiv Open Access 2025
Detecting Latin in Historical Books with Large Language Models: A Multimodal Benchmark

Yu Wu, Ke Shu, Jonas Fischer et al.

This paper presents a novel task of extracting low-resourced and noisy Latin fragments from mixed-language historical documents with varied layouts. We benchmark and evaluate the performance of large foundation models against a multimodal dataset of 724 annotated pages. The results demonstrate that reliable Latin detection with contemporary zero-shot models is achievable, yet these models lack a functional comprehension of Latin. This study establishes a comprehensive baseline for processing Latin within mixed-language corpora, supporting quantitative analysis in intellectual history and historical linguistics. Both the dataset and code are available at https://github.com/COMHIS/EACL26-detect-latin.

en cs.CL, cs.AI
arXiv Open Access 2025
Redefining technology for indigenous languages

Silvia Fernandez-Sabido, Laura Peniche-Sabido

In this paper, we offer an overview of indigenous languages, identifying the causes of their devaluation and the need for legislation on language rights. We review the technologies used to revitalize these languages, finding that when they come from outside, they often have the opposite effect to what they seek; however, when developed from within communities, they become powerful instruments of expression. We propose that the inclusion of Indigenous knowledge in large language models (LLMs) will enrich the technological landscape, but must be done in a participatory environment that encourages the exchange of knowledge.

en cs.CY, cs.AI
S2 Open Access 2025
REVIEW OF THE BOOK: A Textbook of Ancient Greek by Marina N. Slaviatinskaya. 3d ed., corrected and amended. Moscow, FLINTA Publ., 2022. 732 p.

O. Saveljeva

The review examines the Textbook of the Ancient Greek Language by Marina N. Slavyatinskaya (2022), a valuable compendium that serves both as a proper methodological guide to studying Ancient Greek and as an educational source containing a large amount of information on many areas related to Greek: the history of the Greek language, its place in comparative historical linguistics and its significance for the development of that science, the dialect picture of the ancient period, Greek literature, the comparison of the Greek and Latin languages and of Greco-Roman literature, and Greek-Slavic relations and their importance in the development of the Church Slavonic language and church literature. The range of directions and topics corresponds to the concept of the textbook: it is essentially a work of didactics whose main task is mastery of the Greek language and its grammar, necessarily in connection with the history of society. The book has interdisciplinary significance and can be productively applied in the study of Greek across all humanities specialties.

S2 Open Access 2025
Claudel et la constellation des langues bibliques. Latin, grec, hébreu, araméen

Jean-François Poisson-Gueffier

The Claudelian conception of biblical languages is based on a paradox. Latin is the greatest; Greek seems to him inferior to the brilliance of its literature; Hebrew remains unknown to him, though he vaguely perceives its fulguration beneath the Latin of Saint Jerome. He includes Hebrew in a vast project of unification of languages aimed at reducing the gap between them, so as to consider them within the same flow of meaning. For a biblical language, whether that of the Vulgate, the Septuagint, or the Tanakh, is not a closed linguistic system but takes its place in a vast network of correspondences, parallelisms, and convergences, since the truth of a language is not grammatical but symbolic.

arXiv Open Access 2024
SpeechPrompt: Prompting Speech Language Models for Speech Processing Tasks

Kai-Wei Chang, Haibin Wu, Yu-Kai Wang et al.

Prompting has become a practical method for utilizing pre-trained language models (LMs). This approach offers several advantages. It allows an LM to adapt to new tasks with minimal training and parameter updates, thus achieving efficiency in both storage and computation. Additionally, prompting modifies only the LM's inputs and harnesses the generative capabilities of language models to address various downstream tasks in a unified manner. This significantly reduces the need for human labor in designing task-specific models. These advantages become even more evident as the number of tasks served by the LM scales up. Motivated by the strengths of prompting, we are the first to explore the potential of prompting speech LMs in the domain of speech processing. Recently, there has been a growing interest in converting speech into discrete units for language modeling. Our pioneer research demonstrates that these quantized speech units are highly versatile within our unified prompting framework. Not only can they serve as class labels, but they also contain rich phonetic information that can be re-synthesized back into speech signals for speech generation tasks. Specifically, we reformulate speech processing tasks into speech-to-unit generation tasks. As a result, we can seamlessly integrate tasks such as speech classification, sequence generation, and speech generation within a single, unified prompting framework. The experiment results show that the prompting method can achieve competitive performance compared to the strong fine-tuning method based on self-supervised learning models with a similar number of trainable parameters. The prompting method also shows promising results in the few-shot setting. Moreover, with the advanced speech LMs coming into the stage, the proposed prompting framework attains great potential.

en eess.AS, cs.AI
arXiv Open Access 2024
Fast Vocabulary Transfer for Language Model Compression

Leonidas Gee, Andrea Zugarini, Leonardo Rigutini et al.

Real-world business applications require a trade-off between language model performance and size. We propose a new method for model compression that relies on vocabulary transfer. We evaluate the method on various vertical domains and downstream tasks. Our results indicate that vocabulary transfer can be effectively used in combination with other compression techniques, yielding a significant reduction in model size and inference time while marginally compromising on performance.

en cs.CL, cs.AI
arXiv Open Access 2024
THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models

Mengfei Liang, Archish Arun, Zekun Wu et al.

Hallucination, the generation of factually incorrect content, is a growing challenge in Large Language Models (LLMs). Existing detection and mitigation methods are often isolated and insufficient for domain-specific needs, lacking a standardized pipeline. This paper introduces THaMES (Tool for Hallucination Mitigations and EvaluationS), an integrated framework and library addressing this gap. THaMES offers an end-to-end solution for evaluating and mitigating hallucinations in LLMs, featuring automated test set generation, multifaceted benchmarking, and adaptable mitigation strategies. It automates test set creation from any corpus, ensuring high data quality, diversity, and cost-efficiency through techniques like batch processing, weighted sampling, and counterfactual validation. THaMES assesses a model's ability to detect and reduce hallucinations across various tasks, including text generation and binary classification, applying optimal mitigation strategies like In-Context Learning (ICL), Retrieval Augmented Generation (RAG), and Parameter-Efficient Fine-tuning (PEFT). Evaluations of state-of-the-art LLMs using a knowledge base of academic papers, political news, and Wikipedia reveal that commercial models like GPT-4o benefit more from RAG than ICL, while open-weight models like Llama-3.1-8B-Instruct and Mistral-Nemo gain more from ICL. Additionally, PEFT significantly enhances the performance of Llama-3.1-8B-Instruct in both evaluation tasks.

en cs.CL
arXiv Open Access 2024
Mapping 'when'-clauses in Latin American and Caribbean languages: an experiment in subtoken-based typology

Nilo Pedrazzini

Languages can encode temporal subordination lexically, via subordinating conjunctions, and morphologically, by marking the relation on the predicate. Systematic cross-linguistic variation among the former can be studied using well-established token-based typological approaches to token-aligned parallel corpora. Variation among different morphological means is instead much harder to tackle and therefore more poorly understood, despite being predominant in several language groups. This paper explores variation in the expression of generic temporal subordination ('when'-clauses) among the languages of Latin America and the Caribbean, where morphological marking is particularly common. It presents probabilistic semantic maps computed on the basis of the languages of the region, thus avoiding bias towards the many of the world's languages that exclusively use lexified connectors, and incorporating associations between character $n$-grams and English $when$. The approach allows capturing morphological clause-linkage devices in addition to lexified connectors, paving the way for larger-scale, strategy-agnostic analyses of typological variation in temporal subordination.
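The n-gram/'when' association the abstract describes can be illustrated with a minimal sketch. This is purely hypothetical code, not the paper's pipeline: it scores each character n-gram of a target language by its pointwise mutual information with sentences whose English parallel contains "when"; the toy sentence pairs and the function names are invented for illustration.

```python
from collections import Counter
from math import log2

def char_ngrams(text, n=3):
    """Character n-grams of a sentence, with '#' padding at the edges."""
    text = f"#{text}#"
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def when_associations(pairs, n=3):
    """pairs: list of (english_sentence, target_sentence) tuples.
    Returns PMI of each target-language character n-gram with the
    presence of 'when' in the aligned English sentence."""
    total = len(pairs)
    has_when = [("when" in en.lower().split()) for en, _ in pairs]
    when_count = sum(has_when)
    ngram_count = Counter()   # sentences containing the n-gram
    joint_count = Counter()   # ... and aligned to a 'when'-sentence
    for (en, tgt), w in zip(pairs, has_when):
        grams = char_ngrams(tgt, n)
        ngram_count.update(grams)
        if w:
            joint_count.update(grams)
    scores = {}
    for g, joint in joint_count.items():
        p_joint = joint / total
        p_gram = ngram_count[g] / total
        p_when = when_count / total
        scores[g] = log2(p_joint / (p_gram * p_when))
    return scores
```

An n-gram that co-occurs with English "when" more often than chance gets a positive score, which is the kind of signal a subtoken-based approach can exploit even when the target language marks 'when'-clauses morphologically rather than with a dedicated conjunction.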

en cs.CL, cs.IR
S2 Open Access 2024
TEACHING ENGLISH LITERATURE IN EFL CLASSROOM AS THE STRENGTHENING OF LANGUAGE USE: FROM ANCIENT PEDAGOGY TO MODERN ACADEMY

Dodi Oktariza

The paper discusses the teaching of English in the EFL classroom, generally seen as a routine activity, alongside the teaching of English literature as a rich source of authentic material for English language teaching itself and as a means of strengthening English language use. For many years, English literature has been taught at the secondary and tertiary levels, including at university. However, teaching English literature has not yet been given much emphasis in terms of an appropriate methodology, since it is still considered one of the most difficult subjects to teach. Generally, two methodologies are most discussed by experts: ancient/traditional pedagogy and the modern academy. The traditional approach has its roots in the ancient pedagogy of classical language instruction, in which students merely mimic and parrot their teachers' knowledge; such pedagogy was in fact successful in beginning Latin and Greek classes. By contrast, we do not suggest that students learn only from teachers who freely transmit their knowledge and understanding; rather, we mean that pedagogy frames course content, that different frames invite different kinds of understanding of that content, and that this characterizes the modern academy. Ultimately, central to teaching literature in the classroom is giving students the right to engage freely with their own experiences and letting them observe literature as part of their lives more closely.

S2 Open Access 2024
On the history of classical studies in the Imperial Kazan University: the department of Roman literature in the 1880s — 1890s

Natalia Almazova

The article deals with the history of classical studies at the Imperial Kazan' University in the 1880s-1890s, which was connected with the activities of two professors at the Chair of Roman Literature, Darius Naguevskiy (1845-1918) and Stanislaw Opatskiy (1847-?, after 1900). According to students of the history of classical research at the Kazan' University in the late 19th century, studies in Roman history and literature were represented there in this period on a considerably smaller scale than those of Ancient Greek history, language, and literature. One should ask why the Kazan' University failed to shape a specific tradition of studying Roman antiquity, in spite of employing prominent Romanists (N.M. Blagoveshchenskiy, V.I. Modestov) throughout the 19th century. Besides, the Ministry of Public Enlightenment was obviously interested in promoting the courses of the Chair of Roman Literature, as it appointed to it, within a brief interval, two classical philologists specialized in Latin poetry and prose: Dr. Lit. D. Naguevskiy in 1883 and M. Lit. S. Opatskiy in 1885. The article attempts to answer this question taking into account both the essential and the personal factors of the problem, the latter being the individual features of the two professors and their conflicts.

DOAJ Open Access 2023
Hore si quod sanguinem minuare debes: un horario semanal para la sangría (con un apéndice sobre el término iouius, ‘jueves’)

Arsenio Ferraces Rodríguez

Critical edition, translation, and commentary of a weekly calendar indicating the favorable hours for performing bloodletting on each day of the week. An appendix offers an explanation of the term iouius, which must be restored in the text and which until now was not attested as a name for Thursday in any known source.

History of the Greco-Roman World, Greek language and literature. Latin language and literature
DOAJ Open Access 2023
Agamennone e Oreste nell’Odissea: logiche narrative e tracce di committenza pisistratide

Elisabetta Pitotto

This article analyzes the mythological variants of the Atreid saga that coexist in the Odyssey. The versions in which Agamemnon's return and murder are presented (Od. XI 385-464 and XXIV 191-202) serve to depict his fate as the opposite of Odysseus' happy νόστος. The characterization of the figure of Orestes and the way his revenge is portrayed (Od. I 28-43 and III 192-316), stripped of its traditional traits and recast in a more political key, seem instead to relate to the needs of the Pisistratids, the likely commissioners of the poem's written recording.

History of the Greco-Roman World, Greek language and literature. Latin language and literature
arXiv Open Access 2023
A Zero-shot and Few-shot Study of Instruction-Finetuned Large Language Models Applied to Clinical and Biomedical Tasks

Yanis Labrak, Mickael Rouvier, Richard Dufour

We evaluate four state-of-the-art instruction-tuned large language models (LLMs) -- ChatGPT, Flan-T5 UL2, Tk-Instruct, and Alpaca -- on a set of 13 real-world clinical and biomedical natural language processing (NLP) tasks in English, such as named-entity recognition (NER), question-answering (QA), relation extraction (RE), etc. Our overall results demonstrate that the evaluated LLMs begin to approach performance of state-of-the-art models in zero- and few-shot scenarios for most tasks, and particularly well for the QA task, even though they have never seen examples from these tasks before. However, we observed that the classification and RE tasks perform below what can be achieved with a specifically trained model for the medical field, such as PubMedBERT. Finally, we noted that no LLM outperforms all the others on all the studied tasks, with some models being better suited for certain tasks than others.

en cs.CL, cs.AI

Page 2 of 143173