Results for "Greek language and literature. Latin language and literature"

Showing 20 of ~2,869,227 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

CrossRef Open Access 2026
RHYTHM AS A MEDIATING FACTOR IN EARLY LANGUAGE DEVELOPMENT: AN ACTION RESEARCH STUDY IN GREEK PRESCHOOL EDUCATION

Paraskevi Bika, Maria Argyriou

The research focuses on rhythm as a mediating factor in early language development within preschool education, with particular emphasis on rhythm-centred musical activities embedded in everyday classroom practice. Grounded in contemporary research on music cognition, rhythmic perception, and early language acquisition, the study explores how structured yet playful rhythmic engagement may support language-related behaviours, including speech rhythm, auditory responsiveness, sustained attention, and expressive participation in young children. Adopting an action research design, the study draws on data collected from preschool teachers and student teachers who implemented rhythm-based musical activities in public and private kindergarten settings in Greece. The research was carried out over a clearly defined period, with data collection taking place between 23 November and 13 January 2025. This timeframe allowed for the systematic implementation of rhythm-based activities and the documentation of educators’ observations within authentic preschool settings. Data sources included educators’ observational records, pedagogical documentation, and reflective accounts concerning the integration of rhythmic musical practices into daily classroom routines. The analysis foregrounds educators’ perspectives on children’s engagement and communicative behaviours, rather than aiming to establish causal effects. The findings suggest that repeated engagement with rhythm-focused musical activities is perceived to support key aspects of early language development, particularly sensitivity to speech rhythm, auditory discrimination, verbal expression, and attentional regulation. Rhythm emerges as a shared temporal and communicative framework linking musical and linguistic experience through embodied and socially interactive learning processes. 
By situating rhythm-centred musical practice within authentic preschool contexts, the study contributes to interdisciplinary discussions on music–language relationships and embodied, multimodal approaches to early learning, while offering practice-oriented insights for early childhood educators seeking inclusive and accessible pedagogical strategies for language-rich learning environments.

S2 Open Access 2026
New words and terms of the digital state

Danylo Krokhmalnyi

The article examines the terminological system of the digital state that emerged in Ukraine in the first decades of the 21st century and is associated with the digitization of public services aimed at facilitating administrative documentation processes. Diia is an ecosystem created by the Ministry of Digital Transformation of Ukraine that offers more than 70 online services for citizens and businesses. It is the first large-scale project related to the development of citizens’ digital literacy. Terms and their underlying concepts are of interest from the perspective of forming mechanisms of public governance and the penetration of reforms into various domains, including law, economics, education, science, culture, production, and everyday life. The relevance of the study is driven by the rapid transition of public administration, business, and education to electronic formats, which has led to the emergence of a large number of new nominations requiring systematization and linguistic analysis. It is emphasized that Ukraine has become a global leader in the field of document digitization, and the domestic model of a "state in a smartphone" is already being integrated into the digital systems of other countries (e.g., Estonia). The article focuses on the Diia.Education portal, where nearly one million Ukrainian learners have started their studies. Since this is a new system, it has generated new words and terms that require familiarization, classification, and commentary. The study outlines the main thematic groups of terms presented on the Diia.Education platform and analyzes their semantic content. The source base consists of electronic glossaries of the project containing over 900 units, most of which entered usage in the first decades of the 21st century and have not yet received comprehensive coverage in scholarly literature. 
Within the scope of the research, the terminology is classified into four major domains: digital literacy and infrastructure; slang and neologisms of digital communication; media literacy and information hygiene; and barrier-free communication. The linguistic specificity of the analyzed units is highlighted, in particular the dominance of Anglicisms and the active functioning of term elements of Greek and Latin origin (video-, digital-). Special attention is paid to distinguishing the concepts of digitization, digitalization, and digital transformation. The article concludes that the contemporary digital terminological system is in a stage of active formation, has an interdisciplinary character, and reflects global trends in technological development.

CrossRef Open Access 2025
The function of feedback in second language writing

Qing Huang

Feedback plays an important role in language learning. Feedback-seeking behavior (FSB) includes feedback monitoring and inquiry, and the diagnostic information obtained through FSB can help seekers improve their performance. Most previous studies have explored the factors that influence FSB, such as language mindsets and motivational factors. However, FSB itself has an important role to play in second language writing. Therefore, this study combines FSB with second language writing to investigate the following questions: 1. What is the role of FSB in second language writing? 2. How can FSB exert its influence in second language writing? The research selected 20 senior students from a science class to take part in a semi-structured interview and questionnaire. The results reveal that students monitor feedback unconditionally but inquire about feedback conditionally. Implications for L2 writing pedagogy are provided.

arXiv Open Access 2025
Large Language Models Meet Text-Attributed Graphs: A Survey of Integration Frameworks and Applications

Guangxin Su, Hanchen Wang, Jianwei Wang et al.

Large Language Models (LLMs) have achieved remarkable success in natural language processing through strong semantic understanding and generation. However, their black-box nature limits structured and multi-hop reasoning. In contrast, Text-Attributed Graphs (TAGs) provide explicit relational structures enriched with textual context, yet often lack semantic depth. Recent research shows that combining LLMs and TAGs yields complementary benefits: enhancing TAG representation learning and improving the reasoning and interpretability of LLMs. This survey provides the first systematic review of LLM–TAG integration from an orchestration perspective. We introduce a novel taxonomy covering two fundamental directions: LLM for TAG, where LLMs enrich graph-based tasks, and TAG for LLM, where structured graphs improve LLM reasoning. We categorize orchestration strategies into sequential, parallel, and multi-module frameworks, and discuss advances in TAG-specific pretraining, prompting, and parameter-efficient fine-tuning. Beyond methodology, we summarize empirical insights, curate available datasets, and highlight diverse applications across recommendation systems, biomedical analysis, and knowledge-intensive question answering. Finally, we outline open challenges and promising research directions, aiming to guide future work at the intersection of language and graph learning.

en cs.CL, cs.AI
arXiv Open Access 2025
Does Localization Inform Unlearning? A Rigorous Examination of Local Parameter Attribution for Knowledge Unlearning in Language Models

Hwiyeong Lee, Uiji Hwang, Hyelim Lim et al.

Large language models often retain unintended content, prompting growing interest in knowledge unlearning. Recent approaches emphasize localized unlearning, restricting parameter updates to specific regions in an effort to remove target knowledge while preserving unrelated general knowledge. However, their effectiveness remains uncertain due to the lack of robust and thorough evaluation of the trade-off between the competing goals of unlearning. In this paper, we begin by revisiting existing localized unlearning approaches. We then conduct controlled experiments to rigorously evaluate whether local parameter updates causally contribute to unlearning. Our findings reveal that the set of parameters that must be modified for effective unlearning is not strictly determined, challenging the core assumption of localized unlearning that parameter locality is inherently indicative of effective knowledge removal.

en cs.CL
arXiv Open Access 2025
La Leaderboard: A Large Language Model Leaderboard for Spanish Varieties and Languages of Spain and Latin America

María Grandury, Javier Aula-Blasco, Júlia Falcão et al.

Leaderboards showcase the current capabilities and limitations of Large Language Models (LLMs). To motivate the development of LLMs that represent the linguistic and cultural diversity of the Spanish-speaking community, we present La Leaderboard, the first open-source leaderboard to evaluate generative LLMs in languages and language varieties of Spain and Latin America. La Leaderboard is a community-driven project that aims to establish an evaluation standard for everyone interested in developing LLMs for the Spanish-speaking community. This initial version combines 66 datasets in Basque, Catalan, Galician, and different Spanish varieties, showcasing the evaluation results of 50 models. To encourage community-driven development of leaderboards in other languages, we explain our methodology, including guidance on selecting the most suitable evaluation setup for each downstream task. In particular, we provide a rationale for using fewer few-shot examples than typically found in the literature, aiming to reduce environmental impact and facilitate access to reproducible results for a broader research community.

DOAJ Open Access 2024
The Rivalry of Procopius of Caesarea and Antonina the Patrician

David Alan Parnell

Procopius of Caesarea traveled with the household of the general Belisarius for many years. If his Secret History is any indication, the historian gained a rich acquaintance with Belisarius’s formidable wife, Antonina. It is possible that the negative treatment of Antonina in the Secret History reflects a rivalry between her and Procopius. This competition becomes most clear when examining the moments in which Procopius becomes a participant in his own narrative of the History of the Wars, and especially in the attempt to resupply Rome (under siege by the Goths) from Naples in 537 AD. Although the historian portrays this moment, when Belisarius entrusted him with fetching reinforcements and supplies for the beleaguered Roman army, as his time to shine, Procopius was upstaged by Antonina. If there was a competition for influence with Belisarius, it seems to have been one that Antonina won handily. It is worth therefore examining the outrageous critiques of Antonina in the Secret History through the lens of a disappointed or even vengeful Procopius.

Ancient history, Greek language and literature. Latin language and literature
arXiv Open Access 2024
Why do objects have many names? A study on word informativeness in language use and lexical systems

Eleonora Gualdoni, Gemma Boleda

Human lexicons contain many different words that speakers can use to refer to the same object, e.g., "purple" or "magenta" for the same shade of color. On the one hand, studies on language use have explored how speakers adapt their referring expressions to successfully communicate in context, without focusing on properties of the lexical system. On the other hand, studies in language evolution have discussed how competing pressures for informativeness and simplicity shape lexical systems, without tackling in-context communication. We aim at bridging the gap between these traditions, and explore why a soft mapping between referents and words is a good solution for communication, by taking into account both in-context communication and the structure of the lexicon. We propose a simple measure of informativeness for words and lexical systems, grounded in a visual space, and analyze color naming data for English and Mandarin Chinese. We conclude that optimal lexical systems are those where multiple words can apply to the same referent, conveying different amounts of information. Such systems allow speakers to maximize communication accuracy and minimize the amount of information they convey when communicating about referents in contexts.

en cs.CL
arXiv Open Access 2024
Why We Build Local Large Language Models: An Observational Analysis from 35 Japanese and Multilingual LLMs

Koshiro Saito, Sakae Mizuki, Masanari Ohi et al.

Why do we build local large language models (LLMs)? What should a local LLM learn from the target language? Which abilities can be transferred from other languages? Do language-specific scaling laws exist? To explore these research questions, we evaluated 35 Japanese, English, and multilingual LLMs on 19 evaluation benchmarks for Japanese and English, taking Japanese as a local language. Adopting an observational approach, we analyzed correlations of benchmark scores, and conducted principal component analysis (PCA) on the scores to derive "ability factors" of local LLMs. We found that training on English text can improve the scores of academic subjects in Japanese (JMMLU). In addition, it is unnecessary to specifically train on Japanese text to enhance abilities for solving Japanese code generation, arithmetic reasoning, commonsense, and reading comprehension tasks. In contrast, training on Japanese text could improve question-answering tasks about Japanese knowledge and English-Japanese translation, which indicates that abilities for solving these two tasks can be regarded as "Japanese abilities" for LLMs. Furthermore, we confirmed that the Japanese abilities scale with the computational budget for Japanese text.

en cs.CL
arXiv Open Access 2024
Danoliteracy of Generative Large Language Models

Søren Vejlgaard Holm, Lars Kai Hansen, Martin Carsten Nielsen

The language technology moonshot moment of Generative Large Language Models (GLLMs) was not limited to English: These models brought a surge of technological applications, investments, and hype to low-resource languages as well. However, the capabilities of these models in languages such as Danish were, until recently, difficult to verify beyond qualitative demonstrations due to a lack of applicable evaluation corpora. We present a GLLM benchmark to evaluate Danoliteracy, a measure of Danish language and cultural competency across eight diverse scenarios such as Danish citizenship tests and abstractive social media question answering. This limited-size benchmark was found to produce a robust ranking that correlates to human feedback at ρ ≈ 0.8, with GPT-4 and Claude Opus models achieving the highest rankings. Analyzing these model results across scenarios, we find one strong underlying factor explaining 95% of scenario performance variance for GLLMs in Danish, suggesting a g factor of model consistency in language adaptation.

en cs.CL, cs.AI
arXiv Open Access 2024
3D-LEX v1.0: 3D Lexicons for American Sign Language and Sign Language of the Netherlands

Oline Ranum, Gomer Otterspeer, Jari I. Andersen et al.

In this work, we present an efficient approach for capturing sign language in 3D, introduce the 3D-LEX v1.0 dataset, and detail a method for semi-automatic annotation of phonetic properties. Our procedure integrates three motion capture techniques encompassing high-resolution 3D poses, 3D handshapes, and depth-aware facial features, and attains an average sampling rate of one sign every 10 seconds. This includes the time for presenting a sign example, performing and recording the sign, and archiving the capture. The 3D-LEX dataset includes 1,000 signs from American Sign Language and an additional 1,000 signs from the Sign Language of the Netherlands. We showcase the dataset utility by presenting a simple method for generating handshape annotations directly from 3D-LEX. We produce handshape labels for 1,000 signs from American Sign Language and evaluate the labels in a sign recognition task. The labels enhance gloss recognition accuracy by 5% over using no handshape annotations, and by 1% over expert annotations. Our motion capture data supports in-depth analysis of sign features and facilitates the generation of 2D projections from any viewpoint. The 3D-LEX collection has been aligned with existing sign language benchmarks and linguistic resources, to support studies in 3D-aware sign language processing.

en cs.CV, cs.AI
arXiv Open Access 2024
SPRING Lab IITM's submission to Low Resource Indic Language Translation Shared Task

Hamees Sayed, Advait Joglekar, Srinivasan Umesh

We develop a robust translation model for four low-resource Indic languages: Khasi, Mizo, Manipuri, and Assamese. Our approach includes a comprehensive pipeline from data collection and preprocessing to training and evaluation, leveraging data from WMT task datasets, BPCC, PMIndia, and OpenLanguageData. To address the scarcity of bilingual data, we use back-translation techniques on monolingual datasets for Mizo and Khasi, significantly expanding our training corpus. We fine-tune the pre-trained NLLB 3.3B model for Assamese, Mizo, and Manipuri, achieving improved performance over the baseline. For Khasi, which is not supported by the NLLB model, we introduce special tokens and train the model on our Khasi corpus. Our training involves masked language modelling, followed by fine-tuning for English-to-Indic and Indic-to-English translations.
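The back-translation step described above pairs monolingual target-language sentences with machine-generated English sources to create synthetic training data. A minimal sketch, where `translate_to_english` is a hypothetical stand-in for a real reverse translation model (the actual system would call the NLLB checkpoint):

```python
def translate_to_english(sentence: str) -> str:
    # Placeholder for an Indic-to-English translation model; a real
    # pipeline would invoke the fine-tuned NLLB model here.
    return f"<en>{sentence}</en>"

def back_translate(monolingual_target: list[str]) -> list[tuple[str, str]]:
    """Pair each target-language sentence with a machine-generated
    English source, yielding synthetic (source, target) training pairs."""
    return [(translate_to_english(s), s) for s in monolingual_target]

# Hypothetical monolingual Mizo corpus (placeholders, not real sentences).
corpus = ["mizo sentence one", "mizo sentence two"]
pairs = back_translate(corpus)
print(len(pairs))  # one synthetic pair per monolingual sentence
```

The synthetic pairs are then mixed with the genuine bilingual data before fine-tuning, which is how the authors expand the scarce Mizo and Khasi corpora.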

en cs.CL, cs.AI
arXiv Open Access 2024
Towards unearthing neglected climate innovations from scientific literature using Large Language Models

César Quilodrán-Casas, Christopher Waite, Nicole Alhadeff et al.

Climate change poses an urgent global threat, needing the rapid identification and deployment of innovative solutions. We hypothesise that many of these solutions already exist within scientific literature but remain underutilised. To address this gap, this study employs a curated dataset sourced from OpenAlex, a comprehensive repository of scientific papers. Utilising Large Language Models (LLMs), such as OpenAI's GPT-4o, we evaluate title-abstract pairs from scientific papers on seven dimensions, covering climate change mitigation potential, stage of technological development, and readiness for deployment. The outputs of the language models are then compared with human evaluations to assess their effectiveness in identifying promising yet overlooked climate innovations. Our findings suggest that these LLM-based models can effectively augment human expertise, uncovering potentially impactful climate solutions with far greater speed, throughput, and consistency. Here, we focused on UK-based solutions, but the workflow is region-agnostic. This work contributes to the discovery of neglected innovations in scientific literature and demonstrates the potential of AI in enhancing climate action strategies.

en cs.IR, cs.AI
S2 Open Access 2023
Why Plato needs psychology. Proposal for a theoretical framework underpinning research on the cognitive transfer effects of studying classical languages

A. Vereeck, M. Janse, Katja De Herdt et al.

Psychology is one of the seven hub sciences, which involves great responsibility for psychologists but also great opportunities for both psychologists and other scholars; that was the theme of the 17th European Congress of Psychology organized by the Slovenian Psychologists’ Association. This article contains a detailed example of how psychology functions as a hub science today. The research topic finds its origin in the seemingly unrelated discipline of classics. Latin and Ancient Greek have been taught in Europe for centuries, and even today there are many pupils in secondary education who study them. This custom does not go uncriticized, as the classical languages are often perceived as irrelevant in the modern world. Classicists have therefore been forced, and continue to be forced, to defend the very existence of their discipline. One of the arguments they have adduced is that the study of classical languages has a beneficial impact on pupils’ linguistic and general cognitive abilities. This claim is closely related to the general issue of transfer of learning, which has long preoccupied philosophers and psychologists. The only way to verify such a claim is to resort to a psychological approach. This article presents the first fully elaborated theoretical framework for the cognitive impact of classical language education, which paves the way for sound and rigorous research on this topic. The framework starts from cognitive transfer as a central construct and goes on to combine insights from various psychological and non-psychological literatures. As such, a fruitful interaction comes about: Not only does psychology contribute to classical language impact research, the latter will also enrich cognitive psychology and psycholinguistics by broaching new terrain.

2 citations en
S2 Open Access 2023
The Wicked Stepmother

Emilie van Opstall

Abstract The story of Syntipas/the Seven Sages travelled as an international bestseller through the Near East and Europe, from the Middle Ages up to Early Modernity, and was adapted by its translators to each new context. Belonging to the genre of wisdom literature, it circulated in over thirty languages and under various titles. This article addresses processes of creative adaptation in different cultural contexts by comparing two early versions, the Book of Syntipas the Philosopher in Greek from the eleventh century and Dolopathos in Latin from the twelfth century. By way of case study, it offers an analysis of the ‘bedroom scene’ in both versions and discusses the different ways in which female agency is dealt with.

arXiv Open Access 2023
Abstract Visual Reasoning Enabled by Language

Giacomo Camposampiero, Loic Houmard, Benjamin Estermann et al.

While artificial intelligence (AI) models have achieved human or even superhuman performance in many well-defined applications, they still struggle to show signs of broad and flexible intelligence. The Abstraction and Reasoning Corpus (ARC), a visual intelligence benchmark introduced by François Chollet, aims to assess how close AI systems are to human-like cognitive abilities. Most current approaches rely on carefully handcrafted domain-specific program searches to brute-force solutions for the tasks present in ARC. In this work, we propose a general learning-based framework for solving ARC. It is centered on transforming tasks from the vision to the language domain. This composition of language and vision allows for pre-trained models to be leveraged at each stage, enabling a shift from handcrafted priors towards the learned priors of the models. While not yet beating state-of-the-art models on ARC, we demonstrate the potential of our approach, for instance, by solving some ARC tasks that have not been solved previously.

en cs.AI, cs.CL
arXiv Open Access 2023
Hierarchical Prompting Assists Large Language Model on Web Navigation

Abishek Sridhar, Robert Lo, Frank F. Xu et al.

Large language models (LLMs) struggle to process complicated observations in interactive decision-making tasks. To alleviate this issue, we propose a simple hierarchical prompting approach. Diverging from previous prompting approaches that always put the full observation (e.g. a web page) into the prompt, we propose to first construct an action-aware observation, which is more condensed and relevant, with a dedicated SUMMARIZER prompt. The ACTOR prompt then predicts the next action based on the summarized observation. While our method has broad applicability, we particularly demonstrate its efficacy in the complex domain of web navigation, where a full observation often contains redundant and irrelevant information. Our approach outperforms the previous state-of-the-art prompting mechanisms by 6.2% on task success rate, demonstrating its potential on interactive decision-making tasks with long observation traces.
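The two-stage SUMMARIZER/ACTOR pipeline can be sketched as follows. The `llm` function is a canned stub so the sketch is runnable; all prompt wording and the example page are hypothetical, not taken from the paper:

```python
def llm(prompt: str) -> str:
    # Stub for a real LLM API call; returns fixed responses so the
    # sketch runs offline. Swap in an actual model client in practice.
    if prompt.startswith("SUMMARIZER"):
        return "A search box and a 'Submit' button are visible."
    return "CLICK Submit"

def hierarchical_prompt(raw_observation: str, goal: str) -> str:
    # Stage 1: condense the full observation into action-relevant state
    # instead of passing the entire page to the action model.
    summary = llm(f"SUMMARIZER: extract elements relevant to '{goal}' "
                  f"from this page:\n{raw_observation}")
    # Stage 2: predict the next action from the summary alone.
    return llm(f"ACTOR: given goal '{goal}' and state '{summary}', "
               f"choose the next action.")

action = hierarchical_prompt("<html>...long web page...</html>",
                             "submit the form")
print(action)
```

The design point is that the ACTOR never sees the raw page, only the SUMMARIZER's condensed state, which keeps the action prompt short even for very long observation traces.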

en cs.CL, cs.AI

Page 24 of 143,462