Results for "Greek language and literature. Latin language and literature"

Showing 20 of ~2,864,230 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

DOAJ Open Access 2025
Lettera di un re Seleuco a Erofanto

Francia, Riccardo

This inscription was recently discovered in Iran. The text records a letter attributed to a King Seleucus and addressed to an official named Herophantos. The document, probably dating to the reign of Seleucus II (246–226 BC) or Seleucus IV (187–175 BC), concerns fiscal concessions granted to a community involved in breeding war horses. The inscription, whose present location is unknown, was first published in 2012 by George Rougemont. Though fragmentary, the text is an important administrative testimony to Seleucid rule in the East, but uncertainties remain about its localization and the exact identity of the ruler.

Ancient history, Greek philology and language
arXiv Open Access 2025
Morpheme Induction for Emergent Language

Brendon Boldt, David Mortensen

We introduce CSAR, an algorithm for inducing morphemes from emergent language corpora of parallel utterances and meanings. It is a greedy algorithm that (1) weights morphemes based on mutual information between forms and meanings, (2) selects the highest-weighted pair, (3) removes it from the corpus, and (4) repeats the process to induce further morphemes (i.e., Count, Select, Ablate, Repeat). The effectiveness of CSAR is first validated on procedurally generated datasets and compared against baselines for related tasks. Second, we validate CSAR's performance on human language data to show that the algorithm makes reasonable predictions in adjacent domains. Finally, we analyze a handful of emergent languages, quantifying linguistic characteristics like degree of synonymy and polysemy.

en cs.CL
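The Count, Select, Ablate, Repeat loop described in the CSAR abstract can be sketched in a few lines of Python. The PMI weighting, tie-breaking, and stopping rule below are illustrative assumptions, not the authors' exact implementation:

```python
from collections import Counter
from math import log2

def csar(pairs, max_morphemes=10):
    """Count, Select, Ablate, Repeat: greedily induce (form, meaning)
    morpheme pairs from parallel utterances and meaning sets."""
    pairs = [(tuple(u), frozenset(m)) for u, m in pairs]
    induced = []
    for _ in range(max_morphemes):
        n = len(pairs)
        if n == 0:
            break
        # Count: co-occurrence statistics for every form token / meaning atom.
        forms, meanings, joint = Counter(), Counter(), Counter()
        for utt, mean in pairs:
            for f in set(utt):
                forms[f] += 1
                for m in mean:
                    joint[(f, m)] += 1
            for m in mean:
                meanings[m] += 1
        # Weight by pointwise mutual information between form and meaning.
        def pmi(fm):
            f, m = fm
            return log2(joint[fm] * n / (forms[f] * meanings[m]))
        # Select: highest-weighted pair (ties broken by co-occurrence count).
        best = max(joint, key=lambda fm: (pmi(fm), joint[fm]))
        if pmi(best) <= 0:
            break
        induced.append(best)
        f, m = best
        # Ablate: remove the selected form and meaning from the corpus,
        # then Repeat on what is left.
        pairs = [(tuple(t for t in u if t != f), mn - {m})
                 for u, mn in pairs]
        pairs = [(u, mn) for u, mn in pairs if u and mn]
    return induced
```

On a toy corpus where each color word co-occurs with one meaning atom, the loop peels off the most informative pairs first.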
arXiv Open Access 2025
GenQuest: An LLM-based Text Adventure Game for Language Learners

Qiao Wang, Adnan Labib, Robert Swier et al.

GenQuest is a generative text adventure game that leverages Large Language Models (LLMs) to facilitate second language learning through immersive, interactive storytelling. The system engages English as a Foreign Language (EFL) learners in a collaborative "choose-your-own-adventure" style narrative, dynamically generated in response to learner choices. Game mechanics such as branching decision points and story milestones are incorporated to maintain narrative coherence while allowing learner-driven plot development. Key pedagogical features include content generation tailored to each learner's proficiency level, and a vocabulary assistant that provides in-context explanations of learner-queried text strings, ranging from words and phrases to sentences. Findings from a pilot study with university EFL students in China indicate promising vocabulary gains and positive user perceptions. We also discuss participants' suggestions regarding narrative length and quality, and their requests for multi-modal content such as illustrations.

en cs.CL, cs.AI
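The branching, proficiency-tailored generation described in the GenQuest abstract can be sketched as a single game-loop turn. The prompt wording and the `adventure_turn` helper below are illustrative assumptions, not GenQuest's actual prompts or API:

```python
def adventure_turn(generate, story_so_far, learner_choice, level):
    """One turn of a choose-your-own-adventure loop: ask the model to
    continue the story at the learner's proficiency level and to offer
    branching choices. `generate` stands in for any LLM call."""
    prompt = (
        f"You are writing a text adventure for a CEFR {level} English learner. "
        f"Story so far:\n{story_so_far}\n"
        f"The learner chose: {learner_choice}\n"
        "Continue the story in 2-3 short sentences, then list choices "
        "A) and B) for what to do next."
    )
    return generate(prompt)
```

Any callable that takes a prompt string and returns text can be plugged in as `generate`, which keeps the game loop testable without a live model.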
arXiv Open Access 2024
Türkçe Dil Modellerinin Performans Karşılaştırması / Performance Comparison of Turkish Language Models

Eren Dogan, M. Egemen Uzun, Atahan Uz et al.

The capabilities that language models now provide across almost all kinds of tasks have attracted the attention not only of researchers but also of society, and have turned these models into products. Commercially successful language models are available; however, users may prefer open-source language models due to cost, data privacy, or regulation. Yet, despite the increasing number of such models, there is no comprehensive comparison of their performance for Turkish. This study aims to fill that gap in the literature. Seven selected language models are compared on their in-context learning and question-answering abilities. Turkish datasets for in-context learning and question answering were prepared, and both automatic and human evaluations were conducted. The results show that for question answering, continuing pretraining before fine-tuning with instruction datasets is more successful at adapting multilingual models to Turkish, and that in-context learning performance is not strongly related to question-answering performance.

en cs.CL, cs.AI
arXiv Open Access 2024
Application of GPT Language Models for Innovation in Activities in University Teaching

Manuel de Buenaga, Francisco Javier Bueno

GPT (Generative Pre-trained Transformer) language models are an artificial intelligence and natural language processing technology that enables automatic text generation. There is growing interest in applying GPT language models to university teaching along several dimensions. From the perspective of innovation in student and teacher activities, they can support content understanding and generation, problem-solving, personalization, and test correction, among other tasks. From the dimension of internationalization, the misuse of these models is a global problem that requires common measures across universities in different geographical areas. In several countries, assessment tools have been reviewed to ensure that work is done by students and not by AI. To this end, we conducted a detailed experiment in a representative Computer Science subject, Software Engineering, focused on evaluating ChatGPT as an assistant in theory activities, exercises, and laboratory practices, and assessing its potential as a support tool for both students and teachers.

en cs.CY, cs.AI
arXiv Open Access 2024
PRODIS -- a speech database and a phoneme-based language model for the study of predictability effects in Polish

Zofia Malisz, Jan Foremski, Małgorzata Kul

We present a speech database and a phoneme-level language model of Polish. The database and model are designed for the analysis of prosodic and discourse factors and their impact on acoustic parameters in interaction with predictability effects. The database is also the first large, publicly available Polish speech corpus of excellent acoustic quality that can be used for phonetic analysis and training of multi-speaker speech technology systems. The speech in the database is processed in a pipeline that achieves a 90% degree of automation. It incorporates state-of-the-art, freely available tools enabling database expansion or adaptation to additional languages.

en cs.CL, cs.SD
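As a rough illustration of how a phoneme-level language model yields predictability measures, a bigram model can assign per-phoneme surprisal. The add-alpha smoothing and boundary handling below are assumptions for illustration, not a description of the PRODIS model:

```python
from collections import Counter
from math import log2

def train_bigram(corpus):
    """Count phoneme unigrams and bigrams over a corpus of phoneme
    sequences, padding each word with a boundary marker '#'."""
    unigrams, bigrams = Counter(), Counter()
    for word in corpus:
        seq = ["#"] + list(word) + ["#"]
        unigrams.update(seq[:-1])
        bigrams.update(zip(seq[:-1], seq[1:]))
    return unigrams, bigrams

def surprisal(word, unigrams, bigrams, alpha=1.0):
    """Per-phoneme surprisal -log2 P(phone | previous phone),
    with add-alpha smoothing over the observed phoneme inventory."""
    vocab = len(unigrams) + 1  # observed symbols plus one unseen slot
    seq = ["#"] + list(word) + ["#"]
    out = []
    for prev, cur in zip(seq[:-1], seq[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab)
        out.append(-log2(p))
    return out
```

Words built from frequent transitions come out less surprising (more predictable) than unattested sequences, which is the quantity such predictability studies correlate with acoustic parameters.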
arXiv Open Access 2024
Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agents

Renxi Wang, Haonan Li, Xudong Han et al.

Large language models (LLMs) have achieved success in acting as agents that interact with environments through tools such as search engines. However, LLMs are optimized for language generation rather than tool use during training or alignment, limiting their effectiveness as agents. To resolve this problem, previous work first collected interaction trajectories between LLMs and environments, then used only the trajectories that successfully finished the task to fine-tune smaller models, making fine-tuning data scarce and acquiring it both difficult and costly. Discarding failed trajectories also wastes significant data and resources and limits the possible optimization paths during fine-tuning. In this paper, we argue that unsuccessful trajectories offer valuable insights, and that LLMs can learn from these trajectories through appropriate quality control and fine-tuning strategies. By simply adding a prefix or suffix that tells the model whether to generate a successful trajectory during training, we improve model performance by a large margin on mathematical reasoning, multi-hop question answering, and strategic question answering tasks. We further analyze the inference results and find that our method provides a better trade-off between valuable information and errors in unsuccessful trajectories. To our knowledge, we are the first to demonstrate the value of negative trajectories and their application in agent-tuning scenarios. Our findings offer guidance for developing better agent-tuning methods and low-resource data usage techniques.

en cs.CL
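The core idea of keeping failed trajectories and marking each one's outcome can be sketched as a small data-preparation step. The `[GOOD]`/`[BAD]` prefix tokens below are illustrative assumptions (the paper also considers suffixes), not the authors' exact format:

```python
def build_training_examples(trajectories,
                            success_prefix="[GOOD] ",
                            failure_prefix="[BAD] "):
    """Turn agent trajectories into fine-tuning examples, keeping failed
    trajectories but marking each example with an outcome prefix so the
    model learns to condition generation on trajectory quality."""
    examples = []
    for task, trajectory, succeeded in trajectories:
        prefix = success_prefix if succeeded else failure_prefix
        examples.append({"prompt": prefix + task, "completion": trajectory})
    return examples

def inference_prompt(task, success_prefix="[GOOD] "):
    """At inference time, always condition on the success marker so the
    model is asked to produce a successful trajectory."""
    return success_prefix + task
```

The design point is that failures are not discarded: they enter training with a distinguishing marker, and the marker is fixed to "success" at inference.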
arXiv Open Access 2024
MMNeuron: Discovering Neuron-Level Domain-Specific Interpretation in Multimodal Large Language Model

Jiahao Huo, Yibo Yan, Boren Hu et al.

Projecting visual features into the word embedding space has become a significant fusion strategy adopted by Multimodal Large Language Models (MLLMs). However, its internal mechanisms have yet to be explored. Inspired by multilingual research, we identify domain-specific neurons in multimodal large language models. Specifically, we investigate the distribution of domain-specific neurons and the mechanism by which MLLMs process features from diverse domains. Furthermore, we propose a three-stage mechanism for language model modules in MLLMs when handling projected image features, and verify this hypothesis using the logit lens. Extensive experiments indicate that while current MLLMs exhibit Visual Question Answering (VQA) capability, they may not fully utilize domain-specific information. Properly manipulating domain-specific neurons results in at most a 10% change in accuracy, shedding light on the development of cross-domain, all-encompassing MLLMs in the future. The source code is available at https://github.com/Z1zs/MMNeuron.
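As a rough sketch of how domain-specific neurons might be identified from activation statistics: a neuron can be called specific to a domain when that domain carries most of its activation mass. The concentration criterion and threshold below are assumptions for illustration, not necessarily the paper's method:

```python
import numpy as np

def domain_specific_neurons(activations, threshold=0.5):
    """Identify neurons whose activation mass concentrates on one domain.

    `activations` maps a domain name to an array of shape
    (num_samples, num_neurons). A neuron counts as specific to a domain
    when that domain carries more than `threshold` of the neuron's total
    mean absolute activation across domains.
    """
    domains = list(activations)
    # Mean |activation| per (domain, neuron).
    means = np.stack([np.abs(activations[d]).mean(axis=0) for d in domains])
    # Each domain's share of a neuron's total activation mass.
    share = means / (means.sum(axis=0, keepdims=True) + 1e-12)
    return {d: np.flatnonzero(share[i] > threshold).tolist()
            for i, d in enumerate(domains)}
```

A neuron that fires uniformly across domains never exceeds the threshold, while one that fires in a single domain is flagged for that domain.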

DOAJ Open Access 2023
Defectos latentes y vicios ocultos: dos problemas para la compraventa de esclavos en Roma

Martha Patricia Irigoyen Troconis

In any commercial transaction, determining the degree of responsibility of the parties involved is fundamental. In ancient Rome, this was legislated at the end of the Republican era in order to establish possible compensation for damages caused by the so-called "latent defects" and "hidden vices" in objects subject to purchase and sale, specifically slaves and cattle. This paper deals with the aedilician edict that legislated on the matter, as well as with the jurists' commentaries preserved in the Digest.

Greek language and literature. Latin language and literature
DOAJ Open Access 2023
Aristófanes, Riqueza, intr., vers. y nts. P. Cavallero (dir.), M. J. Coscolla, D. Frenkel y J. Gallego, Buenos Aires, Facultad de Filosofía y Letras, UBA (Textos & Estudios, 4), 2002, 275 págs.

Claudia N. Fernández

The present edition of Plutus is not a new Spanish translation of Aristophanes in the style of those usually offered by pocket editions, aimed at a broad readership largely unspecialized in ancient literature...

Greek language and literature. Latin language and literature
arXiv Open Access 2023
KGLens: Towards Efficient and Effective Knowledge Probing of Large Language Models with Knowledge Graphs

Shangshang Zheng, He Bai, Yizhe Zhang et al.

Large Language Models (LLMs) may hallucinate facts, while curated Knowledge Graphs (KGs) are typically factually reliable, especially for domain-specific knowledge. Measuring the alignment between KGs and LLMs can effectively probe factualness and identify the knowledge blind spots of LLMs. However, verifying LLMs over extensive KGs can be expensive. In this paper, we present KGLens, a Thompson-sampling-inspired framework aimed at effectively and efficiently measuring the alignment between KGs and LLMs. KGLens features a graph-guided question generator for converting KGs into natural language, along with a carefully designed importance sampling strategy based on parameterized KG structure to expedite KG traversal. Our simulation experiment compares the brute-force method with KGLens under six different sampling methods, demonstrating that our approach achieves superior probing efficiency. Leveraging KGLens, we conducted in-depth analyses of the factual accuracy of ten LLMs across three large domain-specific KGs from Wikidata, comprising over 19K edges, 700 relations, and 21K entities. Human evaluation results indicate that KGLens can assess LLMs with a level of accuracy nearly equivalent to that of human annotators, achieving an accuracy rate of 95.7%.

en cs.AI, cs.CL
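A minimal sketch of Thompson-sampling-style probing over KG edges: keep a Beta posterior over each edge's error rate, sample from the posteriors, and probe the edges that currently look most error-prone. The trivial question template and update rule are assumptions; KGLens' actual graph-guided generator and parameterized importance sampling are more elaborate:

```python
import random

def kglens_probe(edges, query_llm, rounds=3, batch=2, rng=None):
    """Thompson-sampling-style KG probing sketch.

    `edges` is a list of (subject, relation, object) triples; `query_llm`
    takes a templated question and the expected answer and returns True
    when the LLM answered correctly. Each edge keeps a Beta(alpha, beta)
    posterior over the LLM's error probability.
    """
    rng = rng or random.Random(0)
    stats = {e: [1, 1] for e in edges}  # [alpha (errors), beta (correct)]
    for _ in range(rounds):
        # Sample an error rate per edge and probe the most error-prone ones.
        sampled = {e: rng.betavariate(a, b) for e, (a, b) in stats.items()}
        probe = sorted(edges, key=lambda e: sampled[e], reverse=True)[:batch]
        for s, r, o in probe:
            question = f"What is the {r} of {s}?"
            correct = query_llm(question, o)
            stats[(s, r, o)][1 if correct else 0] += 1
    # Report the posterior-mean error rate per edge.
    return {e: a / (a + b) for e, (a, b) in stats.items()}
```

Edges the model keeps answering correctly drift toward a low error estimate and are probed less often, which is the efficiency argument behind sampling-based probing.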
arXiv Open Access 2022
MALM: Mixing Augmented Language Modeling for Zero-Shot Machine Translation

Kshitij Gupta

Large pre-trained language models have brought remarkable progress in NLP. Pre-training and fine-tuning have given state-of-the-art performance across tasks in text processing. Data augmentation techniques have also helped build state-of-the-art models for low- or zero-resource tasks. Many past works have attempted to learn a single massively multilingual machine translation model for zero-shot translation. Although those models can produce correct translations, their main challenge is that they often generate output in the wrong language for zero-shot translation. This work and its results indicate that prompt-conditioned large models do not suffer from off-target language errors, i.e., errors arising from translating into the wrong language. We empirically demonstrate the effectiveness of self-supervised pre-training and data augmentation for zero-shot multilingual machine translation.

en cs.CL, cs.LG
arXiv Open Access 2022
Language Control Diffusion: Efficiently Scaling through Space, Time, and Tasks

Edwin Zhang, Yujie Lu, Shinda Huang et al.

Training generalist agents is difficult across several axes, requiring us to deal with high-dimensional inputs (space), long horizons (time), and generalization to novel tasks. Recent advances with architectures have allowed for improved scaling along one or two of these axes, but are still computationally prohibitive to use. In this paper, we propose to address all three axes by leveraging Language to Control Diffusion models as a hierarchical planner conditioned on language (LCD). We effectively and efficiently scale diffusion models for planning in extended temporal, state, and task dimensions to tackle long horizon control problems conditioned on natural language instructions, as a step towards generalist agents. Comparing LCD with other state-of-the-art models on the CALVIN language robotics benchmark finds that LCD outperforms other SOTA methods in multi-task success rates, whilst improving inference speed over other comparable diffusion models by 3.3x–15x. We show that LCD can successfully leverage the unique strength of diffusion models to produce coherent long range plans while addressing their weakness in generating low-level details and control.

en cs.LG, cs.AI
arXiv Open Access 2022
Controlling Translation Formality Using Pre-trained Multilingual Language Models

Elijah Rippeth, Sweta Agrawal, Marine Carpuat

This paper describes the University of Maryland's submission to the Special Task on Formality Control for Spoken Language Translation at IWSLT, which evaluates translation from English into 6 languages with diverse grammatical formality markers. We investigate to what extent this problem can be addressed with a single multilingual model, simultaneously controlling its output for target language and formality. Results show that this strategy can approach the translation quality and formality control achieved by dedicated translation models. However, the nature of the underlying pre-trained language model and of the fine-tuning samples greatly impacts results.

en cs.CL
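A common way to steer a single multilingual model's target language and register is to prepend control tokens to the source side of each training pair. The token format below is an illustrative assumption, not necessarily the authors' scheme:

```python
def tag_example(source, target, target_lang, formality):
    """Build one fine-tuning pair with control tokens prepended to the
    source sentence: one token selecting the target language, one
    selecting the formality level."""
    if formality not in {"formal", "informal"}:
        raise ValueError(f"unknown formality: {formality}")
    return {"source": f"<2{target_lang}> <{formality}> {source}",
            "target": target}
```

At inference time the same tokens are prepended to the input, so one model serves every language/formality combination it was fine-tuned on.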
arXiv Open Access 2021
Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models

Tejas Srinivasan, Yonatan Bisk

Numerous works have analyzed biases in vision and pre-trained language models individually; however, less attention has been paid to how these biases interact in multimodal settings. This work extends text-based bias analysis methods to investigate multimodal language models, and analyzes intra- and inter-modality associations and biases learned by these models. Specifically, we demonstrate that VL-BERT (Su et al., 2020) exhibits gender biases, often preferring to reinforce a stereotype over faithfully describing the visual scene. We demonstrate these findings on a controlled case study and extend them to a larger set of stereotypically gendered entities.

en cs.CL
DOAJ Open Access 2020
«In King Cambyses’ Vein»: Reconsidering the Relationship between Thomas Preston’s Cambises and Herodotus

Francesco Dall'Olio

The relationship between Thomas Preston's early Elizabethan tragedy Cambises (printed 1569) and Book III of Herodotus' Histories has often been downplayed, owing to the lack of printed editions or translations of Herodotus in England at the time and the much more evident connection between the tragedy and the second book of Richard Taverner's Garden of Wysedome (1547). However, a closer look at the play's sources reveals how a connection may exist, and how the version of the story Preston staged may be influenced by the tale of Cambyses as presented by the ancient historian. The insistence on the relationship between the king and his subjects (a central issue in both Preston's tragedy and its sources) may derive from Herodotus, especially if viewed in contrast with the previous versions of the story in medieval literature, the focus of which was mainly on the ethical exempla they provided. Through a comparison of those texts, and a consideration of the availability of Herodotus' work at the time, either in print or in manuscript form, this paper suggests that the version Preston staged in his tragedy is closer to Herodotus than the previous literary tradition.

History of the Greco-Roman World, Greek language and literature. Latin language and literature

Page 13 of 143,212