Results for "Language acquisition"

Showing 20 of ~5,476,854 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

S2 Open Access 2009
Bilingual First Language Acquisition

A. D. Houwer

Chapter 1 Introducing Bilingual First Language Acquisition · Chapter 2 Bilingual children's language development: an overview · Chapter 3 Research methods in BFLA · Chapter 4 Socializing environments and BFLA · Chapter 5 Sounds in BFLA · Chapter 6 Words in BFLA · Chapter 7 Sentences in BFLA · Chapter 8 Harmonious bilingual development

657 citations en Computer Science
DOAJ Open Access 2026
Language as an evolutionary pressure of human handedness

José M.M. Gázquez, Thomas Castelain, Miquel Llorente

Language, rooted in left-hemisphere dominance and a species-wide bias toward right-handedness, stands as a singularly complex cognitive and behavioural domain that has profoundly influenced human evolution. We argue that language should be understood not solely as an adaptation or a by-product of neural expansion and laryngeal descent, but as an active evolutionary force that reinforced pre-existing motor lateralisation. Integrating Tinbergen's four questions with the Baldwin Effect framework, we synthesise evidence from behavioural ecology, comparative neuroscience, and palaeoanthropology to trace the interplay between gestural origins, vocal learning, hemispheric specialisation, and handedness. We propose that language arose as a form of social pressure, amplifying an ancestral right-handed gestural bias through increasingly complex vocal acquisition, thereby linking developmental (ontogenetic) processes with deep evolutionary (phylogenetic) change. This co-evolutionary dynamic produced the pronounced hemispheric asymmetry and right-handedness that distinguish Homo sapiens.

DOAJ Open Access 2025
Exploring Phraseological Patterns in Business English Non-Finite Clauses

Olfa Ben Amor

The increasing availability of large-scale corpora and advanced data-processing tools has enhanced the analysis of phraseological units. This study investigates the phraseology of English non-finite clauses – specifically to-infinitive, -ing, and past participle clauses – headed by adjectives, adverbs, nouns, and pronouns. It explores the phraseological patterns of these structures and their semantic extensions within a specialized corpus of business English. The corpus comprises academic and journalistic registers, with the academic register including research articles from four leading journals and graduate theses from Tunisian institutions, while the news register features business articles from The Economist and Financial Times. The study identifies lexico-grammatical patterns forming various phraseologies of non-finite clauses and categorizes these patterns into semantic sets based on the degree of fixedness. Findings reveal differences in the frequency of non-finite phraseologies across the academic and news registers, and similarities in the degree of fixedness and functions of the most frequent patterns. The study offers a corpus-based account of how non-finite clause constructions are used across business registers, contributing to a broader understanding of register variation, discourse organization, and phraseological conventions in business discourse.

Special aspects of education, Language acquisition
DOAJ Open Access 2025
Adaptive Spelling in Immersive Reality: The Impact of Gamified VR and LLMs on Young Learners’ English Color Word Acquisition

Jalal Safari Bazargani, Abolghasem Sadeghi-Niaraki, Xinyu Shi et al.

Spelling is a crucial language skill, yet traditional instruction often relies on rote memorization rather than meaningful learning. Despite various approaches to spelling instruction, the potential of virtual reality (VR), combined with gamification and LLMs, remains underexplored. This study explores a VR-based, gamified approach using LLM-driven adaptive learning to improve spelling acquisition of English color words among young learners. The study employed a quasi-experimental pre-test/post-test design with a control group. The participants were 50 male students aged 10, divided into an experimental group (N=25) that used the LLM-enhanced VR game and a control group (N=25) that received traditional instruction. The VR intervention consisted of a three-stage game built on the whole-word approach, featuring gamified elements and adaptive feedback from an LLM. Data were collected via spelling tests (pre-test, immediate post-test, and delayed post-test), user experience surveys, and semi-structured interviews. Results showed that the VR-based approach significantly improved spelling performance and engagement. Specifically, the experimental group demonstrated substantially higher scores in both immediate vocabulary uptake and long-term retention after one week compared to the control group. Furthermore, qualitative and survey data indicated the VR experience was perceived as significantly more interesting, effective, and motivating. These findings highlight the potential of immersive, gamified learning environments to enhance spelling education, offering an effective alternative to conventional methods.

Electrical engineering. Electronics. Nuclear engineering
arXiv Open Access 2025
Searching for the Most Human-like Emergent Language

Brendon Boldt, David Mortensen

In this paper, we design a signalling game-based emergent communication environment to generate state-of-the-art emergent languages in terms of similarity to human language. This is done with hyperparameter optimization, using XferBench as the objective function. XferBench quantifies the statistical similarity of emergent language to human language by measuring its suitability for deep transfer learning to human language. Additionally, we demonstrate the predictive power of entropy on the transfer learning performance of emergent language as well as corroborate previous results on the entropy-minimization properties of emergent communication systems. Finally, we report generalizations regarding what hyperparameters produce more realistic emergent languages, that is, ones which transfer better to human language.

en cs.CL
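The hyperparameter search described in the abstract above amounts to a loop over candidate environment settings, scored by an objective function (XferBench in the paper). The sketch below is a minimal illustration under assumed names: the search space, `mock_xferbench_score`, and `random_search` are hypothetical stand-ins, not the paper's actual setup, which trains real emergent-language systems and scores them via transfer learning.

```python
import random

# Hypothetical hyperparameter space for a signalling-game environment.
SPACE = {
    "vocab_size": [32, 64, 128, 256],
    "message_length": [2, 4, 8],
    "hidden_size": [64, 128, 256],
}

def mock_xferbench_score(cfg):
    """Stand-in for XferBench: lower is better (in the paper, the score
    reflects how well the emergent language transfers to modelling human
    language). This toy version just prefers mid-sized settings."""
    return (abs(cfg["vocab_size"] - 128) / 128
            + abs(cfg["message_length"] - 4) / 4)

def random_search(n_trials=50, seed=0):
    """Sample configurations and keep the one with the lowest score."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        score = mock_xferbench_score(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search()
print(best_cfg, best_score)
```

A real search would replace `mock_xferbench_score` with a full train-then-evaluate run per configuration, typically driven by a Bayesian optimizer rather than random sampling.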
arXiv Open Access 2025
Seeing, Signing, and Saying: A Vision-Language Model-Assisted Pipeline for Sign Language Data Acquisition and Curation from Social Media

Shakib Yazdani, Yasser Hamidullah, Cristina España-Bonet et al.

Most existing sign language translation (SLT) datasets are limited in scale, lack multilingual coverage, and are costly to curate due to their reliance on expert annotation and controlled recording setups. Recently, Vision Language Models (VLMs) have demonstrated strong capabilities as evaluators and real-time assistants. Despite these advancements, their potential remains untapped in the context of sign language dataset acquisition. To bridge this gap, we introduce the first automated annotation and filtering framework that utilizes VLMs to reduce reliance on manual effort while preserving data quality. Our method is applied to TikTok videos across eight sign languages and to the already curated YouTube-SL-25 dataset in German Sign Language for the purpose of additional evaluation. Our VLM-based pipeline includes face visibility detection, sign activity recognition, text extraction from video content, and a judgment step to validate alignment between video and text, implementing generic filtering, annotation, and validation steps. Using the resulting corpus, TikTok-SL-8, we assess the performance of two off-the-shelf SLT models on our filtered dataset for German and American Sign Languages, with the goal of establishing baselines and evaluating the robustness of recent models on automatically extracted, slightly noisy data. Our work enables scalable, weakly supervised pretraining for SLT and facilitates data acquisition from social media.

en cs.CL
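The filtering steps listed in the abstract above (face visibility, sign activity, text extraction, video-text alignment judgment) form a chain of per-video checks. The sketch below illustrates that chaining with hypothetical stand-in predicates; in a real system each predicate would issue a VLM query, and the field names here are assumptions for illustration.

```python
# Chain a list of checks: a video is kept only if every check passes.
def make_pipeline(checks):
    def run(video):
        for check in checks:
            if not check(video):
                return False
        return True
    return run

# Stand-in predicates over a dict of precomputed answers; a real
# pipeline would query a VLM for each of these judgments.
face_visible  = lambda v: v.get("face_visible", False)
sign_activity = lambda v: v.get("signing", False)
has_text      = lambda v: bool(v.get("caption"))
text_aligned  = lambda v: v.get("aligned", False)

pipeline = make_pipeline([face_visible, sign_activity, has_text, text_aligned])

videos = [
    {"face_visible": True, "signing": True, "caption": "hello", "aligned": True},
    {"face_visible": True, "signing": False, "caption": "hi", "aligned": True},
]
kept = [v for v in videos if pipeline(v)]
print(len(kept))  # 1: only the first video passes all four checks
```

Ordering the checks from cheapest to most expensive lets the pipeline reject most videos before the costly alignment judgment runs.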
arXiv Open Access 2025
Aligning Sentence Simplification with ESL Learner's Proficiency for Language Acquisition

Guanlin Li, Yuki Arase, Noel Crespi

Text simplification is crucial for improving accessibility and comprehension for English as a Second Language (ESL) learners. This study goes a step further and aims to facilitate ESL learners' language acquisition by simplification. Specifically, we propose simplifying complex sentences to appropriate levels for learners while also increasing vocabulary coverage of the target level in the simplifications. We achieve this without a parallel corpus by conducting reinforcement learning on a large language model. Our method employs token-level and sentence-level rewards, and iteratively trains the model on its self-generated outputs to guide the model to search for simplification hypotheses that satisfy the target attributes. Experiment results on CEFR-SP and TurkCorpus datasets show that the proposed method can effectively increase the frequency and diversity of vocabulary of the target level by more than $20\%$ compared to baseline models, while maintaining high simplification quality.

en cs.CL, cs.AI
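The token-level and sentence-level rewards mentioned in the abstract above can be illustrated with a minimal sketch. The vocabulary list, level labels, reward definitions, and `alpha` weight below are assumptions for illustration, not the paper's actual reward design, and the level classifier is replaced by a precomputed label.

```python
# Hypothetical target-level vocabulary (e.g. a few CEFR A2 words).
TARGET_LEVEL_VOCAB = {"buy", "cheap", "shop", "price"}

def token_reward(tokens):
    """Token-level signal: fraction of the target-level vocabulary
    covered by the simplified output."""
    hits = TARGET_LEVEL_VOCAB & set(tokens)
    return len(hits) / len(TARGET_LEVEL_VOCAB)

def sentence_reward(predicted_level, target_level="A2"):
    """Sentence-level signal: 1.0 if the classified difficulty level
    matches the learner's target level, else 0."""
    return 1.0 if predicted_level == target_level else 0.0

def combined_reward(tokens, predicted_level, alpha=0.5):
    # Weighted mix of the two signals; alpha is an assumed weight.
    return (alpha * token_reward(tokens)
            + (1 - alpha) * sentence_reward(predicted_level))

r = combined_reward(["people", "buy", "cheap", "goods"], "A2")
print(r)  # 0.75: token coverage 2/4 = 0.5, level match 1.0
```

In an RL setup, a reward like this would score the model's self-generated simplifications, and the policy would be updated to favour outputs that both cover target-level vocabulary and land at the target difficulty.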
arXiv Open Access 2025
Understanding Network Behaviors through Natural Language Question-Answering

Mingzhe Xing, Chang Tian, Jianan Zhang et al.

Modern large-scale networks introduce significant complexity in understanding network behaviors, increasing the risk of misconfiguration. Prior work proposed to understand network behaviors by mining network configurations, typically relying on domain-specific languages interfaced with formal models. While effective, they suffer from a steep learning curve and limited flexibility. In contrast, natural language (NL) offers a more accessible and interpretable interface, motivating recent research on NL-guided network behavior understanding. Recent advances in large language models (LLMs) further enhance this direction, leveraging their extensive prior knowledge of network concepts and strong reasoning capabilities. However, three key challenges remain: 1) numerous router devices with lengthy configuration files challenge LLM's long-context understanding ability; 2) heterogeneity across devices and protocols impedes scalability; and 3) complex network topologies and protocols demand advanced reasoning abilities beyond the current capabilities of LLMs. To tackle the above challenges, we propose NetMind, a novel framework for querying networks using NL. Our approach introduces a tree-based configuration chunking strategy to preserve semantic coherence while enabling efficient partitioning. We then construct a unified fact graph as an intermediate representation to normalize vendor-specific configurations. Finally, we design a hybrid imperative-declarative language to reduce the reasoning burden on LLMs and enhance precision. We contribute a benchmark consisting of NL question-answer pairs paired with network configurations. Experiments demonstrate that NetMind achieves accurate and scalable network behavior understanding, outperforming existing baselines.

en cs.CL, cs.AI
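The tree-based configuration chunking idea in the abstract above, splitting long configs at semantically coherent boundaries rather than at arbitrary character counts, can be illustrated with a minimal sketch. The config snippet and the indentation rule below are illustrative assumptions, not NetMind's actual parser.

```python
# Split a router config into chunks at top-level stanza boundaries so
# each chunk stays semantically coherent (indentation marks
# sub-statements, as in many vendor configuration formats).
CONFIG = """\
interface GigabitEthernet0/0
  ip address 10.0.0.1 255.255.255.0
  no shutdown
router bgp 65001
  neighbor 10.0.0.2 remote-as 65002
"""

def chunk_config(text):
    chunks, current = [], []
    for line in text.splitlines():
        if line and not line.startswith(" "):  # new top-level stanza
            if current:
                chunks.append("\n".join(current))
            current = [line]
        elif line:  # indented continuation of the current stanza
            current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

chunks = chunk_config(CONFIG)
print(len(chunks))  # 2: the interface stanza and the BGP stanza
```

Chunking at stanza boundaries keeps each LLM prompt within the context window without cutting a statement off from the block that gives it meaning; the chunks could then be normalized into the unified fact graph the abstract describes.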

Page 6 of 273,843