Results for "Language acquisition"

Showing 20 of ~5,483,832 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

S2 Open Access 2013
Identity and Language Learning

B. Norton

Contents: Preface · Introduction · 1. Fact and fiction in language learning · 2. Researching identity and language learning · 3. The world of adult immigrant language learners · 4. Eva and Mai: Old heads on young shoulders · 5. Mothers, migration and language learning · 6. Second language acquisition theory revisited · 7. Claiming the right to speak in classrooms and communities · Afterword by Claire Kramsch

860 citations · en · Computer Science, Psychology
S2 Open Access 1998
Second Language Learning Theories

R. Mitchell, F. Myles, E. Marsden

Book Description: An introduction to the field of second language learning for students without a substantial background in linguistics, this book became an instant success when it was first published in 1998, and was immediately hailed by the academic community as one of the clearest expositions of current theory in the field of second language learning. Written by an educationalist specialising in the teaching of a second language, and a linguist specialising in second language acquisition, this new edition of 'Second Language Learning Theories' provides an up-to-date introductory survey of the most active and significant theoretical perspectives on the subject.

Synopsis: Second Language Learning Theories is an introduction to the field of second language learning for students without a substantial background in linguistics. Drawing on the expertise of both a specialist in the teaching of second languages and a linguist specializing in second language acquisition, this textbook provides an up-to-date introductory survey of the most active and significant perspectives on the subject. In this new edition, the authors have revised and updated the text throughout to reflect the substantial developments that have taken place in the field in recent years. New studies have been incorporated as examples and there is more material on work in L2 phonology and lexis, as well as syntax. The evaluation sections in each chapter have been expanded and generally the book is rebalanced in favour of newer material. The first edition quickly established itself as the textbook of choice for students new to second language learning. The updates and revisions in this new edition ensure that the book remains as fresh, engaging and useful as the day it was first published.

1499 citations · en · Computer Science
DOAJ Open Access 2026
An Experimental Study on Digital Media Integrated Movie Clips for EFL Students’ Vocabulary Mastery

Muhammad Priyo Gading Pradana, Pasca Kalisa

This study examines whether EFL students' vocabulary mastery can be improved by using digital learning tools, specifically the Cake mobile application in conjunction with real-world movie clips. A quasi-experimental design was used, drawing on prior studies that highlight the potential of mobile-assisted language acquisition and audiovisual input for vocabulary development. Sixty-four eleventh-grade students from SMA Teuku Umar Semarang participated and were split into two groups: an experimental group taught via the Cake app and movie clips, and a control group taught via traditional techniques. A 50-item multiple-choice pre-test and post-test were used to gauge vocabulary proficiency, and the data were analysed with an independent-samples t-test after normality and homogeneity testing. The results showed that while both groups improved, there was no significant difference between the experimental and control groups (p = 0.217 > 0.05). These findings suggest that integrating digital media and movie clips does not automatically yield larger vocabulary gains. The results may have been affected by factors including insufficient contextual scaffolding, limited repeated exposure, and variation in learner readiness for mobile-assisted learning. Pedagogically, the study recommends that digital media be embedded in carefully designed instructional frameworks offering clear direction, contextualized practice, and prolonged exposure to target vocabulary.

Language and Literature
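The analysis pipeline described in this abstract (normality and homogeneity checks followed by an independent-samples t-test) can be sketched in Python. The pooled-variance t statistic below is the standard equal-variance form; the score lists are invented for illustration and are not the study's data.

```python
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    # Pooled variance combines both groups' sample variances.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2  # t statistic, degrees of freedom

# Hypothetical post-test scores (illustrative only, not the study's data):
experimental = [72, 78, 81, 69, 75, 80]
control = [70, 74, 79, 68, 73, 77]
t, df = independent_t(experimental, control)
```

The t statistic would then be compared against a t distribution with the returned degrees of freedom; in the study, the resulting p-value (0.217) exceeded the 0.05 threshold, so the null hypothesis of no group difference was retained.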
arXiv Open Access 2026
Misconception Acquisition Dynamics in Large Language Models

Naiming Liu, Xinghe Chen, Richard Baraniuk et al.

Effective educational AI depends on modeling student misconceptions. Such models enable realistic learner simulation and diagnostic, adaptive tutoring. However, instruction-tuning large language models on student responses containing misconception errors can degrade reasoning abilities, creating a tension between faithful misconception modeling and preserving correct reasoning in other contexts. To support both learner simulation and tutoring, we study two misconception-aware models: the Novice Student Misconception Model, trained to acquire a single misconception for simulating an individual student, and the Expert Tutor Misconception Model, trained on multiple misconceptions to capture the error patterns a tutor encounters across students. To study the misconception acquisition dynamics of both models, we develop MalAlgoLib, a library that generates algebra problems with correct solution traces and misconception-specific erroneous traces. Our experiments across three LLMs reveal that the student and the tutor model exhibit fundamentally different misconception acquisition dynamics. For the student model, a single misconception is not learned as a context-specific behavior. Models overapply it across problems, degrading correct-solving accuracy unless training includes correct examples to enforce boundaries. In contrast, the tutor model can learn multiple misconceptions jointly without sacrificing correct-solving accuracy. Critically, intermediate reasoning steps are the bottleneck. With final-answer supervision alone, models cannot learn where error enters the solution, so neither the student model nor the tutor model acquires misconceptions regardless of data size. Together, these results, enabled by MalAlgoLib, provide an interpretable account of misconception acquisition under instruction tuning and guidance for training misconception-aware LLMs while preserving correct reasoning.

en cs.CY
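MalAlgoLib itself is not reproduced in this listing, but the core idea the abstract describes, pairing a correct solution trace with a misconception-specific erroneous trace for the same problem, can be illustrated with a toy generator for linear equations. All names below, including the "keep_sign" misconception, are invented for illustration.

```python
def solve_linear(a, b, c, misconception=None):
    """Step trace for a*x + b = c. The invented 'keep_sign' misconception
    fails to flip the sign of b when moving it across the equals sign."""
    steps = [f"{a}x + {b} = {c}"]
    moved = c + b if misconception == "keep_sign" else c - b
    steps.append(f"{a}x = {moved}")
    x = moved / a
    steps.append(f"x = {moved}/{a}")
    return steps, x

# Paired traces for the same problem: one correct, one exhibiting the error.
correct_trace, x_ok = solve_linear(2, 3, 7)               # x = 2.0
wrong_trace, x_bad = solve_linear(2, 3, 7, "keep_sign")   # x = 5.0
```

Because the two traces diverge at a specific intermediate step rather than only at the final answer, supervision on the steps makes the location of the error learnable, which is the bottleneck the abstract identifies for final-answer-only training.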
DOAJ Open Access 2025
geneEX: An Integrated Phenotype-Driven Algorithm for Rapid Identification of Causative Variants in Monogenic Disorders

Junyu Zhang, Dongyun Liu, Mei Chen et al.

ABSTRACT

Background: In the diagnostic process of monogenic genetic disorders, identifying pathogenic variants is a crucial step. Thanks to the widespread adoption of Next-Generation Sequencing (NGS) technology, diagnostic efficiency has been significantly enhanced. However, with the increasing demand for diagnostic accuracy in clinical practice for monogenic genetic diseases, accurately and swiftly pinpointing pathogenic variants among numerous candidates remains a significant challenge, and the complexity of data analysis and interpretation continues to limit both the efficiency and accuracy of diagnosis.

Methods: In this study, we developed an innovative phenotype-driven algorithm, geneEX. The algorithm integrates large language model technology to extract phenotypes from clinical information and automatically acquires Human Phenotype Ontology (HPO) terms through a semantic vector representation model, thereby identifying HPO-associated genes. It also supports semantic matching between patients' free-text phenotypic descriptions and disease phenotypes, further enhancing the identification of pathogenic genes, and can rank candidate causative variants, enabling rapid and precise identification of potential pathogenic variants in rare genetic disorders.

Results: geneEX demonstrates commendable performance in ranking pathogenic variants across both virtual and clinical datasets. The supplementary matching of free-text phenotypes significantly enhances the precision of candidate variant prioritization.

Conclusion: geneEX achieves automated HPO acquisition through its independently developed phenotype extraction and standardization methods, enabling fully automated identification from clinical samples to pathogenic variants. By integrating free-text phenotypic descriptions with disease phenotype matching, it improves the accuracy of pathogenic gene identification. This approach significantly improves the precision and efficiency of identifying pathogenic variants in rare genetic disorders, providing robust support for the diagnosis of monogenic diseases.
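geneEX's actual semantic vector representation model is not described in this listing; as a stand-in, the sketch below ranks candidate HPO terms against a free-text phenotype description using simple bag-of-words cosine similarity. The two HPO identifiers are real ontology IDs, but the matching pipeline is a deliberately minimal toy, not geneEX's method.

```python
from collections import Counter
from math import sqrt

def cosine(text_a, text_b):
    """Bag-of-words cosine similarity between two phrases."""
    ca, cb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_hpo(patient_text, hpo_terms):
    """Rank (HPO id, label) candidates by similarity to a free-text description."""
    return sorted(hpo_terms, key=lambda t: cosine(patient_text, t[1]), reverse=True)

terms = [("HP:0001250", "seizure"), ("HP:0000365", "hearing impairment")]
ranked = rank_hpo("recurrent seizure episodes", terms)  # HP:0001250 ranks first
```

A production system would replace the lexical similarity with dense embeddings so that paraphrases ("fits", "convulsions") still match the right ontology term, which is what a semantic vector representation model buys over word overlap.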

arXiv Open Access 2025
Reinforcement Learning Meets Large Language Models: A Survey of Advancements and Applications Across the LLM Lifecycle

Keliang Liu, Dingkang Yang, Ziyun Qian et al.

In recent years, training methods centered on Reinforcement Learning (RL) have markedly enhanced the reasoning and alignment performance of Large Language Models (LLMs), particularly in understanding human intents, following user instructions, and bolstering inferential strength. Although existing surveys offer overviews of RL-augmented LLMs, their scope is often limited, failing to provide a comprehensive summary of how RL operates across the full lifecycle of LLMs. We systematically review the theoretical and practical advancements whereby RL empowers LLMs, especially Reinforcement Learning with Verifiable Rewards (RLVR). First, we briefly introduce the basic theory of RL. Second, we thoroughly detail application strategies for RL across various phases of the LLM lifecycle, including pre-training, alignment fine-tuning, and reinforced reasoning. In particular, we emphasize that RL methods in the reinforced reasoning phase serve as a pivotal driving force for advancing model reasoning to its limits. Next, we collate existing datasets and evaluation benchmarks currently used for RL fine-tuning, spanning human-annotated datasets, AI-assisted preference data, and program-verification-style corpora. Subsequently, we review the mainstream open-source tools and training frameworks available, providing clear practical references for subsequent research. Finally, we analyse the future challenges and trends in the field of RL-enhanced LLMs. This survey aims to present researchers and practitioners with the latest developments and frontier trends at the intersection of RL and LLMs, with the goal of fostering the evolution of LLMs that are more intelligent, generalizable, and secure.

en cs.CL
arXiv Open Access 2025
CrossTL: A Universal Programming Language Translator with Unified Intermediate Representation

Nripesh Niketan, Vaatsalya Shrivastva

We present CrossTL, a universal programming language translator enabling bidirectional translation between multiple languages through a unified intermediate representation called CrossGL. Traditional approaches require separate translators for each language pair, leading to exponential complexity growth. CrossTL uses a single universal IR to facilitate translations between CUDA, HIP, Metal, DirectX HLSL, OpenGL GLSL, Vulkan SPIR-V, Rust, and Mojo, with Slang support in development. Our system consists of: language-specific lexers/parsers converting source code to ASTs, bidirectional CrossGL translation modules implementing ToCrossGLConverter classes for importing code and CodeGen classes for target generation, and comprehensive backend implementations handling full translation pipelines. We demonstrate effectiveness through comprehensive evaluation across programming domains, achieving successful compilation and execution across all supported backends. The universal IR design enables adding new languages with minimal effort, requiring only language-specific frontend/backend components. Our contributions include: (1) a unified IR capturing semantics of multiple programming paradigms, (2) a modular architecture enabling extensibility, (3) a comprehensive framework supporting GPU compute, graphics programming, and systems languages, and (4) empirical validation demonstrating practical viability of universal code translation. CrossTL represents a significant step toward language-agnostic programming, enabling write-once, deploy-everywhere development.

en cs.PL, cs.CL
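CrossTL's actual CrossGL IR and its ToCrossGLConverter/CodeGen classes are not reproduced here. The toy sketch below (all names invented, with a single fake "add" statement) only illustrates the architectural point the abstract makes: routing every language pair through one shared IR means adding a language costs one frontend plus one backend, instead of one translator per existing language.

```python
def frontend(src):
    """Hypothetical frontend: parse 'add out a b' into a tiny IR node."""
    op, out, a, b = src.split()
    return {"op": op, "out": out, "args": [a, b]}

def backend_rust(node):
    """Hypothetical Rust-flavoured backend for the toy IR."""
    return f'let {node["out"]} = {node["args"][0]} + {node["args"][1]};'

def backend_c(node):
    """Hypothetical C-flavoured backend for the same IR node."""
    return f'int {node["out"]} = {node["args"][0]} + {node["args"][1]};'

def translate(src, backend):
    # One frontend feeds any backend: with N languages this is N + N
    # components rather than N * (N - 1) pairwise translators.
    return backend(frontend(src))
```

For example, `translate("add c a b", backend_rust)` yields `let c = a + b;`, while the same IR node fed to `backend_c` yields `int c = a + b;`.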
arXiv Open Access 2025
Small but Significant: On the Promise of Small Language Models for Accessible AIED

Yumou Wei, Paulo Carvalho, John Stamper

GPT has become nearly synonymous with large language models (LLMs), an increasingly popular term in AIED proceedings. A simple keyword-based search reveals that 61% of the 76 long and short papers presented at AIED 2024 describe novel solutions using LLMs to address some of the long-standing challenges in education, and 43% specifically mention GPT. Although LLMs pioneered by GPT create exciting opportunities to strengthen the impact of AI on education, we argue that the field's predominant focus on GPT and other resource-intensive LLMs (with more than 10B parameters) risks neglecting the potential impact that small language models (SLMs) can make in providing resource-constrained institutions with equitable and affordable access to high-quality AI tools. Supported by positive results on knowledge component (KC) discovery, a critical challenge in AIED, we demonstrate that SLMs such as Phi-2 can produce an effective solution without elaborate prompting strategies. Hence, we call for more attention to developing SLM-based AIED approaches.

en cs.CL, cs.AI

Page 27 of 274,192