DOAJ Open Access 2025

Dynamic Assessment with AI (Agentic RAG) and Iterative Feedback: A Model for the Digital Transformation of Higher Education in the Global EdTech Ecosystem

Rubén Juárez, Antonio Hernández-Fernández, Claudia de Barros-Camargo, David Molero

Abstract

This article formalizes AI-assisted assessment as a discrete-time <i>policy-level</i> design for iterative feedback and evaluates it in a digitally transformed higher-education setting. We integrate an <i>agentic</i> retrieval-augmented generation (RAG) feedback engine—operationalized through <i>planning</i> (rubric-aligned task decomposition), <i>tool use</i> beyond retrieval (tests, static/dynamic analyzers, rubric checker), and <i>self-critique</i> (checklist-based verification)—into a six-iteration dynamic evaluation cycle. Learning trajectories are modeled with three complementary formulations: (i) an interpretable update rule with explicit parameters <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mi>η</mi></semantics></math></inline-formula> and <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mi>λ</mi></semantics></math></inline-formula> that links next-step gains to feedback quality and the gap-to-target and yields iteration-complexity and stability conditions; (ii) a logistic-convergence model capturing diminishing returns near ceiling; and (iii) a relative-gain regression quantifying the marginal effect of feedback quality on the fraction of the gap closed per iteration. In a <i>Concurrent Programming</i> course (<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mi>n</mi><mo>=</mo><mn>35</mn></mrow></semantics></math></inline-formula>), the cohort mean increased from 58.4 to 91.2 (0–100), while dispersion decreased from 9.7 to 5.8 across six iterations; a Greenhouse–Geisser corrected repeated-measures ANOVA indicated significant within-student change. Parameter estimates show that higher-quality, evidence-grounded feedback is associated with larger next-step gains and faster convergence. 
Beyond performance, we engage the broader pedagogical question of <i>what to value and how to assess</i> in AI-rich settings: we elevate <i>process and provenance</i>—planning artifacts, tool-usage traces, test outcomes, and evidence citations—to first-class assessment signals, and outline defensible formats (trace-based walkthroughs and oral/code defenses) that our controller can instrument. We position this as a <i>design model for feedback policy</i>, complementary to state-estimation approaches such as knowledge tracing. We discuss implications for instrumentation, equity-aware metrics, reproducibility, and epistemically aligned rubrics. Limitations include the observational, single-course design; future work should test causal variants (e.g., stepped-wedge trials) and cross-domain generalization.
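The abstract describes, but does not reproduce, the interpretable update rule (i) with parameters η and λ and the six-iteration trajectory it generates. A minimal sketch of one plausible form, assuming the next-step gain is proportional to feedback quality q_t and the remaining gap to target (the functional form, parameter roles, and values here are assumptions, not taken from the paper), could look like:

```python
# Hypothetical sketch of an iterative-feedback update rule in the spirit
# of model (i) in the abstract. Exact forms and parameters are assumed.

def update_score(s_t, q_t, eta=0.4, lam=1.0, target=100.0):
    """One iteration: next-step gain proportional to feedback quality
    q_t (in [0, 1]) and the gap to target, scaled by eta and lam."""
    return s_t + eta * q_t * lam * (target - s_t)

def trajectory(s0=58.4, q=0.8, iterations=6):
    """Simulate six feedback iterations; the start value 58.4 matches
    the reported cohort mean, everything else is illustrative."""
    scores = [s0]
    for _ in range(iterations):
        scores.append(update_score(scores[-1], q))
    return scores

if __name__ == "__main__":
    print([round(s, 1) for s in trajectory()])
```

With a constant effective gain eta·q·lam in (0, 1), the gap to target shrinks geometrically each iteration, which is consistent with the diminishing-returns behavior the logistic model (ii) is said to capture near ceiling.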

Authors (4)

Rubén Juárez

Antonio Hernández-Fernández

Claudia de Barros-Camargo

David Molero

Citation Format

Juárez, R., Hernández-Fernández, A., de Barros-Camargo, C., & Molero, D. (2025). Dynamic Assessment with AI (Agentic RAG) and Iterative Feedback: A Model for the Digital Transformation of Higher Education in the Global EdTech Ecosystem. https://doi.org/10.3390/a18110712

Quick Access

PDF not directly available

Check the original source →
View at source: doi.org/10.3390/a18110712
Journal Information
Publication Year
2025
Source Database
DOAJ
DOI
10.3390/a18110712
Access
Open Access ✓