arXiv Open Access 2025

Leveraging Large Language Models for Identifying Knowledge Components

Canwen Wang Jionghao Lin Kenneth R. Koedinger

Abstract

Knowledge Components (KCs) are foundational to adaptive learning systems, but their manual identification by domain experts is a significant bottleneck. While Large Language Models (LLMs) offer a promising avenue for automating this process, prior research has been limited to small datasets and has been shown to produce superfluous, redundant KC labels. This study addresses these limitations by first scaling a "simulated textbook" LLM prompting strategy (using GPT-4o-mini) to a larger dataset of 646 multiple-choice questions. We found that this initial automated approach performed significantly worse than an expert-designed KC model (RMSE 0.4285 vs. 0.4206) and generated an excessive number of KCs (569 vs. 101). To address the issue of redundancy, we proposed and evaluated a novel method for merging semantically similar KC labels based on their cosine similarity. This merging strategy significantly improved the model's performance; a model using a cosine similarity threshold of 0.8 achieved the best result, reducing the KC count to 428 and improving the RMSE to 0.4259. This demonstrates that while scaled LLM generation alone is insufficient, combining it with a semantic merging technique offers a viable path toward automating and refining KC identification.
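As a rough illustration of the merging strategy described in the abstract: labels whose embedding cosine similarity meets the threshold (0.8 in the best-performing model) are grouped together. The paper's exact embedding model and merge procedure are not given here, so the union-find grouping and the toy vectors below are assumptions; a minimal sketch:

```python
from itertools import combinations
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def merge_kc_labels(embeddings, threshold=0.8):
    """Group KC labels whose embeddings have cosine similarity >= threshold.

    Union-find makes the grouping transitive: if A~B and B~C, all three
    labels collapse into one KC.
    """
    labels = list(embeddings)
    parent = {lab: lab for lab in labels}

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in combinations(labels, 2):
        if cosine(embeddings[a], embeddings[b]) >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for lab in labels:
        groups.setdefault(find(lab), []).append(lab)
    return list(groups.values())

# Toy embeddings (hypothetical; real KC labels would be embedded
# with a sentence encoder before comparing).
emb = {
    "add fractions":     [0.90, 0.10, 0.00],
    "fraction addition": [0.88, 0.15, 0.02],
    "solve linear eq":   [0.10, 0.90, 0.30],
}
print(merge_kc_labels(emb, threshold=0.8))
```

With these toy vectors, the two fraction labels merge into a single group while the equation-solving label remains separate, shrinking the KC count in the same spirit as the paper's 569-to-428 reduction.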


Authors (3)

Canwen Wang
Jionghao Lin
Kenneth R. Koedinger

Citation Format

Wang, C., Lin, J., & Koedinger, K. R. (2025). Leveraging Large Language Models for Identifying Knowledge Components. https://arxiv.org/abs/2511.09935

Journal Information
Year of Publication: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓