DOAJ Open Access 2026

Knowledge intensive agents

Zhenghao Liu, Pengcheng Huang, Zhipeng Xu, Xinze Li, Shuliang Liu, +9 others

Abstract

Large Language Models (LLMs) have exhibited impressive capabilities in reasoning and language understanding. However, their reliance on memorized knowledge and tendency to generate hallucinated content limit their reliability in real-world applications. Retrieval-Augmented Generation (RAG) mitigates these issues by integrating a retrieval module that supplements LLMs with relevant external knowledge. This paradigm bridges parametric memory and explicit retrieval, offering a principled way to ground generation in factual evidence. Despite substantial progress, most prior work has focused on optimizing isolated components, either retrieval or generation, while overlooking the agentic perspective, in which LLMs act as autonomous agents capable of actively acquiring and strategically utilizing knowledge. In this perspectives paper, we argue for reinterpreting RAG as a collaborative knowledge process among agents with distinct yet complementary roles. We categorize knowledge-intensive agents into two primary roles: knowledge acquisition (e.g., routing, query reformulation) and knowledge utilization (e.g., knowledge refinement, response generation). From this viewpoint, RAG becomes a dynamic system in which knowledge is continuously transmitted, transformed, and aligned across agent roles. To fully realize this paradigm, we advocate a joint optimization framework for knowledge-intensive agents within RAG systems. This framework explicitly models the dynamics of knowledge flow in multi-agent settings, aligning knowledge supply with knowledge demand through LLM-driven data synthesis, feedback, and evaluation. By fostering adaptive and targeted knowledge exchange, the framework mitigates conflicts between parametric and retrieved knowledge, thereby enhancing both coherence and factuality. 
We argue that this multi-agent joint optimization paradigm improves RAG systems in scalability, reliability, and adaptability, unlocking the potential for next-generation knowledge-intensive LLMs that reason, retrieve, and collaborate across deep retrieval processes and diverse vertical domains.
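The abstract's two-role decomposition can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the paper's implementation: a toy in-memory corpus, keyword-overlap retrieval, and rule-based classes (`AcquisitionAgent`, `UtilizationAgent`, `answer`) standing in for the LLM-driven agents the paper envisions.

```python
# Hypothetical sketch of the two agent roles (knowledge acquisition vs.
# knowledge utilization) described in the abstract. All components are
# toy stand-ins for LLM-driven agents.

CORPUS = {
    "doc1": "RAG combines a retriever with a generator to ground answers.",
    "doc2": "Query reformulation rewrites a question to improve retrieval.",
    "doc3": "Knowledge refinement filters retrieved passages for relevance.",
}

class AcquisitionAgent:
    """Knowledge acquisition: query reformulation + routing/retrieval."""

    def reformulate(self, query: str) -> str:
        # Toy reformulation: lowercase and drop punctuation.
        return "".join(c for c in query.lower() if c.isalnum() or c.isspace())

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Keyword-overlap scoring stands in for a learned retriever.
        q_terms = set(query.split())
        scored = sorted(
            CORPUS.values(),
            key=lambda doc: len(q_terms & set(doc.lower().split())),
            reverse=True,
        )
        return scored[:k]

class UtilizationAgent:
    """Knowledge utilization: knowledge refinement + response generation."""

    def refine(self, query: str, passages: list[str]) -> list[str]:
        # Keep only passages sharing at least one query term,
        # mitigating conflicts between retrieved and irrelevant content.
        q_terms = set(query.split())
        return [p for p in passages if q_terms & set(p.lower().split())]

    def generate(self, query: str, evidence: list[str]) -> str:
        if not evidence:
            return "No supporting evidence found."
        return f"Based on {len(evidence)} passage(s): {evidence[0]}"

def answer(question: str) -> str:
    # Knowledge flows from the acquisition agent to the utilization agent.
    acq, util = AcquisitionAgent(), UtilizationAgent()
    q = acq.reformulate(question)
    passages = acq.retrieve(q)
    evidence = util.refine(q, passages)
    return util.generate(q, evidence)
```

In the paper's framing, each stage would be an LLM agent optimized jointly with the others; this sketch only shows the direction of the knowledge flow between the two roles.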

Authors (14)

Zhenghao Liu
Pengcheng Huang
Zhipeng Xu
Xinze Li
Shuliang Liu
Chunyi Peng
Haidong Xin
Yukun Yan
Shuo Wang
Xu Han
Zhiyuan Liu
Maosong Sun
Yu Gu
Ge Yu

Citation Format

Liu, Z., Huang, P., Xu, Z., Li, X., Liu, S., Peng, C. et al. (2026). Knowledge intensive agents. https://doi.org/10.1016/j.aiopen.2026.02.002

Quick Access

PDF not directly available

Check the original source →
View at Source doi.org/10.1016/j.aiopen.2026.02.002
Journal Information
Publication Year
2026
Source Database
DOAJ
DOI
10.1016/j.aiopen.2026.02.002
Access
Open Access ✓