arXiv Open Access 2025

HistLLM: A Unified Framework for LLM-Based Multimodal Recommendation with User History Encoding and Compression

Chen Zhang, Bo Hu, Weidong Chen, Zhendong Mao

Abstract

While large language models (LLMs) have proven effective at leveraging textual data for recommendation, their application to multimodal recommendation tasks remains relatively underexplored. LLMs can process multimodal information through projection functions that map visual features into their semantic space, but recommendation tasks often require representing a user's interaction history through lengthy prompts that combine text and visual elements. Such prompts not only hamper training and inference efficiency but also make it difficult for the model to accurately capture user preferences, degrading recommendation performance. To address this challenge, we introduce HistLLM, a multimodal recommendation framework that integrates textual and visual features through a User History Encoding Module (UHEM), compressing a user's multimodal interaction history into a single token representation and thereby making user preferences easier for the LLM to process. Extensive experiments demonstrate the effectiveness and efficiency of the proposed mechanism.
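The abstract's central idea, compressing a sequence of multimodal history embeddings into a single token-sized vector, can be sketched as an attention-pooling step. This is a minimal illustrative sketch, not the paper's actual UHEM: the function names, the learnable query vector, and the assumption that text and visual features are already fused into per-item embeddings are all hypothetical.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def compress_history(history, query):
    """Attention-pool a user's multimodal history embeddings
    (shape [n_items, dim]) into a single token embedding (shape [dim])."""
    scores = history @ query      # one relevance score per history item
    weights = softmax(scores)     # normalized attention weights
    return weights @ history      # weighted sum -> one compressed vector

# toy example: 5 history items, 8-dim fused text+visual embeddings
rng = np.random.default_rng(0)
history = rng.normal(size=(5, 8))
query = rng.normal(size=8)        # would be learnable in a real model
token = compress_history(history, query)
print(token.shape)  # (8,)
```

The payoff of such a compression is prompt length: instead of serializing every history item as text and image tokens, the LLM receives one learned embedding in place of the whole history, which is the efficiency argument the abstract makes.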


Authors (4)

Chen Zhang

Bo Hu

Weidong Chen

Zhendong Mao

Citation Format

Zhang, C., Hu, B., Chen, W., & Mao, Z. (2025). HistLLM: A Unified Framework for LLM-Based Multimodal Recommendation with User History Encoding and Compression. arXiv. https://arxiv.org/abs/2504.10150

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓