CrossRef Open Access 2025

CE-Prompt: enhance prompt expression stability by multiple understanding

Wujian Yang Chunxu Jin Guanlin Chen Haotian Jin

Abstract

In this article, we propose CE-Prompt, an enhanced version of Prompt-Tuning designed to address issues such as the instability of random initialization and inefficiencies caused by long text in pre-trained large language models (LLMs). Inspired by the multi-head attention mechanism, CE-Prompt introduces the concept of composite embedding, which utilizes multiple randomly initialized embedding layers to generate more expressive prompt representations. To effectively integrate the information expressed by these composite embeddings, an additive fusion approach is employed, allowing each prompt vector to capture task-specific information more comprehensively, thereby improving the model’s task adaptability and inference efficiency. Experimental results show that CE-Prompt outperforms traditional Prompt-Tuning methods, with average improvements of 0.82% in Bilingual Evaluation Understudy (BLEU)-4 and 0.65% in ROUGE-L. Additionally, time complexity analysis indicates that CE-Prompt significantly reduces computational costs during inference. Compared to other methods, it achieves higher efficiency with the same training parameter budget, providing a more efficient solution for practical deployment.
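The composite-embedding idea described above can be sketched minimally: several independently random-initialized prompt embedding tables are fused by element-wise addition into a single prompt matrix. This is an illustrative sketch only; the function name, shapes, and initialization scale are assumptions, not the authors' implementation.

```python
import numpy as np

def composite_prompt(num_tables: int, prompt_len: int, dim: int,
                     seed: int = 0) -> np.ndarray:
    """Hypothetical sketch of composite embedding with additive fusion.

    Each of the `num_tables` prompt embedding tables is an independent
    random initialization; summing them yields one prompt matrix of
    shape (prompt_len, dim) that blends all initializations.
    """
    rng = np.random.default_rng(seed)
    tables = [rng.normal(scale=0.02, size=(prompt_len, dim))
              for _ in range(num_tables)]
    # Additive fusion: element-wise sum over the stack of tables.
    return np.sum(tables, axis=0)

prompt = composite_prompt(num_tables=4, prompt_len=10, dim=768)
print(prompt.shape)  # (10, 768)
```

In a real Prompt-Tuning setup these tables would be trainable parameters prepended to the input embeddings; the sketch only shows the fusion step.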

Authors (4)

Wujian Yang

Chunxu Jin

Guanlin Chen

Haotian Jin

Citation Format

Yang, W., Jin, C., Chen, G., & Jin, H. (2025). CE-Prompt: enhance prompt expression stability by multiple understanding. https://doi.org/10.7717/peerj-cs.3231

Quick Access

View at Source doi.org/10.7717/peerj-cs.3231
Journal Information
Publication Year
2025
Language
en
Source Database
CrossRef
DOI
10.7717/peerj-cs.3231
Access
Open Access ✓