
Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models

David Wingate, Mohammad Shoeybi, Taylor Sorensen

Abstract

We explore the idea of compressing the prompts used to condition language models, and show that compressed prompts can retain a substantive amount of information about the original prompt. For severely compressed prompts, while fine-grained information is lost, abstract information and general sentiments can be retained with surprisingly few parameters, which can be useful in the context of decode-time algorithms for controllability and toxicity reduction. We explore contrastive conditioning to steer language model generation towards desirable text and away from undesirable text, and find that some complex prompts can be effectively compressed into a single token to guide generation. We also show that compressed prompts are largely compositional, and can be constructed such that they can be used to control independent aspects of generated text.
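To make the two techniques concrete, here are two minimal sketches, assuming GPT-2 via the Hugging Face transformers library. Neither is the paper's exact implementation; the prompts, hyperparameters, and objectives below are illustrative assumptions.

First, a decode-time sketch of contrastive conditioning: at each step the model is queried under a "desirable" and an "undesirable" conditioning prompt, and tokens favored by the former over the latter are upweighted. The log-probability mixing rule and the weight `alpha` are assumptions, not necessarily the paper's formulation.

```python
# A minimal sketch of decode-time contrastive conditioning, assuming
# GPT-2 via Hugging Face transformers. The hand-written prompts, the
# mixing rule, and `alpha` are illustrative, not the paper's exact setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

good_prompt = "The following text is polite and non-toxic:\n"  # desirable conditioning
bad_prompt = "The following text is rude and toxic:\n"         # undesirable conditioning
context = "It is a truth universally acknowledged"
alpha = 2.0  # steering strength (illustrative hyperparameter)

def next_token_logprobs(prefix: str) -> torch.Tensor:
    """Next-token log-probabilities for `prefix + context`."""
    ids = tok(prefix + context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return torch.log_softmax(logits, dim=-1)

# Simple (not efficient) loop: re-encodes the full string each step.
for _ in range(20):
    good = next_token_logprobs(good_prompt)
    bad = next_token_logprobs(bad_prompt)
    # Upweight tokens preferred under the desirable conditioning and
    # downweight tokens preferred under the undesirable one.
    steered = good + alpha * (good - bad)
    next_id = torch.multinomial(torch.softmax(steered, dim=-1), num_samples=1)
    context += tok.decode(next_id)

print(context)
```

Second, prompt compression sketched as soft-prompt distillation: a single trainable embedding is optimized so that the model's next-token distributions approximate those obtained under the full natural-language prompt. The KL objective, optimizer settings, and single training sample are illustrative; a realistic setup would distill over many sampled continuations.

```python
# A minimal sketch of compressing a prompt into one soft token, assuming
# GPT-2. The KL objective and training loop are one plausible setup,
# not necessarily the paper's recipe.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
for p in model.parameters():  # freeze the language model
    p.requires_grad_(False)

hard_prompt = "Write in a cheerful, upbeat, and friendly tone. "
sample_text = "The weather today is absolutely"

emb = model.get_input_embeddings()
# One trainable "compressed prompt" token, initialized at the mean embedding.
soft = torch.nn.Parameter(emb.weight.mean(dim=0, keepdim=True).clone())
opt = torch.optim.Adam([soft], lr=1e-2)

full_ids = tok(hard_prompt + sample_text, return_tensors="pt").input_ids
text_ids = tok(sample_text, return_tensors="pt").input_ids
n = text_ids.shape[1]

with torch.no_grad():
    # Teacher: next-token distributions conditioned on the full hard prompt.
    teacher = model(full_ids).logits[0, -n:].log_softmax(dim=-1)

for step in range(200):
    # Student: the same text positions conditioned only on the soft token.
    inputs = torch.cat([soft.unsqueeze(0), emb(text_ids)], dim=1)
    student = model(inputs_embeds=inputs).logits[0, -n:].log_softmax(dim=-1)
    # Toy objective on a single sample; in practice, average over many texts.
    loss = torch.nn.functional.kl_div(student, teacher,
                                      log_target=True, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because a compressed prompt of this kind lives in embedding space, several such tokens can in principle be concatenated, which is one way to read the abstract's claim that compressed prompts are largely compositional and can control independent aspects of the generated text.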


Authors (3)

David Wingate
Mohammad Shoeybi
Taylor Sorensen

Citation Format

Wingate, D., Shoeybi, M., & Sorensen, T. (2022). Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models. arXiv preprint arXiv:2210.03162. https://arxiv.org/abs/2210.03162

Journal Information

Publication Year: 2022
Language: en
Source Database: arXiv
Access: Open Access ✓