arXiv Open Access 2024

SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature

David Wadden, Kejian Shi, Jacob Morrison, Alan Li, Aakanksha Naik, +9 others

Abstract

We present SciRIFF (Scientific Resource for Instruction-Following and Finetuning), a dataset of 137K instruction-following instances for training and evaluation, covering 54 tasks. These tasks span five core scientific literature understanding capabilities: information extraction, summarization, question answering, claim verification, and classification. SciRIFF is unique as an entirely expert-written, high-quality instruction-following dataset for extracting and synthesizing information from research literature across diverse scientific fields. It features complex instructions with long input contexts, detailed task descriptions, and structured outputs. To demonstrate its utility, we finetune a series of large language models (LLMs) using a mix of general-domain and SciRIFF instructions. On nine out-of-distribution held-out tasks (referred to as SciRIFF-Eval), LLMs finetuned on SciRIFF achieve a 70.6% average improvement over baselines trained only on general-domain instructions. SciRIFF facilitates the development and evaluation of LLMs that help researchers navigate the rapidly growing body of scientific literature.


Authors (14)

David Wadden

Kejian Shi

Jacob Morrison

Alan Li

Aakanksha Naik

Shruti Singh

Nitzan Barzilay

Kyle Lo

Tom Hope

Luca Soldaini

Shannon Zejiang Shen

Doug Downey

Hannaneh Hajishirzi

Arman Cohan

Citation Format

Wadden, D., Shi, K., Morrison, J., Li, A., Naik, A., Singh, S. et al. (2024). SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature. https://arxiv.org/abs/2406.07835

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓