arXiv Open Access 2026

Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training

Chuxue Cao, Honglin Lin, Zhanping Zhong, Xin Gao, Mengzhang Cai, +3 others

Abstract

Large Language Models (LLMs) have demonstrated strong general capabilities, yet their deployment in finance remains challenging due to dense domain-specific terminology, stringent numerical reasoning requirements, and low tolerance for factual errors. We conduct a controlled empirical study showing that in specialized vertical domains, performance is largely determined by the quality and difficulty/verifiability profile of post-training data. We introduce ODA-Fin-SFT-318k, constructed via multi-stage distillation and verification to produce high-quality Chain-of-Thought supervision, and ODA-Fin-RL-12k, curated for hard-but-verifiable tasks that balance reward precision and task diversity. Using standard SFT and RL pipelines, we show that high-quality CoT distillation establishes a robust foundation during SFT, while difficulty- and verifiability-aware sampling improves RL generalization. Evaluated on nine benchmarks spanning general financial tasks, sentiment analysis, and numerical reasoning, our ODA-Fin-RL-8B consistently surpasses open-source state-of-the-art (SOTA) financial LLMs of comparable size. We release our ODA-Fin-SFT-318k and ODA-Fin-RL-12k datasets, along with trained models, to advance data-centric financial AI research.


Authors (8)

Chuxue Cao
Honglin Lin
Zhanping Zhong
Xin Gao
Mengzhang Cai
Conghui He
Sirui Han
Lijun Wu

Citation Format

Cao, C., Lin, H., Zhong, Z., Gao, X., Cai, M., He, C., et al. (2026). Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training. https://arxiv.org/abs/2603.07223

Journal Information
Publication Year: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓