arXiv Open Access 2026

Visual-ERM: Reward Modeling for Visual Equivalence

Ziyu Liu, Shengyuan Ding, Xinyu Fang, Xuanlang Dai, Penghui Yang, +5 others

Abstract

Vision-to-code tasks require models to reconstruct structured visual inputs, such as charts, tables, and SVGs, into executable or structured representations with high visual fidelity. While recent Large Vision Language Models (LVLMs) achieve strong results via supervised fine-tuning, reinforcement learning remains challenging due to misaligned reward signals. Existing rewards either rely on textual rules or coarse visual embedding similarity, both of which fail to capture fine-grained visual discrepancies and are vulnerable to reward hacking. We propose Visual Equivalence Reward Model (Visual-ERM), a multimodal generative reward model that provides fine-grained, interpretable, and task-agnostic feedback to evaluate vision-to-code quality directly in the rendered visual space. Integrated into RL, Visual-ERM improves Qwen3-VL-8B-Instruct by +8.4 on chart-to-code and yields consistent gains on table and SVG parsing (+2.7, +4.1 on average), and further strengthens test-time scaling via reflection and revision. We also introduce VisualCritic-RewardBench (VC-RewardBench), a benchmark for judging fine-grained image-to-image discrepancies on structured visual data, where Visual-ERM at 8B decisively outperforms Qwen3-VL-235B-Instruct and approaches leading closed-source models. Our results suggest that fine-grained visual reward supervision is both necessary and sufficient for vision-to-code RL, regardless of task specificity.
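The abstract describes scoring vision-to-code outputs in rendered visual space rather than by text similarity: the generated code is rendered back to an image and a reward model judges its visual equivalence to the reference. A minimal illustrative sketch of that reward loop is below; all names (`render_code`, `visual_reward`, `Rollout`) are assumptions for illustration, and the byte-overlap "judge" merely stands in for the paper's multimodal generative reward model.

```python
# Hedged sketch of rendering-based reward for vision-to-code RL.
# NOT the authors' implementation: the renderer and judge are placeholders.
from dataclasses import dataclass


@dataclass
class Rollout:
    code: str               # model-generated code (e.g. chart/SVG source)
    reference_image: bytes  # ground-truth rendering


def render_code(code: str) -> bytes:
    """Placeholder renderer: a real system would execute or rasterize the
    code to a bitmap; here the raw bytes stand in for the rendered image."""
    return code.encode("utf-8")


def visual_reward(rendered: bytes, reference: bytes) -> float:
    """Stand-in for Visual-ERM: a generative judge would compare the two
    images and emit a fine-grained equivalence score in [0, 1].
    Byte-level overlap is used purely for illustration."""
    matches = sum(a == b for a, b in zip(rendered, reference))
    return matches / max(len(rendered), len(reference), 1)


def rl_step_rewards(rollouts: list[Rollout]) -> list[float]:
    """Reward each sampled program by rendering it and judging the image,
    rather than string-matching the code against a textual reference."""
    return [visual_reward(render_code(r.code), r.reference_image)
            for r in rollouts]
```

The key design point the abstract argues for is that the reward is computed on the rendered output, so superficially different code that produces the same visual result is rewarded equally, while text-level reward hacking is avoided.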

Topics & Keywords

Authors (10)

Ziyu Liu
Shengyuan Ding
Xinyu Fang
Xuanlang Dai
Penghui Yang
Jianze Liang
Jiaqi Wang
Kai Chen
Dahua Lin
Yuhang Zang

Citation Format

Liu, Z., Ding, S., Fang, X., Dai, X., Yang, P., Liang, J. et al. (2026). Visual-ERM: Reward Modeling for Visual Equivalence. https://arxiv.org/abs/2603.13224

Journal Information
Publication Year
2026
Language
en
Database Source
arXiv
Access
Open Access ✓