arXiv Open Access 2025

Tuning LLM-based Code Optimization via Meta-Prompting: An Industrial Perspective

Jingzhi Gong, Rafail Giavrimis, Paul Brookes, Vardan Voskanyan, Fan Wu, +6 others

Abstract

There is a growing interest in leveraging multiple large language models (LLMs) for automated code optimization. However, industrial platforms deploying multiple LLMs face a critical challenge: prompts optimized for one LLM often fail with others, requiring expensive model-specific prompt engineering. This cross-model prompt engineering bottleneck severely limits the practical deployment of multi-LLM systems in production environments. We introduce Meta-Prompted Code Optimization (MPCO), a framework that automatically generates high-quality, task-specific prompts across diverse LLMs while maintaining industrial efficiency requirements. MPCO leverages meta-prompting to dynamically synthesize context-aware optimization prompts by integrating project metadata, task requirements, and LLM-specific contexts. It is an essential part of the ARTEMIS code optimization platform for automated validation and scaling. Our comprehensive evaluation on five real-world codebases with 366 hours of runtime benchmarking demonstrates MPCO's effectiveness: it achieves overall performance improvements of up to 19.06% and the best statistical rank across all systems compared to baseline methods. Analysis shows that 96% of the top-performing optimizations stem from meaningful edits. Through systematic ablation studies and meta-prompter sensitivity analysis, we find that comprehensive context integration is essential for effective meta-prompting and that all major LLMs can serve effectively as meta-prompters, providing actionable insights for industrial practitioners.
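The abstract describes MPCO's core mechanism: a meta-prompter synthesizes an optimization prompt by combining project metadata, task requirements, and LLM-specific context. The paper's implementation is not published here, so the following is only a minimal illustrative sketch of that idea; every name in it (`OptimizationContext`, `build_optimization_prompt`, the field names, and the sample values) is a hypothetical stand-in, not the paper's actual API.

```python
# Hypothetical sketch of meta-prompt synthesis as described in the abstract:
# combine three context sources into one task-specific optimization prompt.
from dataclasses import dataclass


@dataclass
class OptimizationContext:
    project_metadata: str   # e.g. language, build system, known hot paths
    task_requirements: str  # e.g. the concrete optimization objective
    llm_context: str        # model-specific guidance (style, output format)


def build_optimization_prompt(ctx: OptimizationContext) -> str:
    """Assemble a context-aware optimization prompt from the three sources."""
    return (
        "You are optimizing code in the following project:\n"
        f"{ctx.project_metadata}\n\n"
        f"Task: {ctx.task_requirements}\n\n"
        f"Model-specific instructions: {ctx.llm_context}\n"
    )


ctx = OptimizationContext(
    project_metadata="Python 3.11 service; CPU-bound JSON parsing hot path",
    task_requirements="Reduce end-to-end latency without changing behavior",
    llm_context="Prefer standard-library solutions; return a unified diff",
)
prompt = build_optimization_prompt(ctx)
```

In MPCO this synthesis step would itself be performed by an LLM acting as a meta-prompter, rather than by a fixed template as sketched here.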

Topics & Keywords

Authors (11)

Jingzhi Gong
Rafail Giavrimis
Paul Brookes
Vardan Voskanyan
Fan Wu
Mari Ashiga
Matthew Truscott
Mike Basios
Leslie Kanthan
Jie Xu
Zheng Wang

Citation Format

Gong, J., Giavrimis, R., Brookes, P., Voskanyan, V., Wu, F., Ashiga, M. et al. (2025). Tuning LLM-based Code Optimization via Meta-Prompting: An Industrial Perspective. https://arxiv.org/abs/2508.01443

Journal Information
Year Published
2025
Language
en
Source Database
arXiv
Access
Open Access ✓