arXiv Open Access 2026

ArchBench: Benchmarking Generative-AI for Software Architecture Tasks

Bassam Adnan, Aviral Gupta, Sreemaee Akshathala, Karthik Vaidhyanathan

Abstract

Benchmarks for large language models (LLMs) have progressed from snippet-level function generation to repository-level issue resolution, yet they overwhelmingly target implementation correctness. Software architecture tasks remain under-specified and difficult to compare across models, despite their central role in maintaining and evolving complex systems. We present ArchBench, the first unified platform for benchmarking LLM capabilities on software architecture tasks. ArchBench provides a command-line tool with a standardized pipeline for dataset download, inference with trajectory logging, and automated evaluation, alongside a public web interface with an interactive leaderboard. The platform is built around a plugin architecture where each task is a self-contained module, making it straightforward for the community to contribute new architectural tasks and evaluation results. We use the term LLMs broadly to encompass generative AI (GenAI) solutions for software engineering, including both standalone models and LLM-based coding agents equipped with tools. Both the CLI tool and the web platform are openly available to support reproducible research and community-driven growth of architectural benchmarking.
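The abstract describes a plugin architecture in which each architectural task is a self-contained module plugged into a standardized download/inference/evaluation pipeline. The sketch below illustrates one common way such a registry can be structured; all names here (`TaskPlugin`, `register_task`, `run_pipeline`) are illustrative assumptions, not ArchBench's actual API.

```python
# Hypothetical sketch of a plugin-style task registry, similar in spirit to
# the self-contained task modules the abstract describes. Every identifier
# here is an assumption for illustration, not ArchBench's real interface.
from dataclasses import dataclass
from typing import Callable, Dict, List

# Global registry mapping task names to their plugin definitions, so the
# CLI pipeline can discover community-contributed tasks by name.
TASK_REGISTRY: Dict[str, "TaskPlugin"] = {}

@dataclass
class TaskPlugin:
    """A self-contained architectural task: dataset loading plus scoring."""
    name: str
    load_dataset: Callable[[], List[dict]]
    evaluate: Callable[[str, str], float]  # (prediction, gold) -> score

def register_task(plugin: TaskPlugin) -> None:
    """Add a task module to the registry under its unique name."""
    TASK_REGISTRY[plugin.name] = plugin

# Example plugin: a toy "architecture smell detection" task.
register_task(TaskPlugin(
    name="smell-detection",
    load_dataset=lambda: [{"input": "god-class example", "gold": "god class"}],
    evaluate=lambda prediction, gold: 1.0 if prediction == gold else 0.0,
))

def run_pipeline(task_name: str, model: Callable[[str], str]) -> float:
    """Dataset download -> inference -> evaluation, averaged over examples."""
    task = TASK_REGISTRY[task_name]
    examples = task.load_dataset()
    scores = [task.evaluate(model(ex["input"]), ex["gold"]) for ex in examples]
    return sum(scores) / len(scores)
```

With this shape, contributing a new task means registering one module; the pipeline code never changes, which matches the community-growth goal stated in the abstract.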


Authors (4)

Bassam Adnan
Aviral Gupta
Sreemaee Akshathala
Karthik Vaidhyanathan

Citation Format

Adnan, B., Gupta, A., Akshathala, S., & Vaidhyanathan, K. (2026). ArchBench: Benchmarking Generative-AI for Software Architecture Tasks. arXiv. https://arxiv.org/abs/2603.17833

Journal Information
Publication Year
2026
Language
en
Source Database
arXiv
Access
Open Access ✓