Optimizing Hyperparameters of Neural-Based Image Compressors
Abstract
The performance of neural image coders depends heavily on their architecture and, hence, on the selection of hyperparameters. For a given architecture, this performance is often ascertained only by trial, that is, after training and inference, so many trials may be needed to select the hyperparameters. We propose a multi-objective hyperparameter optimization (MOHPO) method for neural image compression based on rate-distortion-complexity (RDC) analysis, which drastically reduces the number of networks that must be trained and tested, thereby saving resources. We validate the method on well-established benchmark problems and demonstrate its use with popular autoencoders, measuring their complexity in terms of the number of parameters and floating-point operations. Our method, which we refer to as the greedy lower convex hull (GLCH) method, aims to track the lower convex hull of a cloud of hyperparameter possibilities. We compare it with well-established state-of-the-art MOHPO methods in terms of the log-hypervolume difference as a function of the number of trained networks. The results indicate that the proposed method is highly competitive, particularly when few networks are trained, which is the critical scenario in practice. Furthermore, it is deterministic, that is, it yields consistent results across runs.
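The abstract's central operation, retaining only the trial points that lie on the lower convex hull of the rate-distortion cloud, can be sketched in a few lines. The snippet below is an illustrative 2D monotone-chain lower hull in Python, not the authors' GLCH implementation, and the operating points are hypothetical.

```python
def _cross(o, a, b):
    # z-component of (a - o) x (b - o); <= 0 means the middle point a lies
    # on or above the segment o->b, so a can be dropped from the lower hull.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_convex_hull(points):
    """Lower convex hull of 2D (rate, distortion) points via Andrew's monotone chain."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

if __name__ == "__main__":
    # Hypothetical (rate, distortion) results from candidate networks.
    cloud = [(0.2, 0.9), (0.4, 0.6), (0.5, 0.7), (0.8, 0.3), (1.0, 0.28)]
    print(lower_convex_hull(cloud))  # (0.5, 0.7) lies above the hull and is dropped
```

Points above the hull, such as (0.5, 0.7) here, are discarded because some convex combination of hull points achieves both lower rate and lower distortion; per the abstract, GLCH tracks such a hull greedily so that not every candidate network has to be trained.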
Authors (2)
Lucas S. Lopes
Ricardo L. de Queiroz
- Year Published: 2026
- Source Database: DOAJ
- DOI: 10.1109/ACCESS.2026.3672113
- Access: Open Access ✓