arXiv Open Access 2022

MUSTACHE: Multi-Step-Ahead Predictions for Cache Eviction

Gabriele Tolomei Lorenzo Takanen Fabio Pinelli

Abstract

In this work, we propose MUSTACHE, a new page cache replacement algorithm whose logic is learned from observed memory access requests rather than fixed a priori, as in existing policies. We formulate the page request prediction problem as a categorical time series forecasting task. Then, our method queries the learned page request forecaster to obtain the next $k$ predicted page memory references, so as to better approximate the optimal Bélády's replacement algorithm. We implement several forecasting techniques using advanced deep learning architectures and integrate the best-performing one into an existing open-source cache simulator. Experiments run on benchmark datasets show that MUSTACHE outperforms the best page replacement heuristic (i.e., exact LRU), improving the cache hit ratio by 1.9% and reducing the number of reads/writes required to handle cache misses by 18.4% and 10.3%, respectively.
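The core idea, evicting the cached page whose next predicted reference lies farthest in the future (Bélády's rule applied to the forecaster's $k$-step lookahead), can be sketched as below. This is a minimal illustration, not the paper's implementation: the function and variable names are hypothetical, and the learned deep forecaster is stood in for by an arbitrary list of $k$ predicted page IDs.

```python
def evict_victim(cache, predicted_next_k):
    """Pick the cached page whose next predicted use is farthest away,
    per Belady's rule restricted to a k-step prediction window.
    A page absent from the window is evicted immediately."""
    farthest, victim = -1, None
    for page in cache:
        try:
            dist = predicted_next_k.index(page)
        except ValueError:
            return page  # not predicted within k steps
        if dist > farthest:
            farthest, victim = dist, page
    return victim

def access(cache, capacity, page, predicted_next_k):
    """Serve one page request against a set-based cache.
    Returns True on a cache hit, False on a miss."""
    if page in cache:
        return True
    if len(cache) >= capacity:
        cache.remove(evict_victim(cache, predicted_next_k))
    cache.add(page)
    return False
```

In MUSTACHE, `predicted_next_k` would come from the learned forecaster at each miss; with a perfect forecaster and unbounded $k$, this reduces to Bélády's optimal offline policy.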


Authors (3)

Gabriele Tolomei

Lorenzo Takanen

Fabio Pinelli

Citation Format

Tolomei, G., Takanen, L., & Pinelli, F. (2022). MUSTACHE: Multi-Step-Ahead Predictions for Cache Eviction. arXiv:2211.02177. https://arxiv.org/abs/2211.02177

Journal Information
Publication Year
2022
Language
en
Source Database
arXiv
Access
Open Access ✓