Semantic Scholar · Open Access · 2016 · 790 citations

Clipper: A Low-Latency Online Prediction Serving System

D. Crankshaw, Xin Wang, Giulio Zhou, M. Franklin, Joseph E. Gonzalez, +1 more

Abstract

Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and robust predictions under heavy query load. However, most machine learning frameworks and systems only address model training and not deployment. In this paper, we introduce Clipper, a general-purpose low-latency prediction serving system. Interposing between end-user applications and a wide range of machine learning frameworks, Clipper introduces a modular architecture to simplify model deployment across frameworks and applications. Furthermore, by introducing caching, batching, and adaptive model selection techniques, Clipper reduces prediction latency and improves prediction throughput, accuracy, and robustness without modifying the underlying machine learning frameworks. We evaluate Clipper on four common machine learning benchmark datasets and demonstrate its ability to meet the latency, accuracy, and throughput demands of online serving applications. Finally, we compare Clipper to the TensorFlow Serving system and demonstrate that we are able to achieve comparable throughput and latency while enabling model composition and online learning to improve accuracy and render more robust predictions.
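The abstract names caching and batching as two of Clipper's core techniques for reducing latency and raising throughput. As a rough illustration only (a toy model and hypothetical function names, not Clipper's actual architecture or API), the two ideas can be sketched as:

```python
import functools

# Toy "model": a stand-in for a call into an ML framework
# (e.g. TensorFlow). Assume each call carries fixed overhead,
# which is what batching amortizes.
def model_predict(batch):
    return [2.0 * x + 1.0 for x in batch]

# Caching: memoize predictions for repeated queries so hot
# inputs skip the model call entirely.
@functools.lru_cache(maxsize=1024)
def cached_predict(x: float) -> float:
    return model_predict((x,))[0]

# Batching: group queued queries into one framework call,
# trading a little per-query latency for higher throughput.
def batched_predict(queries, max_batch_size=4):
    results = []
    for i in range(0, len(queries), max_batch_size):
        results.extend(model_predict(queries[i:i + max_batch_size]))
    return results
```

Clipper additionally applies *adaptive* batch sizing (tuning `max_batch_size` online against a latency target) and adaptive model selection across frameworks, neither of which this toy sketch attempts.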


Authors (6)

D. Crankshaw
Xin Wang
Giulio Zhou
M. Franklin
Joseph E. Gonzalez
Ion Stoica

Citation Format

Crankshaw, D., Wang, X., Zhou, G., Franklin, M., Gonzalez, J.E., Stoica, I. (2016). Clipper: A Low-Latency Online Prediction Serving System. https://www.semanticscholar.org/paper/0a5ff7336879c99513dca6fce6ef44984ebf3f55

Quick Access

PDF not directly available; check the original source.
Journal Information

Year Published: 2016
Language: en
Total Citations: 790
Source Database: Semantic Scholar
Access: Open Access ✓