arXiv Open Access 2023

On the Benefits of Public Representations for Private Transfer Learning under Distribution Shift

Pratiksha Thaker, Amrith Setlur, Zhiwei Steven Wu, Virginia Smith

Abstract

Public pretraining is a promising approach to improve differentially private model training. However, recent work has noted that many positive research results studying this paradigm only consider in-distribution tasks, and may not apply to settings where there is distribution shift between the pretraining and finetuning data -- a scenario that is likely when finetuning private tasks due to the sensitive nature of the data. In this work, we show empirically across three tasks that even in settings with large distribution shift, where both zero-shot performance from public data and training from scratch with private data give unusably weak results, public features can in fact improve private training accuracy by up to 67% over private training from scratch. We provide a theoretical explanation for this phenomenon, showing that if the public and private data share a low-dimensional representation, public representations can improve the sample complexity of private training even if it is impossible to learn the private task from the public data alone. Altogether, our results provide evidence that public data can indeed make private training practical in realistic settings of extreme distribution shift.
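The paradigm the abstract describes -- freezing publicly pretrained representations and privately training only a head on top -- can be illustrated with a minimal sketch. Below, a fixed random projection stands in for a public feature extractor, the private labels depend on the data only through that shared low-dimensional representation (the paper's structural assumption), and a linear head is trained with DP-SGD-style updates (per-example gradient clipping plus Gaussian noise). All names and hyperparameters here (`phi`, `C`, `sigma`, `lr`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d_raw, d_feat, n = 50, 5, 400

# "Public" representation: a fixed projection to a low-dimensional space,
# standing in for a frozen, publicly pretrained feature extractor.
P = rng.normal(size=(d_raw, d_feat))
def phi(X):
    return X @ P

# Private task: labels depend on the data only through phi(x).
w_true = rng.normal(size=d_feat)
X = rng.normal(size=(n, d_raw))
y = np.sign(phi(X) @ w_true)  # private labels in {-1, +1}

def dp_sgd_linear(Z, y, C=1.0, sigma=1.0, lr=0.1, epochs=30):
    """Full-batch DP-SGD on a linear head with logistic loss."""
    n, d = Z.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (Z @ w)
        # Per-example gradients of the logistic loss, shape (n, d).
        g = (-y / (1.0 + np.exp(margins)))[:, None] * Z
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g / np.maximum(1.0, norms / C)          # clip each gradient to norm C
        noise = sigma * C * rng.normal(size=d)      # Gaussian mechanism
        w -= lr * (g.sum(axis=0) + noise) / n
    return w

# Private training happens only in the low-dimensional public feature space,
# which is what improves the sample complexity relative to training from scratch.
w_priv = dp_sgd_linear(phi(X), y)
acc = float(np.mean(np.sign(phi(X) @ w_priv) == y))
print(f"private linear probe on public features: train acc = {acc:.2f}")
```

Note that the privacy cost of the clipping-plus-noise step depends only on the head's dimension `d_feat`, not on the raw input dimension, which mirrors the sample-complexity argument in the abstract.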


Authors (4)

Pratiksha Thaker
Amrith Setlur
Zhiwei Steven Wu
Virginia Smith

Citation Format

Thaker, P., Setlur, A., Wu, Z.S., Smith, V. (2023). On the Benefits of Public Representations for Private Transfer Learning under Distribution Shift. https://arxiv.org/abs/2312.15551

Journal Information
Year Published: 2023
Language: en
Source Database: arXiv
Access: Open Access ✓