
Domain Adversarial Training: A Game Perspective

David Acuna Marc T Law Guojun Zhang Sanja Fidler

Abstract

The dominant line of work in domain adaptation has focused on learning invariant representations using domain-adversarial training. In this paper, we interpret this approach from a game theoretical perspective. Defining optimal solutions in domain-adversarial training as a local Nash equilibrium, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering the transfer performance. Our analysis leads us to replace gradient descent with high-order ODE solvers (i.e., Runge-Kutta), for which we derive asymptotic convergence guarantees. This family of optimizers is significantly more stable and allows more aggressive learning rates, leading to high performance gains when used as a drop-in replacement over standard optimizers. Our experiments show that, in conjunction with state-of-the-art domain-adversarial methods, we achieve up to 3.5% improvement with less than half of the training iterations. Our optimizers are easy to implement, free of additional parameters, and can be plugged into any domain-adversarial framework.
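To make the optimizer substitution concrete, below is a minimal sketch (not from the paper) contrasting a plain gradient-descent update, i.e., an explicit Euler step on the gradient-flow ODE dθ/dt = -∇L(θ), with a classic fourth-order Runge-Kutta step on the same ODE. The toy quadratic loss, the matrix `A`, the initial point, and the step size are arbitrary illustrative choices; the paper applies higher-order solvers to the adversarial game dynamics rather than to a single-objective toy problem.

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, so grad L(theta) = A @ theta.
# A, the initial point, and the step size are illustrative choices only.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])

def grad(theta):
    """Gradient of the toy quadratic loss."""
    return A @ theta

def gd_step(theta, lr):
    """Gradient descent = explicit Euler on d(theta)/dt = -grad L(theta)."""
    return theta - lr * grad(theta)

def rk4_step(theta, lr):
    """Classic fourth-order Runge-Kutta step on d(theta)/dt = -grad L(theta).

    Sketch of using a higher-order ODE solver as a drop-in replacement for
    the Euler (gradient-descent) update; not the exact scheme from the paper.
    """
    k1 = -grad(theta)
    k2 = -grad(theta + 0.5 * lr * k1)
    k3 = -grad(theta + 0.5 * lr * k2)
    k4 = -grad(theta + lr * k3)
    return theta + (lr / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# A step size at which plain gradient descent diverges on this toy problem
# (lr > 2 / lambda_max) while the RK4 update still converges.
theta_gd = theta_rk = np.array([1.0, -1.0])
for _ in range(20):
    theta_gd = gd_step(theta_gd, lr=0.7)
    theta_rk = rk4_step(theta_rk, lr=0.7)

print("GD (Euler):", theta_gd)
print("RK4:       ", theta_rk)
```

Each RK4 step costs four gradient evaluations instead of one, but tolerates a larger step size on this toy problem, which loosely mirrors the abstract's claim that higher-order solvers admit more aggressive learning rates.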

Topics & Keywords

Authors (4)

David Acuna
Marc T Law
Guojun Zhang
Sanja Fidler

Citation Format

Acuna, D., Law, M.T., Zhang, G., Fidler, S. (2022). Domain Adversarial Training: A Game Perspective. https://arxiv.org/abs/2202.05352

Journal Information

Publication Year: 2022
Language: en
Source Database: arXiv
Access: Open Access ✓