arXiv Open Access 2023

Implicit biases in multitask and continual learning from a backward error analysis perspective

Benoit Dherin

Abstract

Using backward error analysis, we compute implicit training biases in multitask and continual learning settings for neural networks trained with stochastic gradient descent. In particular, we derive modified losses that are implicitly minimized during training. They have three terms: the original loss, which accounts for convergence; an implicit flatness regularization term proportional to the learning rate; and a last term, the conflict term, which can theoretically be detrimental to both convergence and implicit regularization. In the multitask setting, the conflict term is a well-known quantity measuring the gradient alignment between the tasks, while in continual learning it is a new quantity in deep learning optimization, although a basic tool in differential geometry: the Lie bracket between the task gradients.
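The two conflict quantities in the abstract can be made concrete on a toy example. The following sketch (not code from the paper; the quadratic losses and matrices are illustrative assumptions) computes the gradient alignment between two tasks, and the Lie bracket of their gradient vector fields. For quadratic task losses L_i(θ) = ½ θᵀA_iθ, the gradients are A_iθ, the Hessians are the constant matrices A_i, and the Lie bracket [∇L₁, ∇L₂] = H₂∇L₁ − H₁∇L₂ reduces to the commutator (A₂A₁ − A₁A₂)θ, which vanishes exactly when the Hessians commute:

```python
import numpy as np

def task_gradients(theta, A1, A2):
    """Gradients of the quadratic task losses L_i = 0.5 * theta^T A_i theta."""
    return A1 @ theta, A2 @ theta

def gradient_alignment(g1, g2):
    """Dot product of the task gradients; negative values indicate conflict."""
    return float(g1 @ g2)

def lie_bracket(theta, A1, A2):
    """Lie bracket [grad L1, grad L2] = H2 g1 - H1 g2; for quadratics the
    Hessians are the constant A_i, so it is the commutator applied to theta."""
    return (A2 @ A1 - A1 @ A2) @ theta

# Illustrative tasks with non-commuting Hessians.
A1 = np.array([[2.0, 0.0], [0.0, 1.0]])
A2 = np.array([[1.0, 1.0], [1.0, 3.0]])
theta = np.array([1.0, -1.0])

g1, g2 = task_gradients(theta, A1, A2)
print(gradient_alignment(g1, g2))   # multitask conflict quantity
print(lie_bracket(theta, A1, A2))   # continual-learning conflict quantity
```

A nonzero bracket here reflects that taking a gradient step on task 1 and then task 2 lands somewhere different from the reverse order, which is why this quantity appears only in the sequential (continual) setting.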


Author (1)

Benoit Dherin

Citation Format

Dherin, B. (2023). Implicit biases in multitask and continual learning from a backward error analysis perspective. https://arxiv.org/abs/2311.00235

Journal Information
Publication Year: 2023
Language: en
Source Database: arXiv
Access: Open Access ✓