arXiv Open Access 2018

From Principal Subspaces to Principal Components with Linear Autoencoders

Elad Plaut

Abstract

The autoencoder is an effective unsupervised learning model which is widely used in deep learning. It is well known that an autoencoder with a single fully-connected hidden layer, a linear activation function, and a squared error cost function learns weights that span the same subspace as the principal component loading vectors, but the weights themselves are not identical to the loading vectors. In this paper, we show how to recover the loading vectors from the autoencoder weights.
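The paper's recovery procedure is developed in the full text; as a rough, self-contained NumPy sketch of the idea (not necessarily the paper's exact algorithm), one can train a small linear autoencoder by gradient descent with a little weight decay and then take the SVD of the decoder weight matrix: its left singular vectors line up with the PCA loading vectors of the data, even though the raw decoder columns only span the principal subspace. All data, shapes, and hyperparameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic centered data: 500 samples in 5 dimensions with a known,
# well-separated spectrum, rotated into a random orthogonal basis.
Q = np.linalg.qr(rng.normal(size=(5, 5)))[0]
X = (rng.normal(size=(500, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.2])) @ Q.T
X -= X.mean(axis=0)
n, d = X.shape
p = 2        # bottleneck width
lam = 1e-2   # small weight decay (illustrative)
lr = 0.05

# Encoder E (p x d) and decoder D (d x p), small random init.
# Reconstruction: x_hat = x @ E.T @ D.T  (linear activations, squared error).
E = 0.01 * rng.normal(size=(p, d))
D = 0.01 * rng.normal(size=(d, p))

for _ in range(5000):
    R = X @ E.T @ D.T - X                 # reconstruction residual
    gE = D.T @ R.T @ X / n + lam * E      # gradient w.r.t. encoder
    gD = R.T @ X @ E.T / n + lam * D      # gradient w.r.t. decoder
    E -= lr * gE
    D -= lr * gD

# The columns of D span the principal subspace but are mixed by an
# arbitrary invertible map; the loading vectors are recovered as the
# left singular vectors of the decoder matrix.
U = np.linalg.svd(D, full_matrices=False)[0]

# Reference loading vectors: top right singular vectors of the data.
V = np.linalg.svd(X, full_matrices=False)[2][:p].T

for k in range(p):
    # |cosine| between recovered and true loading vector; the match is
    # sign-invariant, so values near 1 indicate recovery.
    print(abs(U[:, k] @ V[:, k]))
```

The weight decay matters here: for an unregularized linear autoencoder the decoder's singular values can all collapse to one, making the SVD directions ambiguous, whereas a small penalty separates them in order of explained variance.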


Author

Elad Plaut

Citation

Plaut, E. (2018). From Principal Subspaces to Principal Components with Linear Autoencoders. https://arxiv.org/abs/1804.10253

Journal Information
Year Published
2018
Language
en
Database
arXiv
Access
Open Access ✓