Semantic Scholar · Open Access · 2021 · 703 citations

VideoGPT: Video Generation using VQ-VAE and Transformers

Wilson Yan Yunzhi Zhang P. Abbeel A. Srinivas

Abstract

We present VideoGPT: a conceptually simple architecture for scaling likelihood-based generative modeling to natural videos. VideoGPT uses a VQ-VAE that learns downsampled discrete latent representations of a raw video by employing 3D convolutions and axial self-attention. A simple GPT-like architecture is then used to autoregressively model the discrete latents using spatio-temporal position encodings. Despite the simplicity in formulation and ease of training, our architecture is able to generate samples competitive with state-of-the-art GAN models for video generation on the BAIR Robot dataset, and generate high-fidelity natural videos from UCF-101 and the Tumblr GIF dataset (TGIF). We hope our proposed architecture serves as a reproducible reference for a minimalistic implementation of transformer-based video generation models. Samples and code are available at https://wilson1yan.github.io/videogpt/index.html
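The abstract describes a two-stage pipeline: a VQ-VAE first compresses video into discrete latent codes, and a GPT-like prior then models those codes autoregressively. The core of the first stage is vector quantization, which snaps each continuous latent vector to its nearest codebook entry. A minimal NumPy sketch of that quantization step, with a toy codebook and made-up shapes (the function name, sizes, and data here are illustrative, not the paper's implementation):

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry (the VQ step
    of a VQ-VAE). `latents`: (N, D) array; `codebook`: (K, D) array.
    Returns (indices, quantized): `indices` are the discrete tokens a
    GPT-like prior would model autoregressively; `quantized` is the
    codebook vector substituted for each latent."""
    # Squared Euclidean distance from every latent to every codebook entry.
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # (N,) discrete token ids
    quantized = codebook[indices]    # (N, D) nearest codebook vectors
    return indices, quantized

# Toy example: 4 downsampled latents, a codebook of 8 entries, dim 16.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))
# Latents placed near codebook rows 3, 1, 3, 6, plus small noise.
latents = codebook[[3, 1, 3, 6]] + 0.01 * rng.normal(size=(4, 16))
idx, q = vector_quantize(latents, codebook)
print(idx.tolist())  # → [3, 1, 3, 6]
```

In the paper's setting the latents come from a 3D-convolutional encoder over the video volume, so the index grid itself is spatio-temporal; flattening it in raster order yields the token sequence the transformer prior consumes.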


Authors (4)

Wilson Yan
Yunzhi Zhang
P. Abbeel
A. Srinivas

Citation Format

Yan, W., Zhang, Y., Abbeel, P., & Srinivas, A. (2021). VideoGPT: Video Generation using VQ-VAE and Transformers. https://www.semanticscholar.org/paper/2d9ae4c167510ed78803735fc57ea67c3cc55a35

Quick Access

PDF not directly available; check the original source.
Journal Information
Year Published: 2021
Language: en
Total Citations: 703
Source Database: Semantic Scholar
Access: Open Access ✓