What Is VAE (Variational Autoencoder)?

Definition

A VAE (Variational Autoencoder) is a generative model that learns to encode input data into a structured latent space distribution and decode samples from that space back into realistic data, enabling both data compression and generation of new samples.

How VAE (Variational Autoencoder) Works

VAEs consist of an encoder that maps input data to a probability distribution in latent space and a decoder that reconstructs data from points sampled in that space. Unlike standard autoencoders, VAEs learn a smooth, continuous latent space where nearby points correspond to similar outputs. This allows for meaningful interpolation between data points and generation of new samples by sampling from the latent distribution. Training maximizes the evidence lower bound (ELBO), which balances reconstruction accuracy against a KL-divergence term that keeps the learned latent distribution close to a standard Gaussian prior.

VAEs are a key component of Stable Diffusion's architecture (the latent diffusion part) and are used in applications ranging from drug discovery to music generation. While they typically produce less sharp outputs than GANs, they offer more stable training and a principled probabilistic framework.
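The encode-sample-decode pipeline can be sketched in a few lines. This is a minimal illustration, not a trained model: the "encoder" and "decoder" here are random linear maps standing in for neural networks, and all dimensions are made-up toy values. What it does show faithfully is the reparameterization trick (sampling z = mu + sigma * eps so the sample stays differentiable with respect to the encoder's outputs) and the Gaussian KL term that regularizes the latent space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration): 8-D inputs, 2-D latent space.
INPUT_DIM, LATENT_DIM = 8, 2

# Randomly initialized "encoder" weights producing the mean and log-variance
# of the latent Gaussian q(z|x). A real VAE learns these with neural networks.
W_mu = rng.normal(size=(INPUT_DIM, LATENT_DIM))
W_logvar = rng.normal(size=(INPUT_DIM, LATENT_DIM))
# Randomly initialized "decoder" weights mapping latent codes back to data space.
W_dec = rng.normal(size=(LATENT_DIM, INPUT_DIM))

def encode(x):
    """Map input to the parameters (mean, log-variance) of a diagonal Gaussian."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps; differentiable w.r.t. mu and logvar."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent sample back to data space (linear stand-in for a decoder net)."""
    return z @ W_dec

def kl_divergence(mu, logvar):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian, per example."""
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1)

x = rng.normal(size=(4, INPUT_DIM))   # batch of 4 inputs
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
x_hat = decode(z)
print(z.shape, x_hat.shape)           # (4, 2) (4, 8)
```

In a trained VAE, the loss would combine a reconstruction term (comparing `x_hat` to `x`) with the mean of `kl_divergence(mu, logvar)` over the batch.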

Real-World Examples

1. Stable Diffusion using a VAE to compress images into a smaller latent space where the diffusion process operates efficiently
2. A drug discovery system using a VAE to explore the latent space of molecular structures and generate novel drug candidates
3. A music generation tool using a VAE to interpolate smoothly between two different musical styles
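The interpolation idea behind examples like the music tool above is simple to sketch: blend two latent codes and decode each intermediate point. The latent vectors below are hypothetical placeholders (a real system would obtain them by encoding two actual inputs), but the linear blend itself is the standard technique.

```python
import numpy as np

# Hypothetical latent codes for two items (e.g. two musical styles) in a 2-D latent space.
z_a = np.array([1.0, -0.5])
z_b = np.array([-1.0, 1.5])

def interpolate(z_start, z_end, steps=5):
    """Linearly blend between two latent codes.

    In a smooth VAE latent space, decoding each blended code yields a
    plausible intermediate sample between the two originals.
    """
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z_start + t * z_end for t in ts])

path = interpolate(z_a, z_b)
print(path.shape)  # (5, 2): five latent codes from z_a to z_b
```

Because a VAE's latent space is continuous, each row of `path` would decode to a sensible in-between output; with a plain autoencoder, intermediate codes often decode to garbage.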
