Autoencoder

Autoencoders are closely related to Principal Component Analysis (PCA). In fact, if the activation function used within each layer of the autoencoder is linear, the latent variables present at the bottleneck (the smallest layer in the network, also called the code) directly correspond to the principal components from PCA. In general, however, the activation functions used in autoencoders are non-linear; typical choices are ReLU (Rectified Linear Unit) and sigmoid.
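
A minimal sketch of such an autoencoder, assuming PyTorch; the input dimension of 784 (a flattened 28×28 image), the layer sizes, and the code dimension are illustrative choices, not part of any standard definition. The encoder compresses the input down to the bottleneck, and the decoder reconstructs the input from it, with ReLU and sigmoid as the non-linear activations mentioned above:

 import torch
 import torch.nn as nn
 
 class Autoencoder(nn.Module):
     def __init__(self, input_dim=784, code_dim=32):
         super().__init__()
         # Encoder: compress the input down to the bottleneck ("code")
         self.encoder = nn.Sequential(
             nn.Linear(input_dim, 128),
             nn.ReLU(),                      # non-linear activation
             nn.Linear(128, code_dim),
         )
         # Decoder: reconstruct the input from the code
         self.decoder = nn.Sequential(
             nn.Linear(code_dim, 128),
             nn.ReLU(),
             nn.Linear(128, input_dim),
             nn.Sigmoid(),                   # outputs in [0, 1], e.g. pixel intensities
         )
 
     def forward(self, x):
         code = self.encoder(x)              # latent variables at the bottleneck
         return self.decoder(code)
 
 # Training sketch: minimise reconstruction error (MSE) against the input itself
 model = Autoencoder()
 optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
 criterion = nn.MSELoss()
 
 x = torch.rand(64, 784)                     # dummy batch standing in for real data
 for _ in range(10):
     optimizer.zero_grad()
     loss = criterion(model(x), x)           # target is the input itself
     loss.backward()
     optimizer.step()

If the ReLU and Sigmoid layers were removed, the encoder and decoder would reduce to purely linear maps, which is the setting in which the learned code corresponds to the PCA solution described above.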