Autoencoder
Autoencoders are closely related to Principal Component Analysis (PCA). In fact, if the activation functions used within the autoencoder are linear in each layer and the network is trained with a squared-error reconstruction loss, the latent variables at the bottleneck (the smallest layer in the network, also called the code) span the same subspace as the principal components found by PCA. In practice, however, the activation functions used in autoencoders are non-linear; typical choices are ReLU (Rectified Linear Unit) and sigmoid.
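For illustration, a minimal autoencoder might look like the following sketch. This assumes PyTorch; the layer sizes, the 32-dimensional code, and the particular placement of ReLU and sigmoid activations are illustrative choices, not prescribed by the text above.

```python
# A minimal autoencoder sketch (assumes PyTorch; dimensions are hypothetical,
# e.g. 784-dimensional inputs such as flattened 28x28 images).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: compresses the input down to the bottleneck (the "code").
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),                      # non-linear activation
            nn.Linear(128, code_dim),
        )
        # Decoder: reconstructs the input from the code.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),                   # keeps outputs in [0, 1] for image-like data
        )

    def forward(self, x):
        code = self.encoder(x)              # latent variables at the bottleneck
        return self.decoder(code)

# Training minimises the reconstruction error, e.g. mean squared error.
model = Autoencoder()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)                     # a hypothetical batch of inputs
optimizer.zero_grad()
reconstruction = model(x)
loss = criterion(reconstruction, x)
loss.backward()
optimizer.step()
```

If the ReLU and sigmoid activations in this sketch were removed, making every layer linear, and the model were trained with the squared-error loss shown, the learned code would span the same subspace as the leading principal components, which is the connection to PCA described above.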