Autoencoders for Representation Learning
What is the primary objective of an autoencoder?
Which component of an autoencoder compresses the input into a lower-dimensional representation?
What is the bottleneck layer in an autoencoder?
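The first three questions concern the encoder, the decoder, and the bottleneck. Below is a minimal PyTorch sketch of a fully connected autoencoder for flattened 28x28 images; the 784-128-32 layer sizes and the 32-dimensional bottleneck are illustrative assumptions, not values fixed by the questions.

```python
# Minimal fully connected autoencoder sketch (PyTorch).
# The 784 -> 128 -> 32 -> 128 -> 784 layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        # Encoder: compresses the input into a lower-dimensional bottleneck code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, bottleneck_dim),
        )
        # Decoder: reconstructs the input from the bottleneck code.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),  # pixel values assumed to be scaled to [0, 1]
        )

    def forward(self, x):
        code = self.encoder(x)              # lower-dimensional representation
        reconstruction = self.decoder(code)
        return reconstruction, code

model = Autoencoder()
x = torch.rand(16, 784)                     # dummy batch of flattened 28x28 images
reconstruction, code = model(x)
print(code.shape)                           # torch.Size([16, 32]) <- bottleneck code
```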
In a denoising autoencoder, what is used as input during training?
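For the denoising question: during training the model receives a corrupted copy of each input but is still asked to reconstruct the clean original. A short sketch of that corruption step, assuming additive Gaussian noise with an illustrative standard deviation:

```python
# Denoising setup sketch: corrupt the input, reconstruct the clean original.
# The noise type (Gaussian) and noise_std value are illustrative assumptions.
import torch

def add_noise(clean_batch, noise_std=0.2):
    noisy = clean_batch + noise_std * torch.randn_like(clean_batch)
    return noisy.clamp(0.0, 1.0)            # keep pixel values in [0, 1]

clean = torch.rand(16, 784)
noisy = add_noise(clean)
# Training pair: the noisy batch is fed to the model as input,
# but the reconstruction loss is computed against the clean batch.
```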
Which learning paradigm do autoencoders belong to?
What loss function is typically used to train autoencoders?
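The learning-paradigm and loss-function questions go together: the training target is the input itself, so no labels are required, and reconstruction error is commonly measured with mean squared error. A self-contained training-loop sketch, with illustrative layer sizes, learning rate, and epoch count:

```python
# Reconstruction training sketch: target == input, MSE loss, no labels needed.
import torch
import torch.nn as nn

# A compact stand-in autoencoder; sizes are illustrative assumptions.
model = nn.Sequential(
    nn.Linear(784, 32), nn.ReLU(),           # encoder down to a 32-D code
    nn.Linear(32, 784), nn.Sigmoid(),        # decoder back to pixel space
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()                     # mean squared reconstruction error

data = torch.rand(256, 784)                  # unlabeled stand-in for flattened images
for epoch in range(5):
    optimizer.zero_grad()
    reconstruction = model(data)
    loss = criterion(reconstruction, data)   # target is the input itself: no labels
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```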
What is the size of each image in the Fashion-MNIST dataset?
What advantage does a 2-D latent space provide in autoencoders?
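The Fashion-MNIST and 2-D latent-space questions can be checked directly in code. The sketch below assumes torchvision and matplotlib are installed; it loads the dataset, confirms the 28x28 image size, and scatter-plots 2-D codes colored by class, which is the usual payoff of a 2-D latent space: it can be plotted and inspected. The 2-D encoder here is an untrained stand-in used only to illustrate the plotting step.

```python
# Fashion-MNIST inspection and 2-D latent visualization sketch.
# Downloads the dataset on first run; the tiny linear encoder is an untrained stand-in.
import torch
import matplotlib.pyplot as plt
from torchvision import datasets, transforms

dataset = datasets.FashionMNIST(
    root="data", train=True, download=True, transform=transforms.ToTensor()
)
image, label = dataset[0]
print(image.shape)                           # torch.Size([1, 28, 28]) -> 28x28 grayscale

# Encode a small batch to 2-D codes for plotting.
encoder_2d = torch.nn.Linear(784, 2)
batch = torch.stack([dataset[i][0].view(-1) for i in range(500)])
labels = torch.tensor([dataset[i][1] for i in range(500)])
with torch.no_grad():
    codes = encoder_2d(batch)

plt.scatter(codes[:, 0], codes[:, 1], c=labels, cmap="tab10", s=8)
plt.xlabel("latent dim 1")
plt.ylabel("latent dim 2")
plt.show()
```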
What happens if the bottleneck dimension is too small in an autoencoder?
What is the main advantage of using autoencoders over supervised learning methods for feature extraction?
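For the last two questions, one way to make the trade-off concrete: the encoder is trained without labels, and its bottleneck codes can then be reused as features for a small supervised model. The sketch below uses a stand-in encoder with illustrative sizes (in practice it would come from reconstruction training on unlabeled data); if the bottleneck were too small, the codes would discard too much information for either reconstruction or downstream use.

```python
# Feature-extraction sketch: reuse an unsupervised encoder for a downstream classifier.
# The 32-D code size, 10-class head, and frozen-encoder choice are illustrative assumptions.
import torch
import torch.nn as nn

# Stand-in pretrained encoder (normally obtained from reconstruction training).
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
for p in encoder.parameters():
    p.requires_grad = False                  # freeze the learned representation

classifier = nn.Linear(32, 10)               # small supervised head on the codes
images = torch.rand(64, 784)                 # stand-in batch of flattened images
with torch.no_grad():
    features = encoder(images)               # features taken from the bottleneck
logits = classifier(features)
print(logits.shape)                          # torch.Size([64, 10])
```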