Autoencoders for Representation Learning
After training, what does the encoder of an autoencoder produce?
In the denoising autoencoder experiment, what should the target output be?
What type of noise is added to images when training the denoising autoencoder?
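The three questions above concern the denoising set-up. By definition, a denoising autoencoder is trained to map a corrupted input back to the original clean image, so the clean image is the reconstruction target, and after training the encoder on its own produces a compact latent code for each input. The sketch below shows one training step under common assumptions (additive Gaussian noise, MSE reconstruction loss, a PyTorch model supplied by the caller); the exact noise type and loss used in the experiment may differ.

```python
# Minimal sketch of one denoising-autoencoder training step.
# The noise level, loss, and optimiser are illustrative assumptions.
import torch
import torch.nn.functional as F


def denoising_train_step(model, optimiser, clean_images, noise_std=0.1):
    """One gradient step: corrupt the input, reconstruct the clean target."""
    # Corrupt the input with additive Gaussian noise (one common choice).
    noisy = clean_images + noise_std * torch.randn_like(clean_images)
    noisy = noisy.clamp(0.0, 1.0)          # keep pixel values in [0, 1]

    reconstruction = model(noisy)
    # The target is the original clean image, not the noisy input.
    loss = F.mse_loss(reconstruction, clean_images)

    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```

After training, calling `model.encoder(x)` (assuming the model exposes an `encoder` attribute) returns the latent code: a low-dimensional representation of `x` that can be reused for downstream tasks.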
If the reconstruction loss decreases during training, what does this indicate?
What activation function is used in the final decoder layer?
When visualising the latent space, what does clustering of similar items indicate?
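Clusters of similar items in the latent space suggest the encoder has learned a representation in which semantically similar inputs sit close together. One way to inspect this, sketched below, is to project the latent codes to two dimensions with PCA and colour the points by class label; t-SNE or UMAP are common alternatives. The function signature is an assumption, not part of the original experiment.

```python
# Sketch: project latent codes to 2-D and look for clusters of similar items.
import torch
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA


def plot_latent_space(encoder, images, labels):
    """Scatter-plot a 2-D PCA projection of the encoder's latent codes."""
    encoder.eval()
    with torch.no_grad():
        codes = encoder(images)                     # (N, latent_dim) or (N, C, H, W)
        codes = codes.flatten(start_dim=1).cpu().numpy()

    coords = PCA(n_components=2).fit_transform(codes)
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap="tab10", s=5)
    plt.colorbar(label="class")
    plt.title("Latent space (PCA projection)")
    plt.show()
```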
What does the bottleneck layer force the autoencoder to learn?
Why is batch normalisation used in the autoencoder architecture?
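The questions about the final activation, the bottleneck, and batch normalisation all point at architectural choices: a sigmoid on the last decoder layer keeps reconstructed pixel values in [0, 1] when inputs are scaled that way, the narrow bottleneck forces the network to keep only a compressed summary of the input rather than copying it through, and batch normalisation stabilises and speeds up training of the stacked layers. The fully connected sketch below illustrates these ideas; it is not the exact architecture used in the experiment, and the layer sizes are assumptions.

```python
# Illustrative fully connected autoencoder (layer sizes are assumptions).
import torch.nn as nn


class SimpleAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.BatchNorm1d(256),        # stabilises training of deeper stacks
            nn.ReLU(),
            nn.Linear(256, latent_dim)  # bottleneck: forces a compressed code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.BatchNorm1d(256),
            nn.ReLU(),
            nn.Linear(256, input_dim),
            nn.Sigmoid()                # outputs in [0, 1], matching pixel range
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```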
What does PSNR (Peak Signal-to-Noise Ratio) measure?
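PSNR quantifies reconstruction quality: it compares the maximum possible pixel value with the mean squared error between the reconstruction and the original on a logarithmic (decibel) scale, so higher values mean a closer reconstruction. A short NumPy sketch, assuming images scaled to [0, 1]:

```python
# PSNR between a reconstruction and the original image (values in [0, 1]).
import numpy as np


def psnr(original, reconstruction, max_value=1.0):
    """Peak Signal-to-Noise Ratio in decibels; higher means less distortion."""
    mse = np.mean((original - reconstruction) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)
```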
If the autoencoder reconstructs training images perfectly but performs poorly on test images, what problem is occurring?
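A decreasing reconstruction loss only shows that the model is fitting its training data better; whether that fit generalises is revealed by comparing training and test reconstruction error. Near-perfect reconstructions on training images combined with poor ones on test images is the classic signature of overfitting. The sketch below computes the two averages; the data loaders (yielding image/label batches) and the trained model are assumed to exist.

```python
# Compare average reconstruction loss on training vs. test data.
# A large gap (low train loss, high test loss) indicates overfitting.
import torch
import torch.nn.functional as F


def mean_reconstruction_loss(model, data_loader, device="cpu"):
    """Average per-pixel MSE between inputs and their reconstructions."""
    model.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for images, _ in data_loader:          # labels are ignored
            images = images.to(device)
            total += F.mse_loss(model(images), images, reduction="sum").item()
            count += images.numel()
    return total / count


# Usage (train_loader / test_loader assumed):
# train_err = mean_reconstruction_loss(model, train_loader)
# test_err = mean_reconstruction_loss(model, test_loader)
# print(f"train {train_err:.4f}  vs  test {test_err:.4f}")
```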