Feedforward Neural Network (MLP)
1. What is the primary goal of a feedforward neural network (MLP)?
2. What do you call a layer that is neither input nor output?
3. What property lets MLPs model virtually any continuous function?
4. Which algorithm computes gradients by propagating errors backward?
5. What type of network has no feedback connections and only forward data flow?
6. Which class of values strongly affects MLP performance and must be tuned?
7. Which rule is fundamental to computing gradients in backpropagation?
8. What problem arises when gradients shrink as they propagate backward through many layers, making deep networks hard to train?
9. What term describes a network where each node in layer i connects to all nodes in layer i+1?
10. What is the role of biases in each layer?
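The concepts behind these questions (forward-only data flow, full connectivity, per-layer biases, and backpropagation via the chain rule) can be sketched in a few lines. This is an illustrative example, not from the source: the layer sizes, sigmoid activations, and squared-error loss are arbitrary choices made for the demonstration.

```python
import numpy as np

def sigmoid(z):
    # Smooth nonlinearity; its composition across layers is what lets
    # MLPs approximate virtually any continuous function.
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 3 inputs -> 4 hidden units -> 1 output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4)) * 0.5   # fully connected: every input to every hidden node
b1 = np.zeros(4)                     # biases shift each neuron's activation threshold
W2 = rng.normal(size=(4, 1)) * 0.5
b2 = np.zeros(1)

def forward(x):
    # Feedforward: data flows strictly forward, no feedback connections.
    h = sigmoid(x @ W1 + b1)         # hidden layer (neither input nor output)
    y = sigmoid(h @ W2 + b2)         # output layer
    return h, y

def backward(x, h, y, target):
    # Backpropagation: the error is propagated backward, and each
    # gradient is obtained by the chain rule through the layers.
    dy = (y - target) * y * (1 - y)      # d(loss)/d(pre-activation) at the output
    dW2 = np.outer(h, dy)
    db2 = dy
    dh = (dy @ W2.T) * h * (1 - h)       # chain rule through the hidden layer
    dW1 = np.outer(x, dh)
    db1 = dh
    return dW1, db1, dW2, db2

x = np.array([0.5, -1.0, 2.0])
h, y = forward(x)
dW1, db1, dW2, db2 = backward(x, h, y, target=np.array([1.0]))
```

The gradient factors like `h * (1 - h)` are where vanishing gradients originate: each sigmoid derivative is at most 0.25, so repeated multiplication through many layers can shrink gradients toward zero. Learning rate, layer sizes, and initialization scale here are exactly the kind of hyperparameters the questions refer to.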