Perceptron
What type of decision boundary does a single-layer perceptron create?
What does it mean for a dataset to be linearly separable?
What is the XOR problem in the context of perceptrons?
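For reference, XOR maps (0,0) -> 0, (0,1) -> 1, (1,0) -> 1, (1,1) -> 0; the two points labeled 1 and the two labeled 0 sit on opposite corners of the unit square, so no single straight line can separate the two classes.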
In the perceptron learning rule, when are weights updated?
What happens to the weights when the perceptron makes a correct prediction?
What role does the learning rate play in perceptron training?
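The three update questions above can be grounded in a minimal sketch of the classic perceptron learning rule; this assumes a step activation, labels in {0, 1}, and an illustrative learning rate, and the function names are placeholders rather than any particular library's API:

import numpy as np

def predict(w, b, x):
    # Step activation: 1 if the weighted sum plus bias is positive, else 0.
    return 1 if np.dot(w, x) + b > 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    # X: array of shape (n_samples, n_features); y: labels in {0, 1}.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            error = yi - predict(w, b, xi)
            # The weights and bias change only on a misclassification
            # (error != 0); a correct prediction leaves them untouched.
            # The learning rate lr scales the size of each correction.
            w = w + lr * error * xi
            b = b + lr * error
    return w, b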
What does the bias term 'b' allow the perceptron to do?
Which logic gate CAN be solved by a single-layer perceptron?
What mathematical equation defines the decision boundary of a perceptron in 2D?
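As a reference form: in 2D the boundary is the set of points satisfying w1*x1 + w2*x2 + b = 0, which is a straight line; when w2 is nonzero it can be rewritten as x2 = -(w1/w2)*x1 - b/w2.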
What fundamental limitation did the XOR problem reveal about single-layer perceptrons?
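A short usage sketch, reusing the train_perceptron and predict helpers outlined above (assumed helpers, not library calls), illustrates the contrast: training converges on AND, a linearly separable gate, but no amount of training lets a single-layer perceptron fit XOR.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # AND: linearly separable, learnable
y_xor = np.array([0, 1, 1, 0])   # XOR: not linearly separable

w, b = train_perceptron(X, y_and)
print([predict(w, b, xi) for xi in X])  # matches y_and after training

w, b = train_perceptron(X, y_xor)
print([predict(w, b, xi) for xi in X])  # at least one prediction stays wrong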