Step-wise Demonstration of XOR Function Using Artificial Neural Networks
Step 1: This function outputs 1 only when X1 = 1 and X2 = 0, making it
linearly separable. A straight line can separate the point
(1, 0) from all other input combinations, so a
single-layer perceptron can learn and classify this function using a
single decision boundary.
[Network diagram: input-layer neurons X1 and X2 feed hidden-layer neuron Z1 = A·B′]
X1  X2  Z1 = A·B′
 0   0      0
 0   1      0
 1   0      1
 1   1      0
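The perceptron for Z1 can be sketched with hand-chosen weights. This is a minimal illustration, not a trained model: the weights (1, -1) and bias (-0.5) are one of many choices that place a decision boundary separating (1, 0) from the other three inputs.

```python
def step(x):
    """Threshold activation: fires (1) when the weighted sum is positive."""
    return 1 if x > 0 else 0

def z1(x1, x2):
    """Single-layer perceptron for Z1 = A·B'.

    Weighted sum 1*x1 - 1*x2 - 0.5 is positive only for (x1, x2) = (1, 0),
    so the neuron fires for exactly that input.
    """
    return step(1 * x1 + (-1) * x2 - 0.5)

# Print the full truth table to confirm it matches A·B'.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, z1(a, b))
```

Running the loop reproduces the truth table above: only the row (1, 0) yields 1.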
Step 2: The hidden-layer function Z2 = A′·B outputs 1 only when X1 = 0 and X2 = 1, making it linearly separable: a straight line can separate the point (0, 1) from the other three input combinations, so a single-layer perceptron can learn and correctly classify this function with a single decision boundary.
[Network diagram: input-layer neurons X1 and X2 feed hidden-layer neuron Z2 = A′·B]
X1  X2  Z2 = A′·B
 0   0      0
 0   1      1
 1   0      0
 1   1      0
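The perceptron for Z2 mirrors the one for Z1 with the input weights swapped. Again, the weights (-1, 1) and bias (-0.5) are hand-chosen for illustration rather than learned.

```python
def step(x):
    """Threshold activation: fires (1) when the weighted sum is positive."""
    return 1 if x > 0 else 0

def z2(x1, x2):
    """Single-layer perceptron for Z2 = A'·B.

    Weighted sum -1*x1 + 1*x2 - 0.5 is positive only for (x1, x2) = (0, 1).
    """
    return step(-1 * x1 + 1 * x2 - 0.5)

# Print the full truth table to confirm it matches A'·B.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, z2(a, b))
```

Only the row (0, 1) yields 1, matching the table above.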
Step 3: Basic theory:
XOR outputs 1 when its inputs differ. It can be implemented by combining the outputs of Step 1 (Z1 = A·B′) and Step 2 (Z2 = A′·B) with an OR operation: XOR = Z1 + Z2. Why two hidden neurons are needed: XOR is not linearly separable, so a single neuron cannot classify it. Z1 and Z2, however, are each linearly separable, so the hidden-layer neurons compute them separately and an output neuron ORs their results, allowing the network to implement XOR.
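The three steps can be assembled into one two-layer network. This sketch reuses the hand-chosen weights from the earlier examples; the output neuron implements OR with weights (1, 1) and bias -0.5, so it fires when either hidden neuron fires.

```python
def step(x):
    """Threshold activation: fires (1) when the weighted sum is positive."""
    return 1 if x > 0 else 0

def xor(x1, x2):
    """Two-layer network for XOR.

    Hidden layer computes the two linearly separable pieces,
    and the output neuron ORs them together.
    """
    z1 = step(1 * x1 - 1 * x2 - 0.5)   # hidden neuron Z1 = A·B'
    z2 = step(-1 * x1 + 1 * x2 - 0.5)  # hidden neuron Z2 = A'·B
    return step(z1 + z2 - 0.5)         # output neuron: OR(Z1, Z2)

# Verify against the XOR truth table.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))
```

The printed table shows 1 exactly when the inputs differ, confirming that the hidden layer's decomposition into Z1 and Z2 lets the network realize a function no single neuron can.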