Step-wise Demonstration of XOR Function Using Artificial Neural Networks

Step 1: The function Z1 = A·B′ (where A = X1 and B = X2) outputs 1 only when X1 = 1 and X2 = 0, making it linearly separable: a straight line can separate the point (1, 0) from the other three input combinations, so a single-layer perceptron can learn and classify this function using a single decision boundary.
[Network diagram: inputs X1 and X2 feed hidden-layer neuron Z1 = A·B′]
X1  X2  Z1 = A·B′
0   0   0
0   1   0
1   0   1
1   1   0
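Because Z1 is linearly separable, one threshold neuron is enough. A minimal sketch in Python; the weights (1, −1) and bias −0.5 are illustrative assumptions, since any line separating (1, 0) from the other three points would work:

```python
def step(s):
    """Heaviside step activation: fires when the weighted sum is positive."""
    return 1 if s > 0 else 0

def z1(x1, x2):
    # Assumed weights w1 = 1, w2 = -1 and bias = -0.5 (one of many valid choices).
    # The decision boundary x1 - x2 = 0.5 isolates the point (1, 0).
    return step(1 * x1 - 1 * x2 - 0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", z1(x1, x2))
```

Running the loop reproduces the truth table above: only (1, 0) yields 1.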
Step 2: The function Z2 = A′·B outputs 1 only when X1 = 0 and X2 = 1, making it linearly separable: a straight line can separate the point (0, 1) from the other three input combinations, so a single-layer perceptron can learn and correctly classify this function using a single decision boundary.
[Network diagram: inputs X1 and X2 feed hidden-layer neuron Z2 = A′·B]
X1  X2  Z2 = A′·B
0   0   0
0   1   1
1   0   0
1   1   0
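Z2 is the mirror image of Z1, so swapping the two weights gives a neuron for it. As in Step 1, the weights (−1, 1) and bias −0.5 are illustrative assumptions:

```python
def step(s):
    """Heaviside step activation: fires when the weighted sum is positive."""
    return 1 if s > 0 else 0

def z2(x1, x2):
    # Assumed weights w1 = -1, w2 = 1 and bias = -0.5.
    # The decision boundary x2 - x1 = 0.5 isolates the point (0, 1).
    return step(-1 * x1 + 1 * x2 - 0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", z2(x1, x2))
```

Again the loop matches the truth table: only (0, 1) yields 1.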
Step 3: Basic Theory
XOR outputs 1 when the inputs are different. It can be implemented by combining the outputs of Part 1 (Z1 = A·B′) and Part 2 (Z2 = A′·B) using an OR operation.
Why Two Neurons Are Needed:
XOR is not linearly separable, so a single neuron cannot classify it. However, Z1 and Z2 are linearly separable, so the hidden layer neurons compute them separately, allowing the network to implement XOR.
[Network diagram: hidden-layer outputs Z1 and Z2 feed output neuron Y = Z1 OR Z2]
X1  X2  A·B′  A′·B  XOR
0   0   0     0     0
0   1   0     1     1
1   0   1     0     1
1   1   0     0     0
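The three steps above can be assembled into a complete two-layer network: the two hidden neurons from Steps 1 and 2 feed an OR neuron at the output. A minimal sketch; all weights and biases are illustrative assumptions rather than trained values:

```python
def step(s):
    """Heaviside step activation: fires when the weighted sum is positive."""
    return 1 if s > 0 else 0

def neuron(w1, w2, bias, x1, x2):
    """Single threshold neuron with assumed hand-picked weights."""
    return step(w1 * x1 + w2 * x2 + bias)

def xor(x1, x2):
    z1 = neuron(1, -1, -0.5, x1, x2)    # hidden neuron for A·B' (Step 1)
    z2 = neuron(-1, 1, -0.5, x1, x2)    # hidden neuron for A'·B (Step 2)
    return neuron(1, 1, -0.5, z1, z2)   # output neuron computes Z1 OR Z2

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor(x1, x2))
```

The loop reproduces the XOR truth table, illustrating how a hidden layer of two linearly separable sub-functions lets the network classify a function that no single neuron can.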