Backpropagation

A training algorithm used in neural networks that adjusts weights by propagating errors backward from the output layer to minimize loss.

Definition

An iterative optimization process where, after a forward pass computes the network's predictions, the error signal derived from the loss (the difference between predicted and true values) is propagated backward, layer by layer, to compute the gradient of the loss with respect to every weight. These gradients inform weight updates via gradient descent, enabling deep networks to learn complex, hierarchical feature representations.
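
The sketch below illustrates these steps on a tiny one-hidden-layer network in plain NumPy: a forward pass, gradients computed backward through the layers with the chain rule, and a gradient-descent update. The layer sizes, sigmoid activations, squared-error loss, and learning rate are illustrative assumptions, not part of the definition above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))          # 8 samples, 4 input features
y = rng.integers(0, 2, size=(8, 1))  # binary targets

W1 = rng.normal(scale=0.1, size=(4, 5))  # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(5, 1))  # hidden -> output weights
lr = 0.1                                 # learning rate (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: compute predictions layer by layer.
    h = sigmoid(X @ W1)
    y_hat = sigmoid(h @ W2)
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: propagate the error from the output layer toward the
    # input, applying the chain rule to get dLoss/dWeight for each layer.
    d_out = 2 * (y_hat - y) / len(X) * y_hat * (1 - y_hat)
    dW2 = h.T @ d_out
    d_hidden = (d_out @ W2.T) * h * (1 - h)  # error pushed back through W2
    dW1 = X.T @ d_hidden

    # Gradient-descent update: nudge each weight opposite its gradient.
    W2 -= lr * dW2
    W1 -= lr * dW1
```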

Real-World Example

In image classification, a convolutional neural network uses backpropagation on millions of labeled photos: after each batch, it tweaks millions of connection weights so that “cat” images produce higher activation in the correct output node and lower activation elsewhere, gradually reaching >95% accuracy on validation data.
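
As a minimal sketch of what one such batch update looks like in practice, the following PyTorch snippet runs a forward pass through a toy convolutional classifier, computes a cross-entropy loss, and lets backpropagation fill in the gradients before the optimizer adjusts the weights. The tiny network, batch of random tensors, and ten output classes are stand-in assumptions; a real system would iterate this over millions of labeled photos.

```python
import torch
import torch.nn as nn

# Toy CNN classifier (architecture is an illustrative assumption).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),                     # 10 output classes, e.g. "cat", "dog", ...
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(16, 3, 32, 32)       # one batch of 16 RGB images (random stand-ins)
labels = torch.randint(0, 10, (16,))      # their true class indices

logits = model(images)                    # forward pass: predictions for the batch
loss = loss_fn(logits, labels)            # how wrong those predictions are

optimizer.zero_grad()
loss.backward()                           # backpropagation: gradient for every weight
optimizer.step()                          # adjust weights to lower the loss
```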