Learning representations by back-propagating errors

Rumelhart, Hinton, and Williams publish a widely influential account of backpropagation, accelerating practical training of neural networks.

What Happened

In October 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams published “Learning representations by back-propagating errors” in Nature. The work described how error signals can be propagated backward through layers to adjust weights and learn internal representations.

Why It Matters

Backpropagation became a core method enabling training of multi-layer neural networks, and it underpins much of the practical deep learning stack that later scaled with larger datasets, better hardware, and improved architectures.

Technical Details

Backpropagation applies the chain rule to compute the gradient of a loss function with respect to every weight in a layered network in a single backward pass, at a cost comparable to the forward pass itself. This efficiency is what makes gradient-based optimization feasible for networks with many layers and parameters.
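The idea can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact experiment: a two-layer sigmoid network trained on XOR with squared error, where the backward pass propagates error signals layer by layer via the chain rule. All names (`W1`, `d_h`, the learning rate, the hidden width) are choices made for this sketch.

```python
import numpy as np

# Hypothetical minimal example: 2-layer sigmoid network learning XOR.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0        # learning rate (chosen for this toy problem)
losses = []
for step in range(3000):
    # Forward pass: layered computation.
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output
    losses.append(np.mean((out - y) ** 2))

    # Backward pass: chain rule, layer by layer.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)  # error at output pre-activation
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)                  # error propagated to hidden layer
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient-descent weight update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The backward pass costs roughly the same as the forward pass: each layer's error term is reused to compute both the weight gradients and the error for the layer below, which is the source of the method's efficiency.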