Back Propagation Neural Network: Explained With Simple Example

Before we learn Backpropagation, let's understand:

What is an Artificial Neural Network?

A neural network is a group of connected input/output units in which each connection has an associated weight. Implemented as computer programs, neural networks help you build predictive models from large databases. The model is inspired by the human nervous system and is used for tasks such as image understanding, speech recognition, and machine learning.

What is Backpropagation?

Back-propagation is the essence of neural net training. It is the method of fine-tuning the weights of a neural net based on the error rate obtained in the previous epoch (i.e., iteration). Proper tuning of the weights allows you to reduce error rates and to make the model reliable by increasing its generalization.

Backpropagation is short for "backward propagation of errors." It is a standard method of training artificial neural networks. It calculates the gradient of a loss function with respect to all the weights in the network.
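To make "gradient of a loss function with respect to a weight" concrete, here is a small sketch for a single sigmoid neuron with a squared-error loss (the neuron, loss, and values are illustrative choices, not from the article). The analytic chain-rule gradient is checked against a numerical finite-difference estimate.

```python
import math

def loss(w, x=2.0, target=1.0):
    # Squared-error loss of one sigmoid neuron with weight w.
    y = 1.0 / (1.0 + math.exp(-w * x))
    return 0.5 * (y - target) ** 2

def grad(w, x=2.0, target=1.0):
    # Chain rule: dL/dw = (y - target) * y * (1 - y) * x
    y = 1.0 / (1.0 + math.exp(-w * x))
    return (y - target) * y * (1 - y) * x

# Sanity check against a numerical gradient.
w, eps = 0.3, 1e-6
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)
print(abs(grad(w) - numeric) < 1e-8)  # the two gradients agree
```

Backpropagation applies exactly this chain-rule computation, layer by layer, to every weight in the network at once.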

In this tutorial, you will learn:

  - How Backpropagation Works: Simple Algorithm
  - Why We Need Backpropagation
  - What is a Feed Forward Network?
  - Types of Backpropagation Networks
  - History of Backpropagation
  - Backpropagation Key Points
  - Best practice Backpropagation
  - Disadvantages of using Backpropagation

How Backpropagation Works: Simple Algorithm

Consider the following diagram:

[Figure: How Backpropagation Works]
  1. Inputs X arrive through the preconnected path.
  2. The input is modeled using real weights W. The weights are usually selected randomly.
  3. Calculate the output of every neuron, from the input layer through the hidden layers to the output layer.
  4. Calculate the error in the outputs:
     Error = Actual Output – Desired Output
  5. Travel back from the output layer to the hidden layers, adjusting the weights so that the error decreases.

Keep repeating the process until the desired output is achieved.
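The steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation: it assumes a tiny 2-4-1 network with sigmoid activations, a mean-squared error, and the XOR truth table as training data (all choices of this sketch, not of the article).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Steps 1-2: inputs X (here the XOR truth table) and randomly selected weights W.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 0.5                                        # learning rate

losses = []
for epoch in range(10000):
    # Step 3: compute the output of every neuron, layer by layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Step 4: error in the outputs.
    error = output - y
    losses.append(float((error ** 2).mean()))

    # Step 5: travel back from the output layer, adjusting each weight
    # in proportion to its contribution to the error.
    d_out = error * output * (1 - output)            # sigmoid derivative
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(f"error before: {losses[0]:.4f}, error after: {losses[-1]:.4f}")
```

Each pass through the loop is one epoch: a forward pass (steps 3-4) followed by a backward pass (step 5), repeated until the error is small enough.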

Why We Need Backpropagation?

The most prominent advantages of backpropagation are:

  - It is fast, simple, and straightforward to program.
  - It has no parameters to tune apart from the learning rate and the network architecture itself.
  - It does not require prior knowledge about the function being learned.
  - It is a standard, general-purpose method that works well across a wide range of problems.

What is a Feed Forward Network?

A feedforward neural network is an artificial neural network in which connections between the nodes never form a cycle. This kind of neural network has an input layer, hidden layers, and an output layer. It is the first and simplest type of artificial neural network.

Types of Backpropagation Networks

Two Types of Backpropagation Networks are:

Static back-propagation:

In this kind of backpropagation network, a static input is mapped to a static output. It is useful for solving static classification problems such as optical character recognition (OCR).

Recurrent Backpropagation:

In recurrent backpropagation, activations are fed forward until they reach a fixed value. The error is then computed and propagated backward.

The main difference between the two methods is that the mapping in static back-propagation is immediate, while in recurrent backpropagation it is not.

History of Backpropagation

The basics of continuous backpropagation were derived in the context of control theory in the early 1960s. Paul Werbos proposed applying the method to neural networks in 1974, and it was popularized by Rumelhart, Hinton, and Williams in 1986, who showed that backpropagation could learn useful internal representations in the hidden layers of a network.

Backpropagation Key Points

  - Backpropagation is an application of the chain rule of calculus: the gradient of the loss is computed layer by layer, from the output layer back toward the input layer.
  - It requires the activation functions to be differentiable.
  - The error signal computed at the output is distributed backward, so each weight is adjusted in proportion to its contribution to the error.
  - The size of each weight update is controlled by the learning rate.

Best practice Backpropagation

Backpropagation can be explained with the help of a "Shoe Lace" analogy, where the weights are like the tension in a pair of laces:

  - Too little tension = not enough constraint, and the lace stays very loose.
  - Too much tension = too much constraint, and the lace is over-tightened (overtraining).
  - Pulling one lace more than the other = discomfort (bias).

Disadvantages of using Backpropagation

  - The actual performance depends heavily on the input data.
  - It can be sensitive to noisy data and outliers.
  - Training can be slow, especially for deep networks.
  - Gradient descent can converge to a poor local minimum.

Summary

  - A neural network is a group of connected input/output units in which each connection has an associated weight.
  - Backpropagation is short for "backward propagation of errors" and fine-tunes those weights based on the error rate of the previous epoch.
  - It is fast, simple to program, and requires little tuning beyond the learning rate.
  - The two types of backpropagation networks are static back-propagation and recurrent backpropagation.
