
AIML.com

Machine Learning Resources

What is a Perceptron? What is the role of bias in a perceptron (or neuron)?



Perceptron, a fundamental building block in deep learning models

A ‘Perceptron’ is the basic building block, or single node, of a neural network, inspired by the neurons found in the brain. It operates by taking in a set of inputs, computing a weighted sum, adding a bias term, and applying an activation function to this sum to produce an output. The inner workings of a perceptron are as follows:

  1. A vector of inputs x1, …, xm is passed to the algorithm
  2. A weight w1, …, wm is applied to each element of the input vector, and a bias term, represented by w0, is passed along with them
  3. The weighted sum of the inputs and the bias term is computed:
     z = w0 + w1·x1 + … + wm·xm
  4. This sum z is passed to an activation function, g
  5. The activation function returns an output, ŷ = g(z), on which the classification decision is based
Perceptron, a neuron / node in a neural network
Source: MIT Deep Learning Course
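The five steps above can be sketched in a few lines of Python. This is a minimal illustration, not code from the source: the step function used as the activation g and the example input values are assumptions chosen for simplicity.

```python
import numpy as np

def perceptron(x, w, b):
    """Single perceptron: weighted sum of inputs plus bias, then activation."""
    z = np.dot(w, x) + b           # z = w0 + w1*x1 + ... + wm*xm (b plays the role of w0)
    return 1 if z >= 0 else 0      # step function as the activation g

x = np.array([1.0, -2.0, 0.5])    # input vector x1..x3
w = np.array([0.4, 0.3, 0.9])     # weights w1..w3
b = -0.1                          # bias term w0

print(perceptron(x, w, b))        # prints 1 (z = 0.15 >= 0)
```

In deeper networks the step function is usually replaced by a differentiable activation such as sigmoid or ReLU so that gradients can flow during training.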

In practice, single perceptrons are often used as building blocks for more complex models, such as multi-layer perceptrons (MLPs), which can handle a wider range of problems. The basic perceptron model can only perform classification on classes that are linearly separable.
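The linear-separability limitation can be seen with the classic perceptron learning rule, which updates weights only on misclassified points. The sketch below, a hedged illustration rather than anything from the source, trains a perceptron on the linearly separable AND function; the same loop would never converge on XOR, which is not linearly separable.

```python
import numpy as np

# Perceptron learning rule on the linearly separable AND function.
# (The same perceptron cannot learn XOR, which is not linearly separable.)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])             # AND labels

w = np.zeros(2)
b = 0.0
for _ in range(10):                    # a few epochs suffice for AND
    for xi, yi in zip(X, y):
        pred = 1 if np.dot(w, xi) + b >= 0 else 0
        w += (yi - pred) * xi          # update weights only on mistakes
        b += (yi - pred)               # bias updated the same way

preds = [1 if np.dot(w, xi) + b >= 0 else 0 for xi in X]
print(preds)                           # prints [0, 0, 0, 1]
```

The perceptron convergence theorem guarantees this loop terminates with a perfect separator whenever one exists, which is exactly why MLPs with hidden layers are needed for non-separable problems.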

Why do we add bias?

There are three main reasons:

  • Assists in achieving a better data fit and learning complex patterns
    The bias allows the perceptron to adjust its output independently of the inputs. Bias is like the intercept term in a linear equation: it gives the model an extra degree of freedom so it can fit the given data well, and it allows the network to learn and represent more complex relationships between the input and output variables.

  • Handling zero inputs and mitigating the problem of vanishing gradients
    Activation functions, such as sigmoid and ReLU, introduce non-linearity into the model. Bias terms help shift the activation function toward the positive or negative side. With a bias term, the model can fit decision boundaries that do not pass through the origin, ensuring that a neuron can activate even when all of its input values are zero.

Title: Depicting how bias helps in neural network training
Source: Baeldung, annotated by AIML.com Research for better representation
  • Prevents underfitting and overfitting, improving generalization
    Bias helps strike a balance between underfitting and overfitting in neural networks. It allows the network to learn more complex patterns in the training data, increasing accuracy (reducing underfitting), while also generalizing better to unseen data (reducing overfitting).
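The shifting effect described above can be demonstrated directly with a sigmoid activation. In this short sketch (an illustration assumed for this article, with arbitrary example values), the input is zero, so the weighted sum alone is always zero and the sigmoid is stuck at 0.5; only the bias can move the neuron's output.

```python
import math

def sigmoid(z):
    """Sigmoid activation: squashes any real z into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

w, x = 1.5, 0.0                 # zero input: the weighted sum w*x is always 0
print(sigmoid(w * x))           # 0.5 regardless of the weight -- no bias, no shift
print(sigmoid(w * x + 3.0))     # positive bias shifts the curve: output near 0.95
print(sigmoid(w * x - 3.0))     # negative bias shifts the other way: output near 0.05
```

Geometrically, the bias slides the activation curve left or right along the input axis, which is what lets a neuron fire (or stay silent) on inputs that would otherwise map exactly to the origin.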

Video Explanation

  • In the introductory lecture “MIT Introduction to Deep Learning” from MIT’s Deep Learning course, Prof. Alexander Amini explains the concept of the Perceptron (skip to 14 min 30 sec for the Perceptron discussion)
