Slide 47 of 130
tpbui

Can you explain what bias terms are and why we need them in a neural network?

motoole2

Earlier in this lecture, we introduced the perceptron, which worked by fitting a hyperplane through some N-dimensional space and labelling all points on one side +1 and all points on the other -1. The weights determined the orientation of the hyperplane. However, because this perceptron didn't have a bias, the hyperplane always had to pass through the origin.

The bias term provides a way to translate this hyperplane away from the origin, which gives the perceptron much more flexibility when making decisions (similar to when we discussed support vector machines at the end of lecture 14).
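
A minimal sketch of this idea (my own illustration, not code from the lecture): the perceptron labels a point by the sign of w·x + b. With b = 0 the decision boundary w·x = 0 must pass through the origin; a nonzero b translates it.

```python
import numpy as np

def predict(w, x, b=0.0):
    """Perceptron decision rule: +1 on one side of the hyperplane w.x + b = 0,
    -1 on the other. With b = 0 the hyperplane passes through the origin."""
    return 1 if np.dot(w, x) + b >= 0 else -1

w = np.array([1.0, 1.0])
x = np.array([0.5, 0.5])

# No bias: the boundary x1 + x2 = 0 passes through the origin,
# and this point lands on the +1 side.
print(predict(w, x))          # -> 1

# A bias of -2 translates the boundary to x1 + x2 = 2,
# so the very same point is now labelled -1.
print(predict(w, x, b=-2.0))  # -> -1
```

The same weights classify the same point differently once the bias shifts the hyperplane, which is exactly the extra freedom the bias term buys.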