Could someone please remind me about how we get this label?
The previous slide seems to say that we are receiving $x^{(t)} = (1, -1)$, and then this slide says that the sign of $w^{(t-1)}$ applied to this observation is $1$. I get that $\mathrm{sign}$ maps anything $\geq 0$ to $1$, but what does that correspond to here? Is it because $w^{(t-1)}$ is initialized to $0$ two slides ago, or something like that?
And how does that make the label 1 and lead to the line graphed in the next slides?
Thanks!
motoole2
Let's clarify a couple of things here.
First, $\hat{y}$ represents the output of our perceptron. Because we initialized the network with $w = 0$, we have $w^\top x = 0$, and by the convention that $\mathrm{sign}(z) = 1$ whenever $z \geq 0$, the output of this perceptron is $\hat{y} = \mathrm{sign}(w^\top x) = 1$.
Second, $y$ (without the hat) represents the label associated with our training data. That is, the value of $y = 1$ is given to us. Because our perceptron produces the same label, we don't need to do anything.
In the slide that follows, however, when testing another piece of data, the perceptron again outputs $\hat{y} = 1$, but the ground-truth label is $y = -1$. In this case, we have to update the weights to get the perceptron to predict the correct label.
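To make the mechanics concrete, here is a minimal sketch of one perceptron step in Python. The first training point $x^{(1)} = (1, -1)$ with label $y = 1$ comes from the slides; the names `sign` and `perceptron_step` and the use of the standard update rule $w \leftarrow w + yx$ on a mistake are my own illustration, not necessarily the exact notation from the course materials.

```python
import numpy as np

def sign(z):
    # Convention from the slides: values >= 0 map to +1, otherwise -1.
    return 1 if z >= 0 else -1

def perceptron_step(w, x, y):
    """One perceptron update (standard rule, assumed here):
    leave w unchanged if the prediction matches the label,
    otherwise nudge w by y * x."""
    y_hat = sign(w @ x)          # perceptron output: sign(w^T x)
    if y_hat != y:
        w = w + y * x            # mistake -> update the weights
    return w

# First training point from the slides: w starts at 0,
# so sign(w^T x) = sign(0) = 1, which matches y = 1 -> no update.
w = np.zeros(2)
x1, y1 = np.array([1.0, -1.0]), 1
w = perceptron_step(w, x1, y1)   # w stays (0, 0)
```

The second slide's example corresponds to the mismatch branch: there $\hat{y} = 1$ but $y = -1$, so `perceptron_step` would change $w$, which is what produces the new decision boundary graphed afterwards.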