assignments:assignment2

The variable $\eta$ plays the role of the learning rate employed in the perceptron algorithm, and $\delta \alpha$ is the proposed magnitude of change in $\alpha_i$.

We note that the adatron tries to maintain a //sparse// representation in terms of the training examples by keeping many $\alpha_i$ equal to zero. The adatron converges to a special case of the SVM algorithm that we will learn about later in the semester; this algorithm tries to maximize the margin with which each example is classified, which is captured by the variable $\gamma$ in the algorithm (notice that the magnitude of change proposed for each $\alpha_i$ becomes smaller as the margin increases towards 1).

**Note:** if you observe overflow issues when running the adatron, add an upper bound on the value of $\alpha_i$.
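As an illustration only, one pass of an adatron-style update could be sketched as follows. This assumes a precomputed Gram (kernel) matrix `K`, labels `y` in $\{-1, +1\}$, and the common kernel-adatron rule $\delta\alpha = \eta(1 - \gamma_i)$; the function name, the `alpha_max` cap (per the note above), and the exact update rule are assumptions for the sketch, so follow the algorithm given above when writing your solution.

```python
import numpy as np

def adatron_epoch(K, y, alpha, eta=0.1, alpha_max=100.0):
    """One sweep of an adatron-style update (illustrative sketch).

    K: n-by-n Gram matrix of the training examples
    y: length-n array of +/-1 labels
    alpha: length-n array of current example weights
    """
    n = len(y)
    for i in range(n):
        # margin of example i under the current weights
        gamma = y[i] * np.sum(alpha * y * K[:, i])
        # proposed change shrinks as the margin approaches 1
        delta = eta * (1.0 - gamma)
        # keep alpha_i nonnegative, and cap it to avoid overflow
        alpha[i] = min(max(alpha[i] + delta, 0.0), alpha_max)
    return alpha
```

Running several epochs of this loop drives each margin $\gamma_i$ towards 1 (or leaves $\alpha_i$ at zero for examples already classified with margin at least 1), which is how the sparse representation mentioned above arises.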
  
Here's what you need to do:
assignments/assignment2.txt · Last modified: 2016/09/14 09:38 by asa