====== Assignment 2 ======
Due date: Friday 9/17 at 11:59pm
==== Datasets ====
The variable $\eta$ plays the role of the learning rate employed in the perceptron algorithm, and $\delta \alpha$ is the proposed magnitude of change in $\alpha_i$.
We note that the adatron tries to maintain a //sparse// representation in terms of the training examples by keeping many $\alpha_i$ equal to zero. The adatron converges to a special case of the SVM algorithm that we will learn later in the semester; this algorithm tries to maximize the margin with which each example is classified, which is captured by the variable $\gamma$ in the algorithm (notice that the magnitude of change proposed for each $\alpha_i$ becomes smaller as the margin increases towards 1).
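The update described above can be sketched as follows. This is only an illustration, not the assignment's reference implementation: the linear kernel, the learning rate, the cap `alpha_max`, and the epoch count are all assumptions made here for concreteness.

```python
import numpy as np

def adatron(X, y, eta=0.1, alpha_max=100.0, epochs=200):
    """Sketch of the adatron with a linear kernel.
    eta, alpha_max, and epochs are illustrative assumptions."""
    n = len(y)
    K = X @ X.T                      # Gram matrix (linear kernel assumed)
    alpha = np.zeros(n)              # sparse: many alpha_i remain zero
    for _ in range(epochs):
        for i in range(n):
            # margin of example i under the current hypothesis
            gamma = y[i] * np.dot(alpha * y, K[:, i])
            # proposed change shrinks as the margin approaches 1
            delta_alpha = eta * (1.0 - gamma)
            # clip to [0, alpha_max]; the upper cap guards against overflow
            alpha[i] = min(max(alpha[i] + delta_alpha, 0.0), alpha_max)
    return alpha
```

A new point $x$ would then be classified by the sign of $\sum_j \alpha_j y_j K(x_j, x)$.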

**Note:** if you observe overflow issues when running the adatron, add an upper bound on the value of $\alpha_i$.
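One simple way to impose such a bound is to clip each $\alpha_i$ after its update; the cap value below is an arbitrary illustration, not a value specified by the assignment.

```python
import numpy as np

ALPHA_MAX = 100.0                      # assumed cap; tune as needed
alpha = np.array([0.0, 3.2, 512.7])    # hypothetical alpha_i values mid-run
alpha = np.minimum(alpha, ALPHA_MAX)   # bound each alpha_i from above
```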
Here's what you need to do: