assignments:assignment4



Revisions compared: 2016/10/05 11:41 by asa [Part 3: Soft-margin SVM for separable data] → 2016/10/11 18:16 (current) by asa [Part 2: leave-one-out error for linearly separable data]


In this question we will explore the leave-one-out error for a hard-margin SVM for a linearly separable dataset.

First, we define a set of //key support vectors// as a subset of the support vectors such that removal of any one vector from the set changes the maximum margin hyperplane.

  * Consider the following statement: The set of all key support vectors is unique. Prove this, or show a counter-example.
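Before attempting a proof, it can help to probe the definition numerically: refit the SVM after deleting each support vector in turn and check whether the hyperplane moves. A hedged sketch, assuming scikit-learn and a made-up toy dataset (the data, the large-`C` approximation of a hard margin, and the tolerances are all illustrative assumptions, and a numerical check is of course not a proof):

```python
import numpy as np
from sklearn.svm import SVC

# A tiny linearly separable toy dataset (assumed for illustration).
X = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 0.0], [3.0, 3.0],
              [0.0, -1.0], [1.0, -2.0], [2.0, -3.0], [3.0, -1.0]])
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])

# Hard-margin behaviour is approximated here with a very large C.
clf = SVC(kernel="linear", C=1e6).fit(X, y)
w0, b0 = clf.coef_.ravel(), clf.intercept_[0]

# Remove each support vector in turn and refit; if the hyperplane
# changes (up to a tolerance), that vector behaves as a "key" one.
for i in clf.support_:
    X_i, y_i = np.delete(X, i, axis=0), np.delete(y, i)
    clf_i = SVC(kernel="linear", C=1e6).fit(X_i, y_i)
    w, b = clf_i.coef_.ravel(), clf_i.intercept_[0]
    changed = not (np.allclose(w, w0, atol=1e-3) and np.isclose(b, b0, atol=1e-3))
    print(f"support vector index {i}: hyperplane changed = {changed}")
```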


Consider the following statement:

Since increasing the $\xi_i$ can only increase the cost function of the primal problem (which we are trying to minimize), at the solution to the primal problem, i.e. the hyperplane that minimizes the primal cost function, all the training examples will have $\xi_i$ equal to zero.
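One way to examine this claim numerically (a sketch, not a proof in either direction) is to fit a soft-margin SVM on a small linearly separable dataset and compute the slacks $\xi_i = \max(0, 1 - y_i f(x_i))$ from the decision values. The dataset and the value of `C` below are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

# A small linearly separable toy dataset (assumed for illustration).
X = np.array([[1.0, 0.0], [2.0, 0.0], [1.5, 1.0],
              [-1.0, 0.0], [-2.0, 0.0], [-1.5, 1.0]])
y = np.array([1, 1, 1, -1, -1, -1])

# Soft-margin SVM with a deliberately small C (illustrative choice).
clf = SVC(kernel="linear", C=0.01).fit(X, y)

# Slack of each training example: xi_i = max(0, 1 - y_i f(x_i)).
f = clf.decision_function(X)
xi = np.maximum(0.0, 1.0 - y * f)
print("slacks:", np.round(xi, 3))
print("examples with nonzero slack:", int((xi > 1e-8).sum()))
```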


Next, we will compare the accuracy of an SVM with a Gaussian kernel on the raw data with the accuracy obtained when the data is normalized to be unit vectors (the values of the features of each example are divided by its norm).

This is different from standardization, which operates at the level of individual features. Normalizing to unit vectors is more appropriate for this dataset because it is sparse, i.e. most of the features are zero.
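The distinction can be made concrete with scikit-learn's preprocessing classes: `Normalizer` rescales each example (row) to unit norm, while `StandardScaler` rescales each feature (column). A minimal sketch with a made-up matrix:

```python
import numpy as np
from sklearn.preprocessing import Normalizer, StandardScaler

# Illustrative data: rows are examples, columns are features.
X = np.array([[3.0, 4.0, 0.0],
              [0.0, 0.0, 5.0],
              [1.0, 2.0, 2.0]])

X_unit = Normalizer(norm="l2").fit_transform(X)  # each row scaled to unit L2 norm
X_std = StandardScaler().fit_transform(X)        # each column: zero mean, unit variance

print(np.linalg.norm(X_unit, axis=1))  # every row now has norm 1
print(X_std.mean(axis=0))              # every column now has mean ~0
```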

Compare the accuracy of the two representations as measured by the area under the ROC curve in five-fold cross-validation, where the SVM/kernel parameters are chosen by nested cross-validation, i.e. by grid search on the training set of each fold.

Use the scikit-learn [[http://scikit-learn.org/stable/tutorial/statistical_inference/model_selection.html|grid-search]] class (''GridSearchCV'') for model selection.
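Nested cross-validation can be written compactly by passing a `GridSearchCV` object as the estimator to `cross_val_score`. A hedged sketch; the synthetic dataset and the parameter grid below are illustrative assumptions, not values prescribed by the assignment:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the assignment data (illustrative only).
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Inner loop: grid search over RBF-SVM parameters, scored by ROC AUC.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
inner = GridSearchCV(SVC(kernel="rbf"), param_grid, scoring="roc_auc", cv=3)

# Outer loop: five-fold CV; each fold's training set is used by
# GridSearchCV to pick C and gamma before scoring the held-out fold.
scores = cross_val_score(inner, X, y, scoring="roc_auc", cv=5)
print("AUC per fold:", np.round(scores, 3))
print("mean AUC: %.3f" % scores.mean())
```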

assignments/assignment4.1475689266.txt.gz · Last modified: 2016/10/05 11:41 by asa

Except where otherwise noted, content on this wiki is licensed under the following license: CC Attribution-Share Alike 4.0 International