assignments:assignment4

This shows you the differences between two versions of the page.


assignments:assignment4 [2016/10/03 10:01] asa [Part 3: Soft-margin SVM for separable data]
assignments:assignment4 [2016/10/06 15:09] asa [Part 4: Using SVMs]

Line 30:

Consider the following statement:

- Since increasing the $\xi_i$ can only increase the objective of the primal problem (which we are trying to minimize), at the optimal solution to the primal problem, all the training examples will have $\xi_i$ equal to zero.
+ Since increasing the $\xi_i$ can only increase the cost function of the primal problem (which we are trying to minimize), at the solution to the primal problem, i.e. the hyperplane that minimizes the primal cost function, all the training examples will have $\xi_i$ equal to zero.
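For reference, a standard formulation of the soft-margin primal problem discussed in the statement above (this particular notation, with slack penalty $C > 0$, is an assumption, not taken from the page itself) is:

```latex
\min_{w,\,b,\,\xi}\;\; \frac{1}{2}\lVert w \rVert^{2} + C \sum_{i=1}^{n} \xi_i
\qquad \text{subject to} \qquad y_i \left( w^{\top} x_i + b \right) \ge 1 - \xi_i, \quad \xi_i \ge 0 .
```

The $\xi_i$ in the statement are the slack variables appearing in the penalty term and the margin constraints.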

Line 97:

Next, we will compare the accuracy of an SVM with a Gaussian kernel on the raw data with the accuracy obtained when the data is normalized to unit vectors (the values of the features of each example are divided by its norm).

This is different from standardization, which operates at the level of individual features. Normalizing to unit vectors is more appropriate for this dataset as it is sparse, i.e. most of the features are zero.
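The distinction between the two preprocessing schemes can be sketched as follows (a minimal illustration with a made-up toy matrix, not the assignment's dataset):

```python
import numpy as np
from sklearn.preprocessing import Normalizer, StandardScaler

# Toy data: rows are examples, columns are features (hypothetical values).
X = np.array([[3.0, 4.0, 0.0],
              [0.0, 0.0, 5.0]])

# Normalizing to unit vectors: each *example* (row) is divided by its own
# L2 norm, so every row ends up with norm 1.
X_unit = Normalizer(norm="l2").fit_transform(X)
print(X_unit)   # first row becomes [0.6, 0.8, 0.0]

# Standardization instead rescales each *feature* (column) to zero mean
# and unit variance across the examples.
X_std = StandardScaler().fit_transform(X)
print(X_std)
```

Note that `Normalizer` is stateless (each row is scaled by itself), which is one reason it interacts well with sparse data.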

- Perform your comparison by comparing the accuracy measured by the area under the ROC curve in five-fold cross-validation. The optimal values of kernel parameters should be measured by cross-validation, where the optimal SVM/kernel parameters are chosen using grid search on the training set of each fold.
+ Perform your comparison by comparing the accuracy measured by the area under the ROC curve in five-fold cross-validation, where the classifier/kernel parameters are chosen by nested cross-validation, i.e. using grid search on the training set of each fold.

Use the scikit-learn [[http://scikit-learn.org/stable/tutorial/statistical_inference/model_selection.html|grid-search]] class for model selection.
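The nested cross-validation described above could be set up roughly as follows (a sketch only: the synthetic dataset and the parameter grid here are placeholders, not the assignment's actual data or required grid):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the assignment's dataset.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Inner loop: grid search over SVM/kernel parameters, run on the
# training portion of each outer fold.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
inner = GridSearchCV(SVC(kernel="rbf"), param_grid,
                     scoring="roc_auc",
                     cv=StratifiedKFold(n_splits=3, shuffle=True, random_state=0))

# Outer loop: five-fold cross-validation measuring area under the ROC curve.
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(inner, X, y, scoring="roc_auc", cv=outer)
print("AUC per fold:", scores)
print("mean AUC:", scores.mean())
```

Passing the `GridSearchCV` object itself as the estimator to `cross_val_score` is what makes the procedure nested: parameter selection only ever sees the training set of each outer fold. The `roc_auc` scorer uses the SVM's decision function, so `probability=True` is not needed.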

assignments/assignment4.txt · Last modified: 2016/10/11 18:16 by asa

Except where otherwise noted, content on this wiki is licensed under the following license: CC Attribution-Share Alike 4.0 International