Assignment 4: Support Vector Machines

Due: October 17th at 11:59pm

Part 1: SVM with no bias term

Formulate a soft-margin SVM without the bias term, i.e. one where the discriminant function is equal to $\mathbf{w}^{T} \mathbf{x}$. Derive the saddle point conditions, KKT conditions and the dual. Compare it to the standard SVM formulation that was derived in class.
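
As a starting point, here is a minimal sketch of the bias-free primal, assuming the usual slack-variable formulation from class with the bias simply dropped from the constraint (the Lagrangian, saddle point conditions, and dual are left to you): $$ \min_{\mathbf{w}, \boldsymbol{\xi}} \frac{1}{2} ||\mathbf{w}||^2 + C \sum_{i=1}^{N} \xi_i \quad \textrm{subject to} \quad y_i \mathbf{w}^{T} \mathbf{x}_i \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad i = 1, \ldots, N. $$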

Part 2: leave-one-out error for linearly separable data

In this question we will explore the leave-one-out error of a hard-margin SVM trained on a linearly separable dataset. First, we define a key support vector as a support vector whose removal from the dataset changes the maximum-margin hyperplane.

  • Consider the following statement: the set of all key support vectors is unique. Prove this, or give a counterexample.
  • In class we argued that the fraction of training examples that are support vectors provides a bound on the leave-one-out error. Using the definition of key support vectors, prove that the following tighter bound on the leave-one-out cross-validation error can be obtained:

$$ E_{cv} \leq \frac{\textrm{number of key support vectors}}{N}, $$ where $N$ is the number of training examples.

Part 3: Soft-margin SVM for separable data

Suppose you are given a linearly separable dataset, and you are training the soft-margin SVM, which uses slack variables with the soft-margin constant $C$ set to some positive value. Consider the following statement:

Since increasing the $\xi_i$ can only increase the objective of the primal problem (which we are trying to minimize), at the solution to the primal problem, all the training examples will have $\xi_i$ equal to zero.

Is this true or false? Explain!

Part 4: Using SVMs

The data for this question comes from a database called SCOP (structural classification of proteins), which classifies proteins into classes according to their structure (download it from here). The data is a two-class classification problem: distinguishing a particular class of proteins from a selection of examples sampled from the rest of the SCOP database, using features derived from their sequence (a protein is a chain of amino acids, so as computer scientists we can consider it a sequence over the alphabet of the 20 amino acids).

I chose to represent the proteins in terms of their motif composition. A sequence motif is a pattern of amino acids that is conserved in evolution. Motifs are usually associated with regions of the protein that are important for its function, and are therefore useful in differentiating between classes of proteins. A given protein will typically contain only a handful of motifs, so the data is very sparse; therefore, only the non-zero elements of the data are represented. Each line in the file describes a single example. Here's an example from the file:

d1scta_,a.1.1.2 31417:1.0 32645:1.0 39208:1.0 42164:1.0 ....

The first column is the ID of the protein, and the second is the class it belongs to (the values of the class variable are a.1.1.2, which is the given class of proteins, and rest, the negative class representing the rest of the database); the remainder consists of elements of the form feature_id:value, which provide the ID of a feature and the value associated with it. This is an extension of the format used by LibSVM, which scikit-learn can read. See a discussion of this format and how to read it here.
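
Note that the first field of each line is a string (the protein ID and class joined by a comma), while stock LibSVM loaders expect a numeric label, so a small hand-rolled parser is one option. A minimal sketch, assuming the line format shown above (the file name and the ±1 label encoding are illustrative choices, not prescribed by the assignment):

  import numpy as np
  from scipy.sparse import csr_matrix

  def load_scop(path):
      """Parse lines of the form 'id,class fid:val fid:val ...'."""
      ids, labels, rows, cols, vals = [], [], [], [], []
      for line in open(path):
          tokens = line.split()
          if not tokens:
              continue
          prot_id, cls = tokens[0].split(',')
          for token in tokens[1:]:
              fid, val = token.split(':')
              rows.append(len(ids))   # index of the current example
              cols.append(int(fid))
              vals.append(float(val))
          ids.append(prot_id)
          # a.1.1.2 is the positive class; every other label is 'rest'
          labels.append(1 if cls == 'a.1.1.2' else -1)
      X = csr_matrix((vals, (rows, cols)), shape=(len(ids), max(cols) + 1))
      return ids, X, np.array(labels)

  ids, X, y = load_scop('scop_motifs.txt')  # hypothetical file name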

We note that the data is very high dimensional, since the number of conserved patterns in the space of all proteins is large. The data was constructed as part of an analysis aimed at detecting distant relationships between proteins.

In this part of the assignment we will explore the dependence of classifier accuracy on the kernel, kernel parameters, kernel normalization, and the SVM soft-margin parameter. In your implementation you can use the scikit-learn svm class.

In this question we will consider both the Gaussian and polynomial kernels: $$ K_{gauss}(\mathbf{x}, \mathbf{x}') = \exp(-\gamma || \mathbf{x} - \mathbf{x}' ||^2) $$ and $$ K_{poly}(\mathbf{x}, \mathbf{x}') = (\mathbf{x}^T \mathbf{x}' + 1) ^{p}. $$
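
For reference, both kernels map directly onto scikit-learn's SVC; this is a sketch, where setting gamma=1 and coef0=1 makes SVC's polynomial kernel, (gamma x^T x' + coef0)^degree, match $K_{poly}$ as written (the values of C, gamma, and degree below are placeholders):

  from sklearn.svm import SVC

  svm_gauss = SVC(kernel='rbf', C=1.0, gamma=0.1)    # K_gauss with this gamma
  svm_poly = SVC(kernel='poly', C=1.0, degree=3,     # K_poly with p = 3
                 gamma=1.0, coef0=1.0)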

Plot the accuracy of the SVM, measured as the area under the ROC curve, as a function of both the soft-margin parameter of the SVM and the free parameter of the kernel function. Accuracy should be measured in five-fold cross-validation. Show a couple of representative cross sections of this plot: one for a fixed value of the soft-margin parameter, and one for a fixed value of the kernel parameter. Comment on the results. When exploring the values of a continuous classifier/kernel parameter it is useful to use values distributed on an exponential grid, i.e. something like 0.01, 0.1, 1, 10, 100 (note that the degree of the polynomial kernel is not such a parameter).
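
A minimal sketch of such a sweep for the Gaussian kernel, assuming X and y were loaded as above (the grids are illustrative, not prescribed):

  import numpy as np
  from sklearn.svm import SVC
  from sklearn.model_selection import cross_val_score

  C_grid = [0.01, 0.1, 1, 10, 100]
  gamma_grid = [0.01, 0.1, 1, 10, 100]
  auc = np.zeros((len(C_grid), len(gamma_grid)))
  for i, C in enumerate(C_grid):
      for j, gamma in enumerate(gamma_grid):
          clf = SVC(kernel='rbf', C=C, gamma=gamma)
          auc[i, j] = cross_val_score(clf, X, y, cv=5,
                                      scoring='roc_auc').mean()
  # each row of auc is a cross section at fixed C; each column, at fixed gamma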

Next, we will compare the accuracy of an SVM with a Gaussian kernel on the raw data with the accuracy obtained when the data is normalized to unit vectors (the values of the features of each example are divided by the example's norm). This is different from standardization, which operates at the level of individual features. Normalizing to unit vectors is more appropriate for this dataset because it is sparse, i.e. most of the features are zero. Perform the comparison by measuring the area under the ROC curve in five-fold cross-validation, where the optimal SVM/kernel parameters are chosen using grid search on the training set of each fold. Use the scikit-learn grid-search class for model selection.
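
One way to set this up, sketched with scikit-learn's GridSearchCV nested inside an outer five-fold loop (the parameter grid is an assumption):

  from sklearn.preprocessing import normalize
  from sklearn.model_selection import GridSearchCV, cross_val_score
  from sklearn.svm import SVC

  X_unit = normalize(X)  # scale each example (row) to unit Euclidean norm
  param_grid = {'C': [0.01, 0.1, 1, 10, 100],
                'gamma': [0.01, 0.1, 1, 10, 100]}
  for name, data in [('raw', X), ('normalized', X_unit)]:
      # the inner grid search picks C and gamma on each training fold;
      # the outer cross_val_score measures AUC on the held-out fold
      inner = GridSearchCV(SVC(kernel='rbf'), param_grid,
                           scoring='roc_auc', cv=5)
      outer = cross_val_score(inner, data, y, cv=5, scoring='roc_auc')
      print(name, outer.mean())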

Finally, visualize the kernel matrix associated with the dataset. Explain the structure that you are seeing in the plot (it is more interesting when the data is normalized).
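
A sketch of one way to produce this plot, assuming the normalized data from the previous step; sorting the examples by class is an added assumption that makes any block structure easier to see, and the gamma value is a placeholder:

  import numpy as np
  import matplotlib.pyplot as plt
  from sklearn.metrics.pairwise import rbf_kernel

  order = np.argsort(y)                     # group examples by class
  K = rbf_kernel(X_unit[order], gamma=1.0)  # Gaussian kernel matrix
  plt.imshow(K, interpolation='nearest')
  plt.colorbar(label="K(x, x')")
  plt.title('Gaussian kernel matrix, normalized data')
  plt.savefig('kernel_matrix.png')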

Submission

Submit your report via Canvas. Python code can be displayed in your report if it is short and helps the reader understand what you have done. The sample LaTeX document provided in assignment 1 shows how to display Python code. Submit the Python code that was used to generate the results as a file called assignment4.py (you can split the code into several .py files; Canvas allows you to submit multiple files). Typing

$ python assignment4.py

should generate all the tables/plots used in your report.

Grading

A few general guidelines for this and future assignments in the course:

  • Your answers should be concise and to the point.
  • You need to use LaTeX to write the report.
  • The report should be well structured and clearly written, with good grammar and correct spelling, and with good formatting of math, code, figures, and captions (every figure and table needs a caption that explains what is being shown).
  • Whenever you use information from the web or published papers, a reference should be provided. Failure to do so is considered plagiarism.
  • Always provide a description of the method you used to produce a given result in sufficient detail such that the reader can reproduce your results on the basis of the description. You can use a few lines of python code or pseudo-code.
  • You can provide results in the form of tables, figures, or text, whatever form is most appropriate for a given problem. There are no rules about how much space each answer should take, but we will take off points if we have to wade through a lot of redundant data.
  • In any machine learning paper there is a discussion of the results; we expect the same in your assignments, so reason about your results.

We will take off points if these guidelines are not followed.

Grading sheet for assignment 4

Part 1:  40 points.
( 5 points):  Primal SVM formulation is correct
(10 points):  Lagrangian found correctly
(10 points):  Derivation of saddle point equations
(15 points):  Derivation of the dual

Part 2:  15 points.

Part 3:  15 points.

Part 4:  30 points.
(15 points):  Accuracy as a function of parameters and discussion of the results
(10 points):  Comparison of normalized and non-normalized kernels and correct model selection
( 5 points):  Visualization of the kernel matrix and observations made about it