{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Assignment 4: Support Vector Machines\n", "\n", "Due: October 19th at 11:59pm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 1: SVM with no bias term \n", "\n", "Formulate a soft-margin SVM without the bias term, i.e. one where the discriminant function is equal to $\\mathbf{w}^{T} \\mathbf{x}$.\n", "Derive the saddle point conditions, KKT conditions and the dual.\n", "Compare it to the standard SVM formulation that was derived in class.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Part 1 answer " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 2: Soft-margin SVM for separable data \n", "\n", "Suppose you are given a linearly separable dataset, and you are training the soft-margin SVM, which uses slack variables with the soft-margin constant $C$ set \n", "to some positive value.\n", "Consider the following statement:\n", "\n", "Since increasing the $\\xi_i$ can only increase the cost function of the primal problem (which\n", "we are trying to minimize), at the solution to the primal problem, i.e. the hyperplane that minimizes the primal cost function, all the\n", "training examples will have $\\xi_i$ equal\n", "to zero. \n", "\n", "Is this true or false? Explain!" 
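, "\n", "To probe this claim empirically, it may help to fit the soft-margin SVM on a small separable dataset and inspect the slack values directly. Below is a minimal sketch (assuming scikit-learn is available; the toy data is made up for illustration) that recovers $\\xi_i = \\max(0, 1 - y_i f(\\mathbf{x}_i))$ from the fitted decision function $f$:\n", "\n", "```python\n", "import numpy as np\n", "from sklearn.svm import SVC\n", "\n", "# Made-up, linearly separable one-dimensional data\n", "X = np.array([[-2.0], [-1.5], [-1.0], [1.0], [1.5], [2.0]])\n", "y = np.array([-1, -1, -1, 1, 1, 1])\n", "\n", "for C in [0.01, 1.0, 100.0]:\n", "    clf = SVC(kernel='linear', C=C).fit(X, y)\n", "    # Slack variables: xi_i = max(0, 1 - y_i * f(x_i))\n", "    xi = np.maximum(0, 1 - y * clf.decision_function(X))\n", "    print('C =', C, ' total slack =', xi.sum())\n", "```\n", "\n", "Observing whether the total slack is actually zero at the solution for each value of $C$ should help you evaluate the statement.\n"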
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Your answer for part 2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 3: SVMs in practice\n", "\n", "The data for this question comes from a database called SCOP (structural\n", "classification of proteins), which classifies proteins into classes\n", "according to their structure (download it from [here](http://www.cs.colostate.edu/~cs545/fall18/notebooks/scop_motif.data)).\n", "The data is a two-class classification\n", "problem\n", "of distinguishing a particular class of proteins from a selection of\n", "examples sampled from the rest of the SCOP database,\n", "using features derived from their sequence (a protein is a chain of amino acids, so as computer scientists, we can consider it as a sequence over the alphabet of the 20 amino acids).\n", "I chose to represent the proteins in terms of their [sequence motif](https://en.wikipedia.org/wiki/Sequence_motif) composition. A sequence motif is a\n", "pattern of amino acids (or DNA) that is conserved in evolution.\n", "Motifs are usually associated with regions of the protein that are\n", "important for its function, and are therefore useful in differentiating between classes of proteins.\n", "A given protein will typically contain only a handful of motifs, and\n", "so the data is very sparse.\n", "Therefore, only the non-zero elements of the data are represented.\n", "Each line in the file describes a single example.\n", "
Here's an example from the file:\n", "\n", "```\n", "d1scta_,a.1.1.2 31417:1.0 32645:1.0 39208:1.0 42164:1.0 ....\n", "```\n", "\n", "The first column is the identifier of the protein, the second is the class it belongs to (the values for the class variable are ``a.1.1.2``, which is the given class of proteins, and ``rest``, which is the negative class representing the rest of the database); the remainder consists of tokens of the form ``feature_id:value``, which provide the id of a feature and the value associated with it.\n", "This is an extension of the LibSVM format, which scikit-learn can read.\n", "See a discussion of this format and how to read it [here](http://scikit-learn.org/stable/datasets/#datasets-in-svmlight-libsvm-format).\n", "\n", "We note that the data is very high dimensional, since\n", "the number of conserved patterns in the space of all proteins is\n", "large.\n", "The data was constructed as part of the following analysis of detecting distant relationships between proteins:\n", "\n", " * A. Ben-Hur and D. Brutlag. [Remote homology detection: a motif based approach](http://bioinformatics.oxfordjournals.org/content/19/suppl_1/i26.abstract). In: Proceedings, eleventh international conference on intelligent systems for molecular biology. Bioinformatics 19(Suppl.\n", "
1): i26-i33, 2003.\n", "\n", "Your task is to explore the dependence of classifier accuracy on \n", "the kernel, kernel parameters, kernel normalization, and the SVM soft-margin parameter.\n", "In your implementation you can use the scikit-learn [svm](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) class.\n", "\n", "Use both the Gaussian and polynomial kernels:\n", "$$\n", "K_{gauss}(\\mathbf{x}, \\mathbf{x}') = \\exp(-\\gamma || \\mathbf{x} - \\mathbf{x}' ||^2)\n", "$$\n", "and\n", "$$\n", "K_{poly}(\\mathbf{x}, \\mathbf{x}') = (\\mathbf{x}^T \\mathbf{x}' + 1)^{p}.\n", "$$\n", "\n", "When using scikit-learn, make sure you use the version of the polynomial kernel shown above (the scikit-learn default is the homogeneous [polynomial kernel](https://en.wikipedia.org/wiki/Polynomial_kernel)).\n", "Plot the accuracy of the SVM, measured using the area under the ROC curve,\n", "as a function of both the soft-margin parameter of the SVM and the free parameter of the kernel function.\n", "Accuracy should be measured in five-fold cross-validation.\n", "Show a couple of representative cross sections of this plot: one for a given value\n", "of the soft-margin parameter, and one for a given value of the kernel parameter.\n", "Comment on the results. When exploring the values of a continuous\n", "classifier/kernel parameter it is\n", "useful to use values that are distributed on an exponential grid,\n", "i.e. something like 0.01, 0.1, 1, 10, 100 (note that the degree of the\n", "polynomial kernel is not such a parameter).\n", "\n", "Next, compare the accuracy of an SVM with a Gaussian kernel on the raw data with the accuracy obtained when the data is normalized to unit vectors (the feature values of each example are divided by the example's norm).\n", "This is different from standardization, which operates at the level of individual features. Normalizing to unit vectors is more appropriate for this dataset as it is sparse, i.e.\n", "
most of the features are zero.\n", "Perform the comparison using the accuracy measured by the area under the ROC curve in five-fold nested cross-validation, where the classifier/kernel parameters are chosen using grid search.\n", "Use the scikit-learn [grid search](http://scikit-learn.org/stable/tutorial/statistical_inference/model_selection.html)\n", "class for model selection. As a reference, compare the results with those obtained using a linear SVM and with regularized logistic regression, where model selection is performed by nested cross-validation.\n", "\n", "Perform a similar comparison of a linear SVM, an SVM with a Gaussian kernel, and logistic regression on an additional dataset of your choice. Depending on the dataset chosen, your data may benefit from normalization to unit vectors or standardization.\n", "Comment on the results, and describe what you have learned from these comparisons.\n", "\n", "Your final task is to visualize the kernel matrix associated with the dataset.\n", "Use the kernel matrix associated with the linear kernel.\n", "Explain the structure that you are seeing in the plot (it is more\n", "interesting when the data is normalized).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your answer here." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Your Report\n", "\n", "Answer the questions in the cells reserved for that purpose.\n", "\n", "Mathematical equations should be written as LaTeX equations; the assignment contains multiple examples of both inline formulas (such as the one exemplifying the notation for the norm of a vector, $||\\mathbf{x}||$) and formulas that appear on separate lines, e.g.:\n", "\n", "$$\n", "||\\mathbf{x}|| = \\sqrt{\\mathbf{x}^T \\mathbf{x}}.\n", "$$\n", "\n", "\n", "\n", "### Submission\n", "\n", "Submit your report as a Jupyter notebook via Canvas.\n", "
Running the notebook should generate all the plots and results in your notebook.\n", "\n", "\n", "### Grading \n", "\n", "Here is what the grade sheet will look like for this assignment. A few general guidelines for this and future assignments in the course:\n", "\n", " * Your answers should be concise and to the point. We will take off points if that is not the case.\n", " * Always provide a description of the method you used to produce a given result in sufficient detail such that the reader can reproduce your results on the basis of the description. You can use a few lines of python code or pseudo-code.\n", "\n", "\n", "Grading sheet for the assignment:\n", "\n", "```\n", "Part 1: 40 points.\n", "( 5 points): Primal SVM formulation is correct\n", "(10 points): Lagrangian found correctly\n", "(10 points): Derivation of saddle point equations\n", "(15 points): Derivation of the dual\n", "\n", "Part 2: 20 points.\n", "\n", "Part 3: 40 points.\n", "(20 points): Accuracy as a function of parameters and discussion of the results\n", "(15 points): Comparison of normalized and non-normalized kernels and correct model selection\n", "( 5 points): Visualization of the kernel matrix and observations made about it\n", "```\n", "\n", "Grading will be based on the following criteria:\n", "\n", " * Correctness of answers to math problems\n", " * Math is formatted as LaTex equations\n", " * Correct behavior of the required code\n", " * Easy to understand plots \n", " * Overall readability and organization of the notebook\n", " * Effort in making interesting observations where requested.\n", " * Conciseness. 
Points may be taken off if the notebook is overly long.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.5" } }, "nbformat": 4, "nbformat_minor": 1 }