{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## CS545 Assignment 1\n", "\n", "**Due date:** 9/7 at 11:59pm\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Preliminaries\n", "\n", "We'll start with a little bit of notation... In supervised learning we work with a dataset of $N$ labeled examples: $\mathcal{D} = \{ (\mathbf{x}_i, y_i) \}_{i=1}^N$, where $\mathbf{x}_i$ is a $d$-dimensional vector (we always use boldface to denote vectors), and $y_i$ is the label associated with $\mathbf{x}_i$. In a binary classification problem we'll usually use the values $\pm 1$ to denote positive/negative examples.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 1: Measuring classifier error\n", "\n", "First let's recall that the estimate of a classifier's error is given by:\n", "\n", "$$E(h) = \frac{1}{N}\sum_{i=1}^N I(h(\mathbf{x}_i) \neq y_i),$$\n", "\n", "where $I(\cdot)$ is the indicator function, and $h$ is the model/hypothesis we are trying to evaluate.\n", "\n", "Whenever we train a classifier, we want to know how well it performs. This is done by computing an estimate of the out-of-sample error: pick an independent test set that was not used during training, and compute the error of your classifier on this test set. You always want to verify that your classifier is actually learning something, i.e. that its error is smaller than what we would expect by chance from a model that simply guesses or always predicts a fixed answer. Consider the following classifier, which always assigns an example to the majority class, i.e. the class to which the largest number of training examples belong. 
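\n", "\n", "As a concrete illustration, here is how the majority classifier and its error estimate could be computed with numpy. This is only a minimal sketch; the arrays `y_train` and `y_test` are made-up placeholder labels, not real data:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# toy +1/-1 labels (hypothetical placeholders)\n",
"y_train = np.array([1, -1, -1, -1, 1, -1])\n",
"y_test = np.array([1, -1, -1, 1])\n",
"\n",
"# the majority classifier always predicts the most common training label\n",
"majority_label = 1 if np.sum(y_train == 1) >= np.sum(y_train == -1) else -1\n",
"predictions = np.full_like(y_test, majority_label)\n",
"\n",
"# E(h): the fraction of misclassified test examples\n",
"error = np.mean(predictions != y_test)  # 0.5 for these toy labels\n",
"```\n",
"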
\n", "\n", "Answer the following:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " * Suppose you have data that is very imbalanced, and let's say for concreteness that we're working with a binary classification problem where the number of negative examples is much larger than the number of positive examples. What can you say about the estimated error of the majority classifier? What issue does that raise about evaluating classifiers using this measure? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**your answer here**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To address this issue, it has been suggested to assign different costs to the different types of errors using a cost matrix $c(y_i, h(\mathbf{x}_i))$, where $y_i$ is the actual class of example $i$, and $h(\mathbf{x}_i)$ is the predicted class. For a binary classification problem this is a $2 \times 2$ matrix, and we'll assume there is no cost associated with a correct classification, which leaves two components to be determined:\n", "\n", " * $c_r = c(+1, -1)$, which is the reject cost (the cost of a false negative)\n", " * $c_a = c(-1, +1)$, which is the accept cost (the cost of a false positive).\n", "\n", "The cost matrix is incorporated into the computation of classifier error as follows.\n", "\n", "The regular error\n", "$$E(h) = \frac{1}{N}\sum_{i=1}^N I(h(\mathbf{x}_i) \neq y_i)$$\n", "is now replaced with:\n", "$$E_{cost}(h) = \frac{1}{N}\sum_{i=1}^N c(y_i, h(\mathbf{x}_i)) \cdot I(h(\mathbf{x}_i) \neq y_i)$$\n", "\n", "With these definitions, answer the following:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " * How should we choose $c_r$ and $c_a$ such that the majority classifier and the minority classifier both have an error of 0.5? (The minority classifier is analogous to the majority classifier, except that it classifies everything as positive, since we assumed the positive class has fewer representatives.) 
Section 1.4.1 in the book has a brief discussion of error measures." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**your answer here**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 2: The nearest centroid classifier\n", "\n", "The [nearest centroid classifier](https://en.wikipedia.org/wiki/Nearest_centroid_classifier) classifies a data point $\mathbf{x}$ according to the class of the nearest centroid. More formally, let $C_k$ be the set of examples that belong to class $k$, and let\n", "$$\mu_k = \frac{1}{|C_k|} \sum_{i \in C_k} \mathbf{x}_i,$$\n", "where $|C_k|$ is the cardinality of the set $C_k$. The nearest centroid classifier predicts the class of a point $\mathbf{x}$ as:\n", "\n", "$$h(\mathbf{x}) = \textrm{argmin}_k ||\mathbf{x} - \mu_k||,$$\n", "where $||\mathbf{x}||$ is the [Euclidean norm](https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm) of $\mathbf{x}$.\n", "\n", "Show that for a binary classification problem where the number of positive examples equals the number of negative examples, the nearest centroid classifier can be expressed as a linear classifier with the weight vector\n", "$$\mathbf{w} = \frac{1}{N}\sum_{i=1}^N y_i \mathbf{x}_i.$$\n", "Hint: consider the vector that connects the centroids of the two classes, and draw a figure in two dimensions to help you think about the problem. Also note that this form only holds if the two classes have an equal number of examples, so we'll assume that is the case.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**your answer here**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 3: Are my features/variables/attributes useful?\n", "\n", "In order to obtain an accurate classifier you need good features. But what does that mean? In this task we will explore this question, and how to visually inspect a dataset to identify useful features.\n", "\n", "First we need some data... 
the UCI machine learning repository contains a large selection of datasets that we can experiment with. In this exercise we'll focus on the\n", "[Heart disease diagnosis dataset](http://archive.ics.uci.edu/ml/datasets/Heart+Disease).\n", "This dataset has several data files associated with it. The easiest option is to use [this file](http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data), in which categorical variables have been replaced with numerical values. The last column in the file contains the label associated with each example. In the processed file, a label `0` corresponds to a healthy individual; other values correspond to varying levels of heart disease. In your experiments, focus on the binary classification problem of trying to distinguish between healthy and non-healthy individuals. The repository also contains [this file](http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/cleve.mod), which has the data with categorical rather than numerical variables.\n", "\n", "Most data will come as a data matrix in CSV or related formats.\n", "Each row in the file corresponds to a training example.\n", "Note that this dataset contains both numerical variables and categorical variables, and you will be asked to treat those differently.\n", "\n", "The difference between categorical and numerical variables:\n", "\n", "**Categorical variables** are variables whose values fall into discrete categories. For example:\n", "\n", " * Gender (\"male\", \"female\")\n", " * Degree program of a student (\"computer science\", \"math\", \"statistics\", ...).\n", "\n", "**Numerical variables** are variables whose values are numerical, e.g. age, grayscale level in an image, blood pressure, etc. Note that numerical variables can be either **discrete** or **continuous**. 
Age, when measured in years, would be considered a discrete value, whereas age measured in seconds, allowing for fractions, would be considered a continuous value.\n", "\n", "To read a data matrix you can use numpy's [genfromtxt](http://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html) function.\n", "For example, to read the heart dataset and separate the labels from the features, you can use the following commands:\n", "\n", "```python\n", ">>> import numpy as np\n", ">>> data = np.genfromtxt(\"processed.cleveland.data\", delimiter=\",\")\n", ">>> X = data[:, :-1]  # the features\n", ">>> y = data[:, -1]   # the labels (last column)\n", "```\n", "\n", "Note that since the file contains both the labels and the data points, they need to be separated out, as done above.\n", "As an alternative, you can use the `usecols` option of `genfromtxt` to directly read the columns you are interested in.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Your task is to visualize the usefulness of the features that make up the dataset.\n", "We will use different visualizations for categorical and numerical features.\n", "For a numerical feature, generate two histograms of its values: one for the positive examples, and one for the negative examples. Use matplotlib's [hist](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.hist) function to generate the histogram, and use the `density=True` option (`normed=True` in older versions of matplotlib) to generate a histogram normalized to be a distribution.\n", "Another option is to use a [boxplot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.boxplot.html).\n", "For categorical variables, [barplots](https://matplotlib.org/gallery/lines_bars_and_markers/bar_stacked.html) are a good choice." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " * What does this kind of visualization tell us about the usefulness of a feature for classifying a dataset? 
Demonstrate this idea using a dataset from the UCI repository: plot histograms for four features, two that you think are going to be useful, and two that you think have more limited usefulness. Explain your reasoning!\n", "In plotting, create a single figure composed of four [subplots](http://matplotlib.org/examples/pylab_examples/subplots_demo.html), one for each feature. This is a convenient way of grouping together related plots.\n", "When choosing which features to display, simply use your judgment.\n", "Would you consider the variable `ca` a categorical variable?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**your answer here**" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# your answer here: include the relevant plots and the Python code that generates them" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Your Report\n", "\n", "Answer the questions in the cells reserved for that purpose.\n", "\n", "Mathematical equations should be written as LaTeX equations; the assignment contains multiple examples of both inline formulas (such as the one exemplifying the notation for the norm of a vector, $||\mathbf{x}||$) and formulas that appear on separate lines, e.g.:\n", "\n", "$$\n", "||\mathbf{x}|| = \sqrt{\mathbf{x}^T \mathbf{x}}.\n", "$$\n", "\n", "### Submission\n", "\n", "Submit your report as a Jupyter notebook via Canvas. Python code can be displayed in your report if it is short and helps the reader understand what you have done. Running the notebook should generate all the plots in your notebook. If your code is very long, you can submit an additional file called `assignment1.py` that gets imported from your notebook.\n", "\n", "### Grading\n", "\n", "Here is what the grade sheet will look like for this assignment. A few general guidelines for this and future assignments in the course:\n", "\n", " * Your answers should be concise and to the point. 
We will take off points if that is not the case.\n", " * Always provide a description of the method you used to produce a given result, in sufficient detail that the reader can reproduce your results on the basis of the description. You can use a few lines of Python code or pseudo-code.\n", "\n", "```\n", "Grading sheet for assignment 1\n", "\n", "Part 1: 30 points.\n", "\n", "Part 2: 35 points.\n", "\n", "Part 3: 35 points.\n", "(20 points): Histograms of informative/non-informative features.\n", "(15 points): Discussion of the plots.\n", "```\n", "\n", "Grading will be based on the following criteria:\n", "\n", " * Correctness of answers to math problems\n", " * Math formatted as LaTeX equations\n", " * Correct behavior of the required code\n", " * Easy-to-understand plots\n", " * Overall readability and organization of the notebook\n", " * Effort in making interesting observations where requested\n", " * Conciseness: please make your notebooks as concise as possible.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 1 }