Main.Projects History


Changed line 29 from:

Here are the links to the datasets you will use in your experiments:

to:

Here are the links to the datasets you will use in your feature selection experiments:

Added lines 28-33:

Here are the links to the datasets you will use in your experiments:

  • Gisette
  • Dexter
  • Arcene
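The Gisette, Dexter, and Arcene files ship as plain-text matrices; a minimal loading sketch, assuming the usual challenge layout of space-separated feature values in a `.data` file with ±1 labels in a matching `.labels` file (shown here on a tiny in-memory stand-in rather than the real files):

```python
# Sketch: loading a challenge-style dataset split (e.g. Gisette).
# Assumption: space-separated numeric features, one example per line,
# with +/-1 labels in a separate file -- verify against the real files.
import io
import numpy as np

def load_split(data_file, labels_file):
    """Load a feature matrix X and label vector y from challenge-format files."""
    X = np.loadtxt(data_file)
    y = np.loadtxt(labels_file)
    return X, y

# Tiny in-memory stand-in for gisette_train.data / gisette_train.labels:
data = io.StringIO("0 5 0 12\n3 0 0 7\n")
labels = io.StringIO("1\n-1\n")
X, y = load_split(data, labels)
print(X.shape, y.tolist())  # -> (2, 4) [1.0, -1.0]
```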
Changed lines 17-18 from:
  • Hui Zou and Trevor Hastie. Regularization and Variable Selection via the Elastic Net. JRSSB (2005) 67(2) 301-320. An R package elasticnet is available from CRAN.
to:
  • Hui Zou and Trevor Hastie. Regularization and Variable Selection via the Elastic Net. JRSSB (2005) 67(2) 301-320. An R package elasticnet is available from CRAN. (Majdi)
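The elastic-net idea in the entry above can be sketched with scikit-learn's `ElasticNet` as a stand-in for the CRAN `elasticnet` package; the data and hyperparameters here are purely illustrative:

```python
# Sketch: elastic-net regularization as an embedded feature selector.
# Assumption: scikit-learn stands in for the R elasticnet package;
# alpha / l1_ratio are illustrative, not tuned.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 30))
# Only features 0 and 5 carry signal; the other 28 are noise.
y = 2.0 * X[:, 0] - 3.0 * X[:, 5] + 0.1 * rng.normal(size=100)

model = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(X, y)
kept = np.flatnonzero(model.coef_)  # features with nonzero coefficients
print(kept)
```

The L1 component of the penalty zeroes out most noise coefficients, so `kept` recovers a sparse feature subset.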
Changed line 23 from:
  • T. Joachims. A Support Vector Method for Multivariate Performance Measures. Proceedings of the International Conference on Machine Learning (ICML), 2005. code. Can be used for feature selection like SVMs are used in the RFE method. (Majdi)
to:
  • T. Joachims. A Support Vector Method for Multivariate Performance Measures. Proceedings of the International Conference on Machine Learning (ICML), 2005. code. Can be used for feature selection like SVMs are used in the RFE method.
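The "used like SVMs in the RFE method" remark above can be sketched with scikit-learn's `RFE` wrapping a linear SVM (synthetic data; this shows the RFE pattern, not Joachims' multivariate-performance method itself):

```python
# Sketch: recursive feature elimination (RFE) driven by linear-SVM weights.
# Assumption: scikit-learn is available; data is synthetic and illustrative.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 20))
y = (X[:, 3] - X[:, 7] > 0).astype(int)  # two informative features

# Repeatedly fit the SVM and drop the lowest-weight feature until 2 remain.
rfe = RFE(LinearSVC(dual=False), n_features_to_select=2).fit(X, y)
ranked = np.flatnonzero(rfe.support_)    # indices of surviving features
print(ranked)
```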
Changed line 27 from:
  • Christoph H. Lampert. Maximum Margin Multi-Label Structured Prediction. Neural Information Processing Systems (NIPS), 2011.
to:
  • Wei Bi and James Kwok. Multi-Label Classification on Tree- and DAG-Structured Hierarchies. International Conference on Machine Learning (ICML-11), 2011. (Indika)
Changed line 23 from:
  • T. Joachims. A Support Vector Method for Multivariate Performance Measures. Proceedings of the International Conference on Machine Learning (ICML), 2005. code. Can be used for feature selection like SVMs are used in the RFE method.
to:
  • T. Joachims. A Support Vector Method for Multivariate Performance Measures. Proceedings of the International Conference on Machine Learning (ICML), 2005. code. Can be used for feature selection like SVMs are used in the RFE method. (Majdi)
Changed line 11 from:
  • Yeung K, Bumgarner R. Multiclass classification of microarray data with repeated measurements: application to cancer. Genome Biology. 4:R83, 2003.
to:
  • Yeung K, Bumgarner R. Multiclass classification of microarray data with repeated measurements: application to cancer. Genome Biology. 4:R83, 2003. (Simon)
Changed line 13 from:
  • Y. Sun, S. Todorovic, and S. Goodison. Local Learning Based Feature Selection for High Dimensional Data Analysis. IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), vol. 32, no. 9, pp. 1610-1626, 2010.
to:
  • Y. Sun, S. Todorovic, and S. Goodison. Local Learning Based Feature Selection for High Dimensional Data Analysis. IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), vol. 32, no. 9, pp. 1610-1626, 2010. (Nand)
Changed line 19 from:
  • Ji Zhu, Saharon Rosset, Trevor Hastie and Rob Tibshirani. 1-norm support vector machines. In: Neural Information Processing Systems (NIPS) 16, 2004. 1-norm classifiers create very sparse representations, so are useful for feature selection.
to:
  • Ji Zhu, Saharon Rosset, Trevor Hastie and Rob Tibshirani. 1-norm support vector machines. In: Neural Information Processing Systems (NIPS) 16, 2004. 1-norm classifiers create very sparse representations, so are useful for feature selection. (Prathamesh)
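The sparsity claim in the entry above ("1-norm classifiers create very sparse representations") can be sketched with an L1-penalized linear SVM; scikit-learn's `LinearSVC` is used as an illustration, not the Zhu et al. algorithm itself:

```python
# Sketch: 1-norm (L1-penalized) linear SVM as an embedded feature selector.
# Assumption: scikit-learn stands in for the paper's solver; C is illustrative.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))           # 50 features, most irrelevant
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # only features 0 and 1 matter

clf = LinearSVC(penalty="l1", dual=False, C=0.1).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])  # indices of nonzero weights
print(selected)
```

Features with zero weight are effectively discarded, which is what makes the 1-norm penalty useful for feature selection.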
Changed line 21 from:
  • Feiping Nie, Heng Huang, Cai Xiao, Chris Ding. Efficient and Robust Feature Selection via Joint L2,1-Norms Minimization. NIPS 2010.
to:
  • Feiping Nie, Heng Huang, Cai Xiao, Chris Ding. Efficient and Robust Feature Selection via Joint L2,1-Norms Minimization. NIPS 2010. (Rehab)
Changed line 9 from:
  • Chris Ding and Hanchuan Peng. Minimum redundancy feature selection from microarray gene expression data. Journal of Bioinformatics and Computational Biology, Vol. 3, No. 2, pp.185-205, 2005.
to:
  • Chris Ding and Hanchuan Peng. Minimum redundancy feature selection from microarray gene expression data. Journal of Bioinformatics and Computational Biology, Vol. 3, No. 2, pp.185-205, 2005. (Joel)
Changed line 23 from:
  • T. Joachims. A Support Vector Method for Multivariate Performance Measures. Proceedings of the International Conference on Machine Learning (ICML), 2005. code. Can be used for feature selection like the RFE method.
to:
  • T. Joachims. A Support Vector Method for Multivariate Performance Measures. Proceedings of the International Conference on Machine Learning (ICML), 2005. code. Can be used for feature selection like SVMs are used in the RFE method.
Changed line 19 from:
  • Ji Zhu, Saharon Rosset, Trevor Hastie and Rob Tibshirani. 1-norm support vector machines. In: Neural Information Processing Systems (NIPS) 16, 2004.
to:
  • Ji Zhu, Saharon Rosset, Trevor Hastie and Rob Tibshirani. 1-norm support vector machines. In: Neural Information Processing Systems (NIPS) 16, 2004. 1-norm classifiers create very sparse representations, so are useful for feature selection.
Changed line 27 from:
  • Christoph H. Lampert. [[http://books.nips.cc/papers/files/nips24/NIPS2011_0207.pdf | Maximum Margin Multi-Label Structured Prediction. Neural Information Processing Systems(NIPS), 2011.
to:
  • Christoph H. Lampert. Maximum Margin Multi-Label Structured Prediction. Neural Information Processing Systems (NIPS), 2011.
Added lines 25-27:

Prediction of protein function

  • Christoph H. Lampert. [[http://books.nips.cc/papers/files/nips24/NIPS2011_0207.pdf | Maximum Margin Multi-Label Structured Prediction. Neural Information Processing Systems(NIPS), 2011.
Changed lines 23-24 from:
  • A. Rakotomamonjy. Optimizing AUC with SVMs. Proceedings of European Conference on Artificial Intelligence Workshop on ROC Curve and AI, Valencia, 2004. code. Can be used for feature selection like the RFE method.
to:
  • T. Joachims. A Support Vector Method for Multivariate Performance Measures. Proceedings of the International Conference on Machine Learning (ICML), 2005. code. Can be used for feature selection like the RFE method.
Changed line 23 from:
  • A. Rakotomamonjy. Optimizing AUC with SVMs. Proceedings of European Conference on Artificial Intelligence Workshop on ROC Curve and AI, Valencia, 2004.
to:
  • A. Rakotomamonjy. Optimizing AUC with SVMs. Proceedings of European Conference on Artificial Intelligence Workshop on ROC Curve and AI, Valencia, 2004. code. Can be used for feature selection like the RFE method.
Added line 23:
  • A. Rakotomamonjy. Optimizing AUC with SVMs. Proceedings of European Conference on Artificial Intelligence Workshop on ROC Curve and AI, Valencia, 2004.
Changed lines 13-14 from:
  • Y. Sun, S. Todorovic, and S. Goodison. [[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.163.2272
 | Local Learning Based Feature Selection for High Dimensional Data Analysis]].  IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), vol. 32, no. 9, pp. 1610-1626, 2010.
to:
  • Y. Sun, S. Todorovic, and S. Goodison. Local Learning Based Feature Selection for High Dimensional Data Analysis. IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), vol. 32, no. 9, pp. 1610-1626, 2010.
Added lines 12-14:
  • Y. Sun, S. Todorovic, and S. Goodison. [[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.163.2272
 | Local Learning Based Feature Selection for High Dimensional Data Analysis]].  IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), vol. 32, no. 9, pp. 1610-1626, 2010.
Changed lines 11-14 from:
  • Yeung K, Bumgarner R. [[http://genomebiology.com/2003/4/12/r83
 | Multiclass classification of microarray data

with repeated measurements: application to cancer]]. Genome Biol. 4:R83, 2003.

to:
  • Yeung K, Bumgarner R. Multiclass classification of microarray data with repeated measurements: application to cancer. Genome Biology. 4:R83, 2003.
Added lines 11-14:
  • Yeung K, Bumgarner R. [[http://genomebiology.com/2003/4/12/r83
 | Multiclass classification of microarray data

with repeated measurements: application to cancer]]. Genome Biol. 4:R83, 2003.

Changed lines 16-17 from:
  • Ji Zhu, Saharon Rosset, Trevor Hastie and Rob Tibshirani. 1-norm support vector machines. In: Neural Information

Processing Systems (NIPS) 16, 2004.

to:
  • Ji Zhu, Saharon Rosset, Trevor Hastie and Rob Tibshirani. 1-norm support vector machines. In: Neural Information Processing Systems (NIPS) 16, 2004.
Changed lines 16-17 from:
  • Ji Zhu, Saharon Rosset, Trevor Hastie and Rob Tibshirani. [[http://www.stat.lsa.umich.edu/~jizhu/pubs/Zhu-NIPS04.pdf | 1-norm

support vector machines]]. In: Neural Information

to:
  • Ji Zhu, Saharon Rosset, Trevor Hastie and Rob Tibshirani. 1-norm support vector machines. In: Neural Information
Changed lines 14-15 from:
  • Hui Zou and Trevor Hastie. [[http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2005.00503.x/full | Regularization and Variable Selection via

the Elastic Net]]. JRSSB (2005) 67(2) 301-320. An R package elasticnet is available from CRAN.

to:
  • Hui Zou and Trevor Hastie. Regularization and Variable Selection via the Elastic Net. JRSSB (2005) 67(2) 301-320. An R package elasticnet is available from CRAN.
Changed line 14 from:
  • Hui Zou and Trevor Hastie. [[http://www.stanford.edu/~hastie/Papers/B67.2%20(2005)%20301-320%20Zou%20&%20Hastie.pdf | Regularization and Variable Selection via
to:
  • Hui Zou and Trevor Hastie. [[http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2005.00503.x/full | Regularization and Variable Selection via
Changed lines 9-10 from:
  • Chris Ding and Hanchuan Peng. Minimum redundancy feature selection from microarray gene expression data.

Journal of Bioinformatics and Computational Biology, Vol. 3, No. 2, pp.185-205, 2005.

to:
  • Chris Ding and Hanchuan Peng. Minimum redundancy feature selection from microarray gene expression data. Journal of Bioinformatics and Computational Biology, Vol. 3, No. 2, pp.185-205, 2005.
Changed lines 3-23 from:

Coming soon!

to:

There are two topics available for projects: feature selection and prediction of protein function. Your first step is to choose a paper whose method you will use/implement.

Feature selection

Filter methods:

  • Chris Ding and Hanchuan Peng. Minimum redundancy feature selection from microarray gene expression data.

Journal of Bioinformatics and Computational Biology, Vol. 3, No. 2, pp.185-205, 2005.

Embedded feature selection methods:

  • Hui Zou and Trevor Hastie. [[http://www.stanford.edu/~hastie/Papers/B67.2%20(2005)%20301-320%20Zou%20&%20Hastie.pdf | Regularization and Variable Selection via

the Elastic Net]]. JRSSB (2005) 67(2) 301-320. An R package elasticnet is available from CRAN.

  • Ji Zhu, Saharon Rosset, Trevor Hastie and Rob Tibshirani. [[http://www.stat.lsa.umich.edu/~jizhu/pubs/Zhu-NIPS04.pdf | 1-norm

support vector machines]]. In: Neural Information Processing Systems (NIPS) 16, 2004.

  • Feiping Nie, Heng Huang, Cai Xiao, Chris Ding. Efficient and Robust Feature Selection via Joint L2,1-Norms Minimization. NIPS 2010.
Added lines 1-3:

Course projects

Coming soon!