Evaluating feature selection for SVMs in high dimensions

Roland Nilsson*, José M. Peña, Johan Björkegren, Jesper Tegner

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

We perform a systematic evaluation of feature selection (FS) methods for support vector machines (SVMs) using simulated high-dimensional data (up to 5000 dimensions). Several findings previously reported at low dimensions do not carry over to high dimensions. For example, none of the FS methods investigated improved SVM accuracy, indicating that the SVM's built-in regularization is sufficient. These results were also validated on microarray data. Moreover, all FS methods tend to discard many relevant features. This is a problem for applications such as microarray data analysis, where identifying all biologically important features is a major objective.
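The comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' protocol: it assumes scikit-learn, uses `make_classification` to simulate high-dimensional data with few informative features, and compares a linear SVM trained on all features against the same SVM preceded by univariate feature selection.

```python
# Hedged sketch (illustrative only, not the paper's experimental setup):
# compare test accuracy of a linear SVM with and without feature selection
# on simulated high-dimensional data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# 5000 features with only 20 informative ones, loosely mimicking the
# high-dimensional regime studied in the paper.
X, y = make_classification(n_samples=200, n_features=5000,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVM alone: relies entirely on the built-in (margin) regularization.
svm_only = LinearSVC(C=1.0, max_iter=10000).fit(X_tr, y_tr)

# SVM preceded by univariate filter-style feature selection (k chosen
# arbitrarily here; the paper evaluates several FS methods).
svm_fs = make_pipeline(SelectKBest(f_classif, k=50),
                       LinearSVC(C=1.0, max_iter=10000)).fit(X_tr, y_tr)

print(f"SVM alone:         {svm_only.score(X_te, y_te):.3f}")
print(f"SVM + SelectKBest: {svm_fs.score(X_te, y_te):.3f}")
```

On a single simulated dataset like this, the relative ordering of the two accuracies varies with the random seed; the paper's conclusion rests on systematic evaluation across many configurations, not on one run.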

Original language: English (US)
Title of host publication: Machine Learning
Subtitle of host publication: ECML 2006 - 17th European Conference on Machine Learning, Proceedings
Publisher: Springer Verlag
Pages: 719-726
Number of pages: 8
ISBN (Print): 354045375X, 9783540453758
State: Published - Jan 1 2006
Event: 17th European Conference on Machine Learning, ECML 2006 - Berlin, Germany
Duration: Sep 18 2006 - Sep 22 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4212 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 17th European Conference on Machine Learning, ECML 2006
Country: Germany
City: Berlin
Period: 09/18/06 - 09/22/06

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)
