Kernel-matching pursuits with arbitrary loss functions

Jason R. Stack, Gerald J. Dobeck, Xuejun Liao, Lawrence Carin

Research output: Contribution to journal › Article › peer-review



The purpose of this research is to develop a classifier capable of state-of-the-art performance in both computational efficiency and generalization ability while allowing the algorithm designer to choose arbitrary loss functions as appropriate for a given problem domain. This is critical in applications involving heavily imbalanced, noisy, or non-Gaussian distributed data. To achieve this goal, a kernel-matching pursuit (KMP) framework is formulated where the objective is margin maximization rather than the standard error minimization. This approach enables excellent performance and computational savings in the presence of large, imbalanced training data sets and facilitates the development of two general algorithms. These algorithms support the use of arbitrary loss functions, allowing the algorithm designer to control the degree to which outliers are penalized and the manner in which non-Gaussian distributed data are handled. Example loss functions are provided, and algorithm performance is illustrated in two groups of experimental results. The first group demonstrates that the proposed algorithms perform equivalently to several state-of-the-art machine learning algorithms on widely published, balanced data. The second group of results illustrates superior performance by the proposed algorithms on imbalanced, non-Gaussian data, achieved by employing loss functions appropriate for the data characteristics and problem domain. © 2009 IEEE.
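To make the idea concrete, the sketch below shows a generic kernel-matching pursuit loop with a pluggable loss function: at each step the kernel atom most correlated with the negative functional gradient of the loss is selected and added with a simple step size. This is only an illustrative sketch of the general KMP-with-arbitrary-loss idea, not the authors' exact algorithms; the RBF kernel, the logistic loss gradient, the step-size rule, and all function names here are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, centers, gamma=1.0):
    # Pairwise RBF kernel between rows of X and rows of centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmp_fit(X, y, loss_grad, n_terms=10, gamma=1.0, lr=0.5):
    """Greedy kernel-matching pursuit with a pluggable loss (illustrative sketch).

    loss_grad(f, y) returns dL/df per sample; labels y are in {-1, +1}.
    Returns the indices of the chosen kernel atoms and their weights.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)           # dictionary: kernels centered at training points
    f = np.zeros(n)                       # current function values on the training set
    chosen, weights = [], []
    for _ in range(n_terms):
        g = loss_grad(f, y)               # functional gradient of the chosen loss
        corr = K.T @ g                    # alignment of each atom with the gradient
        j = int(np.argmax(np.abs(corr)))  # greedily pick the best-aligned atom
        # Damped least-squares step along the chosen atom's direction.
        w = -lr * corr[j] / (K[:, j] @ K[:, j] + 1e-12)
        f += w * K[:, j]
        chosen.append(j)
        weights.append(w)
    return chosen, np.array(weights)

# Example pluggable loss: the logistic loss gradient, whose influence on
# large-margin outliers is bounded -- one instance of the design freedom
# (controlling how outliers are penalized) the abstract highlights.
def logistic_grad(f, y):
    return -y / (1.0 + np.exp(y * f))
```

Swapping `logistic_grad` for another gradient (e.g., a robust or asymmetric loss for imbalanced classes) changes the penalty behavior without touching the pursuit loop, which is the modularity the framework is built around.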
Original language: English (US)
Pages (from-to): 395-405
Number of pages: 11
Journal: IEEE Transactions on Neural Networks
Issue number: 3
State: Published - Jan 1 2009
Externally published: Yes


