Linear discriminant analysis (LDA) is a popular classification technique built on the assumption that the data follow a Gaussian mixture model with a common covariance matrix. A major challenge in the practical use of LDA is that the classifier depends on the mean parameters and on the inverse of the covariance matrix of the Gaussian mixture model, which must be estimated from training data. Several estimators of the inverse covariance matrix can be used. The most common ones are based on regularization, giving the name regularized LDA (R-LDA) to the corresponding classifier. The main advantage of such estimators is their resilience to sampling noise, making them suitable for high-dimensional settings. In this paper, we propose a new estimator that is shown to yield better classification performance than the classical R-LDA. The main principle of our method is the design of an optimized inverse covariance matrix estimator based on the assumption that the true covariance matrix is a low-rank perturbation of a scaled identity matrix. We show that not only is the proposed classifier easier to implement but also, as evidenced by numerical experiments, it outperforms the LDA and R-LDA classifiers on both real and synthetic data.
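To make the baseline concrete, the following is a minimal NumPy sketch of a generic R-LDA classifier as described above: the pooled sample covariance is regularized as S + γI before inversion, and the standard LDA discriminant is applied. This illustrates only the classical R-LDA baseline, not the optimized low-rank estimator proposed in the paper; the dimensions, regularization parameter γ, and synthetic data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class Gaussian data with a shared covariance matrix
# (illustrative sizes: p = 20 features, n = 50 training samples per class).
p, n = 20, 50
mu0 = np.zeros(p)
mu1 = 1.5 * np.ones(p)
A = rng.standard_normal((p, p))
Sigma = A @ A.T / p + np.eye(p)            # common (true) covariance
L = np.linalg.cholesky(Sigma)
X0 = mu0 + rng.standard_normal((n, p)) @ L.T
X1 = mu1 + rng.standard_normal((n, p)) @ L.T

# Sample means and pooled sample covariance
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
S = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) / 2

# R-LDA: regularize before inverting, inv(S + gamma * I)
gamma = 0.1                                 # regularization parameter (illustrative)
S_inv = np.linalg.inv(S + gamma * np.eye(p))

def rlda_score(X):
    # LDA discriminant with equal priors: assign to class 1 if score > 0.
    return (X - (m0 + m1) / 2) @ S_inv @ (m1 - m0)

# Accuracy on fresh test samples drawn from each class
Xt0 = mu0 + rng.standard_normal((200, p)) @ L.T
Xt1 = mu1 + rng.standard_normal((200, p)) @ L.T
acc = (np.mean(rlda_score(Xt0) < 0) + np.mean(rlda_score(Xt1) > 0)) / 2
```

Without the γI term, the pooled covariance can be ill-conditioned (or singular when n is small relative to p), which is exactly the sampling-noise issue that motivates regularized and structured covariance estimators.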
Original language: English (US)
Published in: 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)
State: Published - Aug 28 2018