TY - CONF

T1 - A Model of Double Descent for High-Dimensional Logistic Regression

AU - Deng, Zeyu

AU - Kammoun, Abla

AU - Thrampoulidis, Christos

N1 - KAUST Repository Item: Exported on 2020-10-01

PY - 2020

Y1 - 2020

N2 - We consider a model for logistic regression where only a subset of features of size p is used for training a linear classifier over n training samples. The classifier is obtained by running gradient descent (GD) on the logistic loss. For this model, we investigate the dependence of the classification error on the overparameterization ratio κ = p/n. First, building on known deterministic results on convergence properties of GD, we uncover a phase-transition phenomenon for the case of Gaussian features: the classification error of GD is the same as that of the maximum-likelihood (ML) solution when κ < κ⋆, and the same as that of the max-margin (SVM) solution when κ > κ⋆. Next, using the convex Gaussian min-max theorem (CGMT), we sharply characterize the performance of both the ML and SVM solutions. Combining these results, we obtain curves that explicitly characterize the test error of GD for varying values of κ. The numerical results validate the theoretical predictions and unveil "double-descent" phenomena that complement similar recent observations in linear regression settings.
UR - http://hdl.handle.net/10754/662742

UR - https://ieeexplore.ieee.org/document/9053524/

U2 - 10.1109/ICASSP40776.2020.9053524

DO - 10.1109/ICASSP40776.2020.9053524

M3 - Conference contribution

SN - 978-1-5090-6632-2

BT - ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

PB - IEEE

ER -