An Empirical Study of the Distributed Ellipsoidal Trust Region Method for Large Batch Training

  • Ali Alnasser

Student thesis: Master's Thesis

Abstract

Neural network optimizers are dominated by first-order methods, due to their inexpensive computational cost per iteration. However, it has been shown that first-order optimization is prone to reaching sharp minima when trained with large batch sizes. As the batch size increases, the statistical stability of the problem increases, a regime that is well suited for second-order optimization methods. In this thesis, we study a distributed ellipsoidal trust region model for neural networks. We use a block-diagonal approximation of the Hessian, assigning consecutive layers of the network to each process. We solve in parallel for the update direction of each subset of the parameters. We show that our optimizer is fit for large batch training as well as an increasing number of processes.
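The block-diagonal structure described above can be illustrated with a minimal sketch: each process owns the gradient and Hessian block for its subset of parameters and solves its own trust-region subproblem independently. This is not the thesis implementation; it is a simplified NumPy illustration that assumes a spherical (rather than ellipsoidal) trust region per block and dense Hessian blocks, with the hypothetical helpers `solve_tr_subproblem` and `block_diagonal_tr_step`.

```python
import numpy as np

def solve_tr_subproblem(g, H, delta, tol=1e-10):
    """Minimize g.s + 0.5 s.H.s subject to ||s|| <= delta for one block.
    Illustrative only: assumes a spherical trust region, not the
    ellipsoidal norm used in the thesis."""
    # If the unconstrained Newton step is inside the region, take it.
    try:
        s = np.linalg.solve(H, -g)
        if np.linalg.norm(s) <= delta:
            return s
    except np.linalg.LinAlgError:
        pass
    # Otherwise bisect on the shift lambda so ||(H + lambda I)^-1 g|| = delta.
    n = len(g)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(np.linalg.solve(H + hi * np.eye(n), -g)) > delta:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        try:
            s = np.linalg.solve(H + mid * np.eye(n), -g)
            lo, hi = (mid, hi) if np.linalg.norm(s) > delta else (lo, mid)
        except np.linalg.LinAlgError:
            lo = mid
    return np.linalg.solve(H + hi * np.eye(n), -g)

def block_diagonal_tr_step(grads, hessians, delta):
    """One synchronous update: each (gradient, Hessian-block) pair is
    solved independently -- in the distributed setting, one per process."""
    return [solve_tr_subproblem(g, H, delta) for g, H in zip(grads, hessians)]
```

Because the Hessian is approximated as block diagonal, the per-block solves share no data and can run in parallel, which is what makes the method amenable to distribution across processes.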
Date of Award: Feb 10, 2021
Original language: English (US)
Supervisor: David Keyes (Supervisor)

Keywords

  • optimization
  • trust region
  • distributed computing
  • deep learning
  • machine learning
