Stochastic gradient descent for risk optimization

André Gustavo Carlon, André Jacomel Torii, Rafael Holdorf Lopez, José Eduardo Souza de Cursi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents an approach to solving risk optimization problems with stochastic gradient descent methods. The main challenge is to avoid the high-cost evaluation of the failure probability and its gradient at each iteration of the optimization process. We propose to accomplish this by employing a stochastic gradient descent algorithm to minimize the Chernoff bound of the limit state function associated with the probabilistic constraint. The stochastic gradient descent algorithm employed, the Adam algorithm, is a robust method widely used in machine learning training. A numerical example illustrates the advantages and potential drawbacks of the proposed approach.
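The abstract names the Adam algorithm as the stochastic gradient descent method used to minimize the Chernoff bound under noisy gradient estimates. The following minimal sketch shows the standard Adam update rule applied to a generic stochastic objective; the toy objective, step sizes, and iteration count are illustrative assumptions, not the paper's actual risk-optimization problem.

```python
import random

def adam_minimize(grad_sample, x0, lr=0.01, beta1=0.9, beta2=0.999,
                  eps=1e-8, n_iter=2000):
    """Minimize an objective given only noisy (stochastic) gradient samples,
    using the Adam update rule."""
    x = x0
    m = 0.0  # first-moment (mean) estimate of the gradient
    v = 0.0  # second-moment (uncentered variance) estimate
    for t in range(1, n_iter + 1):
        g = grad_sample(x)                  # one stochastic gradient sample
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)        # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (v_hat ** 0.5 + eps)
    return x

# Toy illustration (hypothetical objective): minimize E[(x - Z)^2] with
# Z ~ N(1, 0.1^2). Each call returns an unbiased gradient sample 2*(x - z),
# and the true minimizer is E[Z] = 1.
random.seed(0)
x_opt = adam_minimize(lambda x: 2.0 * (x - random.gauss(1.0, 0.1)), x0=5.0)
```

The key property exploited here, and in the paper's setting, is that Adam needs only cheap unbiased gradient samples per iteration rather than an exact (and expensive) gradient of the probabilistic quantity.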
Original language: English (US)
Title of host publication: Lecture Notes in Mechanical Engineering
Publisher: Springer International Publishing
Pages: 424-435
Number of pages: 12
ISBN (Print): 9783030536688
DOIs
State: Published - Aug 19 2020

