Abstract
We devise a learning algorithm for possibly nonsmooth deep neural networks that features inertia and Newtonian directional intelligence, relying only on a backpropagation oracle. Our algorithm, called INDIAN, has an appealing mechanical interpretation that makes the role of its two hyperparameters transparent. An elementary phase-space lifting allows both for its implementation and for its theoretical study under very general assumptions. In particular, we handle a stochastic version of our method (which encompasses the usual mini-batch approaches) for nonsmooth activation functions such as ReLU. Our algorithm shows high efficiency and reaches state-of-the-art performance on image classification problems.
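As a concrete illustration of the kind of scheme the abstract describes, below is a minimal NumPy sketch of a two-variable, inertial-Newton-style update obtained through a phase-space lifting, driven only by (stochastic) gradients. The specific update formula, the variable names `theta`/`psi`, and the use of `alpha`/`beta` as the two hyperparameters are assumptions made for illustration; this is not the authors' exact pseudocode, and in practice the gradient would be supplied by a mini-batch backpropagation oracle.

```python
import numpy as np

def inertial_newton_style_step(theta, psi, grad, alpha, beta, step_size):
    """One sketched update of an inertial-Newton-style method after a
    phase-space lifting: the pair (theta, psi) stands in for second-order
    dynamics, and only a (sub)gradient of the loss is needed.
    NOTE: hypothetical form for illustration, not the paper's exact algorithm."""
    # Drift term shared by both variables after the lifting.
    drift = (1.0 / beta - alpha) * theta - (1.0 / beta) * psi
    theta_next = theta + step_size * (drift - beta * grad)  # gradient acts on theta only
    psi_next = psi + step_size * drift                      # auxiliary variable follows the drift
    return theta_next, psi_next

# Toy usage on the smooth quadratic loss J(theta) = 0.5 * ||theta||^2,
# whose gradient is simply theta (a stand-in for a backpropagation oracle).
theta = np.array([2.0, -1.5])
psi = theta.copy()
for _ in range(200):
    grad = theta  # replace with a mini-batch backpropagation gradient in practice
    theta, psi = inertial_newton_style_step(theta, psi, grad,
                                            alpha=1.0, beta=0.5, step_size=0.05)
print(theta)  # approaches the minimizer at the origin
```

The point of the sketch is that curvature-like (Newtonian) information enters only through the interplay of the two variables and the two hyperparameters, without ever forming or inverting a Hessian, which is consistent with the abstract's claim that a backpropagation oracle suffices.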
Reference
Jérôme Bolte, Camille Castera, Edouard Pauwels and Cédric Févotte, "An Inertial Newton Algorithm for Deep Learning", TSE Working Paper, No. 19-1043, October 2019.
Published in
TSE Working Paper, No. 19-1043, October 2019