Conference paper, Year: 2022

Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization

Abstract

Many supervised machine learning methods are naturally cast as optimization problems. For prediction models which are linear in their parameters, this often leads to convex problems for which many mathematical guarantees exist. Models which are non-linear in their parameters, such as neural networks, lead to non-convex optimization problems for which guarantees are harder to obtain. In this review paper, we consider two-layer neural networks with homogeneous activation functions where the number of hidden neurons tends to infinity, and show how qualitative convergence guarantees may be derived.
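
A minimal sketch of the setting described in the abstract (the notation here is assumed for illustration, not taken from the paper): with a positively homogeneous activation such as the ReLU, \(\sigma(u) = \max(u, 0)\), a two-layer network with \(m\) hidden neurons can be written

\[ h_m(x) \;=\; \frac{1}{m} \sum_{j=1}^{m} a_j\, \sigma(b_j^\top x). \]

As the number of hidden neurons \(m\) tends to infinity, the hidden units are described by a probability measure \(\mu\) over their weights, with prediction function \(h_\mu(x) = \int a\, \sigma(b^\top x)\, \mathrm{d}\mu(a, b)\); in this limit, gradient descent on the finite network corresponds to a gradient flow on \(\mu\), which is the object whose global convergence and generalization properties can then be analyzed.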
Main file: ICM-Bach-Chizat-HAL.pdf (1.72 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03379011 , version 1 (15-10-2021)

Cite

Francis Bach, Lénaïc Chizat. Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization. International Congress of Mathematicians, Jul 2022, Saint Petersburg, Russia. ⟨hal-03379011⟩