DP-SGD Without Clipping: The Lipschitz Neural Network Way
IRT Saint Exupéry - Institut de Recherche Technologique
Preprints, Working Papers, ... Year: 2023

State-of-the-art approaches for training Differentially Private (DP) Deep Neural Networks (DNN) face difficulties in estimating tight bounds on the sensitivity of the network's layers, and instead rely on per-sample gradient clipping. This clipping process not only biases the direction of gradients but is also costly in both memory consumption and computation. To provide sensitivity bounds and bypass the drawbacks of the clipping process, our theoretical analysis of Lipschitz-constrained networks reveals an unexplored link between the Lipschitz constant with respect to their input and the one with respect to their parameters. By bounding the Lipschitz constant of each layer with respect to its parameters, we guarantee DP training of these networks. This analysis not only allows the computation of the aforementioned sensitivities at scale but also provides leads on how to maximize the gradient-to-noise ratio for fixed privacy guarantees. To facilitate the application of Lipschitz networks and foster robust and certifiable learning under privacy guarantees, we provide a Python package that implements building blocks allowing the construction and private training of such networks.
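The core idea in the abstract can be sketched as follows: instead of clipping per-sample gradients, each layer comes with an analytical sensitivity bound (derived from its Lipschitz constant with respect to its parameters), and Gaussian noise calibrated to that bound is added to the gradient. The sketch below is illustrative only; the function and parameter names (`clipless_dp_sgd_step`, `layer_sensitivities`, `noise_multiplier`) are assumptions and not the paper's actual API.

```python
import numpy as np

def clipless_dp_sgd_step(params, grads, layer_sensitivities,
                         noise_multiplier, lr, rng):
    """One clipping-free DP-SGD update (hedged sketch).

    Each layer's gradient is assumed to have a known sensitivity bound
    (here `layer_sensitivities`), obtained analytically from the layer's
    Lipschitz constant with respect to its parameters, so no per-sample
    clipping is needed. Gaussian noise is calibrated to that bound.
    """
    new_params = []
    for p, g, s in zip(params, grads, layer_sensitivities):
        # Noise scale proportional to the layer's sensitivity bound.
        noise = rng.normal(0.0, noise_multiplier * s, size=g.shape)
        new_params.append(p - lr * (g + noise))
    return new_params

# Toy usage: one 2x2 weight matrix, sensitivity bound 1.0.
rng = np.random.default_rng(0)
params = [np.ones((2, 2))]
grads = [np.full((2, 2), 0.5)]
updated = clipless_dp_sgd_step(params, grads, [1.0], 1.0, 0.1, rng)
```

With `noise_multiplier = 0` the update reduces to plain SGD, which makes the calibration easy to sanity-check.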
Main file: 2305.16202.pdf (6.28 MB). Origin: files produced by the author(s).

Dates and versions

hal-04130913, version 1 (16-06-2023)



Louis Béthune, Thomas Masséna, Thibaut Boissin, Corentin Friedrich, Franck Mamalet, et al.. DP-SGD Without Clipping: The Lipschitz Neural Network Way. 2023. ⟨hal-04130913⟩