Conference paper, 2023

High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent

Abstract

In this paper, we study differentially private empirical risk minimization (DP-ERM). It has been shown that the worst-case utility of DP-ERM reduces polynomially as the dimension increases. This is a major obstacle to privately learning large machine learning models. In high dimension, it is common for some of the model's parameters to carry more information than others. To exploit this, we propose a differentially private greedy coordinate descent (DP-GCD) algorithm. At each iteration, DP-GCD privately performs a coordinate-wise gradient step along the gradient's (approximately) greatest entry. We show theoretically that DP-GCD can achieve a logarithmic dependence on the dimension for a wide range of problems by naturally exploiting their structural properties (such as quasi-sparse solutions). We illustrate this behavior numerically, on both synthetic and real datasets.
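The abstract describes the core mechanism of DP-GCD: at each iteration, privately pick the coordinate whose gradient entry is (approximately) the largest, then perform a noisy gradient step on that single coordinate. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation: it assumes Laplace noise for both a report-noisy-max coordinate selection and the coordinate update, and the names (dp_gcd, sens for the per-coordinate gradient sensitivity, eps_per_iter for the per-iteration privacy budget) are hypothetical.

```python
import numpy as np

def dp_gcd(grad_fn, w0, n_iters, step_size, sens, eps_per_iter, rng=None):
    """Illustrative DP greedy coordinate descent loop (assumed form, not the paper's code)."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.array(w0, dtype=float)
    for _ in range(n_iters):
        g = grad_fn(w)  # full gradient of the empirical risk at the current iterate
        # Report-noisy-max: add Laplace noise to |g| and pick the (approximately) largest entry.
        noisy_scores = np.abs(g) + rng.laplace(scale=2.0 * sens / eps_per_iter, size=g.shape)
        j = int(np.argmax(noisy_scores))
        # Coordinate-wise gradient step on coordinate j, with its own Laplace perturbation.
        noisy_gj = g[j] + rng.laplace(scale=2.0 * sens / eps_per_iter)
        w[j] -= step_size * noisy_gj
    return w

# Toy usage on synthetic ridge-regularized least squares (hypothetical parameters).
rng = np.random.default_rng(0)
X, y = rng.standard_normal((200, 50)), rng.standard_normal(200)
grad = lambda w: X.T @ (X @ w - y) / len(y) + 1e-2 * w
w_hat = dp_gcd(grad, np.zeros(50), n_iters=100, step_size=0.5, sens=1.0, eps_per_iter=0.1)
```

Because only one coordinate is updated per step, the per-iteration noise here scales with a coordinate-wise sensitivity rather than a full-gradient one; the sketch is only meant to illustrate the mechanism, not to reproduce the paper's privacy accounting or utility guarantees.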
Main file
paper.pdf (572.72 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03714465, version 1 (05-07-2022)
hal-03714465, version 2 (21-10-2022)
hal-03714465, version 3 (09-04-2023)

Identifiers

  • HAL Id: hal-03714465, version 3

Cite

Paul Mangold, Aurélien Bellet, Joseph Salmon, Marc Tommasi. High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent. AISTATS 2023 - International Conference on Artificial Intelligence and Statistics, Apr 2023, Valencia, Spain. ⟨hal-03714465v3⟩
148 views
77 downloads
