FLoCoRA: FEDERATED LEARNING COMPRESSION WITH LOW-RANK ADAPTATION - Algorithm Architecture Interactions Team
Conference paper, 2024

Abstract

Low-Rank Adaptation (LoRA) methods have gained popularity for parameter-efficient fine-tuning of models with hundreds of billions of parameters. In this work, by contrast, we apply LoRA methods to train small vision models from scratch in Federated Learning (FL). We first propose an aggregation-agnostic method to integrate LoRA within FL, named FLoCoRA, and show that it reduces communication costs by 4.8 times with less than 1% accuracy degradation on a CIFAR-10 classification task with a ResNet-8. We then show that the same method can be extended with an affine quantization scheme, dividing the communication cost by 18.6 times relative to the standard method, still with less than 1% accuracy loss, tested on a ResNet-18 model. Our formulation is a strong baseline for message size reduction, even when compared to conventional model compression works, while also reducing training memory requirements thanks to the low-rank adaptation.
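The two compression mechanisms the abstract names, low-rank adapters and affine quantization, can be sketched as follows. This is a minimal NumPy illustration, not the paper's FLoCoRA implementation: the layer shape, rank `r`, and 8-bit width are assumptions chosen for the example, and the helper names (`lora_factors`, `affine_quantize`) are hypothetical.

```python
import numpy as np

def lora_factors(m, n, r, rng):
    """Low-rank update: a dense (m x n) weight update is factored as
    B @ A, with B (m x r) and A (r x n). In a LoRA-style FL scheme,
    only A and B would be communicated each round, not the full matrix."""
    B = rng.standard_normal((m, r))
    A = rng.standard_normal((r, n))
    return B, A

def affine_quantize(x, num_bits=8):
    """Affine (asymmetric) quantization: map floats to unsigned ints
    via a scale and a zero point covering the tensor's value range."""
    qmin, qmax = 0, 2**num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def affine_dequantize(q, scale, zero_point):
    """Inverse map back to floats; error is bounded by roughly one scale step."""
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
m, n, r = 512, 512, 8            # assumed layer shape and rank
B, A = lora_factors(m, n, r, rng)

full_params = m * n              # values in a dense update
lora_params = r * (m + n)        # values actually sent per round
print(full_params / lora_params)  # → 32.0 (fewer values to communicate)

# Quantizing the adapter further shrinks each message (floats -> uint8).
q, s, z = affine_quantize(A)
A_hat = affine_dequantize(q, s, z)
```

The two reductions compose: the low-rank factorization cuts the number of values sent, and the affine scheme cuts the bits per value, which is consistent with the larger savings the abstract reports for the quantized variant.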

Dates and versions

hal-04617632 , version 1 (19-06-2024)

Cite

Lucas Grativol, Mathieu Leonardon, Guillaume Muller, Virginie Fresse, Matthieu Arzel. FLoCoRA: Federated Learning Compression with Low-Rank Adaptation. 32nd European Signal Processing Conference (EUSIPCO), Aug 2024, Lyon, France. ⟨hal-04617632⟩