AEDFL: Efficient Asynchronous Decentralized Federated Learning with Heterogeneous Devices
Abstract
Federated Learning (FL) has recently achieved significant success, enabling collaborative model training on distributed data across edge devices. However, the iterative gradient or model exchanges between devices and a centralized server in the standard FL paradigm suffer from severe efficiency bottlenecks on the server. While enabling collaborative training without a central server, existing decentralized FL approaches either rely on a synchronous mechanism that deteriorates FL convergence or ignore device staleness under an asynchronous mechanism, resulting in inferior FL accuracy. In this paper, we propose an Asynchronous Efficient Decentralized FL framework, i.e., AEDFL, for heterogeneous environments, with three unique contributions. First, we propose an asynchronous FL system model with an efficient model aggregation method to improve FL convergence. Second, we propose a dynamic staleness-aware model update approach to achieve superior accuracy. Third, we propose an adaptive sparse training method to reduce communication and computation costs without significant accuracy degradation. Extensive experiments on four public datasets and four models demonstrate the strengths of AEDFL in terms of accuracy (up to 16.3% higher), efficiency (up to 92.9% faster), and computation costs (up to 42.3% lower).