Detailed Information


BiPruneFL: Computation and Communication Efficient Federated Learning With Binary Quantization and Pruning (Open Access)

Authors
Lee, Sangmin; Jang, Hyeryung
Issue Date
2025
Publisher
IEEE
Keywords
Computational modeling; Costs; Quantization (signal); Servers; Federated learning; Computational efficiency; Accuracy; Training; Neural networks; Data models; Federated learning (FL); Internet of Things; neural network pruning; quantization; lottery tickets
Citation
IEEE Access, v.13, pp. 42441-42456
Pages
16
Indexed
SCIE
SCOPUS
Journal Title
IEEE Access
Volume
13
Start Page
42441
End Page
42456
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/58014
DOI
10.1109/ACCESS.2025.3547627
ISSN
2169-3536
Abstract
Federated learning (FL) is a decentralized learning framework that allows a central server and multiple devices, referred to as clients, to collaboratively train a shared model without the clients transmitting their private data to the server. This approach helps preserve data privacy and reduce the risk of information leakage. However, FL systems often face significant communication and computational overhead due to frequent exchanges of model parameters and the intensive local training required on resource-constrained clients. Existing solutions typically apply compression techniques such as quantization or pruning, but only to a limited extent, constrained by the trade-off between model accuracy and compression efficiency. To address these challenges, we propose BiPruneFL, a communication- and computation-efficient FL framework that combines quantization and pruning while maintaining competitive accuracy. Leveraging recent advances in neural network pruning, BiPruneFL identifies subnetworks within binary neural networks without significantly compromising accuracy. Additionally, we employ communication compression strategies to enable efficient model updates and computationally lightweight local training. Through experiments, we demonstrate that BiPruneFL significantly outperforms other baselines, reducing upstream and downstream communication costs by up to 88.1x and 80.8x, respectively, and reducing computation costs by 3.9x to 34.9x depending on the degree of quantization. Despite these efficiency gains, BiPruneFL achieves accuracy comparable to, and in some cases surpassing, that of uncompressed FL models.
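
The abstract only summarizes the method, so the following is an illustrative sketch rather than the authors' implementation: a minimal NumPy example of the two ingredients the abstract names, sign-based binary weight quantization (with a mean-magnitude scale, a common binary-network heuristic) and magnitude pruning of a subnetwork, applied in a masked local update. The function names (binarize, prune_mask, client_update) and the grad_fn callback are hypothetical.

import numpy as np

def binarize(w):
    # Binary quantization: replace each weight with its sign, scaled by
    # the mean absolute value (a common binary-neural-network heuristic).
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def prune_mask(w, sparsity):
    # Magnitude pruning: zero out the smallest-|w| fraction of weights,
    # keeping a sparse subnetwork ("lottery ticket" style).
    k = int(sparsity * w.size)
    threshold = np.partition(np.abs(w).ravel(), k)[k] if k > 0 else 0.0
    return (np.abs(w) >= threshold).astype(w.dtype)

def client_update(w_global, mask, grad_fn, lr=0.1, steps=10):
    # Hypothetical local training loop: the forward pass uses the binary,
    # pruned weights, while updates accumulate on the latent real-valued
    # weights (a straight-through-estimator-style scheme).
    w = w_global.copy()
    for _ in range(steps):
        w_eff = binarize(w) * mask           # effective weights used locally
        w -= lr * grad_fn(w_eff) * mask      # update only unpruned entries
    return w

# Why this cuts communication: with a shared, fixed mask, a client can
# upload one sign bit per surviving weight plus one 32-bit scale,
# instead of 32 bits per dense weight.
w = np.random.randn(4, 4).astype(np.float32)
mask = prune_mask(w, sparsity=0.5)
dense_bits = 32 * w.size
compressed_bits = int(mask.sum()) + 32       # 1 bit/weight + 32-bit scale
print(dense_bits / compressed_bits)          # rough compression ratio

Under these assumptions, a 50%-sparse binary payload already yields roughly an order-of-magnitude saving per layer; the much larger ratios reported in the abstract would additionally reflect the paper's own mask-sharing and update-compression strategies, which are not reproduced here.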
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 1. Journal Articles


Related Researcher
Jang, Hye Ryung
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
