BiPruneFL: Computation and Communication Efficient Federated Learning With Binary Quantization and Pruning
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Sangmin | - |
| dc.contributor.author | Jang, Hyeryung | - |
| dc.date.accessioned | 2025-03-24T08:00:11Z | - |
| dc.date.available | 2025-03-24T08:00:11Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/58014 | - |
| dc.description.abstract | Federated learning (FL) is a decentralized learning framework that allows a central server and multiple devices, referred to as clients, to collaboratively train a shared model without transmitting their private data to the server. This approach helps to preserve data privacy and reduce the risk of information leakage. However, FL systems often face significant communication and computational overhead due to frequent exchanges of model parameters and the intensive local training required on resource-constrained clients. Existing solutions typically apply compression techniques such as quantization or pruning, but only to a limited extent, constrained by the trade-off between model accuracy and compression efficiency. To address these challenges, we propose BiPruneFL, a communication- and computation-efficient FL framework that combines quantization and pruning while maintaining competitive accuracy. By leveraging recent advances in neural network pruning, BiPruneFL identifies subnetworks within binary neural networks without significantly compromising accuracy. Additionally, we employ communication compression strategies to enable efficient model updates and computationally lightweight local training. Through experiments, we demonstrate that BiPruneFL significantly outperforms other baselines, reducing upstream and downstream communication costs by up to 88.1x and 80.8x, respectively, and reducing computation costs by 3.9x to 34.9x depending on the degree of quantization. Despite these efficiency gains, BiPruneFL achieves accuracy comparable to, and in some cases surpassing, that of uncompressed federated learning models. | - |
| dc.format.extent | 16 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | BiPruneFL: Computation and Communication Efficient Federated Learning With Binary Quantization and Pruning | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2025.3547627 | - |
| dc.identifier.scopusid | 2-s2.0-105001064417 | - |
| dc.identifier.wosid | 001442889800026 | - |
| dc.identifier.bibliographicCitation | IEEE Access, v.13, pp 42441 - 42456 | - |
| dc.citation.title | IEEE Access | - |
| dc.citation.volume | 13 | - |
| dc.citation.startPage | 42441 | - |
| dc.citation.endPage | 42456 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordAuthor | Computational modeling | - |
| dc.subject.keywordAuthor | Costs | - |
| dc.subject.keywordAuthor | Quantization (signal) | - |
| dc.subject.keywordAuthor | Servers | - |
| dc.subject.keywordAuthor | Federated learning | - |
| dc.subject.keywordAuthor | Computational efficiency | - |
| dc.subject.keywordAuthor | Accuracy | - |
| dc.subject.keywordAuthor | Training | - |
| dc.subject.keywordAuthor | Neural networks | - |
| dc.subject.keywordAuthor | Data models | - |
| dc.subject.keywordAuthor | Federated learning (FL) | - |
| dc.subject.keywordAuthor | Internet of Things | - |
| dc.subject.keywordAuthor | neural network pruning | - |
| dc.subject.keywordAuthor | quantization | - |
| dc.subject.keywordAuthor | lottery tickets | - |
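The abstract describes combining binary quantization with pruning to shrink the model updates exchanged in federated learning. As a rough illustration of the general idea only (this is a minimal sketch and not the paper's actual BiPruneFL algorithm; the function names, pruning rule, and payload accounting here are assumptions), a binarized, magnitude-pruned update can be transmitted as one sign bit per kept weight plus a one-bit-per-position mask, instead of 32 bits per float weight:

```python
import numpy as np

def binarize_and_prune(weights: np.ndarray, prune_ratio: float = 0.5):
    """Binarize weights to {-1, +1} scaled by their mean magnitude, and
    prune the smallest-magnitude entries. Illustrative only: a sketch of
    how quantization and pruning jointly compress a model update."""
    threshold = np.quantile(np.abs(weights), prune_ratio)  # magnitude cutoff
    mask = np.abs(weights) > threshold                     # kept positions
    alpha = np.abs(weights[mask]).mean() if mask.any() else 0.0
    signs = np.sign(weights) * mask                        # -1 / 0 / +1
    return signs, mask, alpha

def payload_bits(mask: np.ndarray) -> int:
    """Hypothetical payload: 1 sign bit per kept weight plus a 1-bit
    pruning mask over all positions (vs. 32 bits per dense float)."""
    return int(mask.sum()) + mask.size

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
signs, mask, alpha = binarize_and_prune(w, prune_ratio=0.9)
dense_bits = w.size * 32
print(f"compression ratio: {dense_bits / payload_bits(mask):.1f}x")
```

Under this accounting, 90% pruning of a binarized layer yields roughly a 29x reduction over dense float32 transmission; the much larger ratios reported in the abstract would additionally reflect BiPruneFL's specific update and encoding strategies, which this sketch does not model.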
