PriPrune: Quantifying and Preserving Privacy in Pruned Federated Learning
dc.contributor.author | Chu, Tianyue | |
dc.contributor.author | Yang, Mengwei | |
dc.contributor.author | Laoutaris, Nikolaos | |
dc.contributor.author | Markopoulou, Athina | |
dc.date.accessioned | 2025-01-15T13:53:15Z | |
dc.date.available | 2025-01-15T13:53:15Z | |
dc.date.issued | 2024-11-02 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12761/1896 | |
dc.description.abstract | Model pruning has been proposed as a technique for reducing the size and complexity of federated learning (FL) models. By making local models coarser, pruning is intuitively expected to improve protection against privacy attacks. However, the level of this expected privacy protection has not been previously characterized, or optimized jointly with utility. In this paper, we first characterize the privacy offered by pruning. We establish information-theoretic upper bounds on the information leakage from pruned FL and we experimentally validate them under state-of-the-art privacy attacks across different FL pruning schemes. Second, we introduce PriPrune – a privacy-aware algorithm for pruning in FL. PriPrune uses defense pruning masks, which can be applied locally after any pruning algorithm, and adapts the defense pruning rate to jointly optimize privacy and accuracy. Another key idea in the design of PriPrune is Pseudo-Pruning: the local model undergoes defense pruning and only the pruned model is sent to the server, while the weights pruned out by the defense mask are withheld locally for future local training rather than being removed. We show that PriPrune significantly improves the privacy-accuracy tradeoff compared to state-of-the-art pruned FL schemes. For example, on the FEMNIST dataset, PriPrune improves the privacy of PruneFL by 45.5% without reducing accuracy. | es |
dc.description.sponsorship | Ministry of Economic Affairs and Digital Transformation, European Union NextGeneration-EU REGAGE22e00052829516 | es |
dc.language.iso | eng | es |
dc.title | PriPrune: Quantifying and Preserving Privacy in Pruned Federated Learning | es |
dc.type | journal article | es |
dc.journal.title | ACM Transactions on Modeling and Performance Evaluation of Computing Systems | es |
dc.rights.accessRights | open access | es |
dc.identifier.doi | 10.1145/3702241 | es |
dc.relation.projectName | MLEDGE: Cloud and Edge Machine Learning | es |
dc.description.refereed | TRUE | es |
dc.description.status | pub | es |
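The abstract above describes Pseudo-Pruning: a client applies a defense pruning mask locally, uploads only the masked model to the server, and withholds the masked-out weights for its own future local training instead of deleting them. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: the function name `pseudo_prune` is hypothetical, and a simple magnitude-based mask stands in for PriPrune's adaptive defense mask.

```python
import numpy as np

def pseudo_prune(weights: np.ndarray, defense_rate: float):
    """Hypothetical sketch of the Pseudo-Pruning idea from the abstract:
    mask out a fraction of weights before sharing with the server, but
    keep the masked-out weights locally rather than removing them."""
    # Stand-in defense mask: drop the `defense_rate` fraction of weights
    # with the smallest magnitude (the real defense mask is adapted by PriPrune).
    flat = np.abs(weights).ravel()
    k = int(defense_rate * flat.size)
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    defense_mask = np.abs(weights) >= threshold

    shared_update = weights * defense_mask    # what the client sends to the server
    withheld = weights * ~defense_mask        # kept locally for future local training
    return shared_update, withheld, defense_mask

# Toy usage: the client uploads `shared` and restores the full model locally
# before its next round of local training.
rng = np.random.default_rng(0)
local_weights = rng.normal(size=(4, 4))
shared, withheld, mask = pseudo_prune(local_weights, defense_rate=0.5)
local_weights_next_round = shared + withheld  # full model available only on the client
```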