Information-Theoretical Bounds on Privacy Leakage in Pruned Federated Learning
Field | Value
---|---
dc.contributor.author | Chu, Tianyue
dc.contributor.author | Yang, Mengwei
dc.contributor.author | Laoutaris, Nikolaos
dc.contributor.author | Markopoulou, Athina
dc.date.accessioned | 2024-07-16T11:37:31Z
dc.date.available | 2024-07-16T11:37:31Z
dc.date.issued | 2024-07-07
dc.identifier.uri | https://hdl.handle.net/20.500.12761/1834
dc.description.abstract | Model pruning has been proposed as a technique for reducing the size and complexity of federated learning (FL) models. By making local models coarser, pruning is intuitively expected to improve protection against privacy attacks. However, the level of this expected privacy protection has not previously been characterised or optimised jointly with utility. In this paper, we investigate for the first time the privacy impact of model pruning in FL. We establish information-theoretic upper bounds on the information leakage from pruned FL, and we experimentally validate them under state-of-the-art privacy attacks across different FL pruning schemes. This evaluation provides valuable insights into the choices and parameters that can affect the privacy protection provided by pruning.
dc.language.iso | eng
dc.title | Information-Theoretical Bounds on Privacy Leakage in Pruned Federated Learning
dc.type | conference object
dc.conference.date | 7 July 2024
dc.conference.place | Athens, Greece
dc.conference.title | ISIT 2024 Workshop on Information-Theoretic Methods for Trustworthy Machine Learning
dc.event.type | workshop
dc.pres.type | paper
dc.rights.accessRights | open access
dc.description.refereed | TRUE
dc.description.status | pub
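The abstract above refers to pruning local FL models to make them coarser before they are shared. As a point of reference for readers of this record, below is a minimal sketch of one common pruning style, magnitude-based pruning of a client update. It is an illustration under simple assumptions, not the scheme analysed in the paper; the function name `prune_update` and the pruning rate `p` are chosen here for the example.

```python
# Minimal sketch (not the paper's scheme): magnitude-based pruning of a
# client's local model update in federated learning. The helper name
# `prune_update` and the pruning rate `p` are illustrative assumptions.
import numpy as np

def prune_update(update: np.ndarray, p: float) -> np.ndarray:
    """Zero out the fraction `p` of entries with the smallest magnitude."""
    k = int(p * update.size)  # number of entries to drop
    if k == 0:
        return update.copy()
    flat_abs = np.abs(update).flatten()
    threshold = np.partition(flat_abs, k - 1)[k - 1]  # k-th smallest |w|
    return np.where(np.abs(update) > threshold, update, 0.0)

# A client coarsens its update before sending it to the FL server;
# the pruned (sparser) update is what a privacy attacker would observe.
rng = np.random.default_rng(0)
local_update = rng.normal(size=(4, 4))
sparse_update = prune_update(local_update, p=0.8)
print(f"nonzero entries: {np.count_nonzero(sparse_update)}/{local_update.size}")
```

In schemes of this kind, the pruning rate is one of the parameters the abstract refers to as jointly affecting utility and privacy protection.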
Files in this item

There are no files associated with this item.