Show simple item record

dc.contributor.author: Chu, Tianyue
dc.contributor.author: Yang, Mengwei
dc.contributor.author: Laoutaris, Nikolaos
dc.contributor.author: Markopoulou, Athina
dc.date.accessioned: 2024-07-16T11:37:31Z
dc.date.available: 2024-07-16T11:37:31Z
dc.date.issued: 2024-07-07
dc.identifier.uri: https://hdl.handle.net/20.500.12761/1834
dc.description.abstract: Model pruning has been proposed as a technique for reducing the size and complexity of federated learning (FL) models. By making local models coarser, pruning is intuitively expected to improve protection against privacy attacks. However, the level of this expected privacy protection has not previously been characterized or optimized jointly with utility. In this paper, we investigate for the first time the privacy impact of model pruning in FL. We establish information-theoretic upper bounds on the information leakage from pruned FL and experimentally validate them under state-of-the-art privacy attacks across different FL pruning schemes. This evaluation provides valuable insights into the choices and parameters that affect the privacy protection provided by pruning.
dc.language.iso: eng
dc.title: Information-Theoretical Bounds on Privacy Leakage in Pruned Federated Learning
dc.type: conference object
dc.conference.date: 7 July 2024
dc.conference.place: Athens, Greece
dc.conference.title: ISIT 2024 Workshop on Information-Theoretic Methods for Trustworthy Machine Learning
dc.event.type: workshop
dc.pres.type: paper
dc.rights.accessRights: open access
dc.description.refereed: TRUE
dc.description.status: pub
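
The abstract above describes pruning of local FL models as a way to reduce their size and, intuitively, the information they leak. As a purely illustrative aid (this is not the paper's pruning schemes, bounds, or experimental setup), the minimal Python/NumPy sketch below shows one common pruning primitive: magnitude-based sparsification of a client's model update before it is shared with the server. The function name prune_update and the chosen sparsity level are assumptions made for this example.

import numpy as np

def prune_update(update: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries, keeping roughly a (1 - sparsity) fraction.

    Illustrative magnitude pruning only; real FL pruning schemes vary.
    """
    flat = np.abs(update).ravel()
    k = int(np.ceil(sparsity * flat.size))
    if k == 0:
        return update.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(update) > threshold
    return update * mask

# Toy round: a client computes a local update, prunes it, and only the coarser
# (sparser) update would be shared with the server.
rng = np.random.default_rng(0)
local_update = rng.normal(size=(4, 5))
pruned = prune_update(local_update, sparsity=0.8)
print("nonzero fraction after pruning:", np.count_nonzero(pruned) / pruned.size)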


Files in this item


There are no files associated with this item.
