Information-Theoretical Bounds on Privacy Leakage in Pruned Federated Learning
Date: 2024-07-07

Abstract
Model pruning has been proposed as a technique for reducing the size and complexity of federated learning (FL) models. By making local models coarser, pruning is intuitively expected to improve protection against privacy attacks. However, the level of this expected privacy protection has not previously been characterized or jointly optimized with utility. In this paper, we investigate for the first time the privacy impact of model pruning in FL. We establish information-theoretic upper bounds on the information leakage from pruned FL and experimentally validate them under state-of-the-art privacy attacks across different FL pruning schemes.
This evaluation provides insights into the design choices and parameters that affect the privacy protection offered by pruning.