Show simple item record

dc.contributor.author	Wang, Rui
dc.contributor.author	Wang, Xingkai
dc.contributor.author	Chen, Huanhuan
dc.contributor.author	Decouchant, Jérémie
dc.contributor.author	Picek, Stjepan
dc.contributor.author	Laoutaris, Nikolaos
dc.contributor.author	Liang, Kaitai
dc.date.accessioned	2024-10-09T16:15:55Z
dc.date.available	2024-10-09T16:15:55Z
dc.date.issued	2025-06-09
dc.identifier.uri	https://hdl.handle.net/20.500.12761/1865
dc.description.abstract	Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate. Most existing systems, however, are only robust when most of the clients are honest. \texttt{FLTrust} (NDSS '21) and \texttt{Zeno++} (ICML '20) do not make such an honest majority assumption but can only be applied to scenarios where the server is provided with an auxiliary dataset used to filter malicious updates. \texttt{FLAME} (USENIX '22) and \texttt{EIFFeL} (CCS '22) maintain the semi-honest majority assumption to guarantee robustness and the confidentiality of updates. It is, therefore, currently impossible to ensure Byzantine robustness and confidentiality of updates without assuming a semi-honest majority. To tackle this problem, we propose a novel Byzantine-robust and privacy-preserving FL system, called \texttt{MUDGUARD}, to capture malicious minority and majority for server and client sides, respectively. Our experimental results demonstrate that the accuracy of \texttt{MUDGUARD} is practically close to the FL baseline using FedAvg without attacks ($\approx$0.8\% gap on average). Meanwhile, the attack success rate is around 0\%-5\% even under an adaptive attack tailored to \texttt{MUDGUARD}. We further optimize our design by using binary secret sharing and polynomial transformation, leading to communication overhead and runtime decreases of 67\%-89.17\% and 66.05\%-68.75\%, respectively.
dc.description.sponsorship	MLEDGE
dc.language.iso	eng
dc.title	MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering
dc.type	conference object
dc.conference.date	9-13 June 2025
dc.conference.place	Stony Brook, New York, USA
dc.conference.title	ACM SIGMETRICS
dc.event.type	conference
dc.pres.type	paper
dc.rights.accessRights	open access
dc.description.refereed	TRUE
dc.description.status	inpress
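The abstract describes clustering client updates so that honest clients are protected even when malicious clients form a majority. As a purely illustrative sketch (not MUDGUARD's actual protocol, which runs over secret-shared updates so the server never sees them in plaintext), clustering-based robust aggregation can be approximated by grouping updates by pairwise cosine similarity and aggregating each cluster separately; the `threshold` value and the greedy grouping below are assumptions for illustration only:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two flattened update vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def cluster_updates(updates, threshold=0.5):
    """Greedily group updates whose direction is similar.

    Hypothetical illustration: each update joins the first cluster whose
    representative (first member) it is sufficiently aligned with.
    """
    clusters = []
    for u in updates:
        for c in clusters:
            if cosine(u, c[0]) >= threshold:
                c.append(u)
                break
        else:
            clusters.append([u])
    return clusters

def aggregate_per_cluster(clusters):
    # FedAvg-style mean within each cluster. Keeping one model per cluster
    # means a colluding majority only poisons its own cluster's model.
    return [np.mean(c, axis=0) for c in clusters]

# Two aligned honest updates and one divergent (e.g. poisoned) update:
updates = [np.array([1.0, 1.0]), np.array([0.9, 1.1]), np.array([-1.0, 1.0])]
aggs = aggregate_per_cluster(cluster_updates(updates))  # two clusters
```

In this toy run the divergent update lands in its own cluster, so the honest clients' aggregate is untouched by it.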


Files in this item

This item appears in the following collection(s)
