Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning
Authors: Jinhyun So, Basak Guler, A. Salman Avestimehr
Journal: IEEE Journal on Selected Areas in Information Theory (Volume 2, Issue 1, March 2021)
Abstract:
Federated learning is a distributed framework for training machine learning models over the data residing at mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this article, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with N users achieves a secure aggregation overhead of O(N log N), as opposed to O(N²), while tolerating a user dropout rate of up to 50%. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to 40× speedup over the state-of-the-art protocols with up to N=200 users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
Keywords: Federated learning, privacy-preserving machine learning, secure aggregation
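As a rough illustration of the additive secret sharing primitive mentioned in the abstract, the sketch below shows how quantized model updates can be split into random shares so that only their sum is recoverable. This is a generic sketch under assumed parameters (the field size P and the helper names share_update/aggregate are illustrative choices, not from the paper), and it omits the parts that make Turbo-Aggregate itself scale to O(N log N), namely the multi-group circular aggregation order and the coded redundancy that tolerates dropouts.

```python
# Minimal sketch of additive secret sharing for secure aggregation.
# NOT the Turbo-Aggregate protocol: the paper's multi-group circular
# strategy and dropout-tolerant coding are omitted. P, share_update and
# aggregate are illustrative names/choices, not taken from the paper.

import random

P = 2**61 - 1  # a large prime; model updates are assumed quantized into Z_P


def share_update(update, num_users):
    """Split a quantized update vector into `num_users` additive shares.

    The shares are uniformly random subject to summing (mod P) to the
    update, so any num_users-1 of them reveal nothing about the update.
    """
    shares = [[random.randrange(P) for _ in update] for _ in range(num_users - 1)]
    last = [(u - sum(col)) % P for u, col in zip(update, zip(*shares))]
    return shares + [last]


def aggregate(all_shares):
    """Each user locally sums the shares it received; the server then sums
    those partial results, recovering only the aggregate of all updates."""
    num_users = len(all_shares)
    dim = len(all_shares[0][0])
    # partial[j] = sum over users i of the share that user i sent to user j
    partial = [[0] * dim for _ in range(num_users)]
    for user_shares in all_shares:
        for j, share in enumerate(user_shares):
            partial[j] = [(a + b) % P for a, b in zip(partial[j], share)]
    # the server adds the partial sums; individual updates stay hidden
    total = [0] * dim
    for p in partial:
        total = [(a + b) % P for a, b in zip(total, p)]
    return total


if __name__ == "__main__":
    updates = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]  # toy quantized updates
    all_shares = [share_update(u, len(updates)) for u in updates]
    print(aggregate(all_shares))  # -> [111, 222, 333], only the sum is revealed
```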
DOI: 10.1109/JSAIT.2021.3054610
Full text: https://arxiv.org/pdf/2002.04156.pdf
Note: Reposting is welcome; please credit the source when reposting.