

Communication-Efficient Federated Learning via Quantized Compressed Sensing

Authors: Yongjeong Oh, Namyoon Lee, Yo-Seb Jeon, and H. Vincent Poor
Author affiliations:

  1. Department of Electrical Engineering, POSTECH, Pohang, Gyeongbuk 37673, Republic of Korea
  2. Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, United States

Abstract: In this paper, we present a communication-efficient federated learning framework inspired by quantized compressed sensing. The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server (PS). Our strategy for gradient compression is to sequentially perform block sparsification, dimensionality reduction, and quantization. Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression. For accurate aggregation of the local gradients from the compressed signals at the PS, we put forth an approximate minimum mean square error (MMSE) approach for gradient reconstruction using the expectation-maximization generalized-approximate-message-passing (EM-GAMP) algorithm. Assuming a Bernoulli Gaussian-mixture prior, this algorithm iteratively updates the posterior mean and variance of the local gradients from the compressed signals. We also present a low-complexity approach for gradient reconstruction. In this approach, we use the Bussgang theorem to aggregate the local gradients from the compressed signals, then compute an approximate MMSE estimate of the aggregated gradient using the EM-GAMP algorithm. We also provide a convergence rate analysis of the presented framework. Using the MNIST dataset, we demonstrate that the presented framework achieves almost identical performance to the case without compression, while significantly reducing the communication overhead of federated learning.
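The three-stage compression pipeline described in the abstract (block sparsification, dimensionality reduction, quantization) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, parameter names, and the choice of a Gaussian sensing matrix with a one-bit (sign) quantizer are all assumptions made for clarity.

```python
import numpy as np

def compress_gradient(grad, block_size=100, keep_ratio=0.1, reduced_dim=40, seed=0):
    """Hypothetical sketch of per-block gradient compression:
    sparsify -> reduce dimension -> quantize (one-bit case)."""
    rng = np.random.default_rng(seed)
    out = []
    for start in range(0, len(grad), block_size):
        block = grad[start:start + block_size]
        # 1) Block sparsification: keep only the k largest-magnitude entries.
        k = max(1, int(keep_ratio * len(block)))
        sparse = np.zeros_like(block)
        top = np.argsort(np.abs(block))[-k:]
        sparse[top] = block[top]
        # 2) Dimensionality reduction: project with a random Gaussian
        #    sensing matrix (a common choice in compressed sensing).
        A = rng.standard_normal((reduced_dim, len(block))) / np.sqrt(reduced_dim)
        y = A @ sparse
        # 3) Quantization: one-bit (sign) quantizer as the simplest case;
        #    the paper's framework supports higher-resolution quantizers.
        out.append(np.sign(y))
    return np.concatenate(out)
```

With `block_size=100` and `reduced_dim=40`, a 200-dimensional gradient compresses to 80 one-bit measurements, illustrating how sparsification plus projection yields a higher compression ratio than one-bit quantization alone. Reconstruction at the PS (the EM-GAMP side) is substantially more involved and is not sketched here.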

Keywords: Federated Learning, Communication-Efficient, Quantized Compressed Sensing



Accession number: 20210404822
Publisher: arXiv
E-ISSN: 2331-8422
Original link: Communication-Efficient Federated Learning via Quantized Compressed Sensing