
Dynamic Differentially Private Personalized Federated Learning with Fisher Information Matrix


Abstract: To address the issues in existing differentially private personalized federated learning, where static parameter partitioning fails to adapt to changes in data heterogeneity and noise injection hinders model convergence, this paper proposes a dynamic differentially private personalized federated learning scheme based on the Fisher Information Matrix. The proposed method employs the Fisher Information Matrix to quantify the informativeness of model parameters. High-information parameters are dynamically retained locally, while low-information parameters are uploaded for aggregation, thereby mitigating global model performance degradation. Furthermore, a progressive mechanism is introduced that gradually increases the proportion of locally retained parameters during training, reducing global noise injection and accelerating convergence. Experimental results show that the method achieves higher global test accuracy than the baselines on the CIFAR-10, CIFAR-100, EMNIST, and Purchase-100 datasets under identical privacy budgets. In particular, it surpasses the state-of-the-art method (CENTAUR) on CIFAR-10 and CIFAR-100 by 7.66% and 6.06% in accuracy, respectively. This study demonstrates that combining dynamic parameter partitioning with the progressive mechanism effectively balances privacy protection and model utility, significantly enhancing model adaptability and convergence efficiency on non-independent and identically distributed data.
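The core ideas in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the diagonal approximation of the Fisher Information Matrix via mean squared per-sample gradients, the function names, and the linear retention schedule (including its start and end ratios) are all assumptions chosen for illustration.

```python
import numpy as np

def fisher_diagonal(per_sample_grads):
    """Approximate the diagonal of the Fisher Information Matrix as the
    mean of squared per-sample gradients (a common empirical estimate)."""
    g = np.asarray(per_sample_grads)   # shape: (num_samples, num_params)
    return (g ** 2).mean(axis=0)       # shape: (num_params,)

def partition_params(fisher_diag, local_ratio):
    """Split parameter indices by information content: the top
    `local_ratio` fraction stays local (personalized, no noise added);
    the remainder is uploaded for noisy global aggregation."""
    k = int(round(local_ratio * fisher_diag.size))
    order = np.argsort(fisher_diag)[::-1]   # descending information
    return order[:k], order[k:]             # (local_idx, upload_idx)

def progressive_ratio(round_t, total_rounds, start=0.1, end=0.5):
    """Progressive mechanism (assumed linear schedule): raise the
    locally retained fraction as training proceeds, so fewer parameters
    receive DP noise in later rounds."""
    frac = min(round_t / max(total_rounds - 1, 1), 1.0)
    return start + (end - start) * frac

# Illustrative round: 2 samples, 3 parameters.
grads = [[1.0, 0.0, 2.0],
         [3.0, 0.0, 2.0]]
fd = fisher_diagonal(grads)                      # [5., 0., 4.]
ratio = progressive_ratio(round_t=0, total_rounds=10)
local_idx, upload_idx = partition_params(fd, 1 / 3)
```

In each communication round, the client would recompute `fd` on its own data, obtain the current retention ratio from the schedule, keep the `local_idx` parameters private, and send only the `upload_idx` parameters (with DP noise) to the server.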

     
