Abstract:
Existing differentially private personalized federated learning schemes suffer from two problems: static parameter partitioning fails to adapt to changes in data heterogeneity, and noise injection hinders model convergence. To address these issues, this paper proposes a dynamic differentially private personalized federated learning scheme based on the Fisher Information Matrix. The proposed method uses the Fisher Information Matrix to quantify the informativeness of each model parameter: high-information parameters are dynamically retained locally, while low-information parameters are uploaded for aggregation, thereby mitigating global model performance degradation. Furthermore, a progressive mechanism gradually increases the proportion of locally retained parameters during training, reducing the amount of noise injected into the global model and accelerating convergence. Experimental results on the CIFAR-10, CIFAR-100, EMNIST, and Purchase-100 datasets show that the proposed method achieves higher global test accuracy than baselines under identical privacy budgets; in particular, it surpasses the state-of-the-art method (CENTAUR) on CIFAR-10 and CIFAR-100 with accuracy gains of 7.66% and 6.06%, respectively. These results indicate that combining dynamic parameter partitioning with the progressive mechanism effectively balances privacy protection and model utility, significantly improving model adaptability and convergence efficiency on non-independent and identically distributed (non-IID) data.
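To make the abstract's mechanism concrete, the following is a minimal NumPy sketch of the core ideas it describes: a diagonal Fisher Information approximation scoring parameter informativeness, a partition of parameters into a locally retained set and an uploaded set, a progressive schedule for the retained fraction, and Gaussian-noise perturbation of only the uploaded parameters. All function names, the schedule endpoints `r0`/`r_max`, and the clipping/noise constants are illustrative assumptions, not the paper's actual algorithm or hyperparameters.

```python
import numpy as np

def fisher_scores(per_sample_grads):
    # Diagonal Fisher approximation: mean squared gradient per parameter.
    # per_sample_grads has shape (num_samples, num_params).
    return np.mean(np.square(per_sample_grads), axis=0)

def partition_params(scores, ratio):
    # Keep the highest-information parameters locally (private);
    # the rest are uploaded for server-side aggregation.
    k = int(round(ratio * scores.size))
    order = np.argsort(scores)[::-1]          # descending informativeness
    return order[:k], order[k:]               # (retain_idx, upload_idx)

def retain_ratio(t, total_rounds, r0=0.1, r_max=0.5):
    # Progressive mechanism (assumed linear schedule): the locally
    # retained fraction grows from r0 to r_max over training rounds,
    # so less of the model is noised and uploaded as training proceeds.
    return r0 + (r_max - r0) * min(t / total_rounds, 1.0)

def dp_upload(params, upload_idx, clip=1.0, sigma=0.8, seed=None):
    # Differential privacy on the uploaded slice only: clip its L2 norm,
    # then add calibrated Gaussian noise before sending to the server.
    rng = np.random.default_rng(seed)
    u = params[upload_idx]
    u = u / max(1.0, np.linalg.norm(u) / clip)
    return u + rng.normal(0.0, sigma * clip, size=u.shape)
```

Under this sketch, each client scores its parameters after local training, retains the top fraction given by the current round's schedule, and uploads only the clipped, noised remainder, which is the split the abstract credits for reducing injected noise over time.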