Abstract
Federated learning enables decentralized model training without exchanging raw data. Among existing frameworks, Federated Averaging (FedAVG) is the most likely to be deployed in real-world applications due to its low communication overhead. However, its global model convergence can degrade substantially when the data distributions of individual users differ. Therefore, in this paper, we propose an aggregation strategy, the Significant Weighted Feature Aggregation method, in which features with large variation are weighted appropriately on the server side to improve model convergence speed even in non-identically and independently distributed (non-IID) environments. Our experiments show that our approach achieves an improvement of over 10% compared to FedAVG.
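To make the general idea concrete, the sketch below shows one plausible form of server-side aggregation in which parameters whose updates vary more across clients receive proportionally larger weights. This is a minimal illustration under assumed names and shapes (the function `significant_weighted_aggregate`, flattened parameter vectors, and the magnitude-based weighting rule are all assumptions), not the paper's actual implementation.

```python
import numpy as np

def significant_weighted_aggregate(global_params, client_params, eps=1e-8):
    """Hypothetical server-side aggregation: for each parameter, clients
    whose update deviates more from the global model contribute
    proportionally more to the new value. Illustrative sketch only."""
    # Per-client updates relative to the current global model, shape (C, D)
    updates = np.stack([p - global_params for p in client_params])
    # Per-client, per-parameter variation magnitude (eps avoids division by zero)
    magnitude = np.abs(updates) + eps
    # Normalize over clients so weights for each parameter sum to 1
    weights = magnitude / magnitude.sum(axis=0)
    aggregated_update = (weights * updates).sum(axis=0)
    return global_params + aggregated_update

# Usage: three clients, each holding a flattened parameter vector
global_params = np.zeros(4)
clients = [np.array([0.1, 0.0, 0.2, -0.1]),
           np.array([0.2, 0.0, 0.1,  0.0]),
           np.array([0.0, 0.1, 0.3, -0.2])]
print(significant_weighted_aggregate(global_params, clients))
```

With uniform weights this reduces to plain FedAVG-style averaging; the variation-based weighting is what biases the aggregate toward parameters that differ strongly across clients.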
Keywords
deep learning, distributed systems, federated learning