Distributed Multi-Agent Learning is More Effective than Single-Agent Learning
Abstract Interpretable distributed group intelligence has emerged as an essential topic in artificial intelligence. The mathematical interpretability of prediction outcomes is critical for improving the reliability of machine learning, especially in stochastic settings. Although published experimental results show that group-intelligence predictions outperform those of individual intelligence, establishing a mathematical foundation for the superiority of distributed group intelligence remains a challenging problem for enhancing the interpretability of learning systems. Using the principle of Rademacher complexity, we prove mathematically that, with high probability, the learning quality of a group machine intelligence is better than that of any of its subsets, and significantly better than that of any individual member when the number of individuals in the group is large enough. We propose a multi-agent distributed learning method for time-series forecasting that incorporates multi-agent cooperation, as observed in cognitive processes, into machine learning. In addition, since the way the agents cooperate and interact affects the training of the model, we provide a generalized interaction scheme and prove its convergence. We conduct extensive experiments on time-series prediction for classical chaotic systems, and the results indicate that distributed group intelligence significantly improves the prediction accuracy of individual intelligence. The prediction error decreases substantially as the number of agents increases, confirming the theoretical analysis and the model's validity. This work provides new ideas for theoretically exploring how group intelligence emerges.
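The central claim, that a group of learners outperforms its individual members, can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual method: it trains several independent ridge-regression agents on bootstrap samples of a chaotic logistic-map series and compares the averaged (group) prediction against the agents' individual errors. By convexity of the squared error, the ensemble's mean-squared error never exceeds the average individual error, which mirrors the direction of the paper's Rademacher-complexity argument.

```python
import numpy as np

rng = np.random.default_rng(0)

# Chaotic logistic map: x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(600)
x[0] = 0.3
for t in range(599):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

def features(v):
    # Polynomial features in x_t used to predict x_{t+1}
    return np.stack([np.ones_like(v), v, v ** 2, v ** 3], axis=1)

X_train, y_train = features(x[:400]), x[1:401]
X_test, y_test = features(x[400:599]), x[401:600]

n_agents = 20
preds, ind_mse = [], []
for _ in range(n_agents):
    # Each agent trains on its own bootstrap resample (ridge regression)
    idx = rng.integers(0, len(y_train), len(y_train))
    A, b = X_train[idx], y_train[idx]
    w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(4), A.T @ b)
    p = X_test @ w
    preds.append(p)
    ind_mse.append(np.mean((p - y_test) ** 2))

# Group prediction: simple average over all agents
ensemble = np.mean(preds, axis=0)
ens_mse = np.mean((ensemble - y_test) ** 2)

# By Jensen's inequality, the group's MSE cannot exceed the
# average of the individual agents' MSEs.
print(ens_mse <= np.mean(ind_mse))
```

The simple-average interaction here is only one instance of the generalized interaction scheme the abstract refers to; weighted or iterative consensus rules would fit the same template.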