Convergence analysis of distributed multi-penalty regularized pairwise learning

2019 ◽  
Vol 18 (01) ◽  
pp. 109-127
Author(s):  
Ting Hu ◽  
Jun Fan ◽  
Dao-Hong Xiang

In this paper, we establish the error analysis for distributed pairwise learning with multi-penalty regularization, based on a divide-and-conquer strategy. We demonstrate, via a [Formula: see text]-error bound, that the learning performance of this distributed scheme is as good as that of a single machine processing the whole data set. With semi-supervised data, we can relax the restriction on the number of local machines and enlarge the range of target functions that guarantee the optimal learning rate. As a concrete example, we show that the results in this paper apply to the distributed pairwise learning algorithm with manifold regularization.
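To illustrate the divide-and-conquer strategy analyzed above, the following is a minimal Python sketch: each local machine fits a linear scoring function under a pairwise squared loss with a single L2 penalty, and the global estimator is the plain average of the local solutions. This is an illustrative simplification, not the authors' kernel-based multi-penalty scheme; the function names and synthetic data are assumptions.

```python
import numpy as np

def fit_local(X, y, lam=0.1, lr=0.05, epochs=300):
    """Gradient descent on a pairwise squared loss with a single L2 penalty."""
    n, d = X.shape
    D = X[:, None, :] - X[None, :, :]          # pairwise feature differences
    R = y[:, None] - y[None, :]                # pairwise label differences
    w = np.zeros(d)
    for _ in range(epochs):
        resid = D @ w - R                      # (n, n) pairwise residuals
        grad = 2.0 * np.einsum("ij,ijk->k", resid, D) / (n * n) + 2.0 * lam * w
        w -= lr * grad
    return w

def distributed_pairwise_fit(X, y, n_machines=4, lam=0.1):
    """Divide-and-conquer: train on disjoint chunks, then average the local estimators."""
    parts = np.array_split(np.arange(len(X)), n_machines)
    local_ws = [fit_local(X[idx], y[idx], lam) for idx in parts]
    return np.mean(local_ws, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.1 * rng.normal(size=200)
    w_hat = distributed_pairwise_fit(X, y)
    cos = w_hat @ w_true / (np.linalg.norm(w_hat) * np.linalg.norm(w_true))
    print(f"cosine similarity between averaged estimator and true weights: {cos:.3f}")
```

The averaging step is the essence of the divide-and-conquer analysis: each local estimator sees only a fraction of the pairs, and the averaged estimator aims to match the accuracy of training on the full data.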

Author(s):  
Alexander Driyarkoro ◽  
Nurain Silalahi ◽  
Joko Haryatno

Predicting the user's location in a mobile network is very important, because call routing for a mobile station (MS) depends on the MS's current position. The high mobility of MSs, especially in urban areas, means that MS tracking affects the performance of the mobile network, particularly the efficiency of the control channels on the air interface. One approach to tracking is to learn the movement behaviour that determines the MS's position. From the MSC/VLR, the MS's position at a given time can be obtained. Because an MS's location area is unique at each point in time, and this constitutes the MS's behaviour, a movement profile can be built. Using a Neural Network (NN), the MS's future location area can then be predicted. The NN model used in this study is backpropagation. Several NN parameters affecting the performance of user location prediction were examined, including the noise factor, momentum, and learning rate. This study obtained optimal values of learning rate = 0.5 and noise factor = 1.
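As a rough illustration of the approach described above, the sketch below trains a one-hidden-layer backpropagation network with momentum on a synthetic movement profile to predict the next location area. The learning rate of 0.5 mirrors the reported optimum, while the network size, window length, and synthetic data are assumptions; the noise-factor input perturbation studied in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AREAS, WINDOW, HIDDEN = 6, 3, 16

def one_hot(window):
    """Concatenate one-hot encodings of a window of recent location areas."""
    x = np.zeros(WINDOW * N_AREAS)
    for k, area in enumerate(window):
        x[k * N_AREAS + area] = 1.0
    return x

# Synthetic daily movement profile: a repeating commute-like cycle with 10% noise.
cycle = [0, 1, 2, 3, 2, 1]
seq = [cycle[t % len(cycle)] if rng.random() > 0.1 else int(rng.integers(N_AREAS))
       for t in range(2000)]

X = np.array([one_hot(seq[t:t + WINDOW]) for t in range(len(seq) - WINDOW)])
Y = np.eye(N_AREAS)[seq[WINDOW:]]                     # next location area, one-hot

W1 = rng.normal(0, 0.1, (WINDOW * N_AREAS, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_AREAS));          b2 = np.zeros(N_AREAS)
vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)
lr, momentum = 0.5, 0.9                               # lr = 0.5 mirrors the reported optimum

for epoch in range(200):
    H = np.tanh(X @ W1 + b1)                          # forward pass
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)                 # softmax over location areas
    dZ2 = (P - Y) / len(X)                            # backpropagate cross-entropy
    dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
    dZ1 = (dZ2 @ W2.T) * (1.0 - H ** 2)
    dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)
    vW2 = momentum * vW2 - lr * dW2; W2 += vW2; b2 -= lr * db2
    vW1 = momentum * vW1 - lr * dW1; W1 += vW1; b1 -= lr * db1

acc = (P.argmax(axis=1) == Y.argmax(axis=1)).mean()
print(f"training accuracy for next-location-area prediction: {acc:.2f}")
```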


Energies ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 3654
Author(s):  
Nastaran Gholizadeh ◽  
Petr Musilek

In recent years, machine learning methods have found numerous applications in power systems for load forecasting, voltage control, power quality monitoring, anomaly detection, and more. Distributed learning is a subfield of machine learning and a descendant of the multi-agent systems field. It is a collaborative, decentralized approach to machine learning designed to handle large data volumes, solve complex learning problems, and increase privacy. Moreover, it can reduce the risk of a single point of failure compared to fully centralized approaches and lower bandwidth and central storage requirements. This paper introduces three existing distributed learning frameworks and reviews the applications that have been proposed for them in power systems so far. It summarizes the methods, benefits, and challenges of distributed learning frameworks in power systems and identifies gaps in the literature for future studies.
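As a concrete example of one widely used distributed learning framework (federated averaging), the hedged sketch below trains a simple load-forecasting-style linear model across several clients; only model weights are exchanged, illustrating the privacy and bandwidth benefits discussed above. The client setup, data, and function names are illustrative assumptions, not drawn from the reviewed papers.

```python
import numpy as np

def local_update(w, X, y, lr=0.01, steps=20):
    """A few local SGD steps on squared-error loss; raw data never leaves the client."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

def federated_average(clients, rounds=30, dim=4):
    """Federated averaging: broadcast the global model, collect and average local updates."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
        sizes = np.array([len(X) for X, _ in clients], dtype=float)
        w_global = np.average(local_ws, axis=0, weights=sizes)  # weight by data size
    return w_global

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w_true = np.array([2.0, -1.0, 0.5, 3.0])      # e.g. temperature, hour, weekday, lagged load
    clients = []
    for _ in range(5):                            # five utilities / feeders
        X = rng.normal(size=(100, 4))
        clients.append((X, X @ w_true + 0.1 * rng.normal(size=100)))
    print("recovered weights:", np.round(federated_average(clients), 2))
```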


1994 ◽  
Vol 05 (02) ◽  
pp. 115-122
Author(s):  
MOSTEFA GOLEA

We describe a Hebb-type algorithm for learning unions of nonoverlapping perceptrons with binary weights. Two perceptrons are said to be nonoverlapping if they do not share any input variables. The learning algorithm is able to find both the network architecture and the weight values necessary to represent the target function. Moreover, the algorithm is local, homogeneous, and simple enough to be biologically plausible. We investigate the average behavior of this algorithm as a function of the size of the training set. We find that, as the size of the training set increases, the hypothesis network built by the algorithm “converges” to the target network, in terms of both the number of perceptrons and their connectivity. Moreover, the generalization rate converges exponentially to perfect generalization as a function of the number of training examples. The analytic expressions are in excellent agreement with the numerical simulations. To our knowledge, this is the first average-case analysis of an algorithm that finds both the weight values and the network connectivity.
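A much-simplified sketch of the Hebbian idea is given below: for a target that is a union (OR) of two nonoverlapping binary-weight perceptrons, a local correlation estimate between each input and the label recovers which inputs are connected and the sign of their binary weights. This is an assumption-laden illustration of the correlation step only; it does not reproduce Golea's full procedure, in particular the step that groups the recovered inputs into separate perceptrons.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 10, 5000                       # 10 binary (+/-1) inputs, 5000 training examples
W_TRUE = {0: 1, 1: -1, 2: 1, 3: 1, 4: 1, 5: 1}   # inputs 6..9 are irrelevant

def target(x):
    """Union (OR) of two nonoverlapping binary-weight perceptrons."""
    a = x[0] - x[1] + x[2] > 0
    b = x[3] + x[4] + x[5] > 0
    return 1 if (a or b) else -1

X = rng.choice([-1, 1], size=(N, D))
y = np.array([target(x) for x in X])

# Hebb-type local estimate: correlation of each input with the label.
corr = X.T @ y / N

# Inputs with non-negligible correlation are connected; the sign gives the binary weight.
relevant = np.where(np.abs(corr) > 0.1)[0]
weights = {int(i): int(np.sign(corr[i])) for i in relevant}
print("recovered connections and binary weights:", weights)
print("true weights:", W_TRUE)
```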


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1576 ◽  
Author(s):  
Xiaomao Zhou ◽  
Tao Bai ◽  
Yanbin Gao ◽  
Yuntao Han

Extensive studies have shown that many animals’ capability of forming spatial representations for self-localization, path planning, and navigation relies on the functionalities of place and head-direction (HD) cells in the hippocampus. Although there are numerous hippocampal modeling approaches, only a few span the wide range of functionalities from processing raw sensory signals to planning and action generation. This paper presents a vision-based navigation system that generates place and HD cells through learning from visual images, builds topological maps based on the learned cell representations, and performs navigation using hierarchical reinforcement learning. First, place and HD cells are trained from sequences of visual stimuli in an unsupervised fashion. A modified Slow Feature Analysis (SFA) algorithm is proposed to learn the different cell types intentionally by restricting their learning to separate phases of the spatial exploration. Then, to extract the metric information encoded in these unsupervised representations, a self-organized learning algorithm is adopted to learn over the emerging cell activities and to generate topological maps that reveal, respectively, the topology of the environment and the robot’s head direction. This enables the robot to perform self-localization and orientation detection based on the generated maps. Finally, goal-directed navigation is performed using reinforcement learning in continuous state spaces represented by the population activities of place cells. In particular, since the topological map provides a natural hierarchical representation of the environment, hierarchical reinforcement learning (HRL) is used to exploit this hierarchy and accelerate learning. The HRL operates on different spatial scales: a high-level policy learns to select subgoals, and a low-level policy learns over primitive actions to specialize on the selected subgoals. Experimental results demonstrate that our system navigates a robot to the desired position effectively, and that HRL shows much better learning performance than standard RL in solving our navigation tasks.
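As an illustration of the first stage, the sketch below implements a plain linear Slow Feature Analysis on a synthetic sensor time series and recovers its slowly varying latent variable. The modified, phase-restricted SFA used in the paper, the visual inputs, and the downstream map learning and HRL are not reproduced here, so all data and function names are assumptions.

```python
import numpy as np

def linear_sfa(X, n_components=2):
    """Return linear projections whose outputs vary most slowly over time.

    X: array of shape (T, d), a temporally ordered signal.
    """
    X = X - X.mean(axis=0)
    # Whiten the input so that all directions have unit variance.
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    W_white = eigvec / np.sqrt(eigval)          # columns whiten X
    Z = X @ W_white
    # Slowness objective: minimize the variance of the temporal derivative.
    dZ = np.diff(Z, axis=0)
    dcov = np.cov(dZ, rowvar=False)
    slow_val, slow_vec = np.linalg.eigh(dcov)   # ascending: slowest directions first
    return W_white @ slow_vec[:, :n_components]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10 * np.pi, 2000)
    slow = np.sin(0.1 * t)                      # slowly varying latent (e.g. position)
    fast = rng.normal(size=t.size)              # fast-varying nuisance signal
    mix = rng.normal(size=(2, 5))
    X = (np.outer(slow, mix[0]) + 0.3 * np.outer(fast, mix[1])
         + 0.05 * rng.normal(size=(t.size, 5)))
    W = linear_sfa(X, n_components=1)
    recovered = (X - X.mean(axis=0)) @ W[:, 0]
    corr = np.corrcoef(recovered, slow)[0, 1]
    print(f"|correlation| between slowest feature and latent position: {abs(corr):.2f}")
```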


Author(s):  
Stefan Bosse

Ubiquitous computing and the Internet of Things (IoT) are growing rapidly in today's life and evolving into self-organizing systems (SoS). A unified and scalable information processing and communication methodology is required. In this work, mobile agents are used to merge the IoT seamlessly with mobile and cloud environments. A portable and scalable Agent Processing Platform (APP) provides an enabling technology that is central to the deployment of Multi-Agent Systems (MAS) in strongly heterogeneous networks, including the Internet. A large-scale use case deploying multi-agent systems in a distributed heterogeneous seismic-sensor and geodetic network is used to demonstrate the suitability of the MAS and platform approach. The MAS is used for earthquake monitoring based on a new incremental distributed learning algorithm applied to seismic station data, which can be extended with ubiquitous sensing devices such as smartphones. Different (mobile) agents perform sensing, aggregation, local learning and prediction, global voting and decision making, and the application logic.
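A minimal sketch of the sensing, incremental learning, and voting pipeline described above is given below, with the agent platform itself abstracted away: each station agent incrementally learns a baseline of its local signal (here via Welford's online mean/variance update, an assumed stand-in for the paper's learning algorithm) and votes when a reading is anomalous, while a decision agent declares an event on a majority vote. Class names, thresholds, and the synthetic signal are illustrative assumptions.

```python
import numpy as np

class StationAgent:
    """Incrementally learns mean/variance of its local signal and casts anomaly votes."""
    def __init__(self, z_threshold=4.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.z_threshold = z_threshold

    def observe(self, x):
        # Welford's online update of the local baseline.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        std = np.sqrt(self.m2 / self.n) if self.n > 1 else 1.0
        return abs(x - self.mean) > self.z_threshold * max(std, 1e-6)

class DecisionAgent:
    """Aggregates station votes and declares an event on a majority."""
    def decide(self, votes):
        return sum(votes) > len(votes) / 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stations = [StationAgent() for _ in range(5)]
    decider = DecisionAgent()
    for t in range(300):
        quake = 200 <= t < 210                    # simulated event window
        readings = rng.normal(0, 1, size=5) + (8.0 if quake else 0.0)
        votes = [agent.observe(x) for agent, x in zip(stations, readings)]
        if decider.decide(votes):
            print(f"t={t}: majority of station agents report an event")
```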

