Competitive Learning
Recently Published Documents


TOTAL DOCUMENTS: 631 (five years: 49)
H-INDEX: 36 (five years: 3)

2021 ◽ Vol 7 ◽ pp. e763
Author(s): Xingsi Xue, Haolin Wang, Wenyu Liu

Sensor ontologies formally model the core concepts of the sensor domain and their relationships, facilitating trusted communication and collaboration in the Artificial Intelligence of Things (AIoT). However, because the ontology-building process is subjective, sensor ontologies may define the same entities with different terms, leading to the problem of heterogeneity. Integrating the knowledge of two heterogeneous sensor ontologies requires determining the correspondences between their concepts, a task known as ontology matching. Recently, neural networks have increasingly been regarded as an effective approach to the ontology heterogeneity problem, but they require a large number of manually labelled training samples, which poses an open challenge. To improve the quality of sensor ontology alignment, this work proposes an unsupervised neural network model. It first casts the ontology matching problem as a binary classification problem, and then uses a competitive learning strategy to efficiently cluster the ontologies to be matched, without requiring labelled training samples. The experiments use the benchmark track provided by the Ontology Alignment Evaluation Initiative (OAEI) and several real sensor ontology alignment tasks to test the proposal's performance. The results show that the proposed approach determines higher-quality alignments than other matching strategies across different domains, such as bibliographic and real sensor ontologies.
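The clustering step described above rests on classic winner-take-all competitive learning, in which prototype vectors compete for each input and only the winning prototype is updated. The abstract does not specify the authors' actual network, so the following is only a minimal, generic sketch of that strategy applied to toy similarity features for candidate concept pairs; the feature values, learning rate, and number of units are all illustrative assumptions:

```python
import numpy as np

def competitive_learning(X, n_units=2, lr=0.1, epochs=50, seed=0):
    """Winner-take-all competitive learning: for each sample, only the
    closest prototype is moved toward it, so prototypes specialize and
    samples end up clustered by their nearest prototype."""
    rng = np.random.default_rng(seed)
    # Initialize prototypes from randomly chosen training samples.
    W = X[rng.choice(len(X), size=n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            W[winner] += lr * (x - W[winner])  # move only the winner
    # Assign each sample to its nearest prototype (cluster label).
    labels = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1)
    return W, labels

# Toy similarity features for candidate concept pairs: two pairs with
# high similarity scores (match-like) and two with low scores.
X = np.array([[0.90, 0.80], [0.85, 0.95], [0.10, 0.20], [0.15, 0.05]])
W, labels = competitive_learning(X)
```

After training, the two prototypes settle near the centroids of the match-like and non-match-like groups, so the candidate pairs are separated into two classes without any labelled samples, which is the essential appeal of the unsupervised strategy.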


2021 ◽ Vol 3 (11) ◽ pp. 2170079
Author(s): Houji Zhou, Jia Chen, Yinan Wang, Sen Liu, Yi Li, ...

2021 ◽ pp. 2100114
Author(s): Houji Zhou, Jia Chen, Yinan Wang, Sen Liu, Yi Li, ...

2021
Author(s): Pietro Barbiero, Gabriele Ciravegna, Vincenzo Randazzo, Eros Pasero, Giansalvo Cirrincione

Electronics ◽ 2021 ◽ Vol 10 (13) ◽ pp. 1566
Author(s): Manuel Palomo-Duarte, Antonio García-Domínguez, Antonio Balderas

Competitions are widely used to motivate students in diverse learning processes, including computer programming. This paper presents a methodology for designing and assessing competitive learning scenarios that let students develop three coding skills: the ability to compete against unknown competitors, against known competitors, and against refined versions of known competitors. The proposal is based on peer code review, implemented as an improvement cycle after the code has been disseminated among participants. A case study evaluating the methodology was conducted with two cohorts of students in an undergraduate course. Analysis of the grades suggests that while performance improved after our assistance, students could still fail or succeed independently of it. Complementary data from student questionnaires and supervisor observations align with this finding. In conclusion, the evidence supports the validity of the methodology. Several guidelines based on this experience are also provided for transferring the proposal to other environments.


2021 ◽  
Author(s):  
Alexander G. Ororbia

In this article, we propose a novel form of unsupervised learning that we call continual competitive memory (CCM), together with a simple framework that unifies related neural models operating under the principle of competition. The resulting neural system, which takes inspiration from adaptive resonance theory, offers a simple yet effective approach to combating catastrophic forgetting in continual classification problems. We compare our approach with several other forms of competitive learning and find that: (1) competitive learning in general offers a promising pathway towards acquiring sparse representations that reduce neural cross-talk, and (2) our proposed variant, the CCM, which is designed with task streams in mind, is needed to prevent old information from being overwritten. CCM yields promising results on continual learning benchmarks, including Split MNIST and Split NotMNIST.
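The link between competition and sparse representations noted in finding (1) can be illustrated with a k-winners-take-all layer: units compete for each input and only the k strongest stay active, which limits overlap (cross-talk) between the codes of different inputs. This is a generic illustration of that principle, not the CCM model itself; the layer sizes and the value of k are arbitrary assumptions:

```python
import numpy as np

def kwta(h, k):
    """k-winners-take-all: keep the k largest activations, zero the
    rest, yielding a sparse code in which units compete per input."""
    out = np.zeros_like(h)
    idx = np.argsort(h)[-k:]      # indices of the k strongest units
    out[idx] = h[idx]
    return out

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 8))      # 16 competing units, 8-dim inputs
x = rng.normal(size=8)
code = kwta(W @ x, k=3)           # sparse representation of x: only
                                  # k units remain nonzero
```

Because each input activates only a small, input-dependent subset of units, updates driven by one input leave most other units untouched, which is the intuition behind using competition to reduce interference between tasks.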

