An Approach for Semantic Web Discovery Using Unsupervised Learning Algorithms

Author(s):  
Yan Shen ◽  
Fangfang Liu


1998 ◽  
Vol 10 (6) ◽  
pp. 1567-1586 ◽  
Author(s):  
Terence David Sanger

This article proposes a new method for interpreting computations performed by populations of spiking neurons. Neural firing is modeled as a rate-modulated random process for which the behavior of a neuron in response to external input can be completely described by its tuning function. I show that under certain conditions, cells with any desired tuning functions can be approximated using only spike coincidence detectors and linear operations on the spike output of existing cells. I show examples of adaptive algorithms based on only spike data that cause the underlying cell-tuning curves to converge according to standard supervised and unsupervised learning algorithms. Unsupervised learning based on principal components analysis leads to independent cell spike trains. These results suggest a duality relationship between the random discrete behavior of spiking cells and the deterministic smooth behavior of their tuning functions. Classical neural network approximation methods and learning algorithms based on continuous variables can thus be implemented within networks of spiking neurons without the need to make numerical estimates of the intermediate cell firing rates.
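One way to make the rate-modulated framing concrete is a small simulation. The following is only a rough sketch under assumed parameters (Gaussian tuning curves, 1 ms Bernoulli spike bins, an Oja-rule update), not the construction from the article: it applies a PCA-like unsupervised update directly to spike vectors, so learning proceeds from spike data alone without an explicit estimate of the intermediate firing rates.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed setup (not from the article): 20 input cells with Gaussian
    # tuning curves over a 1-D stimulus, firing as rate-modulated
    # Bernoulli processes in 1 ms time bins with a 100 Hz peak rate.
    n_cells, dt, max_rate = 20, 0.001, 100.0
    centers = np.linspace(-1.0, 1.0, n_cells)

    def tuning(stim):
        """Firing rate (Hz) of each cell for a scalar stimulus value."""
        return max_rate * np.exp(-((stim - centers) ** 2) / (2 * 0.3 ** 2))

    # Oja's rule applied to raw spike vectors: an unsupervised, PCA-like
    # update that never forms a numerical estimate of the firing rates.
    w = rng.normal(scale=0.1, size=n_cells)
    eta = 0.01
    for _ in range(100000):
        stim = rng.uniform(-1.0, 1.0)                      # random stimulus each bin
        spikes = (rng.random(n_cells) < tuning(stim) * dt).astype(float)
        y = w @ spikes                                     # linear operation on spikes
        w += eta * y * (spikes - y * w)                    # Oja update -> first principal component

    # The learned weights define a derived cell whose effective tuning curve
    # is a linear combination of the input cells' tuning curves.
    print(np.round(w, 3))

The design point the sketch illustrates is the duality described in the abstract: the update rule only ever sees discrete spike events, yet the quantity that converges is a smooth, deterministic tuning function of the derived cell.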


Author(s):  
Deepali Virmani ◽  
Nikita Jain ◽  
Ketan Parikh ◽  
Shefali Upadhyaya ◽  
Abhishek Srivastav

This article describes how data becomes useful when it can be organized, linked with other data, and grouped into clusters. Clustering is the process of organizing a given set of objects into disjoint groups called clusters. There are a number of clustering algorithms, such as k-means, k-medoids, and normalized k-means, so the focus remains on the efficiency and accuracy of the algorithms, on the time taken for clustering, and on reducing overlap between clusters. K-means is one of the simplest unsupervised learning algorithms for the well-known clustering problem. It partitions data into K clusters, but its initial centroids are chosen at random, and its reliance on numeric values prevents it from being applied to real-world data containing categorical attributes; poor selection of initial centroids can also result in poor clustering. This article deals with a proposed variant of k-means that selects the initial centres deliberately and normalizes the data, resulting in better clustering, reduced overlap, and less time required for clustering.
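The abstract does not spell out the variant's exact centre-selection rule or normalization, so the following is a minimal sketch of the two ideas it names, under assumptions: min-max normalization of the data and a deterministic farthest-point seeding in place of random initial centroids, layered on standard k-means.

    import numpy as np

    def normalize(X):
        """Min-max normalization so every feature lies in [0, 1]."""
        mins, maxs = X.min(axis=0), X.max(axis=0)
        return (X - mins) / np.where(maxs > mins, maxs - mins, 1.0)

    def initial_centres(X, k):
        """Deterministic farthest-point seeding instead of random centroids
        (a stand-in for the article's selection rule, which is not given)."""
        centres = [X[np.argmin(X.sum(axis=1))]]            # start from an extreme point
        while len(centres) < k:
            d = np.min([np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
            centres.append(X[np.argmax(d)])                # farthest point from the chosen set
        return np.array(centres)

    def kmeans(X, k, iters=100):
        X = normalize(X)
        C = initial_centres(X, k)
        for _ in range(iters):
            # Assign each point to its nearest centre, then recompute centres.
            labels = np.argmin(np.linalg.norm(X[:, None] - C[None], axis=2), axis=1)
            new_C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                              for j in range(k)])
            if np.allclose(new_C, C):
                break
            C = new_C
        return labels, C

Normalization keeps any single large-scale feature from dominating the distance computation, and the deterministic seeding removes the run-to-run variability that random initial centroids introduce.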


2020 ◽  
Vol 167 ◽  
pp. 1849-1860 ◽  
Author(s):  
Prakhar Shrivastava ◽  
Kapil Kumar Soni ◽  
Akhtar Rasool

2020 ◽  
Vol 175 ◽  
pp. 677-682
Author(s):  
Amelec Viloria ◽  
Nelson Alberto Lizardo Zelaya ◽  
Noel Varela

2021 ◽  
Vol 2079 (1) ◽  
pp. 012028
Author(s):  
Xiaoqing Peng ◽  
Yong Shuai ◽  
Yaxi Gan ◽  
Yaokai Chen

Abstract Aiming at the problem that current feature selection algorithms cannot adapt to both supervised and unsupervised learning data and offer poor feature interpretability, this paper proposes a hybrid feature selection model based on machine learning and a knowledge graph. Following the idea of hybridization, the model uses supervised learning algorithms, unsupervised learning algorithms, and knowledge graph technology to model the problem from the perspectives of both data features and text features. First, data-based feature weights are obtained through the machine learning model; then text-based weights are obtained using knowledge graph technology; finally, the two weight sets are combined into a feature matrix with good explanatory properties that satisfies both the data and text features. A case analysis shows that the proposed method is effective and interpretable.
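The abstract does not give the fusion rule, so the following is a minimal sketch under assumptions: data-based weights come from a supervised model's feature importances, the text-based (knowledge-graph) weights are replaced by placeholder values, and the two sets are merged with a simple convex combination before ranking the features.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    X, y, names = data.data, data.target, data.feature_names

    # Data-based feature weights from a supervised model
    # (one of the two weight sets the hybrid model combines).
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    data_w = rf.feature_importances_

    # Text-based weights: in the article these come from the knowledge graph;
    # here random values are only a stand-in for that score (assumption).
    rng = np.random.default_rng(0)
    text_w = rng.random(len(names))
    text_w /= text_w.sum()

    # Combine the two weight sets; a simple convex combination is used because
    # the abstract does not specify the actual fusion rule.
    alpha = 0.5
    combined = alpha * data_w + (1 - alpha) * text_w

    # Rank features by the combined weight.
    for i in np.argsort(combined)[::-1][:10]:
        print(f"{names[i]:25s}  {combined[i]:.3f}")

Keeping the two weight sets separate until the final combination step is what lets the ranking be explained in terms of both the data-driven importance and the text-derived evidence for each feature.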

