Modified clustering algorithm for projective ART neural network

Author(s):  
Roman Krakovsky ◽  
Radoslav Forgac ◽  
Igor Mokris
Water ◽  
2021 ◽  
Vol 13 (15) ◽  
pp. 2011
Author(s):  
Pablo Páliz Larrea ◽  
Xavier Zapata Ríos ◽  
Lenin Campozano Parra

Despite the importance of dams for water distribution across various uses, adequate forecasting at a day-to-day scale still requires intensive study worldwide. Machine learning models have been widely applied in water resource studies and have shown satisfactory results, including in the time series forecasting of water levels and dam flows. In this study, neural network (NN) models and adaptive neuro-fuzzy inference system (ANFIS) models were generated to forecast the water level of the Salve Faccha reservoir, which supplies water to Quito, the capital of Ecuador. For the NN, a non-linear input–output network with a maximum delay of 13 days was used, varying the number of nodes and hidden layers. For ANFIS, with delays of up to four days, the subtractive clustering algorithm was used with its hyperparameter varied from 0.5 to 0.8. The results indicate that precipitation was not an influential input in the prediction of the reservoir water level. The best neural network and ANFIS models showed high performance, with r > 0.95, a Nash index > 0.95, and an RMSE < 0.1. The best neural network model was t + 4, and the best ANFIS model was t + 6.
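As an illustration of the non-linear input–output (lagged) neural network idea described in this abstract, the following is a minimal sketch, not the authors' code: the file salve_faccha_levels.csv, the water_level column, and the network size are hypothetical placeholders, and only the NN branch (not ANFIS) is sketched.

```python
# Sketch of a nonlinear autoregressive forecaster with lagged inputs (not the authors' code).
# File name, column name, and hidden-layer sizes are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_lags):
    """Build a design matrix of n_lags delayed values for one-step-ahead forecasting."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

df = pd.read_csv("salve_faccha_levels.csv")        # hypothetical file
level = df["water_level"].to_numpy()                # hypothetical column

X, y = make_lagged(level, n_lags=13)                # up to 13 days of delay, as in the study
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((y[split:] - pred) ** 2))    # RMSE on held-out data
print(f"RMSE: {rmse:.3f}")
```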


This paper presents brain tumor detection and segmentation using image processing techniques. Convolutional neural networks can be applied to medical research in brain tumor analysis. The tumor in the MRI scans is segmented using the K-means clustering algorithm, which is applied to every scan, and the segmented output is then fed to a convolutional neural network for training and testing. In our CNN, we propose to use ReLU and sigmoid activation functions to determine the end result. Training is performed using CPU power only; no GPU is used. The research is done in two phases: image processing and applying the neural network.
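A minimal sketch of the two-phase pipeline described in this abstract, assuming a binary tumor/no-tumor output: K-means intensity clustering isolates a candidate region, and a small CNN with ReLU hidden activations and a sigmoid output classifies the masked slice. The input shape, cluster count, and architecture are illustrative assumptions, not the authors' implementation.

```python
# Sketch: K-means segmentation of an MRI slice, then a small CNN (ReLU + sigmoid).
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def kmeans_segment(scan, n_clusters=3):
    """Cluster pixel intensities; return a mask of the brightest cluster (assumed tumor candidate)."""
    flat = scan.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
    brightest = np.argmax(km.cluster_centers_.ravel())
    return (km.labels_ == brightest).reshape(scan.shape).astype(np.float32)

class TumorCNN(nn.Module):
    """Small CNN with ReLU hidden activations and a sigmoid output, as mentioned in the abstract."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 32 * 32, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),          # tumor / no-tumor probability
        )

    def forward(self, x):
        return self.net(x)

scan = np.random.rand(128, 128)                       # placeholder for one MRI slice
mask = kmeans_segment(scan)
x = torch.from_numpy(mask).unsqueeze(0).unsqueeze(0)  # (batch, channel, H, W)
print(TumorCNN()(x))                                  # CPU-only forward pass, as in the paper
```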


2012 ◽  
Vol 263-266 ◽  
pp. 2173-2178
Author(s):  
Xin Guang Li ◽  
Min Feng Yao ◽  
Li Rui Jian ◽  
Zhen Jiang Li

A probabilistic neural network (PNN) speech recognition model based on a partition clustering algorithm is proposed in this paper. The most important advantage of the PNN is that training is easy and instantaneous; therefore, the PNN is capable of dealing with real-time speech recognition. Moreover, in order to increase the performance of the PNN, the selection of the data set is one of the most important issues. In this paper, using the partition clustering algorithm to select data is proposed. The proposed model is tested on two data sets from the field of spoken Arabic numbers, with promising results. The performance of the proposed model is compared to a single back propagation neural network and an integrated back propagation neural network. The final comparison shows that the proposed model performs better than the other two neural networks, with an accuracy rate of 92.41%.
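For context, a probabilistic neural network reduces to a Parzen-window classifier whose "training" is simply storing exemplars, which is why learning is one-pass and instantaneous. The sketch below illustrates this, with KMeans-based exemplar selection standing in for the paper's partition clustering; the kernel width and per-class exemplar count are assumptions.

```python
# Sketch of a PNN (Parzen-window classifier) with clustering-based data selection.
import numpy as np
from sklearn.cluster import KMeans

class PNN:
    """One Gaussian kernel per stored exemplar, averaged per class; fitting just stores the data."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        self.classes = np.unique(self.y)
        return self

    def predict(self, X):
        X = np.asarray(X)
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)   # squared distances
        k = np.exp(-d2 / (2 * self.sigma ** 2))
        scores = np.stack([k[:, self.y == c].mean(1) for c in self.classes], axis=1)
        return self.classes[np.argmax(scores, axis=1)]

def select_exemplars(X, y, per_class=10):
    """Partition-clustering-style data selection: keep cluster centroids of each class."""
    Xs, ys = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=per_class, n_init=10, random_state=0).fit(X[y == c])
        Xs.append(km.cluster_centers_)
        ys.extend([c] * per_class)
    return np.vstack(Xs), np.array(ys)
```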


Author(s):  
T. G.B. Amaral ◽  
M. M. Crisostomo ◽  
V. Fernao Pires

This chapter describes the application of a general regression neural network (GRNN) to control the flight of a helicopter. The GRNN is an adaptive network that provides estimates of continuous variables and is a one-pass learning algorithm with a highly parallel structure. Even with sparse data in a multidimensional measurement space, the algorithm provides smooth transitions from one observed value to another. An important reason for using the GRNN as a controller is its fast learning capability and non-iterative process. The disadvantage of this neural network is the amount of computation required to produce an estimate, which can become large if many training instances are gathered. To overcome this problem, a clustering algorithm is described that produces representative exemplars from groups of training instances that are close to one another, reducing the amount of computation needed to obtain an estimate. The reduction of the training data used by the GRNN makes it possible to separate the obtained representative exemplars, for example, into two data sets for coarse and fine control. Experiments are performed to determine the degradation of the clustering algorithm's performance with less training data. In the flight control system, the training data is also reduced to obtain faster controllers while maintaining the desired performance.
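The GRNN estimate itself is a kernel-weighted average of stored targets, which is why learning is one-pass but prediction cost grows with the number of training instances. A minimal sketch of the estimate and of clustering-based exemplar reduction follows; the kernel width, cluster count, and per-cluster target averaging are illustrative assumptions, not the chapter's controller code.

```python
# Sketch of a GRNN estimate plus exemplar reduction via clustering.
import numpy as np
from sklearn.cluster import KMeans

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """GRNN: kernel-weighted average of stored targets (one-pass, non-iterative learning)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def reduce_exemplars(X_train, y_train, n_clusters=50):
    """Replace nearby training instances with representative exemplars to cut prediction cost."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_train)
    y_rep = np.array([y_train[km.labels_ == k].mean() for k in range(n_clusters)])
    return km.cluster_centers_, y_rep
```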


2017 ◽  
pp. 1427-1436
Author(s):  
Gaurav Vivek Bhalerao ◽  
Niranjana Sampathila

The corpus callosum is the largest white matter structure in the brain; it connects the two cerebral hemispheres and facilitates inter-hemispheric communication. Abnormal corpus callosum anatomy has been revealed in various brain-related diseases. As it is an important biomarker, magnetic resonance imaging of the brain followed by corpus callosum segmentation and feature extraction has been found to be important for the diagnosis of many neurological diseases. This paper focuses on the classification of T1-weighted mid-sagittal MR brain images of dementia patients. The corpus callosum is segmented using the K-means clustering algorithm, and corresponding shape-based measurements are used as features. Based on these shape-based measurements, a back-propagation neural network is trained separately for the male and female datasets. The input data consist of 54 female and 31 male patients. This paper reports classification accuracy of up to 92% for female patients and 94% for male patients using the neural network classifier.
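A minimal sketch of the feature pipeline described above, under stated assumptions: K-means intensity clustering segments a mid-sagittal slice, shape measurements are taken from the largest segmented region, and an MLP (back-propagation network) would be trained per sex. The choice of the brightest cluster and of the four shape features is illustrative, not taken from the paper.

```python
# Sketch: K-means segmentation, shape features, and an MLP classifier.
import numpy as np
from skimage.measure import label, regionprops
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def shape_features(slice_2d, n_clusters=3):
    """Segment by intensity clustering and return simple shape measurements of the largest region."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(slice_2d.reshape(-1, 1)).reshape(slice_2d.shape)
    mask = labels == np.argmax(km.cluster_centers_.ravel())      # brightest cluster (assumption)
    regions = regionprops(label(mask))
    r = max(regions, key=lambda p: p.area)                       # largest connected component
    return [r.area, r.perimeter, r.eccentricity, r.solidity]

# Hypothetical usage, one classifier per sex as in the paper:
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
# clf.fit(np.array([shape_features(s) for s in slices]), diagnoses)
```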


2019 ◽  
Vol 9 (19) ◽  
pp. 4036 ◽  
Author(s):  
You ◽  
Wu ◽  
Lee ◽  
Liu

Multi-class classification is a very important technique in engineering applications, e.g., mechanical systems, mechanics and design innovations, applied materials in nanotechnologies, etc. A large amount of research has been done on single-label classification, where objects are associated with a single category. However, in many application domains, an object can belong to two or more categories, and multi-label classification is needed. Traditionally, statistical methods were used; recently, machine learning techniques, in particular neural networks, have been proposed to solve the multi-class classification problem. In this paper, we develop radial basis function (RBF)-based neural network schemes for single-label and multi-label classification, respectively. The number of hidden nodes and the parameters involved with the basis functions are determined automatically by applying an iterative self-constructing clustering algorithm to the given training dataset, and biases and weights are derived optimally by least squares. Dimensionality reduction techniques are adopted and integrated to help reduce the overfitting problem associated with the RBF networks. Experimental results from benchmark datasets are presented to show the effectiveness of the proposed schemes.
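A minimal sketch of the RBF scheme described above, with KMeans standing in for the paper's iterative self-constructing clustering: cluster centers define the Gaussian hidden units, and output weights and biases are solved by least squares. The width heuristic and hidden-node count are assumptions.

```python
# Sketch of an RBF network: clustered centers, Gaussian basis functions, least-squares weights.
import numpy as np
from sklearn.cluster import KMeans

def rbf_design(X, centers, widths):
    """Gaussian basis activations for every sample, plus a bias column."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * widths[None, :] ** 2))
    return np.hstack([Phi, np.ones((len(X), 1))])

def fit_rbf(X, Y, n_hidden=20):
    """Y is one-hot for single-label or multi-hot for multi-label classification."""
    km = KMeans(n_clusters=n_hidden, n_init=10, random_state=0).fit(X)
    widths = np.full(n_hidden, X.std())                          # simple width heuristic (assumption)
    W, *_ = np.linalg.lstsq(rbf_design(X, km.cluster_centers_, widths), Y, rcond=None)
    return km.cluster_centers_, widths, W

def predict_rbf(X, centers, widths, W):
    return rbf_design(X, centers, widths) @ W                    # argmax or threshold downstream
```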


2020 ◽  
Vol 13 (3) ◽  
pp. 261-282
Author(s):  
Mohammad Khalid Pandit ◽  
Roohie Naaz Mir ◽  
Mohammad Ahsan Chishti

Purpose: The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data it generates in an ultra-low-latency environment. The computational latency incurred by a cloud-only solution can be brought down significantly by the fog computing layer, which offers a computing infrastructure that minimizes latency in service delivery and execution. For this purpose, a task scheduling policy based on reinforcement learning (RL) is developed that can achieve optimal resource utilization as well as minimum task execution time and significantly reduce communication costs during distributed execution.
Design/methodology/approach: To realize this, the authors proposed a two-level neural network (NN)-based task scheduling system, where the first-level NN (a feed-forward neural network/convolutional neural network [FFNN/CNN]) determines whether a data stream can be analyzed (executed) in the resource-constrained environment (edge/fog) or should be forwarded directly to the cloud. The second-level NN (the RL module) schedules all the tasks sent by the level-1 NN to the fog layer among the available fog devices. This real-time task assignment policy is used to minimize the total computational latency (makespan) as well as communication costs.
Findings: Experimental results indicated that the RL technique works better than the computationally infeasible greedy approach for task scheduling, and the combination of RL and the task clustering algorithm reduces communication costs significantly.
Originality/value: The proposed algorithm fundamentally solves the problem of task scheduling in real-time fog-based IoT with the best resource utilization, minimum makespan and minimum communication cost between tasks.
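As a rough illustration of the second-level (RL) scheduler idea, the sketch below assigns incoming tasks to fog devices with tabular Q-learning. The state encoding, reward signal, and device model are illustrative assumptions, not the authors' formulation.

```python
# Sketch of a tabular Q-learning task scheduler for a small set of fog devices.
import numpy as np

class QScheduler:
    def __init__(self, n_devices, n_load_levels=5, lr=0.1, gamma=0.9, eps=0.1):
        # One Q-value per (discretized load state, device) pair.
        self.Q = np.zeros((n_load_levels ** n_devices, n_devices))
        self.n_devices, self.n_load_levels = n_devices, n_load_levels
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def state(self, loads):
        """Discretize per-device queue lengths into a single table index."""
        idx = 0
        for load in loads:
            idx = idx * self.n_load_levels + min(int(load), self.n_load_levels - 1)
        return idx

    def choose(self, s):
        if np.random.rand() < self.eps:
            return np.random.randint(self.n_devices)   # explore
        return int(np.argmax(self.Q[s]))               # exploit

    def update(self, s, a, reward, s_next):
        # Reward would typically be the negative task completion time (makespan contribution).
        target = reward + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.lr * (target - self.Q[s, a])
```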

