Accelerating FCM neural network classifier using graphics processing units with CUDA

2013 ◽ Vol 40 (1) ◽ pp. 143-153
Author(s): Lin Wang, Bo Yang, Yuehui Chen, Zhenxiang Chen, Hongwei Sun
2020 ◽ Vol 2 (1) ◽ pp. 29-36
Author(s): M. I. Zghoba, Yu. I. Hrytsiuk

This paper considers the particulars of training a neural network to forecast taxi passenger demand on graphics processing units, which made it possible to speed up the training procedure for different input datasets and hardware configurations of varying power. Taxi services are becoming accessible to an ever wider range of people. The most important task for any transportation company and taxi driver is to minimize the waiting time for new orders and to minimize the distance between drivers and passengers when an order is received. Achieving this goal requires understanding and assessing geographical passenger demand, which depends on many factors. The paper describes an example of training a neural network to predict taxi passenger demand and shows the importance of a large input dataset for the accuracy of the network. Since training a neural network is a lengthy process, parallel training was used to speed it up. The network was trained on different hardware configurations: one CPU, one GPU, and two GPUs. The time to train one epoch was compared across these configurations, and the impact of each configuration on training time was analyzed. The network was trained on a dataset containing 4.5 million trips within one city. The results show that training with GPU accelerators does not necessarily reduce training time; the training time depends on many factors, such as the size of the input dataset, how the dataset is split into smaller subsets, and the hardware and its power characteristics.
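The abstract does not include the authors' code. As an illustration only, the sketch below shows one common way to compare single-device and multi-GPU training of a small demand-forecasting model using TensorFlow's MirroredStrategy; the model layout, feature count, batch size, and synthetic data are hypothetical placeholders, not the configuration used in the paper.

```python
import time
import numpy as np
import tensorflow as tf

def build_model(n_features: int) -> tf.keras.Model:
    # Small fully connected regressor; the real architecture is not given in the abstract.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),  # predicted demand for one zone/time slot
    ])

def time_one_epoch(strategy: tf.distribute.Strategy, x, y, batch_size=1024) -> float:
    # Build and compile the model under the chosen distribution strategy,
    # then measure wall-clock time for a single training epoch.
    with strategy.scope():
        model = build_model(x.shape[1])
        model.compile(optimizer="adam", loss="mse")
    start = time.time()
    model.fit(x, y, batch_size=batch_size, epochs=1, verbose=0)
    return time.time() - start

# Synthetic stand-in data; the study used roughly 4.5 million real trips.
x = np.random.rand(100_000, 16).astype("float32")
y = np.random.rand(100_000, 1).astype("float32")

single = tf.distribute.OneDeviceStrategy("/cpu:0")   # one device (CPU here)
mirrored = tf.distribute.MirroredStrategy()          # replicates across all visible GPUs

print("single device:", time_one_epoch(single, x, y), "s")
print("mirrored GPUs:", time_one_epoch(mirrored, x, y), "s")
```

With small models or small batches, the synchronization overhead of the mirrored setup can outweigh the extra compute, which is consistent with the paper's observation that adding GPUs does not automatically shorten an epoch.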


Electronics ◽ 2019 ◽ Vol 8 (12) ◽ pp. 1479
Author(s): Michael Losh, Daniel Llamocca

Modern massively parallel Graphics Processing Units (GPUs) and Machine Learning (ML) frameworks enable neural network implementations of unprecedented performance and sophistication. However, state-of-the-art GPU hardware platforms are extremely power-hungry, while general-purpose microprocessors cannot meet the performance requirements. Biologically inspired Spiking Neural Networks (SNN) have inherent characteristics that lead to lower power consumption. We therefore present a bit-serial SNN-like hardware architecture. By using counters, comparators, and an indexing scheme, the design effectively implements the sum-of-products inherent in neurons. In addition, we experimented with various strength-reduction methods to lower the network's resource usage. The proposed Spiking Hybrid Network (SHiNe), validated on an FPGA, achieves reasonable performance with low resource utilization, with some trade-off in hardware throughput and signal representation.
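The paper's RTL is not reproduced here. As a rough behavioural illustration only, the following sketch mimics how a counter-and-comparator neuron can stand in for a multiply-accumulate: each input contributes a number of unit pulses proportional to its value, pulses are tallied into an accumulator one weight at a time, and a comparator against a threshold decides whether the neuron fires. The weights, counts, and threshold are invented for the example and are not taken from the SHiNe design.

```python
def spiking_neuron(spike_counts, weights, threshold):
    """Behavioural stand-in for a counter-based sum-of-products.

    spike_counts: number of input pulses per synapse during one time window
    weights:      small integer synaptic weights (strength-reduced)
    threshold:    comparator level that decides whether the neuron fires
    """
    accumulator = 0
    for count, weight in zip(spike_counts, weights):
        # Instead of a hardware multiplier, every input pulse increments the
        # accumulator 'weight' times, as a serial counter chain would.
        for _ in range(count):
            accumulator += weight
    return int(accumulator >= threshold)  # comparator: fire (1) or stay silent (0)

# Hypothetical example: three synapses observed over one window.
# 3*2 + 0*4 + 5*1 = 11 >= 10, so the neuron fires.
print(spiking_neuron(spike_counts=[3, 0, 5], weights=[2, 4, 1], threshold=10))
```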


1997 ◽ Vol 36 (04/05) ◽ pp. 349-351
Author(s): H. Mizuta, K. Kawachi, H. Yoshida, K. Iida, Y. Okubo, ...

Abstract: This paper compares two classifiers, Pseudo Bayesian and Neural Network, for assisting in the diagnosis of psychiatric patients based on a simple yes/no questionnaire administered at the outpatient's first visit to the hospital. The classifiers categorize patients into the three most commonly seen ICD classes, i.e. schizophrenic, emotional, and neurotic disorders. One hundred completed questionnaires were used to construct and evaluate the classifiers. Average correct decision rates were 73.3% for the Pseudo Bayesian classifier and 77.3% for the Neural Network classifier. These rates were higher than the rate achieved by an experienced psychiatrist working from the same restricted data that the classifiers used. These classifiers may be effectively utilized for assisting psychiatrists in making their final diagnoses.
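The paper does not provide code, and its Pseudo Bayesian formulation is not reproduced here. As a loose illustration of the general approach, the sketch below fits a standard Bernoulli naive Bayes model to binary yes/no answers and estimates a correct-decision rate by cross-validation; the questionnaire length, random data, and class labels are made up for the example.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: 100 patients x 30 yes/no answers, three diagnosis classes
# (0 = schizophrenic, 1 = emotional, 2 = neurotic disorders).
X = rng.integers(0, 2, size=(100, 30))
y = rng.integers(0, 3, size=100)

clf = BernoulliNB()                         # naive Bayes for binary features
scores = cross_val_score(clf, X, y, cv=5)   # per-fold fraction of correct decisions
print("mean correct-decision rate:", scores.mean())
```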


Author(s): M. Madhumalini, T. Meera Devi

The article has been withdrawn on the request of the authors and the editor of the journal Current Signal Transduction Therapy. Bentham Science apologizes to the readers of the journal for any inconvenience this may have caused. BENTHAM SCIENCE DISCLAIMER: It is a condition of publication that manuscripts submitted to this journal have not been published and will not be simultaneously submitted or published elsewhere. Furthermore, any data, illustration, structure or table that has been published elsewhere must be reported, and copyright permission for reproduction must be obtained. Plagiarism is strictly forbidden, and by submitting the article for publication the authors agree that the publishers have the legal right to take appropriate action against the authors, if plagiarism or fabricated information is discovered. By submitting a manuscript the authors agree that the copyright of their article is transferred to the publishers, if and when the article is accepted for publication.

