Design of Neural Networks Based on Wave-Parallel Computing Technique

Author(s):  
Yasushi Yuminaka ◽  
Yoshisato Sasaki ◽  
Takafumi Aoki ◽  
Tatsuo Higuchi

2004 ◽
Vol 46 (4) ◽  
Author(s):  
Jürgen Becker

Summary: The paper addresses readers from information technology, electrical engineering, computer science, and related areas. It gives an introduction to and classification of fine-, coarse-, and multi-grain reconfigurable architectures. This data-stream-based and transport-triggered parallel computing technique, in combination with dynamic and partial reconfiguration features, offers promising perspectives for future CMOS-based microelectronic solutions in multimedia and infotainment, mobile communication, and automotive application domains, among others.


2012 ◽  
Vol 2012 ◽  
pp. 1-13
Author(s):  
Chao Dong ◽  
Lianfang Tian

Benefiting from the kernel trick and its sparsity property, the relevance vector machine (RVM) can obtain a sparse solution with generalization ability comparable to that of the support vector machine. Sparsity greatly reduces prediction time, making the RVM a promising candidate for classifying large-scale hyperspectral images. However, the RVM is not widely used because of its slow training procedure. To address this problem, this paper accelerates RVM-based classification of hyperspectral images with parallel computing techniques. Parallelism is exploited at the levels of the multiclass strategy, the ensemble of multiple weak classifiers, and the matrix operations. The parallel RVMs are implemented in C using the parallel routines of linear algebra packages and the Message Passing Interface (MPI) library. The proposed methods are evaluated on the AVIRIS Indian Pines data set on a Beowulf cluster and on multicore platforms. The results show that the parallel RVMs significantly accelerate the training procedure.
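A minimal sketch of the multiclass-level parallelism described above, assuming a one-vs-rest decomposition in which each MPI rank trains a disjoint subset of the binary RVMs; train_binary_rvm() and the class count are illustrative placeholders, not code from the paper.

```c
/* Hedged sketch: parallelizing one-vs-rest RVM training over MPI ranks.
 * train_binary_rvm() is a placeholder for the sparse Bayesian learning
 * loop (kernel matrix construction, hyperparameter updates, pruning). */
#include <mpi.h>
#include <stdio.h>

#define NUM_CLASSES 16   /* e.g. land-cover classes of the Indian Pines scene */

static int train_binary_rvm(int class_id)
{
    /* Stub only: a real implementation would iterate RVM evidence
     * maximization for "class_id vs. rest" and return the number of
     * relevance vectors kept after pruning. */
    return 10 + class_id;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Multiclass-level parallelism: rank r trains classifiers r, r+size, ... */
    for (int k = rank; k < NUM_CLASSES; k += size) {
        int rvs = train_binary_rvm(k);
        printf("rank %d: class %d trained, %d relevance vectors\n", rank, k, rvs);
    }

    MPI_Barrier(MPI_COMM_WORLD);  /* all binary models ready before prediction */
    MPI_Finalize();
    return 0;
}
```

The same partitioning idea could be nested with the other two levels mentioned in the abstract, for example by calling threaded linear algebra routines inside each rank.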


Processes ◽  
2020 ◽  
Vol 8 (10) ◽  
pp. 1281
Author(s):  
Xiu Yin ◽  
Xiyu Liu

In biological neural networks, neurons transmit chemical signals through synapses, and transmission involves multiple ion channels. Moreover, synapses are divided into inhibitory and excitatory synapses. The firing mechanism of previous spiking neural P (SNP) systems and their variants essentially corresponds to excitatory synapses, while the function of inhibitory synapses is rarely reflected in these systems. To more fully simulate how neurons communicate through synapses, this paper proposes a dynamic threshold neural P system with inhibitory rules and multiple channels (DTNP-MCIR systems). DTNP-MCIR systems represent a distributed parallel computing model. We prove that DTNP-MCIR systems are Turing universal as number generating/accepting devices. In addition, we design a small universal DTNP-MCIR system with 73 neurons as a function computing device.
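As a rough illustration only, the toy C sketch below models one neuron with a dynamic threshold, an excitatory rule and an inhibitory rule on two output channels; the semantics are deliberately simplified and do not reproduce the formal DTNP-MCIR definition from the paper.

```c
/* Toy sketch of a dynamic-threshold neuron with an excitatory and an
 * inhibitory rule on two channels. Heavily simplified for illustration. */
#include <stdio.h>

typedef struct {
    int spikes;     /* current number of spikes in the neuron */
    int threshold;  /* dynamic firing threshold */
} Neuron;

/* Fire if the spike count reaches the threshold. An excitatory rule emits
 * a spike on `channel`; an inhibitory rule removes a spike from the target.
 * After firing, the threshold is updated (here simply incremented). */
static void step(Neuron *n, Neuron *target, int channel, int inhibitory)
{
    if (n->spikes < n->threshold)
        return;                                /* rule not applicable */
    n->spikes -= n->threshold;                 /* consume spikes */
    if (!inhibitory)
        target->spikes += 1;                   /* excitatory: add a spike */
    else
        target->spikes = target->spikes > 0 ? target->spikes - 1 : 0;
    n->threshold += 1;                         /* dynamic threshold update */
    printf("fired on channel %d (%s)\n", channel,
           inhibitory ? "inhibitory" : "excitatory");
}

int main(void)
{
    Neuron a = { .spikes = 3, .threshold = 2 };
    Neuron b = { .spikes = 1, .threshold = 2 };
    step(&a, &b, 1, 0);   /* excitatory rule on channel 1 */
    step(&a, &b, 2, 1);   /* inhibitory rule on channel 2 (may not fire) */
    printf("a: %d spikes, b: %d spikes\n", a.spikes, b.spikes);
    return 0;
}
```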


2012 ◽  
Vol 24 (9) ◽  
pp. 2225-2229
Author(s):  
彭凯 Peng Kai ◽  
夏蒙重 Xia Mengzhong ◽  
刘大刚 Liu Dagang ◽  
周俊 Zhou Jun

2011 ◽  
Vol 271-273 ◽  
pp. 1023-1028
Author(s):  
Xi Huang ◽  
Ping Wang ◽  
Zong Huang Weng ◽  
Xiao Zhang ◽  
Wei Han Zhong

In this paper, a pipelined array of neurons based on a micro-program controller is proposed as the control circuit for a BP (back-propagation) network implementation. New instructions can be added without changing the hardware circuit, so the design meets the parallel computing requirements of BP neural network applications and enhances the flexibility of the hardware.


2013 ◽  
Vol 380-384 ◽  
pp. 1571-1575
Author(s):  
Hong Chen ◽  
Hu Xing Zhou ◽  
Juan Meng

To address the problem that a central guidance system takes too long to compute the shortest routes between all node pairs of a network, which cannot meet the real-time demand of central guidance, this paper presents a parallel route optimization method for central guidance based on parallel computing techniques that considers both route optimization time and traveler preferences. The method consists of three parts: array-based network data storage, multi-level network decomposition that takes traveler preferences into account, and deque-based parallel shortest route computation using message passing. Based on the actual traffic network data of Guangzhou, the proposed method is verified on three parallel computing platforms: an ordinary PC cluster, a Lenovo server cluster, and an HP workstation cluster. The results show that the three clusters compute the 21.4 million routes between the 5631 nodes of the Guangzhou traffic network in 215, 189, and 177 seconds, respectively, which fully meets the real-time demand of central guidance.
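To illustrate the source-partitioned, deque-based parallel shortest-route idea, the sketch below assigns source nodes to MPI ranks and runs a deque-based label-correcting (SPFA-style) search per source. The five-node graph and all names are illustrative assumptions, not the paper's implementation or data.

```c
/* Hedged sketch: all-pairs shortest routes partitioned by source node
 * across MPI ranks; each source is solved with a deque-based
 * label-correcting search (SLF heuristic). Toy graph for illustration. */
#include <mpi.h>
#include <stdio.h>
#include <limits.h>

#define N 5                          /* nodes in the toy network */
static const int w[N][N] = {         /* adjacency matrix, 0 = no edge */
    {0, 4, 1, 0, 0},
    {0, 0, 0, 3, 0},
    {0, 2, 0, 8, 0},
    {0, 0, 0, 0, 2},
    {0, 0, 0, 0, 0},
};

/* Deque-based shortest paths from `src` into dist[]. */
static void spfa_deque(int src, int dist[N])
{
    int deque[N * N], head = N * N / 2, tail = head;  /* head starts mid-array */
    int in_queue[N] = {0};
    for (int i = 0; i < N; i++) dist[i] = INT_MAX;
    dist[src] = 0;
    deque[tail++] = src;
    in_queue[src] = 1;
    while (head < tail) {
        int u = deque[head++];
        in_queue[u] = 0;
        for (int v = 0; v < N; v++) {
            if (w[u][v] == 0 || dist[u] == INT_MAX) continue;
            if (dist[u] + w[u][v] < dist[v]) {
                dist[v] = dist[u] + w[u][v];
                if (!in_queue[v]) {
                    /* SLF heuristic: promising nodes go to the front */
                    if (head > 0 && head < tail && dist[v] < dist[deque[head]])
                        deque[--head] = v;
                    else
                        deque[tail++] = v;
                    in_queue[v] = 1;
                }
            }
        }
    }
}

int main(int argc, char **argv)
{
    int rank, size, dist[N];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank handles a disjoint subset of source nodes. */
    for (int src = rank; src < N; src += size) {
        spfa_deque(src, dist);
        printf("rank %d: shortest distances from node %d computed\n", rank, src);
    }

    MPI_Finalize();
    return 0;
}
```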

