Development of a Genetic Method for X-ray Images Analysis based on a Neural Network Model

2021 ◽  
Vol 14 (1) ◽  
pp. 51-62
Author(s):  
Ievgen Fedorchenko ◽  
Andrii Oliinyk ◽  
Alexander Stepanenko ◽  
Tetiana Fedoronchak ◽  
Anastasiia Kharchenko ◽  
...  

Background: Modern medicine depends on technical advances in medical instrumentation and the development of medical software. One of the most important tasks for doctors is determining the exact boundaries of tumors and other abnormal formations in the tissues of the human body. Objective: The paper considers the problems and methods of machine classification and recognition of radiographic images, as well as the improvement of artificial neural networks used to increase the quality and accuracy of detecting abnormal structures on chest radiographs. Methods: A modified genetic method for optimizing the parameters of a convolutional neural network model was developed to solve the problem of recognizing diagnostically significant signs of pneumonia on a lung X-ray. The fundamental difference between the proposed genetic method and existing analogs lies in the use of a special mutation operator in the form of an additive convolution of two mutation operators, which reduces neural network training time and also identifies the "neighborhood of solutions" most suitable for investigation. Results: A comparative evaluation of the effectiveness of the proposed method against known methods showed an improvement in the accuracy of finding signs of pathology on a lung X-ray. Conclusion: Practical use of the developed method will reduce complexity, increase the reliability of the search, accelerate the diagnosis of diseases, and reduce errors and repeated examinations of patients.
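The "additive convolution of two mutation operators" can be sketched roughly as a weighted blend of a local and a global mutation. The concrete operators (Gaussian and uniform mutation) and the weighting scheme below are illustrative assumptions, not the paper's exact definitions.

```python
import random

def gaussian_mutation(genome, sigma=0.1):
    """Local search: perturb each gene with small Gaussian noise."""
    return [g + random.gauss(0.0, sigma) for g in genome]

def uniform_mutation(genome, low=-1.0, high=1.0, rate=0.1):
    """Global exploration: occasionally replace genes with fresh values."""
    return [random.uniform(low, high) if random.random() < rate else g
            for g in genome]

def additive_convolution_mutation(genome, alpha=0.7):
    """Blend the two operators gene by gene: alpha weights the local
    operator, (1 - alpha) the global one."""
    local = gaussian_mutation(genome)
    glob = uniform_mutation(genome)
    return [alpha * l + (1.0 - alpha) * g for l, g in zip(local, glob)]

genome = [0.5, -0.2, 0.8]            # e.g. encoded CNN hyperparameters
child = additive_convolution_mutation(genome)
print(len(child))                     # 3: genome length is preserved
```

A genetic method would apply this combined operator inside its usual select-crossover-mutate loop; the blend lets one coefficient trade off local refinement against exploration of new solution neighborhoods.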

2017 ◽  
Vol 109 (1) ◽  
pp. 29-38 ◽  
Author(s):  
Valentin Deyringer ◽  
Alexander Fraser ◽  
Helmut Schmid ◽  
Tsuyoshi Okita

Abstract Neural networks are prevalent in today's NLP research. Despite their success on different tasks, training time is relatively long. We use Hogwild! to counteract this and show that it is a suitable method for speeding up the training of neural networks of different architectures and complexity. For POS tagging and translation we report considerable training speedups, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.
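The core of Hogwild! is asynchronous SGD in which worker threads update shared parameters without any locking; because updates are sparse and mostly non-overlapping, the occasional race is tolerable. A minimal sketch on a toy linear model (the model, learning rate, and step counts are assumptions for illustration):

```python
import random
import threading

# Shared weights for a linear model y ≈ w0 + w1 * x, updated by several
# threads WITHOUT locks -- the Hogwild! idea.
weights = [0.0, 0.0]

def sgd_worker(data, lr=0.05, steps=500):
    for _ in range(steps):
        x, y = random.choice(data)
        err = (weights[0] + weights[1] * x) - y
        weights[0] -= lr * err          # lock-free write
        weights[1] -= lr * err * x      # lock-free write

data = [(x / 2.0, 1.0 + x) for x in range(5)]   # exactly y = 1 + 2x
threads = [threading.Thread(target=sgd_worker, args=(data,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# weights should approach [1.0, 2.0] despite the unsynchronized updates
```

Real implementations run workers as OS threads or processes over shards of the corpus; the point of the sketch is only that no mutex guards the parameter vector.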


2013 ◽  
Vol 380-384 ◽  
pp. 2915-2919 ◽  
Author(s):  
Jian Ming Cui ◽  
Yan Xin Ye

Traditional massive data mining with the BP neural network algorithm suffers from the resource constraints and scalability bottlenecks of ordinary stand-alone platforms, and the serialized classification process makes classification inefficient and also affects classification accuracy. This paper gives a detailed description of the execution flow of a parallel BP neural network algorithm in Hadoop's MapReduce programming model. Experimental results show that the BP neural network on a cloud computing platform can greatly shorten network training time, with better parallel efficiency and good scalability.
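The MapReduce pattern for such training is: mappers compute partial gradients on their data shards, a reducer averages them, and the driver applies the update each epoch. A minimal sketch, using a single linear neuron as a stand-in for the BP network (the shard layout and learning rate are illustrative assumptions):

```python
# Map step: each mapper computes the mean gradient of a one-neuron
# model on its own data shard.
def map_gradients(shard, w, b):
    gw = gb = 0.0
    for x, y in shard:
        err = (w * x + b) - y        # linear activation for brevity
        gw += err * x
        gb += err
    return gw / len(shard), gb / len(shard), len(shard)

# Reduce step: combine partial gradients, weighted by shard size.
def reduce_gradients(partials):
    total = sum(n for _, _, n in partials)
    gw = sum(g * n for g, _, n in partials) / total
    gb = sum(g * n for _, g, n in partials) / total
    return gw, gb

shards = [[(0.0, 1.0), (1.0, 3.0)],
          [(2.0, 5.0), (3.0, 7.0)]]   # data follows y = 2x + 1
w, b = 0.0, 0.0
for _ in range(2000):                 # driver: one MapReduce job per epoch
    gw, gb = reduce_gradients([map_gradients(s, w, b) for s in shards])
    w, b = w - 0.05 * gw, b - 0.05 * gb
print(round(w, 2), round(b, 2))       # → 2.0 1.0
```

In Hadoop the two functions would be the Mapper and Reducer classes and the driver loop would submit one job per training epoch, which is exactly why per-epoch job overhead matters for parallel efficiency.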


Author(s):  
Mohamed Elgendi ◽  
Rich Fletcher ◽  
Newton Howard ◽  
Carlo Menon ◽  
Rabab Ward

High-resolution computed tomography radiology is a critical tool in the diagnosis and management of COVID-19 infection; however, in smaller clinics around the world, there is a shortage of radiologists available to analyze these images. In this paper, we compare the performance of 16 available deep learning algorithms to help identify COVID-19. We utilize an already existing diagnostic technology (X-ray) and an already existing neural network (ResNet-50) to diagnose COVID-19. Our approach eliminates the extra time and resources needed to develop a new technology and an associated algorithm, thus aiding the front line in the race against the COVID-19 pandemic. Results show that ResNet-50 is the optimal pretrained neural network for the detection of COVID-19, using three different cross-validation ratios, based on training time, accuracy, and network size. We also present a custom visualization of the results that can be used to highlight important visual biomarkers of the disease and disease progression.


2020 ◽  
Vol 2 (1) ◽  
pp. 29-36
Author(s):  
M. I. Zghoba ◽  
Yu. I. Hrytsiuk

The peculiarities of neural network training for forecasting taxi passenger demand using graphics processing units are considered, which allowed the training procedure to be sped up for different sets of input data, hardware configurations, and levels of computing power. Taxi services are becoming accessible to an ever wider range of people. The most important task for any transportation company and taxi driver is to minimize the waiting time for new orders and to minimize the distance from drivers to passengers when an order is received. Understanding and assessing the geographical passenger demand, which depends on many factors, is crucial to achieving this goal. This paper describes an example of neural network training for predicting taxi passenger demand and shows the importance of a large input dataset for the accuracy of the neural network. Since training a neural network is a lengthy process, parallel training was used to speed it up. The neural network for forecasting taxi passenger demand was trained using different hardware configurations: one CPU, one GPU, and two GPUs. The training times for one epoch were compared across these configurations, and the impact of different hardware configurations on training time was analyzed. The network was trained using a dataset containing 4.5 million trips within one city. The results of this study show that training with GPU accelerators does not necessarily improve training time; it depends on many factors, such as input dataset size, the splitting of the dataset into smaller subsets, and hardware and power characteristics.


Author(s):  
Jim Torresen ◽  
Shin-ichiro Mori ◽  
Hiroshi Nakashima ◽  
Shinji Tomita ◽  
Olav Landsverk

Author(s):  
Yasufumi Sakai ◽  
Yutaka Tamiya

Abstract Recent advances in deep neural networks have achieved higher accuracy with more complex models. Nevertheless, they require much longer training time. To reduce training time, training methods using quantized weights, activations, and gradients have been proposed. Neural network calculation in integer format improves the energy efficiency of hardware for deep learning models; therefore, training methods for deep neural networks in fixed point format have been proposed. However, the narrow data representation range of the fixed point format degrades neural network accuracy. In this work, we propose a new fixed point format named shifted dynamic fixed point (S-DFP) to prevent accuracy degradation in quantized neural network training. S-DFP can change the data representation range of the dynamic fixed point format by adding a bias to the exponent. We evaluated the effectiveness of S-DFP for quantized neural network training on the ImageNet task using ResNet-34, ResNet-50, ResNet-101 and ResNet-152. For example, the accuracy of quantized ResNet-152 is improved from 76.6% with conventional 8-bit DFP to 77.6% with 8-bit S-DFP.
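Dynamic fixed point stores each tensor as integer mantissas plus one shared power-of-two scale; the abstract's S-DFP idea is to add a bias to that shared exponent to shift the representable range. A minimal sketch (the rounding and clamping details below are assumptions, not the paper's exact scheme):

```python
import math

def quantize_dfp(values, bits=8, bias=0):
    """Quantize floats to (shifted) dynamic fixed point: one shared
    power-of-two scale plus an integer mantissa per value. S-DFP adds
    `bias` to the shared exponent to shift the representable range."""
    qmax = 2 ** (bits - 1) - 1                  # 127 for 8-bit mantissas
    vmax = max(abs(v) for v in values)
    _, exp = math.frexp(vmax)                   # smallest int e with 2**e > vmax
    exp += bias                                 # S-DFP: biased exponent
    scale = 2.0 ** exp / (qmax + 1)
    ints = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return [i * scale for i in ints], exp

grads = [0.013, -0.002, 0.041]                  # e.g. small training gradients
q0, e0 = quantize_dfp(grads)                    # plain DFP (bias = 0)
q2, e2 = quantize_dfp(grads, bias=2)            # 4x wider range, coarser steps
```

The bias trades resolution for range: a positive bias widens the representable range at the cost of a larger quantization step, which is the knob the paper uses to keep gradients representable during training.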


2011 ◽  
Vol 135-136 ◽  
pp. 126-131 ◽  
Author(s):  
Hong Ke Xu ◽  
Wei Song Yang ◽  
Jian Wu Fang ◽  
Chang Bao Wen ◽  
Wei Sun

The self-organizing feature map (SOFM) neural network algorithm currently used for image compression suffers from long network training times in codebook design vector calculation and from blocking effects in the reconstructed image. To address these issues, this paper proposes an improved SOFM. The new SOFM introduces the normalized distance between the sum of the input vector's components and the sum of the codeword vector's components as a constraint in the search for the winning neuron, which removes redundant Euclidean distance calculations from the competitive process. Furthermore, this paper performs image compression by combining the wavelet transform with the improved SOFM (WT & improved SOFM). The method first applies wavelet decomposition to the image, retains the low-frequency sub-band, then feeds the high-frequency sub-bands into the improved SOFM network, achieving compression. Experimental results show that this algorithm can greatly reduce network training time and enhance the learning efficiency of the neural network, while effectively improving the PSNR of reconstructed images (by 0.6 dB).
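The sum-based constraint works because, by the Cauchy-Schwarz inequality, (Σ(xᵢ − cᵢ))²/n is a lower bound on the squared Euclidean distance, so codewords whose component sums are too far from the input's sum can be skipped without computing the full distance. A sketch of the pruned winner search (the exact normalization used in the paper is an assumption here):

```python
def find_winner(x, codebook):
    """Winning-neuron search with sum-based pruning: codewords whose
    component sum differs too much from the input's sum cannot win, so
    their full Euclidean distance is never computed."""
    n = len(x)
    sx = sum(x)
    best, best_d2 = None, float("inf")
    for idx, c in enumerate(codebook):
        # (sum(x) - sum(c))**2 / n <= squared Euclidean distance
        # (Cauchy-Schwarz), so this bound prunes safely.
        bound = (sx - sum(c)) ** 2 / n
        if bound >= best_d2:
            continue                  # skip the costly full computation
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        if d2 < best_d2:
            best, best_d2 = idx, d2
    return best
```

Because the bound is a true lower bound, the pruned search always returns the same winner as an exhaustive search; it only saves the redundant distance evaluations during competition.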


Author(s):  
Hesam Karim ◽  
Sharareh R. Niakan ◽  
Reza Safdari

<span lang="EN-US">Heart disease is the leading cause of death in many countries. The artificial neural network (ANN) technique can be used to predict or classify whether patients have heart disease. There are different training algorithms for ANNs. We compared eight neural network training algorithms for the classification of heart disease data from the UCI repository, containing 303 samples. Performance measures for each algorithm, including training speed, number of epochs, accuracy, and mean square error (MSE), were obtained and analyzed. Our results showed that training time for gradient descent algorithms was longer than for the other training algorithms (8-10 seconds). In contrast, Quasi-Newton algorithms were faster than the others (under 1 second). MSE for all algorithms was between 0.117 and 0.228. While there was a significant association between training algorithm and training time (p&lt;0.05), the number of neurons in the hidden layer had no significant effect on the MSE and/or accuracy of the models (p&gt;0.05). Based on our findings, for developing an ANN classification model for heart disease, it is best to use Quasi-Newton training algorithms because of their superior speed and accuracy.</span>
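The speed gap the study reports comes from second-order curvature information: Quasi-Newton methods rescale the step by (an approximation of) the Hessian, so they need far fewer iterations than fixed-rate gradient descent. A toy one-dimensional comparison using an exact Newton step in place of a Quasi-Newton approximation (the function and learning rate are illustrative assumptions):

```python
import math

# Minimize f(w) = exp(w) - 2w, whose minimum is at w = ln 2.
def minimize(step, w=0.0, tol=1e-8, max_iter=10000):
    """Iterate the given update rule until the gradient is ~0."""
    for i in range(1, max_iter + 1):
        g = math.exp(w) - 2.0                  # f'(w)
        if abs(g) < tol:
            return w, i
        w = step(w, g)
    return w, max_iter

gd = lambda w, g: w - 0.1 * g                  # fixed learning rate
newton = lambda w, g: w - g / math.exp(w)      # divide by f''(w)

w_gd, it_gd = minimize(gd)
w_nt, it_nt = minimize(newton)
# Both reach ln 2 ~ 0.693, but the second-order method in far fewer steps.
```

Real Quasi-Newton trainers (e.g. BFGS variants) build up the curvature estimate from gradient differences instead of computing f'' directly, but the iteration-count advantage over plain gradient descent is the same effect the study measured.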


2014 ◽  
pp. 9-19
Author(s):  
V. Turchenko ◽  
C. Triki ◽  
Lucio Grandinetti ◽  
Anatoly Sachenko

The main difficulty in using neural networks to improve the accuracy of measuring physical quantities (for example, temperature, humidity, or pressure) in data acquisition systems is the insufficient volume of input data for training the predicting neural network during the initial exploitation period of the sensors. The authors have previously proposed a technique for increasing the data volume available for training the predicting neural network using the historical data integration method. In this paper we propose an enhanced historical data integration method, with simulation results on mathematical models of sensor drift using single-layer and multi-layer perceptrons. We also consider a parallelization technique for the enhanced method in order to decrease its running time. A modified coarse-grained parallel algorithm with dynamic mapping onto the processors of a parallel computing system, using neural network training time as the mapping criterion, is considered. Experiments have shown that the modified parallel algorithm is more efficient than the basic parallel algorithm with dynamic mapping, which does not use any mapping criterion.
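Using training time as the mapping criterion amounts to a load-balancing problem: assign training tasks to processors so the slowest processor finishes as early as possible. A minimal sketch using the classic longest-processing-time greedy heuristic (the authors' exact mapping algorithm is not specified here, so this is an illustrative stand-in):

```python
# Greedily map training tasks to processors by estimated training time:
# longest tasks first, each to the currently least-loaded processor.
def map_tasks(training_times, n_procs):
    loads = [0.0] * n_procs
    assignment = [[] for _ in range(n_procs)]
    for task, t in sorted(enumerate(training_times), key=lambda p: -p[1]):
        proc = min(range(n_procs), key=lambda i: loads[i])
        loads[proc] += t
        assignment[proc].append(task)
    return assignment, loads

# Estimated per-network training times (seconds) for six tasks.
times = [9.0, 2.0, 7.0, 3.0, 8.0, 1.0]
assignment, loads = map_tasks(times, 2)
print(max(loads))   # makespan: the time at which the last processor finishes
```

A mapping without any criterion (e.g. round-robin) can leave one processor with all the long tasks; weighting by estimated training time is what makes the modified algorithm finish earlier.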

