Automated selection of myocardial inversion time with a convolutional neural network: Spatial temporal ensemble myocardium inversion network (STEMI-NET)

2019 ◽  
Vol 81 (5) ◽  
pp. 3283-3291 ◽  
Author(s):  
Naeim Bahrami ◽  
Tara Retson ◽  
Kevin Blansit ◽  
Kang Wang ◽  
Albert Hsiao


2021 ◽  
Vol 11 (11) ◽  
pp. 5235
Author(s):  
Nikita Andriyanov

The article is devoted to the study of convolutional neural network inference in image processing tasks under the influence of visual attacks. Four types of attack were considered: simple attacks, the addition of white Gaussian noise, an impulse applied to a single pixel, and attacks that change brightness values within a rectangular area. The MNIST and Kaggle Dogs vs. Cats datasets were chosen. Recognition accuracy was measured as a function of the number of attacked images and the types of attacks used in training. The study was based on well-known convolutional neural network architectures used in pattern recognition tasks, such as VGG-16 and Inception_v3. The dependence of recognition accuracy on the parameters of the visual attacks was obtained. Original methods were proposed to prevent visual attacks. These methods are based on selecting classes that are “incomprehensible” to the recognizer and subsequently correcting them through neural network inference on reduced image sizes. Applying these methods yielded a 1.3-fold gain in the accuracy metric after the iteration that discards incomprehensible images, and a 4–5% reduction in uncertainty after the iteration that integrates the results of image analyses at reduced dimensions.
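Two of the attack types described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names and the noise parameters are assumptions, not the paper's code): additive white Gaussian noise over the whole image, and an impulse overwriting a single pixel.

```python
import numpy as np

def gaussian_attack(image, sigma=0.1, rng=None):
    """Additive white Gaussian noise attack on an image with values in [0, 1]."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixel values in the valid range

def one_pixel_attack(image, row, col, value=1.0):
    """Impulse attack: overwrite the brightness of a single pixel."""
    attacked = image.copy()
    attacked[row, col] = value
    return attacked
```

The brightness-rectangle attack is the same idea applied to a slice `image[r0:r1, c0:c1]` instead of one pixel.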


2019 ◽  
Author(s):  
Nguyen P. Nguyen ◽  
Jacob Gotberg ◽  
Ilker Ersoy ◽  
Filiz Bunyak ◽  
Tommi White

Selection of individual protein particles in cryo-electron micrographs is an important step in single-particle analysis. In this study, we developed a deep-learning-based method to automatically detect particle centers in cryoEM micrographs. This is a challenging task because of the low signal-to-noise ratio of cryoEM micrographs and the variations in particle size, shape, and grayscale level. We propose a double convolutional neural network (CNN) cascade for automated detection of particles in cryo-electron micrographs. Particles are detected by the first network, a fully convolutional regression network (FCRN), which maps the particle image to a continuous distance map that acts like a probability density function of particle centers. Particles identified by FCRN are then refined (or classified) by the second CNN to reduce false detections. This approach, entitled Deep Regression Picker Network or “DRPnet”, is simple but very effective in recognizing different grayscale patterns corresponding to 2D views of 3D particles. Our experiments showed that DRPnet’s first CNN, pretrained on one dataset, can be used to detect particles in a different dataset without retraining, and its performance can be further improved by retraining on specific particle datasets. The second network, a classification CNN, refines the detection results by identifying false detections. The proposed fully automated “deep regression” system, DRPnet, pretrained with TRPV1 (EMPIAR-10005) [1] and tested on β-galactosidase (EMPIAR-10017) [2] and β-galactosidase (EMPIAR-10061) [3], was then compared to RELION’s interactive particle picking. Preliminary experiments showed comparable or better particle-picking performance with drastically reduced user interaction and improved processing time.
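Turning the FCRN's continuous distance map into discrete particle centers amounts to peak detection with non-maximum suppression. The sketch below is an assumed, simplified version of that step (greedy NMS on a 2D score map); the threshold and suppression radius are illustrative parameters, not DRPnet's actual values.

```python
import numpy as np

def pick_particles(prob_map, threshold=0.5, radius=2):
    """Greedily pick local maxima from a particle-center score map,
    suppressing a (2*radius+1)-square neighbourhood around each pick."""
    picks = []
    work = prob_map.astype(float).copy()
    while True:
        idx = np.unravel_index(np.argmax(work), work.shape)
        if work[idx] < threshold:        # no peak above threshold remains
            break
        picks.append(idx)
        r0 = max(idx[0] - radius, 0)     # zero out the neighbourhood so the
        c0 = max(idx[1] - radius, 0)     # same particle is not picked twice
        work[r0:idx[0] + radius + 1, c0:idx[1] + radius + 1] = 0.0
    return picks
```

In the real pipeline the map comes from the FCRN and the picks are then passed to the second, classification CNN for false-positive removal.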


2018 ◽  
Vol 14 (10) ◽  
pp. 155014771880594 ◽  
Author(s):  
Xu Kang ◽  
Bin Song ◽  
Jie Guo ◽  
Xiaojiang Du ◽  
Mohsen Guizani

Vehicle tracking plays an important role in the Internet of Vehicles and intelligent transportation systems. Beyond the traditional Global Positioning System sensor, an image sensor can capture different kinds of vehicles, analyze their driving situation, and interact with them. Aiming at the problem that traditional convolutional neural networks are vulnerable to background interference, this article proposes a vehicle tracking method based on a human attention mechanism that self-selects deep features through an inter-channel fully connected layer. It mainly includes the following contributions: (1) a fully convolutional neural network with a fused attention mechanism that selects deep features for convolution; (2) a separation method for the template and the semantic background region that adaptively separates target vehicles from the background in the initial frame; (3) a two-stage model training method using our traffic dataset. The experimental results show that the proposed method improves tracking accuracy without increasing tracking time, and strengthens the robustness of the algorithm under complex background conditions. The success rate of the proposed method on the overall traffic dataset is about 10% higher than that of the Siamese network, and the overall precision is 8% higher.
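An inter-channel fully connected layer that gates feature channels can be sketched as squeeze-and-excitation-style attention. The code below is a hypothetical stand-in for the paper's mechanism, not its actual implementation; the weight shapes and the global-average-pool squeeze are assumptions.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """SE-style inter-channel gating.
    features: (C, H, W) feature map; w1: (hidden, C); w2: (C, hidden)."""
    squeezed = features.mean(axis=(1, 2))           # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)         # fully connected + ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # fully connected + sigmoid -> (C,)
    return features * gates[:, None, None]          # reweight each channel
```

Channels whose gate is near zero are effectively deselected, which is one way a network can suppress background-dominated feature channels.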


Author(s):  
Alfita Rakhmandasari ◽  
Wayan Firdaus Mahmudy ◽  
Titiek Yulianti

Kenaf is a fibre plant whose stem bark is used as raw material for making geo-textiles, particleboard, pulp, fibre drains, fibre board, and paper. Attacks by pests and diseases cause crop production to decrease, and detecting them can be a challenging task for farmers. The detection can be done using artificial-intelligence-based methods. Convolutional neural networks (CNNs) are one of the most popular neural network architectures and have been successfully applied to image classification. However, the CNN method is still considered slow, so it was developed into the faster region-based convolutional neural network (Faster R-CNN). As the selection of the input features largely determines the accuracy of the results, a pre-processing procedure is developed to transform the kenaf plant image into input features for Faster R-CNN. A computational experiment shows that Faster R-CNN has a very short computation time, completing 10,000 iterations in 3 hours, whereas the CNN completes only 100 iterations in the same time. Furthermore, Faster R-CNN achieves 77.50% detection accuracy and 96.74% bounding-box accuracy, while the CNN achieves 72.96% detection accuracy at 400 epochs. The results also show that the selection of input features and the pre-processing procedure can produce high detection accuracy.
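The abstract does not specify the pre-processing procedure, so the following is only a generic sketch of the kind of transform typically applied before a detector: resize to a fixed size and normalise per channel. The target size, the nearest-neighbour resize, and the normalisation scheme are all assumptions.

```python
import numpy as np

def preprocess(image, out_h=224, out_w=224):
    """Nearest-neighbour resize of an (H, W, 3) uint8 image, then
    per-channel zero-mean / unit-variance normalisation."""
    h, w, _ = image.shape
    rows = np.arange(out_h) * h // out_h            # source row per output row
    cols = np.arange(out_w) * w // out_w            # source col per output col
    resized = image[rows][:, cols].astype(np.float64) / 255.0
    mean = resized.mean(axis=(0, 1))
    std = resized.std(axis=(0, 1)) + 1e-8
    return (resized - mean) / std
```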


2020 ◽  
Vol 9 (4) ◽  
pp. 1
Author(s):  
Arman I. Mohammed ◽  
Ahmed AK. Tahir

A new optimization algorithm called Adam Merged with AMSgrad (AMAMSgrad) is modified and used to train a convolutional neural network of the Wide Residual Network type, Wide ResNet (WRN), for image classification. The modification includes the use of the second moment as in AMSgrad and the use of the Adam updating rule, but with (2) as the power of the denominator. The main aim is to improve the performance of the AMAMSgrad optimizer by a proper selection of the power of the denominator. Implementing AMAMSgrad and the two known methods (Adam and AMSgrad) on the Wide ResNet using the CIFAR-10 dataset for image classification reveals that WRN performs better with the AMAMSgrad optimizer than with Adam and AMSgrad. The training, validation, and testing accuracies are all improved with AMAMSgrad over Adam and AMSgrad, and AMAMSgrad needs fewer epochs to reach maximum performance. With AMAMSgrad, the training accuracies are (90.45%, 97.79%, 99.98%, 99.99%) at epochs (60, 120, 160, 200) respectively, while the validation accuracies for the same epochs are (84.89%, 91.53%, 95.05%, 95.23%). For testing, the WRN with AMAMSgrad provided an overall accuracy of 94.8%. All of these accuracies exceed those obtained by WRN with Adam and AMSgrad. The classification metrics indicate that the given WRN architecture performs well with all three optimizers, and with particularly high confidence with the AMAMSgrad optimizer.
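The ingredients named above — the Adam update rule, AMSgrad's running maximum of the second moment, and a tunable power of the denominator — can be combined in a single step as sketched below. This is a hypothetical reconstruction under those stated assumptions, not the paper's exact AMAMSgrad rule; `p = 0.5` recovers the usual square-root denominator.

```python
import numpy as np

def amamsgrad_like_step(theta, grad, m, v, vhat, lr=1e-3,
                        beta1=0.9, beta2=0.999, eps=1e-8, p=0.5):
    """One Adam-style step with AMSgrad's non-decreasing second moment
    and an exposed exponent p for the denominator."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    vhat = np.maximum(vhat, v)                  # AMSgrad: running max of v
    theta = theta - lr * m / (vhat ** p + eps)  # Adam-style parameter update
    return theta, m, v, vhat
```

With `vhat` replaced by `v`, this reduces to a (bias-uncorrected) Adam step; keeping the running maximum is what distinguishes AMSgrad-family updates.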


2021 ◽  
Vol 5 (4) ◽  
pp. 563
Author(s):  
Muhammad Rizky Firdaus

Fertile chicken eggs can hatch because they contain a developing embryo, visible as spots of blood and blood vessels, whereas infertile chicken eggs cannot hatch because no embryo develops during incubation. Inspection for infertile eggs must be carried out, especially by breeders who select and separate fertile eggs from infertile ones. Currently, however, this selection still uses a less effective method: visually inspecting the egg against a light through the shell, known as candling. This process is not very accurate for classifying fertile and infertile eggs, because not all breeders are able to read the candling results properly, which makes prediction errors possible. Therefore, this study classifies fertile and infertile chicken eggs from candling images using the convolutional neural network method. The classification achieved an accuracy of 98% for fertile and infertile chicken eggs, with an error of 5%.
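The core operation such a CNN applies to a candling image is a 2-D convolution over the pixel grid. As a minimal illustration (not the paper's architecture), a single-channel "valid" convolution can be written directly:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 2-D 'valid' cross-correlation: slide the kernel
    over the image and sum the elementwise products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

A full classifier stacks many such filters with nonlinearities and pooling, then maps the resulting features to the two classes (fertile / infertile).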

