Robust cell particle detection to dense regions and subjective training samples based on prediction of particle center using convolutional neural network

PLoS ONE ◽  
2018 ◽  
Vol 13 (10) ◽  
pp. e0203646 ◽  
Author(s):  
Kenshiro Nishida ◽  
Kazuhiro Hotta


2021 ◽  
Vol 38 (1) ◽  
pp. 61-71
Author(s):  
Xianrong Zhang ◽  
Gang Chen

Facing the image detection of dense small rigid targets, the main bottleneck of convolutional neural network (CNN)-based algorithms is the lack of massive correctly labeled training images. To make up for this lack, this paper proposes an automatic end-to-end synthesis algorithm that generates a huge amount of labeled training samples. The synthetic image set was adopted to train the network progressively and iteratively, realizing the detection of dense small rigid targets based on the CNN and synthetic images. Specifically, the standard images of the target classes and typical background images were imported, and the color, brightness, position, orientation, and perspective of real images were simulated by image processing algorithms, creating a sufficiently large initial training set with correctly labeled images. Then, the network was preliminarily trained on this set. After that, a few real images were compiled into the test set. Taking the missed and incorrectly detected target images as inputs, the initial training set was progressively expanded and then used to iteratively train the network. The results show that our method can automatically generate a training set that fully substitutes for a manually labeled dataset in network training, eliminating the dependence on massive manually labeled images. The research opens a new way to implement tasks similar to the detection of dense small rigid targets, and provides a good reference for solving similar problems through deep learning (DL).
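As a rough illustration of the synthesis step described above (not the authors' code), the Python sketch below composites a standard target image onto a background with randomized brightness, orientation, and position, and returns the corresponding box label for free. The perspective simulation mentioned in the abstract is omitted, and the `target`/`background` inputs are hypothetical OpenCV images.

```python
import numpy as np
import cv2

def synthesize_sample(target, background, rng=np.random.default_rng()):
    """Paste one target onto a background with randomized appearance and pose,
    returning the composite image and the target's bounding-box label.
    Assumes both inputs are 3-channel uint8 images and background is larger."""
    h, w = target.shape[:2]
    # Randomize brightness and color balance of the target.
    gains = rng.uniform(0.7, 1.3, size=3)
    tgt = np.clip(target.astype(np.float32) * gains, 0, 255).astype(np.uint8)
    # Randomize in-plane orientation.
    angle = rng.uniform(0, 360)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    tgt = cv2.warpAffine(tgt, M, (w, h))
    # Randomize position on the background.
    H, W = background.shape[:2]
    x = rng.integers(0, W - w)
    y = rng.integers(0, H - h)
    out = background.copy()
    mask = tgt.sum(axis=2) > 0          # crude mask: non-black pixels belong to the target
    out[y:y + h, x:x + w][mask] = tgt[mask]
    label = (x, y, x + w, y + h)        # the correct box label is known by construction
    return out, label
```

Repeating this for many targets per background yields an arbitrarily large, automatically labeled training set of the kind the abstract describes.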


2019 ◽  
Vol 11 (5) ◽  
pp. 484 ◽  
Author(s):  
Jie Feng ◽  
Lin Wang ◽  
Haipeng Yu ◽  
Licheng Jiao ◽  
Xiangrong Zhang

Convolutional neural network (CNN) is well-known for its powerful capability on image classification. In hyperspectral images (HSIs), a fixed-size spatial window is generally used as the input of a CNN for pixel-wise classification. However, a single fixed-size spatial architecture limits the performance of the CNN because it neglects the varied land-cover distributions in HSIs. Moreover, insufficient samples in HSIs may cause the overfitting problem. To address these problems, a novel divide-and-conquer dual-architecture CNN (DDCNN) method is proposed for HSI classification. In DDCNN, a novel regional division strategy based on local and non-local decisions is devised to distinguish homogeneous and heterogeneous regions. Then, for homogeneous regions, a multi-scale CNN architecture with larger spatial window inputs is constructed to learn joint spectral-spatial features. For heterogeneous regions, a fine-grained CNN architecture with smaller spatial window inputs is constructed to learn hierarchical spectral features. Moreover, to alleviate the problem of insufficient training samples, unlabeled samples with high confidence are pre-labeled under an adaptive spatial constraint. Experimental results on HSIs demonstrate that the proposed method provides encouraging classification performance, especially in region uniformity and edge preservation with limited training samples.
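A minimal sketch of the regional-division idea, assuming a hyperspectral cube stored as a NumPy array; the variance threshold and window sizes are illustrative stand-ins for the paper's local/non-local decision rule, not the published procedure.

```python
import numpy as np

def route_pixel(hsi, row, col, win_small=5, win_large=11, var_thresh=0.01):
    """Toy stand-in for DDCNN's regional division: pixels in spectrally
    homogeneous neighbourhoods get the large spatial window (multi-scale
    branch), others the small one (fine-grained branch).
    `hsi` is an (H, W, B) hyperspectral cube scaled to [0, 1]."""
    r = win_small // 2
    patch = hsi[max(row - r, 0):row + r + 1, max(col - r, 0):col + r + 1]
    # Mean per-band variance as a crude homogeneity score (the paper combines
    # local and non-local decisions; this threshold is purely illustrative).
    score = patch.reshape(-1, hsi.shape[-1]).var(axis=0).mean()
    win = win_large if score < var_thresh else win_small
    r = win // 2
    window = hsi[max(row - r, 0):row + r + 1, max(col - r, 0):col + r + 1]
    return window, win
```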


Author(s):  
Luoting Fu ◽  
Levent Burak Kara

Hand-drawn sketches are powerful cognitive devices for the efficient exploration, visualization and communication of emerging ideas in engineering design. It is desirable that CAD/CAE tools be able to recognize such back-of-the-envelope sketches and extract the intended engineering models, yet this is a nontrivial task for freehand input. Here we present a novel, neural network-based approach designed for the recognition of network-like sketches. Our approach leverages a trainable detector/recognizer and an autonomous procedure for the generation of training samples. Prior to deployment, a convolutional neural network is trained on a few labeled prototypical sketches and learns the definitions of the visual objects. When deployed, the trained network scans the input sketch at different resolutions with a fixed-size sliding window, detects instances of defined symbols and outputs an engineering model. We demonstrate the effectiveness of the proposed approach in different engineering domains with different types of sketching inputs.
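The multi-resolution sliding-window scan described above could look roughly like the following Python sketch; `classify_patch` is a hypothetical stand-in for the trained CNN, and the window size, stride, scales, and confidence cut-off are assumed values.

```python
import cv2

def scan_sketch(image, classify_patch, window=64, stride=16, scales=(1.0, 0.75, 0.5)):
    """Scan a grayscale sketch with a fixed-size sliding window at several
    resolutions. `classify_patch` stands in for the trained CNN and is assumed
    to return (label, confidence) for a window x window patch."""
    detections = []
    for s in scales:
        resized = cv2.resize(image, None, fx=s, fy=s)
        h, w = resized.shape[:2]
        for y in range(0, h - window + 1, stride):
            for x in range(0, w - window + 1, stride):
                label, conf = classify_patch(resized[y:y + window, x:x + window])
                if conf > 0.9:                      # keep confident symbol hits only
                    # Map the hit back to original-image coordinates.
                    detections.append((label, int(x / s), int(y / s), int(window / s)))
    return detections
```

The list of detected symbols and their locations would then be assembled into the network-like engineering model mentioned in the abstract.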


2021 ◽  
Vol 504 (1) ◽  
pp. 372-392
Author(s):  
Robert W Bickley ◽  
Connor Bottrell ◽  
Maan H Hani ◽  
Sara L Ellison ◽  
Hossen Teimoorinia ◽  
...  

ABSTRACT The Canada–France Imaging Survey (CFIS) will consist of deep, high-resolution r-band imaging over ∼5000 deg2 of the sky, representing a first-rate opportunity to identify recently merged galaxies. Because of the large number of galaxies in CFIS, we investigate the use of a convolutional neural network (CNN) for automated merger classification. Training samples of post-merger and isolated galaxy images are generated from the IllustrisTNG simulation processed with the observational realism code RealSim. The CNN’s overall classification accuracy is 88 per cent, remaining stable over a wide range of intrinsic and environmental parameters. We generate a mock galaxy survey from IllustrisTNG in order to explore the expected purity of post-merger samples identified by the CNN. Despite the CNN’s good performance in training, the intrinsic rarity of post-mergers leads to a sample that is only ∼6 per cent pure when the default decision threshold is used. We investigate trade-offs in purity and completeness with a variable decision threshold and find that we recover the statistical distribution of merger-induced star formation rate enhancements. Finally, the performance of the CNN is compared with both traditional automated methods and human classifiers. The CNN is shown to outperform Gini–M20 and asymmetry methods by an order of magnitude in post-merger sample purity on the mock survey data. Although the CNN outperforms the human classifiers on sample completeness, the purity of the post-merger sample identified by humans is frequently higher, indicating that a hybrid approach to classifications may be an effective solution to merger classifications in large surveys.
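To make the purity/completeness trade-off concrete, here is a small sketch (assuming hypothetical arrays of per-galaxy CNN scores and true post-merger labels) that sweeps the decision threshold over a mock catalogue; it is an illustration of the quantities discussed above, not the authors' analysis code.

```python
import numpy as np

def purity_completeness(scores, is_post_merger, thresholds=np.linspace(0.5, 0.99, 50)):
    """Sweep the CNN decision threshold and report sample purity (precision)
    and completeness (recall) at each cut."""
    scores = np.asarray(scores)
    truth = np.asarray(is_post_merger, dtype=bool)
    results = []
    for t in thresholds:
        selected = scores >= t
        if selected.sum() == 0:
            continue
        purity = truth[selected].mean()         # fraction of selected galaxies that are true post-mergers
        completeness = selected[truth].mean()   # fraction of true post-mergers recovered
        results.append((t, purity, completeness))
    return results
```

Because post-mergers are intrinsically rare, even a classifier with high accuracy can yield low purity at the default threshold, which is the effect the abstract reports.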


2021 ◽  
Author(s):  
Yu Hao ◽  
Biao Zhang ◽  
Xiaohua Wan ◽  
Rui Yan ◽  
Zhiyong Liu ◽  
...  

Motivation: Cryo-electron tomography (Cryo-ET) with sub-tomogram averaging (STA) is indispensable when studying macromolecule structures and functions in their native environments. However, current tomographic reconstructions suffer from a low signal-to-noise ratio (SNR) and missing-wedge artifacts. Hence, automatic and accurate macromolecule localization and classification become the bottleneck for structure determination by STA. Here, we propose a 3D multi-scale dense convolutional neural network (MSDNet) for voxel-wise annotation of tomograms. A weighted focal loss is adopted to address class imbalance. The proposed network combines 3D hybrid dilated convolutions (HDC) and dense connectivity to ensure accurate performance with relatively few trainable parameters. 3D HDC expands the receptive field without losing resolution or learning extra parameters. Dense connectivity facilitates the reuse of feature maps, reducing the number of intermediate feature maps and trainable parameters. We then design a 3D MSDNet-based approach for fully automatic macromolecule localization and classification, called VP-Detector (Voxel-wise Particle Detector). VP-Detector is efficient because classification is performed on pre-calculated coordinates instead of a sliding window. Results: We evaluated VP-Detector on simulated tomograms. Compared to the state-of-the-art methods, our method achieved competitive localization performance with the highest F1-score. We also demonstrated that the weighted focal loss improves the classification of hard classes. We trained the network on a subset of the training set to show that it can be trained on relatively small datasets. Moreover, the experiments show that VP-Detector detects particles quickly, taking less than 14 minutes on a test tomogram.
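The abstract's combination of 3D hybrid dilated convolutions and dense connectivity could be sketched in PyTorch as follows; this is an illustrative reading, not the released VP-Detector code, and the growth rate and dilation rates are assumed values.

```python
import torch
import torch.nn as nn

class HDCDenseBlock3D(nn.Module):
    """Rough sketch of a 3D dense block whose successive layers use increasing
    dilation rates, combining the hybrid-dilated-convolution and
    dense-connectivity ideas described in the abstract."""
    def __init__(self, in_channels, growth_rate=16, dilations=(1, 2, 5)):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.BatchNorm3d(channels),
                nn.ReLU(inplace=True),
                # "same" padding keeps resolution while dilation grows the receptive field
                nn.Conv3d(channels, growth_rate, kernel_size=3, padding=d, dilation=d),
            ))
            channels += growth_rate   # dense connectivity: later layers see all earlier feature maps
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```

Because each layer adds only `growth_rate` new channels and reuses all earlier feature maps, the block keeps the trainable parameter count small while the dilations expand the receptive field without downsampling.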


2021 ◽  
Vol 34 (4) ◽  
pp. 130-141
Author(s):  
Atheel Sabih Shaker

Brain magnetic resonance imaging (MRI) analysis is tasked with finding the pixels or voxels that establish where the brain is in a medical image. The convolutional neural network (CNN) can process curved baselines that frequently occur in scanned documents, after which the lines are separated into characters. For fonts with a fixed MRI width, the gaps are analyzed and split; otherwise, a limited region above the baseline is analyzed, separated, and classified. The words with the lowest recognition score are split into further characters until the result improves. If this does not improve the recognition score, contours are merged and classified again to check the change in the recognition score. The features for classification are extracted from small fixed-size patches over neighboring contours and matched against the trained deep learning representations; this approach enables Tesseract to easily handle MRI sample results broken into multiple parts, which would be impossible if each contour were processed separately. The CNN Inception network appears to be a suitable choice for evaluating the synthetic MRI samples, with 3000 features and 12000 image samples produced by data augmentation, which favors data similar to the original training set and is thus unlikely to contain new information content; it achieves an accuracy of 98.68%. The error is only 1.32% as the number of training samples grows, and the most significant reduction in error is obtained by increasing the number of samples.
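For context, an augmentation-plus-Inception training setup of the kind the abstract reports might look roughly like the sketch below in PyTorch/torchvision (a recent torchvision is assumed); the folder path, class count, augmentation choices, and hyper-parameters are all hypothetical.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Mild augmentations keep the generated samples close to the original training
# data, in line with the abstract's remark about data augmentation.
augment = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomHorizontalFlip(),
    transforms.Resize((299, 299)),        # Inception-v3 expects 299x299 inputs
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("mri_train/", transform=augment)   # hypothetical folder layout
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.inception_v3(weights=None, aux_logits=True, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    outputs, aux = model(images)          # inception_v3 returns main and auxiliary logits in train mode
    loss = criterion(outputs, labels) + 0.4 * criterion(aux, labels)
    loss.backward()
    optimizer.step()
```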

