Training instance segmentation neural network with synthetic datasets for crop seed phenotyping

2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Yosuke Toda ◽  
Fumio Okura ◽  
Jun Ito ◽  
Satoshi Okada ◽  
Toshinori Kinoshita ◽  
...


Author(s):  
Zhiyong Gao ◽  
Jianhong Xiang

Background: When detecting objects directly from a 3D point cloud, the natural 3D patterns and invariances of the data are often obscured. Objective: In this work, we aimed to study 3D object detection from discrete, disordered, and sparse 3D point clouds. Methods: The CNN is composed of the frustum sequence module, the 3D instance segmentation module S-NET, the 3D point cloud transformation module T-NET, and the 3D bounding box estimation module E-NET. The search space of the object is determined by the frustum sequence module. Instance segmentation of the point cloud is performed by the 3D instance segmentation module, and the 3D coordinates of the object are confirmed by the transformation module and the 3D bounding box estimation module. Results: Evaluated on the KITTI benchmark dataset, our method outperforms the state of the art by substantial margins while retaining real-time capability. Conclusion: We achieve real-time 3D object detection by proposing an improved convolutional neural network (CNN) based on image-driven point clouds.
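The frustum-first pipeline described in this abstract can be illustrated with toy stand-ins. The staging (frustum selection from a 2D detection, then recentring and box estimation) follows the abstract, but the implementation below is a hypothetical numpy sketch, not the authors' S-NET/T-NET/E-NET networks; the camera intrinsics and box values are invented for the example.

```python
import numpy as np

def frustum_points(points, fx, fy, cx, cy, box2d):
    """Toy frustum module: keep points whose pinhole projection falls
    inside a 2D detection box (u_min, v_min, u_max, v_max). Points are
    assumed to be in camera coordinates with z pointing forward."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    u_min, v_min, u_max, v_max = box2d
    keep = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max) & (z > 0)
    return points[keep]

def estimate_box(points):
    """Toy stand-in for the T-NET/E-NET stages: recentre the segmented
    points and fit an axis-aligned 3D bounding box (centre, size)."""
    centre = points.mean(axis=0)
    size = points.max(axis=0) - points.min(axis=0)
    return centre, size

# Synthetic example: a small cluster 10 m in front of the camera.
rng = np.random.default_rng(0)
cloud = rng.normal([0.0, 0.0, 10.0], [0.5, 0.5, 0.5], size=(200, 3))
roi = frustum_points(cloud, fx=700, fy=700, cx=320, cy=240,
                     box2d=(200, 120, 440, 360))
centre, size = estimate_box(roi)
```

In the real pipeline an instance segmentation network would filter the frustum points before box fitting; here the cluster is clean, so the geometric steps alone recover the object centre.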


2021 ◽  
Author(s):  
Hussain Mohammed Dipu Kabir ◽  
Moloud Abdar ◽  
Abbas Khosravi ◽  
Darius Nahavandi ◽  
Shady Mohamed ◽  
...  

Abstract In this paper, we propose ten synthetic datasets for point prediction and numeric uncertainty quantification (UQ). These datasets are split into train, validation, and test sets for model benchmarking. Equations and a detailed description of each dataset are provided. We also present representative shallow neural network (NN) and Random Vector Functional Link (RVFL) training examples; both train models for point prediction. We perform uncertainty quantification under a Gaussian, homoscedastic noise assumption. As the distributional assumption and models are rudimentary, much room exists for further exploration and improvement. The datasets and scripts are available at the following link: https://github.com/dipuk0506/UQ-Data
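The workflow this abstract describes (synthetic data with known equations, a train/validation/test split, a point predictor, and a homoscedastic Gaussian uncertainty estimate) can be sketched end to end. The generating equation, split sizes, and the use of a polynomial fit in place of a shallow NN or RVFL are all assumptions for illustration, not the paper's datasets or models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical synthetic dataset: y = sin(x) + homoscedastic Gaussian noise.
x = rng.uniform(-3, 3, size=1000)
y = np.sin(x) + rng.normal(0.0, 0.1, size=x.shape)

# Train / validation / test split, as in the paper's benchmarking setup.
idx = rng.permutation(len(x))
tr, va, te = idx[:600], idx[600:800], idx[800:]

# Stand-in point predictor: polynomial least squares instead of a shallow NN.
coef = np.polyfit(x[tr], y[tr], deg=5)
pred = lambda xs: np.polyval(coef, xs)

# Homoscedastic Gaussian UQ: a single sigma estimated from validation
# residuals gives a constant-width 95% prediction interval.
sigma = np.std(y[va] - pred(x[va]))
lower = pred(x[te]) - 1.96 * sigma
upper = pred(x[te]) + 1.96 * sigma
coverage = np.mean((y[te] >= lower) & (y[te] <= upper))
```

Because the noise really is homoscedastic here, the constant-width interval covers roughly 95% of the test targets; on heteroscedastic data the same recipe would under- or over-cover, which is the "room for improvement" the abstract notes.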


2020 ◽  
Vol 12 (20) ◽  
pp. 3274
Author(s):  
Keke Geng ◽  
Ge Dong ◽  
Guodong Yin ◽  
Jingyu Hu

Recent advancements in environmental perception for autonomous vehicles have been driven by deep learning-based approaches. However, effective traffic target detection in complex environments remains a challenging task. This paper presents a novel dual-modal instance segmentation deep neural network (DM-ISDNN) that merges camera and LIDAR data, which can efficiently handle target detection in complex environments through multi-sensor data fusion. Due to the sparseness of the LIDAR point cloud data, we propose a weight assignment function that assigns different weight coefficients to different feature pyramid convolutional layers of the LIDAR sub-network. We compare and analyze early-, middle-, and late-stage fusion architectures in depth. Comprehensively considering detection accuracy and detection speed, the middle-stage fusion architecture with a weight assignment mechanism, which performs best, is selected. This work has great significance for exploring the best feature fusion scheme for a multi-modal neural network. In addition, we apply a mask distribution function to improve the quality of the predicted masks. A dual-modal traffic object instance segmentation dataset is established using 7481 camera and LIDAR data pairs from the KITTI dataset, with 79,118 manually annotated instance masks. To the best of our knowledge, no existing instance annotation for the KITTI dataset matches this quality and volume. A novel dual-modal dataset, composed of 14,652 camera and LIDAR data pairs, is collected using our own developed autonomous vehicle under different environmental conditions in real driving scenarios, for which a total of 62,579 instance masks are obtained using a semi-automatic annotation method. This dataset can be used to validate the detection performance of instance segmentation networks under complex environmental conditions.
Experimental results on the dual-modal KITTI benchmark demonstrate that DM-ISDNN with middle-stage data fusion and the weight assignment mechanism outperforms single- and dual-modal networks with other data fusion strategies, which validates the robustness and effectiveness of the proposed method. Meanwhile, compared to state-of-the-art instance segmentation networks, our method shows much better detection performance, in terms of AP and F1 score, on the dual-modal dataset collected under complex environmental conditions, which further validates the superiority of our method.
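The per-level weight assignment for middle-stage fusion can be sketched as a weighted blend of camera and LIDAR feature maps at each pyramid level. The function, the four-level pyramid, and the specific weight values below are all illustrative assumptions; the paper's network learns its fusion within a trained CNN rather than with fixed scalars.

```python
import numpy as np

def fuse_pyramid(cam_feats, lidar_feats, lidar_weights):
    """Hypothetical middle-stage fusion: at each feature pyramid level,
    blend the camera map with the (sparser) LIDAR map, scaling the LIDAR
    contribution by a per-level weight coefficient."""
    assert len(cam_feats) == len(lidar_feats) == len(lidar_weights)
    fused = []
    for cam, lidar, w in zip(cam_feats, lidar_feats, lidar_weights):
        fused.append((1.0 - w) * cam + w * lidar)
    return fused

# Toy 4-level pyramid (16x16 down to 2x2). Deeper, coarser levels are
# given more LIDAR weight here, purely as an illustrative choice.
cam = [np.ones((2 ** k, 2 ** k)) for k in (4, 3, 2, 1)]
lidar = [0.5 * f for f in cam]
fused = fuse_pyramid(cam, lidar, lidar_weights=[0.2, 0.4, 0.6, 0.8])
```

The point of per-level coefficients is that LIDAR sparsity hurts fine, high-resolution levels more than coarse ones, so a single global fusion weight would be a poor fit for every level at once.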


2019 ◽  
Vol 490 (3) ◽  
pp. 3952-3965 ◽  
Author(s):  
Colin J Burke ◽  
Patrick D Aleo ◽  
Yu-Ching Chen ◽  
Xin Liu ◽  
John R Peterson ◽  
...  

ABSTRACT We apply a new deep learning technique to detect, classify, and deblend sources in multiband astronomical images. We train and evaluate the performance of an artificial neural network built on the Mask Region-based Convolutional Neural Network image processing framework, a general code for efficient object detection, classification, and instance segmentation. After evaluating the performance of our network against simulated ground truth images for star and galaxy classes, we find a precision of 92 per cent at 80 per cent recall for stars and a precision of 98 per cent at 80 per cent recall for galaxies in a typical field with ∼30 galaxies arcmin⁻². We investigate the deblending capability of our code, and find that clean deblends are handled robustly during object masking, even for significantly blended sources. This technique, or extensions using similar network architectures, may be applied to current and future deep imaging surveys such as the Large Synoptic Survey Telescope and the Wide-Field Infrared Survey Telescope. Our code, astro r-cnn, is publicly available at https://github.com/burke86/astro_rcnn.
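The "precision at 80 per cent recall" figures quoted above come from sweeping a score threshold over ranked detections. A minimal sketch of that metric, with invented toy detections rather than the paper's catalogues:

```python
import numpy as np

def precision_at_recall(scores, is_true, target_recall):
    """Precision at a fixed recall level: rank detections by score,
    accumulate true/false positives, and read off precision at the
    first threshold where recall reaches the target."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(is_true)[order]
    tp = np.cumsum(labels)
    fp = np.cumsum(1 - labels)
    recall = tp / labels.sum()
    precision = tp / (tp + fp)
    return precision[np.searchsorted(recall, target_recall)]

# Toy ranked detections: 1 = matched to a real source, 0 = spurious.
scores = [0.9, 0.8, 0.7, 0.6, 0.5]
is_true = [1, 1, 0, 1, 0]
p = precision_at_recall(scores, is_true, target_recall=0.8)
```

At 80 per cent recall the threshold must admit 3 of the 4 true sources, which in this toy ranking costs one false positive, so the returned precision is 3/4.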


2020 ◽  
Vol 17 (3) ◽  
pp. 172988142092528
Author(s):  
Haitao Xiong ◽  
Jiaqing Wu ◽  
Qing Liu ◽  
Yuanyuan Cai

As an information carrier with rich semantics, the image plays an increasingly important role in real-time monitoring of logistics management. Abnormal objects are typically closely related to a specific region, and detecting abnormal objects within that region improves the accuracy of detection and analysis, thereby improving the level of logistics management. Motivated by these observations, we design a Mask R-convolutional neural network-based method called Abnormal Object Detection in Specific Region. In this method, an initial instance segmentation model is obtained with the traditional Mask R-convolutional neural network; then the region overlap with the specific region is calculated and the overlap ratio of each instance is determined, and these two parts of information are fused to predict the abnormal object. Finally, the abnormal object is restored and detected in the original image. Experimental results demonstrate that our proposed Abnormal Object Detection in Specific Region can effectively identify abnormal objects in a specific region and significantly outperforms state-of-the-art methods.
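The overlap-ratio step that this method fuses with the instance predictions can be sketched on binary masks. The function name, the thresholding decision, and the toy region/instance below are assumptions for illustration; the abstract does not specify the exact fusion rule.

```python
import numpy as np

def region_overlap_ratio(instance_mask, region_mask):
    """Fraction of an instance's pixels that fall inside the specific
    region. A downstream rule (assumed here) could flag an instance as
    abnormal when this ratio crosses a threshold for its class."""
    inter = np.logical_and(instance_mask, region_mask).sum()
    return inter / max(instance_mask.sum(), 1)

region = np.zeros((8, 8), dtype=bool)
region[:, :4] = True              # "specific region": left half of the image
inst = np.zeros((8, 8), dtype=bool)
inst[2:4, 2:6] = True             # instance straddling the region boundary
ratio = region_overlap_ratio(inst, region)
```

Here the instance has 8 pixels, 4 of which lie inside the region, giving a ratio of 0.5; fusing this scalar with the class score is what lets the method treat the same object differently inside and outside the region.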


2018 ◽  
Author(s):  
Yuta Tokuoka ◽  
Takahiro G Yamada ◽  
Noriko F Hiroi ◽  
Tetsuya J Kobayashi ◽  
Kazuo Yamagata ◽  
...  

Abstract In embryology, image processing methods such as segmentation are applied to acquire quantitative criteria from time-series three-dimensional microscopic images. When used to segment cells or intracellular organelles, several current deep learning techniques outperform traditional image processing algorithms. However, segmentation algorithms still have unsolved problems, especially in bioimage processing. The most critical issue is that existing deep learning-based algorithms for bioimages perform only semantic segmentation, which distinguishes whether a pixel is within an object (for example, a nucleus) or not. In this study, we implemented a novel deep learning-based segmentation algorithm that segments each nucleus and assigns a different label to each detected object; this type of segmentation is called instance segmentation. Our instance segmentation algorithm, implemented as a neural network that we named QCA Net, substantially outperformed 3D U-Net, the best-performing deep learning-based semantic segmentation algorithm. Using QCA Net, we quantified the nuclear number, volume, surface area, and center-of-gravity coordinates during the development of mouse embryos. In particular, QCA Net distinguished nuclei of embryonic cells from those of polar bodies formed in meiosis. We consider that QCA Net can greatly contribute to bioimage segmentation in embryology by generating quantitative criteria from segmented images.
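Once an instance segmentation assigns a distinct label to each nucleus, the quantitative criteria listed above (number, volume, center of gravity) follow directly from the labelled volume. The sketch below mocks the segmentation step with connected-component labelling of two synthetic cubes, assuming scipy is available; QCA Net itself produces the labels with a neural network.

```python
import numpy as np
from scipy import ndimage

# Mock 3D stack with two disjoint "nuclei" (3x3x3 cubes of voxels).
stack = np.zeros((10, 10, 10), dtype=bool)
stack[1:4, 1:4, 1:4] = True
stack[6:9, 6:9, 6:9] = True

# Stand-in for instance segmentation: connected-component labelling.
labels, n_nuclei = ndimage.label(stack)

# Per-nucleus criteria: voxel volume and center-of-gravity coordinates.
index = range(1, n_nuclei + 1)
volumes = ndimage.sum(stack.astype(float), labels, index=index)
centroids = ndimage.center_of_mass(stack, labels, index)
```

Surface area is similarly recoverable from the labelled array (e.g. by counting exposed voxel faces per label), and tracking these quantities over a time series yields the developmental criteria the abstract describes.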


2021 ◽  
pp. 116403
Author(s):  
Si Yang ◽  
Lihua Zheng ◽  
Huijun Yang ◽  
Man Zhang ◽  
Tingting Wu ◽  
...  
