Road Segmentation based on Deep Learning with Post-Processing Probability Layer

Author(s):  
Weibin Chen


Author(s):  
Tobias M. Rasse ◽  
Réka Hollandi ◽  
Péter Horváth

Abstract Various pre-trained deep learning models for the segmentation of bioimages have been made available as ‘developer-to-end-user’ solutions. They usually require neither knowledge of machine learning nor coding skills, and are optimized for ease of use and deployability on laptops. However, testing these tools individually is tedious and success is uncertain.

Here, we present the ‘Op’en ‘Se’gmentation ‘F’ramework (OpSeF), a Python framework for deep learning-based instance segmentation. OpSeF aims at facilitating the collaboration of biomedical users with experienced image analysts. It builds on the analysts’ knowledge of Python, machine learning, and workflow design to solve complex analysis tasks at any scale in a reproducible, well-documented way. OpSeF defines standard inputs and outputs, thereby facilitating modular workflow design and interoperability with other software. Users play an important role in problem definition, quality control, and manual refinement of results. All analyst tasks are optimized for deployment on Linux workstations or GPU clusters; all user tasks may be performed on any laptop in ImageJ.

OpSeF semi-automates preprocessing, convolutional neural network (CNN)-based segmentation in 2D or 3D, and post-processing. It facilitates benchmarking of multiple models in parallel. OpSeF streamlines the optimization of parameters for pre- and post-processing such that an available model may frequently be used without retraining. Even if sufficiently good results are not achievable with this approach, intermediate results can inform the analysts in the selection of the most promising CNN architecture, in which the biomedical user might invest the effort of manually labeling training data.

We provide Jupyter notebooks that document sample workflows based on various image collections. Analysts may find these notebooks useful to illustrate common segmentation challenges, as they prepare the advanced user for gradually taking over some of their tasks and completing their projects independently. The notebooks may also be used to explore the analysis options available within OpSeF in an interactive way and to document and share final workflows.

Currently, three mechanistically distinct CNN-based segmentation methods, the U-Net implementation used in CellProfiler 3.0, StarDist, and Cellpose, have been integrated within OpSeF. The addition of new networks requires little coding, and the addition of new models requires none. Thus, OpSeF might soon become an interactive model repository in which pre-trained models may be shared, evaluated, and reused with ease.
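To illustrate the modular workflow idea the abstract describes (shared preprocessing and post-processing around interchangeable segmentation backends, benchmarked in parallel), the following is a minimal sketch only. It does not reproduce the actual OpSeF API; the function names, the use of scikit-image, and the thresholding stand-in backend are assumptions for illustration.

```python
# Minimal sketch of an OpSeF-style modular segmentation pipeline (illustrative only;
# the real OpSeF API differs). Assumes scikit-image and numpy are installed, and that
# each segmentation backend is a callable mapping a preprocessed image to a label mask.
import numpy as np
from skimage import io, filters, exposure, measure

def preprocess(img, sigma=1.0):
    """Shared preprocessing: intensity rescaling plus mild Gaussian smoothing."""
    img = exposure.rescale_intensity(img.astype(np.float32), out_range=(0.0, 1.0))
    return filters.gaussian(img, sigma=sigma)

def postprocess(labels, min_area=20):
    """Shared post-processing: drop labeled objects smaller than `min_area` pixels."""
    keep = np.zeros_like(labels)
    for region in measure.regionprops(labels):
        if region.area >= min_area:
            keep[labels == region.label] = region.label
    return measure.label(keep > 0)

def benchmark(image_path, backends, sigma=1.0, min_area=20):
    """Run several segmentation backends on the same preprocessed image and
    return {backend_name: label_mask} for side-by-side comparison."""
    img = preprocess(io.imread(image_path), sigma=sigma)
    return {name: postprocess(seg_fn(img), min_area=min_area)
            for name, seg_fn in backends.items()}

# A trivial Otsu-threshold "backend" standing in for StarDist / Cellpose / U-Net:
def otsu_backend(img):
    return measure.label(img > filters.threshold_otsu(img))

# results = benchmark("nuclei.tif", {"otsu": otsu_backend})
```

Because every backend consumes and produces the same standardized arrays, adding another segmentation method only means adding one more entry to the `backends` dictionary, which mirrors the interoperability argument made above.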


Author(s):  
Elena Morotti ◽  
Davide Evangelista ◽  
Elena Loli Piccolomini

Deep learning is developing tools that are of great interest for inverse imaging applications. In this work, we consider a medical imaging reconstruction task from subsampled measurements, an active research field where convolutional neural networks have already revealed their great potential. However, the commonly used architectures are very deep and therefore prone to overfitting and unfeasible for clinical use. Inspired by the ideas of the green-AI literature, we propose a shallow neural network that performs an efficient learned post-processing on images roughly reconstructed by the filtered backprojection algorithm. The results obtained on images from the training set and on unseen images, using both the inexpensive network and the widely used, very deep ResUNet, show that the proposed network computes images of comparable or higher quality in about one fourth of the time.
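As a rough sketch of what a shallow learned post-processing network can look like, the PyTorch module below refines a filtered-backprojection reconstruction with a few convolutional layers. The depth, channel width, and residual formulation are illustrative assumptions, not the architecture used by the authors.

```python
# Illustrative sketch of a shallow learned post-processing network in PyTorch
# (layer count and channel width are placeholders, not the paper's architecture).
# The network takes a coarse filtered-backprojection (FBP) reconstruction and
# predicts a residual correction, so the output is FBP image + learned refinement.
import torch
import torch.nn as nn

class ShallowPostProcessor(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, fbp_image):
        # Residual learning: refine the coarse reconstruction rather than
        # re-synthesising the image from scratch.
        return fbp_image + self.body(fbp_image)

model = ShallowPostProcessor()
coarse = torch.randn(1, 1, 256, 256)   # stand-in for an FBP reconstruction
refined = model(coarse)                # same spatial size, with learned correction
```

A network this small has orders of magnitude fewer parameters than a deep ResUNet, which is the kind of trade-off the green-AI argument above appeals to.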


2019 ◽  
Vol 18 (1) ◽  
Author(s):  
Meng Dai ◽  
Shuying Li ◽  
Yuanyuan Wang ◽  
Qi Zhang ◽  
Jinhua Yu

Abstract Background: Improving imaging quality is a fundamental problem in ultrasound contrast agent imaging (UCAI) research. Plane wave imaging (PWI) has been deemed a potential method for UCAI due to its high frame rate and low mechanical index. A high frame rate can improve the temporal resolution of UCAI, while a low mechanical index is essential because microbubbles are easily destroyed under high mechanical index conditions. However, the clinical practice of ultrasound contrast agent plane wave imaging (UCPWI) is still limited by poor imaging quality owing to the lack of a transmit focus. The purpose of this study was to propose and validate a new post-processing method combined with deep learning to improve the imaging quality of UCPWI. The proposed method consists of three stages: (1) a deep learning approach based on U-Net was trained to differentiate microbubble and tissue radio frequency (RF) signals; (2) to eliminate the remaining tissue RF signals, the bubble approximated wavelet transform (BAWT) combined with a maximum eigenvalue threshold was employed; BAWT enhances the brightness of the UCA area, and the eigenvalue threshold can be set to eliminate interference areas, exploiting the large difference in maximum eigenvalue between UCA and tissue areas; (3) finally, accurate microbubble images were obtained through eigenspace-based minimum variance (ESBMV) beamforming.

Results: The proposed method was validated by both phantom and in vivo rabbit experiments. Compared with UCPWI based on delay and sum (DAS), the imaging contrast-to-tissue ratio (CTR) and contrast-to-noise ratio (CNR) were improved by 21.3 dB and 10.4 dB in the phantom experiment, and the corresponding improvements were 22.3 dB and 42.8 dB in the rabbit experiment.

Conclusions: Our method shows superior imaging performance and high reproducibility, and is thus promising for improving the contrast image quality and the clinical value of UCPWI.
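The eigenvalue-threshold idea in stage (2) can be pictured with a small numpy sketch: for each image region, build the spatial covariance matrix of the per-channel echo samples and keep the region only if its largest eigenvalue exceeds a threshold, since microbubble regions exhibit a much larger maximum eigenvalue than tissue. The data layout, threshold value, and toy signals below are assumptions; the full pipeline (U-Net bubble/tissue classification, BAWT, ESBMV beamforming) is not shown.

```python
# Minimal, illustrative sketch of a maximum-eigenvalue gate for separating
# bubble-like regions from tissue-like regions in multichannel RF data.
import numpy as np

def max_eigenvalue(channel_data):
    """channel_data: (n_channels, n_samples) RF samples for one region."""
    cov = np.cov(channel_data)            # (n_channels, n_channels) spatial covariance
    return np.linalg.eigvalsh(cov)[-1]    # eigvalsh returns eigenvalues in ascending order

def eigenvalue_gate(regions, threshold):
    """Keep only regions whose maximum eigenvalue exceeds the threshold."""
    return [r for r in regions if max_eigenvalue(r) > threshold]

# Toy example: a region with one strong coherent component (bubble-like)
# versus a purely noise-like region (tissue-like).
rng = np.random.default_rng(0)
coherent = np.outer(np.ones(8), np.sin(np.linspace(0, 20, 512)))
bubble_region = coherent + 0.1 * rng.standard_normal((8, 512))
tissue_region = 0.1 * rng.standard_normal((8, 512))
kept = eigenvalue_gate([bubble_region, tissue_region], threshold=1.0)
print(len(kept))  # expected: 1 (only the bubble-like region passes the gate)
```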


2020 ◽  
Vol 11 (1) ◽  
pp. 148-160 ◽  
Author(s):  
Weicong Kong ◽  
Zhao Yang Dong ◽  
Bo Wang ◽  
Junhua Zhao ◽  
Jie Huang

2019 ◽  
Vol 10 (4) ◽  
pp. 381-390 ◽  
Author(s):  
Ye Li ◽  
Lele Xu ◽  
Jun Rao ◽  
Lili Guo ◽  
Zhen Yan ◽  
...  

Author(s):  
Michele Bici ◽  
Saber Seyed Mohammadi ◽  
Francesca Campana

Abstract Reverse Engineering (RE) may support tolerance inspection during production through the digitalization of the analyzed components and their comparison with design requirements. RE techniques are already applied for geometrical and tolerance shape control. Plastic injection molding is one of the fields where they may be applied, in particular for the die set-up of multi-cavity molds, since no severe accuracy is required of the acquisition system. In this field, RE techniques integrated with computer-aided tools for tolerancing and inspection may contribute to the so-called “Smart Manufacturing”. Their integration with PLM and suppliers’ incoming components may provide the information necessary to evaluate each component and die. Intensive application of shape digitalization must confront several issues: the accuracy of the data acquisition hardware and software; the automation of experimental and post-processing steps; and the updating of industrial protocols and workers’ knowledge, among others. Concerning post-processing automation, many advantages arise from computer vision, since it is based on the same concepts developed in RE post-processing (detection, segmentation, and classification). Recently, deep learning has been applied to classify point clouds for object and/or feature recognition. This can be done in two ways: by rasterizing the data into a 3D voxel grid, which increases regularity, before feeding it to a deep network architecture; or by acting directly on the point cloud. Literature data demonstrate high accuracy, depending on the quality of network training. In this paper, a preliminary study of CNNs for 3D point segmentation is provided. Their characteristics have been compared with an automatic approach previously implemented by the authors. The VoxNet and PointNet architectures have been compared for the specific task of feature recognition for tolerance inspection, and some investigations on test cases are discussed to assess their performance.
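The first of the two input strategies mentioned above, rasterizing the cloud into a regular grid before feeding a VoxNet-style network, amounts to a simple voxelization step such as the numpy sketch below. The grid resolution, unit-cube normalization, and toy sphere cloud are illustrative assumptions; a PointNet-style network would instead consume the raw (N, 3) points directly.

```python
# Illustrative voxelization of an unordered point cloud into a fixed-size binary
# occupancy grid, the regular input expected by a VoxNet-style 3D CNN.
import numpy as np

def voxelize(points, resolution=32):
    """points: (N, 3) XYZ coordinates -> (resolution,)*3 binary occupancy grid."""
    mins = points.min(axis=0)
    extent = np.maximum(points.max(axis=0) - mins, 1e-9)    # avoid division by zero
    normalised = (points - mins) / extent                    # map into the unit cube
    idx = np.minimum((normalised * resolution).astype(int), resolution - 1)
    grid = np.zeros((resolution, resolution, resolution), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Example: voxelize a random cloud sampled on a unit sphere surface.
rng = np.random.default_rng(1)
pts = rng.standard_normal((2048, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
occupancy = voxelize(pts, resolution=32)
print(occupancy.sum(), "occupied voxels out of", occupancy.size)
```

The regular grid trades memory (resolution cubed) for convolution-friendly structure, which is exactly the regularity-versus-raw-points distinction drawn between VoxNet and PointNet in the comparison above.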

