A Deep Learning Pipeline for Nucleus Segmentation

2020 ◽  
Author(s):  
George Zaki ◽  
Prabhakar R. Gudla ◽  
Kyunghun Lee ◽  
Justin Kim ◽  
Laurent Ozbun ◽  
...  

Abstract Deep learning is rapidly becoming the technique of choice for automated segmentation of nuclei in biological image analysis workflows. In order to evaluate the feasibility of training nuclear segmentation models on small, custom annotated image datasets that have been augmented, we have designed a computational pipeline to systematically compare different nuclear segmentation model architectures and model training strategies. Using this approach, we demonstrate that transfer learning and tuning of training parameters, such as the composition, size and pre-processing of the training image dataset, can lead to robust nuclear segmentation models, which match, and often exceed, the performance of existing, off-the-shelf deep learning models pre-trained on large image datasets. We envision a practical scenario where deep learning nuclear segmentation models trained in this way can be shared across a laboratory, facility, or institution, and continuously improved by training them on progressively larger and varied image datasets. Our work provides computational tools and a practical framework for deep learning-based biological image segmentation using small annotated image datasets.
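The systematic comparison of candidate segmentation models described above can be illustrated with a minimal, hypothetical scoring loop: each model's predicted binary mask is compared to the annotated ground truth by intersection-over-union (IoU), and the best scorer is kept. The masks and model names below are invented for illustration, not taken from the paper.

```python
def iou(pred, truth):
    """Intersection-over-union of two binary masks given as flat lists of 0/1."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

# Hypothetical predictions from two candidate nuclear segmentation models
truth   = [0, 1, 1, 1, 0, 0, 1, 0]
model_a = [0, 1, 1, 0, 0, 0, 1, 0]   # misses one nucleus pixel
model_b = [1, 1, 1, 1, 1, 0, 1, 0]   # over-segments two background pixels

scores = {"model_a": iou(model_a, truth), "model_b": iou(model_b, truth)}
best = max(scores, key=scores.get)    # -> "model_a"
```

The same loop generalizes to any number of architectures and training strategies; only the scoring metric and the held-out annotated masks are fixed.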

Materials ◽  
2021 ◽  
Vol 14 (21) ◽  
pp. 6311
Author(s):  
Woldeamanuel Minwuye Mesfin ◽  
Soojin Cho ◽  
Jeongmin Lee ◽  
Hyeong-Ki Kim ◽  
Taehoon Kim

The objective of this study is to evaluate the feasibility of deep-learning-based segmentation of the area covered by fresh and young concrete in images of construction sites. RGB images of construction sites under various actual conditions were used as input to several types of convolutional neural network (CNN)-based segmentation models, which were trained using training image sets. Various ranges of threshold values were applied for the classification, and their accuracy and recall were quantified. The trained models could segment the concrete area overall, although they were not able to distinguish between concrete of different ages as professionals can. By increasing the threshold values for the softmax classifier, incorrect predictions of concrete fell to almost zero, while some concrete areas became segmented as not concrete.
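The threshold trade-off described above can be sketched in a few lines: a pixel is labelled "concrete" only when its softmax probability for the concrete class exceeds the threshold, so raising the threshold trades false positives for missed concrete. The per-pixel probabilities here are invented for illustration.

```python
def segment_concrete(probs, threshold):
    """Label a pixel 'concrete' only when its softmax probability
    for the concrete class meets the threshold."""
    return [p >= threshold for p in probs]

# Hypothetical per-pixel softmax probabilities for the 'concrete' class
probs = [0.95, 0.80, 0.55, 0.30, 0.91]

low  = segment_concrete(probs, 0.5)   # more pixels kept, more false positives
high = segment_concrete(probs, 0.9)   # near-zero false positives, some concrete missed
```

At the higher threshold, only the two most confident pixels survive, which mirrors the study's observation that false positives vanish at the cost of under-segmenting concrete.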


2021 ◽  
Vol 11 (15) ◽  
pp. 7046
Author(s):  
Jorge Francisco Ciprián-Sánchez ◽  
Gilberto Ochoa-Ruiz ◽  
Lucile Rossi ◽  
Frédéric Morandini

Wildfires stand as one of the most relevant natural disasters worldwide, all the more so due to the effects of climate change at various societal and environmental levels. A significant amount of research has been done to address this issue, deploying a wide variety of technologies and following a multi-disciplinary approach. Computer vision in particular has played a fundamental role: it can be used to extract and combine information from several imaging modalities for fire detection, characterization and wildfire spread forecasting. In recent years, there has been work pertaining to Deep Learning (DL)-based fire segmentation, showing very promising results. However, it is currently unclear whether the architecture of a model, its loss function, or the image type employed (visible, infrared, or fused) has the most impact on the fire segmentation results. In the present work, we evaluate different combinations of state-of-the-art (SOTA) DL architectures, loss functions, and types of images to identify the parameters most relevant to improving the segmentation results. We benchmark them to identify the top-performing ones and compare them to traditional fire segmentation techniques. Finally, we evaluate whether the addition of attention modules to the best-performing architecture can further improve the segmentation results. To the best of our knowledge, this is the first work that evaluates the impact of the architecture, loss function, and image type on the performance of DL-based wildfire segmentation models.
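One of the loss functions commonly compared in such segmentation benchmarks is the Dice loss. A minimal soft-Dice sketch over flattened masks follows; the probabilities are invented for illustration, and the paper may weigh or combine losses differently.

```python
def dice_loss(pred, truth, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), with predictions as probabilities."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

truth = [0.0, 1.0, 1.0, 0.0]
good  = [0.1, 0.9, 0.8, 0.1]   # close to ground truth -> low loss
bad   = [0.9, 0.1, 0.2, 0.8]   # mostly wrong -> high loss
```

Because Dice is computed over the overlap rather than per pixel, it is less sensitive to the class imbalance typical of fire images, where flame pixels are a small fraction of the frame.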


Author(s):  
Wenjia Cai ◽  
Jie Xu ◽  
Ke Wang ◽  
Xiaohong Liu ◽  
Wenqin Xu ◽  
...  

Abstract Anterior segment eye diseases account for a significant proportion of presentations to eye clinics worldwide, including diseases associated with corneal pathologies, anterior chamber abnormalities (e.g. blood or inflammation) and lens diseases. The construction of an automatic tool for the segmentation of anterior segment eye lesions will greatly improve the efficiency of clinical care. With research on artificial intelligence progressing in recent years, deep learning models have shown their superiority in image classification and segmentation. The training and evaluation of deep learning models should be based on a large amount of data annotated with expertise; however, such data are relatively scarce in the domain of medicine. Herein, the authors developed a new medical image annotation system, called EyeHealer. It is a large-scale anterior eye segment dataset with both eye structures and lesions annotated at the pixel level. Comprehensive experiments were conducted to verify its performance in disease classification and eye lesion segmentation. The results showed that semantic segmentation models outperformed medical segmentation models. This paper describes the establishment of the system for automated classification and segmentation tasks. The dataset will be made publicly available to encourage future research in this area.


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2834
Author(s):  
Billur Kazaz ◽  
Subhadipto Poddar ◽  
Saeed Arabi ◽  
Michael A. Perez ◽  
Anuj Sharma ◽  
...  

Construction activities typically create large amounts of ground disturbance, which can lead to increased rates of soil erosion. Construction stormwater practices are used on active jobsites to protect downstream waterbodies from offsite sediment transport. Federal and state regulations require routine pollution prevention inspections to ensure that temporary stormwater practices are in place and performing as intended. This study addresses the existing challenges and limitations in construction stormwater inspections and presents a unique approach for performing unmanned aerial system (UAS)-based inspections. Deep learning-based object detection principles were applied to identify and locate practices installed on active construction sites. The system integrates a post-processing stage by clustering results. The developed framework consists of data preparation with aerial inspections, model training, validation of the model, and testing for accuracy. The developed model was created from 800 aerial images and was used to detect four different types of construction stormwater practices at 100% Mean Average Precision (mAP) with minimal false positive detections. Results indicate that object detection could be implemented on UAS-acquired imagery as a novel approach to construction stormwater inspections and provide accurate results for site plan comparisons by rapidly detecting the quantity and location of field-installed stormwater practices.
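The detection accuracy reported above is typically computed by matching predicted bounding boxes to ground-truth boxes at an IoU threshold. A minimal greedy-matching sketch follows; the boxes are invented, and the paper's actual mAP evaluation and clustering post-processing are more involved than this.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(dets, gts, thr=0.5):
    """Greedily match detections to ground-truth boxes at an IoU threshold."""
    matched, tp = set(), 0
    for d in dets:
        for i, g in enumerate(gts):
            if i not in matched and box_iou(d, g) >= thr:
                matched.add(i)
                tp += 1
                break
    return tp / len(dets), tp / len(gts)

# Hypothetical: two installed practices, one detected well, one spurious detection
gts  = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(1, 1, 10, 10), (50, 50, 60, 60)]
precision, recall = precision_recall(dets, gts)
```

Averaging precision over recall levels and detection-confidence thresholds (per class) is what yields the mAP figure the study reports.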


2021 ◽  
Author(s):  
Yao Zhao ◽  
Dong Joo Rhee ◽  
Carlos Cardenas ◽  
Laurence E. Court ◽  
Jinzhong Yang

Author(s):  
L. Hang ◽  
G. Y. Cai

Abstract. The detection and reconstruction of buildings have attracted increasing attention in the remote sensing and computer vision communities. Light detection and ranging (LiDAR) has proved to be a good source for extracting building roofs, but LiDAR data are often unavailable. In this paper, we extract building roofs from very high resolution (VHR) images of the Chinese satellite Gaofen-2 using a convolutional neural network (CNN). CNNs have been shown to recognize detailed features that object-based classification approaches may fail to capture. Several major steps are involved in this study: generation of the training dataset, model training, image segmentation, and building roof recognition. First, urban objects such as trees, roads, squares and buildings were classified with a random forest algorithm in an object-oriented classification approach, and the building regions were separated from the other classes with the aid of visual interpretation and correction. Next, different types of building roofs, categorized mainly by color and size, were used to train the CNN. Finally, industrial and residential building roofs were recognized and validated separately. The assessment results prove the effectiveness of the proposed method, with quality rates of approximately 91% and 88% in detecting industrial and residential building roofs, respectively, which indicates that the CNN approach is promising for detecting buildings with high accuracy.


2021 ◽  
Vol 11 ◽  
Author(s):  
He Sui ◽  
Ruhang Ma ◽  
Lin Liu ◽  
Yaozong Gao ◽  
Wenhai Zhang ◽  
...  

Objective: To develop a deep learning-based model using esophageal thickness to detect esophageal cancer from unenhanced chest CT images. Methods: We retrospectively identified 141 patients with esophageal cancer and 273 patients negative for esophageal cancer (at the time of imaging) for model training. Unenhanced chest CT images were collected and used to build a convolutional neural network (CNN) model for diagnosing esophageal cancer. The CNN is a VB-Net segmentation network that segments the esophagus, automatically quantifies the thickness of the esophageal wall, and detects the positions of esophageal lesions. To validate this model, a second dataset of 52 false negatives and 48 normal cases was collected. The average performance of three radiologists and that of the same radiologists aided by the model were compared. Results: The sensitivity and specificity of the esophageal cancer detection model were 88.8% and 90.9%, respectively, for the validation dataset. On the 52 missed esophageal cancer cases and the 48 normal cases, the sensitivity, specificity, and accuracy of the deep learning model were 69%, 61%, and 65%, respectively. The independent results of the radiologists had sensitivities of 25%, 31%, and 27%; specificities of 78%, 75%, and 75%; and accuracies of 53%, 54%, and 53%. With the aid of the model, the radiologists improved to sensitivities of 77%, 81%, and 75%; specificities of 75%, 74%, and 74%; and accuracies of 76%, 77%, and 75%, respectively. Conclusions: A deep learning-based model can effectively detect esophageal cancer in unenhanced chest CT scans and improve the incidental detection of esophageal cancer.
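The sensitivity, specificity, and accuracy figures quoted above all derive from a reader's confusion matrix over the 52 cancer and 48 normal cases. A minimal sketch of that arithmetic follows; the counts below are hypothetical, chosen only to show how percentages of this kind arise.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sens = tp / (tp + fn)            # fraction of cancers flagged
    spec = tn / (tn + fp)            # fraction of normals cleared
    acc  = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# Hypothetical reader: flags 40 of 52 cancers and clears 36 of 48 normals
sens, spec, acc = diagnostic_metrics(tp=40, fn=12, tn=36, fp=12)
```

Note that on a dataset built entirely from previously missed cancers, even a modest sensitivity gain over the unaided readers (25-31% here) represents a substantial clinical improvement.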


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0243253
Author(s):  
Qiang Lin ◽  
Mingyang Luo ◽  
Ruiting Gao ◽  
Tongtong Li ◽  
Zhengxing Man ◽  
...  

SPECT imaging has been identified as an effective medical modality for diagnosis, treatment, evaluation and prevention of a range of serious diseases and medical conditions. Bone SPECT scans have the potential to provide more accurate assessment of disease stage and severity. Segmenting hotspots in bone SPECT images plays a crucial role in calculating metrics like tumor uptake and metabolic tumor burden. Deep learning techniques, especially convolutional neural networks, have been widely exploited for reliable segmentation of hotspots or lesions, organs and tissues in traditional structural medical images (i.e., CT and MRI) due to their ability to automatically learn features from images in an optimal way. In order to segment hotspots in bone SPECT images for automatic assessment of metastasis, in this work, we develop several deep learning based segmentation models. Specifically, each original whole-body bone SPECT image is processed to extract the thorax area, followed by image mirror, translation and rotation operations, which augment the original dataset. We then build segmentation models based on two widely used deep networks, U-Net and Mask R-CNN, by fine-tuning their structures. Experimental evaluation conducted on a group of real-world bone SPECT images reveals that the built segmentation models are capable of identifying and segmenting hotspots of metastasis in bone SPECT images, achieving values of 0.9920, 0.7721, 0.6788 and 0.6103 for PA (accuracy), CPA (precision), Rec (recall) and IoU, respectively. Finally, we conclude that deep learning technology has great potential to identify and segment hotspots in bone SPECT images.
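The mirror and rotation augmentations named above can be sketched directly on an image stored as a list of rows; this is a minimal illustration of the idea, not the paper's implementation, and translation (a shifted crop) is omitted for brevity.

```python
def mirror(img):
    """Horizontal mirror of an image stored as a list of rows."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img):
    """Grow a dataset with mirrored and rotated copies of one image."""
    return [img, mirror(img), rotate90(img), rotate90(rotate90(img))]

img = [[1, 0],
       [0, 2]]
dataset = augment(img)   # four variants from a single annotated image
```

Because each transformed image reuses the (transformed) annotation, these operations multiply a small labelled dataset at no extra annotation cost.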


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Dennis Segebarth ◽  
Matthias Griebel ◽  
Nikolai Stein ◽  
Cora R von Collenberg ◽  
Corinna Martin ◽  
...  

Bioimage analysis of fluorescent labels is widely used in the life sciences. Recent advances in deep learning (DL) allow automating time-consuming manual image analysis processes based on annotated training data. However, manual annotation of fluorescent features with a low signal-to-noise ratio is somewhat subjective. Training DL models on subjective annotations may be unstable or yield biased models. In turn, these models may be unable to reliably detect biological effects. An analysis pipeline integrating data annotation, ground truth estimation, and model training can mitigate this risk. To evaluate this integrated process, we compared different DL-based analysis approaches. With data from two model organisms (mice, zebrafish) and five laboratories, we show that ground truth estimation from multiple human annotators helps to establish objectivity in fluorescent feature annotations. Furthermore, ensembles of multiple models trained on the estimated ground truth establish reliability and validity. Our research provides guidelines for reproducible DL-based bioimage analyses.
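The simplest form of the ground truth estimation described above is a per-pixel majority vote across annotators' binary masks; the paper's estimation procedure may be more sophisticated, and the annotator masks below are invented for illustration.

```python
def estimate_ground_truth(annotations):
    """Per-pixel majority vote across several annotators' binary masks."""
    n = len(annotations)
    return [int(sum(pix) * 2 > n) for pix in zip(*annotations)]

# Three hypothetical annotators labelling the same five pixels
ann1 = [1, 1, 0, 0, 1]
ann2 = [1, 0, 0, 1, 1]
ann3 = [1, 1, 0, 0, 0]
gt = estimate_ground_truth([ann1, ann2, ann3])
```

Pixels where annotators disagree (the second, fourth, and fifth here) are exactly the low signal-to-noise features the abstract flags as subjective; pooling annotators dampens any individual's bias before a model is trained.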

