Deep learning semantic segmentation of opaque and non-opaque minerals from epoxy resin in reflected light microscopy images

2021 ◽  
Vol 170 ◽  
pp. 107007
Author(s):  
Michel Pedro Filippo ◽  
Otávio da Fonseca Martins Gomes ◽  
Gilson Alexandre Ostwald Pedro da Costa ◽  
Guilherme Lucio Abelha Mota
2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Nikita Moshkov ◽  
Botond Mathe ◽  
Attila Kertesz-Farkas ◽  
Reka Hollandi ◽  
Peter Horvath

Abstract: Recent advancements in deep learning have revolutionized the way microscopy images of cells are processed. Deep learning network architectures have a large number of parameters; thus, in order to reach high accuracy, they require a massive amount of annotated data. A common way of improving accuracy builds on artificially increasing the training set using different augmentation techniques. A less common way relies on test-time augmentation (TTA), which yields transformed versions of the image for prediction, whose results are then merged. In this paper we describe how we have incorporated the test-time augmentation prediction method into two major segmentation approaches utilized in the single-cell analysis of microscopy images. These approaches are semantic segmentation based on the U-Net model, and instance segmentation based on the Mask R-CNN model. Our findings show that even if only simple test-time augmentations (such as rotation or flipping, with proper merging methods) are applied, TTA can significantly improve prediction accuracy. We have utilized images of tissue and cell cultures from the Data Science Bowl (DSB) 2018 nuclei segmentation competition and other sources. Additionally, by boosting the highest-scoring method of the DSB with TTA, we could further improve prediction accuracy, and our method reached the best score ever achieved on the DSB.
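The merging idea behind TTA can be illustrated with a short, generic Python sketch. This is an assumption-laden illustration, not the authors' code: the `model` callable, the rotation/flip set, and pixel-wise averaging as the merging method are all stand-ins.

```python
import numpy as np

def tta_predict(model, image):
    """Sketch of test-time augmentation: predict on rotated/flipped
    versions of an image, undo each transform on the output, and
    merge. `model` is assumed to map an (H, W) image to an (H, W)
    probability map; averaging is one possible merging method."""
    preds = []
    for k in range(4):                  # 0, 90, 180, 270 degree rotations
        for flip in (False, True):      # with and without horizontal flip
            aug = np.rot90(image, k)
            if flip:
                aug = np.fliplr(aug)
            p = model(aug)
            if flip:                    # invert the transforms on the output
                p = np.fliplr(p)
            p = np.rot90(p, -k)
            preds.append(p)
    return np.mean(preds, axis=0)       # merge by pixel-wise averaging
```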


2018 ◽  
Author(s):  
Zhi Zhou ◽  
Hsien-Chi Kuo ◽  
Hanchuan Peng ◽  
Fuhui Long

Abstract: Reconstructing the three-dimensional (3D) morphology of neurons is essential to understanding brain structures and functions. Over the past decades, a number of neuron tracing tools, including manual, semi-automatic, and fully automatic approaches, have been developed to extract and analyze 3D neuronal structures. Nevertheless, most of them were built by hand-coding rules to extract and connect the structural components of a neuron, and show limited performance on complicated neuron morphologies. Recently, deep learning has outperformed many other machine learning methods in a wide range of image analysis and computer vision tasks. Here we developed a new open source toolbox, DeepNeuron, which uses deep learning networks to learn features and rules from data and trace neuron morphology in light microscopy images. DeepNeuron provides a family of modules to solve basic yet challenging problems in neuron tracing. These problems include, but are not limited to: (1) detecting neuron signal under different image conditions, (2) connecting neuronal signals into tree(s), (3) pruning and refining tree morphology, (4) quantifying the quality of morphology, and (5) classifying dendrites and axons in real time. We have tested DeepNeuron using light microscopy images, including bright-field and confocal images of human and mouse brain, on which DeepNeuron demonstrates robustness and accuracy in neuron tracing.
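Of the modules listed above, step (2), connecting detected neuronal signals into trees, can be sketched generically as a minimum spanning tree over detected signal points. This is a simplified stand-in under that assumption, not DeepNeuron's actual tracing algorithm:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

def connect_points_to_tree(points):
    """Connect detected 3D neuron-signal points into a tree using a
    minimum spanning tree over pairwise Euclidean distances.
    Returns the kept edges as (i, j) index pairs."""
    dist = cdist(points, points)          # dense pairwise distance matrix
    mst = minimum_spanning_tree(dist)     # sparse matrix of kept edges
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))

# Example: five detected points along a bending branch
pts = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 1, 1], [4, 2, 1]], float)
print(connect_points_to_tree(pts))        # four edges forming one tree
```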


2019 ◽  
Author(s):  
Nikita Moshkov ◽  
Botond Mathe ◽  
Attila Kertesz-Farkas ◽  
Reka Hollandi ◽  
Peter Horvath

Abstract: Recent advancements in deep learning have revolutionized the way microscopy images of cells are processed. Deep learning network architectures have a large number of parameters; thus, in order to reach high accuracy, they require a massive amount of annotated data. A common way of improving accuracy builds on artificially increasing the training set using different augmentation techniques. A less common way relies on test-time augmentation (TTA), which yields transformed versions of the image for prediction, whose results are then merged. In this paper we describe incorporating the test-time augmentation prediction method into two major segmentation approaches used in the single-cell analysis of microscopy images, namely semantic segmentation using U-Net and instance segmentation using Mask R-CNN models. Our findings show that even using only simple test-time augmentations, such as rotation or flipping with proper merging methods, results in a significant improvement of prediction accuracy. We utilized images of tissue and cell cultures from the Data Science Bowl (DSB) 2018 nuclei segmentation competition and other sources. Additionally, by boosting the highest-scoring method of the DSB with TTA, we could further improve prediction accuracy, and our method reached the best score ever achieved on the DSB.


2021 ◽  
Author(s):  
Théo Aspert ◽  
Didier Hentsch ◽  
Gilles Charvin

Abstract: Automating the extraction of meaningful temporal information from sequences of microscopy images represents a major challenge in characterizing dynamical biological processes. Here, we have developed DetecDiv, a microfluidic-based image acquisition platform combined with deep learning-based software for high-throughput single-cell division tracking. DetecDiv can reconstruct cellular replicative lifespans with outstanding accuracy and provides comprehensive temporal cellular metrics using time-series classification and image semantic segmentation.
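As a rough illustration of how a replicative lifespan could be derived from a per-frame classifier, the sketch below counts division events as runs of a hypothetical 'division' label in a classification trace. The labels and the counting rule are assumptions for illustration, not DetecDiv's actual pipeline:

```python
def count_divisions(frame_labels, division_label="division"):
    """Count division events in a per-frame classification trace,
    treating each run of consecutive 'division' frames as one event
    (hypothetical labels, illustrative logic only)."""
    events = 0
    prev = None
    for lab in frame_labels:
        if lab == division_label and prev != division_label:
            events += 1
        prev = lab
    return events

# A cell observed dividing twice before a terminal 'death' call:
trace = ["budding", "division", "budding", "division", "division", "death"]
print(count_divisions(trace))  # -> 2
```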


Author(s):  
Alan Boyde ◽  
Milan Hadravský ◽  
Mojmír Petran ◽  
Timothy F. Watson ◽  
Sheila J. Jones ◽  
...  

The principles of tandem scanning reflected light microscopy and the design of recent instruments are described in full elsewhere, and only briefly here. The illuminating light is intercepted by a rotating aperture disc which lies in the intermediate focal plane of a standard LM objective. This device provides an array of separate scanning beams which light up corresponding patches in the plane of focus more intensely than out-of-focus layers. Reflected light from these patches is imaged onto a matching array of apertures on the opposite side of the same aperture disc, which scans in the focal plane of the eyepiece. An arrangement of mirrors converts the central symmetry of the disc into congruency, so that the array of apertures which chops the illuminating beam is identical to the array on the observation side. Thus both illumination and "detection" are scanned in tandem, giving rise to the name Tandem Scanning Microscope (TSM). The apertures are arranged on Archimedean spirals: each opposed pair scans a single line in the image.
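As a toy model of the aperture layout, the sketch below samples aperture positions along a single Archimedean spiral (the real disc uses multiple interleaved spirals, and all parameter values here are illustrative, not taken from the TSM design):

```python
import numpy as np

def archimedean_apertures(n_apertures=100, pitch=0.05, turns=10):
    """Sample aperture centre positions along an Archimedean spiral
    r = pitch * theta (arbitrary units)."""
    theta = np.linspace(0.0, 2.0 * np.pi * turns, n_apertures)
    r = pitch * theta
    x, y = r * np.cos(theta), r * np.sin(theta)
    # The diametrically opposed aperture of each (x, y) sits at (-x, -y):
    # the central symmetry that the mirror arrangement converts into
    # congruency between the illumination and detection arrays.
    return np.stack([x, y], axis=1)
```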


Impact ◽  
2020 ◽  
Vol 2020 (2) ◽  
pp. 9-11
Author(s):  
Tomohiro Fukuda

Mixed reality (MR) is rapidly becoming a vital tool, not just in gaming, but also in education, medicine, construction and environmental management. The term refers to systems in which computer-generated content is superimposed over objects in a real-world environment across one or more sensory modalities. Although most of us have heard of the use of MR in computer games, it also has applications in military and aviation training, as well as tourism, healthcare and more. In addition, it has potential for use in architecture and design, where buildings can be superimposed on existing locations to render plans as 3D visualizations. However, one major challenge that remains in MR development is the issue of real-time occlusion: hiding 3D virtual objects behind real objects. Dr Tomohiro Fukuda, who is based at the Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering at Osaka University in Japan, is an expert in this field. Researchers led by Dr Fukuda are tackling the issue of occlusion in MR. They are currently developing an MR system that achieves real-time occlusion for outdoor landscape design simulation by harnessing deep learning-based semantic segmentation. This methodology can be used to automatically estimate the visual environment before and after construction projects.
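A minimal sketch of mask-based occlusion compositing, under the assumption that a semantic segmentation network labels real foreground occluders (e.g., trees or people) per pixel; this is a generic illustration, not the group's system:

```python
import numpy as np

def composite_with_occlusion(real_rgb, rendered_rgb, seg_mask, occluder_classes):
    """Restore real-scene pixels wherever segmentation labels a real
    occluder, so virtual buildings appear behind foreground objects.

    real_rgb:         (H, W, 3) camera frame
    rendered_rgb:     (H, W, 3) frame with the virtual model drawn over it
    seg_mask:         (H, W) integer class labels from a segmentation net
    occluder_classes: class ids treated as foreground occluders
    """
    occluded = np.isin(seg_mask, list(occluder_classes))
    out = rendered_rgb.copy()
    out[occluded] = real_rgb[occluded]    # real object stays in front
    return out
```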


IEEE Access ◽  
2020 ◽  
pp. 1-1
Author(s):  
Jeremy M. Webb ◽  
Duane D. Meixner ◽  
Shaheeda A. Adusei ◽  
Eric C. Polley ◽  
Mostafa Fatemi ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4442
Author(s):  
Zijie Niu ◽  
Juntao Deng ◽  
Xu Zhang ◽  
Jun Zhang ◽  
Shijia Pan ◽  
...  

It is important to obtain accurate information about kiwifruit vines in order to monitor their physiological states and undertake precise orchard operations. However, because vines are small, cling to trellises, and have branches lying on the ground, acquiring accurate data for kiwifruit vines poses numerous challenges. In this paper, a kiwifruit canopy distribution prediction model is proposed on the basis of low-altitude unmanned aerial vehicle (UAV) images and deep learning techniques. First, the locations of the kiwifruit plants and the vine distribution are extracted from high-precision images collected by UAV. Canopy gradient distribution maps with different noise reduction and distribution effects are generated by modifying the threshold and sampling size using the resampling normalization method. The results showed that the accuracies of vine segmentation using PSPnet, support vector machine, and random forest classification were 71.2%, 85.8%, and 75.26%, respectively. However, the segmentation image obtained using deep semantic segmentation had a higher signal-to-noise ratio and was closer to the real situation. The average intersection over union of the deep semantic segmentation was at least 80% in the distribution maps, whereas in traditional machine learning it was between 20% and 60%. This indicates that the proposed model can quickly extract the vine distribution and plant positions, and is thus able to perform dynamic monitoring of orchards to provide real-time operation guidance.
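The reported metric, intersection over union, can be computed from two label maps as below; this generic sketch averages per-class IoU and is not tied to the authors' evaluation code:

```python
import numpy as np

def mean_iou(pred, truth, n_classes):
    """Mean intersection over union between predicted and ground-truth
    label maps, skipping classes absent from both."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```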


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3813
Author(s):  
Athanasios Anagnostis ◽  
Aristotelis C. Tagarakis ◽  
Dimitrios Kateris ◽  
Vasileios Moysiadis ◽  
Claus Grøn Sørensen ◽  
...  

This study aimed to propose an approach for orchard tree segmentation using aerial images based on a deep learning convolutional neural network variant, namely the U-Net network. The purpose was the automated detection and localization of the canopies of orchard trees under various conditions (i.e., different seasons, different tree ages, different levels of weed coverage). The implemented dataset was composed of images from three different walnut orchards. The achieved variability of the dataset resulted in images that fell under seven different use cases. The best-trained model achieved 91%, 90%, and 87% accuracy for training, validation, and testing, respectively. The trained model was also tested on never-before-seen orthomosaic images of orchards using two methods (oversampling and undersampling) in order to tackle issues with transparent pixels at the out-of-field image boundaries. Even though the training dataset did not contain orthomosaic images, the model achieved performance levels that reached up to 99%, demonstrating the robustness of the proposed approach.
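One way to handle transparent out-of-field boundary pixels when tiling an orthomosaic for inference is sketched below; the tile size, stride, and transparency threshold are hypothetical values, and this is not the authors' oversampling/undersampling procedure:

```python
import numpy as np

def tile_orthomosaic(rgba, tile=256, stride=128, max_transparent=0.2):
    """Slide a window over an RGBA orthomosaic and keep only tiles
    whose fraction of fully transparent (alpha == 0) pixels stays
    below a threshold, then drop the alpha channel for the model."""
    h, w = rgba.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patch = rgba[y:y + tile, x:x + tile]
            if (patch[..., 3] == 0).mean() <= max_transparent:
                tiles.append(((y, x), patch[..., :3]))
    return tiles
```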

