Asbestos Detection with Fluorescence Microscopy Images and Deep Learning

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4582
Author(s):  
Changjie Cai ◽  
Tomoki Nishimura ◽  
Jooyeon Hwang ◽  
Xiao-Ming Hu ◽  
Akio Kuroda

Fluorescent probes can be used to detect various types of asbestos (serpentine and amphibole groups); however, fiber counting with our previously developed software was not accurate for samples with low fiber concentrations. Machine learning techniques for image analysis, particularly deep learning with Convolutional Neural Networks (CNNs), have been widely applied to many areas. The objectives of this study were to (1) create a laboratory database of fluorescence microscopy (FM) images spanning a wide range of asbestos concentrations (0–50 fibers/liter); and (2) determine the applicability of a state-of-the-art object detection CNN model, YOLOv4, to accurately detect asbestos. We captured fluorescence microscopy images containing asbestos and labeled the individual fibers in the images. We trained the YOLOv4 model on the labeled images using one GTX 1660 Ti Graphics Processing Unit (GPU). Our results demonstrated the exceptional capacity of the YOLOv4 model to learn the fluorescent asbestos morphologies. The mean average precision at an intersection-over-union threshold of 0.5 (mAP@0.5) was 96.1% ± 0.4%, using the National Institute for Occupational Safety and Health (NIOSH) fiber counting Method 7400 as the reference method. Compared to our previous counting software (Intec/HU), YOLOv4 achieved higher accuracy (0.997 vs. 0.979) and, in particular, much higher precision (0.898 vs. 0.418), recall (0.898 vs. 0.780), and F1 score (0.898 vs. 0.544). In addition, YOLOv4 performed much better on low fiber concentration samples (<15 fibers/liter) than Intec/HU. Therefore, the FM method coupled with YOLOv4 is remarkably effective at detecting asbestos fibers and differentiating them from other, non-asbestos particles.
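
As a concrete illustration of how such detection metrics are obtained (a minimal sketch of standard IoU-based matching, not the authors' evaluation code), predicted fiber boxes can be greedily matched to labeled fibers at an IoU threshold of 0.5 and the resulting counts turned into precision, recall, and F1:

    # Minimal illustrative sketch (not the authors' evaluation code): compute
    # precision, recall, and F1 by greedily matching predicted fiber boxes to
    # labeled fibers at an intersection-over-union (IoU) threshold of 0.5.

    def iou(a, b):
        """IoU of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def detection_metrics(predicted, labeled, thresh=0.5):
        """One-to-one greedy matching; returns (precision, recall, f1)."""
        unmatched = list(labeled)
        tp = 0
        for p in predicted:
            best = max(unmatched, key=lambda g: iou(p, g), default=None)
            if best is not None and iou(p, best) >= thresh:
                tp += 1
                unmatched.remove(best)
        fp, fn = len(predicted) - tp, len(unmatched)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1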

2021 ◽  
Author(s):  
Dmitry Ershov ◽  
Minh-Son Phan ◽  
Joanna W. Pylvänäinen ◽  
Stéphane U Rigaud ◽  
Laure Le Blanc ◽  
...  

TrackMate is an automated tracking software package used to analyze bioimages, distributed as a Fiji plugin. Here we introduce a new version of TrackMate, rewritten to improve performance and usability and integrating several popular machine learning and deep learning algorithms to improve versatility. We illustrate how these new components can be used to efficiently track objects in brightfield and fluorescence microscopy images across a wide range of bio-imaging experiments.
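
The linking step at the heart of such trackers can be illustrated with a toy sketch: greedy nearest-neighbor assignment with a maximum linking distance. This is conceptual only; TrackMate itself uses more sophisticated LAP-based linking.

    # Generic nearest-neighbor frame-to-frame linking sketch; illustrates the
    # idea of object tracking, not TrackMate's actual implementation.
    import math

    def link_frames(prev_points, next_points, max_dist=15.0):
        """Greedily link (x, y) detections in one frame to the next frame.

        Returns (i, j) index pairs; unlinked detections start or end a track.
        """
        links, taken = [], set()
        for i, p in enumerate(prev_points):
            best_j, best_d = None, max_dist
            for j, q in enumerate(next_points):
                if j in taken:
                    continue
                d = math.dist(p, q)
                if d < best_d:
                    best_j, best_d = j, d
            if best_j is not None:
                links.append((i, best_j))
                taken.add(best_j)
        return links

    # Example: two frames of spot centroids; the distant spot stays unlinked.
    print(link_frames([(10, 10), (50, 40)], [(12, 11), (80, 80)]))  # [(0, 0)]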


Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. V333-V350 ◽  
Author(s):  
Siwei Yu ◽  
Jianwei Ma ◽  
Wenlong Wang

Compared with traditional seismic noise attenuation algorithms, which depend on signal models and their corresponding prior assumptions, a deep neural network learns to remove noise from a large training set in which the inputs are the raw data sets and the corresponding outputs are the desired clean data. After training is complete, the deep-learning (DL) method achieves adaptive denoising with no requirement for (1) accurate modeling of the signal and noise or (2) optimal parameter tuning. We call this intelligent denoising. We used a convolutional neural network (CNN) as the basic tool for DL. For random and linear noise attenuation, the training set is generated with artificially added noise. For multiple attenuation, the training set is generated with the acoustic wave equation. Stochastic gradient descent is used to find the optimal parameters of the CNN. The runtime of DL on a graphics processing unit for denoising is of the same order as that of the f-x deconvolution method. Synthetic and field results indicate the potential applications of DL in automatic attenuation of random noise (with unknown variance), linear noise, and multiples.
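
A minimal sketch of this training setup, under assumed patch sizes, noise level, and network depth (the authors' actual CNN and data differ), is a small residual CNN trained with stochastic gradient descent to map noisy patches to clean ones:

    # Illustrative denoising-CNN training loop (not the authors' exact network):
    # a small residual CNN is trained with SGD on patches with added noise.
    import torch
    import torch.nn as nn

    class DenoiseCNN(nn.Module):
        def __init__(self, channels=32, depth=5):
            super().__init__()
            layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
            layers += [nn.Conv2d(channels, 1, 3, padding=1)]
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            return x - self.net(x)  # residual: the network predicts the noise

    model = DenoiseCNN()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.MSELoss()

    # Training set: clean patches with artificially added random noise.
    clean = torch.randn(16, 1, 64, 64)      # stand-in for clean seismic patches
    noisy = clean + 0.1 * torch.randn_like(clean)

    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(noisy), clean)
        loss.backward()
        opt.step()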


2019 ◽  
Vol 16 (12) ◽  
pp. 1323-1331 ◽  
Author(s):  
Yichen Wu ◽  
Yair Rivenson ◽  
Hongda Wang ◽  
Yilin Luo ◽  
Eyal Ben-David ◽  
...  

2018 ◽  
Vol 10 (12) ◽  
pp. 1886 ◽  
Author(s):  
Xiangyu Liu ◽  
Yichen Tian ◽  
Chao Yuan ◽  
Feifei Zhang ◽  
Guang Yang

Opium poppies are a major source of narcotic drugs, which are not only harmful to physical and mental health but also threaten the economy and society. Monitoring poppy cultivation in key regions through remote sensing is therefore a crucial task; the location coordinates of poppy parcels are particularly important information for their eradication by local governments. We propose a new methodology based on deep-learning target detection to identify the locations of poppy parcels and map their spatial distribution. We first built six training datasets with different band combinations and sliding-window sizes from two ZiYuan3 (ZY3) remote sensing images and trained the single shot multibox detector (SSD) model on each. We then chose the best model and tested its performance on 225 km² of verification imagery from the Lao People's Democratic Republic (Lao PDR), where it achieved a precision of 95% at a recall of 85%. The throughput of our method is 4.5 km²/s on a 1080 Ti Graphics Processing Unit (GPU). This study is the first attempt to monitor opium poppies with a deep-learning method and achieve a high recognition rate. Our method does not require manual feature extraction and provides an alternative way to rapidly obtain the exact location coordinates of opium poppy cultivation patches.
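
The tiling step that makes detection over whole scenes feasible can be sketched as follows; the window size, stride, and detect() callback here are illustrative placeholders, not the authors' settings:

    # Illustrative tiling sketch for running a detector over a large scene;
    # window size, stride, and detect() are placeholder assumptions.
    import numpy as np

    def sliding_windows(image, size=512, stride=256):
        """Yield (x, y, tile) crops covering a large image with overlap."""
        h, w = image.shape[:2]
        for y in range(0, max(h - size, 0) + 1, stride):
            for x in range(0, max(w - size, 0) + 1, stride):
                yield x, y, image[y:y + size, x:x + size]

    def detect_scene(image, detect):
        """Run a tile-level detector and map boxes to scene coordinates."""
        boxes = []
        for x, y, tile in sliding_windows(image):
            for (x1, y1, x2, y2, score) in detect(tile):
                boxes.append((x1 + x, y1 + y, x2 + x, y2 + y, score))
        return boxes  # in practice, overlapping boxes are merged with NMS

    # Example with a dummy detector that finds nothing:
    scene = np.zeros((2048, 2048, 3), dtype=np.uint8)
    print(len(detect_scene(scene, detect=lambda tile: [])))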


2017 ◽  
Author(s):  
Richard Wilton ◽  
Xin Li ◽  
Andrew P. Feinberg ◽  
Alexander S. Szalay

The alignment of bisulfite-treated DNA sequences (BS-seq reads) to a large genome involves a significant computational burden beyond that required to align non-bisulfite-treated reads. In the analysis of BS-seq data, this can present an important performance bottleneck that can potentially be addressed by appropriate software-engineering and algorithmic improvements. One strategy is to integrate this additional programming logic into the read-alignment implementation in a way that the software becomes amenable to optimizations that lead to both higher speed and greater sensitivity than can be achieved without this integration.

We have evaluated this approach using Arioc, a short-read aligner that uses GPU (general-purpose graphics processing unit) hardware to accelerate computationally expensive programming logic. We integrated the BS-seq computational logic into both GPU and CPU code throughout the Arioc implementation. We then carried out a read-by-read comparison of Arioc's reported alignments with the alignments reported by the most widely used BS-seq read aligners. With simulated reads, Arioc's accuracy is equal to or better than the other read aligners we evaluated. With human sequencing reads, Arioc's throughput is at least 10 times faster than existing BS-seq aligners across a wide range of sensitivity settings.

The Arioc software is available at https://github.com/RWilton/Arioc. It is released under a BSD open-source license.
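
To make the extra bisulfite logic concrete: a common strategy among BS-seq aligners (a generic sketch of the three-letter approach, not necessarily Arioc's internal implementation) is to align C-to-T-converted reads against a similarly converted reference, since bisulfite converts unmethylated cytosine to uracil, which is sequenced as thymine:

    # Generic three-letter bisulfite alignment idea (not Arioc's code):
    # bisulfite converts unmethylated C to U (sequenced as T), so reads and
    # the reference are collapsed to a reduced alphabet before alignment.

    def ct_convert(seq):
        """Collapse C to T, as for the forward (Watson) strand."""
        return seq.upper().replace("C", "T")

    def ga_convert(seq):
        """Collapse G to A, as for the reverse (Crick) strand."""
        return seq.upper().replace("G", "A")

    reference = "ACGTCCGTAGC"
    read = "ATGTTCGTAGT"  # some Cs read out as T after bisulfite treatment

    # Seed/align in the converted space; methylation is called afterwards by
    # comparing the original read bases against reference C positions.
    print(ct_convert(read) == ct_convert(reference))  # True: the read aligns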


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255397
Author(s):  
Moritz Knolle ◽  
Georgios Kaissis ◽  
Friederike Jungmann ◽  
Sebastian Ziegelmayer ◽  
Daniel Sasse ◽  
...  

The success of deep learning in recent years has arguably been driven by the availability of large datasets for training powerful predictive algorithms. In medical applications, however, the sensitive nature of the data limits the collection and exchange of large-scale datasets. Privacy-preserving and collaborative learning systems can enable the successful application of machine learning in medicine. However, collaborative protocols such as federated learning require the frequent transfer of parameter updates over a network. To enable the deployment of such protocols to a wide range of systems with varying computational performance, efficient deep learning architectures for resource-constrained environments are required. Here we present MoNet, a small, highly optimized neural-network-based segmentation algorithm leveraging efficient multi-scale image features. MoNet is a shallow, U-Net-like architecture based on repeated, dilated convolutions with decreasing dilation rates. We apply and test our architecture on the challenging clinical tasks of pancreatic segmentation in computed tomography (CT) images and brain tumor segmentation in magnetic resonance imaging (MRI) data. We demonstrate that MoNet performs on par with the compared architectures while using significantly fewer parameters, and that it generalizes better out of sample, outperforming larger architectures on an independent validation set. We furthermore confirm the suitability of our architecture for federated learning applications by demonstrating a substantial reduction in the serialized model storage requirement, a surrogate for network data transfer. Finally, we evaluate MoNet's inference latency on the central processing unit (CPU) to determine its utility in environments without access to graphics processing units. Our implementation is publicly available as free and open-source software.
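
The core architectural idea, repeated dilated convolutions with decreasing dilation rates, can be sketched as follows; the channel counts and rates here are assumptions for illustration, not the published MoNet configuration:

    # Illustrative block of repeated dilated convolutions with decreasing
    # dilation rates; channels and rates are assumed, not MoNet's published
    # configuration.
    import torch
    import torch.nn as nn

    class DilatedBlock(nn.Module):
        def __init__(self, channels=32, dilations=(8, 4, 2, 1)):
            super().__init__()
            layers = []
            for d in dilations:  # decreasing dilation: large context first
                layers += [
                    nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                    nn.BatchNorm2d(channels),
                    nn.ReLU(),
                ]
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            return self.net(x)

    x = torch.randn(1, 32, 128, 128)
    print(DilatedBlock()(x).shape)  # spatial size preserved: (1, 32, 128, 128)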

