Deep learning-based tumor segmentation on digital images of histopathology slides for microdosimetry applications

Author(s):  
Luca L. Weishaupt ◽  
Jose Torres ◽  
Sophie Camilleri-Broët ◽  
Roni F. Rayes ◽  
Jonathan D. Spicer ◽  
...  

Abstract The goal of this study was (i) to use artificial intelligence to automate the traditionally labor-intensive process of manual segmentation of tumor regions in pathology slides performed by a pathologist and (ii) to validate the use of a deep learning architecture. Automation will reduce the human error involved in the manual process, increase efficiency, and result in more accurate and reproducible segmentation. This advancement will alleviate the bottleneck that a lack of pathologist time creates in clinical and research workflows. Our application is patient-specific microdosimetry and radiobiological modeling, which build on the contoured pathology slides. A deep neural network named UNet was used to segment tumor regions in pathology core biopsies of lung tissue with adenocarcinoma stained using hematoxylin and eosin. A pathologist manually contoured the tumor regions in 56 images with binary masks for training. To overcome memory limitations, overlapping and non-overlapping patch extraction with various patch sizes, as well as image downsampling, were investigated individually. Data augmentation was used to reduce overfitting and artificially create more data for training. Using this deep learning approach, the UNet achieved an accuracy of 0.91±0.06, specificity of 0.90±0.08, sensitivity of 0.92±0.07, and precision of 0.8±0.1. The F1/DICE score was 0.85±0.07, with a segmentation time of 3.24±0.03 seconds per image, a 370±3-fold efficiency gain over manual segmentation, which took 20 minutes per image on average. In some cases, the neural network correctly delineated the tumor's stroma from its epithelial component in regions that the pathologist had classified entirely as tumor. The UNet architecture can segment images with a level of efficiency and accuracy that makes it suitable for tumor segmentation of histopathological images in fields such as radiotherapy dosimetry, specifically in the subfield of microdosimetry.
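The patch-extraction strategy described above can be illustrated with a minimal numpy sketch; the image size, patch size, and stride values are illustrative assumptions, not parameters reported in the paper:

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Extract square patches from a 2D image.

    stride == patch_size gives non-overlapping patches;
    stride < patch_size gives overlapping patches.
    """
    patches = []
    h, w = image.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# A 512x512 slide region: non-overlapping 128 px patches -> 4*4 = 16 patches
image = np.random.rand(512, 512)
print(extract_patches(image, 128, 128).shape)  # (16, 128, 128)
# 50% overlap (stride 64) -> 7*7 = 49 patches
print(extract_patches(image, 128, 64).shape)   # (49, 128, 128)
```

With the stride equal to the patch size, the patches tile the image without overlap; a smaller stride yields overlapping patches, trading memory and computation for denser coverage of the slide.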

Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 4050 ◽  
Author(s):  
Vahab Khoshdel ◽  
Ahmed Ashraf ◽  
Joe LoVetri

We present a deep learning method used in conjunction with dual-modal microwave-ultrasound imaging to produce tomographic reconstructions of the complex-valued permittivity of numerical breast phantoms. We also assess tumor segmentation performance using the reconstructed permittivity as a feature. The contrast source inversion (CSI) technique is used to create the complex-permittivity images of the breast, with ultrasound-derived tissue regions utilized as prior information. However, imaging artifacts make the detection of tumors difficult. To overcome this issue, we train a convolutional neural network (CNN) that takes the dual-modal CSI reconstruction as input and attempts to produce the true image of the complex tissue permittivity. The neural network consists of successive convolutional and downsampling layers, followed by successive deconvolutional and upsampling layers, based on the U-Net architecture. To train the neural network, the input-output pairs consist of CSI’s dual-modal reconstructions along with the true numerical phantom images from which the microwave scattered field was synthetically generated. The reconstructed permittivity images produced by the CNN show that the network is not only able to remove the artifacts typical of CSI reconstructions but can also improve the detectability of tumors. The performance of the CNN is assessed using four-fold cross-validation on our dataset, showing improvement over CSI both in terms of reconstruction error and tumor segmentation performance.
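The encoder-decoder symmetry of the U-Net-style network described above can be sketched with simple pooling and upsampling stand-ins for the learned layers (the convolutions and skip connections are omitted for brevity; sizes are illustrative assumptions):

```python
import numpy as np

def downsample(x):
    """2x2 average pooling -- a stand-in for the downsampling half of the encoder."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """2x nearest-neighbour upsampling -- the decoder mirrors the encoder."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

# A symmetric encoder-decoder returns to the input resolution, so the
# network can map a CSI reconstruction to a same-sized cleaned-up image.
x = np.random.rand(64, 64)
encoded = downsample(downsample(x))    # (16, 16) bottleneck
decoded = upsample(upsample(encoded))  # back to (64, 64)
print(encoded.shape, decoded.shape)  # (16, 16) (64, 64)
```

The matching input and output resolution is what allows the network to be trained directly on (CSI reconstruction, true phantom) image pairs.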


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 328
Author(s):  
Chih-Hsiung Shen ◽  
Wei-Lun Chen ◽  
Jung-Jie Wu

Oxyhemoglobin saturation by pulse oximetry (SpO2) has always played an important role in clinical diagnosis. Because traditional SpO2 measurement carries a certain error due to the limited number of wavelengths and the algorithm used, and given the growing application of machine learning to spectral data, we propose using 12-wavelength spectral absorption measurements to improve the accuracy of SpO2 measurement. To investigate multiple spectral regions for deep learning-based SpO2 measurement, three datasets for training and verification were built, constructed over the spectra of the first region, the second region, and the full region and their sub-regions, respectively. For each region, a thorough investigation of hyperparameters was carried out during model optimization. Additionally, data augmentation was performed to expand the dataset by randomly adding noise, increasing the diversity of the data and improving the generalization of the neural network. After that, the established dataset was input to a one-dimensional convolutional neural network (1D-CNN) to obtain a measurement model of SpO2. To enhance model accuracy, GridSearchCV and Bayesian optimization were applied to optimize the hyperparameters. The optimal accuracies of the proposed model optimized by GridSearchCV and by Bayesian optimization are 89.3% and 99.4%, respectively, when trained with the dataset at the spectral region of six wavelengths: 650, 680, 730, 760, 810, and 860 nm. The total relative error of the best model, optimized by Bayesian optimization, is only 0.46%. Although spectral measurements with more features can improve the resolving ability of the neural network, the results reveal that features beyond these six shorter wavelengths are redundant for training. This analysis shows that it is very important to construct an effective 1D-CNN model for spectral measurement using the appropriate spectral ranges and number of wavelengths. It shows that our proposed 1D-CNN model offers a new and feasible approach to measuring SpO2 from multi-wavelength spectra.
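The exhaustive strategy behind GridSearchCV can be sketched as follows; the toy objective standing in for the 1D-CNN's validation accuracy, and the grid values, are our own illustrative assumptions:

```python
import itertools

def grid_search(objective, grid):
    """Exhaustively evaluate every combination in a hyperparameter grid
    (the strategy GridSearchCV implements) and return the best one."""
    best_params, best_score = None, float("-inf")
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical validation-accuracy surface peaking at lr=0.01, filters=32
def toy_accuracy(p):
    return 1.0 - abs(p["lr"] - 0.01) - 0.01 * abs(p["filters"] - 32)

grid = {"lr": [0.001, 0.01, 0.1], "filters": [16, 32, 64]}
print(grid_search(toy_accuracy, grid))  # ({'lr': 0.01, 'filters': 32}, 1.0)
```

Bayesian optimization replaces this exhaustive sweep with a surrogate model that proposes promising hyperparameters sequentially, which is why it can reach a better optimum with far fewer evaluations.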


2021 ◽  
Vol 2021 (3) ◽  
Author(s):  
Yuki Fujimoto ◽  
Kenji Fukushima ◽  
Koichi Murase

Abstract We discuss deep learning inference for the neutron star equation of state (EoS) using real observational data for the mass and the radius. We make a quantitative comparison between conventional polynomial regression and the neural network approach for the EoS parametrization. For our deep learning method to incorporate uncertainties in observation, we augment the training data with noise fluctuations corresponding to the observational uncertainties. The deduced EoSs can accommodate a weak first-order phase transition, and we make a histogram of likely first-order regions. We also find that our observational data augmentation has the byproduct of taming overfitting. To check the performance improvement from data augmentation, we set up a toy model as the simplest inference problem, recovering a double-peaked function, and monitor the validation loss. We conclude that data augmentation can be a useful technique for evading overfitting without tuning the neural network architecture, e.g., by inserting dropout.
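The observational-noise augmentation described above amounts to replicating each training sample with fluctuations scaled by the per-feature uncertainty; the shapes, sigma values, and number of copies below are illustrative assumptions:

```python
import numpy as np

def augment_with_noise(data, sigmas, copies=10, rng=None):
    """Replicate each training sample `copies` times with Gaussian
    fluctuations whose per-feature scale `sigmas` mirrors the
    observational uncertainties."""
    rng = rng or np.random.default_rng(0)
    replicas = [data + rng.normal(0.0, sigmas, size=data.shape)
                for _ in range(copies)]
    return np.concatenate(replicas, axis=0)

# 50 mass-radius "observations" with 2 features each, 10 noisy copies -> 500 samples
obs = np.random.rand(50, 2)
aug = augment_with_noise(obs, sigmas=[0.1, 0.05])
print(aug.shape)  # (500, 2)
```

Because every replica perturbs the inputs differently, the network cannot memorize individual data points, which is the mechanism behind the overfitting-taming byproduct noted in the abstract.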


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used for recognizing some of the constituent parts of an automobile. A dataset of car engine images was created and eight car parts were marked in the images. Then, the neural network was trained to detect each part. The results show that YOLOv5s is able to successfully detect the parts in real time video streams, with high accuracy, thus being useful as an aid to train professionals learning to deal with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions demand that UAVs acquire awareness of the environments they are navigating in. This perception can be realized by training a computing machine to classify objects in the environment. One of the well-known machine training approaches is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a huge cost in time and computational resources. Collecting big input data, pre-training processes such as labeling training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a light-weight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes, with 10,000 different images in each class, were used as input data, of which 80% were for training the network and the remaining 20% were used for network validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
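Sequential gradient descent updates the weights after every sample rather than once per full pass over the data, which is what lets it handle redundant data efficiently. A minimal sketch on a linear stand-in for the network follows; the learning rate, epoch count, and data are illustrative assumptions:

```python
import numpy as np

def sequential_gd(X, y, lr=0.1, epochs=50):
    """Sequential (per-sample) gradient descent on squared error for a
    linear model: the weights are updated after every single sample, so
    redundant samples contribute many cheap updates instead of being
    averaged away in one large batch gradient."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            grad = (xi @ w - yi) * xi  # gradient of 0.5*(xi.w - yi)^2
            w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0])  # noiseless targets from true weights [2, -1]
w = sequential_gd(X, y)
print(np.round(w, 3))  # converges toward [2, -1] on this noiseless problem
```

The same update rule extends to a deep network by replacing the hand-written gradient with back-propagation through the layers.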


2020 ◽  
Vol 13 (1) ◽  
pp. 34
Author(s):  
Rong Yang ◽  
Robert Wang ◽  
Yunkai Deng ◽  
Xiaoxue Jia ◽  
Heng Zhang

The random cropping data augmentation method is widely used to train convolutional neural network (CNN)-based target detectors to detect targets in optical images (e.g., the COCO dataset). It can expand the scale of the dataset dozens of times while consuming only a small amount of computation when training the neural network detector. In addition, random cropping can also greatly enhance the spatial robustness of the model, because it makes the same target appear at different positions in the sample images. Nowadays, random cropping and random flipping have become the standard configuration for tasks with limited training data, which makes it natural to introduce them into the training of CNN-based synthetic aperture radar (SAR) image ship detectors. However, in this paper, we show that directly introducing traditional random cropping methods into the training of a CNN-based SAR image ship detector may generate a lot of noise in the gradient during back-propagation, which hurts detection performance. In order to eliminate the noise in the training gradient, a simple and effective training method based on a feature map mask is proposed. Experiments prove that the proposed method can effectively eliminate the gradient noise introduced by random cropping and significantly improve detection performance under a variety of evaluation indicators without increasing inference cost.
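A basic version of the random-cropping augmentation discussed above can be sketched as follows; the box format (x1, y1, x2, y2) and the centre-based keep rule are illustrative assumptions, and the feature-map-mask fix proposed in the paper is not shown:

```python
import numpy as np

def random_crop(image, boxes, crop_size, rng):
    """Crop a random window from the image and shift box coordinates into
    the window, discarding boxes whose centre falls outside the crop.
    This is how the same target ends up at different positions across
    training samples."""
    h, w = image.shape[:2]
    ch, cw = crop_size
    y0 = rng.integers(0, h - ch + 1)
    x0 = rng.integers(0, w - cw + 1)
    crop = image[y0:y0 + ch, x0:x0 + cw]
    kept = []
    for (x1, y1, x2, y2) in boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if x0 <= cx < x0 + cw and y0 <= cy < y0 + ch:
            kept.append((x1 - x0, y1 - y0, x2 - x0, y2 - y0))
    return crop, kept

rng = np.random.default_rng(0)
img = np.zeros((200, 300))
# With the crop as large as the image, the window is fixed at (0, 0),
# so the box survives unchanged:
crop, kept = random_crop(img, [(50, 40, 90, 80)], (200, 300), rng)
print(crop.shape, kept)  # (200, 300) [(50, 40, 90, 80)]
# A smaller window relocates (or drops) the target:
crop, kept = random_crop(img, [(50, 40, 90, 80)], (128, 128), rng)
```

Boxes clipped at the crop border are exactly where the partial, truncated targets arise that, per the paper, inject noise into the training gradient.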


2018 ◽  
Author(s):  
Zeinab Golgooni ◽  
Sara Mirsadeghi ◽  
Mahdieh Soleymani Baghshah ◽  
Pedram Ataee ◽  
Hossein Baharvand ◽  
...  

Abstract Aim: An early characterization of drug-induced cardiotoxicity may be possible by combining a comprehensive in vitro pro-arrhythmia assay with deep learning techniques. The goal of this study was to develop a deep learning method to automatically detect irregular beating rhythms as well as abnormal waveforms of field potentials in an in vitro cardiotoxicity assay using human pluripotent stem cell (hPSC)-derived cardiomyocytes and a multi-electrode array (MEA) system. Methods and Results: We included field potential waveforms from 380 experiments, which were obtained by applying cardioactive drugs to healthy and/or patient-specific induced pluripotent stem cell-derived cardiomyocytes (iPSC-CMs). We employed convolutional and recurrent neural networks in order to develop a new method for automatic classification of field potential recordings without using any hand-engineered features. In the proposed method, a preparation phase was initially applied to split 60-second-long recordings into a series of 5-second-long windows. Thereafter, a classification phase comprising two main steps was designed. In the first step, the 5-second-long windows were classified using a designated convolutional neural network (CNN). In the second step, the results of the 5-second-long window assessments were used as the input sequence to a recurrent neural network (RNN). The output was then compared to electrophysiologist-level arrhythmia (irregularity or abnormal waveform) detection, resulting in 0.84 accuracy, 0.84 sensitivity, 0.85 specificity, and 0.88 precision. Conclusion: A novel deep learning approach based on a two-step CNN-RNN method can be used for the automated analysis of “irregularity or abnormal waveforms” in an in vitro model of cardiotoxicity experiments.
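The preparation phase, splitting a 60-second recording into 5-second windows, can be sketched in a few lines; the sampling rate is an illustrative assumption, not a value reported in the abstract:

```python
import numpy as np

def split_windows(signal, fs, window_s=5):
    """Split a 1-D field-potential recording into consecutive
    non-overlapping windows of window_s seconds, dropping any
    incomplete trailing samples."""
    n = fs * window_s
    n_windows = len(signal) // n
    return signal[:n_windows * n].reshape(n_windows, n)

# A 60-second recording at an assumed 1 kHz -> twelve 5-second windows
sig = np.random.rand(60 * 1000)
print(split_windows(sig, fs=1000).shape)  # (12, 5000)
```

Each row then feeds the CNN for per-window classification, and the resulting sequence of twelve window labels is what the RNN consumes in the second step.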


2020 ◽  
pp. 74-80
Author(s):  
Philippe Schweizer

We would like to show that applications of neutrosophy in the sciences and in the humanities are close together, since both ultimately have a human as the end user. The pace of data production continues to grow, leading to increased needs for efficient storage and transmission. Indeed, this information is preferably consumed on mobile terminals, over connections billed to the user, with only limited storage capacity. Deep learning neural networks have recently exceeded the compression rates of algorithmic techniques for text. We believe that they can also significantly challenge classical methods for both audio and visual data (images and videos). To obtain the best physiological compression, i.e., the highest compression ratio that comes closest to the specificity of human perception, we propose using a neutrosophic representation of the information for the entire compression-decompression cycle. Such a representation consists of attaching to each elementary piece of information a simple neutrosophic number that informs the neural network about its characteristics relative to compression during this treatment. Such a neutrosophic number is in fact a triplet (t,i,f) representing here the membership of the element in the three constituent components of information in compression: 1° t = the true, significant part to be preserved; 2° i = the indeterminate, redundant part or noise to be eliminated in compression; and 3° f = the false artifacts produced in the compression process (to be compensated). The complexity of human perception, and the subtle niches of its defects that one seeks to exploit, requires a detailed and complex mapping that a neural network can produce better than any other algorithmic solution; networks with deep learning have proven their ability to produce a detailed boundary surface in classifiers.
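The neutrosophic triplet described above can be sketched as a small data structure; the normalization convention is our own illustrative assumption, not taken from the text:

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicNumber:
    """Triplet (t, i, f) attached to each elementary piece of information:
    t - the true, significant part to be preserved,
    i - the indeterminate, redundant part (noise) to be eliminated,
    f - the false artifacts produced by compression (to be compensated)."""
    t: float
    i: float
    f: float

    def normalized(self):
        """Scale the triplet so its components sum to 1 (one common
        convention; the text does not prescribe a normalization)."""
        s = self.t + self.i + self.f
        return NeutrosophicNumber(self.t / s, self.i / s, self.f / s)

x = NeutrosophicNumber(t=0.6, i=0.3, f=0.1)
print(x.normalized())
```

During training, such a triplet would be supplied alongside each input element so the network can weight what to preserve, discard, or compensate.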


Author(s):  
Lifu Wang ◽  
Bo Shen ◽  
Ning Zhao ◽  
Zhiyuan Zhang

The residual network is now one of the most effective structures in deep learning, which utilizes skip connections to “guarantee” that the performance will not get worse. However, the non-convexity of the neural network makes it unclear whether the skip connections provably improve the learning ability, since the nonlinearity may create many local minima. In some previous works [Freeman and Bruna, 2016], it is shown that despite the non-convexity, the loss landscape of the two-layer ReLU network has good properties when the number m of hidden nodes is very large. In this paper, we follow this line to study the topology (sub-level sets) of the loss landscape of deep ReLU neural networks with a skip connection and theoretically prove that the skip connection network inherits the good properties of the two-layer network, and that skip connections can help to control the connectedness of the sub-level sets, such that any local minimum worse than the global minimum of some two-layer ReLU network will be very “shallow”. The “depth” of these local minima is at most O(m^(η-1)/n), where n is the input dimension and η<1. This provides a theoretical explanation for the effectiveness of skip connections in deep learning.


2021 ◽  
Vol 14 (6) ◽  
pp. 3421-3435
Author(s):  
Zhenjiao Jiang ◽  
Dirk Mallants ◽  
Lei Gao ◽  
Tim Munday ◽  
Gregoire Mariethoz ◽  
...  

Abstract. This study introduces an efficient deep-learning model based on convolutional neural networks with joint autoencoder and adversarial structures for 3D subsurface mapping from 2D surface observations. The method was applied to delineate paleovalleys in an Australian desert landscape. The neural network was trained on a 6400 km2 domain using land surface topography as 2D input and an airborne electromagnetic (AEM)-derived probability map of paleovalley presence as 3D output. The trained neural network has a squared error <0.10 across 99 % of the training domain and produces a squared error <0.10 across 93 % of the validation domain, demonstrating that it is reliable in reconstructing 3D paleovalley patterns beyond the training area. Due to its generic structure, the neural network designed in this study and its training algorithm have broad application potential for constructing 3D geological features (e.g., ore bodies, aquifers) from 2D land surface observations.
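The validation statistic quoted above, the fraction of the domain with squared error below 0.10, can be computed as follows; the error values are illustrative, not data from the study:

```python
import numpy as np

def coverage_below(err_map, threshold=0.10):
    """Fraction of the domain whose squared error falls below threshold --
    the metric used to report the 99% (training) and 93% (validation)
    coverage figures."""
    return float((err_map < threshold).mean())

# Illustrative per-cell squared errors over a small domain
errors = np.array([0.02, 0.05, 0.2, 0.08])
print(coverage_below(errors))  # 0.75
```

Reporting coverage at a fixed error threshold, rather than a single mean error, makes clear how much of the mapped volume meets the accuracy target.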

