Evaluation of a Deep Learning Algorithm for Automated Spleen Segmentation in Patients with Conditions Directly or Indirectly Affecting the Spleen

Tomography ◽  
2021 ◽  
Vol 7 (4) ◽  
pp. 950-960
Author(s):  
Aymen Meddeb ◽  
Tabea Kossen ◽  
Keno K. Bressem ◽  
Bernd Hamm ◽  
Sebastian N. Nagel

The aim of this study was to develop a deep learning-based algorithm for fully automated spleen segmentation using CT images and to evaluate its performance in conditions directly or indirectly affecting the spleen (e.g., splenomegaly, ascites). For this, a 3D U-Net was trained on an in-house dataset (n = 61) including diseases with and without splenic involvement (in-house U-Net) and on an open-source dataset from the Medical Segmentation Decathlon (open dataset, n = 61) without splenic abnormalities (open U-Net). Both datasets were split into a training (n = 32; 52%), a validation (n = 9; 15%), and a testing dataset (n = 20; 33%). The segmentation performance of the two models was measured using four established metrics, including the Dice Similarity Coefficient (DSC). On the open test dataset, the in-house and open U-Net achieved a mean DSC of 0.906 and 0.897, respectively (p = 0.526). On the in-house test dataset, the in-house U-Net achieved a mean DSC of 0.941, whereas the open U-Net obtained a mean DSC of 0.648 (p < 0.001), showing very poor segmentation results in patients with abnormalities in or surrounding the spleen. Thus, for reliable, fully automated spleen segmentation in clinical routine, the training dataset of a deep learning-based algorithm should include conditions that directly or indirectly affect the spleen.
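
The abstract's headline metric is the Dice Similarity Coefficient. As a minimal sketch (not the authors' code), the DSC of two binary segmentation masks can be computed as follows; the array shapes and the smoothing term `eps` are illustrative assumptions:

```python
# Sketch of the Dice Similarity Coefficient (DSC) for binary masks:
# DSC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Compute DSC between two binary masks; eps guards against empty masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two partially overlapping 3D masks.
a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True
print(round(dice_similarity(a, b), 3))  # 0.5
```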

2021 ◽  
pp. bjophthalmol-2020-318107
Author(s):  
Kenichi Nakahara ◽  
Ryo Asaoka ◽  
Masaki Tanito ◽  
Naoto Shibata ◽  
Keita Mitsuhashi ◽  
...  

Background/aims: To validate a deep learning algorithm to diagnose glaucoma from fundus photography obtained with a smartphone. Methods: A training dataset consisting of 1364 colour fundus photographs with glaucomatous indications and 1768 colour fundus photographs without glaucomatous features was obtained using an ordinary fundus camera. The testing dataset consisted of 73 eyes of 73 patients with glaucoma and 89 eyes of 89 normative subjects. In the testing dataset, fundus photographs were acquired using both an ordinary fundus camera and a smartphone. A deep learning algorithm was developed to diagnose glaucoma using the training dataset. The trained neural network was evaluated by its prediction of glaucoma or normal status on the test datasets, using images from both an ordinary fundus camera and a smartphone. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AROC). Results: The AROC was 98.9% with a fundus camera and 84.2% with a smartphone. When validated only in eyes with advanced glaucoma (mean deviation value < −12 dB, N=26), the AROC was 99.3% with a fundus camera and 90.0% with a smartphone. There were significant differences between the AROC values obtained with the two cameras. Conclusion: The usefulness of a deep learning algorithm to automatically screen for glaucoma from smartphone-based fundus photographs was validated. The algorithm had a considerably high diagnostic ability, particularly in eyes with advanced glaucoma.
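
For readers unfamiliar with the AROC metric used here, a hedged sketch with scikit-learn follows; the labels and scores below are toy values, not the study's data:

```python
# Sketch: area under the ROC curve (AROC) from diagnosis labels and
# network output scores. All values here are illustrative placeholders.
from sklearn.metrics import roc_auc_score

y_true = [1, 1, 1, 0, 0, 0]                # 1 = glaucoma, 0 = normal (toy labels)
y_score = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]   # toy network output probabilities

print(f"AROC = {roc_auc_score(y_true, y_score):.1%}")  # 88.9% for these toy values
```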


2021 ◽  
Vol 13 (9) ◽  
pp. 1779
Author(s):  
Xiaoyan Yin ◽  
Zhiqun Hu ◽  
Jiafeng Zheng ◽  
Boyong Li ◽  
Yuanyuan Zuo

Radar beam blockage is an important error source that affects the quality of weather radar data. An echo-filling network (EFnet) based on a deep learning algorithm is proposed to correct the echo intensity in the occlusion area of the Nanjing S-band new-generation weather radar (CINRAD/SA). The training dataset is constructed from labels, which are the echo intensities at the 0.5° elevation in the unblocked area, and from input features, which are the intensities in a cube covering multiple elevations and gates corresponding to the location of the bottom labels. Two loss functions are applied to compile the network: one is the common mean square error (MSE), and the other is a self-defined loss function that increases the weight of strong echoes. Considering that the radar beam broadens with distance and height, the 0.5° elevation scan is divided into six range bands of 25 km each to train different models. The models are evaluated by three indicators: explained variance (EVar), mean absolute error (MAE), and correlation coefficient (CC). Two cases are presented to compare the effect of the echo-filling model under the different loss functions. The results suggest that EFnet can effectively correct the echo reflectivity and improve data quality in the occlusion area, with better results for strong echoes when the self-defined loss function is used.
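
The self-defined loss is described only as increasing the weight of strong echoes. A minimal Keras sketch of such a weighted MSE follows; the reflectivity threshold and weight factor are illustrative assumptions, not taken from the paper:

```python
# Sketch of the two losses named in the abstract: plain MSE and a
# weighted MSE that up-weights errors on strong echoes.
import tensorflow as tf

def weighted_mse(strong_echo_dbz: float = 35.0, strong_weight: float = 4.0):
    """Return a Keras loss that multiplies the squared error by
    `strong_weight` wherever the true reflectivity exceeds the threshold.
    Both parameter values are assumptions for illustration."""
    def loss(y_true, y_pred):
        w = tf.where(y_true > strong_echo_dbz, strong_weight, 1.0)
        return tf.reduce_mean(w * tf.square(y_true - y_pred))
    return loss

# model.compile(optimizer="adam", loss=weighted_mse())  # self-defined loss
# model.compile(optimizer="adam", loss="mse")           # common MSE
```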


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Hua Zheng ◽  
Zhenglong Wu ◽  
Shiqiang Duan ◽  
Jiangtao Zhou

Due to the inevitable deviations between the results of theoretical calculations and physical experiments, flutter tests and flutter signal analysis often play significant roles in designing the aeroelasticity of a new aircraft. The structural responses measured from aeroelastic models in both wind tunnel tests and real flight flutter tests contain an abundance of structural information, but traditional methods tend to have limited ability to extract the features of concern. Inspired by deep learning concepts, a novel feature extraction method for flutter signal analysis was established in this study by combining a convolutional neural network (CNN) with empirical mode decomposition (EMD). It is widely hypothesized that when flutter occurs, the measured structural signals are harmonic or divergent in the time domain, and that in the frequency domain the flutter mode (1) is singular and (2) its energy increases significantly. A measured-signal feature extraction and flutter criterion framework was constructed accordingly. The measured signals from a wind tunnel test were manually labeled "flutter" and "no-flutter" as the foundational dataset for the deep learning algorithm. After normalized preprocessing, the intrinsic mode functions (IMFs) of the flutter test signals are obtained by the EMD method. The IMFs are then reshaped to a suitable size for input to the CNN. The CNN parameters are optimized through the training dataset, and the trained model is validated on the test dataset (i.e., cross-validation). The accuracy of the proposed method reached 100% on the test dataset. The trained model appears to effectively distinguish whether or not a structural response signal contains flutter. The combination of EMD and CNN provides effective feature extraction from time series signals in flutter test data. This research explores the connection between structural response signals and flutter from the perspective of artificial intelligence. The method allows for real-time, online prediction with low computational complexity.
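
A hedged sketch of the EMD preprocessing step described above, using the open-source PyEMD package; the synthetic signal, the number of retained IMFs, and the CNN input shape are illustrative assumptions:

```python
# Sketch: decompose a measured response signal into IMFs with EMD and
# reshape them into an array suitable for a CNN input.
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

# Toy stand-in for a wind-tunnel response: a growing oscillation plus noise.
t = np.linspace(0, 1, 1024)
signal = np.sin(40 * np.pi * t) * np.exp(2 * t) + 0.1 * np.random.randn(1024)

imfs = EMD().emd(signal)  # shape (n_imfs, 1024): intrinsic mode functions
n_keep = 4                # fixed IMF count for a uniform input size (assumption)
if len(imfs) >= n_keep:
    imfs = imfs[:n_keep]
else:
    imfs = np.pad(imfs, ((0, n_keep - len(imfs)), (0, 0)))

cnn_input = imfs[np.newaxis, :, :, np.newaxis]  # (batch, IMFs, time, channel)
print(cnn_input.shape)  # (1, 4, 1024, 1)
```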


2021 ◽  
Author(s):  
Sidhant Idgunji ◽  
Madison Ho ◽  
Jonathan L. Payne ◽  
Daniel Lehrmann ◽  
Michele Morsilli ◽  
...  

The growing digitization of fossil images has vastly improved and broadened the potential application of big data and machine learning, particularly computer vision, in paleontology. Recent studies show that machine learning is capable of approaching human abilities of classifying images, and with the increase in computational power and visual data, it stands to reason that it can match human ability but at much greater efficiency in the near future. Here we demonstrate this potential of using deep learning to identify skeletal grains at different levels of the Linnaean taxonomic hierarchy. Our approach was two-pronged. First, we built a database of skeletal grain images spanning a wide range of animal phyla and classes and used this database to train the model. We used a Python-based method to automate image recognition and extraction from published sources. Second, we developed a deep learning algorithm that can attach multiple labels to a single image. Conventionally, deep learning is used to predict a single class from an image; here, we adopted a Branch Convolutional Neural Network (B-CNN) technique to classify multiple taxonomic levels for a single skeletal grain image. Using this method, we achieved over 90% accuracy for both the coarse, phylum-level recognition and the fine, class-level recognition across diverse skeletal grains (6 phyla and 15 classes). Furthermore, we found that image augmentation improves the overall accuracy. This tool has potential applications in geology ranging from biostratigraphy to paleo-bathymetry, paleoecology, and microfacies analysis. Further improvement of the algorithm and expansion of the training dataset will continue to narrow the efficiency gap between human expertise and machine learning.
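
A minimal sketch of a two-headed network in the spirit of the B-CNN technique named above: a coarse (phylum) branch taps an early feature map and a fine (class) branch uses deeper features. Only the 6-phylum and 15-class output widths come from the abstract; the image size and layer choices are assumptions:

```python
# Sketch of a branch CNN with coarse- and fine-level classification heads.
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(128, 128, 3))  # image size is an assumption
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)

# Coarse branch: phylum-level prediction from early features.
coarse = layers.GlobalAveragePooling2D()(x)
phylum_out = layers.Dense(6, activation="softmax", name="phylum")(coarse)

# Fine branch: class-level prediction from deeper features.
y = layers.Conv2D(64, 3, activation="relu")(x)
y = layers.GlobalAveragePooling2D()(y)
class_out = layers.Dense(15, activation="softmax", name="class")(y)

model = tf.keras.Model(inputs, [phylum_out, class_out])
model.compile(optimizer="adam",
              loss={"phylum": "categorical_crossentropy",
                    "class": "categorical_crossentropy"})
model.summary()
```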


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shotaro Asano ◽  
Ryo Asaoka ◽  
Hiroshi Murata ◽  
Yohei Hashimoto ◽  
Atsuya Miki ◽  
...  

We aimed to develop a model to predict the visual field (VF) in the central 10 degrees in patients with glaucoma by training a convolutional neural network (CNN) with optical coherence tomography (OCT) images and adjusting the values with the Humphrey Field Analyzer (HFA) 24-2 test. The training dataset included 558 eyes from 312 glaucoma patients and 90 eyes from 46 normal subjects. The testing dataset included 105 eyes from 72 glaucoma patients. All eyes were analyzed by the HFA 10-2 test and OCT; eyes in the testing dataset were additionally analyzed by the HFA 24-2 test. During CNN model training, the total deviation (TD) values of the HFA 10-2 test points were predicted from the combined OCT-measured macular retinal layer thicknesses. Then, the predicted TD values were corrected using the TD values of the innermost four points from the HFA 24-2 test. The mean absolute error derived from the CNN models ranged between 9.4 and 9.5 dB. These values were reduced to 5.5 dB on average when the data were corrected using the HFA 24-2 test. In conclusion, HFA 10-2 test results can be predicted from OCT images using a trained CNN model with adjustment using HFA 24-2 test results.
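
The abstract does not specify how the predicted TD values are corrected with the innermost four HFA 24-2 points; the mean-offset correction below is therefore only an illustrative assumption, not the authors' method:

```python
# Sketch of one plausible adjustment: shift the CNN-predicted 10-2 TD
# values by the mean disagreement at the four shared 24-2 points.
import numpy as np

def correct_td(predicted_td_10_2: np.ndarray,
               measured_innermost_24_2: np.ndarray,
               predicted_at_24_2_points: np.ndarray) -> np.ndarray:
    """Apply a uniform offset so predictions agree, on average, with the
    measured innermost four 24-2 TD values (assumed correction scheme)."""
    offset = np.mean(measured_innermost_24_2 - predicted_at_24_2_points)
    return predicted_td_10_2 + offset

pred = np.full(68, -8.0)                         # 68 TD values of the 10-2 grid
measured4 = np.array([-5.0, -6.0, -4.5, -5.5])   # innermost 24-2 points (toy)
pred4 = np.array([-8.0, -8.0, -8.0, -8.0])       # predictions at those points
print(correct_td(pred, measured4, pred4)[:3])    # shifted toward measured values
```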


2021 ◽  
Vol 54 (3-4) ◽  
pp. 439-445
Author(s):  
Chih-Ta Yen ◽  
Sheng-Nan Chang ◽  
Cheng-Hong Liao

This study used photoplethysmography (PPG) signals to classify hypertension into four classes: no hypertension, prehypertension, stage I hypertension, and stage II hypertension. Four deep learning models were compared in the study. The main difficulty is how to find optimal parameters, such as the kernel, kernel size, and number of layers, under limited PPG training data conditions. PPG signals were used to train a deep residual convolutional neural network (ResNetCNN) and a bidirectional long short-term memory (BILSTM) network to determine the optimal operating parameters when each dataset consisted of 2100 data points. During the experiment, the proportion of training to testing data was 8:2. The model demonstrated an optimal classification accuracy of 76% on the testing dataset.
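
A minimal sketch of the 8:2 split and a small bidirectional LSTM classifier for the four classes; the 2100-point segment length follows the abstract, while the data, layer sizes, and training settings are assumptions:

```python
# Sketch: 8:2 train/test split of PPG segments and a small BiLSTM classifier.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split

X = np.random.randn(100, 2100, 1).astype("float32")  # toy PPG segments
y = np.random.randint(0, 4, size=100)                # 4 blood-pressure classes

# 8:2 proportion of training to testing data, as in the study.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = tf.keras.Sequential([
    layers.Input(shape=(2100, 1)),
    layers.Bidirectional(layers.LSTM(32)),           # layer width is an assumption
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=1, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))     # [loss, accuracy]
```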


2021 ◽  
Vol 2 (2) ◽  
pp. 17-25
Author(s):  
Kseniia A. Gadylshina ◽  
Kirill G. Gadylshin ◽  
Vadim V. Lisitsa ◽  
Dmitry M. Vishnevsky

Seismic modelling is the most computationally intense and time-consuming part of seismic processing and imaging algorithms. Indeed, generation of a typical seismic dataset requires approximately 10 core-hours on a standard CPU-based cluster. Such a high demand for resources is due to the use of fine spatial discretizations to achieve a low level of numerical dispersion (numerical error). This paper presents an original approach to seismic modelling in which the wavefields for all sources (right-hand sides) are simulated inaccurately using coarse meshes. A small number of the wavefields are generated with computationally intensive fine meshes and then used as a training dataset for the deep learning algorithm, the Numerical Dispersion Mitigation network (NDM-net). Once trained, the NDM-net is applied to suppress the numerical dispersion of the entire seismic dataset.
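
A hedged sketch of the NDM-net idea: train a network on the few accurate fine-mesh wavefields to map coarse-mesh traces to dispersion-free ones, then apply it to the entire dataset. The 1-D convolutional architecture and synthetic data below are illustrative assumptions, not the authors' exact network:

```python
# Sketch: learn a coarse-mesh -> fine-mesh trace mapping from a small
# accurately simulated subset, then correct all coarse-mesh traces.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_train, n_all, nt = 50, 1000, 2048          # few accurate pairs, many traces
coarse = np.random.randn(n_all, nt, 1).astype("float32")  # toy coarse traces
fine = coarse[:n_train] * 0.9                # toy stand-in for fine-mesh traces

ndm_net = tf.keras.Sequential([
    layers.Input(shape=(nt, 1)),
    layers.Conv1D(16, 9, padding="same", activation="relu"),
    layers.Conv1D(16, 9, padding="same", activation="relu"),
    layers.Conv1D(1, 9, padding="same"),     # predict dispersion-free trace
])
ndm_net.compile(optimizer="adam", loss="mse")
ndm_net.fit(coarse[:n_train], fine, epochs=1, verbose=0)

corrected = ndm_net.predict(coarse, verbose=0)  # apply to the whole dataset
print(corrected.shape)
```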

