Deep Neural Network Based Disease Discrimination Learning from Small Medical Image Training Set and User Feedback

Author(s):  
Daji Tang ◽  
Kehua Guo ◽  
Jianhua Ma ◽  
Xiaoyan Kui
2018 ◽  
Vol 468 ◽  
pp. 142-154 ◽  
Author(s):  
Hui Liu ◽  
Jun Xu ◽  
Yan Wu ◽  
Qiang Guo ◽  
Bulat Ibragimov ◽  
...  

2021 ◽  
Author(s):  
Daichi Kitaguchi ◽  
Toru Fujino ◽  
Nobuyoshi Takeshita ◽  
Hiro Hasegawa ◽  
Kensaku Mori ◽  
...  

Abstract: Clarifying the scalability of deep-learning-based surgical instrument segmentation networks in diverse surgical environments is important in recognizing the challenges of overfitting in surgical device development. This study comprehensively evaluated deep neural network scalability for surgical instrument segmentation, using 5238 images randomly extracted from 128 intraoperative videos. The video dataset contained 112 laparoscopic colorectal resection, 5 laparoscopic distal gastrectomy, 5 laparoscopic cholecystectomy, and 6 laparoscopic partial hepatectomy cases. Deep-learning-based surgical instrument segmentation was performed for test sets with 1) the same conditions as the training set; 2) the same recognition target surgical instrument and surgery type but different laparoscopic recording systems; 3) the same laparoscopic recording system and surgery type but slightly different recognition target laparoscopic surgical forceps; 4) the same laparoscopic recording system and recognition target surgical instrument but different surgery types. The mean average precision and mean intersection over union for test sets 1, 2, 3, and 4 were 0.941 and 0.887, 0.866 and 0.671, 0.772 and 0.676, and 0.588 and 0.395, respectively. Therefore, the recognition accuracy decreased even under slightly different conditions. To enhance the generalization of deep neural networks in surgery, constructing a training set that considers diverse surgical environments under real-world conditions is crucial. Trial Registration Number: 2020–315, date of registration: October 5, 2020.
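For reference, the intersection over union reported above can be computed for binary segmentation masks as in the following minimal sketch (generic NumPy code, not the study's implementation):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union for two binary masks (0/1 arrays)."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:                       # both masks empty: perfect agreement
        return 1.0
    return np.logical_and(pred, target).sum() / union

def mean_iou(preds, targets):
    """Mean IoU over paired lists of predicted and ground-truth masks."""
    return float(np.mean([iou(p, t) for p, t in zip(preds, targets)]))

# 4x4 toy masks: intersection = 2 pixels, union = 4 -> IoU = 0.5
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
```

Per-class or per-instrument variants average this quantity over classes; the paper's exact averaging scheme is not given in the abstract.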


2022 ◽  
Vol 73 ◽  
pp. 103444
Author(s):  
Samaneh Abbasi ◽  
Meysam Tavakoli ◽  
Hamid Reza Boveiri ◽  
Mohammad Amin Mosleh Shirazi ◽  
Raouf Khayami ◽  
...  

Author(s):  
Seung-Geon Lee ◽  
Jaedeok Kim ◽  
Hyun-Joo Jung ◽  
Yoonsuck Choe

Estimating the relative importance of each sample in a training set has important practical and theoretical value, for example in importance sampling or curriculum learning. This focus on individual samples invokes the concept of sample-wise learnability: how easy is it to correctly learn each sample (cf. PAC learnability)? In this paper, we approach the sample-wise learnability problem in a deep learning context. We propose a measure of the learnability of a sample with a given deep neural network (DNN) model. The basic idea is to train the given model on the training set and, for each sample, aggregate the hits and misses over all training epochs. Our experiments show that the sample-wise learnability measure collected this way is highly linearly correlated across different DNN models (ResNet-20, VGG-16, and MobileNet), suggesting that such a measure can provide general insights into the data’s properties. We expect our method to help develop better curricula for training and to help us better understand the data itself.
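The hit/miss aggregation described above can be sketched as follows (a generic illustration; the per-epoch correctness record and the helper name are assumptions, not the authors' code):

```python
import numpy as np

def learnability_scores(correct_per_epoch):
    """correct_per_epoch: (n_epochs, n_samples) array of 0/1 flags,
    where entry [e, i] records whether sample i was classified
    correctly at the end of epoch e. A sample's learnability is its
    fraction of hits aggregated over all training epochs."""
    return np.asarray(correct_per_epoch, dtype=float).mean(axis=0)

# Toy record for 3 samples over 4 epochs: sample 0 is fit
# immediately, sample 1 only from epoch 3, sample 2 never.
record = [[1, 0, 0],
          [1, 0, 0],
          [1, 1, 0],
          [1, 1, 0]]
scores = learnability_scores(record)  # -> [1.0, 0.5, 0.0]
```

Comparing such score vectors across models (e.g. by Pearson correlation) is one way to check the cross-architecture consistency the abstract reports.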


2019 ◽  
Author(s):  
Md Abid Hasan ◽  
Stefano Lonardi

Abstract: Essential genes are genes that are critical for the survival of an organism. The prediction of essential genes in bacteria can provide targets for the design of novel antibiotic compounds or antimicrobial strategies. Here we propose a deep neural network (DNN) for predicting essential genes in microbes. Our DNN-based architecture, called DeeplyEssential, makes minimal assumptions about the input data (i.e., it only uses the gene primary sequence and the corresponding protein sequence) to carry out the prediction, thus maximizing its practical applicability compared to existing predictors that require structural or topological features which might not be readily available. Our extensive experimental results show that DeeplyEssential outperforms existing classifiers that either employ down-sampling to balance the training set or use clustering to exclude multiple copies of orthologous genes. We also expose and study a hidden performance bias that affected previous classifiers. The code of DeeplyEssential is freely available at https://github.com/ucrbioinfo/DeeplyEssential


2020 ◽  
pp. 18-28
Author(s):  
Andrei Kliuev ◽  
Roman Klestov ◽  
Valerii Stolbov

The paper investigates the algorithmic stability of training a deep neural network on problems of recognizing the microstructure of materials. It is shown that at an 8% quantitative deviation in the basic test set, the trained network loses stability. This means that with such a quantitative or qualitative deviation in the training or test sets, the results obtained with such a trained network can hardly be trusted. Although the results of this study apply to a particular case, i.e., microstructure recognition using ResNet-152, the authors propose a cheaper method for studying stability based on the analysis of the test set rather than the training set.
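A test-set-based stability check of this kind can be sketched generically (the perturbation scheme and function names below are assumptions for illustration, not the paper's exact protocol):

```python
import random

def accuracy(model, dataset):
    """Fraction of (x, y) pairs the model classifies correctly."""
    return sum(model(x) == y for x, y in dataset) / len(dataset)

def accuracy_under_deviation(model, test_set, fraction, substitute,
                             trials=20, seed=0):
    """Replace `fraction` of the test set with samples drawn from
    `substitute` (a callable returning an (x, y) pair) and report the
    mean accuracy over several random perturbations -- a crude proxy
    for a quantitative deviation of the test set."""
    rng = random.Random(seed)
    n_swap = int(len(test_set) * fraction)
    accs = []
    for _ in range(trials):
        perturbed = list(test_set)
        for i in rng.sample(range(len(perturbed)), n_swap):
            perturbed[i] = substitute()
        accs.append(accuracy(model, perturbed))
    return sum(accs) / trials

# Toy demo: an identity "model" on 100 clean samples; swapping 8%
# of the test set for mislabeled samples drops accuracy to 0.92.
model = lambda x: x
clean = [(i, i) for i in range(100)]
acc = accuracy_under_deviation(model, clean, 0.08, lambda: (0, 1))
```

Perturbing only the test set, as here, is cheaper than retraining: the model is trained once and evaluated many times.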


2020 ◽  
Author(s):  
Magdalena Mittermeier ◽  
Émilie Bresson ◽  
Dominique Paquin ◽  
Ralf Ludwig

<p>Climate change is altering the Earth’s atmospheric circulation and the dynamic drivers of extreme events. Extreme weather events pose a great potential risk to infrastructure and human security. In Southern Québec, freezing rain is among the rare yet high-impact events that remain particularly difficult to detect, describe or even predict.</p><p>Large climate model ensembles are instrumental for a profound analysis of extreme events, as they can provide a sufficient number of model years. Owing to the physical nature and the high spatiotemporal resolution of regional climate models (RCMs), large ensembles can not only be employed to investigate the intensity and frequency of extreme events, but also to analyze the synoptic drivers of freezing rain events and to explore the respective dynamic alterations under climate change conditions. However, several challenges remain for the analysis of large RCM ensembles, chiefly the high computational costs and the resulting data volume, which call for novel statistical methods for efficient screening and analysis, such as deep neural networks (DNNs). Furthermore, to date, only the Canadian Regional Climate Model version 5 (CRCM5) simulates freezing rain in-line using a diagnostic method. To analyze freezing rain in other RCMs, computationally intensive off-line diagnostic schemes have to be applied to archived data. Another approach to freezing rain analysis focuses on the relation between the synoptic drivers at 500 hPa and at sea level pressure and the occurrence of freezing rain in the study area of Montréal.</p><p>Here, we explore the capability of training a deep neural network to detect the synoptic patterns associated with the occurrence of freezing rain in Montréal. This climate pattern detection task is a visual image classification problem that is addressed with supervised machine learning. Labels for the training set are derived from CRCM5 in-line simulations of freezing rain.
This study aims to provide a trained network that can be applied to large multi-model ensembles over the North American domain of the Coordinated Regional Climate Downscaling Experiment (CORDEX) in order to efficiently filter the climate datasets for the current and future large-scale drivers of freezing rain.</p><p>We present the setup of the deep learning approach, including the network architecture, the training set statistics, and the optimization and regularization methods. Additionally, we present the classification results of the deep neural network in the form of a single-number evaluation metric as well as confusion matrices. Furthermore, we present an analysis of our training set with regard to false positives and false negatives.</p>
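The confusion matrices and single-number metric mentioned above can be illustrated with a minimal, generic sketch (plain NumPy; not the authors' evaluation code):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def accuracy_from_cm(cm):
    """Single-number summary: fraction of correct predictions."""
    return np.trace(cm) / cm.sum()

# Binary toy case: freezing-rain pattern (1) vs. other synoptic situation (0).
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]
cm = confusion_matrix(y_true, y_pred, 2)   # [[2, 1], [1, 2]]
```

For a rare-event class such as freezing rain, the off-diagonal cells (false negatives and false positives) are typically more informative than overall accuracy, which motivates the training-set analysis described in the abstract.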

