Dropout, a basic and effective regularization method for a deep learning model: a case study

Author(s):  
Brahim Jabir ◽  
Noureddine Falih

Deep learning is based on a network of artificial neurons inspired by the human brain. This network is made up of tens or even hundreds of "layers" of neurons. The fields of application of deep learning are numerous; agriculture is one of them, where deep learning is applied to various problems such as disease detection, pest detection, and weed identification. A major challenge in deep learning is building a model that performs well not only on the training set but also on the validation set. Many approaches used in neural networks are explicitly designed to reduce overfitting, possibly at the expense of training accuracy. In this paper, a basic technique (dropout) is applied to minimize overfitting; we integrate it into a convolutional neural network model that classifies weed species and examine how it impacts performance. A complementary solution (exponential linear units) is proposed to further optimize the results. The results show that these solutions are practical and highly accurate, allowing them to be adopted in deep learning models.
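A minimal sketch of the idea described in this abstract: a small convolutional classifier that combines Dropout with ELU activations. The image size, layer widths, dropout rate, and number of weed classes are illustrative assumptions, not the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # assumed number of weed species

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="elu"),   # ELU instead of ReLU
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="elu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="elu"),
    layers.Dropout(0.5),                       # randomly drop 50% of units during training
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```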

Author(s):  
Kichang Kwak ◽  
Marc Niethammer ◽  
Kelly S. Giovanello ◽  
Martin Styner ◽  
Eran Dayan ◽  
...  

Mild cognitive impairment (MCI) is often considered the precursor of Alzheimer’s disease. However, MCI is associated with substantially variable progression rates, which are not well understood. Attempts to identify the mechanisms that underlie MCI progression have often focused on the hippocampus, but have mostly overlooked its intricate structure and subdivisions. Here, we utilized deep learning to delineate the contribution of hippocampal subfields to MCI progression using a total sample of 1157 subjects (349 in the training set, 427 in a validation set and 381 in the testing set). We propose a dense convolutional neural network architecture that differentiates stable and progressive MCI based on hippocampal morphometry. The proposed deep learning model predicted MCI progression with an accuracy of 75.85%. A novel implementation of occlusion analysis revealed marked differences in the contribution of hippocampal subfields to the performance of the model, with the presubiculum, CA1, subiculum, and molecular layer showing the most central role. Moreover, the analysis revealed that 10.5% of the volume of the hippocampus was redundant in the differentiation between stable and progressive MCI. Our predictive model uncovers pronounced differences in the contribution of hippocampal subfields to the progression of MCI. The results may reflect the sparing of hippocampal structure in individuals with a slower progression of neurodegeneration.
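A generic sketch of occlusion analysis, the attribution technique this abstract uses to rank hippocampal subfields: mask one region at a time, re-run the model, and record how much the prediction drops. The `model_predict` callable and the region masks are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def occlusion_importance(volume, region_masks, model_predict, fill_value=0.0):
    """Return the prediction drop caused by occluding each labelled region.

    volume: input array, e.g. hippocampal morphometry features
    region_masks: dict mapping subfield name -> boolean mask over `volume`
    model_predict: callable returning the probability of progressive MCI
    """
    baseline = model_predict(volume)
    importance = {}
    for name, mask in region_masks.items():
        occluded = volume.copy()
        occluded[mask] = fill_value          # blank out one subfield
        importance[name] = baseline - model_predict(occluded)
    return importance
```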


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Guanghao Jin ◽  
Yixin Hu ◽  
Yuming Jiao ◽  
Junfang Wen ◽  
Qingzeng Song

Generally, the performance of deep learning-based classification models is highly related to the captured features of the training samples. When a sample is unclear or contains similar numbers of features from many objects, it cannot easily be classified. Human beings, however, classify objects not only by their features but also by additional information, such as the probability of these objects appearing in an environment. For example, when we know that one object has a higher probability in the environment than the others, we can more easily decide what is in the sample. We call this kind of probability local probability, as it is related to the local environment. In this paper, we propose a new framework named L-PDL that improves the performance of deep learning based on the analysis of this local probability. First, our method trains the deep learning model on the training set, from which we obtain the probability of objects for each sample. Second, we estimate the posterior local probability of objects on the validation set. Finally, this local probability is conditionally combined with the probability of objects on the testing samples. We evaluate three popular deep learning models on three real datasets. The experimental results show that our method clearly improves performance on these datasets and outperforms state-of-the-art methods.
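A hedged sketch of the core idea: re-weight the network's class probabilities for a test sample by a "local probability" of each class estimated from the environment (here, simple class frequencies on a validation set). This illustrates the general mechanism, not the paper's exact conditional combination rule.

```python
import numpy as np

def local_prior(val_labels, num_classes):
    """Estimate local class probabilities from validation-set frequencies."""
    counts = np.bincount(val_labels, minlength=num_classes).astype(float)
    return counts / counts.sum()

def rescore(softmax_probs, prior):
    """Combine model probabilities with the local prior and renormalize."""
    adjusted = softmax_probs * prior
    return adjusted / adjusted.sum(axis=-1, keepdims=True)

# Usage: predictions = rescore(model_softmax_outputs, local_prior(val_labels, 10))
```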


2020 ◽  
Vol 9 (05) ◽  
pp. 25052-25056
Author(s):  
Abhi Kadam ◽  
Anupama Mhatre ◽  
Sayali Redasani ◽  
Amit Nerurkar

Current lighting technologies extend the options for changing the appearance of rooms and closed spaces, creating ambiences with an affective meaning. Using intelligence, these ambiences can instantly be adapted to the needs of the room’s occupant(s), potentially improving their well-being. In this paper, we actuate the lighting in our surroundings using mood detection. We analyze the mood of the person through facial emotion recognition with a deep learning model, namely a convolutional neural network (CNN). On recognizing the emotion, the system actuates the surrounding lighting in accordance with the mood. Based on the implementation results, the system needs to be developed further by adding more specific data classes and training data.
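An illustrative sketch of the actuation step: the output of a (hypothetical) CNN emotion classifier is mapped to an RGB lighting preset. The emotion labels and color choices are assumptions, not the authors' mapping.

```python
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]

LIGHTING_PRESET = {            # assumed mood -> ambience mapping (RGB)
    "angry":     (120, 160, 255),   # cool blue, calming
    "happy":     (255, 220, 150),   # warm and bright
    "neutral":   (255, 255, 255),
    "sad":       (255, 180, 120),   # soft warm
    "surprised": (200, 255, 200),
}

def actuate_lighting(emotion_probs):
    """Pick the most likely emotion and return the lighting color to apply."""
    label = EMOTIONS[max(range(len(EMOTIONS)), key=lambda i: emotion_probs[i])]
    return LIGHTING_PRESET[label]
```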


AI ◽  
2020 ◽  
Vol 1 (4) ◽  
pp. 465-487
Author(s):  
Rina Komatsu ◽  
Tad Gonsalves

Digital images often become corrupted by undesirable noise during acquisition, compression, storage, and transmission. Although the kinds of digital noise are varied, current denoising studies focus on removing only a single, specific kind of noise with a dedicated deep learning model. Lack of generalization is a major limitation of these models; they cannot be extended to filter image noises other than those for which they are designed. This study deals with the design and training of a generalized deep learning denoising model that can remove five different kinds of noise from any digital image: Gaussian noise, salt-and-pepper noise, clipped whites, clipped blacks, and camera shake. The denoising model is constructed on the standard segmentation U-Net architecture and has three variants: U-Net with Group Normalization, Residual U-Net, and Dense U-Net. The combination of adversarial and L1 norm loss functions reproduces sharply denoised images and shows a performance improvement over the standard U-Net, the Denoising Convolutional Neural Network (DnCNN), and the Wide Inference Network (WIN5RB) denoising models.
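A minimal sketch of the combined objective mentioned above: the denoising U-Net (acting as a generator) is trained with an adversarial term from a discriminator plus an L1 term against the clean target. The weighting factor is an assumed hyperparameter, not the value used in the paper.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_output_on_fake, denoised, clean, l1_weight=100.0):
    """Adversarial + L1 loss for the denoising network."""
    adversarial = bce(tf.ones_like(disc_output_on_fake), disc_output_on_fake)
    l1 = tf.reduce_mean(tf.abs(clean - denoised))
    return adversarial + l1_weight * l1
```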


2020 ◽  
Vol 7 (16) ◽  
pp. 2269-2277
Author(s):  
Zunyun Fu ◽  
Xutong Li ◽  
Zhaohui Wang ◽  
Zhaojun Li ◽  
Xiaohong Liu ◽  
...  

Deep learning was used to optimize chemical reactions with the quantum mechanical properties of chemical contexts and reaction conditions as inputs. The trained deep learning model determines optimal reaction conditions by in silico exploration of accessible reaction space.


2019 ◽  
Vol 22 (16) ◽  
pp. 3473-3486 ◽  
Author(s):  
Heng Liu ◽  
Yunfeng Zhang

An automated and robust damage detection tool is needed to enhance the resilience of civil infrastructure. In this article, a deep learning-based damage detection procedure using acceleration data is proposed as an automated post-hazard inspection tool for rapid structural condition assessment. The procedure is investigated with a focus on concentrically braced frame structures, a commonly used seismic force-resisting structural system with bracings as fuse members. A case study of a six-story concentrically braced frame building was selected to numerically validate and demonstrate the proposed method. The deep learning model, a convolutional neural network, was trained and tested using a numerically generated dataset from over 2000 nonlinear seismic simulations, and an accuracy of over 90% was observed for bracing buckling damage detection in this case study. The results of the deep learning model were also discussed and extended to define other damage feature indices. This study shows that the proposed procedure is promising for rapid bracing condition inspection in concentrically braced frame structures after earthquakes.
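A rough sketch of the kind of model described above: a 1-D convolutional network that maps a multi-channel acceleration time history to a binary label (bracing buckled or intact). The window length, sensor count, and layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(2048, 6)),           # assumed: 2048 time steps, 6 acceleration channels
    layers.Conv1D(32, 9, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(64, 9, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),   # probability of bracing buckling damage
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```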


2019 ◽  
Vol 8 (2S8) ◽  
pp. 1289-1294

Identifying a cell’s nucleus is the starting point for analysis in any kind of drug research. Presently, this process is carried out manually by scientists, who take note of each nucleus in microscopic images to begin the drug discovery process. This takes hundreds of thousands of hours of researchers’ time. To avoid such a bottleneck, this paper proposes an efficient solution using a machine learning/deep learning model. The proposed system can spot nuclei in cell images, along with their run-length-encoded codes, without biologist intervention. A U-Net framework is used to train the model and create an efficient system. A GPU-based system is implemented to obtain accurate results for the storage, retrieval, and training of medical cell images. Thus, the system automates the spotting of nuclei, drastically reducing the time spent in the drug discovery process.
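A sketch of the run-length encoding step mentioned above, in the common "start length" format (column-major, 1-indexed) used for nucleus masks in competitions such as the 2018 Data Science Bowl; whether the authors use exactly this convention is an assumption.

```python
import numpy as np

def rle_encode(mask):
    """Encode a binary mask (H, W) as a space-separated run-length string."""
    pixels = mask.flatten(order="F")             # column-major scan
    pixels = np.concatenate([[0], pixels, [0]])  # pad so runs at the borders are detected
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]                      # convert run-end positions to run lengths
    return " ".join(str(x) for x in runs)
```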


2019 ◽  
Author(s):  
Jungirl Seok ◽  
Jae-Jin Song ◽  
Ja-Won Koo ◽  
Hee Chan Kim ◽  
Byung Yoon Choi

Objectives: The purpose of this study was to create a deep learning model for the detection and segmentation of major structures of the tympanic membrane. Methods: A total of 920 stored tympanic endoscopic images were obtained retrospectively. We constructed a detection and segmentation model using Mask R-CNN with a ResNet-50 backbone, targeting three clinically meaningful structures: (1) tympanic membrane (TM); (2) malleus with side of the tympanic membrane; and (3) suspected perforation area. The images were randomly divided into training, validation, and test sets at a ratio of 0.6:0.2:0.2, resulting in 548, 187, and 185 images, respectively. After assignment, each of the 548 training images was augmented 50 times, reaching 27,400 images. Results: At its most optimized point, the model achieved a mean average precision of 92.9% on the test set. When an Intersection over Union (IoU) score greater than 0.5 was used as the reference point, the tympanic membrane was 100% detectable, the accuracy of the side of the tympanic membrane based on the malleus segmentation was 88.6%, and the detection accuracy of suspected perforation was 91.4%. Conclusions: Anatomical segmentation may allow the inclusion of an explanation provided by deep learning as part of the results. This method is applicable not only to tympanic endoscopy but also to sinus endoscopy, laryngoscopy, and stroboscopy. Finally, it will be the starting point for the development of an automated medical-record descriptor for endoscopic images.
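A quick sketch of the Intersection over Union criterion used above: a predicted segmentation counts as a correct detection when its IoU with the ground-truth mask exceeds 0.5. Masks are assumed to be boolean arrays of the same shape.

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over Union between two boolean masks."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return intersection / union if union > 0 else 0.0

def is_detected(pred_mask, true_mask, threshold=0.5):
    """Count the prediction as a detection when IoU exceeds the threshold."""
    return iou(pred_mask, true_mask) > threshold
```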


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. e12594-e12594
Author(s):  
Yiyue Xu ◽  
Bing Zou ◽  
Bingjie Fan ◽  
Wanlong Li ◽  
Shijiang Wang ◽  
...  

Background: Triple-negative breast cancer (TNBC) is the subtype of breast cancer with the worst prognosis, and there is no reliable model for survival prediction of TNBC patients. Traditional Cox regression analysis, with its limited predictive power, cannot satisfy clinical needs. The purpose of this study was to establish a deep learning model and develop a new prognostic system for TNBC patients. Methods: This study collected data on TNBC patients from the Surveillance, Epidemiology, and End Results (SEER) program between 2010 and 2016. 70% of the data were used to develop the deep learning model, 15% were used as the validation set, and 15% as the independent testing set. The concordance index (c-index) and integrated Brier score (IBS) were then calculated and compared with Cox regression analysis and random forest. Finally, according to the classification of the deep survival model, an individualized prognosis system was established. Results: A total of 37,818 patients were enrolled in this study. In the validation set, the c-index of the deep learning model was 0.799, which was better than the traditional Cox regression model (0.774) and random forest (0.763). The independent testing set further demonstrated the robustness of the deep survival model (c-index 0.788). The new prognosis system based on the deep survival model reached an area under the curve (AUC) of 0.805, which was better than the Tumor, Node, Metastasis (TNM) staging system (0.771). Conclusions: The deep learning model had better predictive power than Cox regression analysis and random forest. The established prognosis system can better predict prognosis and aid individual risk stratification for TNBC patients.
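A sketch of the concordance index (c-index) used to compare the models above: among all comparable patient pairs, the fraction where the model assigns the higher risk to the patient who experiences the event earlier. This is a simple O(n^2) illustration; real evaluations typically use a library implementation such as lifelines' concordance_index.

```python
import itertools

def c_index(times, events, risks):
    """times: survival times; events: 1 if the event was observed; risks: predicted risk scores."""
    concordant, comparable = 0.0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        first, second = (i, j) if times[i] < times[j] else (j, i)
        # A pair is comparable only when the earlier time corresponds to an observed event.
        if times[first] == times[second] or events[first] == 0:
            continue
        comparable += 1
        if risks[first] > risks[second]:
            concordant += 1
        elif risks[first] == risks[second]:
            concordant += 0.5
    return concordant / comparable if comparable else float("nan")
```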

