Efficient and accurate identification of ear diseases using an ensemble deep learning model

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Xinyu Zeng ◽  
Zifan Jiang ◽  
Wen Luo ◽  
Honggui Li ◽  
Hongye Li ◽  
...  

Abstract Early detection and appropriate medical treatment are of great use for ear diseases. However, in the absence of experts and given relatively low diagnostic accuracy, a new diagnostic strategy is necessary, in which deep learning plays an important role. This paper puts forward a machine learning model that uses abundant otoscope image data gained in clinical cases to achieve automatic diagnosis of ear diseases in real time. A total of 20,542 endoscopic images were employed to train nine common deep convolutional neural networks. According to the characteristics of the eardrum and external auditory canal, eight categories were classified, covering the majority of ear conditions: normal, Cholesteatoma of the middle ear, Chronic suppurative otitis media, External auditory canal bleeding, Impacted cerumen, Otomycosis externa, Secretory otitis media, and Tympanic membrane calcification. After evaluating these optimization schemes, the two best-performing models were selected and combined into ensemble classifiers for real-time automatic classification. Based on accuracy and training time, we chose a transfer learning model based on DenseNet-BC169 and DenseNet-BC1615; each model shows a clear improvement when combined in these two ensemble classifiers, with an average accuracy of 95.59%. Considering the dependence of classifier performance on data size in transfer learning, we attribute the high accuracy of the current model to the large database. The current study is unparalleled in terms of disease diversity and diagnostic precision. The real-time classifier is trained on data acquired under different conditions, which suits real clinical cases. According to this study, the deep learning model is of great use for the early detection and treatment of ear diseases in clinical practice.
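
For readers who want a concrete picture of the ensemble step described above, the following is a minimal PyTorch sketch, assuming two ImageNet-pretrained DenseNet backbones (used here as stand-ins for the paper's DenseNet-BC variants) whose softmax outputs are averaged; the class list, ensemble rule, and preprocessing are illustrative rather than taken from the paper.

```python
# Hypothetical sketch of a two-model DenseNet ensemble for 8 ear-disease classes.
# Assumes PyTorch + torchvision; averaging softmax outputs is an illustrative
# ensemble rule, not necessarily the one used in the paper.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 8  # normal + 7 ear-disease categories described in the abstract

def build_densenet(arch_fn):
    """Load an ImageNet-pretrained DenseNet and replace its classifier head."""
    model = arch_fn(weights="IMAGENET1K_V1")
    model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)
    return model

# Two DenseNet variants as stand-ins for the paper's DenseNet-BC backbones.
model_a = build_densenet(models.densenet169)
model_b = build_densenet(models.densenet161)

@torch.no_grad()
def ensemble_predict(images):
    """Average the two models' softmax outputs and return class indices."""
    model_a.eval()
    model_b.eval()
    probs = (torch.softmax(model_a(images), dim=1) +
             torch.softmax(model_b(images), dim=1)) / 2
    return probs.argmax(dim=1)

# Usage: a batch of otoscope images resized to 224x224 and normalized like ImageNet.
dummy_batch = torch.randn(4, 3, 224, 224)
print(ensemble_predict(dummy_batch))
```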


Author(s):  
Tossaporn Santad ◽  
Piyarat Silapasupphakornwong ◽  
Worawat Choensawat ◽  
Kingkarn Sookhanaphibarn

2021 ◽  
Author(s):  
Gaurav Chachra ◽  
Qingkai Kong ◽  
Jim Huang ◽  
Srujay Korlakunta ◽  
Jennifer Grannen ◽  
...  

Abstract After significant earthquakes, images posted on social media platforms by individuals and media agencies are abundant, owing to the widespread use of smartphones. These images can be utilized to provide information about the shaking damage in the earthquake region both to the public and the research community, and potentially to guide rescue work. This paper presents an automated way to extract images of damaged buildings after earthquakes from social media platforms such as Twitter, and thus to identify the particular user posts containing such images. Using transfer learning and ~6500 manually labelled images, we trained a deep learning model to recognize images with damaged buildings in the scene. The trained model achieved good performance when tested on newly acquired images of earthquakes at different locations, and it ran in near real time on the Twitter feed after the 2020 M7.0 earthquake in Turkey. Furthermore, to better understand how the model makes decisions, we also implemented the Grad-CAM method to visualize the important locations in the images that contribute to the decision.
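
As a rough illustration of the two ingredients named in the abstract, transfer learning for a binary damaged-building classifier and Grad-CAM for explaining its decisions, here is a hedged PyTorch sketch; the ResNet-50 backbone, the target layer, and the class layout are assumptions, not the authors' configuration.

```python
# Hypothetical sketch of Grad-CAM on a transfer-learned binary classifier
# (damaged building vs. not). Backbone, target layer, and class layout are
# assumptions for illustration; the paper's exact setup may differ.
import torch
import torch.nn as nn
from torchvision import models

# Binary classifier via transfer learning on an ImageNet-pretrained ResNet.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # [not damaged, damaged]
model.eval()

# Capture activations and gradients of the last convolutional block.
activations, gradients = {}, {}
def fwd_hook(_, __, output):
    activations["value"] = output
def bwd_hook(_, __, grad_output):
    gradients["value"] = grad_output[0]

target_layer = model.layer4
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=1):
    """Return a coarse heatmap of regions supporting `class_idx`."""
    logits = model(image.unsqueeze(0))
    model.zero_grad()
    logits[0, class_idx].backward()
    acts = activations["value"]                      # (1, C, H, W)
    grads = gradients["value"]                       # (1, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = torch.relu((weights * acts).sum(dim=1)).squeeze(0)
    return cam / (cam.max() + 1e-8)                  # normalize to [0, 1]

heatmap = grad_cam(torch.randn(3, 224, 224))
print(heatmap.shape)  # coarse map, e.g. 7x7 for a 224x224 input
```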


2021 ◽  
Author(s):  
Jannes Münchmeyer ◽  
Dino Bindi ◽  
Ulf Leser ◽  
Frederik Tilmann

The estimation of earthquake source parameters, in particular magnitude and location, in real time is one of the key tasks for earthquake early warning and rapid response. In recent years, several publications have introduced deep learning approaches for these fast assessment tasks. Deep learning is well suited for them, as it can work directly on waveforms and can learn features and their relations from data.

A drawback of deep learning models is their lack of interpretability, i.e., it is usually unknown what reasoning the network uses. Due to this issue, it is also hard to estimate how the model will handle new data whose properties differ in some respects from the training set, for example earthquakes in previously seismically quiet regions. The discussions of previous studies usually focused on the average performance of models and did not consider this point in any detail.

Here we analyze a deep learning model for real-time magnitude and location estimation through targeted experiments and a qualitative error analysis. We conduct our analysis on three large-scale regional data sets from regions with diverse seismotectonic settings and network properties: Italy and Japan, with dense networks of strong-motion sensors (station spacing down to 10 km), and North Chile, with a sparser network of broadband stations (station spacing around 40 km).

We obtained several key insights. First, the deep learning model does not seem to follow the classical approaches for magnitude and location estimation. For magnitude, one would classically expect the model to estimate attenuation, but the network rather seems to focus its attention on the spectral composition of the waveforms. For location, one would expect a triangulation approach, but our experiments instead show indications of a fingerprinting approach. Second, we can pinpoint the effect of training data size on model performance. For example, a four times larger training set reduces average errors for both magnitude and location prediction by more than half, and reduces the required time for real-time assessment by a factor of four. Third, the model fails for events with few similar training examples. For magnitude, this means that the largest events are systematically underestimated. For location, events in regions with few events in the training set tend to get mislocated to regions with more training events. These characteristics can have severe consequences in downstream tasks like early warning and need to be taken into account for future model development and evaluation.
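
As context for the kind of model being analyzed, the sketch below shows a minimal 1-D convolutional network that regresses magnitude directly from raw three-component waveforms; the window length, sampling rate, and layer sizes are assumptions and do not reproduce the authors' architecture.

```python
# Minimal sketch (not the authors' architecture) of a network that regresses
# earthquake magnitude directly from three-component waveforms, illustrating
# how such real-time models consume raw data instead of hand-picked features.
import torch
import torch.nn as nn

class WaveformMagnitudeNet(nn.Module):
    def __init__(self, n_channels=3, n_samples=3000):  # e.g. 30 s at 100 Hz (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=15, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(128, 1)  # single scalar: magnitude estimate

    def forward(self, x):              # x: (batch, channels, samples)
        return self.head(self.features(x).squeeze(-1)).squeeze(-1)

model = WaveformMagnitudeNet()
batch = torch.randn(8, 3, 3000)        # 8 single-station waveform windows
print(model(batch).shape)              # torch.Size([8])
```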


2021 ◽  
pp. 132-143
Author(s):  
Akihiro Sugiura ◽  
Yoshiki Itazu ◽  
Kunihiko Tanaka ◽  
Hiroki Takada

Critical Care ◽  
2019 ◽  
Vol 23 (1) ◽  
Author(s):  
Soo Yeon Kim ◽  
Saehoon Kim ◽  
Joongbum Cho ◽  
Young Suh Kim ◽  
In Suk Sol ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2556
Author(s):  
Liyang Wang ◽  
Yao Mu ◽  
Jing Zhao ◽  
Xiaoya Wang ◽  
Huilian Che

The clinical symptoms of prediabetes are mild and easy to overlook, but prediabetes may develop into diabetes if early intervention is not performed. In this study, a deep learning model, referred to as IGRNet, is developed to detect and diagnose prediabetes effectively in a non-invasive, real-time manner using a 12-lead electrocardiogram (ECG) lasting 5 s. After searching for an appropriate activation function, we compared two mainstream deep neural networks (AlexNet and GoogLeNet) and three traditional machine learning algorithms to verify the superiority of our method. The diagnostic accuracy of IGRNet is 0.781, and the area under the receiver operating characteristic curve (AUC) is 0.777 after testing on the independent test set including the mixed group. Furthermore, the accuracy and AUC are 0.856 and 0.825, respectively, on the normal-weight-range test set. The experimental results indicate that IGRNet diagnoses prediabetes with high accuracy using ECGs, outperforming other existing machine learning methods; this suggests its potential for application in clinical practice as a non-invasive prediabetes diagnosis technology.
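
To make the setup concrete, here is a hedged PyTorch sketch of a small CNN that classifies a 12-lead, 5-second ECG into normal glucose regulation versus prediabetes; the assumed 500 Hz sampling rate and the layer sizes are illustrative and do not reproduce the published IGRNet architecture.

```python
# Hypothetical sketch of a small CNN for binary prediabetes screening from a
# 12-lead, 5-second ECG, in the spirit of IGRNet. The sampling rate (500 Hz)
# and layer sizes are assumptions; the published architecture may differ.
import torch
import torch.nn as nn

LEADS, SAMPLES = 12, 5 * 500   # 5 s of ECG at an assumed 500 Hz

class ECGClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(LEADS, 32, kernel_size=11, stride=2), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=11, stride=2), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=11, stride=2), nn.BatchNorm1d(128), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, 2)  # [normal glucose regulation, prediabetes]

    def forward(self, x):                    # x: (batch, 12, 2500)
        return self.classifier(self.backbone(x).squeeze(-1))

model = ECGClassifier()
logits = model(torch.randn(4, LEADS, SAMPLES))
print(torch.softmax(logits, dim=1))          # per-class probabilities
```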


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1664
Author(s):  
Yoon-Ki Kim ◽  
Yongsung Kim

Recently, as the amount of real-time video streaming data has increased, distributed parallel processing systems have rapidly evolved to process large-scale data. In addition, with an increase in the scale of computing resources constituting the distributed parallel processing system, orchestration technology has become crucial for the proper management of computing resources, in terms of allocating resources, setting up the programming environment, and deploying user applications. In this paper, we present a new distributed parallel processing platform for real-time large-scale image processing based on deep learning model inference, called DiPLIP. It provides a scheme for large-scale real-time image inference using a buffer layer and a scalable parallel processing environment that adapts to the size of the image stream. It allows users to easily run trained deep learning models on real-time images in a distributed parallel processing environment at high speed, through the distribution of virtual machine containers.
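
The buffer-layer idea can be illustrated with a single-machine Python sketch in which a bounded queue decouples frame ingestion from a pool of inference workers; this is an analogy to the platform's design, not DiPLIP's actual distributed, container-based implementation.

```python
# Illustrative sketch of the buffering idea behind a platform like DiPLIP:
# a bounded buffer decouples frame ingestion from a pool of inference workers.
# This is a single-machine analogue, not DiPLIP's distributed implementation.
import queue
import threading

frame_buffer = queue.Queue(maxsize=128)   # the "buffer layer"

def run_inference(frame):
    """Placeholder for a trained deep learning model's forward pass."""
    return {"frame_id": frame["id"], "label": "example"}

def worker(worker_id):
    while True:
        frame = frame_buffer.get()
        if frame is None:                 # poison pill: shut the worker down
            frame_buffer.task_done()
            break
        result = run_inference(frame)
        print(f"worker {worker_id}: {result}")
        frame_buffer.task_done()

# Scale the number of workers with the stream size, analogous to the platform
# adding virtual-machine containers.
workers = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in workers:
    t.start()

for i in range(10):                       # stand-in for a real video stream
    frame_buffer.put({"id": i})
for _ in workers:
    frame_buffer.put(None)
frame_buffer.join()
```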


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Parth K. Shah ◽  
Jennifer C. Ginestra ◽  
Lyle H. Ungar ◽  
Paul Junker ◽  
Jeff I. Rohrbach ◽  
...  
