Real-Time Estimation of Eye Movement Condition Using a Deep Learning Model

2021 ◽  
pp. 132-143
Author(s):  
Akihiro Sugiura ◽  
Yoshiki Itazu ◽  
Kunihiko Tanaka ◽  
Hiroki Takada
Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5606
Author(s):  
Yung-Hui Li ◽  
Latifa Nabila Harfiya ◽  
Kartika Purwandari ◽  
Yue-Der Lin

Blood pressure monitoring is one avenue for monitoring people's health. Early detection of abnormal blood pressure can help patients receive early treatment and reduce mortality associated with cardiovascular diseases. It is therefore valuable to have a mechanism for real-time monitoring of blood pressure changes in patients. In this paper, we propose deep learning regression models that use an electrocardiogram (ECG) and a photoplethysmogram (PPG) for real-time estimation of systolic blood pressure (SBP) and diastolic blood pressure (DBP) values. We use a bidirectional long short-term memory (LSTM) layer as the first layer and add a residual connection inside each of the following LSTM layers. We also perform experiments comparing traditional machine learning methods, an existing deep learning model, and the proposed deep learning models on Physionet's Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) dataset, which serves as the source of the ECG and PPG signals as well as the arterial blood pressure (ABP) signal. The results show that the proposed model outperforms the existing methods and achieves estimation accurate enough to be promising for clinical practice.
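A minimal sketch of the abstract's core idea, a bidirectional LSTM first layer followed by residually connected LSTM layers feeding a two-output regression head, is given below. The window length, unit counts, and loss function are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a BiLSTM regressor with residual LSTM blocks
# (illustrative only; hyperparameters and exact topology are assumptions,
# not the architecture reported in the paper).
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_bp_regressor(timesteps=625, channels=2, units=64, n_res_blocks=2):
    inputs = layers.Input(shape=(timesteps, channels))        # windowed ECG + PPG
    # First layer: bidirectional LSTM, returning the full sequence
    x = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(inputs)
    # Following LSTM layers, each wrapped with a residual (skip) connection
    for _ in range(n_res_blocks):
        y = layers.LSTM(2 * units, return_sequences=True)(x)
        x = layers.Add()([x, y])                              # residual connection
    x = layers.LSTM(units)(x)                                 # collapse the sequence
    outputs = layers.Dense(2)(x)                              # [SBP, DBP]
    return Model(inputs, outputs)

model = build_bp_regressor()
model.compile(optimizer="adam", loss="mae")                   # regression loss (assumed)
model.summary()
```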


Author(s):  
Tossaporn Santad ◽  
Piyarat Silapasupphakornwong ◽  
Worawat Choensawat ◽  
Kingkarn Sookhanaphibarn

2021 ◽  
Author(s):  
Gaurav Chachra ◽  
Qingkai Kong ◽  
Jim Huang ◽  
Srujay Korlakunta ◽  
Jennifer Grannen ◽  
...  

Abstract After significant earthquakes, images are posted on social media platforms by individuals and media agencies, owing to the widespread use of smartphones. These images can provide information about shaking damage in the earthquake region to both the public and the research community, and can potentially guide rescue work. This paper presents an automated way to extract images of damaged buildings from social media platforms such as Twitter after earthquakes, and thus to identify the particular user posts containing such images. Using transfer learning and ~6500 manually labelled images, we trained a deep learning model to recognize images with damaged buildings in the scene. The trained model achieved good performance when tested on newly acquired images of earthquakes at different locations, and it ran in near real time on the Twitter feed after the 2020 M7.0 earthquake in Turkey. Furthermore, to better understand how the model makes decisions, we implemented the Grad-CAM method to visualize the image regions that most influence the decision.
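The two ingredients named in the abstract, transfer learning on a pretrained backbone and Grad-CAM visualization, can be sketched roughly as follows. The ResNet50 backbone, image size, and layer name are assumptions for illustration rather than the authors' reported choices.

```python
# Minimal transfer-learning sketch for a binary "damaged building" classifier,
# plus a rough Grad-CAM routine. Backbone, image size, and layer name are
# illustrative assumptions, not the setup reported in the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Frozen ImageNet-pretrained backbone with a small classification head.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3))
base.trainable = False
x = layers.GlobalAveragePooling2D()(base.output)
out = layers.Dense(1, activation="sigmoid")(x)        # P(image shows building damage)
model = Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)   # labelled images,
#   preprocessed with tf.keras.applications.resnet50.preprocess_input

def grad_cam(model, image, last_conv_layer="conv5_block3_out"):
    """Gradient-weighted activation map highlighting regions that drive the score."""
    grad_model = Model(model.input,
                       [model.get_layer(last_conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_maps, pred = grad_model(image[None, ...])
        score = pred[:, 0]
    grads = tape.gradient(score, conv_maps)             # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # per-channel importance
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_maps, axis=-1)
    return tf.nn.relu(cam)[0].numpy()                   # keep positive evidence only
```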


2021 ◽  
Author(s):  
Jannes Münchmeyer ◽  
Dino Bindi ◽  
Ulf Leser ◽  
Frederik Tilmann

The estimation of earthquake source parameters, in particular magnitude and location, in real time is one of the key tasks for earthquake early warning and rapid response. In recent years, several publications have introduced deep learning approaches for these fast assessment tasks. Deep learning is well suited for them, as it can work directly on waveforms and can learn features and their relations from data.

A drawback of deep learning models is their lack of interpretability, i.e., it is usually unknown what reasoning the network uses. Because of this, it is also hard to estimate how a model will handle new data whose properties differ in some respects from the training set, for example earthquakes in previously seismically quiet regions. The discussions in previous studies usually focused on the average performance of models and did not consider this point in any detail.

Here we analyze a deep learning model for real-time magnitude and location estimation through targeted experiments and a qualitative error analysis. We conduct our analysis on three large-scale regional data sets from regions with diverse seismotectonic settings and network properties: Italy and Japan, with dense networks (station spacing down to 10 km) of strong motion sensors, and North Chile, with a sparser network (station spacing around 40 km) of broadband stations.

We obtained several key insights. First, the deep learning model does not seem to follow the classical approaches for magnitude and location estimation. For magnitude, one would classically expect the model to estimate attenuation, but the network instead seems to focus on the spectral composition of the waveforms. For location, one would expect a triangulation approach, but our experiments instead show indications of a fingerprinting approach. Second, we can pinpoint the effect of training data size on model performance. For example, a four times larger training set reduces average errors for both magnitude and location prediction by more than half and reduces the required time for real-time assessment by a factor of four. Third, the model fails for events with few similar training examples. For magnitude, this means that the largest events are systematically underestimated. For location, events in regions with few events in the training set tend to be mislocated to regions with more training events. These characteristics can have severe consequences in downstream tasks like early warning and need to be taken into account in future model development and evaluation.
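To make the notion of a targeted experiment concrete, below is a minimal sketch of a spectral sensitivity probe that band-passes the input waveforms and records how the predicted magnitude shifts. The predict_magnitude interface, sampling rate, and frequency bands are hypothetical and not taken from the study.

```python
# Hedged sketch of one kind of targeted perturbation experiment: band-pass the
# input waveforms and observe how the predicted magnitude shifts.
# predict_magnitude() is a hypothetical stand-in for the model under analysis.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(waveforms, low_hz, high_hz, fs=100.0, order=4):
    """Zero-phase band-pass filter applied along the time axis."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, waveforms, axis=-1)

def spectral_sensitivity(predict_magnitude, waveforms, bands, fs=100.0):
    """Mean change in predicted magnitude when only one frequency band is kept."""
    baseline = predict_magnitude(waveforms)
    shifts = {}
    for low, high in bands:
        filtered = bandpass(waveforms, low, high, fs=fs)
        shifts[(low, high)] = np.mean(predict_magnitude(filtered) - baseline)
    return shifts

# Example usage with hypothetical data and model:
# bands = [(0.1, 1.0), (1.0, 5.0), (5.0, 20.0)]
# print(spectral_sensitivity(model.predict, waveform_batch, bands))
```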


Critical Care ◽  
2019 ◽  
Vol 23 (1) ◽  
Author(s):  
Soo Yeon Kim ◽  
Saehoon Kim ◽  
Joongbum Cho ◽  
Young Suh Kim ◽  
In Suk Sol ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2556
Author(s):  
Liyang Wang ◽  
Yao Mu ◽  
Jing Zhao ◽  
Xiaoya Wang ◽  
Huilian Che

The clinical symptoms of prediabetes are mild and easy to overlook, but prediabetes may develop into diabetes if early intervention is not performed. In this study, a deep learning model, referred to as IGRNet, is developed to detect and diagnose prediabetes effectively in a non-invasive, real-time manner using a 12-lead electrocardiogram (ECG) lasting 5 s. After searching for an appropriate activation function, we compared two mainstream deep neural networks (AlexNet and GoogLeNet) and three traditional machine learning algorithms to verify the superiority of our method. The diagnostic accuracy of IGRNet is 0.781, and the area under the receiver operating characteristic curve (AUC) is 0.777, after testing on an independent test set that includes a mixed group. Furthermore, the accuracy and AUC are 0.856 and 0.825, respectively, on the normal-weight-range test set. The experimental results indicate that IGRNet diagnoses prediabetes with high accuracy from ECGs, outperforming the other machine learning methods; this suggests its potential for application in clinical practice as a non-invasive prediabetes diagnosis technology.
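As a rough illustration of classifying short 12-lead ECG windows and reporting accuracy and AUC, a minimal 1-D CNN sketch is shown below. The sampling rate, layer sizes, and training details are assumptions; this is not the IGRNet architecture.

```python
# Minimal sketch of a CNN classifier for 5-s, 12-lead ECG windows
# (sampling rate, layer sizes, and training details are assumptions;
# this is not the IGRNet architecture described in the paper).
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.metrics import roc_auc_score

FS = 500                      # assumed sampling rate (Hz)
TIMESTEPS = 5 * FS            # 5-second window
LEADS = 12

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, LEADS)),
    layers.Conv1D(32, 15, strides=2, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 9, strides=2, activation="relu"),
    layers.MaxPooling1D(2),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(prediabetes)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Held-out evaluation mirrors the reported accuracy/AUC metrics:
# model.fit(x_train, y_train, epochs=20, validation_split=0.1)
# auc = roc_auc_score(y_test, model.predict(x_test).ravel())
```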


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1664
Author(s):  
Yoon-Ki Kim ◽  
Yongsung Kim

Recently, as the amount of real-time video streaming data has increased, distributed parallel processing systems have evolved rapidly to process large-scale data. In addition, with the increasing scale of the computing resources that constitute such systems, orchestration technology has become crucial for proper management of those resources, in terms of allocating computing resources, setting up programming environments, and deploying user applications. In this paper, we present a new distributed parallel processing platform for real-time, large-scale image processing based on deep learning model inference, called DiPLIP. It provides a scheme for large-scale real-time image inference using a buffer layer and a parallel processing environment that scales with the size of the image stream. It allows users to easily run trained deep learning models on real-time images in a distributed parallel processing environment at high speed, through the distribution of virtual machine containers.
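A minimal sketch of the buffer-layer idea, a bounded queue between the incoming image stream and a pool of parallel inference workers, is shown below. The queue size, batch size, and the run_inference stub are illustrative assumptions, not DiPLIP's actual API; in the platform itself the workers would be distributed containers rather than threads in one process.

```python
# Hedged sketch of the "buffer layer" idea: incoming frames are queued and
# consumed in batches by parallel inference workers. Queue size, batch size,
# and run_inference() are illustrative assumptions, not DiPLIP's API.
import queue
import threading

FRAME_BUFFER = queue.Queue(maxsize=1024)   # buffer layer between stream and workers
BATCH_SIZE = 16

def run_inference(batch):
    """Stand-in for a deployed deep learning model (e.g., an image classifier)."""
    return [f"result for frame {frame_id}" for frame_id, _ in batch]

def ingest(frames):
    """Producer: push incoming stream frames into the buffer."""
    for frame_id, frame in frames:
        FRAME_BUFFER.put((frame_id, frame))   # blocks if the buffer is full

def worker(results, stop_event):
    """Consumer: drain the buffer in batches and run model inference."""
    while not stop_event.is_set() or not FRAME_BUFFER.empty():
        batch = []
        try:
            while len(batch) < BATCH_SIZE:
                batch.append(FRAME_BUFFER.get(timeout=0.1))
        except queue.Empty:
            pass
        if batch:
            results.extend(run_inference(batch))

# Usage sketch: start several workers (in practice, one per container/GPU),
# feed them a simulated stream, then signal shutdown.
results, stop = [], threading.Event()
threads = [threading.Thread(target=worker, args=(results, stop)) for _ in range(4)]
for t in threads:
    t.start()
ingest((i, b"frame-bytes") for i in range(100))
stop.set()
for t in threads:
    t.join()
print(len(results), "frames processed")
```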

