Deep Learning based Intraretinal Layer Segmentation using Cascaded Compressed U-Net

Author(s):  
Sunil K Yadav ◽  
Rahele Kafieh ◽  
Hanna G Zimmermann ◽  
Josef Kauer-Bonin ◽  
Kouros Nouri-Mahdavi ◽  
...  

Intraretinal layer segmentation on macular optical coherence tomography (OCT) images generates non-invasive biomarkers querying neuronal structures with near-cellular resolution. While early deep learning methods have delivered promising results, they carry high computing power demands, and a reliable, power-efficient and reproducible intraretinal layer segmentation remains an unmet need. We propose a cascaded two-stage network for intraretinal layer segmentation, with both networks being compressed versions of U-Net (CCU-INSEG). The first network is responsible for retinal tissue segmentation from OCT B-scans. The second network segments 8 intraretinal layers with high fidelity. By compressing U-Net, we achieve 392- and 26-fold reductions in model size and parameters in the first and second network, respectively. Still, our method delivers accuracy comparable to U-Net without additional demands on computation and memory resources. At the post-processing stage, we introduce Laplacian-based outlier detection with layer-surface hole filling by adaptive non-linear interpolation. We trained our method using 17,458 B-scans from patients with autoimmune optic neuropathies, e.g., multiple sclerosis, and from healthy controls. Voxel-wise comparison against manual segmentation produces a mean absolute error of 2.3 μm, which is 2.5 times better than the device's own segmentation. Voxel-wise comparison against external multicenter data leads to a mean absolute error of 2.6 μm for glaucoma data using the same gold-standard segmentation approach, and a 3.7 μm mean absolute error compared against an externally segmented reference data set. In 20 macular volume scans from patients with severe disease, 3.5% of B-scan segmentation results were rejected by an experienced grader, whereas this was the case for 41.4% of B-scans segmented with a graph-based reference method.
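The post-processing step can be sketched in a few lines: a minimal, hypothetical illustration of Laplacian-based outlier detection on a 1-D layer surface, with flagged points filled by interpolation. The threshold value and the use of plain linear interpolation (standing in for the paper's adaptive non-linear scheme) are assumptions for illustration only, not the authors' implementation.

```python
# Sketch: a discrete Laplacian flags layer-surface outliers; the resulting
# holes are filled by linear interpolation between the nearest valid points.
# The threshold is illustrative, not taken from the paper.

def laplacian_outliers(surface, threshold=4.0):
    """Return indices i where |y[i-1] - 2*y[i] + y[i+1]| exceeds threshold."""
    return [i for i in range(1, len(surface) - 1)
            if abs(surface[i - 1] - 2 * surface[i] + surface[i + 1]) > threshold]

def fill_holes(surface, holes):
    """Replace flagged samples by linear interpolation of nearest valid points."""
    filled = list(surface)
    valid = [i for i in range(len(surface)) if i not in set(holes)]
    for h in holes:
        left = max(i for i in valid if i < h)
        right = min(i for i in valid if i > h)
        t = (h - left) / (right - left)
        filled[h] = (1 - t) * surface[left] + t * surface[right]
    return filled

# A smooth layer surface with a single spike at index 3:
surface = [10.0, 10.5, 11.0, 25.0, 12.0, 12.5, 13.0]
holes = laplacian_outliers(surface)   # -> [2, 3, 4]
filled = fill_holes(surface, holes)   # spike replaced by 11.5
```

Note that the Laplacian also flags the spike's immediate neighbours, since the second difference is large on either side of a jump; a real pipeline would additionally operate per B-scan column and across the volume.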

2021 ◽  
Vol 11 ◽  
Author(s):  
Ji-Yeon Kim ◽  
Yong Seok Lee ◽  
Jonghan Yu ◽  
Youngmin Park ◽  
Se Kyung Lee ◽  
...  

Several prognosis prediction models have been developed for breast cancer (BC) patients after curative surgery, but there is still an unmet need to precisely determine the prognosis of individual BC patients in real time. This is a retrospective analysis of data collected in the adjuvant BC registry at Samsung Medical Center between January 2000 and December 2016. The initial data set contained 325 clinical data elements: baseline characteristics with demographics, clinical and pathologic information, and follow-up clinical information including laboratory and imaging data during surveillance. The Weibull Time To Event Recurrent Neural Network (WTTE-RNN) by Martinsson was implemented for machine learning. We searched for the optimal window size for the time-stamped inputs. To develop the prediction model, data from 13,117 patients were split into training (60%), validation (20%), and test (20%) sets. The median follow-up duration was 4.7 years and the median number of visits was 8.4. We identified 32 features related to BC recurrence and considered them in further analyses. Performance was calculated using Harrell's C-index and the area under the curve (AUC) at the 2-, 5-, and 7-year time points. After 200 training epochs with a batch size of 100, the C-index reached 0.92 for the training data set and 0.89 for the validation and test data sets. The AUC values were 0.90 at the 2-year point, 0.91 at the 5-year point, and 0.91 at the 7-year point. The deep learning-based final model outperformed three other machine learning-based models. In terms of pathologic characteristics, the median absolute error (MAE) and weighted mean absolute error (wMAE) were as low as 3.5%. This BC prognosis model, which determines the probability of BC recurrence in real time, was developed using information from the time of BC diagnosis and the follow-up period in an RNN machine learning model.
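Harrell's C-index used to evaluate the model can be sketched as follows; this is a minimal pure-Python illustration of the metric for right-censored data, not the study's implementation, and the toy survival data are invented.

```python
# Sketch: Harrell's concordance index for right-censored survival data.
# A pair (i, j) is comparable when the earlier observed time is an event;
# it is concordant when the model assigns higher risk to the earlier event.

def c_index(times, events, risks):
    """times: observed times; events: 1 = recurrence observed, 0 = censored;
    risks: model risk scores (higher = earlier predicted recurrence)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:  # comparable pair
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties get half credit
    return concordant / comparable

# Perfectly ordered toy data: higher risk always recurs earlier.
times = [2.0, 4.0, 5.0, 7.0]
events = [1, 1, 0, 1]
risks = [0.9, 0.7, 0.5, 0.3]
print(c_index(times, events, risks))  # -> 1.0
```

A C-index of 0.5 corresponds to random ordering; the 0.89 reported on the test set means the model correctly orders about 89% of comparable patient pairs.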


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3719
Author(s):  
Aoxin Ni ◽  
Arian Azarang ◽  
Nasser Kehtarnavaz

The interest in contactless or remote heart rate measurement has been steadily growing in healthcare and sports applications. Contactless methods involve the utilization of a video camera and image processing algorithms. Recently, deep learning methods have been used to improve the performance of conventional contactless methods for heart rate measurement. After providing a review of the related literature, a comparison of the deep learning methods whose codes are publicly available is conducted in this paper. The public domain UBFC dataset is used to compare the performance of these deep learning methods for heart rate measurement. The results obtained show that the deep learning method PhysNet generates the best heart rate measurement outcome among these methods, with a mean absolute error value of 2.57 beats per minute and a mean square error value of 7.56 beats per minute.
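The two error metrics used in the comparison can be sketched as follows; the per-window heart-rate values are illustrative, not taken from the UBFC dataset.

```python
# Sketch: mean absolute error and mean squared error between video-based
# heart-rate estimates and a reference, as used to rank the methods.

def mean_absolute_error(pred, ref):
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def mean_squared_error(pred, ref):
    return sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)

reference = [72, 75, 80, 78]   # reference beats per minute (illustrative)
estimated = [70, 76, 83, 77]   # video-based estimates (illustrative)
print(mean_absolute_error(estimated, reference))  # -> 1.75
print(mean_squared_error(estimated, reference))   # -> 3.75
```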


Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects, and one of the popular techniques is machine learning. This study applies deep learning, a branch of machine learning, to detect and evaluate the severity of combined rail defects. The combined defects in the study are settlement and dipped joints. Features used to detect and evaluate the severity of the combined defects are axle-box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. The deep learning techniques used in the study are the deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For the simplified data, features are extracted from the raw data: the weight of the rolling stock, the speed of the rolling stock, and three peak and three bottom accelerations from each of the two wheels of the rolling stock. In total, 14 features are used as simplified data for developing the DNN model. For the raw data, time-domain accelerations are used directly to develop the CNN and RNN models without further processing or feature extraction. Hyperparameter tuning via grid search is performed to ensure that the performance of each model is optimized. To detect the combined defects, the study proposes two approaches: the first uses one model to detect both settlement and dipped joints, and the second uses two models to detect settlement and dipped joints separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is sufficient to detect settlement and dipped joints. To evaluate the severity of the combined defects, the study applies classification and regression concepts.
Classification is used to evaluate severity by categorizing defects into light, medium, and severe classes, and regression is used to estimate the size of defects. From the study, the CNN model is suitable for evaluating dipped-joint severity, with an accuracy of 84% and a mean absolute error (MAE) of 1.25 mm, while the RNN model is suitable for evaluating settlement severity, with an accuracy of 99% and an MAE of 1.58 mm.
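The grid-search hyperparameter tuning mentioned above can be sketched generically; the grid values and the scoring function below are hypothetical placeholders, not the study's actual search space.

```python
# Sketch of grid-search hyperparameter tuning: every combination on the grid
# is scored (e.g., validation accuracy) and the best-scoring one is kept.
from itertools import product

def grid_search(grid, score_fn):
    """grid: dict name -> list of candidate values.
    Returns (best_params, best_score) over the full Cartesian product."""
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy scorer that peaks at lr=0.01, batch_size=32 (placeholder values):
score = lambda p: -abs(p["lr"] - 0.01) - abs(p["batch_size"] - 32) / 100
grid = {"lr": [0.1, 0.01, 0.001], "batch_size": [16, 32, 64]}
best, _ = grid_search(grid, score)
print(best)  # -> {'lr': 0.01, 'batch_size': 32}
```

In practice the scorer would train and validate one DNN/CNN/RNN per combination, which is why grid search is usually the most expensive step of such a study.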


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2424 ◽  
Author(s):  
Md Atiqur Rahman Ahad ◽  
Thanh Trung Ngo ◽  
Anindya Das Antar ◽  
Masud Ahmed ◽  
Tahera Hossain ◽  
...  

Wearable sensor-based systems and devices have been expanded in different application domains, especially in the healthcare arena. Automatic age and gender estimation has several important applications. Gait has been demonstrated as a profound motion cue for various applications. A gait-based age and gender estimation challenge was launched at the 12th IAPR International Conference on Biometrics (ICB), 2019. In this competition, 18 teams from 14 countries initially registered. The goal of this challenge was to find smart approaches to age and gender estimation from sensor-based gait data. For this purpose, we employed a large wearable sensor-based gait dataset with 745 subjects (357 females and 388 males), from 2 to 78 years old, in the training dataset, and 58 subjects (19 females and 39 males) in the test dataset; it covers several walking patterns. The gait data sequences were collected from three IMUZ sensors placed on a waist belt or at the top of a backpack. Ten teams submitted a total of 67 solutions for age and gender estimation. This paper extensively analyzes the methods and the results achieved by the various approaches. Based on this analysis, we found that deep learning-based solutions led the competition compared with conventional handcrafted methods. The best result achieved a 24.23% prediction error for gender estimation and a 5.39 mean absolute error for age estimation by employing an angle-embedded gait dynamic image and a temporal convolution network.


Author(s):  
Maraza-Quispe Benjamín ◽  
Enrique Damián Valderrama-Chauca ◽  
Lenin Henry Cari-Mogrovejo ◽  
Jorge Milton Apaza-Huanca ◽  
...  

The present research aims to implement a predictive model on the KNIME platform to analyze and compare the prediction of academic performance using data from a Learning Management System (LMS), identifying students at academic risk in order to generate timely interventions. The CRISP-DM methodology was used, structured in six phases: problem analysis, data understanding, data preparation, modeling, evaluation, and implementation. The model is based on the analysis of online learning behavior through 22 behavioral indicators observed in the LMS of the Faculty of Educational Sciences of the National University of San Agustin. These indicators are distributed across five dimensions: Academic Performance, Access, Homework, Social Aspects, and Quizzes. The model has been implemented on the KNIME platform using the Simple Regression Tree Learner training algorithm. The total population consists of 30,000 student records, from which a sample of 1,000 records was taken by simple random sampling. The accuracy of the model for early prediction of students' academic performance was evaluated by comparing the 22 observed behavioral indicators with the mean academic performance in three courses. The prediction results of the implemented model are satisfactory: the mean absolute error relative to the mean of the first course was 3.813 with an accuracy of 89.7%, the mean absolute error relative to the mean of the second course was 2.809 with an accuracy of 94.2%, and the mean absolute error relative to the mean of the third course was 2.779 with an accuracy of 93.8%. These results demonstrate that the proposed model can be used to predict students' future academic performance from an LMS data set.
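The simple random sampling step (1,000 of 30,000 records) can be sketched as follows; the record identifiers and the seed are placeholders, not the study's data.

```python
# Sketch: simple random sampling without replacement of 1,000 records
# from a population of 30,000, as described in the abstract.
import random

random.seed(42)                            # reproducible draw (placeholder seed)
population = range(30000)                  # stand-in record identifiers
sample = random.sample(population, 1000)   # each record equally likely, no repeats

print(len(sample), len(set(sample)))       # -> 1000 1000
```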


Optics ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 87-95
Author(s):  
Xudong Yuan ◽  
Yaguang Xu ◽  
Ruizhi Zhao ◽  
Xuhao Hong ◽  
Ronger Lu ◽  
...  

The Laguerre-Gaussian (LG) beam demonstrates great potential for optical communication due to the orthogonality between its different eigenstates, and it has gained increased research interest in recent years. Here, we propose a dual-output mode analysis method based on deep learning that can accurately obtain both the mode weights and the phase information of multimode LG beams. We reconstruct the LG beams based on the results predicted by the convolutional neural network. The correlation coefficient values after reconstruction are above 0.9999, and the mean absolute errors (MAE) of the mode weights and phases are about 1.4 × 10⁻³ and 2.9 × 10⁻³, respectively. The model also maintains relatively accurate predictions for unseen data and noise-disturbed samples. In addition, the computation time of the model for a single test sample is only 0.975 ms on average. These results show that our method generalizes well, is robust, and allows for nearly real-time modal analysis.
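The correlation coefficient used to judge reconstruction quality can be sketched as follows; the intensity values are illustrative, and a real beam would be a 2-D pattern flattened to a vector before the comparison.

```python
# Sketch: Pearson correlation coefficient between a target intensity
# pattern and its reconstruction (both flattened to 1-D lists here).
import math

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

target        = [0.0, 0.2, 0.8, 1.0, 0.7, 0.1]    # illustrative intensities
reconstructed = [0.01, 0.21, 0.79, 0.99, 0.72, 0.09]
print(correlation(target, reconstructed))  # close to 1 for a good reconstruction
```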


2021 ◽  
Vol 10 (11) ◽  
pp. e33101119347
Author(s):  
Ewethon Dyego de Araujo Batista ◽  
Wellington Candeia de Araújo ◽  
Romeryto Vieira Lira ◽  
Laryssa Izabel de Araujo Batista

Introduction: dengue is an arbovirus disease caused by the DENV virus and transmitted to humans by the Aedes aegypti mosquito. Currently, there is no effective vaccine against all serotypes of the virus. Efforts against the disease therefore focus on preventive measures against mosquito proliferation. Researchers are using Machine Learning (ML) and Deep Learning (DL) as tools to predict dengue cases and support governments in this effort. Objective: to identify which ML and DL techniques and approaches are being used to predict dengue. Methods: a systematic review of databases in the fields of Medicine and Computing, aimed at answering the research questions: can dengue cases be predicted using ML and DL techniques, which techniques are used, where are the studies being conducted, and how and which data are being used? Results: after conducting the searches and applying the inclusion, exclusion, and full-reading criteria, 14 articles were approved. The techniques Random Forest (RF), Support Vector Regression (SVR), and Long Short-Term Memory (LSTM) are present in 85% of the studies. Regarding data, most studies used 10 years of historical disease data together with climate information. Finally, the Root Mean Square Error (RMSE) was the preferred metric for measuring error. Conclusion: the review demonstrated the feasibility of using ML and DL techniques to predict dengue cases, with low error rates validated through statistical techniques.


2021 ◽  
Vol 22 (17) ◽  
pp. 9194
Author(s):  
Dmitriy D. Matyushin ◽  
Anastasia Yu. Sholokhova ◽  
Aleksey K. Buryak

Prediction of gas chromatographic retention indices based on compound structure is an important task for analytical chemistry. The predicted retention indices can be used as a reference in a mass spectrometry library search despite the fact that their accuracy is worse in comparison with the experimental reference ones. In the last few years, deep learning was applied for this task. The use of deep learning drastically improved the accuracy of retention index prediction for non-polar stationary phases. In this work, we demonstrate for the first time the use of deep learning for retention index prediction on polar (e.g., polyethylene glycol, DB-WAX) and mid-polar (e.g., DB-624, DB-210, DB-1701, OV-17) stationary phases. The achieved accuracy lies in the range of 16–50 in terms of the mean absolute error for several stationary phases and test data sets. We also demonstrate that our approach can be directly applied to the prediction of the second dimension retention times (GC × GC) if a large enough data set is available. The achieved accuracy is considerably better compared with the previous results obtained using linear quantitative structure-retention relationships and ACD ChromGenius software. The source code and pre-trained models are available online.


Author(s):  
Benjamin W Nelson ◽  
Nicholas B Allen

BACKGROUND Wrist-worn smart watches and fitness monitors (ie, wearables) have become widely adopted by consumers and are gaining increased attention from researchers for their potential contribution to naturalistic digital measurement of health in a scalable, mobile, and unobtrusive way. Various studies have examined the accuracy of these devices in controlled laboratory settings (eg, treadmill and stationary bike); however, no studies have investigated the heart rate accuracy of wearables during a continuous and ecologically valid 24-hour period of actual consumer device use. OBJECTIVE The aim of this study was to determine the heart rate accuracy of 2 popular wearable devices, the Apple Watch 3 and the Fitbit Charge 2, compared with the gold standard reference method, an ambulatory electrocardiogram (ECG), under consumer device use conditions in an individual. Data were collected across 5 daily conditions: sitting, walking, running, activities of daily living (ADL; eg, chores, brushing teeth), and sleeping. METHODS One participant (the first author, a 29-year-old Caucasian male) completed a 24-hour ecologically valid protocol by wearing 2 popular wrist wearable devices (Apple Watch 3 and Fitbit Charge 2). In addition, an ambulatory ECG (Vrije Universiteit Ambulatory Monitoring System) was used as the gold standard reference method, which resulted in the collection of 102,740 individual heartbeats. A single-subject design was used to keep all variables constant except for the wearable devices, while providing a rapid response design for an initial assessment of wearable accuracy, allowing the research cycle to keep pace with technological advancements. Accuracy of these devices compared with the gold standard ECG was assessed using mean error, mean absolute error, and mean absolute percentage error. These data were supplemented with Bland-Altman analyses and the concordance correlation coefficient to assess agreement between devices.
RESULTS The Apple Watch 3 and Fitbit Charge 2 were generally highly accurate across the 24-hour condition. Specifically, the Apple Watch 3 had a mean difference of −1.80 beats per minute (bpm), a mean absolute percentage error of 5.86%, and a mean agreement of 95% when compared with the ECG across 24 hours. The Fitbit Charge 2 had a mean difference of −3.47 bpm, a mean absolute percentage error of 5.96%, and a mean agreement of 91% when compared with the ECG across 24 hours. These findings varied by condition. CONCLUSIONS The Apple Watch 3 and the Fitbit Charge 2 provided acceptable heart rate accuracy (<±10%) across the 24-hour period and during each activity, except for the Apple Watch 3 during the daily activities condition. Overall, these findings provide preliminary support that these devices are useful for ambulatory measurement of cardiac activity in research studies, especially those where the specific advantages of these methods (eg, scalability, low participant burden) are particularly suited to the population or research question.
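The agreement statistics reported above can be sketched generically: the mean difference (bias), the mean absolute percentage error, and the Bland-Altman 95% limits of agreement for paired device-vs-ECG readings. The paired readings below are invented, not the study's data.

```python
# Sketch: bias, MAPE, and Bland-Altman 95% limits of agreement for
# paired wearable vs ECG heart-rate readings (illustrative values).
import math

def bland_altman(device, ecg):
    """Return (bias, (lower, upper)) where bias is the mean device-minus-ECG
    difference and the limits are bias +/- 1.96 * SD of the differences."""
    diffs = [d - e for d, e in zip(device, ecg)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def mape(device, ecg):
    """Mean absolute percentage error relative to the ECG reference."""
    return 100 * sum(abs(d - e) / e for d, e in zip(device, ecg)) / len(ecg)

ecg    = [60, 72, 85, 100, 130]   # reference bpm (illustrative)
device = [58, 71, 86, 97, 128]    # wearable bpm (illustrative)
bias, (lo, hi) = bland_altman(device, ecg)
print(bias)               # negative bias: device reads slightly low
print(mape(device, ecg))  # percentage error against the <±10% criterion
```

The study's negative mean differences (−1.80 and −3.47 bpm) correspond to the bias term here, and the <±10% acceptability criterion is applied to the MAPE.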


This paper presents a deep learning approach for estimating the age of human beings from their facial images. Different racial groups based on skin colour have been incorporated in the annotations of the images in the dataset, while ensuring an adequate distribution of subjects across the racial groups so as to achieve accurate Automatic Facial Age Estimation (AFAE). The principle of transfer learning is applied to a ResNet50 Convolutional Neural Network (CNN) initially pretrained for the task of object classification, fine-tuning its hyperparameters to propose an AFAE system that can automatically estimate the ages of humans across multiple racial groups. A mean absolute error of 4.25 years is obtained, demonstrating the effectiveness and superiority of the proposed method.

