Deep Learning Based Airway Segmentation Using Key Point Prediction

2021 ◽  
Vol 11 (8) ◽  
pp. 3501
Author(s):  
Jinyoung Park ◽  
JaeJoon Hwang ◽  
Jihye Ryu ◽  
Inhye Nam ◽  
Sol-A Kim ◽  
...  

The purpose of this study was to investigate the accuracy of airway volume measurement by a regression neural network-based deep learning model. A set of manually outlined airway data was used to train the algorithm for fully automatic segmentation. Manual landmarks of the airway were determined by one examiner on the mid-sagittal plane of cone-beam computed tomography (CBCT) images of 315 patients. Training on this clinical dataset was conducted with data augmentation. Based on the annotated landmarks, the airway passage was measured and segmented. The accuracy of the model was confirmed by measuring, between the examiner and the program, (1) the difference in volume of the nasopharynx, oropharynx, and hypopharynx, and (2) the Euclidean distance. For the agreement analysis, 61 samples were extracted and compared. The correlation test showed good to excellent reliability. The difference between volumes was analyzed using regression analysis: the slope of the two measurements was close to 1 and showed a linear relationship (r² = 0.975, slope = 1.02, p < 0.001). These results indicate that fully automatic segmentation of the airway is possible by deep learning, with a high correlation between manual and deep learning measurements.
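As a rough illustration of the agreement analysis described in this abstract, the sketch below regresses automatic airway volumes against manual ones with SciPy; the data and variable names are synthetic placeholders of our own, not the study's measurements.

```python
# Minimal sketch of the volume agreement analysis, assuming paired manual and
# automatic airway volumes for the 61 agreement samples; data are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
manual_vol = rng.uniform(5_000, 30_000, size=61)        # placeholder volumes (mm^3)
auto_vol = 1.02 * manual_vol + rng.normal(0, 500, 61)   # placeholder model output

# A slope near 1 and a high r^2 indicate strong manual-automatic agreement.
res = stats.linregress(manual_vol, auto_vol)
print(f"slope = {res.slope:.2f}, r^2 = {res.rvalue ** 2:.3f}, p = {res.pvalue:.1e}")
```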

Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2972
Author(s):  
Qinghua Gao ◽  
Shuo Jiang ◽  
Peter B. Shull

Hand gesture classification and finger angle estimation are both critical for intuitive human–computer interaction. However, most approaches study them in isolation. We thus propose a dual-output deep learning model to enable simultaneous hand gesture classification and finger angle estimation. Data augmentation and deep learning were used to detect spatial-temporal features via a wristband with ten modified barometric sensors. Ten subjects performed experimental testing by flexing/extending each finger at the metacarpophalangeal joint while the proposed model was used to classify each hand gesture and estimate continuous finger angles simultaneously. A data glove was worn to record ground-truth finger angles. Overall hand gesture classification accuracy was 97.5% and finger angle estimation R² was 0.922, both significantly higher than those of existing shallow learning approaches used in isolation. The proposed method could be used in applications related to human–computer interaction and in control environments with both discrete and continuous variables.
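A dual-output network of this kind can be sketched in a few lines of Keras: a shared feature extractor feeding a softmax head for gesture classes and a linear head for continuous finger angles. The window length, layer sizes, and class count below are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch of a dual-output model: simultaneous gesture classification
# and finger angle regression from windows of ten barometric sensor channels.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(100, 10))             # (time steps, sensor channels)
x = layers.Conv1D(32, 5, activation="relu")(inputs)  # spatial-temporal features
x = layers.LSTM(64)(x)                               # shared temporal representation

gesture = layers.Dense(10, activation="softmax", name="gesture")(x)  # discrete output
angles = layers.Dense(5, activation="linear", name="angles")(x)      # continuous output

model = tf.keras.Model(inputs, [gesture, angles])
model.compile(optimizer="adam",
              loss={"gesture": "sparse_categorical_crossentropy", "angles": "mse"})
```

Training both heads jointly lets classification and regression share features, which is the motivation for the dual-output design.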


Author(s):  
Wei Zhang ◽  
Gaoliang Peng ◽  
Chuanhao Li ◽  
Yuanhang Chen ◽  
Zhujun Zhang

Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis thanks to their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs) and uses wide kernels in the first convolutional layer to extract features and suppress high-frequency noise. Small convolutional kernels in the subsequent layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that the accuracy of CNNs currently applied to fault diagnosis is not very high. WDCNN not only achieves 100% classification accuracy on normal signals, but also outperforms a state-of-the-art DNN model based on frequency features under different working loads and in noisy environments.
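The core WDCNN idea, a wide kernel in the first convolutional layer followed by small kernels, can be sketched as below. Kernel sizes, strides, and channel counts are assumptions for illustration and may differ from the paper's exact configuration; AdaBN (recomputing batch normalization statistics on target-domain data at test time) is noted but not implemented here.

```python
# Hedged WDCNN-style sketch: wide first-layer kernel over raw vibration signals,
# then small kernels for multilayer nonlinear mapping. Sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2048, 1)),                  # raw vibration segment
    layers.Conv1D(16, 64, strides=16, padding="same", # wide kernel: feature extraction
                  activation="relu"),                 # and high-frequency noise suppression
    layers.BatchNormalization(),                      # AdaBN would adapt these statistics
    layers.MaxPooling1D(2),
    layers.Conv1D(32, 3, padding="same", activation="relu"),  # small kernels
    layers.BatchNormalization(),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(10, activation="softmax"),           # fault classes (assumed count)
])
```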


2021 ◽  
Vol 22 (Supplement_2) ◽  
Author(s):  
S Alabed ◽  
K Karunasaagarar ◽  
F Alandejani ◽  
P Garg ◽  
J Uthoff ◽  
...  

Abstract Funding Acknowledgements Type of funding sources: Foundation. Main funding source(s): Wellcome Trust (UK), NIHR (UK) Introduction Cardiac magnetic resonance (CMR) measurements have significant diagnostic and prognostic value. Accurate and repeatable measurements are essential to assess disease severity, evaluate therapy response and monitor disease progression. Deep learning approaches have shown promise for automatic left ventricular (LV) segmentation on CMR; however, fully automatic right ventricular (RV) segmentation remains challenging. We aimed to develop a biventricular automatic contouring model and evaluate the interstudy repeatability of the model in a prospectively recruited cohort. Methods A deep learning CMR contouring model was developed in a retrospective multi-vendor (Siemens and General Electric), multi-pathology cohort of patients, predominantly with heart failure, pulmonary hypertension and lung diseases (n = 400, ASPIRE registry). Biventricular segmentations were made on all CMR studies across cardiac phases. To test the accuracy of the automatic segmentation, 30 ASPIRE CMRs were segmented independently by two CMR experts. Each segmentation was compared to the automatic contouring, with agreement assessed using the Dice similarity coefficient (DSC). A prospective validation cohort of 46 subjects (10 healthy volunteers and 36 patients with pulmonary hypertension) was recruited to assess interstudy agreement of automatic and manual CMR assessments. Two CMR studies were performed during separate sessions on the same day. Interstudy repeatability was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman plots. Results DSC showed high agreement between the automatic contours and the expert CMR readers (figure 1), with minimal bias towards either expert. Scan-scan repeatability was higher for all automatic RV measurements (ICC 0.89 to 0.98) than for manual RV measurements (ICC 0.78 to 0.98). LV automatic and manual measurements were similarly repeatable (figure 2). Bland-Altman plots showed strong agreement with small mean differences between the scan-scan measurements (figure 2). Conclusion Fully automatic biventricular short-axis segmentations are comparable with expert manual segmentations and show excellent interstudy repeatability.
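The Dice similarity coefficient used here to compare automatic and expert contours is DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal NumPy sketch, with illustrative masks of our own:

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Example: two overlapping square contours on a 256 x 256 slice.
auto_mask = np.zeros((256, 256), bool);   auto_mask[50:150, 50:150] = True
expert_mask = np.zeros((256, 256), bool); expert_mask[60:160, 60:160] = True
print(f"DSC = {dice(auto_mask, expert_mask):.3f}")   # prints DSC = 0.810
```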


2020 ◽  
Vol 2 (4) ◽  
pp. e190102
Author(s):  
Gabriel E. Humpire-Mamani ◽  
Joris Bukala ◽  
Ernst T. Scholten ◽  
Mathias Prokop ◽  
Bram van Ginneken ◽  
...  

Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 1048
Author(s):  
Muhammad Ather Iqbal Hussain ◽  
Babar Khan ◽  
Zhijie Wang ◽  
Shenyi Ding

The weave pattern (texture) of woven fabric is considered an important factor in the design and production of high-quality fabric. Traditionally, the recognition of woven fabric has relied on manual visual inspection, which poses many challenges. Moreover, approaches based on early machine learning algorithms depend directly on handcrafted features, which are time-consuming and error-prone to engineer. Hence, an automated system is needed for the classification of woven fabric to improve productivity. In this paper, we propose a deep learning model based on data augmentation and a transfer learning approach for the classification and recognition of woven fabrics. The model uses a residual network (ResNet), in which fabric texture features are extracted and classified automatically in an end-to-end fashion. We evaluated the results of our model using metrics such as accuracy, balanced accuracy, and F1-score. The experimental results show that the proposed model is robust and achieves state-of-the-art accuracy even when the physical properties of the fabric are changed. Compared with other baseline approaches and a pretrained VGGNet deep learning model, the proposed method achieved higher accuracy when rotational orientations in fabric and proper lighting effects were considered.
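A transfer-learning setup of the kind described, a pretrained ResNet backbone with a new classification head and rotation/lighting augmentation, can be sketched as follows. The input size, augmentation ranges, and the three weave classes are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: ResNet50 transfer learning for woven fabric classification,
# with augmentation covering rotational orientation and lighting variation.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomRotation(0.25),   # rotational orientations of the fabric
    layers.RandomContrast(0.2),    # lighting variation
])

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False             # keep pretrained texture features frozen

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.resnet50.preprocess_input(x)
outputs = layers.Dense(3, activation="softmax")(base(x))  # e.g. plain/twill/satin

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```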


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6126
Author(s):  
Tae Hyong Kim ◽  
Ahnryul Choi ◽  
Hyun Mu Heo ◽  
Hyunggun Kim ◽  
Joung Hwan Mun

Pre-impact fall detection can detect a fall before a body segment hits the ground. When integrated with a protective system, it can directly prevent injury from hitting the ground. The impact acceleration peak magnitude is one of the key measurement factors affecting the severity of an injury, and it can be used as a design parameter for wearable protective devices. In our study, a novel method is proposed to predict the impact acceleration magnitude after loss of balance using a single inertial measurement unit (IMU) sensor and a sequential deep learning model. Twenty-four healthy participants took part in fall experiments. Each participant wore a single IMU sensor on the waist to collect tri-axial accelerometer and angular velocity data. A deep learning method, bi-directional long short-term memory (LSTM) regression, was applied to predict the fall's impact acceleration magnitude prior to impact (falls in five directions). To improve prediction performance, data augmentation techniques were applied to increase the size of the dataset. Our proposed model showed a mean absolute percentage error (MAPE) of 6.69 ± 0.33% with an r value of 0.93 when all three types of data augmentation techniques were applied. Additionally, MAPE was significantly reduced, by 45.2%, when the number of training datasets was increased 4-fold. These results show that the impact acceleration magnitude can be used as an activation parameter for fall prevention, such as a wearable airbag system that optimizes its deployment process to minimize fall injury in real time.
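A bi-directional LSTM regressor of the kind described can be sketched briefly; the window length, layer widths, and six IMU channels (tri-axial acceleration plus angular velocity) are assumptions for illustration, not the authors' exact model.

```python
# Hedged sketch: bi-directional LSTM regression of the impact acceleration
# peak magnitude from a fixed-length window of pre-impact IMU data.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(200, 6)),   # (time steps, IMU channels)
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1),                  # predicted peak impact magnitude
])
model.compile(optimizer="adam", loss="mse",
              metrics=["mean_absolute_percentage_error"])  # MAPE, as reported above
```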


2020 ◽  
Author(s):  
Quoc-Viet Tran ◽  
Yen-Po Chin ◽  
Phung-Anh Nguyen ◽  
Ming-Yang Lee ◽  
Hsuan-Chia Yang ◽  
...  

BACKGROUND The automatic segmentation of skin lesions has been reported using dermoscopic image data. It is, however, not applicable to real-time detection on a smartphone. OBJECTIVE This study aims to examine deep learning models for detecting and localizing the positions of moles in captured images, so that cropped images can be extracted precisely without any other objects. METHODS The data were collected through public health events in Taiwan between December 2017 and February 2019. Participants who were concerned about the risk of their moles were asked to take mole images. The images were then assessed, and their risks determined, by three dermatologists. We labeled the mole positions with bounding boxes using the 'LabelImg' tool. Two architectures, SSD and Faster-RCNN, were used to build eight different mole-detection models. The confidence score, intersection over union (IoU), and mean average precision (mAP) with the COCO metrics were used to measure the accuracy of the models. RESULTS A total of 2790 mole images were used for the development and validation of the models. The Faster-RCNN Inception ResNet model had the highest overall mAP of 0.245, followed by 0.234 for the Faster-RCNN ResNet 101 and 0.227 for the Faster-RCNN ResNet 50 model. The SSD MobileNet v1 model had the lowest mAP of 0.142. The Faster-RCNN Inception ResNet model had a dominant AP of 0.377, 0.236, and 0.129 for large, medium, and small moles, respectively. We observed that the Faster-RCNN Inception ResNet showed the best performance, with high confidence scores (over 97%) for all kinds of moles. CONCLUSIONS We successfully developed detection models based on the SSD and Faster-RCNN techniques. These models may help researchers accurately localize the position of moles and their risks in a feasible detection app on a smartphone. We provide the pretrained models for further studies via GitHub: https://github.com/vietdaica/Mole_Detection.
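The IoU metric used to score these detectors is the ratio of the overlap of a predicted and a ground-truth box to their union. A minimal standalone sketch for axis-aligned boxes in (x1, y1, x2, y2) form:

```python
# Intersection over union (IoU) between two axis-aligned bounding boxes.
def iou(box_a, box_b):
    # Corners of the intersection rectangle.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # partial overlap -> ~0.143
```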

