A Convolutional Neural Network Combining Discriminative Dictionary Learning and Sequence Tracking for Left Ventricular Detection

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3693
Author(s):  
Xuchu Wang ◽  
Fusheng Wang ◽  
Yanmin Niu

Cardiac MRI left ventricular (LV) detection is frequently employed to assist cardiac registration or segmentation in computer-aided diagnosis of heart disease. Focusing on the challenging problems in LV detection, such as the large span and varying size of LV areas in MRI and the heterogeneous myocardial and blood-pool parts within them, this paper proposes a convolutional neural network (CNN) detection method that combines discriminative dictionary learning and sequence tracking. To efficiently represent the different sub-objects in the LV area, the method deploys a discriminative dictionary to classify superpixel-oversegmented regions; the target LV region is then constructed by label merging, and multi-scale adaptive anchors are generated in the target region to handle the varying sizes. Combined with non-differential anchors in a region proposal network, the left ventricle is localized by a CNN-based regression and classification strategy. To overcome the slow classification speed of the discriminative dictionary, a fast module that generates left ventricular scale-adaptive anchors by sequence tracking on the same individual is also proposed. The method and its variants were tested on the heart atlas data set. Experimental results verified the effectiveness of the proposed method: it obtained 92.95% on the AP50 metric, the most competitive result among typical related methods. The combination of discriminative dictionary learning and scale-adaptive anchors improves the adaptability of the algorithm to varying left ventricular areas. This study should benefit cardiac image-processing tasks such as region-of-interest cropping and left ventricle volume measurement.
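As an illustration of the multi-scale adaptive anchor idea (not the authors' code; the box format, scale factors, and aspect ratios below are assumptions), anchors of several scales and aspect ratios can be generated around a candidate LV region:

```python
# Illustrative sketch: multi-scale anchors centred on a candidate LV region.
# Boxes are (x0, y0, x1, y1); anchors are (cx, cy, w, h).

def multiscale_anchors(box, scales=(0.75, 1.0, 1.25), ratios=(0.5, 1.0, 2.0)):
    """Return anchor boxes (cx, cy, w, h) adapted to a target region box."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = x1 - x0, y1 - y0
    anchors = []
    for s in scales:
        for r in ratios:
            # scale the region box by s, then skew width/height by sqrt(r)
            # so area stays fixed per scale while aspect ratio varies
            aw = s * w * r ** 0.5
            ah = s * h / r ** 0.5
            anchors.append((cx, cy, aw, ah))
    return anchors

anchors = multiscale_anchors((40, 60, 120, 140))  # 3 scales x 3 ratios
```

All nine anchors share the region's centre, which is what makes them "adaptive" to the detected LV region rather than tiled over the whole image.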

2019 ◽  
Author(s):  
Zini Jian ◽  
Xianpei Wang ◽  
Jingzhe Zhang ◽  
Xinyu Wang ◽  
Youbin Deng

Abstract Background: Clinically, doctors obtain the left ventricular posterior wall thickness (LVPWT) mainly by observing an echocardiographic video stream to capture a single frame of diagnostic significance, and then marking two key points on either side of the left ventricular posterior wall, by experience, for computer measurement. In practice this point selection is subjective: it is time-consuming and laborious, and the edge is difficult to locate accurately, which introduces errors into the measurement results. Methods: In this paper, a convolutional neural network model for left ventricular posterior wall positioning was built under the TensorFlow framework, and target-region images were obtained by processing the positioning results with non-local means filtering and a morphological opening operation. An edge-detection algorithm based on threshold segmentation was then applied: after the contour was extracted by adjusting the segmentation threshold through prior analysis and the Otsu algorithm, the designed algorithm completed computerized point selection and measurement of the posterior wall thickness. Results: The proposed method effectively extracts the left ventricular posterior wall contour and measures its thickness. The relative error between the measured result and the hospital's measured value is less than 15%, below the 20% repeatability error that is acceptable in clinical practice. Conclusions: The method proposed in this paper requires little manual intervention and can reduce the workload of doctors.
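The Otsu thresholding step mentioned above can be sketched in a few lines; this is a minimal pure-Python illustration of the method (not the authors' code), applied to a toy intensity list:

```python
# Minimal sketch of Otsu's threshold selection: pick the gray level that
# maximises between-class variance of the resulting fore/background split.

def otsu_threshold(pixels, levels=256):
    """Return the gray level maximising between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0   # running intensity sum of the background class
    w_bg = 0       # running pixel count of the background class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity populations: the threshold falls between them.
toy = [20] * 50 + [200] * 50
t = otsu_threshold(toy)
```

In the pipeline described above this would run on the denoised, opened target-region image rather than a toy list, with the prior analysis narrowing the candidate threshold range.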



2020 ◽  
Vol 13 (12) ◽  
Author(s):  
Orod Razeghi ◽  
Iain Sim ◽  
Caroline H. Roney ◽  
Rashed Karim ◽  
Henry Chubb ◽  
...  

Background: Pathological atrial fibrosis is a major contributor to sustained atrial fibrillation. Currently, late gadolinium enhancement (LGE) scans provide the only noninvasive estimate of atrial fibrosis. However, widespread adoption of atrial LGE has been hindered partly by nonstandardized image processing techniques, which can be operator and algorithm dependent. Minimal validation and limited access to transparent software platforms have also exacerbated the problem. This study aims to estimate atrial fibrosis from cardiac magnetic resonance scans using a reproducible, operator-independent, fully automatic, open-source, end-to-end pipeline. Methods: A multilabel convolutional neural network was designed to accurately delineate atrial structures including the blood pool, pulmonary veins, and mitral valve. The output from the network removed the operator-dependent steps in a reproducible pipeline and allowed for automated estimation of atrial fibrosis from LGE-cardiac magnetic resonance scans. The pipeline results were compared against manual fibrosis burdens, calculated using published thresholds: image intensity ratio 0.97, image intensity ratio 1.61, and mean blood pool signal +3.3 SD. Results: We validated our methods on a large 3-dimensional LGE-cardiac magnetic resonance data set of 207 labeled scans. Automatic atrial segmentation achieved a 91% Dice score, compared with the 85% interobserver Dice agreement between operators. Intraclass correlation coefficients of the automatic pipeline with manually generated results were excellent and better than or equal to interobserver correlations for all 3 thresholds: 0.94 versus 0.88, 0.99 versus 0.99, and 0.99 versus 0.96 for the image intensity ratio 0.97, image intensity ratio 1.61, and +3.3 SD thresholds, respectively. Automatic analysis required 3 minutes per case on a standard workstation. The network and the analysis software are publicly available.
Conclusions: Our pipeline provides a fully automatic estimation of fibrosis burden from LGE-cardiac magnetic resonance scans that is comparable to manual analysis. This removes one key source of variability in the measurement of atrial fibrosis.
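For context on the image-intensity-ratio (IIR) thresholds the pipeline is validated against, here is a minimal sketch (toy values, not the study's data): a wall voxel counts as fibrotic when its intensity exceeds ratio × mean blood-pool intensity.

```python
# Illustrative sketch of IIR-threshold fibrosis quantification.
# All intensity values below are made up for demonstration.

def fibrosis_burden(wall_intensities, blood_pool_intensities, ratio):
    """Fraction of atrial-wall voxels above ratio * mean blood-pool signal."""
    mean_bp = sum(blood_pool_intensities) / len(blood_pool_intensities)
    threshold = ratio * mean_bp
    fibrotic = sum(1 for v in wall_intensities if v > threshold)
    return fibrotic / len(wall_intensities)

wall = [80, 90, 100, 130, 150, 160]   # toy LGE wall intensities
blood = [100, 100, 100, 100]          # toy blood-pool intensities
burden = fibrosis_burden(wall, blood, ratio=0.97)
```

A higher ratio (e.g. 1.61) yields a stricter threshold and therefore a lower reported burden, which is why the study evaluates agreement at each published threshold separately.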


2020 ◽  
Vol 10 (5) ◽  
pp. 1023-1032
Author(s):  
Lin Qi ◽  
Haoran Zhang ◽  
Xuehao Cao ◽  
Xuyang Lyu ◽  
Lisheng Xu ◽  
...  

Accurate segmentation of the left ventricular (LV) blood pool and myocardium (left ventricular epicardium, MYO) from cardiac magnetic resonance (MR) images can help doctors quantify LV ejection fraction and myocardial deformation. To reduce the burden of manual segmentation, in this study we propose an automated, concurrent segmentation method for the LV and MYO. First, we employ a convolutional neural network (CNN) architecture to extract the region of interest (ROI) from short-axis cardiac cine MR images as a preprocessing step. Next, we present a multi-scale feature fusion (MSFF) CNN with a new weighted Dice index (WDI) loss function to obtain concurrent segmentation of the LV and MYO. We use MSFF modules at three scales to extract different features, and concatenate feature maps through short and long skip connections in the encoder and decoder paths to capture more complete context and geometric structure for better segmentation. Finally, we compare the proposed method with Fully Convolutional Networks (FCN) and U-Net on combined cardiac data sets from MICCAI 2009 and ACDC 2017. Experimental results demonstrate that the proposed method segments the LV and MYO effectively on the combined data sets, indicating its potential for clinical application.
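The Dice index underlying the WDI loss can be illustrated with a minimal sketch (the paper's exact weighting scheme may differ; the masks below are toy data):

```python
# Illustrative sketch: Dice index and a class-weighted Dice loss in the
# spirit of a WDI loss, over flattened binary masks.

def dice(pred, target, eps=1e-7):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def weighted_dice_loss(preds, targets, weights):
    """1 - weighted mean Dice over classes (e.g. LV blood pool and MYO)."""
    total_w = sum(weights)
    score = sum(w * dice(p, t) for w, p, t in zip(weights, preds, targets))
    return 1 - score / total_w

lv_pred,  lv_true  = [1, 1, 0, 0], [1, 1, 0, 0]   # perfect overlap
myo_pred, myo_true = [1, 0, 1, 0], [1, 1, 0, 0]   # partial overlap
loss = weighted_dice_loss([lv_pred, myo_pred], [lv_true, myo_true], [1.0, 1.0])
```

Weighting the per-class Dice terms lets the loss compensate for the MYO's thin, hard-to-segment shape relative to the larger blood pool.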


2021 ◽  
Vol 7 (2) ◽  
pp. 356-362
Author(s):  
Harry Coppock ◽  
Alex Gaskell ◽  
Panagiotis Tzirakis ◽  
Alice Baird ◽  
Lyn Jones ◽  
...  

Background: Since the emergence of COVID-19 in December 2019, multidisciplinary research teams have wrestled with how best to control the pandemic in light of its considerable physical, psychological and economic damage. Mass testing has been advocated as a potential remedy; however, mass testing using physical tests is a costly and hard-to-scale solution. Methods: This study demonstrates the feasibility of an alternative form of COVID-19 detection, harnessing digital technology through the use of audio biomarkers and deep learning. Specifically, we show that a deep neural network based model can be trained to detect symptomatic and asymptomatic COVID-19 cases using breath and cough audio recordings. Results: Our model, a custom convolutional neural network, demonstrates strong empirical performance on a data set of 355 crowdsourced participants, achieving an area under the receiver operating characteristic curve of 0.846 on the task of COVID-19 classification. Conclusion: This study offers a proof of concept for diagnosing COVID-19 using cough and breath audio signals and motivates a comprehensive follow-up research study on a wider data sample, given the evident advantages of a low-cost, highly scalable digital COVID-19 diagnostic tool.
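The 0.846 figure is an ROC AUC; as a reminder of what that metric measures, here is a minimal pairwise implementation (the labels and scores below are made up):

```python
# Illustrative sketch: ROC AUC as the probability that a randomly chosen
# positive example scores higher than a randomly chosen negative one.

def roc_auc(labels, scores):
    """Pairwise AUC, counting ties as 0.5 (equals area under ROC curve)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                 # toy COVID / non-COVID labels
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]     # toy classifier scores
auc = roc_auc(labels, scores)
```

An AUC of 0.5 is chance level and 1.0 is a perfect ranking, so 0.846 indicates the model ranks positive cases above negative ones most of the time.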


2020 ◽  
Vol 2020 ◽  
pp. 1-6
Author(s):  
Jian-ye Yuan ◽  
Xin-yuan Nan ◽  
Cheng-rong Li ◽  
Le-le Sun

Considering that garbage classification is urgent, a 23-layer convolutional neural network (CNN) model is designed in this paper, with emphasis on real-time garbage classification, to address the low accuracy of garbage classification and recycling and the difficulty of manual recycling. First, depthwise separable convolution was used to reduce the Params of the model. Then, an attention mechanism was used to improve the accuracy of the garbage classification model. Finally, model fine-tuning was used to further improve its performance. We compared the model with classic image classification models, including AlexNet, VGG16, and ResNet18, and with lightweight classification models, including MobileNetV2 and ShuffleNetV2, and found that the model, GAF_dense, has a higher accuracy rate and fewer Params and FLOPs. To further check the performance of the model, we tested it on the CIFAR-10 data set and found that the accuracy rates of GAF_dense are 0.018 and 0.03 higher than those of ResNet18 and ShuffleNetV2, respectively. On the ImageNet data set, the accuracy rates of GAF_dense are 0.225 and 0.146 higher than those of ResNet18 and ShuffleNetV2, respectively. Therefore, the garbage classification model proposed in this paper is suitable for garbage classification and other classification tasks that protect the ecological environment, and can be applied in areas such as environmental science, children's education, and environmental protection.
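Why depthwise separable convolution reduces Params can be shown by counting the parameters of a standard versus a separable 2D convolution (bias terms omitted; the channel sizes below are illustrative, not taken from the paper):

```python
# Parameter counts for a k x k conv layer with c_in inputs, c_out outputs.

def standard_conv_params(c_in, c_out, k):
    """Standard convolution: every output channel sees every input channel."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise (one k x k filter per input channel) + pointwise 1x1 mix."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

std = standard_conv_params(64, 128, 3)   # 64 * 128 * 9   = 73728
sep = separable_conv_params(64, 128, 3)  # 64 * 9 + 8192  =  8768
```

For this layer the separable variant uses roughly 8x fewer parameters; the saving grows with kernel size and channel count, which is what makes the factorization attractive for a real-time model.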

