Automated recognition of ultrasound cardiac views based on deep learning with graph constraint

Author(s):  
Yanhua Gao ◽  
Yuan Zhu ◽  
Bo Liu ◽  
Yue Hu ◽  
Youmin Guo

Objective: In transthoracic echocardiographic (TTE) examination, it is essential to identify the cardiac views accurately. Computer-aided recognition is expected to improve the accuracy of the TTE examination. Methods: This paper proposes a new method for automatic recognition of cardiac views based on deep learning, comprising three strategies. First, a spatial transformer network is applied to learn cardiac shape changes during the cardiac cycle, which reduces intra-class variability. Second, a channel attention mechanism is introduced to adaptively recalibrate channel-wise feature responses. Finally, unlike conventional deep learning methods, which learn from each input image individually, structured signals are applied via a graph of similarities among images. These signals are transformed into a graph-based image embedding, which acts as an unsupervised regularization constraint to improve generalization accuracy. Results: The proposed method was trained and tested on 171,792 cardiac images from 584 subjects. Compared with the best previously reported result, the overall accuracy of the proposed method on cardiac image classification is 99.10% vs. 91.7%, and the mean AUC is 99.36%. Moreover, the overall accuracy is 98.15% and the mean AUC is 98.96% on an independent test set of 34,211 images from 100 subjects. Conclusion: The proposed method achieves state-of-the-art results and is expected to serve as an automated tool for cardiac view recognition. This work confirms the potential of deep learning in ultrasound medicine.
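
The channel attention described in the abstract is consistent with squeeze-and-excitation-style recalibration. The following PyTorch sketch shows that generic mechanism, not the authors' exact implementation; the `ChannelAttention` module name and the reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel recalibration (illustrative only)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # "squeeze": global spatial average per channel
        self.fc = nn.Sequential(                   # "excitation": per-channel gates in (0, 1)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                         # recalibrated feature map
```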

Diagnostics ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1177
Author(s):  
Yanhua Gao ◽  
Yuan Zhu ◽  
Bo Liu ◽  
Yue Hu ◽  
Gang Yu ◽  
...  

In transthoracic echocardiographic (TTE) examination, it is essential to identify the cardiac views accurately. Computer-aided recognition is expected to improve the accuracy of cardiac views in the TTE examination, particularly when the images are obtained by non-trained providers. A new method for automatic recognition of cardiac views is proposed, consisting of three processes. First, a spatial transformer network is applied to learn cardiac shape changes during a cardiac cycle, which reduces intra-class variability. Second, a channel attention mechanism is introduced to adaptively recalibrate channel-wise feature responses. Finally, structured signals derived from the similarities among cardiac views are transformed into a graph-based image embedding, which acts as an unsupervised regularization constraint to improve generalization accuracy. The proposed method is trained and tested on 171,792 cardiac images from 584 subjects. The overall accuracy of the proposed method on cardiac image classification is 99.10%, and the mean AUC is 99.36%, better than known methods. Moreover, the overall accuracy is 97.73%, and the mean AUC is 98.59% on an independent test set with 37,883 images from 100 subjects. The proposed automated recognition model achieves accuracy comparable to true cardiac views and can thus be applied clinically to help find standard cardiac views.
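
A minimal sketch of how a similarity graph among images can act as an unsupervised regularizer on the embeddings: a graph-Laplacian-style smoothness term added to the supervised loss. The loss form and the weighting factor are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def graph_embedding_loss(embeddings: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
    """Unsupervised graph regularizer (illustrative): pull embeddings of images
    connected in the similarity graph towards each other.

    embeddings: (N, D) image embeddings from the backbone
    adjacency:  (N, N) non-negative similarity weights between images
    """
    dist = torch.cdist(embeddings, embeddings) ** 2          # pairwise squared distances
    return (adjacency * dist).sum() / adjacency.sum().clamp(min=1e-8)

# Combined objective (the 0.1 weight is a placeholder, not from the paper):
# loss = F.cross_entropy(logits, labels) + 0.1 * graph_embedding_loss(emb, adj)
```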


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4486
Author(s):  
Niall O’Mahony ◽  
Sean Campbell ◽  
Lenka Krpalkova ◽  
Anderson Carvalho ◽  
Joseph Walsh ◽  
...  

Fine-grained change detection in sensor data is very challenging for artificial intelligence, though it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review focuses on the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. Our research focuses on methods of harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence. We review methods for transforming and projecting embedding space such that significant changes can be communicated more effectively and a more comprehensive interpretation of the underlying relationships in sensor data is facilitated. We conduct this research as part of our work towards a method for aligning the axes of latent embedding space with meaningful real-world metrics, so that the reasoning behind the detection of change in relation to past observations may be revealed and adjusted. This is an important topic for many fields concerned with producing more meaningful and explainable outputs from deep learning, and with providing means of knowledge injection and model calibration in order to maintain user confidence.
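
One simple way to relate a latent embedding axis to a real-world metric, in the spirit of the alignment goal described above, is a linear probe. The sketch below uses random placeholder data and scikit-learn, and is illustrative only; it is not any specific method from the review.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: embeddings from a representation-learning model and a
# measured real-world quantity for each sample (names are illustrative).
embeddings = np.random.randn(200, 64)          # latent vectors
measured_metric = np.random.randn(200)         # e.g. a physical sensor reading

# A linear probe yields one direction in latent space per real-world metric.
probe = LinearRegression().fit(embeddings, measured_metric)
axis = probe.coef_ / np.linalg.norm(probe.coef_)

# Projecting embeddings onto this axis expresses change in metric units,
# which is one crude way to "align" a latent axis with a real-world quantity.
projected = embeddings @ axis
```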


2021 ◽  
Vol 54 (1) ◽  
pp. 1-39
Author(s):  
Zara Nasar ◽  
Syed Waqar Jaffry ◽  
Muhammad Kamran Malik

With the advent of Web 2.0, many online platforms produce massive amounts of textual data. With ever-increasing textual data at hand, it is of immense importance to extract information nuggets from these data. One approach towards effectively harnessing this unstructured text is its transformation into structured text. Hence, this study presents an overview of approaches that can be applied to extract key insights from textual data in a structured way. To this end, Named Entity Recognition and Relation Extraction are the main focus of this review. The former deals with the identification of named entities, and the latter with the problem of extracting relations between sets of entities. This study covers early approaches as well as developments made up to now using machine learning models. The survey findings conclude that deep-learning-based hybrid and joint models currently dominate the state of the art. It is also observed that annotated benchmark datasets for various textual-data sources such as Twitter and other social forums are not available, and this scarcity of datasets has resulted in relatively little progress in these domains. Additionally, the majority of state-of-the-art techniques are offline and computationally expensive. Finally, with the increasing focus on deep-learning frameworks, there is a need to understand and explain the processes taking place inside deep architectures.
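
A toy illustration of the two tasks this survey addresses, using spaCy's pretrained pipeline for the NER step; the relation triple is hand-written to show the expected output shape and is not produced by any model discussed in the survey.

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple acquired Shazam in 2018.")

# Named Entity Recognition: identify typed spans in the text
entities = [(ent.text, ent.label_) for ent in doc.ents]
# e.g. [('Apple', 'ORG'), ('Shazam', 'ORG'), ('2018', 'DATE')]

# Relation Extraction: link pairs of entities with a typed relation
relation = ("Apple", "acquired", "Shazam")   # hand-written example of a relation triple
```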


Author(s):  
J.M. Murray ◽  
P. Pfeffer ◽  
R. Seifert ◽  
A. Hermann ◽  
J. Handke ◽  
...  

Objective: Manual plaque segmentation in microscopy images is a time-consuming process in atherosclerosis research and is potentially subject to unacceptable user-to-user variability and observer bias. We address this by releasing Vesseg, a tool that includes state-of-the-art deep learning models for atherosclerotic plaque segmentation. Approach and Results: Vesseg is a containerized, extensible, open-source, and user-oriented tool. It includes 2 models, trained and tested on 1089 hematoxylin-eosin-stained mouse model atherosclerotic brachiocephalic artery sections. The models were compared to 3 human raters. Vesseg can be accessed at https://vesseg.online or downloaded. The models show mean Soerensen-Dice scores of 0.91±0.15 for plaque and 0.97±0.08 for lumen pixels. The mean accuracy is 0.98±0.05. Vesseg is already in active use, generating time savings of >10 minutes per slide. Conclusions: Vesseg brings state-of-the-art deep learning methods to atherosclerosis research, providing drastic time savings while allowing for continuous improvement of the models and the underlying pipeline.
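
For reference, the Soerensen-Dice coefficient reported above can be computed from a predicted and a ground-truth binary mask as follows; this is the standard metric definition, not code from Vesseg.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Soerensen-Dice coefficient between two binary masks; 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Usage: dice_score(plaque_mask_pred, plaque_mask_true) for plaque pixels,
# and likewise for lumen pixels.
```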


Author(s):  
Usman Ahmed ◽  
Jerry Chun-Wei Lin ◽  
Gautam Srivastava

Deep learning methods have led to state-of-the-art medical applications, such as image classification and segmentation. Data-driven deep learning applications can help stakeholders to collaborate. However, limited labelled data prevent a deep learning algorithm trained on one domain from generalizing to another. To handle this problem, meta-learning helps the model learn from a small set of data. We propose a meta-learning-based image segmentation model that combines the learning of a state-of-the-art model and then uses it to achieve domain adaptation and high accuracy. We also propose a preprocessing algorithm to increase the usability of the segmented parts and remove noise from new test images. The proposed model achieves 0.94 precision and 0.92 recall, an improvement of 3.3% over state-of-the-art algorithms.
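
A generic first-order meta-learning update (Reptile-style) is sketched below purely to illustrate the idea of adapting from a small set of data per task; it is not the authors' algorithm, and `model`, `tasks`, and `loss_fn` are placeholders supplied by the caller.

```python
import copy
import torch

def reptile_step(model, tasks, loss_fn, inner_lr=1e-3, meta_lr=0.1, inner_steps=5):
    """Illustrative first-order meta-learning update over a list of small tasks."""
    meta_weights = copy.deepcopy(model.state_dict())
    for images, masks in tasks:                        # each task holds only a few examples
        model.load_state_dict(meta_weights)
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                   # adapt to the small task
            opt.zero_grad()
            loss_fn(model(images), masks).backward()
            opt.step()
        with torch.no_grad():                          # move meta-weights towards adapted ones
            adapted = model.state_dict()
            for name, w in meta_weights.items():
                if w.is_floating_point():
                    w += meta_lr * (adapted[name] - w)
    model.load_state_dict(meta_weights)
```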


2021 ◽  
Vol 14 (11) ◽  
pp. 1950-1963
Author(s):  
Jie Liu ◽  
Wenqian Dong ◽  
Qingqing Zhou ◽  
Dong Li

Cardinality estimation is a fundamental and critical problem in databases. Recently, many estimators based on deep learning have been proposed to solve this problem, and they have achieved promising results. However, these estimators struggle to provide accurate results for complex queries because they do not capture real inter-column and inter-table correlations. Furthermore, none of these estimators provide uncertainty information about their estimations. In this paper, we present a join cardinality estimator called Fauce. Fauce learns the correlations across all columns and all tables in the database. It also provides uncertainty information for each estimation. Among all studied learned estimators, our results are promising: (1) Fauce is a lightweight estimator with 10× faster inference than the state-of-the-art estimator; (2) Fauce is robust to complex queries, providing 1.3×--6.7× smaller estimation errors for complex queries compared with the state-of-the-art estimator; (3) to the best of our knowledge, Fauce is the first estimator that incorporates uncertainty information for cardinality estimation into a deep learning model.
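
A toy regression-style estimator with an uncertainty estimate is sketched below; the Monte-Carlo-dropout mechanism shown here is a generic stand-in for "uncertainty information", not necessarily the technique Fauce uses, and the featurization of queries is assumed to be done elsewhere.

```python
import torch
import torch.nn as nn

class CardinalityEstimator(nn.Module):
    """Toy model mapping a featurized query to log-cardinality (illustrative only)."""
    def __init__(self, in_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, query_features, samples: int = 30):
    model.train()                        # keep dropout active at inference time
    with torch.no_grad():
        preds = torch.stack([model(query_features) for _ in range(samples)])
    return preds.mean(0), preds.std(0)   # estimate and its uncertainty (log scale)
```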


Author(s):  
Yang Liu ◽  
Yachao Yuan ◽  
Jing Liu

Automatic defect classification is vital to ensure product quality, especially in steel production. In the real world, the number of labelled samples that can be collected is limited due to high labour costs, and the gathered dataset is usually imbalanced, making accurate steel defect classification very challenging. In this paper, a novel deep learning model for imbalanced multi-label surface defect classification, named ImDeep, is proposed. It can be deployed easily in steel production lines to identify different defect types on the steel's surface. ImDeep incorporates three key techniques, i.e., Imbalanced Sampler, Fussy-FusionNet, and Transfer Learning. It improves the model's multi-label classification performance and reduces the model's complexity over small datasets with low latency. The performance of different fusion strategies and of the three key techniques of ImDeep is verified. Simulation results show that ImDeep outperforms the state of the art on public datasets of varied sizes. Specifically, ImDeep achieves about 97% accuracy for steel surface defect classification over a small imbalanced dataset with low latency, an improvement of about 10% over the state of the art.
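
A common way to counter class imbalance during training is weighted resampling, sketched below with PyTorch's `WeightedRandomSampler` on toy single-label data; ImDeep's own Imbalanced Sampler (and its handling of the multi-label case) may differ, and `dataset` is a placeholder.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

labels = torch.tensor([0, 0, 0, 0, 1, 2])            # toy class labels (imbalanced)
class_counts = torch.bincount(labels).float()
sample_weights = 1.0 / class_counts[labels]          # rare classes drawn more often

sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
# loader = DataLoader(dataset, batch_size=32, sampler=sampler)  # dataset is a placeholder
```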


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1459 ◽  
Author(s):  
Tamás Czimmermann ◽  
Gastone Ciuti ◽  
Mario Milazzo ◽  
Marcello Chiurazzi ◽  
Stefano Roccella ◽  
...  

This paper reviews automated visual-based defect detection approaches applicable to various materials, such as metals, ceramics and textiles. In the first part of the paper, we present a general taxonomy of the different defects, which fall into two classes: visible (e.g., scratches, shape errors) and palpable (e.g., cracks, bumps) defects. Then, we describe artificial visual processing techniques that aim to understand the captured scene in a mathematical/logical way. We continue with a survey of textural defect detection based on statistical, structural and other approaches. Finally, we report the state of the art for approaching the detection and classification of defects through supervised and unsupervised classifiers and deep learning.


Computers ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 37 ◽  
Author(s):  
Luca Cappelletti ◽  
Tommaso Fontana ◽  
Guido Walter Di Donato ◽  
Lorenzo Di Tucci ◽  
Elena Casiraghi ◽  
...  

Missing data imputation has been a hot topic in the past decade, and many state-of-the-art works have been presented proposing novel, interesting solutions that have been applied in a variety of fields. In the same period, the successful results achieved by deep learning techniques have opened the way to their application to difficult problems where human skill cannot provide a reliable solution. Not surprisingly, some deep learners, mainly exploiting encoder-decoder architectures, have also been designed and applied to the task of missing data imputation. However, most of the proposed imputation techniques have not been designed to tackle “complex data”, that is, high-dimensional data belonging to datasets with huge cardinality and describing complex problems. Precisely, they often need critical parameters to be set manually, or they exploit complex architectures and/or training phases that make their computational load impractical. In this paper, after clustering the state-of-the-art imputation techniques into three broad categories, we briefly review the most representative methods and then describe our data imputation proposals, which exploit deep learning techniques specifically designed to handle complex data. Comparative tests on genome sequences show that our deep learning imputers outperform the state-of-the-art KNN-imputation method when filling gaps in human genome sequences.
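
For orientation, the KNN-imputation baseline mentioned above can be run with scikit-learn as shown below on toy data; genome-scale inputs and the proposed deep imputers are not reproduced here.

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [7.0, 8.0, 9.0]])

imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)   # missing entries replaced by neighbour averages
```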

