A Deep Learning Approach for the Automatic Identification of the Left Atrium Within CT Scans

Author(s):  
Alex Deakyne ◽  
Erik Gaasedelen ◽  
Paul A. Iaizzo

Recent advancements in deep learning have increased the performance achievable with computer vision tools. A major development has been the use of convolutional neural networks (CNNs) for automatically detecting features within a given image. Architectures such as YOLO [1] have achieved remarkably high performance in the real-time detection of everyday objects within images. To date, however, there have been few reports of deep learning applied to detecting anatomical features within CT scans, especially in the cardiovascular space. We propose here an automatic anatomical feature detection pipeline for identifying the left atrium using a CNN. Slices of CT scans were fed into a single neural network that predicted the four bounding-box coordinates encapsulating the left atrium. The network can be optimized end-to-end and generates predictions at great speed, achieving a validation smooth L1 loss of 11.95 when predicting the left atrial bounding boxes.
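The abstract reports a validation smooth L1 loss of 11.95 over the four predicted bounding-box coordinates. As a minimal sketch of that loss (the box coordinates below are hypothetical, and the `beta` transition point is the common default, not a value stated by the paper):

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber-like) loss, summed over the box coordinates.

    Quadratic for residuals smaller than `beta`, linear beyond it,
    which makes the loss less sensitive to outlier coordinates than L2.
    """
    diff = np.abs(np.asarray(pred, float) - np.asarray(target, float))
    per_coord = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    return float(per_coord.sum())

# Hypothetical boxes in (x_min, y_min, x_max, y_max) pixel coordinates.
pred_box   = [120.0, 88.0, 210.0, 170.0]
target_box = [118.0, 90.0, 215.0, 168.0]
print(smooth_l1(pred_box, target_box))  # → 9.0
```

Summing over a whole validation set (or averaging per image) then yields a single scalar like the 11.95 the authors report.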

Author(s):  
Vlad Vasilescu ◽  
Ana Neacsu ◽  
Emilie Chouzenoux ◽  
Jean-Christophe Pesquet ◽  
Corneliu Burileanu

2020 ◽  
Vol 20 (2020) ◽  
pp. 370-371
Author(s):  
Marcelo Igor Lourenço De Souza ◽  
Jean David Job Emmanuel Marie Caprace ◽  
Ramiro Fernandes Ramos ◽  
João Vitor Marques de Oliveira Moita ◽  
Luisa Nogueira de Azeredo Coutinho Soares ◽  
...  

2020 ◽  
Vol 9 (2) ◽  
Author(s):  
Rohan Bhansali ◽  
Rahul Kumar ◽  
Duke Writer

Coronavirus disease (COVID-19) is currently the cause of a global pandemic affecting millions of people around the world. Inadequate testing resources have left many people undiagnosed and consequently untreated; using computed tomography (CT) scans for diagnosis is an alternative that bypasses this limitation. Unfortunately, CT scan analysis is time-consuming and labor-intensive, rendering it generally infeasible in most diagnostic situations. To alleviate this problem, previous studies have applied multiple deep learning techniques to biomedical images such as CT scans. In particular, convolutional neural networks (CNNs) have been shown to provide medical diagnoses with a high degree of accuracy. A common issue in training CNNs for biomedical applications is the requirement for large datasets. In this paper, we propose the use of affine transformations to artificially magnify the size of our dataset. Additionally, we propose the use of the Laplace filter to increase feature detection in CT scan analysis. We then feed the preprocessed images to a novel deep CNN architecture: CoronaNet. We find that the Laplace filter significantly increases the performance of CoronaNet across all metrics, and that affine transformations successfully magnify the dataset without resulting in high degrees of overfitting. Specifically, we achieved an accuracy of 92% and an F1 score of 0.8735. Our research describes the potential of the Laplace filter to significantly increase deep CNN performance in biomedical applications such as COVID-19 diagnosis.
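The two preprocessing ideas in the abstract can be sketched in a few lines of NumPy. This is an illustration only: the 3x3 Laplacian kernel is the standard discrete form (the paper's exact filter settings are not given), and flips/90-degree rotations stand in for the general affine transformations the authors describe:

```python
import numpy as np

def laplace_filter(img):
    """Apply a 3x3 Laplacian kernel (zero-padded) to accentuate edges."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)
    padded = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def augment(img, rng):
    """Simple affine-style augmentation: random flip plus 90-degree rotation."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return np.rot90(img, k=int(rng.integers(0, 4)))

rng = np.random.default_rng(0)
scan = rng.random((8, 8))                      # stand-in for one CT slice
edges = laplace_filter(scan)                   # edge-enhanced input
batch = [augment(scan, rng) for _ in range(4)] # artificially magnified data
print(edges.shape, len(batch))
```

A full affine pipeline would also include small rotations, shears, and translations via an interpolating warp; the structure (filter, then augment, then feed the network) is the same.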


2018 ◽  
Author(s):  
Chaoxin Wang ◽  
Xukun Li ◽  
Doina Caragea ◽  
Raju Bheemanahalli ◽  
S.V. Krishna Jagadish

Aboveground plant efficiency has improved significantly in recent years, and the improvement has led to a steady increase in global food production. Improving belowground plant efficiency has the potential to further increase food production. However, belowground plant roots are harder to study, due to the inherent challenges of root phenotyping. Several tools for identifying root anatomical features in root cross-section images have been proposed. However, the existing tools are not fully automated and require significant human effort to produce accurate results. To address this limitation, we propose a fully automated approach, called Deep Learning for Root Anatomy (DL-RootAnatomy), for identifying anatomical traits in root cross-section images. Using the Faster Region-based Convolutional Neural Network (Faster R-CNN), the DL-RootAnatomy models detect objects such as the root, stele, and late metaxylem, and predict rectangular bounding boxes around them. The bounding boxes are then used to estimate the root diameter, stele diameter, and the number and average diameter of late metaxylem vessels. Experimental evaluation using standard object detection metrics, such as intersection-over-union and mean average precision, has shown that our models can accurately detect the root, stele, and late metaxylem objects. Furthermore, the measurements estimated from the predicted bounding boxes have a very small root mean square error when compared with the corresponding ground-truth values, suggesting that DL-RootAnatomy can be used to accurately quantify anatomical features. Finally, a comparison with existing approaches, which involve some degree of human interaction, has shown that the proposed approach is more accurate than existing approaches on a subset of our data.
A webserver for performing root anatomy analysis using our pretrained deep learning models is available at https://rootanatomy.org, together with a link to a GitHub repository containing code that can be used to re-train or fine-tune our network with other types of root cross-section images. The labeled images used for training and evaluating our models are also available from the GitHub repository.
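The step from detected bounding boxes to trait measurements can be sketched simply. The exact estimator the paper uses is not spelled out in the abstract; the sketch below assumes the natural choice for roughly circular cross-sections (diameter as the mean of box width and height), with made-up boxes and a hypothetical pixel spacing:

```python
import numpy as np

def diameter_from_box(box, pixel_spacing_mm=1.0):
    """Estimate an object's diameter from its bounding box.

    `box` is (x_min, y_min, x_max, y_max) in pixels; for a roughly
    circular cross-section (root, stele, metaxylem vessel) the mean
    of width and height is a reasonable proxy for the diameter.
    """
    x_min, y_min, x_max, y_max = box
    return pixel_spacing_mm * ((x_max - x_min) + (y_max - y_min)) / 2.0

def rmse(estimates, ground_truth):
    """Root mean square error between estimated and measured diameters."""
    e = np.asarray(estimates, float) - np.asarray(ground_truth, float)
    return float(np.sqrt(np.mean(e ** 2)))

# Hypothetical predicted boxes (pixels) and manual measurements (mm).
boxes = [(10, 12, 110, 108), (40, 42, 70, 74)]
estimated = [diameter_from_box(b, pixel_spacing_mm=0.05) for b in boxes]
print(estimated, rmse(estimated, [4.9, 1.6]))
```

Counting late metaxylem vessels is then just the number of boxes the detector returns for that class.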


2019 ◽  
Vol 110 (5) ◽  
pp. 328-337
Author(s):  
Ren Wei ◽  
Chendan Jiang ◽  
Jun Gao ◽  
Ping Xu ◽  
Debing Zhang ◽  
...  

Background: Deep learning has the potential to assist the medical diagnostic process. We aimed to identify facial anomalies associated with endocrine disorders using a deep-learning approach, to facilitate the process of diagnosis and follow-up. Methods: We collected facial images of patients with hypercortisolism and acromegaly, and we augmented these images with additional negative samples from public databases. A model built on a pretrained deep-learning network was constructed to automatically identify these hypersecretion statuses based on characteristic facial changes. We compared its performance to that of endocrine experts and further investigated the key factors on which the best-performing model focused. Findings: The model achieved areas under the receiver operating characteristic curve of 0.9647 (Cushing’s syndrome) and 0.9556 (acromegaly), accuracies of 0.9593 (Cushing’s syndrome) and 0.9479 (acromegaly), and recalls of 0.7593 (Cushing’s syndrome) and 0.8089 (acromegaly). It performed better than every level of our endocrine experts. Furthermore, the regions of interest identified by the machine were largely the same as those on which the human experts focused. Interpretation: Our findings suggest that the deep-learning model learned the facial characteristics from labeled data alone, without prerequisite medical knowledge, and that its performance was comparable to that of professional medical practitioners. The model has the potential to assist in the diagnosis and follow-up of these hypersecretion statuses.
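The headline metric here is the area under the ROC curve. As a minimal reminder of what that number means, the sketch below computes AUC via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The scores and labels are made up, not the study's data:

```python
def roc_auc(scores, labels):
    """AUC via the rank-sum formula: P(score of a random positive
    > score of a random negative), with ties counting as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores (1 = hypersecretion case, 0 = control).
scores = [0.95, 0.80, 0.70, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]
print(roc_auc(scores, labels))  # → 0.888…
```

An AUC of 0.96, as reported for both conditions, means the model ranks a true case above a control roughly 96% of the time.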


Author(s):  
Akif Quddus Khan ◽  
Salman Khan

Generic object detection is one of the most important and flourishing branches of computer vision and has many applications in day-to-day life. With the rapid development of deep learning-based techniques for object detection, performance has improved considerably over the last two decades. However, due to the data-hungry nature of deep models, they do not perform well on tasks with very limited labeled data available. To handle this problem, we propose a transfer learning-based deep learning approach for detecting multiple pigs in an indoor farm setting. The approach is based on YOLO-v2, whose pretrained parameters are used as the starting values for training the network. Compared to the original YOLO-v2, we transformed the detector to detect only one class of objects, i.e., pigs versus background. To train the network, farm-specific data were annotated with bounding boxes enclosing pigs in the top view. Experiments were performed on different pen configurations in the farm, and convincing results were achieved while using only a few hundred annotated frames for fine-tuning the network.
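Detection results like these are typically judged by matching predicted boxes to annotated ones via intersection-over-union (IoU). A minimal sketch, with made-up top-view boxes; the 0.5 acceptance threshold in the comment is the common convention, not a value stated by the authors:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes,
    each given as (x_min, y_min, x_max, y_max)."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predicted vs. annotated pig boxes in one top-view frame.
pred  = (30, 40, 130, 110)
truth = (35, 45, 135, 115)
print(iou(pred, truth))  # a match is typically accepted at IoU >= 0.5
```

With only one class (pig vs. background), per-frame evaluation reduces to greedily matching each prediction to the highest-IoU unmatched annotation and counting matches above the threshold.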


2019 ◽  
Vol 16 (Special Issue) ◽  
Author(s):  
Shakiba Moradi ◽  
Mostafa Ghelich Oghli ◽  
Azin Alizadehasl ◽  
Ali Shabanzadeh

2021 ◽  
Author(s):  
Xue Wang ◽  
Xuemei Yang ◽  
Jian Du ◽  
Xuwen Wang ◽  
Jiao Li ◽  
...  

Breakthrough research in scientific fields usually comes as a manifestation of major development and advancement. These advances build toward an epiphany where new ways of thinking about a problem become possible. Identifying breakthrough research can be useful for cultivating and funding further innovation. This article presents a new method for identifying scientific breakthroughs from research papers based on cue words commonly associated with major advancements. We looked for specific terms signifying scientific breakthroughs in citing sentences to identify breakthrough articles. By setting a threshold on the number of citing sentences (“citances”) containing breakthrough cue words that peer scholars often use when evaluating research, we identified articles containing breakthrough research; we call this the “others-evaluation” process. We then shortlisted candidates from the selected articles based on the authors’ evaluations of their own research, found in the abstracts; this we call the “self-evaluation” process. Combining the two approaches into a dual “others-self” evaluation process, we arrived at a sample of 237 potential breakthrough articles, most of which are recommended by Faculty Opinions. Based on the breakthrough articles identified, we used SVM, TextCNN, and BERT to train models that identify abstracts with breakthrough evaluations. This automatic identification model can greatly simplify the others-self evaluation process and promote the identification of breakthrough research.
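The "others-evaluation" step described above is essentially a cue-word count over citances with a threshold. A minimal sketch; the cue words and threshold below are illustrative placeholders, not the paper's actual lists or cutoff:

```python
# Illustrative cue words only -- not the lexicon used in the paper.
BREAKTHROUGH_CUES = {"breakthrough", "milestone", "paradigm shift",
                     "landmark", "first time", "revolutionize"}

def is_breakthrough_candidate(citances, threshold=3):
    """'Others-evaluation' step: flag an article when at least
    `threshold` of its citing sentences contain a breakthrough cue."""
    hits = sum(1 for sentence in citances
               if any(cue in sentence.lower() for cue in BREAKTHROUGH_CUES))
    return hits >= threshold

citances = [
    "This landmark study changed the field.",
    "A major breakthrough in sequencing technology.",
    "For the first time, X was measured directly.",
    "The authors report standard results.",
]
print(is_breakthrough_candidate(citances))  # → True
```

The "self-evaluation" step applies the same idea to the article's own abstract, and the trained SVM/TextCNN/BERT classifiers then replace this hand-written matching with learned models.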

