Comparative Analysis of Deep Neural Network and Texture-Based Classifiers for Recognition of Acute Stroke using Non-Contrast CT Images

Author(s): Victor Nedel'ko, Roman Kozinets, Andrey Tulupov, Vladimir Berikov
Heart Rhythm, 2021, Vol. 18 (8), p. S352
Author(s): Rebecca Yu, Rheeda L. Ali, Pallavi Pandey, Ryan P. Bradley, David D. Spragg, et al.

Author(s): Guoting Luo, Qing Yang, Tao Chen, Tao Zheng, Wei Xie, et al.

2020, Vol. 11 (1)
Author(s): Takuya Maekawa, Kazuya Ohara, Yizhe Zhang, Matasaburo Fukutomi, Sakiko Matsumoto, et al.

Abstract: Comparative analysis of animal behavior (e.g., male vs. female groups) has been widely used to elucidate behavior specific to one group since pre-Darwinian times. However, the big data generated by new sensing technologies such as GPS make it difficult for researchers to contrast group differences manually. This study introduces DeepHL, a deep learning-assisted platform for the comparative analysis of animal movement data, i.e., trajectories. The software uses a deep neural network based on an attention mechanism to automatically detect segments in trajectories that are characteristic of one group. It then highlights these segments in visualized trajectories, enabling biologists to focus on them and helping to reveal their underlying meaning, thereby facilitating the formulation of new hypotheses. We tested the platform on a variety of trajectories of worms, insects, mice, bears, and seabirds, across scales from millimeters to hundreds of kilometers, revealing new movement features of these animals.
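The abstract describes the attention-based highlighting only at a high level, so the following is a minimal PyTorch sketch of the general technique: a recurrent encoder assigns an attention weight to every timestep of a trajectory, and those weights double as the highlight signal. The class name, layer sizes, and the bidirectional-LSTM choice are illustrative assumptions, not DeepHL's actual architecture.

import torch
import torch.nn as nn

class TrajectoryAttentionNet(nn.Module):
    # Hypothetical sketch: classify trajectories by group while exposing
    # per-timestep attention weights usable for highlighting characteristic
    # segments (in the spirit of DeepHL's attention maps).
    def __init__(self, n_features=2, hidden=64, n_groups=2):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)       # one attention logit per timestep
        self.classifier = nn.Linear(2 * hidden, n_groups)

    def forward(self, traj):                        # traj: (batch, time, n_features)
        h, _ = self.encoder(traj)                   # (batch, time, 2*hidden)
        attn = torch.softmax(self.score(h).squeeze(-1), dim=1)  # (batch, time)
        context = torch.bmm(attn.unsqueeze(1), h).squeeze(1)    # attention-weighted pooling
        return self.classifier(context), attn       # group logits + highlight weights

# Usage: timesteps with high attention weight are the "highlighted" segments.
model = TrajectoryAttentionNet()
xy = torch.randn(8, 500, 2)                         # 8 trajectories, 500 (x, y) points each
logits, weights = model(xy)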


Cancers, 2021, Vol. 13 (13), p. 3300
Author(s): Jing Gong, Jiyu Liu, Haiming Li, Hui Zhu, Tingting Wang, et al.

This study aims to develop a deep neural network (DNN)-based two-stage risk stratification model for early lung adenocarcinomas in CT images and to compare its performance with that of practicing radiologists. A total of 2393 ground-glass nodules (GGNs) were retrospectively collected from 2105 patients at four centers. All pathologic results for the GGNs were obtained from surgically resected specimens. A two-stage deep neural network was developed based on a 3D residual network and an atrous convolution module to distinguish benign from malignant GGNs (Task 1) and to classify the malignant GGNs as invasive adenocarcinoma (IA) or non-IA (Task 2). A multi-reader multi-case observer study involving six board-certified radiologists (average experience 11 years, range 2–28 years) was conducted to evaluate the model. The DNN yielded area under the receiver operating characteristic curve (AUC) values of 0.76 ± 0.03 (95% confidence interval (CI): 0.69–0.82) and 0.96 ± 0.02 (95% CI: 0.92–0.98) for Task 1 and Task 2, respectively, equivalent to or higher than the senior radiologist group's average AUC values of 0.76 and 0.95 (p > 0.05). As the CT slice thickness increased from 1.15 ± 0.36 mm to 1.73 ± 0.64 mm, DNN performance decreased by 0.08 and 0.22 AUC for the two tasks. The results demonstrated that (1) diagnostic performance trended upward with radiologist experience, (2) the DNN achieved performance equivalent to or higher than that of senior radiologists, and (3) lower image resolution reduced the model's performance in predicting the risk of GGNs. If validated prospectively in clinical practice, the DNN could assist doctors in the precision diagnosis and treatment of early lung adenocarcinoma.
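The abstract names a 3D residual network with an atrous convolution module but gives no implementation details. The sketch below shows, under assumed layer sizes, how one such building block could look in PyTorch: a residual block whose second convolution is dilated to enlarge the receptive field without extra downsampling. AtrousResBlock3D and all hyperparameters are hypothetical, not the paper's actual design.

import torch
import torch.nn as nn

class AtrousResBlock3D(nn.Module):
    # Hypothetical building block: 3D residual block with an atrous
    # (dilated) second convolution, preserving spatial resolution.
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation)  # atrous conv
        self.bn1, self.bn2 = nn.BatchNorm3d(channels), nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)                   # residual (skip) connection

# In a two-stage setup, a backbone of such blocks would feed two heads:
# one for benign vs. malignant (Task 1), one for IA vs. non-IA (Task 2).
block = AtrousResBlock3D(channels=32)
patch = torch.randn(1, 32, 16, 64, 64)              # (batch, C, D, H, W) CT patch features
print(block(patch).shape)                           # torch.Size([1, 32, 16, 64, 64])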


2020
Author(s): Bin Liu, Xiaoxue Gao, Mengshuang He, Fengmao Lv, Guosheng Yin

Chest computed tomography (CT) is one of the most important technologies for COVID-19 diagnosis and disease monitoring, particularly for early detection of the coronavirus. Recent advances in computer vision motivate more concerted efforts to develop AI-driven diagnostic tools to meet the enormous global demand for COVID-19 diagnostic tests. To help alleviate the burden on medical systems, we develop a lesion-attention deep neural network (LA-DNN) that predicts whether a case is COVID-19 positive or negative, trained on a richly annotated chest CT image dataset. From the textual radiological report accompanying each CT image, we extract two types of annotation: an indicator of whether the case is COVID-19 positive or negative, and a description of five lesion types observed on the CT images of positive cases. The proposed data-efficient LA-DNN focuses on the primary task of binary classification for COVID-19 diagnosis, while an auxiliary multi-label learning task is trained simultaneously to draw the model's attention to the five lesion types associated with COVID-19. Jointly learning the two tasks makes the network highly sample-efficient, allowing it to learn COVID-19 radiology features effectively from a limited number of high-quality, richly annotated samples. The experimental results show that the area under the curve (AUC), sensitivity (recall), precision, and accuracy for COVID-19 diagnosis are 94.0%, 88.8%, 87.9%, and 88.6%, respectively, which meet the clinical standards for practical use. A free online system for fast diagnosis from CT images is available at https://www.covidct.cn/, and all code and datasets are freely accessible at our GitHub repository.
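As a rough illustration of the joint primary/auxiliary objective described above, the following PyTorch sketch combines a binary diagnosis loss with a five-way multi-label lesion loss. LesionAttentionLoss, the 0.5 auxiliary weight, and the head shapes are assumptions for illustration, not the paper's actual formulation.

import torch
import torch.nn as nn

class LesionAttentionLoss(nn.Module):
    # Hypothetical joint objective: primary binary COVID-19 diagnosis loss
    # plus an auxiliary multi-label loss over the five annotated lesion types.
    def __init__(self, aux_weight=0.5):
        super().__init__()
        self.diag_loss = nn.BCEWithLogitsLoss()     # primary: positive vs. negative
        self.lesion_loss = nn.BCEWithLogitsLoss()   # auxiliary: 5 lesion labels
        self.aux_weight = aux_weight

    def forward(self, diag_logit, lesion_logits, diag_target, lesion_targets):
        primary = self.diag_loss(diag_logit, diag_target)
        auxiliary = self.lesion_loss(lesion_logits, lesion_targets)
        return primary + self.aux_weight * auxiliary

# Usage with dummy model outputs (batch of 4 CT images):
criterion = LesionAttentionLoss()
diag_logit = torch.randn(4, 1, requires_grad=True)      # per-image COVID-19 logit
lesion_logits = torch.randn(4, 5, requires_grad=True)   # per-image logits for 5 lesions
loss = criterion(diag_logit, lesion_logits,
                 torch.randint(0, 2, (4, 1)).float(),
                 torch.randint(0, 2, (4, 5)).float())
loss.backward()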

