Deep neural network-based detection and segmentation of intracranial aneurysms on 3D rotational DSA

2021 ◽  
pp. 159101992110009
Author(s):  
Xinke Liu ◽  
Junqiang Feng ◽  
Zhenzhou Wu ◽  
Zhonghao Neo ◽  
Chengcheng Zhu ◽  
...  

Objective Accurate diagnosis and measurement of intracranial aneurysms are challenging. This study aimed to develop a 3D convolutional neural network (CNN) model to detect and segment intracranial aneurysms (IA) on 3D rotational DSA (3D-RA) images. Methods 3D-RA images were collected and annotated by 5 neuroradiologists. The annotated images were then divided into three datasets: training, validation, and test. A 3D Dense-UNet-like CNN (3D-Dense-UNet) segmentation algorithm was constructed and trained using the training dataset. Diagnostic performance for aneurysm detection and segmentation accuracy were assessed for the final model on the test dataset using free-response receiver operating characteristic (FROC) analysis. Finally, the CNN-inferred maximum diameter was compared against expert measurements by Pearson’s correlation and Bland-Altman limits of agreement (LOA). Results A total of 451 patients with 3D-RA images were split into n = 347/41/63 training/validation/test datasets, respectively. For aneurysm detection, FROC analysis showed that the model attained a sensitivity of 0.710 at 0.159 false positives (FP)/case and 0.986 at 1.49 FP/case. The proposed method showed good agreement with reference manual measurements of aneurysmal maximum diameter (8.3 ± 4.3 mm vs. 7.8 ± 4.8 mm), with a correlation coefficient r = 0.77, a small bias of 0.24 mm, and LOA of -6.2 to 5.71 mm. 37.0% and 77% of diameter measurements were within ±1 mm and ±2.5 mm of the expert measurements, respectively. Conclusions A 3D-Dense-UNet model can detect and segment aneurysms with relatively high accuracy using 3D-RA images. The automatically measured maximum diameter has potential clinical application value.
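
The agreement analysis reported above can be sketched in a few lines; the following is a minimal illustration (not the authors' code), assuming the CNN-inferred and expert-measured maximum diameters are available as NumPy arrays in millimetres.

```python
# Minimal sketch of Pearson correlation and Bland-Altman limits of agreement (LOA)
# between CNN-inferred and expert-measured maximum aneurysm diameters.
import numpy as np
from scipy.stats import pearsonr

def agreement_stats(cnn_mm: np.ndarray, expert_mm: np.ndarray):
    r, _ = pearsonr(cnn_mm, expert_mm)           # Pearson correlation coefficient
    diff = cnn_mm - expert_mm                    # per-aneurysm difference
    bias = diff.mean()                           # mean bias
    sd = diff.std(ddof=1)                        # sample standard deviation of differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    within_1mm = np.mean(np.abs(diff) <= 1.0)    # fraction within +/-1 mm of experts
    within_2p5mm = np.mean(np.abs(diff) <= 2.5)  # fraction within +/-2.5 mm of experts
    return r, bias, loa, within_1mm, within_2p5mm
```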

2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Hideaki Hirashima ◽  
Mitsuhiro Nakamura ◽  
Pascal Baillehache ◽  
Yusuke Fujimoto ◽  
Shota Nakagawa ◽  
...  

Abstract Background This study aimed to (1) develop fully residual deep convolutional neural network (CNN)-based segmentation software for computed tomography image segmentation of the male pelvic region and (2) demonstrate its efficiency in the male pelvic region. Methods A total of 470 prostate cancer patients who had undergone intensity-modulated radiotherapy or volumetric-modulated arc therapy were enrolled. Our model was based on FusionNet, a fully residual deep CNN developed to semantically segment biological images. To develop the CNN-based segmentation software, 450 patients were randomly selected and separated into training, validation, and testing groups (270, 90, and 90 patients, respectively). In Experiment 1, to determine the optimal model, we first assessed the segmentation accuracy according to the size of the training dataset (90, 180, and 270 patients). In Experiment 2, the effect of varying the number of training labels on segmentation accuracy was evaluated. After determining the optimal model, in Experiment 3, the developed software was applied to the remaining 20 datasets to assess the segmentation accuracy. The volumetric Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (95%HD) were calculated to evaluate the segmentation accuracy for each organ in Experiment 3. Results In Experiment 1, the median DSCs for the prostate were 0.61 for dataset 1 (90 patients), 0.86 for dataset 2 (180 patients), and 0.86 for dataset 3 (270 patients). The median DSCs for all organs increased significantly when the number of training cases increased from 90 to 180 but did not improve upon a further increase from 180 to 270. The number of labels applied during training had little effect on the DSCs in Experiment 2. The optimal model was built with 270 patients and four organ labels. In Experiment 3, the median DSC and 95%HD values were 0.82 and 3.23 mm for the prostate, 0.71 and 3.82 mm for the seminal vesicles, 0.89 and 2.65 mm for the rectum, and 0.95 and 4.18 mm for the bladder, respectively. Conclusions We developed CNN-based segmentation software for the male pelvic region and demonstrated its efficiency in this region.
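
The two accuracy metrics used in Experiment 3 can be computed directly from binary masks; the sketch below is a minimal illustration (not the authors' software), assuming boolean 3D arrays and a voxel spacing given in millimetres.

```python
# Minimal sketch of the volumetric Dice similarity coefficient (DSC) and the
# 95th-percentile Hausdorff distance (95%HD) between binary 3D organ masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    # pred, ref: boolean 3D arrays for the predicted and reference organ
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def hd95(pred: np.ndarray, ref: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    # surface voxels = mask voxels removed by one erosion step
    pred_surf = pred & ~binary_erosion(pred)
    ref_surf = ref & ~binary_erosion(ref)
    # distance of every voxel to the nearest surface voxel of the other mask
    dist_to_ref = distance_transform_edt(~ref_surf, sampling=spacing_mm)
    dist_to_pred = distance_transform_edt(~pred_surf, sampling=spacing_mm)
    surface_dists = np.concatenate([dist_to_ref[pred_surf], dist_to_pred[ref_surf]])
    return float(np.percentile(surface_dists, 95))
```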


2019 ◽  
Vol 24 (1-2) ◽  
pp. 94-100
Author(s):  
Kondratiuk S.S. ◽  

A technology implemented with cross-platform tools is proposed for modeling the gesture units of sign language and for animating transitions between gesture-unit states when gestures are combined into words. The implemented technology simulates sequences of gestures using a virtual spatial hand model and recognizes dactyl (fingerspelling) items from camera input with a convolutional neural network based on the MobileNetV3 architecture, trained on a collected training dataset with an optimal configuration of layers and network parameters. On the collected test dataset, an accuracy of over 98% is achieved.
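
A classifier of this kind can be built by transfer learning from a MobileNetV3 backbone; the snippet below is a minimal sketch (not the authors' implementation), and the input size, number of dactyl classes, and training settings are assumptions, since the paper's optimal layer configuration is not reproduced here.

```python
# Minimal sketch: MobileNetV3-based classifier for dactyl (fingerspelling) items.
import tensorflow as tf

NUM_CLASSES = 33  # hypothetical number of dactyl classes

base = tf.keras.applications.MobileNetV3Small(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
model = tf.keras.Sequential([
    base,                                                   # pretrained feature extractor
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # dactyl class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # trained on the collected dataset
```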


2019 ◽  
Vol 130 (2) ◽  
pp. 573-578 ◽  
Author(s):  
Yohichi Imaizumi ◽  
Tohru Mizutani ◽  
Katsuyoshi Shimizu ◽  
Yosuke Sato ◽  
Junichi Taguchi

OBJECTIVE The purpose of this study was to evaluate the detection rate and occurrence site according to patient sex and age of unruptured intracranial aneurysms detected through MRI and MR angiography (MRA). METHODS A total of 4070 healthy adults 22 years or older (mean age [± SD] 50.6 ± 11.0 years; 41.9% women) who underwent a brain examination known as “Brain Dock” in the central Tokyo area between April 2014 and March 2015 were checked for unruptured saccular aneurysm using 3-T MRI/MRA. The following types of cases were excluded: 1) protrusions with a maximum diameter < 2 mm at locations other than arterial bifurcations, 2) conical protrusions at arterial bifurcations with a diameter < 3 mm, and 3) cases of suspected aneurysms with unclear imaging of the involved artery. When an aneurysm was definitively diagnosed, the case was included in the aneurysm group. The authors also investigated the relationship between aneurysm occurrence and risk factors (age, sex, smoking history, hypertension, diabetes, and hyperlipidemia). RESULTS One hundred eighty-eight aneurysms were identified in 176 individuals (detection rate 4.32%), with the detection rate for women being significantly higher (6.2% vs 3.0%, p < 0.001). The average age in the aneurysm group was significantly higher than in the patients in whom aneurysms were not detected (53.0 ± 11.1 vs 50.5 ± 11.0 years). The detection rate tended to increase with age. The detection rates were 3.6% for people in their 30s, 3.5% for those in their 40s, 4.1% for those in their 50s, 6.9% for those in their 60s, and 6.8% for those in their 70s. Excluding persons in their 20s and 80s (age groups in which no aneurysms were discovered), the detection rate in women was higher in all age ranges. Of the individuals with aneurysms, 12 (6.81%) had multiple cerebral aneurysms; no sex difference was observed with respect to the prevalence of multiple aneurysms. Regarding aneurysm size, 2.0–2.9 mm was the most common size range, with 87 occurrences (46.3%), followed by 3.0–3.9 mm (67 [35.6%]) and 4.0–4.9 mm (20 [10.6%]). The largest aneurysm was 13 mm. Regarding location, the internal carotid artery (ICA) was the most common aneurysm site, with 148 (78.7%) occurrences. Within the ICA, C1 was the site of 46 aneurysms (24.5%); C2, 57 (30.3%); and C3, 29 (15.4%). The aneurysm detection rates for C2, C3, and C4 were 2.23%, 1.23%, and 0.64%, respectively, for women and 0.68%, 0.34%, and 0.21%, respectively, for men; ICA aneurysms were significantly more common in women than in men (5.27% vs 2.20%, p < 0.001). Multivariate logistic regression analysis revealed that age (p < 0.001, OR 1.03, 95% CI 1.01–1.04), female sex (p < 0.001, OR 2.28, 95% CI 1.64–3.16), and smoking history (p = 0.011, OR 1.52, 95% CI 1.10–2.11) were significant risk factors for aneurysm occurrence. CONCLUSIONS In this study, both female sex and older age were independently associated with an increased aneurysm detection rate. Aneurysms were most common in the ICA, and the frequency of aneurysms in ICA sites was markedly higher in women.
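
The multivariate logistic regression reported in the Results can be sketched as follows; this is a minimal illustration (not the authors' analysis code), and the file name and column names are hypothetical.

```python
# Minimal sketch: multivariate logistic regression for aneurysm occurrence,
# reporting odds ratios (OR) and 95% confidence intervals per risk factor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("brain_dock_cohort.csv")  # hypothetical file, one row per participant
model = smf.logit(
    "aneurysm ~ age + female + smoking + hypertension + diabetes + hyperlipidemia",
    data=df).fit()
odds_ratios = np.exp(model.params)         # e.g. per-year OR for age, OR for female sex
conf_intervals = np.exp(model.conf_int())  # 95% CIs for the ORs
print(model.summary())
```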


Author(s):  
Abhinav N Patil

Image recognition is an important aspect of image processing for machine learning, since it does not involve any human support at any step. In this paper we study how image classification is carried out using an image dataset as the backend. A couple of thousand images each of cats and dogs are collected and then divided into a test dataset and a training dataset for our learning model. The results are obtained using a custom neural network with a convolutional neural network (CNN) architecture and the Keras API.
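
A workflow of this kind can be sketched with the Keras API; the snippet below is a minimal illustration (not the paper's exact network), and the directory layout, image size, and architecture are assumptions.

```python
# Minimal sketch: binary cat-vs-dog classifier with a small custom CNN in Keras.
import tensorflow as tf

# Hypothetical folder layout: data/{train,test}/{cats,dogs}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "data/test", image_size=(128, 128), batch_size=32, label_mode="binary")

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # cat vs. dog probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10, validation_data=test_ds)  # epoch count is arbitrary here
```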


Cancers ◽  
2021 ◽  
Vol 13 (20) ◽  
pp. 5140
Author(s):  
Gun Oh Chong ◽  
Shin-Hyung Park ◽  
Nora Jee-Young Park ◽  
Bong Kyung Bae ◽  
Yoon Hee Lee ◽  
...  

Background: Our previous study demonstrated that tumor budding (TB) status was associated with inferior overall survival in cervical cancer. The purpose of this study was to evaluate whether radiomic features can predict TB status in cervical cancer patients. Methods: Seventy-four patients with cervical cancer who underwent preoperative MRI and radical hysterectomy from 2011 to 2015 at our institution were enrolled. The patients were randomly allocated to the training dataset (n = 48) and test dataset (n = 26). Tumors were segmented on axial gadolinium-enhanced T1- and T2-weighted images. A total of 2074 radiomic features were extracted. Four machine learning classifiers, including logistic regression (LR), random forest (RF), support vector machine (SVM), and neural network (NN), were used. The trained models were validated on the test dataset. Results: Twenty radiomic features were selected; all were derived from filtered images, and 85% were texture-related features. The area under the curve (AUC) values and accuracies of the LR, RF, SVM, and NN models were 0.742 and 0.769, 0.782 and 0.731, 0.849 and 0.885, and 0.891 and 0.731, respectively, in the test dataset. Conclusion: MRI-based radiomic features could predict TB status in patients with cervical cancer.
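
The four classifiers can be sketched with scikit-learn as follows; this is a minimal illustration (not the authors' pipeline), using placeholder feature matrices in place of the 20 selected radiomic features, and the classifier hyperparameters are assumptions.

```python
# Minimal sketch: LR, RF, SVM, and NN classifiers evaluated by AUC and accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

# Placeholder data standing in for the selected radiomic features and TB labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(48, 20)), rng.integers(0, 2, 48)
X_test, y_test = rng.normal(size=(26, 20)), rng.integers(0, 2, 26)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "NN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)                      # train on the training dataset
    prob = clf.predict_proba(X_test)[:, 1]         # predicted TB-positive probability
    print(name,
          "AUC =", round(roc_auc_score(y_test, prob), 3),
          "accuracy =", round(accuracy_score(y_test, clf.predict(X_test)), 3))
```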


2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has recently become an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic problem of image deblurring, which assumes that the PSF is known and spatially invariant. Recently, convolutional neural networks (CNNs) have been used for non-blind deconvolution. Although CNNs can deal with complex variations in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing in a deblurred image. Our method has three key points. The first is that our network architecture is able to preserve both large and small features in the image. The second is that the training dataset is created to preserve the details. The third is that we extend the images to minimize the effects of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method both quantitatively and qualitatively.
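
The third key point (extending the image to suppress border ringing) can be illustrated independently of the network; the sketch below (not the authors' method) pads the blurred image before applying any non-blind deconvolution routine and crops the result back to the original frame.

```python
# Minimal sketch: border extension around a blurred image so that ringing from
# the borders falls outside the original frame after non-blind deconvolution.
import numpy as np

def deconvolve_with_extension(blurred: np.ndarray, psf: np.ndarray, deconv_fn):
    # Pad by roughly one PSF size on each side using reflection.
    pad_h, pad_w = psf.shape
    extended = np.pad(blurred, ((pad_h, pad_h), (pad_w, pad_w)), mode="reflect")
    # deconv_fn: any non-blind deconvolution routine that preserves image size
    # (e.g. a CNN forward pass or a classical Wiener filter).
    restored = deconv_fn(extended, psf)
    # Crop back to the original frame, discarding the ringing-prone margins.
    return restored[pad_h:-pad_h, pad_w:-pad_w]
```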


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mu Sook Lee ◽  
Yong Soo Kim ◽  
Minki Kim ◽  
Muhammad Usman ◽  
Shi Sub Byon ◽  
...  

Abstract We examined the feasibility of explainable computer-aided detection of cardiomegaly in routine clinical practice using segmentation-based methods. Overall, 793 retrospectively acquired posterior–anterior (PA) chest X-ray images (CXRs) of 793 patients were used to train deep learning (DL) models for lung and heart segmentation. The training dataset included PA CXRs from two public datasets and in-house PA CXRs. Two fully automated segmentation-based methods using state-of-the-art DL models for lung and heart segmentation were developed. The diagnostic performance was assessed and the reliability of the automatic cardiothoracic ratio (CTR) calculation was determined using the mean absolute error and paired t-test. The effects of thoracic pathological conditions on performance were assessed using subgroup analysis. One thousand PA CXRs of 1000 patients (480 men, 520 women; mean age 63 ± 23 years) were included. The CTR values derived from the DL models and diagnostic performance exhibited excellent agreement with reference standards for the whole test dataset. Performance of segmentation-based methods differed based on thoracic conditions. When tested using CXRs with lesions obscuring heart borders, the performance was lower than that for other thoracic pathological findings. Thus, segmentation-based methods using DL could detect cardiomegaly; however, the feasibility of computer-aided detection of cardiomegaly without human intervention was limited.
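
The automatic CTR calculation from the segmentation outputs can be sketched as follows; this is a minimal illustration (not the authors' implementation), taking the maximal horizontal extent of the heart and lung masks as the cardiac and thoracic widths.

```python
# Minimal sketch: cardiothoracic ratio (CTR) from binary heart and lung masks
# of a PA chest X-ray (maximal cardiac width / maximal thoracic width).
import numpy as np

def cardiothoracic_ratio(heart_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    def max_width(mask: np.ndarray) -> int:
        cols = np.where(mask.any(axis=0))[0]   # image columns containing the structure
        return int(cols.max() - cols.min() + 1)
    return max_width(heart_mask) / max_width(lung_mask)

# Cardiomegaly is commonly flagged when CTR > 0.5.
```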


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Abstract Fast and accurate confirmation of metastasis on the frozen tissue section of intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve performance of the convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios at 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to validate their effectiveness in the external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than those of the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs at 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than those of the scratch- and ImageNet-based models. The feasibility of transfer learning to enhance model performance was thus validated for frozen section datasets with limited numbers.
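
The three weight initializations compared above can be sketched as follows; this is a minimal illustration (not the authors' code), and the ResNet50 backbone, patch size, and CAMELYON16 weight file are assumptions.

```python
# Minimal sketch: patch-level metastasis classifier built with one of three
# initial weights: random (scratch), ImageNet-pretrained, or CAMELYON16-pretrained.
import tensorflow as tf

def build_patch_classifier(init: str) -> tf.keras.Model:
    # Backbone choice and input size are assumptions, not the paper's specification.
    weights = "imagenet" if init == "imagenet" else None
    base = tf.keras.applications.ResNet50(include_top=False, weights=weights,
                                          pooling="avg", input_shape=(256, 256, 3))
    if init == "camelyon16":
        # Hypothetical file holding weights pretrained on CAMELYON16 patches.
        base.load_weights("camelyon16_pretrained.h5")
    head = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)  # metastasis probability
    model = tf.keras.Model(base.input, head)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

# model = build_patch_classifier("camelyon16")  # then fine-tune on the frozen-section patches
```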


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3813
Author(s):  
Athanasios Anagnostis ◽  
Aristotelis C. Tagarakis ◽  
Dimitrios Kateris ◽  
Vasileios Moysiadis ◽  
Claus Grøn Sørensen ◽  
...  

This study aimed to propose an approach for orchard tree segmentation using aerial images, based on a deep learning convolutional neural network variant, namely the U-net network. The purpose was the automated detection and localization of the canopy of orchard trees under various conditions (i.e., different seasons, different tree ages, different levels of weed coverage). The implemented dataset was composed of images from three different walnut orchards. The variability of the dataset resulted in images that fell under seven different use cases. The best-trained model achieved 91%, 90%, and 87% accuracy for training, validation, and testing, respectively. The trained model was also tested on never-before-seen orthomosaic images of orchards using two methods (oversampling and undersampling) in order to tackle issues with the transparent pixels outside the field boundary of the image. Even though the training dataset did not contain orthomosaic images, the model achieved performance levels of up to 99%, demonstrating the robustness of the proposed approach.
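
A U-net-style network of the kind used here can be sketched compactly; the snippet below is a minimal illustration (not the authors' exact model), with depth, filter counts, and tile size chosen as assumptions.

```python
# Minimal sketch: small U-Net-style encoder-decoder for binary canopy segmentation
# of aerial image tiles (output = per-pixel canopy probability).
import tensorflow as tf
from tensorflow.keras import layers

def small_unet(input_shape=(256, 256, 3)) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=input_shape)

    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling2D()(c1)

    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(c2)
    p2 = layers.MaxPooling2D()(c2)

    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)   # bottleneck

    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    u2 = layers.Concatenate()([u2, c2])                                # skip connection
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)

    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    u1 = layers.Concatenate()([u1, c1])                                # skip connection
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)            # canopy probability map
    return tf.keras.Model(inputs, outputs)

model = small_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```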


Machines ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 66
Author(s):  
Tianci Chen ◽  
Rihong Zhang ◽  
Lixue Zhu ◽  
Shiang Zhang ◽  
Xiaomin Li

In an orchard environment with a complex background and changing light conditions, the banana stalk, fruit, branches, and leaves are very similar in color. Fast and accurate detection and segmentation of the banana stalk are crucial to realize automatic picking by a banana picking robot. In this paper, a banana stalk segmentation method based on a lightweight multi-feature fusion deep neural network (MFN) is proposed. The proposed network is mainly composed of encoding and decoding networks, in which the sandglass bottleneck design is adopted to alleviate the information loss in high dimensions. In the decoding network, dilated convolution kernels of different sizes are used to make the extracted banana stalk features denser. The proposed network is verified by experiments, in which detection precision, segmentation accuracy, number of parameters, operation efficiency, and average execution time are used as evaluation metrics, and the proposed network is compared with Resnet_Segnet, Mobilenet_Segnet, and a few other networks. The experimental results show that, compared to the other networks, the number of parameters of the proposed network is significantly reduced, the running frame rate is improved, and the average execution time is shortened.
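
The decoder idea of convolving with differently sized (dilated) kernels and fusing the results can be sketched as follows; this is a minimal illustration (not the authors' MFN), with channel counts and dilation rates chosen as assumptions.

```python
# Minimal sketch: parallel dilated convolutions with different rates whose outputs
# are fused, densifying the extracted features with a larger effective receptive field.
import tensorflow as tf
from tensorflow.keras import layers

def dilated_fusion_block(x, filters=64, rates=(1, 2, 4)):
    branches = [
        layers.Conv2D(filters, 3, padding="same", dilation_rate=r, activation="relu")(x)
        for r in rates                                   # same kernel size, growing receptive field
    ]
    fused = layers.Concatenate()(branches)               # multi-feature fusion
    return layers.Conv2D(filters, 1, activation="relu")(fused)  # 1x1 projection back to `filters`

inputs = tf.keras.Input(shape=(None, None, 64))          # decoder feature map of any spatial size
outputs = dilated_fusion_block(inputs)
block = tf.keras.Model(inputs, outputs)
```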

