Prediction of Cerebral Aneurysm Hemodynamics With Porous-Medium Models of Flow-Diverting Stents via Deep Learning

2021 ◽  
Vol 12 ◽  
Author(s):  
Gaoyang Li ◽  
Xiaorui Song ◽  
Haoran Wang ◽  
Siwei Liu ◽  
Jiayuan Ji ◽  
...  

The interventional treatment of cerebral aneurysms requires hemodynamic information for proper guidance. Computational fluid dynamics (CFD) is increasingly used to calculate cerebral aneurysm hemodynamics before and after flow-diverting (FD) stent placement. However, the complex workflow (such as constructing and simulating the placement of fully resolved or porous-medium FD stents) and the high computational cost of CFD hinder its application. To address these problems, we applied aneurysm hemodynamics point cloud datasets and a deep learning network with double input and sampling channels. The flexible point cloud format can represent, at high resolution, the geometry and flow distribution of different aneurysms before and after placement of an FD stent (represented by a porous-medium layer). The proposed network directly learns the relationship between aneurysm geometry and internal hemodynamics, enabling flow-field prediction without the complex CFD workflow. Statistical analysis shows that the hemodynamic predictions of our deep learning method are consistent with those of CFD (error function <13%), while the computation time is reduced roughly 1,800-fold. This study develops a novel deep learning method that can accurately predict the hemodynamics of different cerebral aneurysms before and after FD stent placement, with low computational cost and a simple workflow.
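As a rough illustration of the double-channel idea described above, the following PyTorch sketch maps sampled geometry points (with a porous-layer flag) plus query points to per-point flow quantities. All layer sizes, the four-dimensional output (velocity components plus pressure), and the pooling scheme are illustrative assumptions, not the authors' architecture.

```python
# Minimal PointNet-style sketch: geometry samples in one channel, query points
# in the other, per-point flow quantities out. Illustrative only.
import torch
import torch.nn as nn

class PointFlowNet(nn.Module):
    def __init__(self, out_dim=4):                  # assume (u, v, w, p) per point
        super().__init__()
        # Channel 1: encode geometry samples (x, y, z, porous-layer flag)
        self.geom_encoder = nn.Sequential(
            nn.Linear(4, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        # Channel 2: decode per-query-point flow, conditioned on a global code
        self.decoder = nn.Sequential(
            nn.Linear(3 + 256, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, geom_pts, query_pts):
        # geom_pts: (B, N, 4) geometry samples; query_pts: (B, M, 3)
        feat = self.geom_encoder(geom_pts)           # (B, N, 256)
        global_code = feat.max(dim=1).values         # symmetric pooling -> (B, 256)
        code = global_code.unsqueeze(1).expand(-1, query_pts.shape[1], -1)
        return self.decoder(torch.cat([query_pts, code], dim=-1))  # (B, M, out_dim)

# Example: predict flow at 2,048 query points from 4,096 geometry samples.
model = PointFlowNet()
flow = model(torch.rand(1, 4096, 4), torch.rand(1, 2048, 3))
print(flow.shape)  # torch.Size([1, 2048, 4])
```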

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Gaoyang Li ◽  
Haoran Wang ◽  
Mingzi Zhang ◽  
Simon Tupin ◽  
Aike Qiao ◽  
...  

Abstract The clinical treatment planning of coronary heart disease requires hemodynamic parameters for proper guidance. Computational fluid dynamics (CFD) is increasingly used to simulate cardiovascular hemodynamics. However, for patient-specific models, the complex workflow and high computational cost of CFD hinder its clinical application. To address these problems, we develop cardiovascular hemodynamic point datasets and a dual-sampling-channel deep learning network that learns and reproduces the relationship between cardiovascular geometry and internal hemodynamics. Statistical analysis shows that the hemodynamic predictions of the deep learning model agree with the conventional CFD method, while the calculation time is reduced 600-fold. With support for models of over 2 million nodes, prediction accuracy of around 90%, the ability to predict cardiovascular hemodynamics within 1 second, and applicability to complex arterial systems, our deep learning method can meet the needs of most situations.
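For context, a surrogate of this kind is typically trained by regressing against CFD ground truth and reporting a node-wise relative error; the sketch below shows one plausible training step under that assumption. The loss, metric, and tensor shapes are illustrative, not the paper's.

```python
# Hedged training-step sketch for a point-cloud hemodynamics surrogate.
import torch
import torch.nn as nn

def relative_error(pred, target, eps=1e-8):
    # Node-wise relative L2 error, averaged over the batch.
    num = torch.linalg.norm(pred - target, dim=-1)
    den = torch.linalg.norm(target, dim=-1) + eps
    return (num / den).mean()

def train_step(model, optimizer, geom_pts, query_pts, cfd_truth):
    # geom_pts: (B, N, 4), query_pts: (B, M, 3), cfd_truth: (B, M, 4) from CFD.
    model.train()
    optimizer.zero_grad()
    pred = model(geom_pts, query_pts)
    loss = nn.functional.mse_loss(pred, cfd_truth)
    loss.backward()
    optimizer.step()
    return loss.item(), relative_error(pred.detach(), cfd_truth).item()
```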


2021 ◽  
Vol 13 (21) ◽  
pp. 4445
Author(s):  
Behrokh Nazeri ◽  
Melba Crawford

High-resolution point cloud data acquired with a laser scanner from any platform contain random noise and outliers. Therefore, outlier detection in LiDAR data is often necessary prior to analysis. Applications in agriculture are particularly challenging, as there is typically no prior knowledge of the statistical distribution of points, plant complexity, or local point densities, which are crop-dependent. The goals of this study were, first, to investigate approaches to minimize the impact of outliers in LiDAR data acquired over agricultural row crops (specifically sorghum and maize breeding experiments) by an unmanned aerial vehicle (UAV) and a wheel-based ground platform; and second, to evaluate the impact of existing outliers in the datasets on leaf area index (LAI) prediction using LiDAR data. Two methods were investigated to detect and remove outliers from the plant datasets. The first was based on surface fitting to noisy point cloud data via normal and curvature estimation in a local neighborhood. The second utilized the PointCleanNet deep learning framework. Both methods were applied to individual plants and field-based datasets. To evaluate the methods, an F-score was calculated for synthetic data under controlled conditions, and LAI, the variable being predicted, was computed both before and after outlier removal for both scenarios. Results indicate that the deep learning method for outlier detection is more robust than the geometric approach to changes in point density, noise level, and plant shape. The prediction of LAI was also improved for the wheel-based vehicle data, as measured by the coefficient of determination (R2) and the root mean squared error (RMSE) of the residuals before and after outlier removal.
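A minimal sketch of the geometric idea, assuming plane fitting via local PCA: each point's neighborhood yields a normal and a surface-variation ("curvature") value, and points with unusually large plane residuals or high variation are flagged as outliers. Neighborhood size and thresholds here are illustrative, not the study's tuned values.

```python
# Hedged sketch of outlier flagging via local plane fitting (PCA on neighbours).
import numpy as np
from scipy.spatial import cKDTree

def flag_outliers(points, k=20, dist_sigma=2.0, curv_thresh=0.25):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)          # each point plus k neighbours
    residuals = np.empty(len(points))
    curvature = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs[1:]]                # drop the point itself
        centroid = nbr_pts.mean(axis=0)
        cov = np.cov((nbr_pts - centroid).T)      # 3x3 local covariance
        eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
        normal = eigvecs[:, 0]                    # smallest-variance direction
        residuals[i] = abs(np.dot(points[i] - centroid, normal))
        curvature[i] = eigvals[0] / eigvals.sum() # surface-variation proxy
    res_cut = residuals.mean() + dist_sigma * residuals.std()
    return (residuals > res_cut) | (curvature > curv_thresh)

# Usage: keep only inliers of an (N, 3) LiDAR array.
pts = np.random.rand(1000, 3)
clean = pts[~flag_outliers(pts)]
```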


2021 ◽  
Vol 13 (5) ◽  
pp. 859
Author(s):  
Elyta Widyaningrum ◽  
Qian Bai ◽  
Marda K. Fajari ◽  
Roderik C. Lindenbergh

Classification of aerial point clouds with high accuracy is significant for many geographical applications, but it is not trivial because the data are massive and unstructured. In recent years, deep learning for 3D point cloud classification has been actively developed and applied, though mostly for indoor scenes. In this study, we implement the point-wise deep learning method Dynamic Graph Convolutional Neural Network (DGCNN) and extend its classification application from indoor scenes to airborne point clouds. This study proposes an approach to provide cheap training samples for point-wise deep learning using an existing 2D base map. Furthermore, essential features and spatial contexts to effectively classify airborne point clouds colored by an orthophoto are also investigated, in particular to deal with class imbalance and relief displacement in urban areas. Two airborne point cloud datasets of different areas are used: Area-1 (city of Surabaya, Indonesia) and Area-2 (cities of Utrecht and Delft, the Netherlands). Area-1 is used to investigate different input feature combinations and loss functions. The point-wise classification for four classes achieves a remarkable result, with 91.8% overall accuracy when using the full combination of spectral color and LiDAR features. For Area-2, different block size settings (30, 50, and 70 m) are investigated. It is found that using an appropriate block size of, in this case, 50 m helps improve the classification to 93% overall accuracy, but does not necessarily ensure better results for each individual class. Based on the experiments on both areas, we conclude that using DGCNN with proper settings can provide results close to production quality.
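One preprocessing step implied by the block-size experiments is tiling the airborne point cloud into square ground blocks before feeding it to a point-wise network such as DGCNN. The sketch below shows a plausible way to do this with NumPy; the feature layout and minimum point count are assumptions, not the study's settings.

```python
# Hedged sketch: tile an airborne point cloud into square ground blocks.
import numpy as np

def split_into_blocks(points, block_size=50.0, min_points=512):
    # points: (N, F) array whose first two columns are easting/northing in metres.
    xy_min = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - xy_min) / block_size).astype(int)
    blocks = {}
    for key in map(tuple, np.unique(ij, axis=0)):
        mask = np.all(ij == key, axis=1)
        if mask.sum() >= min_points:              # skip nearly empty tiles
            blocks[key] = points[mask]
    return blocks

# Usage: 30/50/70 m tilings can be compared by changing block_size.
cloud = np.random.rand(100000, 6) * [1000, 1000, 30, 1, 1, 1]   # x, y, z, r, g, b
tiles = split_into_blocks(cloud, block_size=50.0)
print(len(tiles), "blocks")
```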


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is the contextual mining of text to determine users' viewpoints on topics commonly discussed on social networking websites. Twitter is one of the social sites where people express their opinions about any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to determine user opinion. Traditional sentiment analysis methods use manually extracted features for opinion classification. Manual feature extraction is a complicated task because it requires predefined sentiment lexicons. Deep learning methods, on the other hand, automatically extract relevant features from data; hence, they provide better performance and richer representational capacity than traditional methods. Objective: The main aim of this paper is to enhance sentiment classification accuracy and to reduce computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bidirectional long short-term memory (BiLSTM) network is introduced. Results: The proposed sentiment classification method achieves the highest accuracy on most of the datasets. Further, statistical analysis validates the efficacy of the proposed method. Conclusion: Sentiment classification accuracy can be improved by building well-designed hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
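A minimal PyTorch sketch of a CNN plus BiLSTM hybrid in the spirit of the model described above; the vocabulary size, layer widths, and use of the last time step for classification are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: convolution for local n-gram features, BiLSTM for longer context.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # 1D convolution extracts local n-gram features over the embeddings.
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1)
        # BiLSTM captures longer-range dependencies over the conv features.
        self.lstm = nn.LSTM(128, 64, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * 64, n_classes)

    def forward(self, token_ids):                  # (B, T) integer token ids
        x = self.embed(token_ids)                  # (B, T, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # (B, T, 128)
        out, _ = self.lstm(x)                      # (B, T, 128)
        return self.classifier(out[:, -1])         # last time step -> class logits

logits = CNNBiLSTM()(torch.randint(1, 20000, (4, 40)))
print(logits.shape)  # torch.Size([4, 2])
```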


2019 ◽  
Vol 9 (22) ◽  
pp. 4749
Author(s):  
Lingyun Jiang ◽  
Kai Qiao ◽  
Linyuan Wang ◽  
Chi Zhang ◽  
Jian Chen ◽  
...  

Decoding human brain activity, especially reconstructing visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data restrict satisfactory reconstruction, especially for deep learning reconstruction methods that require large amounts of labelled samples. In contrast to deep learning methods, humans can recognize a new image because the human visual system naturally extracts features from any object and compares them. Inspired by this visual mechanism, we introduced a comparison mechanism into the deep learning method to achieve better visual reconstruction, making full use of each sample and of the relationship within each sample pair by learning to compare. In this way, we proposed a Siamese reconstruction network (SRN) method. Using the SRN, we achieved improved results on two fMRI recording datasets, with 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from about n samples to 2n sample pairs, taking full advantage of the limited quantity of training samples. The SRN learns to draw sample pairs of the same class together and push sample pairs of different classes apart in feature space.
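A brief sketch of the pairing idea behind a Siamese setup, assuming a standard contrastive objective: labelled pairs are formed from a small dataset, and same-class pairs are pulled together while different-class pairs are pushed apart in feature space. The pairing scheme and margin are illustrative, not the SRN's exact training procedure.

```python
# Hedged sketch: build labelled sample pairs and train with a contrastive loss.
import itertools
import torch
import torch.nn.functional as F

def make_pairs(labels):
    # labels: (n,) tensor. Returns all index pairs with a 1/0 same-class flag.
    pairs, flags = [], []
    for i, j in itertools.combinations(range(len(labels)), 2):
        pairs.append((i, j))
        flags.append(int(labels[i] == labels[j]))
    return pairs, torch.tensor(flags, dtype=torch.float32)

def contrastive_loss(z1, z2, same, margin=1.0):
    # Pull same-class embeddings together, push different-class ones apart.
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Toy usage with 6 embeddings and 3 classes.
z = torch.randn(6, 32)
y = torch.tensor([0, 0, 1, 1, 2, 2])
pairs, same = make_pairs(y)
i, j = zip(*pairs)
print(contrastive_loss(z[list(i)], z[list(j)], same))
```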


Author(s):  
Mathieu Turgeon-Pelchat ◽  
Samuel Foucher ◽  
Yacine Bouroubi

GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract Background The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable. Thus, the high-throughput segmentation of many shoots remains challenging. Although deep learning can feasibly solve this issue, software tools for 3D point cloud annotation to construct the training dataset are lacking. Results We propose a top-down point cloud segmentation algorithm using optimal transportation distance for maize shoots. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes 10–20% of that time if only coarse segmentation is required. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions. The accuracy of coarse segmentation can reach 97.2% of that of fine segmentation. Conclusion Label3DMaize integrates point cloud segmentation algorithms and manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
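To illustrate the optimal transportation distance ingredient named above, the sketch below computes a minimum-cost one-to-one matching distance between two point sets with the Hungarian algorithm. Label3DMaize's actual segmentation pipeline is more involved; this only illustrates the distance idea under simplifying assumptions (equal-sized point sets, uniform weights).

```python
# Hedged sketch: assignment-based transport distance between two point sets.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def assignment_transport_distance(pts_a, pts_b):
    # pts_a: (n, 3), pts_b: (m, 3); assumes comparable sizes and uniform weights.
    cost = cdist(pts_a, pts_b)                    # pairwise Euclidean costs
    row, col = linear_sum_assignment(cost)        # optimal one-to-one matching
    return cost[row, col].mean()                  # mean matched distance

a = np.random.rand(200, 3)
b = np.random.rand(200, 3)
print(assignment_transport_distance(a, b))
```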

