Prediction of the electromagnetic responses of geological bodies based on a temporal convolutional network model

2021
Author(s): Chongxin Yuan, Xuben Wang, Fei Deng, Kunpeng Wang, Rui Yang
2021
Vol 13 (1)
Author(s): Sreevani Katabathula, Qinyong Wang, Rong Xu

Abstract
Background: Alzheimer's disease (AD) is a progressive and irreversible brain disorder. The hippocampus is one of the regions involved, and its atrophy is a widely used biomarker for AD diagnosis. We recently developed DenseCNN, a lightweight 3D deep convolutional network model, for AD classification based on hippocampus magnetic resonance imaging (MRI) segments. In addition to the visual features of the hippocampus segments, global shape representations of the hippocampus are also important for AD diagnosis. In this study, we propose DenseCNN2, a deep convolutional network model for AD classification that incorporates global shape representations along with hippocampus segmentations.
Methods: The data were T1-weighted structural MRI scans from initial screening or baseline visits, obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including ADNI 1, 2/GO, and 3. DenseCNN2 was trained and evaluated on hippocampus MRI from 326 AD and 607 cognitively normal (CN) subjects using a 5-fold cross-validation strategy, and was compared with other state-of-the-art machine learning approaches for the task of AD classification.
Results: DenseCNN2 with combined visual and global shape features performed better than deep learning models with visual or global shape features alone. It achieved an average accuracy of 0.925, sensitivity of 0.882, specificity of 0.949, and area under the curve (AUC) of 0.978, better than or comparable to state-of-the-art methods for AD classification. Visualization of 2D UMAP embeddings confirmed that global shape features improved class discrimination between AD and CN.
Conclusion: DenseCNN2, a lightweight 3D deep convolutional network model based on combined hippocampus segmentations and global shape features, achieved high performance and has potential as an efficient diagnostic tool for AD classification.
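
As a rough sketch of the two-branch idea described above (not the published DenseCNN2 architecture), the following Python/Keras code concatenates features from a lightweight 3D convolutional branch over the hippocampus segment with a dense branch over a global shape descriptor vector before a binary AD/CN output. All input shapes, layer widths, and the name build_densecnn2_like are illustrative assumptions.

```python
# Minimal two-branch sketch (illustrative only): a 3D CNN over a
# hippocampus MRI segment combined with a vector of global shape
# descriptors. Shapes and layer widths are assumptions, not the
# published DenseCNN2 architecture.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_densecnn2_like(vol_shape=(64, 64, 48, 1), n_shape_feats=32):
    # Visual branch: lightweight 3D convolutions over the MRI segment.
    vol_in = layers.Input(shape=vol_shape, name="hippocampus_volume")
    x = layers.Conv3D(16, 3, padding="same", activation="relu")(vol_in)
    x = layers.MaxPooling3D(2)(x)
    x = layers.Conv3D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling3D(2)(x)
    x = layers.GlobalAveragePooling3D()(x)

    # Shape branch: global shape representation as a flat feature vector.
    shape_in = layers.Input(shape=(n_shape_feats,), name="global_shape")
    s = layers.Dense(32, activation="relu")(shape_in)

    # Fuse both views and classify AD vs. cognitively normal (CN).
    merged = layers.Concatenate()([x, s])
    merged = layers.Dense(64, activation="relu")(merged)
    out = layers.Dense(1, activation="sigmoid", name="ad_probability")(merged)

    model = Model([vol_in, shape_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

model = build_densecnn2_like()
model.summary()
```

In a cross-validation setup like the one described, such a model would be rebuilt and retrained once per fold, with the reported metrics averaged over the five held-out splits.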


2021
Vol 2 (1), pp. 101-105
Author(s): Runyu Hong, Wenke Liu, David Fenyö

Studies have shown that STK11 mutation plays a critical role in shaping the lung adenocarcinoma (LUAD) tumor immune environment. By training an Inception-ResNet-v2 deep convolutional neural network model, we were able to classify STK11-mutated and wild-type LUAD tumor histopathology images with promising accuracy (per-slide AUROC = 0.795). Dimensionality reduction of the activation maps before the output layer on the test-set images revealed that fewer immune cells accumulated around cancer cells in STK11-mutated cases. Our study demonstrates that a deep convolutional network model can automatically identify STK11 mutations from histopathology slides and confirms that immune cell density was the main feature the model used to distinguish STK11-mutated cases.
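
As an illustrative sketch of this kind of pipeline (not the authors' code), the snippet below fine-tunes an ImageNet-pretrained Inception-ResNet-v2 on histopathology tiles and averages tile-level predictions into a per-slide score; the tile size, mean aggregation, and function names are assumptions.

```python
# Illustrative sketch: tile-level classification with an
# ImageNet-pretrained Inception-ResNet-v2, with per-slide scores
# obtained by averaging tile predictions. Tile size, pooling, and
# the aggregation rule are assumptions, not the authors' pipeline.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet",
    input_shape=(299, 299, 3), pooling="avg")
out = layers.Dense(1, activation="sigmoid", name="stk11_mut_prob")(base.output)
model = Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])

def slide_score(tiles: np.ndarray) -> float:
    """Aggregate tile predictions (N, 299, 299, 3) into one slide score."""
    tiles = tf.keras.applications.inception_resnet_v2.preprocess_input(
        tiles.astype("float32"))
    return float(model.predict(tiles, verbose=0).mean())
```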


Complexity
2021
Vol 2021, pp. 1-11
Author(s): Qichang Xu

To address the shortcomings of traditional moving-target detection methods in complex scenes, such as low detection accuracy, high complexity, and failure to consider the overall structural information of the video frame, this paper proposes a moving-target detection method based on a sensor network. First, a low-power motion-detection wireless sensor network node is designed to obtain motion detection information in real time. Second, the background of the video scene is quickly extracted by time-domain averaging, and the video sequence and the background image are channel-merged to construct a deep fully convolutional network model. Finally, the network model is used to learn deep features of the video scene and output pixel-level classification results to achieve moving-target detection. This method not only adapts to complex video scenes of different sizes but also uses a simple background extraction method, which effectively improves detection speed.
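
The background extraction and channel-merging steps lend themselves to a short sketch. The example below uses NumPy; all shapes and names are illustrative assumptions rather than the paper's parameters.

```python
# Sketch of the preprocessing described above: estimate the background
# by time-domain averaging, then channel-merge each frame with the
# background so a fully convolutional network can learn pixel-level
# foreground/background labels. Shapes are illustrative assumptions.
import numpy as np

def temporal_average_background(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W, C) video sequence -> (H, W, C) background."""
    return frames.mean(axis=0)

def merge_with_background(frames: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Stack each frame with the background along the channel axis,
    producing (T, H, W, 2C) inputs for a fully convolutional network."""
    bg = np.broadcast_to(background, frames.shape)
    return np.concatenate([frames, bg], axis=-1)

# Example: 100 grayscale frames of a 240x320 scene.
video = np.random.rand(100, 240, 320, 1).astype("float32")
bg = temporal_average_background(video)
fcn_input = merge_with_background(video, bg)   # shape (100, 240, 320, 2)
```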


2020
Vol 24 (5), pp. 1382-1401
Author(s): Kun Qin, Yuanquan Xu, Chaogui Kang, Mei‐Po Kwan
