Integration and Fusion of Geologic Hazard Data under Deep Learning and Big Data Analysis Technology

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Feng He ◽  
Chunxue Liu ◽  
Hongjiang Liu

To support the analysis and evaluation of highway landslides, the application of data mining methods combined with deep learning frameworks to geologic hazard evaluation and monitoring is explored. To optimize the processing of landslide images, the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), which is based on natural statistical characteristics of the spatial domain, is first introduced and combined with a Super-Resolution Convolutional Neural Network (SRCNN). Then, AlexNet is fine-tuned and applied to highway landslide monitoring and surveying. Finally, an entropy weight gray clustering evaluation method based on data mining analysis is proposed, and the performance of these methods is verified. The results show that the average score of the BRISQUE algorithm in Image Quality Assessment (IQA) is above 0.9, with an average running time of 0.1523 s. The combination of BRISQUE and SRCNN improves image quality significantly. After fine-tuning, the recognition accuracy of AlexNet on landslide images reaches about 80%. The evaluation method based on gray clustering can effectively determine the correlation between soil moisture content and slope angle and can thereby be applied to the analysis and evaluation of highway landslides. These results support the judgment and assessment of highway landslide conditions and can be extended to research on other geologic hazards.
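For readers who want to reproduce the enhancement stage, the sketch below shows the standard SRCNN 9-1-5 configuration in PyTorch. Any coupling to a BRISQUE scorer (implementations are available, for example, in OpenCV's contrib quality module) and the choice of selection threshold are illustrative assumptions rather than the authors' exact pipeline.

```python
import torch.nn as nn

class SRCNN(nn.Module):
    """Minimal SRCNN (9-1-5 configuration) operating on a single-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # patch extraction and representation
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),              # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),    # reconstruction
        )

    def forward(self, x):
        # x: (batch, 1, H, W) bicubically upsampled low-quality image in [0, 1]
        return self.net(x)
```

In such a pipeline, frames whose BRISQUE score falls below a chosen quality threshold would be passed through the SRCNN before being fed to the landslide classifier.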

2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations severely reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, different researchers have addressed the problem through different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors propose a deep-learning-based solution. They contribute a new whiteboard image data set and adopt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
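A minimal sketch of the kind of fully convolutional enhancement network such work relies on is given below; the residual formulation, width, and depth are illustrative assumptions, not the authors' published architectures.

```python
import torch
import torch.nn as nn

class WhiteboardEnhancer(nn.Module):
    """Illustrative enhancement CNN: predicts a residual correction that is
    added back to the degraded whiteboard image."""
    def __init__(self, channels=3, width=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(width, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, 3, H, W) degraded image in [0, 1]; output is the enhanced image
        return torch.clamp(x + self.body(x), 0.0, 1.0)
```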


Author(s):  
Luuk J. Oostveen ◽  
Frederick J. A. Meijer ◽  
Frank de Lange ◽  
Ewoud J. Smit ◽  
Sjoert A. Pegge ◽  
...  

Abstract
Objectives: To evaluate image quality and reconstruction times of a commercial deep learning reconstruction algorithm (DLR) compared to hybrid-iterative reconstruction (Hybrid-IR) and model-based iterative reconstruction (MBIR) algorithms for cerebral non-contrast CT (NCCT).
Methods: Cerebral NCCT acquisitions of 50 consecutive patients were reconstructed using DLR, Hybrid-IR and MBIR with a clinical CT system. Image quality, in terms of six subjective characteristics (noise, sharpness, grey-white matter differentiation, artefacts, natural appearance and overall image quality), was scored by five observers. As objective metrics of image quality, the noise magnitude and signal-difference-to-noise ratio (SDNR) of the grey and white matter were calculated. Mean values for the image quality characteristics scored by the observers were estimated using a general linear model to account for multiple readers. The estimated means for the reconstruction methods were pairwise compared. Calculated measures were compared using paired t tests.
Results: For all image quality characteristics, DLR images were scored significantly higher than MBIR images. Compared to Hybrid-IR, perceived noise and grey-white matter differentiation were better with DLR, while no difference was detected for the other image quality characteristics. Noise magnitude was lower for DLR compared to Hybrid-IR and MBIR (5.6, 6.4 and 6.2, respectively) and SDNR higher (2.4, 1.9 and 2.0, respectively). Reconstruction times were 27 s, 44 s and 176 s for Hybrid-IR, DLR and MBIR, respectively.
Conclusions: With a slight increase in reconstruction time, DLR results in lower noise and improved tissue differentiation compared to Hybrid-IR. Image quality of MBIR is significantly lower compared to DLR, with much longer reconstruction times.
Key Points:
• Deep learning reconstruction of cerebral non-contrast CT results in lower noise and improved tissue differentiation compared to hybrid-iterative reconstruction.
• Deep learning reconstruction of cerebral non-contrast CT results in better image quality in all aspects evaluated compared to model-based iterative reconstruction.
• Deep learning reconstruction only needs a slight increase in reconstruction time compared to hybrid-iterative reconstruction, while model-based iterative reconstruction requires considerably longer processing time.
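The objective comparison can be reproduced along the lines of the sketch below, assuming per-patient grey- and white-matter ROI statistics are already extracted; the ROI arrays and example values are hypothetical and only illustrate the SDNR definition and the paired t test.

```python
import numpy as np
from scipy import stats

def sdnr(grey_roi, white_roi):
    """Signal-difference-to-noise ratio between grey- and white-matter ROIs (HU values)."""
    signal_difference = abs(grey_roi.mean() - white_roi.mean())
    noise = white_roi.std(ddof=1)  # noise magnitude estimated in a homogeneous ROI
    return signal_difference / noise

# Hypothetical paired per-patient SDNR values for two reconstruction methods.
sdnr_dlr    = np.array([2.4, 2.5, 2.3, 2.6, 2.2])
sdnr_hybrid = np.array([1.9, 2.0, 1.8, 2.1, 1.9])

# Paired t test, as used in the study for the calculated measures.
t_stat, p_value = stats.ttest_rel(sdnr_dlr, sdnr_hybrid)
```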


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1052
Author(s):  
Leang Sim Nguon ◽  
Kangwon Seo ◽  
Jung-Hyun Lim ◽  
Tae-Jun Song ◽  
Sung-Hyun Cho ◽  
...  

Mucinous cystic neoplasms (MCN) and serous cystic neoplasms (SCN) account for a large portion of solitary pancreatic cystic neoplasms (PCN). In this study, we implemented a convolutional neural network (CNN) model using ResNet50 to differentiate between MCN and SCN. The training data were collected retrospectively from 59 MCN and 49 SCN patients from two different hospitals. Data augmentation was used to enhance the size and quality of the training datasets. A fine-tuning training approach was used, adopting a pre-trained model via transfer learning while training only selected layers. Testing of the network was conducted by varying the endoscopic ultrasonography (EUS) image sizes and positions to evaluate the network performance for differentiation. The proposed network model achieved up to 82.75% accuracy and an area under the curve (AUC) score of 0.88 (95% CI: 0.817–0.930). The performance of the implemented deep learning networks in decision-making using only EUS images is comparable to that of traditional manual decision-making using EUS images along with supporting clinical information. Gradient-weighted class activation mapping (Grad-CAM) confirmed that the network model learned the features from the cyst region accurately. This study demonstrates the feasibility of diagnosing MCN and SCN using a deep learning network model. Further improvement using more datasets is needed.
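A sketch of the fine-tuning setup is shown below, assuming a torchvision ResNet50 with an ImageNet-pretrained backbone; which layers are frozen is an assumption, since the abstract only states that selected layers were trained.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 and adapt it to the two-class task (MCN vs SCN).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the early layers; here only the last residual block and the new head are trained
# (an illustrative choice, not the authors' documented configuration).
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

# Replace the 1000-class ImageNet head with a binary classifier.
model.fc = nn.Linear(model.fc.in_features, 2)
```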


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

Abstract The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model built on similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. Fine-tuning TL DCNN HAR models for MS disease recognition tasks was shown to outperform previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. A lack of transparency of “black-box” deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions as relevance heatmaps using Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that distinguish healthy participants from people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished MS disability from healthy participants. Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, inform better disease management techniques, and enable the development of better therapeutic interventions.
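The transfer-learning step might look like the following sketch: a small 1D DCNN is pretrained on a HAR task over raw inertial windows, then its classification head is replaced and the model is fine-tuned on the healthy-versus-MS task. The layer sizes, window format, and checkpoint name are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class InertialDCNN(nn.Module):
    """Small 1D DCNN over raw tri-axial accelerometer windows (channels x samples)."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        # x: (batch, 3, n_samples) raw inertial window
        return self.classifier(self.features(x).squeeze(-1))

# Pretrain on a HAR task (e.g., 6 activity classes), then swap the head for the
# downstream two-class (healthy vs PwMS) task and fine-tune.
model = InertialDCNN(n_classes=6)
# model.load_state_dict(torch.load("har_pretrained.pt"))  # hypothetical checkpoint
model.classifier = nn.Linear(128, 2)
```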


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1136
Author(s):  
David Augusto Ribeiro ◽  
Juan Casavílca Silva ◽  
Renata Lopes Rosa ◽  
Muhammad Saadi ◽  
Shahid Mumtaz ◽  
...  

Light field (LF) imaging has multi-view properties that enable many applications, including auto-refocusing, depth estimation and 3D reconstruction of images, which are required particularly for intelligent transportation systems (ITSs). However, LF cameras can have limited angular resolution, which becomes a bottleneck in vision applications, and incorporating angular information is challenging because of disparities between the LF views. In recent years, different machine learning algorithms have been applied in both image processing and ITS research for different purposes. In this work, a Lightweight Deformable Deep Learning Framework is implemented that addresses the disparity problem in LF images. To this end, an angular alignment module and a soft activation function are incorporated into the Convolutional Neural Network (CNN). For performance assessment, the proposed solution is compared with recent state-of-the-art methods on different LF datasets, each with specific characteristics. Experimental results demonstrate that the proposed solution outperforms the other methods, and the image quality results obtained exceed those of state-of-the-art LF image reconstruction methods. Furthermore, the model has lower computational complexity, decreasing the execution time.
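The angular-alignment idea can be illustrated with torchvision's deformable convolution: offsets predicted from the centre and side views warp side-view features towards the centre view. This is a sketch of the mechanism under those assumptions, not the authors' exact module.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class AngularAlignment(nn.Module):
    """Illustrative angular-alignment block for light-field feature maps."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Per-pixel offsets are predicted from the concatenated centre and side features.
        self.offset_conv = nn.Conv2d(2 * channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, side_feat, centre_feat):
        # side_feat, centre_feat: (batch, channels, H, W) features of two sub-aperture views
        offsets = self.offset_conv(torch.cat([side_feat, centre_feat], dim=1))
        return self.deform(side_feat, offsets)  # side features warped towards the centre view
```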


2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Guangyi Yang ◽  
Xingyu Ding ◽  
Tian Huang ◽  
Kun Cheng ◽  
Weizheng Jin

Abstract The communications industry has changed remarkably with the development of fifth-generation cellular networks. Images, as an indispensable component of communication, have attracted wide attention, so finding a suitable approach to assess image quality is important. We therefore propose a deep learning model for image quality assessment (IQA) based on an explicit-implicit dual-stream network. Frequency-domain kurtosis features based on the wavelet transform represent the explicit features, and spatial features extracted by a convolutional neural network (CNN) represent the implicit features. On this basis, we constructed an explicit-implicit (EI) parallel deep learning model, namely the EI-IQA model. The EI-IQA model builds on VGGNet, which extracts the spatial-domain features; adding the parallel wavelet kurtosis frequency-domain features allows the number of VGGNet layers to be reduced, so the number of training parameters and the sample requirements decline. Cross-validation on different databases verified that the wavelet kurtosis feature fusion method based on deep learning extracts features more completely and generalises better, so the method better simulates the human visual perception system and its predictions are closer to subjective human judgments. The source code of the proposed EI-IQA model is available on GitHub: https://github.com/jacob6/EI-IQA.
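The explicit stream can be sketched as follows, assuming greyscale inputs: wavelet sub-bands are computed with PyWavelets and their kurtosis values form the frequency-domain feature vector that is later fused with the CNN (implicit) features. The wavelet family and decomposition depth here are assumptions; the authors' exact settings are in the linked repository.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_kurtosis_features(image, wavelet="db2", levels=3):
    """Explicit-stream features: kurtosis of each wavelet sub-band of a greyscale image."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = [kurtosis(coeffs[0].ravel())]            # approximation sub-band
    for cH, cV, cD in coeffs[1:]:                    # horizontal, vertical, diagonal details
        feats += [kurtosis(c.ravel()) for c in (cH, cV, cD)]
    return np.asarray(feats, dtype=np.float32)       # fused later with the CNN features
```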

