random noise
Recently Published Documents


TOTAL DOCUMENTS: 2459 (last five years: 446)
H-INDEX: 64 (last five years: 3)

2022 ◽  
Vol 209 ◽  
pp. 109901
Author(s):  
Tie Zhong ◽  
Ming Cheng ◽  
Xintong Dong ◽  
Yue Li ◽  
Ning Wu


2022 ◽  
pp. 1-90
Author(s):  
David Lubo-Robles ◽  
Deepak Devegowda ◽  
Vikram Jayaram ◽  
Heather Bedle ◽  
Kurt J. Marfurt ◽  
...  

During the past two decades, geoscientists have used machine learning to produce more quantitative reservoir characterizations and to discover hidden patterns in their data. However, as the complexity of these models increases, understanding how sensitive their results are to the choice of input data becomes more challenging. Measuring how the model uses the input data to perform a classification or regression task provides an understanding of the data-to-geology relationships, which indicates how confident we can be in the prediction. To provide such insight, the ML community has developed tools such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). In this study, we train a random forest architecture using a suite of seismic attributes as input to differentiate between mass transport deposits (MTDs), salt, and conformal siliciclastic sediments in a Gulf of Mexico dataset. We apply SHAP to understand how the model uses the input seismic attributes to identify target seismic facies and examine how variations in the input, such as adding band-limited random noise or applying a Kuwahara filter, impact the model's predictions. During our global analysis, we find that attribute importance is dynamic and changes with the quality of the seismic attributes and the seismic facies analyzed. For our data volume and target facies, attributes measuring changes in dip and energy show the largest importance in all cases of our sensitivity analysis. We note that, to discriminate between the seismic facies, the ML architecture learns a "set of rules" in multi-attribute space, and that overlap between MTDs, salt, and conformal sediments may exist depending on the seismic attribute analyzed. Finally, using SHAP at the voxel scale, we examine why certain areas of interest were misclassified by the algorithm and perform an in-context interpretation to analyze how changes in the geology impact the model's predictions.
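The core of SHAP is the Shapley value from cooperative game theory: each attribute's importance is its weighted average marginal contribution over all attribute subsets. A minimal pure-Python sketch of that computation, with a hypothetical toy value function standing in for the trained random forest (the attribute names and payoffs are invented for illustration):

```python
from itertools import combinations
from math import factorial

def shapley_values(value, features):
    """Exact Shapley values by enumerating all feature coalitions."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Hypothetical "model payoff" over attribute subsets, with a small
# dip/energy interaction term (not from the paper).
def value(S):
    v = 0.0
    if "dip" in S:
        v += 2.0
    if "energy" in S:
        v += 1.5
    if "dip" in S and "energy" in S:
        v += 0.5
    if "texture" in S:
        v += 0.2
    return v

phi = shapley_values(value, ["dip", "energy", "texture"])
```

By the efficiency property, the values sum to the difference between the full-coalition and empty-coalition outputs. Exact enumeration is only feasible for a handful of attributes, which is why practical SHAP implementations rely on model-specific approximations.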



Author(s):  
K. N. Danilovskii ◽  
G. N. Loginov

This article discusses a new approach to processing lateral scanning logging-while-drilling data based on a combination of three-dimensional numerical modeling and convolutional neural networks. We prepared a dataset for training the neural networks; it contains realistic synthetic resistivity images and geoelectric layer boundary layouts obtained from the true values of their spatial orientation parameters. Using convolutional neural networks, we developed and implemented two algorithms: suppression of random noise and detection of layer boundaries on the resistivity images. The developed algorithms allow fast and accurate processing of large amounts of data, and, because the networks' architectures contain no fully connected layers, resistivity images of arbitrary length can be processed.
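The "arbitrary length" property comes from using only convolutional layers: a convolution slides the same kernel over the input, so the output size simply tracks the input size. A minimal 1D sketch of this idea in pure Python, with a hand-written smoothing kernel standing in for learned weights:

```python
def conv1d_same(signal, kernel):
    """'Same'-padded 1D convolution: output length equals input length,
    so the same filter applies to signals of arbitrary length."""
    k = len(kernel)
    pad = k // 2
    # replicate edge samples so the window is defined at the borders
    padded = [signal[0]] * pad + list(signal) + [signal[-1]] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal))]

smooth = [1 / 3, 1 / 3, 1 / 3]          # toy noise-suppression kernel
out = conv1d_same([1, 1, 10, 1, 1], smooth)
```

The same function accepts a 5-sample trace or a 5,000-sample trace unchanged; a fully connected layer, by contrast, would fix the input length at training time.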



2022 ◽  
Author(s):  
Siawoosh Mohammadi ◽  
Tobias Streubel ◽  
Leonie Klock ◽  
Antoine Lutti ◽  
Kerrin Pine ◽  
...  

Multi-Parameter Mapping (MPM) is a comprehensive quantitative neuroimaging protocol that enables estimation of four physical parameters (longitudinal and effective transverse relaxation rates R1 and R2*, proton density PD, and magnetization transfer saturation MTsat) that are sensitive to microstructural tissue properties such as iron and myelin content. Their capability to reveal microstructural brain differences, however, is tightly bound to controlling random noise and artefacts (e.g. caused by head motion) in the signal. Here, we introduce a method to estimate the local error of PD, R1, and MTsat maps that captures both noise and artefacts on a routine basis without requiring additional data. To investigate the method's sensitivity to random noise, we calculated the model-based signal-to-noise ratio (mSNR) and showed in measurements and simulations that it correlates linearly with an experimental raw-image-based SNR map. We found that the mSNR varied with MPM protocol, magnetic field strength (3T vs. 7T), and MPM parameter: it halved from PD to R1 and decreased from PD to MTsat by a factor of 3-4. Exploring the artefact sensitivity of the error maps, we generated robust MPM parameters using two successive acquisitions of each contrast, using the acquisition-specific errors to down-weight erroneous regions. The resulting robust MPM parameters showed reduced variability at the group level compared with their single-repeat or averaged counterparts. The error and mSNR maps may better inform power calculations by accounting for local variations in data quality across measurements. Code to compute the mSNR maps and robustly combined MPM maps is available in the open-source hMRI toolbox.
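A standard way to use acquisition-specific error maps to down-weight erroneous regions is inverse-variance weighting of the two repeats; whether the hMRI toolbox uses exactly this scheme is an assumption, so the sketch below is illustrative only:

```python
def robust_combine(map1, err1, map2, err2):
    """Combine two repeats of a parameter map voxel-by-voxel,
    weighting each by the inverse square of its local error so that
    voxels with larger error (noise or artefact) contribute less."""
    out = []
    for x1, e1, x2, e2 in zip(map1, err1, map2, err2):
        w1, w2 = 1.0 / e1 ** 2, 1.0 / e2 ** 2
        out.append((w1 * x1 + w2 * x2) / (w1 + w2))
    return out

# voxel 0: equal errors -> plain average; voxel 1: repeat 1 is three
# times noisier, so the result sits much closer to repeat 2
combined = robust_combine([1.0, 1.0], [1.0, 3.0], [3.0, 3.0], [1.0, 1.0])
```

When both errors are equal this reduces to simple averaging, which is why the robust maps can only match or improve on the averaged counterparts described above.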



Author(s):  
Qi-Feng Sun ◽  
Jia-Yue Xu ◽  
Han-Xiao Zhang ◽  
You-Xiang Duan ◽  
You-Kai Sun

Abstract: In this paper, we propose a random noise suppression and super-resolution reconstruction algorithm for seismic profiles based on generative adversarial networks, with the aim of reducing the influence of random noise and low resolution on seismic profiles. First, the algorithm uses a residual learning strategy to construct a denoising subnet that accurately separates the interference noise while protecting the effective signal. It then iterates a back-projection unit to reconstruct the high-resolution seismic section image, feeding the sampling error back to enhance the algorithm's super-resolution performance. To suit the characteristics of seismic data, the discriminator is designed as a fully convolutional neural network that uses larger convolution kernels to extract data features and continuously strengthens supervision of the generator's optimization during training. Results on synthetic and field data indicate that the algorithm improves the quality of seismic cross-sections, achieves a good signal-to-noise ratio, and further improves the resolution of the reconstructed cross-sectional image. Observations of geological structures such as fractures are also clearer.
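Iterative back-projection for super-resolution repeatedly downsamples the current high-resolution estimate, compares it with the observed low-resolution data, and back-projects the residual error into the estimate. A minimal 1D sketch under an assumed pair-averaging degradation model (the actual algorithm operates on 2D seismic sections inside a GAN, so this only shows the back-projection loop itself):

```python
def downsample(x):
    """Assumed degradation model: average adjacent sample pairs."""
    return [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]

def upsample(x):
    """Nearest-neighbour 2x upsampling (back-projection operator)."""
    out = []
    for v in x:
        out += [v, v]
    return out

def back_project(low_res, iters=5):
    """Refine a high-resolution estimate until downsampling it
    reproduces the observed low-resolution signal."""
    high = [0.0] * (2 * len(low_res))        # deliberately poor start
    for _ in range(iters):
        err = [l - s for l, s in zip(low_res, downsample(high))]
        high = [h + c for h, c in zip(high, upsample(err))]
    return high

high = back_project([2.0, 5.0])
```

The stopping criterion is consistency: downsampling the returned estimate reproduces the low-resolution observation, which is exactly the "sampling error" the iteration drives to zero.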



2022 ◽  
Vol 8 ◽  
Author(s):  
Runnan He ◽  
Shiqi Xu ◽  
Yashu Liu ◽  
Qince Li ◽  
Yang Liu ◽  
...  

Medical imaging provides a powerful tool for medical diagnosis. In computer-aided diagnosis and treatment of liver cancer based on medical imaging, accurate segmentation of the liver region from abdominal CT images is an important step. However, owing to liver tissue defects and limitations of the CT imaging process, the gray level of the liver region in CT images is heterogeneous and the boundary between the liver and adjacent tissues and organs is blurred, which makes liver segmentation an extremely difficult task. In this study, aiming to solve the problem of the low segmentation accuracy of the original 3D U-Net network, an improved network based on the three-dimensional (3D) U-Net is proposed. Moreover, to address the shortage of training data caused by the difficulty of acquiring labeled 3D data, the improved 3D U-Net is embedded into a generative adversarial network (GAN) framework, which establishes a semi-supervised 3D liver segmentation optimization algorithm. Finally, considering the poor quality of 3D abdominal fake images generated from random noise inputs, a deep convolutional neural network (DCNN) based on a feature restoration method is designed to generate more realistic fake images. Testing the proposed algorithm on the LiTS-2017 and KiTS19 datasets shows that the proposed semi-supervised 3D liver segmentation method greatly improves liver segmentation performance, achieving a Dice score of 0.9424 and outperforming other methods.
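The Dice score reported above measures overlap between the predicted and ground-truth segmentations: twice the intersection divided by the sum of the two region sizes. A minimal sketch over flattened binary masks:

```python
def dice(pred, truth):
    """Dice coefficient for binary masks given as flat 0/1 sequences:
    2 * |P ∩ T| / (|P| + |T|), ranging from 0 (no overlap) to 1."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    return 2 * inter / (sum(pred) + sum(truth))

score = dice([1, 1, 0, 0], [1, 0, 0, 0])   # 2*1 / (2+1)
```

A 3D volume is scored the same way after flattening its voxels, which is how figures like the 0.9424 above are computed.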



2022 ◽  
Vol 14 (2) ◽  
pp. 263
Author(s):  
Haixia Zhao ◽  
Tingting Bai ◽  
Zhiqiang Wang

Seismic field data are usually contaminated by random or complex noise, which seriously degrades data quality and contaminates seismic imaging and seismic interpretation. Improving the signal-to-noise ratio (SNR) of seismic data has always been a key step in seismic data processing. Deep learning approaches have been successfully applied to suppress seismic random noise. Training examples are essential in deep learning methods, especially for geophysical problems, where complete training data are not easy to acquire because of the high cost of acquisition. In this work, we propose a deep learning method pre-trained on natural images to suppress seismic random noise, drawing on ideas from transfer learning. Our network contains pre-trained and post-trained stages: the former is trained on natural images to obtain preliminary denoising results, while the latter is trained on a small amount of seismic images to fine-tune the denoising via semi-supervised learning and enhance the continuity of geological structures. Results on four types of synthetic seismic data and six field datasets demonstrate that our network performs well in seismic random noise suppression in terms of both quantitative metrics and visual quality.
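The SNR used to judge denoising quality can be computed as the ratio of signal energy to residual-noise energy in decibels. A small sketch, assuming a clean reference trace is available (as it is for synthetic data; the exact metric used in the paper is not specified here):

```python
import math

def snr_db(clean, denoised):
    """SNR in dB: 10 * log10(signal energy / residual-noise energy),
    where the residual is the difference from the clean reference."""
    sig = sum(c * c for c in clean)
    noise = sum((c - d) ** 2 for c, d in zip(clean, denoised))
    return 10 * math.log10(sig / noise)

# toy example: residual energy 1 against signal energy 25
score = snr_db([3.0, 4.0], [3.0, 3.0])
```

A perfect reconstruction has infinite SNR, so in practice the metric is compared across methods rather than against an absolute target.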



2022 ◽  
Vol 22 (1&2) ◽  
pp. 1-16
Author(s):  
Artur Czerwinski

In this article, we investigate the problem of entanglement characterization by polarization measurements combined with maximum likelihood estimation (MLE). A realistic scenario is considered with measurement results distorted by random experimental errors. In particular, by imposing unitary rotations acting on the measurement operators, we can test the performance of the tomographic technique versus the amount of noise. Then, dark counts are introduced to explore the efficiency of the framework in a multi-dimensional noise scenario. The concurrence is used as a figure of merit to quantify how well entanglement is preserved through noisy measurements. Quantum fidelity is computed to quantify the accuracy of state reconstruction. The results of numerical simulations are depicted on graphs and discussed.
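For a pure two-qubit state a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩, the concurrence reduces to C = 2|ad − bc|, which is the figure of merit mentioned above for quantifying how well entanglement survives the noisy measurements. A minimal sketch with real amplitudes for simplicity (mixed states, as produced by MLE reconstruction, require the full Wootters formula instead):

```python
def concurrence_pure(amps):
    """Concurrence of a pure two-qubit state with amplitudes
    (a, b, c, d) in the computational basis: C = 2|a*d - b*c|."""
    a, b, c, d = amps
    return 2 * abs(a * d - b * c)

bell = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]   # (|00> + |11>) / sqrt(2)
c_bell = concurrence_pure(bell)            # maximally entangled
c_prod = concurrence_pure([1.0, 0.0, 0.0, 0.0])  # product state
```

C ranges from 0 for product states to 1 for maximally entangled states such as the Bell state, so a drop in C under simulated dark counts directly measures entanglement degradation.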



2022 ◽  
Vol 29 (1) ◽  
Author(s):  
Fucheng Yu ◽  
Feixiang Wang ◽  
Ke Li ◽  
Guohao Du ◽  
Biao Deng ◽  
...  

Rodents are used extensively as animal models for the preclinical investigation of microvascular-related diseases. However, motion artifacts in currently available imaging methods preclude real-time observation of microvessels in vivo. In this paper, a pixel temporal averaging (PTA) method that enables real-time imaging of microvessels in the mouse brain in vivo is described. Experiments using live mice demonstrated that PTA efficiently eliminated motion artifacts and random noise, resulting in significant improvements in contrast-to-noise ratio. The time needed for image reconstruction using PTA with a normal computer was 250 ms, highlighting the capability of the PTA method for real-time angiography. In addition, experiments with less than one-quarter of photon flux in conventional angiography verified that motion artifacts and random noise were suppressed and microvessels were successfully identified using PTA, whereas conventional temporal subtraction and averaging methods were ineffective. Experiments performed with an X-ray tube verified that the PTA method could also be successfully applied to microvessel imaging of the mouse brain using a laboratory X-ray source. In conclusion, the proposed PTA method may facilitate the real-time investigation of cerebral microvascular-related diseases using small animal models.
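Pixel temporal averaging in its simplest form averages each pixel across a stack of frames, so zero-mean random noise cancels while the static vessel signal is preserved. A minimal sketch on nested lists (the published method additionally handles motion artifacts, which this sketch omits):

```python
def pixel_temporal_average(frames):
    """Average each pixel over a stack of equally sized frames.
    Zero-mean random noise shrinks by roughly sqrt(len(frames))."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

# two 1x2 frames: noise fluctuates around the true pixel values
avg = pixel_temporal_average([[[1, 2]], [[3, 4]]])
```

Because the operation is a single pass over the frame stack, it is cheap enough for the real-time reconstruction budget (250 ms) reported above.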



2021 ◽  
Vol 35 (6) ◽  
pp. 447-456
Author(s):  
Preet Kamal Kaur ◽  
Kanwal Preet Singh Attwal ◽  
Harmandeep Singh

With continuous advancements in Information and Communication Technology, healthcare data are stored in electronic form and accessed remotely as required. However, this brings negative impacts such as unauthorized access, misuse, and theft of data, which violate patients' privacy. Sensitive information, if not protected, can become the basis for linkage attacks. This paper proposes an improved privacy-preserving data classification system for the Chronic Kidney Disease dataset. The focus of the work is to predict patients' disease while preventing privacy breaches of their sensitive information. To accomplish this goal, a metaheuristic Firefly Optimization Algorithm (FOA) is deployed for random noise generation (instead of fixed noise), and this noise is added to the least significant bits of the sensitive data. Then, a random forest classifier is applied to both the original and perturbed datasets to predict the disease. Even after perturbation, the technique preserves the required significance of the prediction results by maintaining a balance between the utility and security of the data. To validate the results, the proposed method is compared with existing techniques on the basis of various evaluation parameters. Results show that the proposed technique is suitable for healthcare applications where both privacy protection and accurate prediction are necessary.
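The perturbation step replaces the least significant bits of each sensitive integer value with noise. In the paper the noise comes from the Firefly Optimization Algorithm, but the sketch below substitutes the standard library's PRNG, so it illustrates only the LSB mechanics, not FOA itself:

```python
import random

def perturb_lsb(values, bits=2, seed=42):
    """Overwrite the lowest `bits` bits of each integer with random
    noise, leaving the high-order bits (and hence the approximate
    magnitude used by the classifier) intact."""
    rng = random.Random(seed)
    mask = (1 << bits) - 1
    return [(v & ~mask) | rng.randint(0, mask) for v in values]

readings = [120, 64, 255]            # hypothetical sensitive values
perturbed = perturb_lsb(readings)
```

Because only the low-order bits change, each value moves by at most 2^bits − 1, which is the mechanism that lets the random forest retain predictive accuracy on the perturbed data.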


