Fast data-driven learning of parallel MRI sampling patterns for large scale problems

2021
Vol 11 (1)
Author(s):  
Marcelo V. W. Zibetti ◽  
Gabor T. Herman ◽  
Ravinder R. Regatte

Abstract. In this study, a fast data-driven optimization approach, named bias-accelerated subset selection (BASS), is proposed for learning efficacious sampling patterns (SPs) with the purpose of reducing scan time in large-dimensional parallel MRI. BASS is applicable when Cartesian fully-sampled k-space measurements of specific anatomy are available for training and the reconstruction method for undersampled measurements is specified; such information is used to define the efficacy of any SP for recovering the values at the non-sampled k-space points. BASS produces a sequence of SPs with the aim of finding one of a specified size with (near) optimal efficacy. BASS was tested with five reconstruction methods for parallel MRI based on low-rankness and sparsity that allow a free choice of the SP. Three datasets were used for testing: two of high-resolution brain images ($$\text{T}_{2}$$-weighted and $$\text{T}_{1\rho}$$-weighted images, respectively) and another of knee images for quantitative mapping of the cartilage. The proposed approach has low computational cost and fast convergence; in the tested cases it obtained SPs up to 50 times faster than the currently best greedy approach. Reconstruction quality increased by up to 45% over that provided by variable-density and Poisson-disk SPs, for the same scan time. Optionally, the scan time can be nearly halved without loss of reconstruction quality. Quantitative MRI and prospective accelerated MRI results show improvements. Compared with greedy approaches, BASS rapidly learns effective SPs for various reconstruction methods, using larger SPs and larger datasets, enabling better selection of sampling-reconstruction pairs for specific MRI problems.
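The efficacy measure referred to above can be illustrated with a small sketch: retrospectively undersample the fully sampled Cartesian k-space with a candidate SP, run the chosen reconstruction method, and measure the error at the non-sampled k-space points. The snippet below is only that illustration, assuming a generic `reconstruct` callable and a normalized error; it is not the BASS update itself.

```python
import numpy as np

def sp_efficacy(kspace_full, sp_mask, reconstruct):
    """Score a candidate sampling pattern (SP) against fully sampled Cartesian data.

    kspace_full : (coils, ny, nx) complex array of fully sampled k-space
    sp_mask     : (ny, nx) boolean array, True where k-space is sampled
    reconstruct : callable (undersampled k-space, mask) -> estimated full k-space
    """
    kspace_under = kspace_full * sp_mask[None, :, :]      # retrospective undersampling
    kspace_est = reconstruct(kspace_under, sp_mask)       # reconstruction method under evaluation
    # error only at the non-sampled k-space points, as in the efficacy definition above
    err = kspace_est[:, ~sp_mask] - kspace_full[:, ~sp_mask]
    return np.linalg.norm(err) / np.linalg.norm(kspace_full[:, ~sp_mask])
```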

2020
Vol 53 (2)
pp. 314-325
Author(s):  
N. Axel Henningsson ◽  
Stephen A. Hall ◽  
Jonathan P. Wright ◽  
Johan Hektor

Two methods for reconstructing intragranular strain fields are developed for scanning three-dimensional X-ray diffraction (3DXRD). The methods are compared with a third approach where voxels are reconstructed independently of their neighbours [Hayashi, Setoyama & Seno (2017). Mater. Sci. Forum, 905, 157–164]. The 3D strain field of a tin grain, located within a sample of approximately 70 grains, is analysed and compared across reconstruction methods. Implicit assumptions of sub-problem independence, made in the independent voxel reconstruction method, are demonstrated to introduce bias and reduce reconstruction accuracy. It is verified that the two proposed methods remedy these problems by taking the spatial properties of the inverse problem into account. Improvements in reconstruction quality achieved by the two proposed methods are further supported by reconstructions using synthetic diffraction data.
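One generic way to see the contrast drawn above (the notation here is ours, not the paper's): with $\boldsymbol{\epsilon}_v$ the unknown strain in voxel $v$, $A_v$ the linear map from that voxel to the diffraction data, and $y$ the measurements, the voxel-independent approach solves many small problems of the form on the left, while a spatially coupled formulation solves the single joint problem on the right, so voxels that contribute to the same diffraction spots are no longer treated as independent:

$$\hat{\boldsymbol{\epsilon}}_v = \arg\min_{\boldsymbol{\epsilon}_v} \left\| A_v\,\boldsymbol{\epsilon}_v - y_v \right\|^2 \ \ \text{for each } v, \qquad \{\hat{\boldsymbol{\epsilon}}_v\} = \arg\min_{\{\boldsymbol{\epsilon}_v\}} \Big\| \sum_v A_v\,\boldsymbol{\epsilon}_v - y \Big\|^2 .$$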


Geophysics
2016
Vol 81 (2)
pp. R45-R55
Author(s):  
Espen Birger Raknes ◽  
Wiktor Weibull

In reverse time migration (RTM) or full-waveform inversion (FWI), forward and reverse time propagating wavefields are crosscorrelated in time to form either the image condition in RTM or the misfit gradient in FWI. The crosscorrelation condition requires both fields to be available at the same time instants. For large-scale 3D problems, it is not possible, in practice, to store snapshots of the wavefields during forward modeling due to extreme storage requirements. We have developed an approximate wavefield reconstruction method that uses particle velocity field recordings on the boundaries to reconstruct the forward wavefields during the computation of the reverse time wavefields. The method is computationally effective and requires less storage than similar methods. We have compared the reconstruction method to a boundary reconstruction method that uses particle velocity and stress fields at the boundaries and to the optimal checkpointing method. We have tested the methods on a 2D vertical transversely isotropic model and a large-scale 3D elastic FWI problem. Our tests revealed only small differences among the results of the three methods.
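For reference, the crosscorrelations mentioned above take the following standard forms in the simplest acoustic, constant-density setting (textbook notation, not the anisotropic or elastic expressions used in this study): with $u$ the forward-propagated source wavefield, $\lambda$ the reverse-time (adjoint) wavefield and $m$ the model parameter,

$$I(\mathbf{x}) = \sum_{t} u(\mathbf{x},t)\,\lambda(\mathbf{x},t), \qquad \frac{\partial J}{\partial m(\mathbf{x})} \propto \sum_{t} \frac{\partial^{2} u(\mathbf{x},t)}{\partial t^{2}}\,\lambda(\mathbf{x},t).$$

Both sums need $u$ and $\lambda$ at the same time instants; the boundary-based method recomputes $u$ backward in time from stored boundary recordings instead of keeping its snapshots in memory.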


2019
Vol 9 (18)
pp. 3758
Author(s):  
Xiang Li ◽  
Xiaojie Wang ◽  
Chengli Zhao ◽  
Xue Zhang ◽  
Dongyun Yi

Locating the source of a diffusion-like process is a fundamental and challenging problem in complex networks; solving it can help inhibit the outbreak of epidemics among humans, suppress the spread of rumors on the Internet, prevent cascading failures of power grids, etc. However, our ability to accurately locate the diffusion source is strictly limited by incomplete information of nodes and the inevitable randomness of the diffusion process. In this paper, we propose an efficient optimization approach via maximum likelihood estimation to locate the diffusion source in complex networks with limited observations. By modeling the informed times of the observers, we derive an optimal source localization solution for arbitrary trees and then extend it to general graphs via proper approximations. The numerical analyses on synthetic networks and real networks all indicate that our method is superior to several benchmark methods in terms of average localization accuracy, high-precision localization and approximate area localization. In addition, its low computational cost enables our method to be widely applied to the source localization problem in large-scale networks. We believe that our work can provide valuable insights into the interplay between information diffusion and source localization in complex networks.
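A minimal sketch of the general idea described above: given the informed times of a few observers and a per-hop delay model on the graph, score every candidate source by how well the predicted arrival times fit the observations and keep the best-scoring node. The Gaussian delay model, variable names and brute-force search below are illustrative assumptions, not the estimator derived in the paper (which handles arbitrary trees exactly and general graphs approximately).

```python
import numpy as np
import networkx as nx

def locate_source(G, informed_times, mu=1.0, sigma=0.5):
    """Rank candidate sources by a Gaussian likelihood of the observers' informed times.

    G              : networkx graph of the network
    informed_times : dict {observer node: time at which it was informed}
    mu, sigma      : assumed mean and std of the per-hop propagation delay
    """
    observers = list(informed_times)
    t = np.array([informed_times[o] for o in observers], dtype=float)
    best_node, best_ll = None, -np.inf
    for s in G.nodes():
        # hop distances from the candidate source to every observer
        d = np.array([nx.shortest_path_length(G, s, o) for o in observers], dtype=float)
        t0 = np.mean(t - mu * d)                       # ML estimate of the unknown start time
        resid = t - (t0 + mu * d)
        ll = -np.sum(resid ** 2) / (2.0 * sigma ** 2)  # log-likelihood up to a constant
        if ll > best_ll:
            best_node, best_ll = s, ll
    return best_node
```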


2015
Vol 8 (4)
pp. 1259-1273
Author(s):  
J. Ray ◽  
J. Lee ◽  
V. Yadav ◽  
S. Lefantzi ◽  
A. M. Michalak ◽  
...  

Abstract. Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smooth and model them with a multivariate Gaussian. When the field being estimated is spatially rough, multivariate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations give the sparse reconstruction procedure three properties that are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field to an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
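To make the idea of simultaneous model simplification and fitting concrete, here is a generic stagewise matching-pursuit loop with a crude non-negativity projection. This is not the modified StOMP of the paper (which incorporates the prior, enforces non-negativity, and handles wavelet fields inside irregular boundaries in its own way); the operator `H`, the threshold rule and the projection step are illustrative assumptions.

```python
import numpy as np

def sparse_nonneg_fit(H, y, n_stages=10, t=2.0):
    """Generic stagewise matching-pursuit sketch with a non-negativity projection.

    H : (n_obs, n_coeffs) linearized forward operator (e.g. transport model composed
        with a wavelet synthesis of the emission field)
    y : (n_obs,) observed concentrations
    """
    n_coeffs = H.shape[1]
    active = np.zeros(n_coeffs, dtype=bool)
    x = np.zeros(n_coeffs)
    for _ in range(n_stages):
        r = y - H @ x                                     # current data residual
        c = H.T @ r                                       # correlation with every column
        thresh = t * np.linalg.norm(r) / np.sqrt(len(y))  # stagewise selection threshold
        active |= np.abs(c) > thresh                      # add newly significant coefficients
        if not active.any():
            break                                         # nothing passes the threshold
        x_act, *_ = np.linalg.lstsq(H[:, active], y, rcond=None)
        x[:] = 0.0
        x[active] = np.maximum(x_act, 0.0)                # crude non-negativity projection
    return x
```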


2021
Vol 7 (3)
pp. 44
Author(s):  
Johannes Leuschner ◽  
Maximilian Schmidt ◽  
Poulami Somanya Ganguly ◽  
Vladyslav Andriiashen ◽  
Sophia Bethany Coban ◽  
...  

The reconstruction of computed tomography (CT) images is an active area of research. Following the rise of deep learning methods, many data-driven models have been proposed in recent years. In this work, we present the results of a data challenge that we organized, bringing together algorithm experts from different institutes to jointly work on the quantitative evaluation of several data-driven methods on two large, public datasets during a ten-day sprint. We focus on two applications of CT, namely low-dose CT and sparse-angle CT. This enables us to fairly compare different methods using standardized settings. As a general result, we observe that the deep learning-based methods are able to improve the reconstruction quality metrics in both CT applications, while the top-performing methods show only minor differences in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We further discuss a number of other important criteria that should be taken into account when selecting a method, such as the availability of training data, knowledge of the physical measurement model and the reconstruction speed.
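Since PSNR and SSIM are the headline metrics of the comparison above, a minimal per-slice evaluation snippet may be useful. It relies on the standard scikit-image implementations; the data-range convention and per-slice evaluation here are assumptions, not the challenge's exact protocol.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(reconstruction, ground_truth):
    """Compute PSNR and SSIM for one reconstructed CT slice against its reference."""
    data_range = float(ground_truth.max() - ground_truth.min())
    psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=data_range)
    ssim = structural_similarity(ground_truth, reconstruction, data_range=data_range)
    return psnr, ssim
```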


Author(s):  
Candida Mwisomba ◽  
Abdi T. Abdalla ◽  
Idrissa Amour ◽  
Florian Mkemwa ◽  
Baraka Maiseli

Abstract Compressed sensing allows recovery of image signals using a portion of the data – a technique that has revolutionized the field of through-the-wall radar imaging (TWRI). This technique can be accomplished through nonlinear methods, including convex programming and greedy iterative algorithms. However, such (nonlinear) methods increase the computational cost at the sensing and reconstruction stages, thus limiting the application of TWRI in delicate practical tasks (e.g. military operations and rescue missions) that demand fast response times. Motivated by this limitation, the current work introduces the use of a numerical optimization algorithm, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) method, in the TWRI framework to lower image reconstruction time. LBFGS, a well-known quasi-Newton algorithm, has traditionally been applied to solve large-scale optimization problems. Despite its potential, this algorithm has not been extensively applied in TWRI. Therefore, guided by LBFGS and using the Euclidean norm, we employed the regularized least-squares method to solve the cost function of the TWRI problem. Simulation results show that our method reduces the computational time by 87% relative to the classical method, even with an increased number of targets or large data volumes. Moreover, the results show that the proposed method remains robust when applied to noisy environments.
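A minimal sketch of the cost-function setup described above, using SciPy's L-BFGS implementation on a Tikhonov-regularized (Euclidean-norm) least-squares objective. It is not the authors' full TWRI pipeline; the measurement matrix `A`, the data `y`, the regularization weight and the real-valued formulation are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def reconstruct_scene(A, y, lam=0.1):
    """Regularized least-squares scene reconstruction solved with L-BFGS.

    A   : (n_meas, n_pixels) measurement matrix (antenna/wall propagation model)
    y   : (n_meas,) received radar measurements (real-valued here for simplicity)
    lam : Tikhonov regularization weight
    """
    def cost_and_grad(x):
        r = A @ x - y
        cost = 0.5 * np.dot(r, r) + 0.5 * lam * np.dot(x, x)   # ||Ax - y||^2/2 + lam*||x||^2/2
        grad = A.T @ r + lam * x
        return cost, grad

    x0 = np.zeros(A.shape[1])
    result = minimize(cost_and_grad, x0, jac=True, method="L-BFGS-B")
    return result.x
```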


Author(s):  
Osama A. Omer

An important part of any computed tomography (CT) system is the reconstruction method, which transforms the measured data into images. Reconstruction methods for CT can be either analytical or iterative. The analytical methods can be exact, by exact projector inversion, or non-exact, based on back projection (BP). The BP methods are attractive because of their simplicity and low computational cost, but they produce suboptimal images with respect to artifacts, resolution, and noise. This paper deals with improving the image quality of BP by using a super-resolution technique. Super-resolution can be beneficial in improving the image quality of many medical imaging systems without the need for significant hardware alteration. In this paper, we propose to reconstruct a high-resolution image from the measured signals in sinogram space instead of reconstructing low-resolution images and then post-processing these images to get a higher-resolution image.
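One way to read the last sentence is: back-project the measured sinogram directly onto a grid finer than the nominal detector resolution, instead of reconstructing at low resolution and interpolating afterwards. The sketch below is a plain (unfiltered) parallel-beam back-projection onto an upsampled grid, meant only to illustrate that reading; it is not the authors' super-resolution reconstruction.

```python
import numpy as np

def backproject_highres(sinogram, angles_deg, upsample=2):
    """Plain back-projection of a parallel-beam sinogram onto an upsampled image grid.

    sinogram   : (n_angles, n_detectors) array of projection data
    angles_deg : projection angles in degrees, one per sinogram row
    upsample   : refinement factor of the output grid relative to the detector count
    """
    n_angles, n_det = sinogram.shape
    n = n_det * upsample
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs)
    det_axis = np.linspace(-1.0, 1.0, n_det)
    image = np.zeros((n, n))
    for proj, ang in zip(sinogram, np.deg2rad(angles_deg)):
        s = X * np.cos(ang) + Y * np.sin(ang)          # detector coordinate of each pixel
        image += np.interp(s, det_axis, proj, left=0.0, right=0.0)
    return image / n_angles
```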


2021
Author(s):  
Ghazi D. AL-Qahtani ◽  
Noah Berlow

Abstract Multilateral wells are an evolution of horizontal wells in which several wellbore branches radiate from the main borehole. In the last two decades, multilateral wells have been increasingly utilized in producing hydrocarbon reservoirs. The main advantage of using such technology over conventional and single-bore wells comes from the additional access to reservoir rock by maximizing the reservoir contact with fewer resources. Today, multilateral wells are rapidly becoming more complex in both design and architecture (i.e., extended-reach wells, maximum reservoir contact, and extreme reservoir contact wells). Certain multilateral design templates prevail in the industry, such as fork and fishbone types, which tend to be populated throughout the reservoir of interest with no significant changes to the original architecture and, therefore, may not fully realize the reservoir's potential. Placement of optimal multilateral wells is a multivariable problem: it involves determining the best well locations and trajectories in a hydrocarbon reservoir with the ultimate objectives of maximizing productivity and recovery. The placement of multilateral wells can be subject to many constraints, such as the number of wells required, maximum length limits, and overall economics. This paper introduces a novel technology for the placement of multilateral wells in hydrocarbon reservoirs utilizing a transshipment network optimization approach. This method generates scenarios of multiple wells with different designs honoring the most favorable completion points in a reservoir. In addition, the algorithm was developed to find the most favorable locations and trajectories for the multilateral wells in both local and global terms. A partitioning algorithm is uniquely utilized to reduce the computational cost of the process. The proposed method will not only create different multilateral designs; it will also justify the trajectory of every borehole section generated. The innovative method is capable of constructing hundreds of multilateral wells with design variations in large-scale reservoirs. As the complexity of the reservoir (e.g., active forces that influence fluid mobility) and its heterogeneity dictate variability in performance in different areas of the reservoir, multilateral wells should be constructed to capture the most productive zones. The new method also allows different levels of branching for the laterals (i.e., laterals can emanate from the motherbore, from other laterals, or from subsequent branches). These features set the stage for a new generation of multilateral wells to achieve the most effective reservoir contact.
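Transshipment problems of the kind invoked above are ordinarily posed as minimum-cost network flows over supply, demand and pure transshipment nodes. The toy example below only illustrates that formulation with networkx on an invented graph of a heel, one junction and three candidate completion points (all node names, demands, costs and capacities are made up for illustration); it is not the paper's well-placement algorithm.

```python
import networkx as nx

# Toy transshipment network: the heel supplies flow that must reach candidate
# completion points through an intermediate junction (a pure transshipment node).
G = nx.DiGraph()
G.add_node("heel", demand=-3)                        # supplies 3 units of flow
for cp in ("cp1", "cp2", "cp3"):
    G.add_node(cp, demand=1)                         # each completion point absorbs 1 unit
G.add_node("junction", demand=0)                     # transshipment node: in-flow equals out-flow

# Arcs carry a per-unit "drilling cost" (weight) and a capacity.
G.add_edge("heel", "junction", weight=5, capacity=3)
G.add_edge("junction", "cp1", weight=2, capacity=1)
G.add_edge("junction", "cp2", weight=4, capacity=1)
G.add_edge("heel", "cp3", weight=7, capacity=1)

flow = nx.min_cost_flow(G)                           # cheapest feasible routing of the flow
print(flow)                                          # e.g. {'heel': {'junction': 2, 'cp3': 1}, ...}
```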


2005
Author(s):  
Manuchehr Soleimani

Electrical resistance tomography (ERT) has great potential to be used for multi-phase flow monitoring. Image reconstruction in ERT is computationally costly, so online monitoring is a difficult task. Linear reconstruction methods are currently used as fast methods, but image reconstruction is a nonlinear inverse problem and linear methods are not sufficient in many cases. The application of a recently proposed non-iterative inversion method for two-phase materials has been studied. The method is based on the monotonicity property of the resistance matrix in ERT and requires modest computational cost. In this paper we explain the application of this inversion method and demonstrate the capabilities and drawbacks of the method using 2D test examples. A major contribution of this paper is to optimize the software program for the inversion (by doing most of the computations offline) so that it can be used for online applications.
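The core of a monotonicity-based test can be summarized in a few lines: for each pixel, compare the measured resistance matrix with the matrix simulated for a small conductivity perturbation confined to that pixel, and use the sign of the eigenvalues of the difference to decide whether the pixel can belong to the inclusion. In the sketch below, the per-pixel perturbed matrices are assumed to be precomputed offline, in the spirit of the offline/online split described above; the names and the one-sided thresholding rule are illustrative assumptions, not the exact criterion of the method studied here.

```python
import numpy as np

def monotonicity_map(R_measured, R_perturbed, tol=1e-9):
    """Per-pixel sign test on the eigenvalues of R_perturbed[k] - R_measured.

    R_measured  : (m, m) measured resistance (transfer) matrix, symmetric by reciprocity
    R_perturbed : list of (m, m) matrices, one per pixel, each simulated offline with a
                  conductive perturbation confined to that pixel
    Returns a boolean array marking pixels consistent with a conductive inclusion.
    """
    inside = np.zeros(len(R_perturbed), dtype=bool)
    for k, R_k in enumerate(R_perturbed):
        eigvals = np.linalg.eigvalsh(R_k - R_measured)   # eigenvalues of the symmetric difference
        inside[k] = bool(np.all(eigvals >= -tol))        # one-sided (monotonicity) sign test
    return inside
```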

