Atmospheric inverse modeling with known physical bounds: an example from trace gas emissions

2014 ◽  
Vol 7 (1) ◽  
pp. 303-315 ◽  
Author(s):  
S. M. Miller ◽  
A. M. Michalak ◽  
P. J. Levi

Abstract. Many inverse problems in the atmospheric sciences involve parameters with known physical constraints. Examples include nonnegativity (e.g., emissions of some urban air pollutants) or upward limits implied by reaction or solubility constants. However, probabilistic inverse modeling approaches based on Gaussian assumptions cannot incorporate such bounds and thus often produce unrealistic results. The atmospheric literature lacks consensus on the best means to overcome this problem, and existing atmospheric studies rely on a limited number of the possible methods with little examination of the relative merits of each. This paper investigates the applicability of several approaches to bounded inverse problems. A common method, transforming the data, is found to unrealistically skew estimates in the example application examined here. The method of Lagrange multipliers and two Markov chain Monte Carlo (MCMC) methods yield more realistic and accurate results. In general, the examined MCMC approaches produce the most realistic results but can require substantial computational time. Lagrange multipliers offer an appealing option for large, computationally intensive problems when exact uncertainty bounds are less central to the analysis. A synthetic data inversion of US anthropogenic methane emissions illustrates the strengths and weaknesses of each approach.
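
To make the bounded-MCMC option concrete, here is a minimal Python sketch (not the authors' code) of random-walk Metropolis sampling for a small nonnegativity-constrained linear inversion. The toy forward model y = Hs + noise, its dimensions, and the Gaussian prior are illustrative assumptions; proposals outside the bound have zero posterior density and are rejected, which keeps the chain exact for the truncated target.

import numpy as np

rng = np.random.default_rng(0)

n_obs, n_flux = 50, 10
H = rng.random((n_obs, n_flux))                 # toy transport Jacobian
s_true = np.abs(rng.normal(1.0, 0.5, n_flux))   # true, nonnegative fluxes
sigma = 0.1
y = H @ s_true + rng.normal(0.0, sigma, n_obs)

s_prior, tau = np.ones(n_flux), 1.0             # Gaussian prior mean and std

def log_post(s):
    """Unnormalized log posterior, truncated to s >= 0."""
    if np.any(s < 0.0):
        return -np.inf                          # zero density outside the bound
    r = y - H @ s
    return -0.5 * (r @ r) / sigma**2 - 0.5 * np.sum((s - s_prior)**2) / tau**2

# Random-walk Metropolis: out-of-bounds proposals are simply rejected,
# which is exact for the truncated target because the proposal is symmetric.
n_iter, step = 50_000, 0.05
s = np.ones(n_flux)
lp = log_post(s)
samples = np.empty((n_iter, n_flux))
for i in range(n_iter):
    prop = s + step * rng.normal(size=n_flux)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        s, lp = prop, lp_prop
    samples[i] = s

burn = n_iter // 5
print("posterior mean:", samples[burn:].mean(axis=0).round(2))
print("true fluxes:   ", s_true.round(2))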

2013 ◽  
Vol 6 (3) ◽  
pp. 4531-4562 ◽
Author(s):  
S. M. Miller ◽  
A. M. Michalak ◽  
P. J. Levi

Abstract. Many inverse problems in the atmospheric sciences involve parameters with known physical constraints. Examples include nonnegativity (e.g., emissions of some urban air pollutants) or upward limits implied by reaction or solubility constants. However, probabilistic inverse modeling approaches based on Gaussian assumptions cannot incorporate such bounds and thus often produce unrealistic results. The atmospheric literature lacks consensus on the best means to overcome this problem, and existing atmospheric studies rely on a limited number of the possible methods with little examination of the relative merits of each. This paper investigates the applicability of several approaches to bounded inverse problems and is also the first application of Markov chain Monte Carlo (MCMC) methods to the estimation of atmospheric trace gas fluxes. The approaches discussed here are broadly applicable. A common method, transforming the data, is found to unrealistically skew estimates in the example application examined here. The method of Lagrange multipliers and two MCMC methods yield more realistic and accurate results. In general, the examined MCMC approaches produce the most realistic results but can require substantial computational time. Lagrange multipliers offer an appealing alternative for large, computationally intensive problems when exact uncertainty bounds are less central to the analysis. A synthetic data inversion of US anthropogenic methane emissions illustrates the strengths and weaknesses of each approach.
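
As a companion sketch for this discussion version, the snippet below contrasts two of the options named above on a toy problem: enforcing the bound directly through the Lagrange/KKT conditions (scipy's bounded least-squares solver stands in for the paper's method) versus a log transformation, which can never place a source exactly at zero. The toy matrix and flux values are assumptions, not the paper's methane inversion.

import numpy as np
from scipy.optimize import lsq_linear, minimize

rng = np.random.default_rng(1)
n_obs, n_flux = 40, 8
H = rng.random((n_obs, n_flux))
s_true = np.array([0.0, 0.0, 0.5, 1.0, 1.5, 0.2, 0.0, 0.8])  # zeros lie on the bound
y = H @ s_true + rng.normal(0.0, 0.05, n_obs)

# Option 1: nonnegative least squares via the Lagrange/KKT conditions.
bounded = lsq_linear(H, y, bounds=(0.0, np.inf)).x

# Option 2: log transformation s = exp(z), optimized over unconstrained z.
def misfit(z):
    r = y - H @ np.exp(np.clip(z, -20.0, 20.0))  # clip guards against overflow
    return 0.5 * r @ r

logfit = np.exp(minimize(misfit, np.zeros(n_flux)).x)

print("true fluxes:  ", s_true)
print("KKT bounded:  ", bounded.round(3))
print("log-transform:", logfit.round(3))  # strictly positive, skewed near zero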


2020 ◽  
Vol 12 (5) ◽  
pp. 851 ◽  
Author(s):  
Jiena He ◽  
J. Ronald Eastman

Many aspects of the earth system are known to have preferred patterns of variability, variously known in the atmospheric sciences as modes or teleconnections. Approaches to discovering these patterns have included principal components analysis and empirical orthogonal teleconnection (EOT) analysis. The latter is very effective but is computationally intensive. Here, we present a sequential autoencoder for teleconnection analysis (SATA). Like EOT, it discovers teleconnections sequentially, with subsequent analyses being based on residual series. However, unlike EOT, SATA uses a basic linear autoencoder as the primary tool for analysis. An autoencoder is an unsupervised neural network that learns an efficient neural representation of input data. With SATA, the input is an image time series and the neural representation is a unidimensional time series. SATA then locates the 0.5% of locations with the strongest correlation with the neural representation and averages their temporal vectors to characterize the teleconnection. Evaluation of the procedure showed that it is several orders of magnitude faster than other approaches to EOT, produces teleconnection patterns that are more strongly correlated to well-known teleconnections, and is particularly effective in finding teleconnections with multiple centers of action (such as dipoles).
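
A minimal numpy sketch of the SATA recipe as the abstract describes it, under assumed data shapes and training settings: a one-unit linear autoencoder compresses the image time series to a single temporal series, and the 0.5% of grid cells most correlated with that series are averaged to characterize the teleconnection. Random data stands in for a real climate field here, so no physical mode will emerge.

import numpy as np

rng = np.random.default_rng(2)
n_time, n_space = 200, 1000
X = rng.normal(size=(n_time, n_space))   # stand-in image time series
X -= X.mean(axis=0)                      # remove the temporal mean per cell

# Linear autoencoder: encode h = X w, decode X_hat = outer(h, v).
w = rng.normal(scale=0.01, size=n_space)
v = rng.normal(scale=0.01, size=n_space)
lr = 1e-3
for epoch in range(500):                 # full-batch gradient descent
    h = X @ w                            # (n_time,) neural representation
    err = np.outer(h, v) - X             # reconstruction error
    grad_v = err.T @ h / n_time
    grad_w = X.T @ (err @ v) / n_time
    w -= lr * grad_w
    v -= lr * grad_v

h = X @ w                                # final unidimensional series

# Correlate every grid cell with h and average the strongest 0.5%.
hs = (h - h.mean()) / h.std()
Xs = X / X.std(axis=0)                   # X is already centered
corr = Xs.T @ hs / n_time                # per-cell Pearson correlation
top = np.argsort(np.abs(corr))[-max(1, n_space // 200):]
teleconnection = X[:, top].mean(axis=1)  # temporal pattern of the mode
print("centers of action (cell indices):", top)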


2011 ◽  
Vol 11 (04) ◽  
pp. 571-587 ◽  
Author(s):  
WILLIAM ROBSON SCHWARTZ ◽  
HELIO PEDRINI

Fractal image compression is one of the most promising techniques for image compression due to advantages such as resolution independence and fast decompression. It exploits the fact that natural scenes present self-similarity to remove redundancy and obtain high compression rates with less quality degradation than traditional compression methods. The main drawback of fractal compression is its computationally intensive encoding process, due to the need to search for regions with high similarity in the image. Several approaches have been developed to reduce the computational cost of locating similar regions. In this work, we propose a method based on robust feature descriptors to speed up the encoding time. The use of robust features provides more discriminative and representative information for regions of the image. When the regions are better represented, the search for similar parts of the image can be restricted to the most likely matching candidates, which reduces the computational time. Our experimental results show that the use of robust feature descriptors reduces the encoding time while keeping high compression rates and reconstruction quality.
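
The pruning idea can be sketched independently of the paper's specific descriptors. The toy below (an assumption throughout, not the authors' implementation) computes a cheap brightness- and contrast-invariant feature per block and uses a k-d tree so that each range block is scored against only a few candidate domain blocks instead of all of them; the matching score omits the contrast/brightness fitting of a full fractal encoder.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
img = rng.random((128, 128))              # stand-in image
B = 8                                     # range block size

def blocks(a, size, step):
    out, pos = [], []
    for i in range(0, a.shape[0] - size + 1, step):
        for j in range(0, a.shape[1] - size + 1, step):
            out.append(a[i:i+size, j:j+size])
            pos.append((i, j))
    return np.array(out), pos

ranges, _ = blocks(img, B, B)             # non-overlapping range blocks
doms, _ = blocks(img, 2 * B, B)           # overlapping domain blocks
doms = doms.reshape(-1, B, 2, B, 2).mean(axis=(2, 4))  # 2x2 downsample

def descriptor(blk):
    """Cheap, brightness/contrast-invariant feature vector (6 numbers)."""
    z = blk - blk.mean()
    z = z / (np.linalg.norm(z) + 1e-12)
    quad = z.reshape(2, B // 2, 2, B // 2).mean(axis=(1, 3)).ravel()
    edges = [np.abs(np.diff(z, axis=0)).mean(), np.abs(np.diff(z, axis=1)).mean()]
    return np.concatenate([quad, edges])

tree = cKDTree(np.array([descriptor(d) for d in doms]))

# Score each range block against only its k nearest domains by descriptor.
k = 5
matches = []
for r in ranges:
    _, idx = tree.query(descriptor(r), k=k)
    errs = [np.sum(((doms[i] - doms[i].mean()) - (r - r.mean()))**2) for i in idx]
    matches.append(idx[int(np.argmin(errs))])

print(f"matched {len(ranges)} range blocks against {k} of {len(doms)} domains each")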


Processes ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1184 ◽
Author(s):  
Geraldine Cáceres Sepulveda ◽  
Silvia Ochoa ◽  
Jules Thibault

It is paramount to optimize the performance of a chemical process in order to maximize its yield and productivity and to minimize the production cost and the environmental impact. The various objectives in optimization are often in conflict, so the best compromise solution must be determined, usually using a representative model of the process. However, solving first-principles models can be computationally intensive, making model-based multi-objective optimization (MOO) a time-consuming task. In this work, a methodology to perform the multi-objective optimization of a two-reactor system for the production of acrylic acid, using artificial neural networks (ANNs) as meta-models, is proposed in an effort to reduce the computational time required to circumscribe the Pareto domain. The meta-model showed good agreement between the data and the model-predicted values, capturing the relationships between the eight decision variables and the nine performance criteria of the process. Once the meta-model was built, the Pareto domain was circumscribed using a genetic algorithm (GA) and ranked with the net flow method (NFM). Using the ANN surrogate model, the optimization time decreased by a factor of 15.5.
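
A minimal sketch of the surrogate-assisted pattern described here: a small neural network is trained to mimic an "expensive" process model, then the cheap surrogate is queried many times to trace the Pareto front. The toy two-objective model, the network size, the use of random search in place of the genetic algorithm, and the omission of net-flow ranking are all simplifying assumptions.

import numpy as np

rng = np.random.default_rng(4)

def process(x):
    """Stand-in for the expensive simulation: two objectives to minimize."""
    f1 = np.sum((x - 0.3)**2, axis=-1)
    f2 = np.sum((x - 0.7)**2, axis=-1)
    return np.stack([f1, f2], axis=-1)

# Train a one-hidden-layer tanh network on a modest number of model runs.
n_train, n_in, n_hid, n_out = 300, 4, 16, 2
X = rng.random((n_train, n_in))
Y = process(X)
W1 = rng.normal(scale=0.5, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, n_out)); b2 = np.zeros(n_out)
lr = 0.05
for _ in range(3000):                      # full-batch gradient descent
    Hid = np.tanh(X @ W1 + b1)
    P = Hid @ W2 + b2
    dP = (P - Y) / n_train                 # gradient of mean squared error
    dH = (dP @ W2.T) * (1 - Hid**2)
    W2 -= lr * Hid.T @ dP; b2 -= lr * dP.sum(axis=0)
    W1 -= lr * X.T @ dH;   b1 -= lr * dH.sum(axis=0)

def surrogate(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Query the cheap surrogate heavily (the source of the reported speedup)
# and keep the nondominated points; random search stands in for the GA.
cand = rng.random((5000, n_in))
obj = surrogate(cand)
keep = np.ones(len(obj), dtype=bool)
for i in range(len(obj)):
    if keep[i]:
        dominated = np.all(obj >= obj[i], axis=1) & np.any(obj > obj[i], axis=1)
        keep &= ~dominated
print(f"{keep.sum()} Pareto-optimal candidates out of {len(cand)}")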


Geophysics ◽  
2001 ◽  
Vol 66 (1) ◽  
pp. 174-187 ◽  
Author(s):  
William Rodi ◽  
Randall L. Mackie

We investigate a new algorithm for computing regularized solutions of the 2-D magnetotelluric inverse problem. The algorithm employs a nonlinear conjugate gradients (NLCG) scheme to minimize an objective function that penalizes data residuals and second spatial derivatives of resistivity. We compare this algorithm theoretically and numerically to two previous algorithms for constructing such “minimum-structure” models: the Gauss-Newton method, which solves a sequence of linearized inverse problems and has been the standard approach to nonlinear inversion in geophysics, and an algorithm due to Mackie and Madden, which solves a sequence of linearized inverse problems incompletely using a (linear) conjugate gradients technique. Numerical experiments involving synthetic and field data indicate that the two algorithms based on conjugate gradients (NLCG and Mackie-Madden) are more efficient than the Gauss-Newton algorithm in terms of both computer memory requirements and CPU time needed to find accurate solutions to problems of realistic size. This owes largely to the fact that the conjugate gradients-based algorithms avoid two computationally intensive tasks that are performed at each step of a Gauss-Newton iteration: calculation of the full Jacobian matrix of the forward modeling operator, and complete solution of a linear system on the model space. The numerical tests also show that the Mackie-Madden algorithm reduces the objective function more quickly than our new NLCG algorithm in the early stages of minimization, but NLCG is more effective in the later computations. To help understand these results, we describe the Mackie-Madden and new NLCG algorithms in detail and couch each as a special case of a more general conjugate gradients scheme for nonlinear inversion.
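
For readers who want the skeleton of the method, here is a minimal numpy sketch of the objective and iteration described above: a data misfit plus a second-derivative roughness penalty, minimized by Polak-Ribiere NLCG with a backtracking line search. The mildly nonlinear 1-D toy forward model is an illustrative stand-in for the 2-D magnetotelluric operator, not the authors' code.

import numpy as np

rng = np.random.default_rng(5)
n_m, n_d = 60, 40
A = rng.normal(size=(n_d, n_m)) / np.sqrt(n_m)

def forward(m):
    return A @ np.exp(m)                  # mildly nonlinear toy forward model

m_true = 0.5 * np.sin(np.linspace(0, 3 * np.pi, n_m))
d_obs = forward(m_true) + 0.01 * rng.normal(size=n_d)

# Second-difference operator penalizing spatial roughness of the model.
L = (np.diag(np.full(n_m, -2.0)) + np.diag(np.ones(n_m - 1), 1)
     + np.diag(np.ones(n_m - 1), -1))[1:-1]
lam = 0.1

def phi(m):                               # objective: misfit + regularization
    r = d_obs - forward(m)
    return r @ r + lam * np.sum((L @ m)**2)

def grad(m):
    r = d_obs - forward(m)
    J = A * np.exp(m)                     # Jacobian of the toy forward model
    return -2.0 * J.T @ r + 2.0 * lam * L.T @ (L @ m)

m = np.zeros(n_m)
g = grad(m)
p = -g                                    # initial steepest-descent direction
for it in range(200):
    if g @ p >= 0.0:                      # safeguard: restart if not descent
        p = -g
    step, f0 = 1.0, phi(m)
    while step > 1e-12 and phi(m + step * p) > f0 + 1e-4 * step * (g @ p):
        step *= 0.5                       # backtracking (Armijo) line search
    m = m + step * p
    g_new = grad(m)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+
    p = -g_new + beta * p
    g = g_new

print("data misfit:", np.sum((d_obs - forward(m))**2))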


Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. WB219-WB231 ◽  
Author(s):  
P. Kaikkonen ◽  
S. P. Sharma ◽  
S. Mittal

Three-dimensional linearized nonlinear electromagnetic inversion is developed for revealing the subsurface conductivity structure using isolated very low frequency (VLF) and VLF-resistivity anomalies due to conductors that may be arbitrarily oriented with respect to the measuring profiles and the VLF transmitter. We described the 3D model using a set of geometric and physical parameters. These model parameters were then optimized (parametric inversion) to obtain the estimates that best fit the observations. Two VLF transmitters, i.e., the [Formula: see text], [Formula: see text] (“E”) and the [Formula: see text], [Formula: see text] (“H”) polarizations, respectively, can be considered jointly in the inversion. Inversions of several noise-free and noisy synthetic data sets showed that the model parameters were estimated reliably and that the approach performed well. The inversion procedure also worked well for field data; the reliability and validity of the results were checked using data from a shear zone associated with uranium mineralization.
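
A heavily simplified sketch of the parametric-inversion step: a handful of geometric and physical parameters of an assumed body are adjusted until a forward model reproduces the observed profile. The dipole-like toy anomaly and scipy's least_squares solver are placeholders for the paper's 3D VLF forward model and optimizer.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
xs = np.linspace(-100.0, 100.0, 81)        # measuring profile (m)

def anomaly(params, x):
    """Toy response from body position x0, depth z, and strength c."""
    x0, z, c = params
    return c * z / ((x - x0)**2 + z**2)    # simple dipole-like shape

p_true = np.array([10.0, 20.0, 500.0])
data = anomaly(p_true, xs) + 0.01 * rng.normal(size=xs.size)

fit = least_squares(lambda p: anomaly(p, xs) - data,
                    x0=[0.0, 10.0, 100.0],                   # initial guess
                    bounds=([-100.0, 1.0, 0.0], [100.0, 100.0, 1e4]))
print("estimated (x0, depth, strength):", fit.x.round(2))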


2013 ◽  
Vol 2013 ◽  
pp. 1-16 ◽  
Author(s):  
Anuj V. Prakash ◽  
Anwesha Chaudhury ◽  
Rohit Ramachandran

Computer-aided modeling and simulation are a crucial step in developing, integrating, and optimizing unit operations and, subsequently, entire processes in the chemical/pharmaceutical industry. This study details two methods of reducing the computational time needed to solve complex process models, namely population balance models, which, depending on the source terms, can be very computationally intensive. Population balance models are widely used to describe the time evolution and distributions of many particulate processes, so their efficient and quick simulation would be very beneficial. The first method utilizes MATLAB's Parallel Computing Toolbox (PCT) and the second makes use of another toolbox, Jacket, to speed up computations on the CPU and GPU, respectively. Results indicate a significant reduction in computational time for the same accuracy using multicore CPUs. Many-core platforms such as GPUs are also promising for reducing the computational time of larger problems despite the limitations of lower clock speeds and device memory. This lends credence to the use of high-fidelity models (in place of reduced-order models) for control and optimization of particulate processes.
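
The same parallelization pattern can be sketched in Python (an analogue only, since the study itself uses MATLAB's PCT and the Jacket GPU toolbox): the O(N^2) aggregation source terms make each population balance solve expensive, so independent parameter cases are distributed across CPU cores. All model settings below are illustrative assumptions.

import numpy as np
from multiprocessing import Pool

N_BINS, N_STEPS, DT = 60, 400, 1e-3

def solve_pbm(beta0):
    """Euler integration of a discretized aggregation (Smoluchowski) PBE."""
    n = np.zeros(N_BINS)
    n[0] = 1.0                             # monodisperse initial condition
    for _ in range(N_STEPS):
        birth = np.zeros(N_BINS)
        for k in range(1, N_BINS):         # the expensive O(N^2) source terms
            birth[k] = 0.5 * beta0 * np.sum(n[:k] * n[k-1::-1])
        death = beta0 * n * n.sum()
        n = n + DT * (birth - death)
    return n

if __name__ == "__main__":
    betas = np.linspace(0.5, 5.0, 16)      # independent parameter cases
    with Pool() as pool:                   # farms cases out to CPU cores
        results = pool.map(solve_pbm, betas)
    sizes = np.arange(1, N_BINS + 1)
    print("final mean sizes:",
          [round(float(sizes @ n / n.sum()), 2) for n in results])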


Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. F173-F183 ◽  
Author(s):  
Maokun Li ◽  
Aria Abubakar ◽  
Jianguo Liu ◽  
Guangdong Pan ◽  
Tarek M. Habashy

We developed a compressed implicit Jacobian scheme for the regularized Gauss-Newton inversion algorithm for reconstructing 3D conductivity distributions from electromagnetic data. In this algorithm, the Jacobian matrix, whose storage usually requires a large amount of memory, is decomposed in terms of electric fields excited by sources located and oriented identically to the physical sources and receivers. As a result, the memory usage for the Jacobian matrix reduces from O(NF NS NR NP) to O[NF (NS + NR) NP], where NF is the number of frequencies, NS is the number of sources, NR is the number of receivers, and NP is the number of conductivity cells to be inverted. When solving the Gauss-Newton linear system of equations using iterative solvers, the multiplication of the Jacobian matrix with a vector is converted to matrix-vector operations between the matrices of the electric fields and the vector. To mitigate the additional computational overhead of this scheme, these fields are further compressed using the adaptive cross approximation (ACA) method. The compressed implicit Jacobian scheme provides a good balance between memory usage and computational time and renders the Gauss-Newton algorithm more efficient. We demonstrate the benefits of this scheme using numerical examples, including both synthetic and field data, for both crosswell and controlled-source electromagnetic (CSEM) applications.
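
A minimal numpy sketch of the implicit-Jacobian idea under simplifying assumptions (one frequency, real-valued toy fields): when each Jacobian entry factors into a source-side field times a receiver-side field, J@v and J.T@u can be computed from the two field matrices directly, storing O[(NS + NR) NP] numbers instead of O(NS NR NP). A truncated SVD stands in for the ACA compression, and the source fields are built low-rank to mimic the smoothness of real fields.

import numpy as np

rng = np.random.default_rng(7)
NS, NR, NP = 30, 40, 500                  # sources, receivers, model cells
Es = rng.normal(size=(NS, 5)) @ rng.normal(size=(5, NP))  # smooth-ish source fields
Er = rng.normal(size=(NR, NP))            # receiver-side fields (toy values)

def J_matvec(v):
    """J @ v without ever assembling J; result has NS*NR entries."""
    return ((Es * v) @ Er.T).ravel()      # M[s,r] = sum_p Es[s,p] Er[r,p] v[p]

def J_rmatvec(u):
    """Adjoint product J.T @ u from the same field matrices."""
    U = u.reshape(NS, NR)
    return np.sum(Es * (U @ Er), axis=0)

# Sanity check against an explicitly assembled Jacobian.
J = (Es[:, None, :] * Er[None, :, :]).reshape(NS * NR, NP)
v = rng.normal(size=NP)
u = rng.normal(size=NS * NR)
print(np.allclose(J @ v, J_matvec(v)), np.allclose(J.T @ u, J_rmatvec(u)))

# Compress a field matrix (truncated SVD here; ACA in the paper) to cut
# the cost of repeated matvecs inside an iterative Gauss-Newton solve.
def truncate(A, tol=1e-8):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k], Vt[:k]       # A ~= left @ right

left, right = truncate(Es)
print("compressed rank of Es:", right.shape[0], "of", min(NS, NP))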


Geophysics ◽  
2005 ◽  
Vol 70 (2) ◽  
pp. G33-G41 ◽  
Author(s):  
L. B. Pedersen ◽  
M. Engels

Recent developments in the speed and quality of data acquisition with the radiomagnetotelluric (RMT) method, whereby large amounts of broadband RMT data can be collected along profiles, have prompted us to develop a strategy for routine inverse modeling using 2D models. We build a rather complicated numerical model, containing both 2D and 3D elements, believed to be representative of shallow conductors in crystalline basement overlain by a thin sedimentary cover. We then invert the corresponding synthetic data on selected profiles using both traditional MT approaches and the proposed approach, which is based on the determinant of the MT impedance tensor. We compare the estimated resistivity models with the true models along the selected profiles and find that the traditional approaches often lead to strongly biased models and poor data fit; with the determinant, much of the bias is removed and the data fit is improved. The determinant of the impedance tensor is independent of the chosen strike direction, and once the a priori model is set, the best-fitting model is found to be practically independent of the starting model used. We conclude that the determinant of the impedance tensor is a useful tool for routine inverse modeling.
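
To illustrate the key property this approach relies on, the sketch below computes the determinant apparent resistivity and phase from a toy impedance tensor and verifies that they are unchanged when the horizontal measuring axes are rotated. The numerical values are illustrative assumptions, not from the paper's model.

import numpy as np

MU0 = 4e-7 * np.pi                         # vacuum permeability (SI)
omega = 2 * np.pi * 10.0                   # angular frequency at 10 Hz

# A toy complex impedance tensor (values are illustrative).
Z = np.array([[0.002 + 0.001j,  0.030 + 0.025j],
              [-0.028 - 0.022j, -0.001 - 0.002j]])

def det_response(Z):
    zdet = np.sqrt(np.linalg.det(Z))       # determinant impedance
    rho = np.abs(zdet)**2 / (omega * MU0)  # determinant apparent resistivity
    phase = np.degrees(np.angle(zdet))
    return rho, phase

# Rotating the horizontal axes leaves the determinant response unchanged,
# since det(R Z R.T) = det(Z) for any rotation R.
theta = np.deg2rad(37.0)
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
print(det_response(Z))
print(det_response(R @ Z @ R.T))           # identical rho and phase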

