model constraint
Recently Published Documents

TOTAL DOCUMENTS: 46 (five years: 9)
H-INDEX: 6 (five years: 1)

Author(s): Jian Zhang, Jingye Li, Xiaohong Chen, Yuanqiang Li, Guangtan Huang, ...

Summary. Seismic inversion is one of the most commonly used methods in the oil and gas industry for characterizing reservoirs from observed seismic data. Deep learning (DL) is emerging as a data-driven approach that can effectively solve the inverse problem. However, existing DL-based methods for seismic inversion use only seismic data as input, which often leads to poor stability of the inversion results. Moreover, training a robust network is challenging because a real survey provides only limited labeled data pairs. To partially overcome these issues, we develop a neural network framework with an a priori initial model constraint to perform seismic inversion. Our network takes a two-part input for training: the seismic data and the subsurface background model. The label for each input pair is the true model. The proposed method follows a log-to-log strategy. The training dataset is first generated by forward modeling. The network is then pre-trained on this synthetic training dataset and validated on synthetic data not used during training. After obtaining the pre-trained network, we apply a transfer learning strategy, fine-tuning the pre-trained network with labeled data pairs from a real survey to obtain better inversion results in that survey. The validity of the proposed framework is demonstrated on synthetic 2D data, including both post-stack and pre-stack examples, as well as a real 3D post-stack seismic dataset from the western Canadian sedimentary basin.
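The pre-train-then-fine-tune workflow described above can be illustrated with a hedged toy sketch: a linear "model" with the abstract's two inputs (seismic data and background model), pre-trained on synthetic pairs and then fine-tuned on a few "real-survey" pairs. This is not the authors' network; all data, coefficients, and learning rates are invented for illustration.

```python
# Hedged toy sketch, NOT the authors' network: a linear "model"
# y = a*seismic + b*background trained by batch gradient descent,
# first on synthetic pairs (pre-training), then fine-tuned on a
# few "real-survey" pairs (transfer learning). Numbers are invented.

def train(pairs, a=0.0, b=0.0, lr=0.01, epochs=500):
    """Least-squares gradient descent on ((seismic, background), label) pairs."""
    n = len(pairs)
    for _ in range(epochs):
        ga = gb = 0.0
        for (seis, bg), label in pairs:
            err = a * seis + b * bg - label
            ga += err * seis
            gb += err * bg
        a -= lr * ga / n
        b -= lr * gb / n
    return a, b

# Synthetic pre-training set: labels follow 0.3*seismic + 0.7*background.
synthetic = [((s, g), 0.3 * s + 0.7 * g)
             for s, g in [(1.0, 2.0), (2.0, 1.0), (0.5, 3.0), (3.0, 0.5)]]
a, b = train(synthetic)

# Fine-tune on a small "real" set whose physics differ slightly
# (labels follow 0.35*seismic + 0.65*background).
real = [((1.0, 2.0), 1.65), ((2.0, 1.0), 1.35)]
a, b = train(real, a=a, b=b, lr=0.05, epochs=200)
```

The point of the sketch is the workflow, not the model: the pre-trained weights serve as the starting point for fine-tuning, so only a few labeled real-survey pairs are needed.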


Geophysics, 2021, pp. 1-61
Author(s): Xingye Liu, Xiaohong Chen, Min Bai, Yangkang Chen

Seismic image registration is crucial for the joint interpretation of multi-vintage seismic images in time-lapse reservoir monitoring. Time-shift analysis is a commonly used method to estimate the warping function by creating a time-shift map, where the energy of each time-shift point in the 3D map indicates the probability of a correct registration. We propose a new method to obtain a high-resolution time-shift analysis spectrum, which can aid both manual and automatic picking. The time-shift scan map is obtained by trying different local shifts and calculating the local similarity attributes between the shifted and reference images. We obtain a high-resolution time-shift scan map by applying a non-stationary model constraint when solving for the local similarity attributes. The non-stationary model constraint ensures that the time-shift scan map is smooth in all physical dimensions, e.g., time, local shift, and space. In addition, it permits variable smoothing strength across the whole volume, which enables the high resolution of the calculated time-shift scan map. We use an automatic picking algorithm to demonstrate the accuracy of the high-resolution time-shift scan map and its positive influence on time-lapse image registration. Both synthetic (2D) and real (3D) time-lapse seismic images are used to demonstrate the improved registration performance of the proposed method.
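The scan-and-pick idea can be shown in a minimal sketch: try a range of trial lags, score each with a normalized correlation (a plain stand-in for the paper's smoothed local similarity attribute), and pick the best-scoring lag. The traces and lag range below are invented.

```python
# Hedged sketch of a time-shift scan: normalized correlation here stands in
# for the paper's non-stationary, smoothed local similarity attribute.
import math

def similarity(x, y):
    """Normalized correlation between two equal-length sequences."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den if den else 0.0

def scan_best_shift(ref, mon, max_lag):
    """Scan trial lags; return the lag at which mon best matches ref."""
    best_lag, best_score = 0, -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = ref[:len(ref) - lag], mon[lag:]
        else:
            x, y = ref[-lag:], mon[:len(mon) + lag]
        score = similarity(x, y)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

ref = [0, 0, 1, 3, 1, 0, 0, 0]
mon = [0, 0, 0, 0, 1, 3, 1, 0]  # same wavelet, delayed by 2 samples
best = scan_best_shift(ref, mon, 3)
```

In the paper's 3D setting this scan is done per sample, producing the time-shift scan map whose smoothness the non-stationary model constraint controls; the sketch only picks a single global lag.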


2020, Vol 20 (22), pp. 13701-13719
Author(s): Michael Rolletter, Marion Blocquet, Martin Kaminski, Birger Bohn, Hans-Peter Dorn, ...

Abstract. The photooxidation of pinonaldehyde, one product of the α-pinene degradation, was investigated in the atmospheric simulation chamber SAPHIR under natural sunlight at low NO concentrations (<0.2 ppbv) with and without an added hydroxyl radical (OH) scavenger. With a scavenger, pinonaldehyde was exclusively removed by photolysis, whereas without a scavenger, the degradation was dominated by reaction with OH. In both cases, the observed rate of pinonaldehyde consumption was faster than predicted by an explicit chemical model, the Master Chemical Mechanism (MCM, version 3.3.1). In the case with an OH scavenger, the observed photolytic decay can be reproduced by the model if an experimentally determined photolysis frequency is used instead of the parameterization in the MCM. A good fit is obtained when the photolysis frequency is calculated from the measured solar actinic flux spectrum, absorption cross sections published by Hallquist et al. (1997), and an effective quantum yield of 0.9. The resulting photolysis frequency is 3.5 times faster than the parameterization in the MCM. When pinonaldehyde is mainly removed by reaction with OH, the observed OH and hydroperoxy radical (HO2) concentrations are underestimated in the model by a factor of 2. Using measured HO2 as a model constraint brings modeled and measured OH concentrations into agreement. This suggests that the chemical mechanism includes all relevant OH-producing reactions but is missing a source for HO2. The missing HO2 source strength of (0.8 to 1.5) ppbv h⁻¹ is similar to the rate of pinonaldehyde consumption of up to 2.5 ppbv h⁻¹. When the model is constrained by HO2 concentrations and the experimentally derived photolysis frequency, the pinonaldehyde decay is well represented. The photolysis of pinonaldehyde yields 0.18 ± 0.20 formaldehyde molecules at NO concentrations below 200 pptv, but no significant acetone formation is observed.
When pinonaldehyde is also oxidized by OH under low-NO conditions (maximum 80 pptv), the yields of acetone and formaldehyde increase over the course of the experiment from 0.2 to 0.3 and from 0.15 to 0.45, respectively. Fantechi et al. (2002) proposed a degradation mechanism based on quantum-chemical calculations, which is considerably more complex than the MCM scheme and contains additional reaction pathways and products. Implementing these modifications closes the model-measurement discrepancy for the products acetone and formaldehyde when pinonaldehyde is degraded only by photolysis. In contrast, when pinonaldehyde is mainly degraded by reaction with OH, the underprediction of the acetone and formaldehyde formed is worse than with the unmodified MCM. This shows that the current mechanisms lack acetone and formaldehyde sources under low-NO conditions like those in these experiments. Implementing the modifications suggested by Fantechi et al. (2002) does not improve the model-measurement agreement for OH and HO2.
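For orientation, a photolysis frequency of the kind derived in the abstract is assembled as a wavelength quadrature, j = Σ σ(λ) φ F(λ) Δλ, over the measured actinic flux spectrum. The sketch below shows that quadrature; only the effective quantum yield of 0.9 comes from the abstract, while the bin widths, cross sections, and fluxes are invented placeholders, not SAPHIR measurements.

```python
# Hedged numeric sketch of a photolysis-frequency quadrature:
#   j = sum over wavelength bins of sigma(lambda) * phi * F(lambda) * dlambda
# sigma: absorption cross section (cm^2), phi: effective quantum yield
# (0.9, from the abstract), F: actinic flux (photons cm^-2 s^-1 nm^-1).
# All bin values below are invented placeholders.

PHI = 0.9  # effective quantum yield reported in the abstract

# (bin width nm, cross section cm^2, actinic flux photons cm^-2 s^-1 nm^-1)
bins = [
    (5.0, 4.0e-20, 1.0e14),
    (5.0, 3.0e-20, 2.0e14),
    (5.0, 1.5e-20, 3.0e14),
]

j = sum(dl * sigma * PHI * flux for dl, sigma, flux in bins)  # units: s^-1
```

This is the calculation the abstract contrasts with the MCM parameterization: with measured spectra and the Hallquist et al. (1997) cross sections, the quadrature comes out 3.5 times faster than the MCM value.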


Author(s): Tianyu Ren, Xiaohu Wang, Qun Li, Chao Wang, Jiahan Dong, ...

2020, Vol 2020, pp. 1-12
Author(s): Qiangqiang Xu, Jianquan Ge, Tao Yang

Research on cooperative combat currently has two shortcomings. One is that the radar detection threat, which cannot be ignored, is rarely considered. The other is that little effort has been devoted to cooperative penetration trajectories under conditions of long distance, vast airspace, and a wide speed range. To address these shortcomings, this study proposes a penetration trajectory optimization method that considers the influence of the aircraft radar cross-section (RCS), together with a cooperative penetration strategy. First, the RCS data are calculated by the physical optics (PO) method, and the radar detection threat model is established considering the influence of the aircraft RCS. Then, a trajectory optimization framework with the dynamic model, constraint conditions, and optimization objectives is formed. Using the hp-adaptive Radau pseudospectral method, the optimal control problem for a single aircraft flight is solved. Finally, a cooperative penetration strategy is proposed to solve the cooperative penetration problem for multiple aircraft: the impact time and angle constraints are given, and a virtual target point is introduced for terminal guidance. Two cases are simulated and verified. The simulation results demonstrate that the proposed method is effective: a single aircraft can penetrate effectively, and multiple aircraft can meet the requirements on cooperative impact time and angle while minimizing the threat of radar detection.
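As background on the Radau pseudospectral method named above: its collocation nodes are Legendre-Gauss-Radau (LGR) points, the roots of P_N(x) + P_{N-1}(x) on [-1, 1), where P_k is the Legendre polynomial of degree k. A minimal sketch for N = 2, where the roots are available in closed form (production codes use general polynomial root finders):

```python
# Hedged sketch: Legendre-Gauss-Radau collocation points for N = 2.
# P_2(x) + P_1(x) = (3x^2 - 1)/2 + x = 0  <=>  3x^2 + 2x - 1 = 0
import math

a, b, c = 3.0, 2.0, -1.0
disc = math.sqrt(b * b - 4 * a * c)
lgr_points = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
# The endpoint x = -1 is always an LGR node, since
# P_N(-1) + P_{N-1}(-1) = (-1)^N + (-1)^(N-1) = 0; the interior node is 1/3.
```

In an hp-adaptive scheme the trajectory is split into segments ("h") and the polynomial degree N per segment ("p") is adapted; each segment collocates the dynamics at its own LGR nodes.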


2019, Vol 92 (5), pp. 627-634
Author(s): I Dutcă, R E McRoberts, E Næsset, V N B Blujdea

Abstract. Tree diameter at breast height (D) and tree height (H) are often used as predictors of individual tree biomass. Because D and H are correlated, the combined variable D2H is frequently used in regression models instead of two separate independent variables, to avoid collinearity-related issues. The justification for D2H is that aboveground biomass is proportional to the volume of a cylinder of diameter D and height H. However, the D2H predictor constrains the model to produce parameter estimates for D and H that have a fixed ratio, in this case 2.0. In this paper we investigate the degree to which the D2H predictor reduces prediction accuracy relative to using D and H separately, and we propose a practical measure, the Q-ratio, to guide the decision as to whether D and H should be combined into D2H. Using five training biomass datasets and two fitting approaches, weighted nonlinear regression and linear regression following logarithmic transformation, we show that the D2H predictor becomes less efficient at predicting aboveground biomass as the Q-ratio deviates from 2.0. Because of the model constraint, the D2H-based model performed worse than the separate-variable model by as much as 12 per cent in mean absolute percentage residual and as much as 18 per cent in the sum of squares of log accuracy ratios. For the analysed datasets, we observed a wide variation in Q-ratios, ranging from 2.5 to 5.1, and a large decrease in efficiency for the combined-variable model. Therefore, we recommend using the Q-ratio as a measure to guide the decision as to whether D and H may be combined into D2H without an adverse loss of biomass prediction accuracy.
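A minimal sketch of the Q-ratio idea: fit the log-linear model ln B = a + b_D ln D + b_H ln H and take Q = b_D / b_H; the D2H predictor is equivalent to forcing Q = 2.0. The biomass data below are invented and noise-free, generated with exponents 2.5 and 1.0 (so the recovered Q-ratio is 2.5, at the low end of the range the abstract reports); this is not the paper's fitting procedure, only an illustration of the measure.

```python
# Hedged sketch: recover exponents b_D, b_H from ln B = a + b_D ln D + b_H ln H
# by solving the 3x3 linear system through three noise-free points, then
# form the Q-ratio. Data are invented; real fits use many noisy observations.
import math

def solve3(A, y):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Invented noise-free biomass: B = 0.05 * D**2.5 * H**1.0, so true Q = 2.5.
data = [(10.0, 8.0), (20.0, 12.0), (30.0, 15.0)]  # (D, H) pairs
A = [[1.0, math.log(D), math.log(H)] for D, H in data]
y = [math.log(0.05 * D**2.5 * H) for D, H in data]
a, bD, bH = solve3(A, y)
q_ratio = bD / bH
```

With Q = 2.5 rather than 2.0, the abstract's finding is that collapsing the two predictors into D2H would cost measurable prediction accuracy.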


2019
Author(s): P. Trinh, R. Brossier, L. Lemaistre, L. Métivier, J. Virieux

Geophysics, 2018, Vol 83 (5), pp. G107-G118
Author(s): Xuliang Feng, Wanyin Wang, Bingqiang Yuan

The basement of a rift sedimentary basin, which often has both smooth and nonsmooth shapes, is not easily recovered from gravity data by current inversion methods. We have developed a new 3D gravity inversion method to estimate the basement relief of a rift basin. In the inversion process, we establish the objective function by combining the gravity data misfit function, the known-depth constraint function, and a model constraint function composed of the [Formula: see text]-norm and [Formula: see text]-norm, respectively. An edge recognition technique based on the normalized vertical derivative of the total horizontal derivative of the gravity data is adopted to recognize the discontinuous and continuous parts of the basin, and the two inputs are combined to form the final model constraint function. The inversion is conducted by minimizing the objective function with the nonlinear conjugate gradient algorithm. We present two applications using synthetic gravity anomalies produced from two synthetic rift basins, one with a single graben and one with six differently sized grabens. The test results indicate that the inversion method is a feasible technique for delineating the basement relief of a rift basin. The inversion method is also tested on field data from the Xi’an depression in the middle of the Weihe Basin, Shaanxi Province, China, and the result illustrates its effectiveness.
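The structure of such a composite objective can be sketched as data misfit plus a known-depth term plus a model-regularization term. The norms in the abstract are not specified here, so plain squared-L2 terms are assumed for illustration; the forward operator, weights, and data below are all invented, not the paper's actual choices.

```python
# Hedged sketch of a composite gravity-inversion objective:
#   J(m) = ||G m - d||^2 + alpha * (known-depth misfit) + beta * ||m||^2
# Squared-L2 terms are an assumption; the paper combines two different
# norms selected per cell by edge recognition.

def objective(m, G, d, known, alpha, beta):
    """m: basement-depth model; G: forward operator rows; d: observed gravity;
    known: (cell index, well depth) pairs; alpha, beta: trade-off weights."""
    pred = [sum(gij * mj for gij, mj in zip(row, m)) for row in G]
    misfit = sum((p - di) ** 2 for p, di in zip(pred, d))
    depth = sum((m[i] - z) ** 2 for i, z in known)   # known-depth constraint
    model = sum(mi * mi for mi in m)                 # model constraint term
    return misfit + alpha * depth + beta * model

J = objective(m=[1.0, 2.0], G=[[1.0, 0.0], [0.0, 1.0]], d=[1.0, 1.0],
              known=[(0, 1.5)], alpha=2.0, beta=0.1)
```

A nonlinear conjugate gradient solver, as in the abstract, would minimize J(m) using its gradient with respect to m; the edge-recognition step would switch the model term's norm between the continuous and discontinuous parts of the basin.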

