Optical Aberrations Correction in Postprocessing Using Imaging Simulation

2021 ◽  
Vol 40 (5) ◽  
pp. 1-15
Author(s):  
Shiqi Chen ◽  
Huajun Feng ◽  
Dexin Pan ◽  
Zhihai Xu ◽  
Qi Li ◽  
...  

As mobile photography continues to grow in popularity, considerable effort is being invested in reconstructing degraded images. Because the spatial variation of optical aberrations cannot be avoided during lens design, recent commercial cameras have shifted some of these correction tasks from optical design to postprocessing systems. However, without engaging with the optical parameters, these systems achieve only limited aberration correction. In this work, we propose a practical method for recovering the degradation caused by optical aberrations. Specifically, we establish an imaging simulation system based on our proposed optical point spread function (PSF) model; given the optical parameters of a camera, it generates the imaging results of that specific device. To perform the restoration, we train a spatially adaptive network on synthetic data pairs generated by the imaging simulation system, eliminating the overhead of capturing training data through extensive shooting and registration. Moreover, we comprehensively evaluate the proposed method in simulation and experiment, using a customized digital single-lens-reflex (DSLR) camera lens and a HUAWEI HONOR 20, respectively. The experiments demonstrate that our solution successfully removes spatially variant blur and color dispersion. Compared with state-of-the-art deblurring methods, the proposed approach achieves better results with lower computational overhead. Moreover, the reconstruction does not introduce artificial texture and transfers readily to current commercial cameras. Project Page: https://github.com/TanGeeGo/ImagingSimulation
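For intuition, here is a minimal sketch (not the authors' code) of the degradation step such an imaging simulation performs: the image is divided into overlapping patches, each convolved with its local PSF, and the results are blended, since aberration-induced blur varies across the field of view. The PSF grid `psfs` is a placeholder for whatever the optical model computes from the lens prescription.

```python
# Illustrative sketch: degrade a sharp image with a spatially varying PSF by
# patch-wise convolution. psfs[i][j] is one (k, k) kernel per patch position,
# e.g. produced by ray tracing the lens prescription (placeholder here).
import numpy as np
from scipy.signal import fftconvolve

def simulate_aberrations(img, psfs, patch):
    """img: (H, W) float image; psfs: 2-D list of (k, k) kernels; patch: patch size."""
    h, w = img.shape
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    win = np.outer(np.hanning(patch), np.hanning(patch))  # smooth blending window
    for i, y in enumerate(range(0, h - patch + 1, patch // 2)):   # half-patch overlap
        for j, x in enumerate(range(0, w - patch + 1, patch // 2)):
            k = psfs[min(i, len(psfs) - 1)][min(j, len(psfs[0]) - 1)]
            blurred = fftconvolve(img[y:y + patch, x:x + patch], k, mode="same")
            out[y:y + patch, x:x + patch] += blurred * win
            weight[y:y + patch, x:x + patch] += win
    # borders with zero accumulated weight are left dark in this minimal sketch
    return out / np.maximum(weight, 1e-8)
```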

Water ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 107
Author(s):  
Elahe Jamalinia ◽  
Faraz S. Tehrani ◽  
Susan C. Steele-Dunne ◽  
Philip J. Vardon

Climatic conditions and vegetation cover influence the water flux in a dike, and potentially its stability. A comprehensive numerical simulation is computationally too expensive for near real-time analysis of a dike network. This study therefore investigates a random forest (RF) regressor as a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009–2019) are used to create a synthetic data set comprising features that can be observed from a dike surface, with the calculated factor of safety (FoS) as the target variable. The data before 2018 are split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS on the test set (before 2018). However, the trained model performs worse on the evaluation set (after 2018) when further surface cracking occurs. This proof of concept shows that a data-driven surrogate can determine dike stability for conditions similar to the training data, and could be used to identify vulnerable locations in a dike network for further examination.
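A minimal sketch of such a surrogate, assuming hypothetical feature names and a placeholder CSV of the daily simulation inputs and outputs; the real study uses surface-observable features from the coupled simulation with the FoS as the target:

```python
# Data-driven surrogate sketch: RF regressor mapping daily surface-observable
# features to the simulated factor of safety (FoS). File name, feature names,
# and hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("dike_simulation_daily.csv", parse_dates=["date"])  # placeholder file
features = ["rainfall", "evapotranspiration", "surface_moisture", "leaf_area_index"]

train = df[df["date"] < "2018-01-01"]        # build/train period, as in the study
evaluate = df[df["date"] >= "2018-01-01"]    # held-out evaluation period

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(train[features], train["FoS"])
print("R^2 on post-2018 evaluation set:", rf.score(evaluate[features], evaluate["FoS"]))
```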


2021 ◽  
Vol 4 ◽  
Author(s):  
Michael Platzer ◽  
Thomas Reutterer

AI-based data synthesis has seen rapid progress over the last several years and is increasingly recognized for its promise to enable privacy-respecting, high-fidelity data sharing. This is reflected by the growing availability of both commercial and open-source software solutions for synthesizing private data. However, despite these recent advances, adequately evaluating the quality of generated synthetic datasets remains an open challenge. We aim to close this gap and introduce a novel holdout-based empirical assessment framework for quantifying both the fidelity and the privacy risk of synthetic data solutions for mixed-type tabular data. Fidelity is measured via statistical distances of lower-dimensional marginal distributions, which provide a model-free and easy-to-communicate empirical metric for the representativeness of a synthetic dataset. Privacy risk is assessed by calculating the individual-level distances to the closest record with respect to the training data. By showing that synthetic samples are just as close to the training data as to the holdout data, we obtain strong evidence that the synthesizer indeed learned to generalize patterns and is independent of individual training records. We empirically demonstrate the presented framework for seven distinct synthetic data solutions across four mixed-type datasets and then compare these to traditional data perturbation techniques. Both a Python-based implementation of the proposed metrics and the demonstration study setup are made available open source. The results highlight the need to systematically assess the fidelity just as much as the privacy of this emerging class of synthetic data generators.
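The distance-to-closest-record check can be illustrated with a short sketch; the random arrays here stand in for numerically encoded training, holdout, and synthetic records:

```python
# Holdout-based privacy check sketch: for each synthetic record, compare its
# distance to the closest training record against its distance to the closest
# holdout record. Dummy data replaces real encoded tables.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
training, holdout, synthetic = (rng.normal(size=(1000, 8)) for _ in range(3))

def dcr(queries, reference):
    """Distance to the closest record in `reference` for each row of `queries`."""
    nn = NearestNeighbors(n_neighbors=1).fit(reference)
    dist, _ = nn.kneighbors(queries)
    return dist.ravel()

d_train = dcr(synthetic, training)
d_holdout = dcr(synthetic, holdout)
# If the synthesizer generalizes rather than memorizes, synthetic records should
# be no closer to the training data than to the holdout data (share around 0.5).
share = np.mean(d_train < d_holdout)
print(f"Share of synthetic records closer to training than to holdout: {share:.2f}")
```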


2013 ◽  
Vol 28 (5) ◽  
pp. 788-792
Author(s):  
程瑶 CHENG Yao ◽  
鲁进 LU Jin ◽  
孟丽娅 MENG Li-ya

Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. S411-S423
Author(s):  
Peng Yong ◽  
Jianping Huang ◽  
Zhenchun Li ◽  
Wenyuan Liao ◽  
Luping Qu

Least-squares reverse time migration (LSRTM), an effective tool for imaging the structures of the earth from seismograms, can be characterized as a linearized waveform inversion problem. We have investigated the performance of three minimization functionals for LSRTM: the L2 norm, the hybrid L1/L2 norm, and the Wasserstein metric (W1 metric). The W1 metric used in this study is based on the dynamic formulation of transport problems, and a primal-dual hybrid gradient algorithm is introduced to efficiently compute the W1 metric between two seismograms. One-dimensional signal analysis has demonstrated that the W1 metric behaves like the L1 norm for two amplitude-varied signals. Unlike the L1 norm, the W1 metric does not suffer from the differentiability issue for null residuals. Numerical examples applying the three misfit functions to LSRTM on synthetic data demonstrate that, compared to the L2 norm, the hybrid L1/L2 norm and the W1 metric can accelerate LSRTM and are less sensitive to non-Gaussian noise. For the field data application, the W1 metric produces the most reliable imaging results. The hybrid L1/L2 norm requires tedious trial-and-error tests for judicious threshold parameter selection. Hence, the more automatic W1 metric is recommended as a robust alternative to the customary L2 norm for time-domain LSRTM.
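For intuition only (the paper computes the metric with a primal-dual hybrid gradient solver on the dynamic transport formulation), the 1-D W1 distance between two nonnegative, unit-mass signals reduces to the L1 distance between their cumulative distributions, which makes its behavior under amplitude changes and time shifts easy to check:

```python
# Intuition-only sketch, not the paper's solver: in 1-D, W1 between two
# nonnegative, unit-mass signals equals the L1 distance between their CDFs.
import numpy as np

def w1_1d(f, g, dt=1.0):
    f = np.abs(f); g = np.abs(g)            # common positivity trick for seismograms
    f, g = f / f.sum(), g / g.sum()         # normalize to unit mass
    return np.sum(np.abs(np.cumsum(f) - np.cumsum(g))) * dt

t = np.linspace(0.0, 1.0, 500)
ricker = lambda t0: (1 - 2 * (np.pi * 20 * (t - t0))**2) * np.exp(-(np.pi * 20 * (t - t0))**2)

# Unlike the L2 misfit, W1 grows roughly linearly with a pure time shift,
# which is what makes transport metrics attractive for waveform inversion.
for shift in (0.0, 0.05, 0.1, 0.2):
    print(shift, w1_1d(ricker(0.4), ricker(0.4 + shift), dt=t[1] - t[0]))
```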


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 451 ◽  
Author(s):  
Yun-Chieh Fan ◽  
Chih-Yu Wen

Soldier-based simulators have been attracting increased attention recently, with the aim of making complex military tactics more effective, so that soldiers can respond rapidly and logically to battlespace situations and the commander's decisions on the battlefield. Moreover, body area networks (BANs) can be applied to collect training data in order to provide greater access to soldiers' physical actions or postures as they occur during real routine training. Given the limited physical space of training facilities, an efficient soldier-based training strategy is therefore proposed that integrates a virtual reality (VR) simulation system with a BAN, capturing body movements such as walking, running, shooting, and crouching in a virtual environment. The performance evaluation shows that the proposed VR simulation system is able to provide complete and substantial information throughout the training process, including detection, estimation, and monitoring capabilities.


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3382 ◽  
Author(s):  
Hai Chien Pham ◽  
Quoc-Bao Ta ◽  
Jeong-Tae Kim ◽  
Duc-Duy Ho ◽  
Xuan-Linh Tran ◽  
...  

In this study, we investigate the novel idea of using synthetic images of bolts, generated from a computer-graphics model, to train a deep learning model for loosened-bolt detection. First, a framework for bolt-loosening detection using image-based deep learning and computer graphics is proposed. Next, the feasibility of the proposed framework is demonstrated through bolt-loosening monitoring of a lab-scale bolted joint model. For practicality, the proposed idea is evaluated on the real-scale bolted connections of a historical truss bridge in Danang, Vietnam. The results show that the deep learning model trained on the synthesized images achieves accurate bolt recognition and looseness detection. The proposed methodology could reduce the time and cost of collecting high-quality training data and further accelerate the practical applicability of vision-based deep learning models trained on synthetic data.
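A hedged sketch of the training step, assuming a folder of rendered bolt crops with a tight/loose labeling; the paper's actual architecture and data layout may differ:

```python
# Illustrative sketch: fine-tune a pretrained CNN on synthetic bolt crops
# rendered from a graphics model. Directory layout and the two-class
# tight/loose labeling are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("synthetic_bolts/", transform=tf)  # placeholder path
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)   # tight vs. loose
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                          # short fine-tuning run
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```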


Geophysics ◽  
2017 ◽  
Vol 82 (4) ◽  
pp. V257-V274
Author(s):  
Necati Gülünay

The diminishing residual matrices (DRM) method can be used to surface-consistently decompose individual trace statics into source and receiver components. The statics to be decomposed may either be first-arrival times after the application of linear moveout associated with a consistent refractor, as used in refraction statics, or residual statics obtained by crosscorrelating individual traces with corresponding model traces (known as pilot traces) at the same common-midpoint (CMP) location. The DRM method is an iterative process like the well-known Gauss-Seidel (GS) method, but it uses only source and receiver terms. It differs from the GS method in that half of the average common-shot and common-receiver terms are subtracted simultaneously from the observations at each iteration. DRM makes the underconstrained statics problem a constrained one by implicitly adding a new constraint: the equality of the contributions of shots and receivers to the solution. The average of the shot statics and the average of the receiver statics are equal in the DRM solution. The solution has the smallest difference between shot and receiver statics profiles when the number of shots and the number of receivers in the data are equal; in this case, it is also the smallest-norm solution. The DRM method can be derived from the well-known simultaneous iterative reconstruction technique. Simple numerical tests, as well as results obtained with a synthetic data set containing only the field statics, verify that the DRM solution is the same as the linear inverse theory solution. Both algorithms can solve for the long-wavelength component of the statics if the individual picks contain it, yet the DRM method is much faster. Application of the method to normal-moveout-corrected CMP gathers from a 3D land survey for residual statics calculation found that the pick-decompose-apply-stack stages of the DRM method need to be iterated. These iterations are needed because of time and waveform distortions of the pilot traces caused by the individual trace statics; the distortions lessen with every external DRM iteration.
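A minimal numpy sketch of the DRM iteration as described above: at each pass, half of the average residual over each common-shot gather and half of the average over each common-receiver gather are subtracted simultaneously and accumulated into the two statics terms (the indexing conventions here are assumptions):

```python
# DRM decomposition sketch: iteratively diminish the residual matrix by
# subtracting, at each pass, half of its common-shot and common-receiver
# averages simultaneously. Assumes every shot/receiver index 0..max appears.
import numpy as np

def drm(t_obs, shots, recs, n_iter=50):
    """t_obs[k]: picked static of trace k; shots[k]/recs[k]: its shot/receiver index."""
    s = np.zeros(shots.max() + 1)   # accumulated shot statics
    r = np.zeros(recs.max() + 1)    # accumulated receiver statics
    res = t_obs.astype(float).copy()
    for _ in range(n_iter):
        ds = 0.5 * np.bincount(shots, res) / np.bincount(shots)  # half shot averages
        dr = 0.5 * np.bincount(recs, res) / np.bincount(recs)    # half receiver averages
        res -= ds[shots] + dr[recs]  # subtract both simultaneously
        s += ds
        r += dr
    return s, r
```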


Geophysics ◽  
1997 ◽  
Vol 62 (6) ◽  
pp. 1804-1811 ◽  
Author(s):  
Qingbo Liao ◽  
George A. McMechan

The centroid frequency shift method is implemented, tested with synthetic data, and applied to field data from three contiguous crosswell seismic experiments at the Gypsy Pilot in northern Oklahoma. The simultaneous iterative reconstruction technique is used for tomographic estimation of both P-wave velocity and Q. No amplitude or spreading-loss corrections are needed for the Q estimation. The estimated in-situ velocity and Q distributions correlate well with log data and local lithology. The Q/velocity ratio appears to correlate with the sand/shale ratio (ranging from an average of ∼15 s/km for the sand-dominated lithologies to an average of ∼8.5 s/km for the shale-dominated ones), with the result that new information is provided on interwell connectivity.
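As a hedged illustration of the single-ray estimate that such tomography inverts along many rays (this is the standard Gaussian-spectrum centroid-shift result, not reproduced from the paper): for a Gaussian source amplitude spectrum with variance σ_s², Q ≈ π·t·σ_s² / (f_s − f_r), where t is the traveltime and f_s, f_r are the source and received centroid frequencies.

```python
# Sketch of a single-ray Q estimate from the centroid frequency downshift,
# assuming a Gaussian source amplitude spectrum (standard textbook result,
# not code from the paper).
import numpy as np

def centroid(freqs, amp_spec):
    """Centroid frequency and spectral variance of an amplitude spectrum."""
    w = amp_spec / amp_spec.sum()
    fc = np.sum(freqs * w)
    var = np.sum((freqs - fc) ** 2 * w)
    return fc, var

def q_estimate(freqs, src_spec, rec_spec, traveltime):
    f_s, var_s = centroid(freqs, src_spec)   # source centroid and variance
    f_r, _ = centroid(freqs, rec_spec)       # received centroid (downshifted)
    return np.pi * traveltime * var_s / (f_s - f_r)
```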


Geophysics ◽  
1981 ◽  
Vol 46 (5) ◽  
pp. 751-767 ◽  
Author(s):  
Les Hatton ◽  
Ken Larner ◽  
Bruce S. Gibson

Because conventional time‐migration algorithms are founded on the implicit assumption of locally lateral homogeneity, they leave events mispositioned when overburden velocity varies laterally. The ray‐theoretical depth migration procedure of Hubral often can provide adequate first‐order corrections for such position errors. Complex geologic structure, however, can so severely distort wavefronts that resulting time‐migrated sections may be barely interpretable and thus not readily correctable. A more accurate, wave‐theoretical approach to depth migration then becomes essential to image the subsurface properly. This approach, which transforms an unmigrated time section directly into migrated depth, more completely honors the wave equation for a medium in which variations in interval velocity and details of structural shape govern wave propagation. Where geologic structure is complicated, however, we usually lack an accurate velocity model. It is important, therefore, to understand the sensitivity of depth migration to velocity errors and, in particular, to assess whether it is justified to go to the added effort of doing depth migration. We show a synthetic data example in which the wave‐theoretical approach to depth migration properly images deep reflections that are poorly resolved and left distorted by either time migration or ray‐theoretical depth migration. These imaging results are, moreover, surprisingly insensitive to errors introduced into the velocity model. Application to one field data example demonstrates the superior treatment of amplitude and waveform by wave‐theoretical depth migration. In a second data example, deep reflections are so influenced by anomalous overburden structure that the only valid alternative to performing wave‐theoretical depth migration is simply to convert the unmigrated data to depth. When the overburden is laterally variable, conventional time migration of unstacked data can be as destructive to steeply dipping reflections as is CDP stacking prior to migration. A schematic example illustrates that when migration of unstacked data is judged necessary, it should normally be performed as a depth migration.


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5876
Author(s):  
Mohsen Sharifi Renani ◽  
Abigail M. Eustace ◽  
Casey A. Myers ◽  
Chadd W. Clary

Gait analysis based on inertial sensors has become an effective method of quantifying movement mechanics, such as joint kinematics and kinetics. Machine learning techniques are used to reliably predict joint mechanics directly from streams of IMU signals for various activities. These data-driven models require comprehensive and representative training datasets to generalize across the movement variability seen in the population at large. Bottlenecks in model development frequently occur due to the lack of sufficient training data and the significant time and resources needed to acquire such datasets. Reliable methods to generate synthetic biomechanical training data could streamline model development and potentially improve model performance. In this study, we developed a methodology to generate synthetic kinematics and the associated predicted IMU signals using open-source musculoskeletal modeling software. These synthetic data were used to train neural networks to predict three-degree-of-freedom joint rotations at the hip and knee during gait, either in lieu of or alongside previously measured experimental gait data. The accuracy of the models' kinematic predictions was assessed using experimentally measured IMU signals and gait kinematics. Models trained on the synthetic data outperformed models trained only on experimental data in five of the six rotational degrees of freedom at the hip and knee. On average, root mean square errors in joint angle predictions improved by 38% at the hip (synthetic data RMSE: 2.3°, measured data RMSE: 4.5°) and 11% at the knee (synthetic data RMSE: 2.9°, measured data RMSE: 3.3°) when models trained solely on synthetic data were compared to models trained on measured data. When models were trained on both measured and synthetic data, root mean square errors were reduced by 54% at the hip (measured + synthetic RMSE: 1.9°) and 45% at the knee (measured + synthetic RMSE: 1.7°) compared to measured data alone. These findings enable future model development for different activities of clinical significance without the burden of generating large quantities of gait lab data, streamlining model development and ultimately improving model performance.
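A minimal sketch of such a network, with the architecture and channel counts as illustrative assumptions rather than the paper's configuration:

```python
# Illustrative sketch: a small LSTM mapping windows of IMU channels to
# per-time-step hip/knee joint angles, trainable on synthetic, measured,
# or mixed data as described above. All sizes are assumptions.
import torch
import torch.nn as nn

class ImuToKinematics(nn.Module):
    def __init__(self, n_channels=12, n_angles=6, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_angles)  # 3 DoF at hip + 3 DoF at knee

    def forward(self, x):          # x: (batch, time, n_channels)
        out, _ = self.lstm(x)
        return self.head(out)      # per-time-step joint angles

model = ImuToKinematics()
x = torch.randn(8, 200, 12)       # e.g. 2 IMUs x 6 channels, 200 samples
print(model(x).shape)             # -> torch.Size([8, 200, 6])
```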

