Standardization in gravity reduction

Geophysics ◽  
1991 ◽  
Vol 56 (8) ◽  
pp. 1170-1178 ◽  
Author(s):  
T. R. LaFehr

Gravity reduction standards are needed to improve anomaly quality for interpretation and to facilitate the joining together of different data sets. To the extent possible, data reduction should be quantitative, objective, and comprehensive, leaving ambiguity only to the interpretation process that involves qualitative, subjective, and geological decisions. The term “Bouguer anomaly” describes a field intended to be free of all nongeologic effects—not modified by a partial geologic interpretation. Measured vertical gradients of gravity demonstrate considerable variation but do not suggest, as often reported, that the normal free‐air gradient is in error or needs to be locally adjusted. Such gradients are strongly influenced by terrain and, to a lesser extent, by the same geologic sources which produce Bouguer anomalies. A substantial body of existing literature facilitates the comprehensive treatment of terrain effects, which may be rigorously implemented with current computer technology. Although variations in topographic rock density are a major source of Bouguer anomalies, a constant density appropriate to the area under investigation is normally adopted as a data reduction standard, leaving a treatment of the density variations to the interpretation. A field example from British Columbia illustrates both the variations in vertical gravity gradients which can be encountered and the conclusion that the classical approach to data reduction is practically always suitable to account for the observed effects. Standard data reduction procedures do not (and should not) include reduction‐to‐datum. The interpreter must be aware, however, that otherwise “smooth” regional Bouguer anomalies caused by regional sources do contain high‐frequency components in areas of rugged topography.
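The classical reduction chain referred to above is conventionally summarized by the complete Bouguer anomaly formula. The sketch below uses the standard textbook constants (free-air gradient 0.3086 mGal/m, Bouguer slab term with density in g/cm³) and is illustrative rather than a prescription taken from the paper.

```latex
% Complete Bouguer anomaly (conventional form): observed gravity minus normal
% gravity, plus free-air correction, minus Bouguer slab, plus terrain correction.
% h in metres, \rho in g/cm^3, all corrections in mGal.
g_{B} \;=\; g_{\mathrm{obs}} \;-\; \gamma(\phi) \;+\; 0.3086\,h \;-\; 0.04193\,\rho\,h \;+\; \mathrm{TC}(\rho)
```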

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Yance Feng ◽  
Lei M. Li

Background: Normalization of RNA-seq data aims to identify biological expression differentiation between samples by removing the effects of unwanted confounding factors. Explicitly or implicitly, the justification of normalization requires a set of housekeeping genes. However, the existence of housekeeping genes common to a very large collection of samples, especially under a wide range of conditions, is questionable. Results: We propose to carry out pairwise normalization with respect to multiple references selected from representative samples. The pairwise intermediates are then integrated based on a linear model that adjusts for the reference effects. Motivated by the notion of housekeeping genes and their statistical counterparts, we adopt robust least trimmed squares regression for the pairwise normalization. The proposed method (MUREN) is compared with other existing tools on standard data sets. Our criterion for the goodness of normalization emphasizes preserving possible asymmetric differentiation, whose biological significance is exemplified by single-cell data of the cell cycle. MUREN is implemented as an R package; the code, under the GPL-3 license, is available on GitHub (github.com/hippo-yf/MUREN) and on conda (anaconda.org/hippo-yf/r-muren). Conclusions: MUREN performs RNA-seq normalization using a two-step statistical regression induced from a general principle. We propose that the densities of pairwise differentiations be used to evaluate the goodness of normalization. MUREN adjusts the mode of differentiation toward zero while preserving the skewness due to biological asymmetric differentiation. Moreover, by robustly integrating pre-normalized counts with respect to multiple references, MUREN is immune to individual outlier samples.
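As a rough illustration of the pairwise, trimming-based idea (this is not the MUREN package or its actual algorithm), the following numpy sketch scales one count vector against a reference while ignoring the most differentiated genes; the data and the trimming fraction are hypothetical.

```python
# Minimal sketch of robust pairwise normalization in the spirit of least trimmed
# squares: estimate a scale offset from "housekeeping-like" genes only.
import numpy as np

def pairwise_scale(sample, reference, trim=0.3):
    """Estimate a log2-scale offset between two count vectors, ignoring the
    most differentiated genes (the trimmed fraction)."""
    mask = (sample > 0) & (reference > 0)           # genes detected in both samples
    log_ratio = np.log2(sample[mask]) - np.log2(reference[mask])
    resid = np.abs(log_ratio - np.median(log_ratio))
    keep = resid <= np.quantile(resid, 1.0 - trim)  # drop the most differentiated genes
    return log_ratio[keep].mean()                   # offset driven by the stable genes

rng = np.random.default_rng(0)
reference = rng.poisson(50, size=1000).astype(float)
sample = reference * 1.7 + rng.poisson(5, size=1000)   # same library with a global scale shift
offset = pairwise_scale(sample, reference)
normalized = sample / 2.0 ** offset                     # bring the sample onto the reference scale
```

In MUREN the analogous pairwise estimates against several references are then combined with a linear model; the snippet above only shows the single-pair step.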


2019 ◽  
Vol 621 ◽  
pp. A59 ◽  
Author(s):  
T. Stolker ◽  
M. J. Bonse ◽  
S. P. Quanz ◽  
A. Amara ◽  
G. Cugno ◽  
...  

Context. The direct detection and characterization of planetary and substellar companions at small angular separations is a rapidly advancing field. Dedicated high-contrast imaging instruments deliver unprecedented sensitivity, enabling detailed insights into the atmospheres of young low-mass companions. In addition, improvements in data reduction and point spread function (PSF)-subtraction algorithms are equally relevant for maximizing the scientific yield, both from new and archival data sets. Aims. We aim at developing a generic and modular data-reduction pipeline for processing and analysis of high-contrast imaging data obtained with pupil-stabilized observations. The package should be scalable and robust for future implementations and particularly suitable for the 3–5 μm wavelength range where typically thousands of frames have to be processed and an accurate subtraction of the thermal background emission is critical. Methods. PynPoint is written in Python 2.7 and applies various image-processing techniques, as well as statistical tools for analyzing the data, building on open-source Python packages. The current version of PynPoint has evolved from an earlier version that was developed as a PSF-subtraction tool based on principal component analysis (PCA). Results. The architecture of PynPoint has been redesigned with the core functionalities decoupled from the pipeline modules. Modules have been implemented for dedicated processing and analysis steps, including background subtraction, frame registration, PSF subtraction, photometric and astrometric measurements, and estimation of detection limits. The pipeline package enables end-to-end data reduction of pupil-stabilized data and supports classical dithering and coronagraphic data sets. As an example, we processed archival VLT/NACO L′ and M′ data of β Pic b and reassessed the brightness and position of the planet with a Markov chain Monte Carlo analysis; we also provide a derivation of the photometric error budget.
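For readers unfamiliar with the underlying technique, the following is a minimal numpy sketch of PCA-based PSF subtraction, the method the original PynPoint tool implemented; it is not PynPoint's API, and the input cube is a hypothetical stack of pupil-stabilized frames.

```python
# Minimal sketch of PCA-based PSF subtraction (not PynPoint's own code).
import numpy as np

def pca_psf_subtract(cube, n_components=10):
    """Subtract a low-rank PSF model from each frame of an (n_frames, ny, nx) cube."""
    n_frames, ny, nx = cube.shape
    flat = cube.reshape(n_frames, -1)
    centered = flat - flat.mean(axis=0)
    # The leading right-singular vectors span the quasi-static PSF pattern.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                 # (n_components, ny * nx)
    model = centered @ basis.T @ basis        # projection of each frame onto the PSF subspace
    return (centered - model).reshape(n_frames, ny, nx)

cube = np.random.default_rng(0).normal(size=(100, 64, 64))   # stand-in for real frames
residuals = pca_psf_subtract(cube, n_components=5)

# In an angular-differential-imaging reduction, the residual frames would then be
# de-rotated to a common field orientation and median-combined to suppress speckles.
```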


2010 ◽  
Vol 66 (6) ◽  
pp. 733-740 ◽  
Author(s):  
Kay Diederichs

An indicator which is calculated after the data reduction of a test data set may be used to estimate the (systematic) instrument error at a macromolecular X-ray source. The numerical value of the indicator is the highest signal-to-noise [I/σ(I)] value that the experimental setup can produce, and its reciprocal is related to the lower limit of the merging R factor. In the context of this study, the stability of the experimental setup is influenced and characterized by the properties of the X-ray beam, shutter, goniometer, cryostream and detector, and also by the exposure time and spindle speed. Typical values of the indicator are given for data sets from the JCSG archive. Some sources of error are explored with the help of test calculations using SIM_MX [Diederichs (2009), Acta Cryst. D65, 535–542]. One conclusion is that the accuracy of data at low resolution is usually limited by the experimental setup rather than by the crystal. It is also shown that the influence of vibrations and fluctuations may be mitigated by a reduction in spindle speed accompanied by stronger attenuation.
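The indicator is an asymptotic signal-to-noise ratio: a systematic, intensity-proportional error term is added in quadrature to the counting error, so I/σ(I) saturates at a value set by the instrument rather than by the crystal. A sketch of this error model is given below; the stated link to the merging R factor is approximate and assumes purely Gaussian errors.

```latex
% Error model with a systematic term proportional to intensity:
\sigma^{2}(I) \;=\; \sigma_{0}^{2}(I) \;+\; \left(A\,I\right)^{2},
\qquad
\lim_{I\to\infty}\frac{I}{\sigma(I)} \;=\; \frac{1}{A} \;\equiv\; \mathrm{ISa}.
% For Gaussian errors the merging R factor is bounded below by roughly
% \sqrt{2/\pi}/\mathrm{ISa} \approx 0.8/\mathrm{ISa}, the "reciprocal" relation
% referred to in the abstract.
```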


Geophysics ◽  
1956 ◽  
Vol 21 (1) ◽  
pp. 88-106 ◽  
Author(s):  
Kenneth L. Cook

In 1948 the U. S. Geological Survey, in cooperation with the U. S. Coast and Geodetic Survey, made a regional gravity survey in northeastern Oklahoma and southeastern Kansas in connection with the studies of the deflection of the vertical. About 550 gravity stations were occupied with spacings of 5 to 10 miles in parts of 54 counties, and a Bouguer anomaly map, contoured at intervals of 5 milligals, was drawn. In southeastern Kansas there is a lack of correlation of regional gravity with known regional structural geology. The observed gravity anomalies are apparently caused principally by variations of density in the Precambrian basement and indicate a basement of complex nature, made up of rocks of contrasting properties, with a regional grain striking predominantly west or west‐northwest. In northeastern Oklahoma the several observed regional gravity anomalies indicate different degrees of correlation of regional gravity with regional structural geology. In the Precambrian highland area in Osage, Pawnee, and Creek Counties, there is a lack of correlation, as the gravity anomaly is probably caused chiefly by density contrasts within the basement complex. The anomaly associated with the Hunton arch is probably caused partly by structural relief of the rocks of pre‐Pennsylvanian age and partly by density contrasts within the basement, and thus indicates some correlation. The steep gravity gradients along the outer flanks of the Ozark uplift indicate good correlation with the subsurface geology. The great anomaly over the Arkansas basin, which indicates a close correlation, is probably caused largely—but perhaps not entirely—by downwarping of the basement and pre‐Pennsylvanian rocks.


Geophysics ◽  
2014 ◽  
Vol 79 (4) ◽  
pp. B135-B149 ◽  
Author(s):  
Elahe P. Ardakani ◽  
Douglas R. Schmitt ◽  
Todd D. Bown

The Devonian Grosmont Formation in northeastern Alberta, Canada, is the world’s largest accumulation of heavy oil in carbonate rock, with estimated bitumen in place of [Formula: see text]. Much of the reservoir unconformably subcrops beneath Cretaceous sediments along an eroded surface, modified by karstification, known as the Sub-Mannville Unconformity (SMU). We reanalyzed and integrated legacy seismic data sets acquired in the mid-1980s to investigate the structure of this surface. Standard data processing was carried out, supplemented by some more modern approaches to noise reduction. The interpretation of these reprocessed data resulted in key structural maps above and below the SMU. These seismic maps revealed substantially more detail than those constructed solely on the basis of well-log data; in fact, the use of only well-log information would likely result in erroneous interpretations. Although features smaller than about 40 m in radius cannot be easily discerned at the SMU due to wavefield and data-sampling limits, the data did reveal the existence of a roughly east–west-trending ridge-valley system. A more minor northeast–southwest-trending linear valley was also apparent. These observations are all consistent with the model of a karsted and eroded carbonate surface. Comparison of the maps for the differing horizons further suggested that deeper horizons may influence the structure of the SMU and even the overlying Mesozoic formations, suggesting in turn that some displacements due to karst-cavity collapse or minor faulting within the Grosmont occurred during or after deposition of the younger Mesozoic sediments on top of the Grosmont surface.


2012 ◽  
Vol 3 (1) ◽  
pp. 41 ◽  
Author(s):  
A. Bergamasco ◽  
A. Benetazzo ◽  
S. Carniel ◽  
F.M. Falcieri ◽  
T. Minuzzo ◽  
...  

In order to monitor, describe and understand the marine environment, many research institutions are involved in the acquisition and distribution of ocean data, both from observations and from models. Scientists at these institutions spend too much time looking for, accessing, and reformatting data: they need better tools and procedures to make their science more efficient. The U.S. Integrated Ocean Observing System (US-IOOS) is working on making large amounts of distributed data usable in an easy and efficient way. It is essentially a network of scientists, technicians and technologies designed to acquire, collect and disseminate observational and modelled data from investigations of coastal and oceanic marine regions to researchers, stakeholders and policy makers. To be successful, this effort requires standard data protocols, web services and standards-based tools.

Starting from the US-IOOS approach, which is being adopted throughout much of the oceanographic and meteorological sectors, we describe here the CNR-ISMAR Venice experience in setting up a national Italian IOOS framework using the THREDDS (THematic Real-time Environmental Distributed Data Services) Data Server (TDS), a middleware designed to fill the gap between data providers and data users. The TDS provides services that allow data users to find the data sets pertaining to their scientific needs and to access, visualize and use them easily, without downloading files to the local workspace. To achieve this, data providers must make their data available in a standard form that the TDS understands, with sufficient metadata to allow the data to be read and searched in a standard way. The core idea is therefore to use a Common Data Model (CDM), a unified conceptual model that describes the different datatypes within each dataset. More specifically, Unidata (www.unidata.ucar.edu) has developed CDM specifications for many of the kinds of data used by the scientific community, such as grids, profiles, time series and swath data. These datatypes are aligned with the NetCDF Climate and Forecast (CF) Metadata Conventions and with the Climate Science Modelling Language (CSML); CF-compliant NetCDF files and GRIB files can be read directly with no modification, while non-compliant files can be modified to meet the appropriate metadata requirements.

Once data are standardized in the CDM, the TDS makes them available through a series of web services such as OPeNDAP or the Open Geospatial Consortium Web Coverage Service (WCS), allowing data users to easily obtain small subsets from large datasets and to quickly visualize their content using tools such as GODIVA2 or the Integrated Data Viewer (IDV). In addition, an ISO metadata service is available through the TDS that can be harvested by catalogue broker services (e.g. GI-cat) to enable distributed search across federated data servers. Example TDS datasets can be accessed at the CNR-ISMAR Venice site: http://tds.ve.ismar.cnr.it:8080/thredds/catalog.html.
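On the data-user side, "access without downloading files" typically looks like opening an OPeNDAP endpoint served by the TDS and requesting only the needed subset. The sketch below is illustrative: only the host and catalog URL come from the text above, while the dataset path and the variable name "temperature" are assumptions.

```python
# Minimal sketch of subsetting a TDS-served dataset over OPeNDAP with xarray.
import xarray as xr

# Hypothetical dataset path under the CNR-ISMAR Venice catalog; browse
# http://tds.ve.ismar.cnr.it:8080/thredds/catalog.html for real endpoints.
url = "http://tds.ve.ismar.cnr.it:8080/thredds/dodsC/some/dataset.nc"

ds = xr.open_dataset(url)                                      # lazy open; no file download
subset = ds["temperature"].sel(time="2012-01-15", method="nearest")  # assumes CF-style coordinates
values = subset.load()                                         # only this slice crosses the network
print(values)
```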


2019 ◽  
Vol 2 (2) ◽  
pp. 61
Author(s):  
Wahyu Hidayat ◽  
Wrego Seno Giamboro

The gravity method is a passive geophysical method that provides information on the distribution of rock density below the surface. A weakness of the method is the ambiguity involved in determining the depth of an anomaly source. This study aims to determine anomaly depth using Continuous Wavelet Transform (CWT) analysis in order to reduce this ambiguity, so that the results have a higher degree of accuracy. The work consisted of a field survey (data acquisition) followed by data analysis. The survey was conducted in Karangsambung, Kebumen, Central Java, where gravity data were acquired at 56 measurement points. Processing of the acquired data included conversion of readings to mGal, instrument-height correction, drift, tidal, and latitude corrections, the free-air correction, the Bouguer correction, and the terrain correction. These corrections yielded complete Bouguer anomaly (ABL) values, which were then reduced to a flat datum and separated into regional and residual anomalies. CWT analysis was then carried out along profiles (incisions) drawn across the residual anomaly map. The results show that the anomaly sources lie at depths of approximately 39.2 to 122.9 meters.
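The depth-estimation idea can be illustrated with a toy profile (this is not the authors' code): transform a residual anomaly profile with a continuous wavelet and pick the scale at which the coefficients carry the most energy. The anomaly shape, the wavelet choice, and the crude scale-to-depth conversion are all assumptions made for illustration.

```python
# Toy illustration of CWT-based depth estimation along a gravity profile.
import numpy as np
import pywt

x = np.linspace(0.0, 2000.0, 401)                              # profile distance (m), 5 m spacing
depth_true = 80.0
profile = 1.0 / (1.0 + ((x - 1000.0) / depth_true) ** 2)       # toy residual anomaly over a buried source

scales = np.arange(2, 64)
coeffs, _ = pywt.cwt(profile, scales, "mexh")                  # Mexican-hat continuous wavelet transform
energy = (coeffs ** 2).sum(axis=1)                             # wavelet energy per scale
best_scale = scales[np.argmax(energy)]
dx = x[1] - x[0]
# The scale-to-depth conversion below is a placeholder assumption, not the paper's calibration.
print(f"dominant scale: {best_scale}, crude depth estimate ~ {best_scale * dx:.0f} m")
```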


Author(s):  
Jaco Immelman

LiDAR-derived DEMs can take weeks to process, which limits their usefulness for time-critical applications. To reach realistic processing times, data sets need to be reduced; however, the reduction must be carried out with minimal loss of accuracy. This study investigates the effects of data reduction on the accuracy of LiDAR-derived DEMs.
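A minimal sketch of the trade-off being investigated (not the study's actual workflow): thin a point cloud by a chosen fraction, rebuild the DEM on the same grid, and measure the vertical error against the full-resolution DEM. The point cloud and terrain here are synthetic.

```python
# Decimate a synthetic point cloud and quantify the accuracy lost in the DEM.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1000.0, size=(50_000, 2))               # synthetic x, y (m)
z = 50.0 + 0.01 * pts[:, 0] + 2.0 * np.sin(pts[:, 1] / 50.0)   # synthetic terrain heights (m)

gx, gy = np.mgrid[0:1000:200j, 0:1000:200j]                    # 5 m DEM grid

def build_dem(points, heights):
    return griddata(points, heights, (gx, gy), method="linear")

dem_full = build_dem(pts, z)                                   # DEM from the full point cloud
keep = rng.random(len(pts)) < 0.10                             # retain 10% of the points
dem_thin = build_dem(pts[keep], z[keep])                       # DEM from the reduced cloud

rmse = np.sqrt(np.nanmean((dem_thin - dem_full) ** 2))         # vertical accuracy lost to reduction
print(f"RMSE after 90% data reduction: {rmse:.3f} m")
```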

