Seismic system calibration: 2. Cross-spectral calibration using random binary signals

1979 ◽  
Vol 69 (1) ◽  
pp. 271-288
Author(s):  
Jon Berger ◽  
Duncan Carr Agnew ◽  
Robert L. Parker ◽  
William E. Farrell

Abstract We present a rapid and accurate method of calibrating seismic systems using a random binary calibration signal and cross-spectral techniques. The complex transfer function obtained from the cross spectrum is least-squares fit to a ratio of two polynomials in s (s = iω) whose degrees are determined by a linear systems analysis. This provides a compact representation of the system frequency response. We demonstrate its application to two seismic systems, the IDA and SRO seismometers. This method yields calibrations accurate to better than 1 per cent in amplitude and 1° in phase.
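
A minimal sketch of this approach in Python: drive a stand-in system with a random binary signal, estimate the transfer function from the cross spectrum, and least-squares fit it to a ratio of polynomials in s. The system, sampling rate, and polynomial degrees below are illustrative assumptions, not the IDA or SRO values.

```python
# Cross-spectral calibration sketch with a stand-in second-order system.
import numpy as np
from scipy import signal

fs = 40.0                                 # sampling rate in Hz (assumed)
n = 2 ** 16
rng = np.random.default_rng(0)
x = np.where(rng.random(n) > 0.5, 1.0, -1.0)       # random binary input

# Stand-in "seismometer": a second-order system, for demonstration only.
true_sys = signal.TransferFunction([1.0], [1.0, 0.4, 4.0])
t = np.arange(n) / fs
_, y, _ = signal.lsim(true_sys, x, t)

# Cross-spectral estimate of the transfer function: H(f) = Pxy / Pxx.
f, pxy = signal.csd(x, y, fs=fs, nperseg=4096)
_, pxx = signal.welch(x, fs=fs, nperseg=4096)
h = pxy / pxx

# Linearized (Levy-style) least-squares fit of h to b0 / (s^2 + a1*s + a0),
# i.e. b0 - h*(a1*s + a0) = h*s^2, stacked into a real-valued system.
s = 1j * 2 * np.pi * f[1:]
hk = h[1:]
m = np.column_stack([np.ones_like(s), -hk * s, -hk])
rhs = hk * s ** 2
mr = np.vstack([m.real, m.imag])
rr = np.concatenate([rhs.real, rhs.imag])
b0, a1, a0 = np.linalg.lstsq(mr, rr, rcond=None)[0]
print(f"fitted b0={b0:.3f}, a1={a1:.3f}, a0={a0:.3f}  (true: 1.0, 0.4, 4.0)")
```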

2013 ◽  
Vol 13 (3) ◽  
pp. 132-141 ◽  
Author(s):  
Dongliang Su ◽  
Jian Wu ◽  
Zhiming Cui ◽  
Victor S. Sheng ◽  
Shengrong Gong

This paper proposes a novel invariant local descriptor, a combination of gradient histograms with contrast intensity (CGCI), for image matching and object recognition. Considering the different contributions that sub-regions of a local interest region make to an interest point, we divide the local region around the interest point into two main sub-regions: an inner region and a peripheral region. We then describe the inner region with gradient histogram information and the peripheral region with contrast intensity information, where contrast intensity is defined as the intensity difference between the interest point and the other pixels in the local region. Our experimental results demonstrate that the proposed descriptor performs better than SIFT and its variants PCA-SIFT and SURF under various optical and geometric transformations. It also offers better matching efficiency than these descriptors and has the potential to be used in a variety of real-time applications.
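
A minimal sketch of the descriptor's structure, assuming illustrative radii, bin counts, and sector counts rather than the paper's exact parameters:

```python
# CGCI-style descriptor sketch: gradient orientation histogram for the
# inner region, center-relative contrast intensity for the periphery.
import numpy as np

def cgci_descriptor(patch, inner_radius=6, n_orient_bins=8, n_sectors=8):
    """patch: square grayscale array centered on the interest point."""
    h, w = patch.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)

    # Gradient orientation histogram over the inner region.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    inner = r <= inner_radius
    bins = (ang[inner] * n_orient_bins / (2 * np.pi)).astype(int) % n_orient_bins
    grad_hist = np.bincount(bins, weights=mag[inner], minlength=n_orient_bins)

    # Contrast intensity over the peripheral region: mean intensity
    # difference from the center pixel, pooled in angular sectors.
    center_val = float(patch[cy, cx])
    outer = (r > inner_radius) & (r <= min(cy, cx))
    theta = np.mod(np.arctan2(yy - cy, xx - cx), 2 * np.pi)
    sector = (theta * n_sectors / (2 * np.pi)).astype(int) % n_sectors
    contrast = np.zeros(n_sectors)
    for k in range(n_sectors):
        sel = outer & (sector == k)
        if sel.any():
            contrast[k] = np.mean(patch[sel].astype(float) - center_val)

    desc = np.concatenate([grad_hist, contrast])
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

rng = np.random.default_rng(0)
print(cgci_descriptor(rng.random((31, 31))).shape)   # (16,)
```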


2021 ◽  
Vol 368 (6) ◽  
Author(s):  
Liwen Zhang ◽  
Qingyu Lv ◽  
Yuling Zheng ◽  
Xuan Chen ◽  
Decong Kong ◽  
...  

Abstract T-2 is a common mycotoxin contaminating cereal crops. Chronic consumption of food contaminated with T-2 toxin can lead to death, so simple and accurate detection methods for food and feed are necessary. In this paper, we establish a highly sensitive and accurate method for detecting T-2 toxin using AlphaLISA. The system consists of acceptor beads labeled with T-2-bovine serum albumin (BSA), streptavidin-labeled donor beads, and biotinylated T-2 antibodies. T-2 in the sample matrix competes with T-2-BSA for antibodies. Adding the biotinylated antibodies to the test well, followed by T-2 and the T-2-BSA acceptor beads, yielded a detection range of 0.03–500 ng/mL. The half-maximal inhibitory concentration was 2.28 ng/mL and the coefficient of variation was <10%. In addition, this method showed no cross-reactivity with other related mycotoxins. This optimized method for extracting T-2 from food and feed samples achieved a recovery rate of approximately 90% at T-2 concentrations as low as 1 ng/mL, better than the performance of a commercial ELISA kit. This competitive AlphaLISA method offers high sensitivity, good specificity, good repeatability and simple operation for detecting T-2 toxin in food and feed.
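
The half-maximal inhibitory concentration reported here is the kind of quantity typically recovered by fitting a four-parameter logistic curve to competitive dose-response data. The sketch below shows such a fit on invented data points; it is not necessarily the authors' exact analysis.

```python
# Four-parameter logistic (4PL) fit for a competitive immunoassay.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ic50, hill):
    """Competitive 4PL: signal falls from `top` to `bottom` around ic50."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

conc = np.array([0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 500])      # ng/mL
counts = np.array([980, 950, 880, 720, 450, 230, 120, 80, 60])  # invented

popt, _ = curve_fit(four_pl, conc, counts,
                    p0=[counts.max(), counts.min(), 2.0, 1.0])
top, bottom, ic50, hill = popt
print(f"IC50 ~ {ic50:.2f} ng/mL, Hill slope {hill:.2f}")
```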


1987 ◽  
Vol 3 (1) ◽  
pp. 39-51 ◽  
Author(s):  
Ronald E. Anderson

Results from the 1979 Minnesota Computer Literacy Assessment, conducted by the Minnesota Educational Computing Consortium, show that high school females performed better than males in some specific areas of programming. The areas of female superiority are those, such as problem analysis and algorithmic application, where the problems are expressed verbally rather than mathematically. While these findings may result from unique features of computer education in Minnesota, they may also be a consequence of the fact that the Minnesota assessment instrument was relatively free of mathematical bias. These findings, and those of the 1982 National Assessment of Science on female superiority in “science decision making,” imply that women are better than men at tasks usually defined as systems analysis rather than program coding.


2014 ◽  
Vol 27 (3) ◽  
pp. 399-410 ◽  
Author(s):  
Stevica Cvetkovic ◽  
Sasa Nikolic ◽  
Slobodan Ilic

Although many indoor-outdoor image classification methods have been proposed in the literature, most of them omit comparison with basic methods to justify the need for complex feature extraction and classification procedures. In this paper we propose a relatively simple but highly accurate method for indoor-outdoor image classification, based on a combination of carefully engineered MPEG-7 color and texture descriptors. To determine the optimal combination of descriptors in terms of fast extraction, compact representation, and high accuracy, we conducted comprehensive empirical tests over several color and texture descriptors. The resulting descriptor combination was used for training and testing a binary SVM classifier. We show that proper preprocessing of the descriptors before SVM classification has a significant impact on the final result. Comprehensive experimental evaluation shows that the proposed method outperforms several more complex indoor-outdoor image classification techniques on a couple of public datasets.
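
A sketch of the classification stage with stand-in feature vectors, since MPEG-7 extraction itself is not covered by common Python libraries; the per-dimension standardization is an assumed form of the preprocessing the authors emphasize:

```python
# Feature concatenation + preprocessing + binary SVM, on stand-in data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images = 200
color_feats = rng.normal(size=(n_images, 64))    # e.g. a color descriptor
texture_feats = rng.normal(size=(n_images, 62))  # e.g. an edge histogram
X = np.hstack([color_feats, texture_feats])
y = rng.integers(0, 2, size=n_images)            # 0 = indoor, 1 = outdoor

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```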


2003 ◽  
Vol 125 (4) ◽  
pp. 736-739 ◽  
Author(s):  
Chakguy Prakasvudhisarn ◽  
Theodore B. Trafalis ◽  
Shivakumar Raman

Probe-type Coordinate Measuring Machines (CMMs) rely on the measurement of several discrete points to capture the geometry of part features. The sampled points are then fit to verify a specified geometry. The most widely used fitting method, the least squares fit (LSQ), occasionally overestimates the tolerance zone. This can lead to the economic disadvantage of rejecting some good parts and the statistical disadvantage of assuming a normal (Gaussian) distribution. Support vector machines (SVMs) represent a relatively new approach to determining the approximating function in regression problems, with the advantage that no normal distribution assumption is required. In this research, support vector regression (SVR), a new data fitting procedure, is introduced as an accurate method for finding minimum zone straightness and flatness tolerances. Numerical tests conducted with previously published data give results comparable to the published ones, illustrating the method's potential for application in precision data analysis such as minimum zone estimation.
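
A minimal sketch of the idea for flatness, assuming synthetic probe data and illustrative SVR settings rather than the paper's data or tuning:

```python
# Fit a reference plane to sampled CMM points with epsilon-insensitive
# SVR, then take the residual spread as the flatness zone.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 100
xy = rng.uniform(0, 50, size=(n, 2))                    # probe locations (mm)
z = 0.002 * xy[:, 0] - 0.001 * xy[:, 1] + rng.uniform(-0.005, 0.005, n)

# Linear kernel fits z = a*x + b*y + c; a small epsilon keeps the fit tight.
svr = SVR(kernel="linear", C=100.0, epsilon=1e-4)
svr.fit(xy, z)
residuals = z - svr.predict(xy)
flatness_zone = residuals.max() - residuals.min()
print(f"estimated flatness zone: {flatness_zone * 1000:.2f} um")
```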


1979 ◽  
Vol 69 (1) ◽  
pp. 251-270 ◽  
Author(s):  
W. E. Farrell ◽  
J. Berger

Abstract We describe linear models of the IDA (International Deployment of Accelerometers) and SRO (Seismic Research Observatories) feedback seismometers obtained by straightforward analysis of the control systems. The most important property of these models is the theoretical transfer function they yield, relating Earth acceleration to output voltage. The parameterization of the theoretical transfer function is described. Several uses of the system models and transfer function are discussed, but we conclude that the parameterized models do not provide a sufficiently accurate means of describing the installed accelerometer's behavior. A speedier, more automatic, and more accurate calibration can be obtained by electrically perturbing the seismometer system. This technique is discussed in Part 2.
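
As a rough illustration of working with such a parameterized theoretical transfer function, one can evaluate its frequency response numerically; the polynomial coefficients below are placeholders, not the IDA or SRO values:

```python
# Evaluate a rational transfer function H(s) relating acceleration to
# output voltage over a band of frequencies.
import numpy as np
from scipy import signal

num = [1.0]                 # numerator polynomial in s (assumed)
den = [1.0, 0.7, 0.05]      # denominator polynomial in s (assumed)
sys = signal.TransferFunction(num, den)

w = 2 * np.pi * np.logspace(-4, 0, 200)     # rad/s over 1e-4 to 1 Hz
w, mag, phase = signal.bode(sys, w)         # magnitude in dB, phase in deg
print(mag[:3], phase[:3])
```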


2005 ◽  
Vol 22 (8) ◽  
pp. 1294-1304 ◽  
Author(s):  
Jong Jin Park ◽  
Kuh Kim ◽  
Brian A. King ◽  
Stephen C. Riser

Abstract Subsurface ocean currents can be estimated from the positions of drifting profiling floats that are being widely deployed for the international Argo program. The calculation of subsurface velocity depends on how the trajectory of the float while on the surface is treated. The following three aspects of the calculation of drift velocities are addressed: the accurate determination of surfacing and dive times, a new method for extrapolating surface and dive positions from the set of discrete Argos position fixes, and a discussion of the errors in the method. In the new method described herein, the mean drift velocity and the phase and amplitude of inertial motions are derived explicitly from a least squares fit to the set of Argos position fixes for each surface cycle separately. The new method differs from previous methods that include prior assumptions about the statistics of inertial motions. It is concluded that the endpoints of the subsurface trajectory can be estimated with accuracy better than 1.7 km (East Sea/Sea of Japan) and 0.8 km (Indian Ocean). All errors, combined with the error that results from geostrophic shear and extrapolation, should result in individual subsurface velocity estimates with uncertainty of the order of 0.2 cm s⁻¹.
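
A sketch of the per-cycle least squares model described above, using invented fix times, displacements, and latitude: each surface trajectory is modeled as a mean drift plus one inertial oscillation, x(t) = x0 + u·t + A·cos(f·t) + B·sin(f·t), and the fit is evaluated at the surfacing and dive times to extrapolate the endpoints.

```python
# Least squares fit of mean drift + inertial motion to Argos fixes.
import numpy as np

lat = 37.0                                        # deg, assumed
f = 2 * 7.2921e-5 * np.sin(np.radians(lat))       # Coriolis parameter (rad/s)

t = np.array([0, 1800, 4100, 7600, 11000, 15500], float)   # fix times (s)
x = np.array([0.0, 210.0, 480.0, 950.0, 1320.0, 1900.0])   # east disp. (m)

# Design matrix for the unknowns [x0, u, A, B].
G = np.column_stack([np.ones_like(t), t, np.cos(f * t), np.sin(f * t)])
x0, u, A, B = np.linalg.lstsq(G, x, rcond=None)[0]

def position(ts):
    return x0 + u * ts + A * np.cos(f * ts) + B * np.sin(f * ts)

t_surface, t_dive = -600.0, 16800.0   # estimated surfacing/dive times (s)
print("extrapolated endpoints (m):", position(t_surface), position(t_dive))
print(f"mean drift u = {u * 100:.1f} cm/s")
```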


2021 ◽  
Author(s):  
Cao Jing ◽  
Sun Linhua ◽  
Wu Cancan

Abstract A more accurate method of processing DC resistivity data to distinguish anomalous bodies is important for predicting and detecting potential hazards such as goafs and water inrush. In this paper, we present a DC data processing procedure based on aggregation-area (C-A) theory. We treat the cumulative area enclosed by apparent-resistivity isograms as a function of the apparent-resistivity value and search this function for the threshold that serves as the boundary value. Comparison of the conventional processing method with a physical simulation shows that the C-A approach identifies a high-resistance anomalous body better than a low-resistance one because of its sensitivity: the delineated high-resistance area is almost identical to the physical model, while the delineated low-resistance area only approaches the nearest boundary. The results are in good agreement with the physical model, validating C-A multifractal theory as an effective way to achieve accurate DC interpretation.
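
A sketch of a C-A style threshold search on a synthetic apparent-resistivity grid; the grid, the embedded anomaly, and the knee-detection rule are all illustrative assumptions:

```python
# Cumulative area above each resistivity threshold; a break in the
# log-log slope marks a candidate boundary value between background
# and anomalous zones.
import numpy as np

rng = np.random.default_rng(2)
rho = rng.lognormal(mean=4.0, sigma=0.5, size=(200, 200))   # ohm-m grid
rho[80:120, 80:120] *= 8.0                                  # high-resistivity body

cell_area = 1.0                                             # m^2 per grid cell
thresholds = np.logspace(np.log10(rho.min()), np.log10(rho.max()), 40)
area = np.array([(rho >= s).sum() * cell_area for s in thresholds])

# Slopes between successive points on the log-log curve; a sharp change
# in slope suggests a threshold separating the two populations.
slopes = np.diff(np.log(area)) / np.diff(np.log(thresholds))
knee = thresholds[np.argmax(np.abs(np.diff(slopes))) + 1]
print(f"candidate anomaly threshold: {knee:.1f} ohm-m")
```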


2020 ◽  
Vol 27 (1) ◽  
Author(s):  
MB Ibrahim ◽  
KA Gbolagade

The science and art of data compression lies in presenting information in a compact form. This compact representation is generated by exploiting structures that exist in the data. The Lempel-Ziv-Welch (LZW) algorithm is known to be one of the best text compressors, achieving a high degree of compression for text files with many redundancies; the greater the redundancy, the greater the compression achieved. In this paper, the LZW algorithm is further enhanced through the introduction of the Chinese Remainder Theorem (CRT), to achieve a higher degree of compression without compromising performance. Compression time and compression ratio were used as performance metrics. Simulations were carried out in MATLAB on five text files of varying sizes to determine the efficiency of the proposed CRT-LZW technique. This new technique opens a path toward compressing data faster than traditional LZW. The results show that CRT-LZW performs better than LZW in terms of computational time, by 0.12 s to 15.15 s, while the compression ratio remains the same at 2.56%. The proposed technique's compression time also performed better than investigative papers implementing LZW-RNS, by 0.12 s to 2.86 s in one case and by 0.12 s to 0.14 s in another. Keywords: Data Compression, Lempel-Ziv-Welch (LZW) algorithm, Enhancement, Chinese Remainder Theorem (CRT), Text files.
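
For reference, a baseline dictionary-based LZW compressor of the kind being enhanced; the CRT post-processing stage is not reproduced here, so this sketch shows only the baseline the CRT-LZW results are compared against:

```python
# Classic LZW: grow a phrase dictionary while emitting codes.
def lzw_compress(text: str) -> list[int]:
    """Return the sequence of dictionary codes for `text`."""
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current, codes = "", []
    for ch in text:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate
        else:
            codes.append(dictionary[current])
            dictionary[candidate] = next_code   # learn the new phrase
            next_code += 1
            current = ch
    if current:
        codes.append(dictionary[current])
    return codes

data = "TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(data)
print(len(data), "chars ->", len(codes), "codes:", codes)
```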

