THE MATCHED FIELD PROCESSING OF BINAURAL DOLPHIN-LIKE SIGNALS FOR THE DETECTION AND IDENTIFICATION OF BURIED TARGETS

2003 ◽  
Vol 11 (04) ◽  
pp. 521-534 ◽  
Author(s):  
A. TOLSTOY ◽  
W. AU

The Matched Field Processing (MFP) approach discussed here is intended to extract subtle differences between apparently similar signals. The technique is applied coherently to an array of data, i.e. to two receivers. One of the main advantages of this work is that even though we use MFP, no modeling is involved. Since the available binaural data are quite limited and show very strong, obviously different returns from all the targets (not the subtle differences realistically expected), we found it necessary to manipulate the data to bring them more into line with expectations. In particular, scattered returns from a drum were reduced, i.e. multiplied by a small constant factor, then added to the scattered returns from bottom-only data using various time shifts. The shifts simulated a family of returns from a low signal-to-noise (S/N) 55-gallon drum target. This family with shifted bottom scattering mimics returns from multiple placements of the targets on the bottom. These new target "data" (composed of manipulated real data) seem at first glance to be nearly identical to the original bottom-only returns; thus, the new target data display only subtle differences from the bottom-only data. The MFP approach (based on the linear, a.k.a. Bartlett, processor) was then applied to these new "data", yielding a target "template" of scattered returns varying as a function of time and frequency and characterizing the returns scattered from the drum. Additionally, a similar template was computed for the buried manta-like target data and is seen to be quite different from the drum template. This new type of template can easily be used to detect scattering from particular target types in low S/N situations. It is not proposed that dolphins use these templates, but rather that the templates display scattering characteristics which the dolphins may be using.
More data would be extremely useful for determining the templates under a variety of conditions, e.g. lower S/N levels, different bottom types, target types, source ranges, depths, and scattering angles.
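The linear (Bartlett) processor named above reduces, per frequency bin, to a normalized match between a data vector and a replica vector across the receivers. A minimal sketch for a two-receiver ("binaural") array follows; the data values are synthetic placeholders, not the paper's measurements:

```python
import cmath

def bartlett_power(data, replica):
    """Normalized Bartlett (linear) processor output for one
    frequency bin: |w^H d|^2 / (|w|^2 |d|^2), bounded above by 1."""
    num = sum(w.conjugate() * d for w, d in zip(replica, data))
    wnorm = sum(abs(w) ** 2 for w in replica)
    dnorm = sum(abs(d) ** 2 for d in data)
    return abs(num) ** 2 / (wnorm * dnorm)

# Two receivers: a replica matching the data up to a common phase
# scores 1; a mismatched replica scores lower.
data = [cmath.exp(0.3j), cmath.exp(1.1j)]
match = bartlett_power(data, data)
mismatch = bartlett_power(data, [1.0, 1.0])
```

Evaluating this power over a grid of times and frequencies is what produces a template of the kind the abstract describes.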

2009 ◽  
Vol 20 (01) ◽  
pp. 45-55
Author(s):  
REGANT Y. S. HUNG ◽  
H. F. TING

The advance of wireless and mobile technology introduces a new type of Video-on-Demand (VOD) system, namely mobile VOD systems, which provide VOD services to mobile clients. It is a challenge to design broadcasting protocols for such systems because of the following special requirements: (1) fixed maximum bandwidth: the maximum bandwidth required for broadcasting a movie should be fixed and independent of the number of requests; (2) load adaptivity: the total bandwidth usage should depend on the number of requests — the fewer the requests, the smaller the total bandwidth usage; and (3) client sensitivity: the system should be able to support clients with a wide range of heterogeneous capabilities. In the literature, there are partial solutions that give protocols meeting one or two of the above requirements. In this paper, we give the first protocol that meets all three requirements. The performance of our protocol is optimal up to a small constant factor.


2014 ◽  
Vol 8 (2) ◽  
Author(s):  
Ahmed El-Mowafy ◽  
Congwei Hu

Abstract: This study presents validation of BeiDou measurements in un-differenced standalone mode and experimental results of its application to real data. A reparameterized form of the unknowns in a geometry-free observation model was used. Observations from each satellite are independently screened using a local modeling approach. The main advantages are that no computation of inter-system biases is required and no satellite navigation information is needed. Validation of the triple-frequency BeiDou data was performed in static and kinematic modes: the former at two continuously operating reference stations in Australia, using data spanning two consecutive days, and the latter in a walking mode for three hours. The use of the validation method parameters for numerical and graphical diagnostics of the multi-frequency BeiDou observations is discussed. The precision of the system's observations was estimated using an empirical method that utilizes the characteristics of the validation statistics. The capability of the proposed method is demonstrated in detecting and identifying artificial errors inserted into the static BeiDou data, and in single point positioning processing of the kinematic test.
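The geometry-free idea underlying the observation model can be illustrated with the standard dual-frequency combination, in which satellite-receiver geometry, clocks, and troposphere cancel, so epoch-to-epoch jumps expose inserted errors. This sketch simplifies the paper's reparameterized model; the B1I/B2I frequencies are standard BeiDou values, while the threshold and data are illustrative assumptions:

```python
# Geometry-free combination of dual-frequency carrier phases:
# geometry, clocks and troposphere cancel, leaving ionosphere and
# ambiguities, so sudden jumps flag outliers or cycle slips.
C = 299792458.0                        # speed of light, m/s
F_B1, F_B2 = 1561.098e6, 1207.140e6    # BeiDou B1I / B2I (Hz)

def geometry_free(phase_b1_cyc, phase_b2_cyc):
    """L_B1 - L_B2 in metres, from carrier phases given in cycles."""
    return (C / F_B1) * phase_b1_cyc - (C / F_B2) * phase_b2_cyc

def screen(series_m, threshold_m=0.05):
    """Flag epochs whose geometry-free jump exceeds the threshold."""
    return [i for i in range(1, len(series_m))
            if abs(series_m[i] - series_m[i - 1]) > threshold_m]

# Smooth synthetic phase series with one artificial error inserted,
# mimicking the paper's static-data detection test.
phases = [(1000.0 + 0.01 * i, 800.0 + 0.008 * i) for i in range(10)]
gf = [geometry_free(b1, b2) for b1, b2 in phases]
gf[6] += 0.5                           # inserted error (metres)
flags = screen(gf)                     # epochs 6 and 7 are flagged
```

The flagged pair (the jump in and the jump back out) is the classic signature of an isolated outlier in a differenced series.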


2021 ◽  
Author(s):  
Daniel C. Bowden ◽  
Sara Klaasen ◽  
Eileen Martin ◽  
Patrick Paitz ◽  
Andreas Fichtner

As fibre-optic DAS deployments become more common, researchers are turning to tried-and-true methods of locating or characterizing seismic sources, such as beamforming. However, the strain measurement from DAS intrinsically carries its own sensitivities to both wave type and polarization (Martin et al. 2018; Paitz 2020, doctoral thesis). Additionally, a measurement along a conventional fibre-optic cable provides only one component of motion, so certain azimuths may be blind to certain types of seismic sources unless the cable layout can be designed to be oriented in multiple directions.

In this work, we explore the development and application of a beamforming algorithm that explicitly searches for multiple wavetypes. This builds on the 3-component beamforming and Matched Field Processing (MFP) algorithms of Riahi et al. (2013) and Gal et al. (2018): in addition to grid-searching over possible source azimuths, a distinct grid search is performed for each wavetype of interest. This does not solve the problem that a given cable orientation might be less sensitive to certain directions, but at least an array-response function can be robustly defined for each type of seismic excitation. This might help further distinguish whether beamforming observations are dominated by primary sources or by secondary scattering (van der Ende and Ampuero, 2020, preprint).

Much of this work uses analytic theory and synthetic examples. Time permitting, the enhanced algorithm will also be applied to data from the Mt. Meager experiment to explore its feasibility and efficacy with real data (EGU contribution from Klaasen et al., 2021).
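A wavetype-aware grid search of this kind can be sketched as a Bartlett beamformer whose steering vectors combine plane-wave phase delays with a per-channel axial-strain directivity (cos² for P, sin·cos for SH, following the DAS response patterns cited above). The geometry, frequency, and slowness below are synthetic assumptions, not the Mt. Meager configuration:

```python
import cmath
import math

def das_pattern(wavetype, wave_az, cable_az):
    """Axial-strain directivity of a DAS channel: cos^2 of the
    wave-to-cable angle for P, sin*cos for SH."""
    a = wave_az - cable_az
    if wavetype == "P":
        return math.cos(a) ** 2
    return math.sin(a) * math.cos(a)   # SH

def steering(wavetype, az, slowness, omega, chans):
    """Per-channel complex weights for a plane wave from azimuth az;
    chans is a list of (x, y, cable_azimuth) tuples."""
    kx, ky = math.cos(az), math.sin(az)
    return [das_pattern(wavetype, az, caz)
            * cmath.exp(-1j * omega * slowness * (kx * x + ky * y))
            for x, y, caz in chans]

def beam_power(data, w):
    """Normalized Bartlett power |w^H d|^2 / (|w|^2 |d|^2)."""
    num = abs(sum(wi.conjugate() * di for wi, di in zip(w, data))) ** 2
    den = (sum(abs(wi) ** 2 for wi in w)
           * sum(abs(di) ** 2 for di in data))
    return num / den if den else 0.0

# L-shaped cable, so no azimuth is completely blind.
chans = ([(i * 10.0, 0.0, 0.0) for i in range(8)]
         + [(70.0, i * 10.0, math.pi / 2) for i in range(1, 8)])
omega, s = 2 * math.pi * 5.0, 1 / 3000.0
true_az = math.radians(40)
data = steering("P", true_az, s, omega, chans)   # noise-free P arrival

# Grid search over azimuth for the P-wave steering model.
grid = [math.radians(a) for a in range(0, 360, 5)]
best = max(grid, key=lambda az: beam_power(
    data, steering("P", az, s, omega, chans)))
```

Repeating the same search with the SH pattern gives the per-wavetype array responses the abstract refers to.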


Geophysics ◽  
2009 ◽  
Vol 74 (4) ◽  
pp. J35-J48 ◽  
Author(s):  
Bernard Giroux ◽  
Abderrezak Bouchedda ◽  
Michel Chouteau

We introduce two new traveltime picking schemes developed specifically for crosshole ground-penetrating radar (GPR) applications. The main objective is to automate, at least partially, the traveltime picking procedure and to provide first-arrival times that are closer in quality to those of manual picking approaches. The first scheme is an adaptation of a method based on cross-correlation of radar traces collated in gathers according to their associated transmitter-receiver angle. A detector is added to isolate the first cycle of the radar wave and to suppress secondary arrivals that might be mistaken for first arrivals. To improve the accuracy of the arrival times obtained from the crosscorrelation lags, a time-rescaling scheme is implemented to resize the radar wavelets to a common time-window length. The second method is based on the Akaike information criterion (AIC) and the continuous wavelet transform (CWT). It is not tied to the restrictive criterion of waveform similarity that underlies crosscorrelation approaches, which is not guaranteed for traces sorted in common ray-angle gathers. It has the advantage of being fully automated. Performances of the new algorithms are tested with synthetic and real data. In all tests, the approach that adds first-cycle isolation to the original crosscorrelation scheme improves the results. In contrast, the time-rescaling approach brings limited benefits, except when strong dispersion is present in the data. In addition, the performance of crosscorrelation picking schemes degrades for data sets with disparate waveforms despite the high signal-to-noise ratio of the data. In general, the AIC-CWT approach is more versatile and performs well on all data sets. Only with data showing low signal-to-noise ratios is the AIC-CWT superseded by the modified crosscorrelation picker.
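The AIC component of the second picker can be sketched directly: the criterion models the trace as noise before the pick and signal after it, and the AIC minimum marks the most likely transition point. The synthetic trace and onset position below are illustrative assumptions:

```python
import math
import random

def aic_pick(trace):
    """First-arrival index via the Akaike information criterion:
    AIC(k) = k*ln(var(x[:k])) + (N-k-1)*ln(var(x[k:])); the minimum
    marks the noise-to-signal transition."""
    n = len(trace)

    def var(seg):
        m = sum(seg) / len(seg)
        # Floor the variance to avoid log(0) on constant segments.
        return max(sum((v - m) ** 2 for v in seg) / len(seg), 1e-12)

    aic = [k * math.log(var(trace[:k]))
           + (n - k - 1) * math.log(var(trace[k:]))
           for k in range(2, n - 2)]
    return aic.index(min(aic)) + 2

# Low-amplitude noise followed by a stronger oscillatory "arrival"
# starting at sample 50; the AIC minimum should land near the onset.
random.seed(1)
trace = ([random.gauss(0, 0.05) for _ in range(50)]
         + [math.sin(0.6 * i) + random.gauss(0, 0.05) for i in range(50)])
pick = aic_pick(trace)
```

In the published method the AIC is applied to CWT coefficients rather than the raw trace, which stabilizes the pick at low signal-to-noise ratios.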


Author(s):  
Brendan Juba ◽  
Hai S. Le

Practitioners of data mining and machine learning have long observed that the imbalance of classes in a data set negatively impacts the quality of classifiers trained on that data. Numerous techniques for coping with such imbalances have been proposed, but nearly all lack any theoretical grounding. By contrast, the standard theoretical analysis of machine learning admits no dependence on the imbalance of classes at all. The basic theorems of statistical learning establish the number of examples needed to estimate the accuracy of a classifier as a function of its complexity (VC-dimension) and the confidence desired; the class imbalance does not enter these formulas anywhere. In this work, we consider classifier performance in terms of precision and recall, measures widely suggested as more appropriate to the classification of imbalanced data. We observe that whenever the precision is moderately large, the worse of the precision and recall is within a small constant factor of the accuracy weighted by the class imbalance. A corollary of this observation is that a larger number of examples is necessary and sufficient to address class imbalance, a finding we also illustrate empirically.
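The gap between plain accuracy and precision/recall on imbalanced data is easy to demonstrate numerically. The confusion-matrix counts below are made up for illustration: with a 1% positive class, a classifier that misses half the positives still reports over 99% accuracy:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, and plain accuracy from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# 10,000 examples, 100 of them positive (1% minority class).
# The classifier finds 50 positives and raises 20 false alarms.
p, r, a = metrics(tp=50, fp=20, fn=50, tn=9880)
```

Here accuracy is 99.3% while recall is only 0.5, which is exactly the regime where precision/recall, rather than raw accuracy, carries the information about minority-class performance.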


1998 ◽  
Vol 06 (01n02) ◽  
pp. 269-289 ◽  
Author(s):  
Purnima Ratilal ◽  
Peter Gerstoft ◽  
Joo Thiam Goh ◽  
Keng Pong Yeo

Estimation of the integral geoacoustic properties of the sea floor, based on real data drawn from a shallow-water site, is presented. Two independent inversion schemes are used to deduce these properties. The first is matched-field processing of the pressure field on a vertical line array due to a projected source. The second is the inversion of ambient noise on a vertical array. Matched-field processing has been shown to be successful in the inversion of high-quality field data. Here, we show that it is also feasible with a more practical and less expensive data-collection scheme. It is also shown that low-frequency inversion is more robust to variation and fluctuation in the propagating medium, whereas high frequencies are more sensitive to mismatches in a varying medium. A comparison is made between the estimates obtained from the two techniques, and also with available historical data from the trial site.


2020 ◽  
Author(s):  
Nader Alharbi

Abstract: This research presents a modified Singular Spectrum Analysis (SSA) approach for the analysis of COVID-19 in Saudi Arabia. We proposed and developed this approach in [1–3] for the separability and grouping steps of SSA, which play an important role in reconstruction and forecasting. The modified SSA mainly enables us to identify the number of interpretable components required for separability, signal extraction and noise reduction. The approach was previously examined using various simulated and real data sets with different structures and signal-to-noise ratios. In this study we examine its capability in analysing COVID-19 data. We then use Vector SSA to predict new data points and the peak of the pandemic. The results show that the approach is a promising one for decomposing and forecasting the daily cases of COVID-19 in Saudi Arabia.
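The basic SSA pipeline that the modified approach builds on — embedding into a trajectory matrix, SVD, grouping of leading components, and diagonal averaging — can be sketched as follows. This is plain SSA, not the authors' modified grouping criterion, and the window length, component count, and series are illustrative assumptions:

```python
import numpy as np

def ssa_reconstruct(series, window, n_components):
    """Basic SSA: embed, SVD, keep leading components, then
    diagonal-average (Hankelize) back to a series."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix of lagged copies of the series.
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Diagonal averaging: average all matrix entries that map to the
    # same time index.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        recon[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return recon / counts

# A sinusoid (trajectory rank 2) plus noise: keeping the two leading
# components recovers the clean signal.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
clean = 5 * np.sin(2 * np.pi * 3 * t)
noisy = clean + rng.normal(0, 1.0, t.size)
smooth = ssa_reconstruct(noisy, window=40, n_components=2)
err = float(np.sqrt(np.mean((smooth - clean) ** 2)))
```

Choosing how many components to keep, and which to group together, is exactly the step the modified SSA of the abstract automates.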


2014 ◽  
Vol 599-601 ◽  
pp. 1021-1024
Author(s):  
Xue Mei Du ◽  
Wei Lei ◽  
Nai Jia Wang ◽  
Jing Yu Ding

Traditional reliability testing of electronic products involves many test items, long cycles, and rising costs. This paper analyzes the feasibility of combining the Taguchi method with reliability testing. The process of optimizing the reliability test is then proposed, with emphasis on the use of Taguchi tools such as the orthogonal array and the signal-to-noise ratio (SNR). Finally, the process is applied to the reliability test of the shell of a new type of portable electronic product developed by C Company, verifying that the new process is operable and can reduce the number of tests, time, and cost while preserving the effectiveness of the reliability test compared with the traditional approach.
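The two Taguchi ingredients named above — an orthogonal array to cut the number of test runs and a signal-to-noise ratio to score each run — can be sketched as follows. The L4 array is the standard two-level design; the time-to-failure replicates are hypothetical, not C Company's data:

```python
import math

def snr_larger_is_better(ys):
    """Taguchi SNR (dB) for larger-the-better responses,
    e.g. time to failure: -10*log10(mean(1/y^2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

def snr_smaller_is_better(ys):
    """Taguchi SNR (dB) for smaller-the-better responses,
    e.g. defect counts: -10*log10(mean(y^2))."""
    return -10 * math.log10(sum(y ** 2 for y in ys) / len(ys))

# L4 orthogonal array: 3 two-level factors covered in 4 runs
# instead of the full-factorial 2^3 = 8.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]

# Hypothetical time-to-failure replicates (hours) for each L4 run.
runs = [(520.0, 540.0), (610.0, 598.0), (480.0, 455.0), (655.0, 640.0)]
snrs = [snr_larger_is_better(ys) for ys in runs]
best_run = snrs.index(max(snrs))   # factor levels L4[best_run] win
```

Analyzing the per-factor averages of these SNRs is what lets the reduced design rank factor levels without running every combination.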


1996 ◽  
Vol 13 (3) ◽  
pp. 207-211 ◽  
Author(s):  
Daya M. Rawson ◽  
Jeremy Bailey ◽  
Paul J. Francis

Abstract: The use of artificial neural networks (ANNs) as a classifier of digital spectra is investigated. Using both simulated and real data, it is shown that neural networks can be trained to discriminate between the spectra of different classes of active galactic nucleus (AGN) with realistic sample sizes and signal-to-noise ratios. By working in the Fourier domain, neural nets can classify objects without knowledge of their redshifts.
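The reason Fourier-domain inputs can let a network ignore redshift is that a redshift is multiplicative in wavelength, hence a pure translation on a log-wavelength grid, and the Fourier amplitude spectrum is invariant under translation. A small sketch of such shift-insensitive features (the spectrum here is a synthetic single emission line, not AGN data):

```python
import numpy as np

def fourier_features(flux, n_coeffs=16):
    """Shift-insensitive features: leading FFT amplitudes of the
    mean-subtracted flux, normalized so overall scale drops out."""
    amp = np.abs(np.fft.rfft(flux - np.mean(flux)))[:n_coeffs]
    return amp / (np.linalg.norm(amp) + 1e-12)

grid = np.arange(512)                                 # log-wavelength bins
spectrum = np.exp(-0.5 * ((grid - 200) / 8.0) ** 2)   # one emission line
shifted = np.roll(spectrum, 57)                       # "redshifted" copy
f0 = fourier_features(spectrum)
f1 = fourier_features(shifted)                        # identical features
```

Feature vectors like `f0` would then be the inputs to the ANN classifier; the network itself is not sketched here.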

