Time-Domain Data Fusion Using Weighted Evidence and Dempster–Shafer Combination Rule: Application in Object Classification

Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5187 ◽  
Author(s):  
Md Nazmuzzaman Khan ◽  
Sohel Anwar

To fuse time-domain data using the Dempster–Shafer (DS) combination rule, an eight-step algorithm with a novel entropy function is proposed. The algorithm sequentially combines time-domain data. Simulation results showed that this method successfully captures changes (dynamic behavior) in time-domain object classification, and that it has better anti-disturbance ability and transition behavior than other methods in the literature. As an example, a convolutional neural network (CNN) is trained to classify three different types of weeds. Precision and recall from the CNN's confusion matrix are used to update the basic probability assignment (BPA), which captures the classification uncertainty. Real data of classified weeds from a single sensor is used to test the time-domain data fusion. The proposed method successfully filters noise (reducing sudden changes, yielding smoother curves) and fuses conflicting information from the video feed. The performance of the algorithm can be tuned between robustness and fast response using a single parameter, the number of time steps (t_s).
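The DS combination rule at the core of the fusion step can be sketched as follows. This is a minimal illustration of Dempster's rule for two basic probability assignments over the same frame of discernment; the class names and mass values are hypothetical and not taken from the paper, and the weighted-evidence and entropy steps of the eight-step algorithm are omitted.

```python
from itertools import product

def ds_combine(m1, m2):
    """Combine two BPAs (dicts mapping frozenset -> mass) by Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    norm = 1.0 - conflict        # renormalize by the non-conflicting mass
    return {s: m / norm for s, m in combined.items()}

# Two consecutive sensor readings over three hypothetical weed classes A, B, C
A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.3, frozenset("ABC"): 0.1}   # frozenset("ABC") = full ignorance
m2 = {A: 0.5, B: 0.4, frozenset("ABC"): 0.1}
fused = ds_combine(m1, m2)
```

Because both readings favor class A, the fused mass on A exceeds either input mass, which is the reinforcement behavior the sequential combination exploits.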

2018 ◽  
Vol 12 (7-8) ◽  
pp. 76-83
Author(s):  
E. V. KARSHAKOV ◽  
J. MOILANEN

The advantage of combined processing of the frequency-domain and time-domain data provided by the EQUATOR system is discussed. The heliborne complex has a towed transmitter and, raised above it on the same cable, a towed receiver. The excitation signal contains both pulsed and harmonic components. In effect, two independent transmitters operate in the system: one is a conventional time-domain transmitter with a half-sinusoidal pulse and a small "cut" on the falling edge, and the other is a classical frequency-domain transmitter operating at several specially selected frequencies. The received signal is first processed with a direct Fourier transform, with high-Q detection at all significant frequencies. Then, in the spectral domain, the spectra of the two sounding signals are converted to the single spectrum of an ideal transmitter, and an inverse Fourier transform returns the data to the time domain. Because the detection of spectral components is done in a frequency band of several Hz, the receiver can effectively suppress all out-of-band noise. The detection bandwidth is several dozen times smaller than the frequency interval between the harmonics; it turns out that, to achieve the same quality of ground-response measurement without out-of-band suppression, the moment of the airborne transmitting system would have to be several dozen times higher. Data obtained from a homogeneous half-space model, a two-layered model, and a horizontally layered medium model are considered. Time-domain data make it easier to detect a conductor within a relative insulator at greater depths, while frequency-domain data give more detailed information about the subsurface. These conclusions are illustrated by processing survey data acquired in the Republic of Rwanda in 2017. Simultaneous inversion of frequency-domain and time-domain data can significantly improve the quality of interpretation.
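The spectral-conversion idea described above can be sketched in a few lines: transform the measured response to the frequency domain, divide out the actual source spectrum, multiply by an idealized source spectrum, and transform back. The toy signals, the pure-Python DFT, and the delta-pulse "ideal transmitter" below are all illustrative assumptions, not the EQUATOR processing chain.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def convert_to_ideal(measured, actual_src, ideal_src, eps=1e-12):
    """Deconvolve the real source spectrum and re-convolve with the ideal one."""
    M, A, I = dft(measured), dft(actual_src), dft(ideal_src)
    C = [m * i / a if abs(a) > eps else 0.0 for m, a, i in zip(M, A, I)]
    return [v.real for v in idft(C)]

def circ_conv(a, b):
    N = len(a)
    return [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]

# Toy example: the measured signal is the ground response convolved with a
# two-sample source pulse; converting to a delta "ideal transmitter" should
# recover the ground response itself.
ground = [0.0, 0.8, 0.4, 0.2, 0.1, 0.0, 0.0, 0.0]
pulse = [1.0, 0.6, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
delta = [1.0] + [0.0] * 7
measured = circ_conv(ground, pulse)
recovered = convert_to_ideal(measured, pulse, delta)
```

The `eps` guard mirrors the practical point in the abstract: the conversion is only stable at frequencies where the actual source spectrum carries significant energy.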


Geophysics ◽  
2021 ◽  
pp. 1-50
Author(s):  
German Garabito ◽  
José Silas dos Santos Silva ◽  
Williams Lima

In land seismic data processing, the prestack time migration (PSTM) image remains the standard imaging output, but a reliable migrated image of the subsurface depends on the accuracy of the migration velocity model. We have adopted two new algorithms for time-domain migration velocity analysis based on wavefield attributes of the common-reflection-surface (CRS) stack method. These attributes, extracted from multicoverage data, were successfully applied to build the velocity model in the depth domain through tomographic inversion of the normal-incidence-point (NIP) wave. However, there is no practical and reliable method for determining an accurate and geologically consistent time-migration velocity model from these CRS attributes. We introduce an interactive method to determine the migration velocity model in the time domain, based on the application of NIP wave attributes and the CRS stacking operator for diffractions, to generate synthetic diffractions on the reflection events of the zero-offset (ZO) CRS stacked section. In the ZO data with diffractions, poststack time migration (post-STM) is applied with a set of constant velocities, and the migration velocities are then selected through a focusing analysis of the simulated diffractions. We also introduce an algorithm to automatically calculate the migration velocity model from the CRS attributes picked for the main reflection events in the ZO data. We determine the precision of our diffraction-focusing velocity analysis and automatic velocity calculation algorithms using two synthetic models. We also apply them to real 2D land data with low quality and low fold to estimate the time-domain migration velocity model. The velocity models obtained through our methods are validated by applying them in the Kirchhoff PSTM of the real data, in which the velocity model from the diffraction-focusing analysis provided significant improvements in the quality of the migrated image compared to the legacy image and to the image obtained using the automatically calculated velocity model.
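The constant-velocity focusing scan at the heart of the diffraction analysis can be sketched as follows. A single synthetic diffraction hyperbola is placed on a zero-offset grid; stacking along trial hyperbolae then peaks at the true velocity. The grid spacing, apex location, and velocity scan range are illustrative assumptions, and the sketch stands in for a full migration with a simple hyperbolic stack.

```python
import math

def hyperbola_t(x, x0, t0, v):
    """Zero-offset diffraction traveltime for an apex at (x0, t0) and velocity v."""
    return math.sqrt(t0 ** 2 + (2.0 * (x - x0) / v) ** 2)

# Build a ZO section containing one diffraction at apex (x0=0, t0=1.0 s)
dx, dt = 25.0, 0.004
nx, nt = 81, 500
v_true = 2000.0
xs = [(i - nx // 2) * dx for i in range(nx)]
section = [[0.0] * nt for _ in range(nx)]
for ix, x in enumerate(xs):
    it = round(hyperbola_t(x, 0.0, 1.0, v_true) / dt)
    if it < nt:
        section[ix][it] = 1.0

def focus_energy(v):
    """Stack along the trial hyperbola; the stack peaks when v matches v_true."""
    s = 0.0
    for ix, x in enumerate(xs):
        it = round(hyperbola_t(x, 0.0, 1.0, v) / dt)
        if it < nt:
            s += section[ix][it]
    return s

best = max(range(1500, 2600, 100), key=lambda v: focus_energy(float(v)))
```

Scanning a set of constant velocities and keeping the one that maximizes the focusing measure is the same selection principle the abstract describes for the simulated diffractions.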


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Jia-Rou Liu ◽  
Po-Hsiu Kuo ◽  
Hung Hung

Large-p-small-n datasets are commonly encountered in modern biomedical studies. To detect differences between two groups, conventional methods fail to apply, due to the instability of variance estimates in the t-test and the high proportion of tied values in AUC (area under the receiver operating characteristic curve) estimates. The significance analysis of microarrays (SAM) may also not be satisfactory, since its performance is sensitive to the tuning parameter, whose selection is not straightforward. In this work, we propose a robust rerank approach to overcome the above-mentioned difficulties. In particular, we obtain a rank-based statistic for each feature based on the concept of "rank-over-variable." Techniques of "random subset" and "rerank" are then iteratively applied to rank features, and the leading features are selected for further study. The proposed rerank approach is especially applicable to large-p-small-n datasets. Moreover, it is insensitive to the selection of tuning parameters, an appealing property for practical implementation. Simulation studies and real-data analysis of pooling-based genome-wide association (GWA) studies demonstrate the usefulness of our method.
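A base rank-over-variable statistic of the kind described above can be sketched as follows: rank each feature's values across all samples, then score the feature by the gap in mean rank between the two groups. The iterative random-subset/rerank loop of the paper is omitted, and the tie handling here (no midranks) is a simplifying assumption.

```python
def rank(values):
    """1-based ranks of a feature's values across samples (ties not midranked)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r + 1.0
    return ranks

def rank_gap(feature, labels):
    """Absolute difference in mean rank between the two groups (labels 0/1)."""
    r = rank(feature)
    g1 = [ri for ri, l in zip(r, labels) if l == 1]
    g0 = [ri for ri, l in zip(r, labels) if l == 0]
    return abs(sum(g1) / len(g1) - sum(g0) / len(g0))
```

Because the statistic depends only on ranks, it avoids the variance-estimation instability of the t-test that the abstract points to.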


1988 ◽  
Vol 42 (5) ◽  
pp. 715-721 ◽  
Author(s):  
Francis R. Verdun ◽  
Carlo Giancaspro ◽  
Alan G. Marshall

A frequency-domain Lorentzian spectrum can be derived from the Fourier transform of a time-domain exponentially damped sinusoid of infinite duration. Remarkably, it has been shown that even when such a noiseless time-domain signal is truncated to zero amplitude after a finite observation period, one can determine the correct frequency of its corresponding magnitude-mode spectral peak maximum by fitting as few as three spectral data points to a magnitude-mode Lorentzian spectrum. In this paper, we show how the accuracy of such a procedure depends upon the ratio of time-domain acquisition period to exponential damping time constant, number of time-domain data points, computer word length, and number of time-domain zero-fillings. In particular, we show that extended zero-filling (e.g., a “zoom” transform) actually reduces the accuracy with which the spectral peak position can be determined. We also examine the effects of frequency-domain random noise and roundoff errors in the fast Fourier transformation (FFT) of time-domain data of limited discrete data word length (e.g., 20 bit/word at single and double precision). Our main conclusions are: (1) even in the presence of noise, a three-point fit of a magnitude-mode spectrum to a magnitude-mode Lorentzian line shape can offer an accurate estimate of peak position in Fourier transform spectroscopy; (2) the results can be more accurate (by a factor of up to 10) when the FFT processor operates with floating-point (preferably double-precision) rather than fixed-point arithmetic; and (3) FFT roundoff errors can be made negligible by use of sufficiently large (> 16 K) data sets.
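The three-point fit described above can be sketched directly. For a magnitude-mode Lorentzian M(f) = A / sqrt(1 + ((f - f0)/w)^2), the quantity 1/M^2 is exactly quadratic in f, so three samples near the peak determine f0 as the vertex of that parabola. The sample points and parameter values below are illustrative, not data from the paper.

```python
import math

def lorentzian_mag(f, f0, w, amp):
    """Magnitude-mode Lorentzian centered at f0 with half-width parameter w."""
    return amp / math.sqrt(1.0 + ((f - f0) / w) ** 2)

def three_point_peak(f1, f2, f3, m1, m2, m3):
    """Peak position from three magnitude samples: vertex of the parabola 1/M^2."""
    y1, y2, y3 = 1 / m1 ** 2, 1 / m2 ** 2, 1 / m3 ** 2
    denom = (f1 - f2) * (f1 - f3) * (f2 - f3)
    a = (f3 * (y2 - y1) + f2 * (y1 - y3) + f1 * (y3 - y2)) / denom
    b = (f3 ** 2 * (y1 - y2) + f2 ** 2 * (y3 - y1) + f1 ** 2 * (y2 - y3)) / denom
    return -b / (2 * a)

# Three noiseless samples near a peak at f0 = 100.0 (arbitrary units)
f1, f2, f3 = 99.0, 100.5, 101.5
ms = [lorentzian_mag(f, 100.0, 2.0, 5.0) for f in (f1, f2, f3)]
peak = three_point_peak(f1, f2, f3, *ms)
```

With noiseless data the recovery is exact up to floating-point error, which is consistent with the paper's observation that arithmetic precision, not the three-point scheme itself, limits accuracy.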


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3521 ◽  
Author(s):  
Funa Zhou ◽  
Po Hu ◽  
Shuai Yang ◽  
Chenglin Wen

Rotating machinery often suffers from a type of fault whose feature is significant when extracted in the frequency domain but insignificant when extracted in the time domain. For this type of fault, a deep-learning-based fault diagnosis method developed in the frequency domain can reach high accuracy but not real-time performance, whereas one developed in the time domain achieves real-time diagnosis at lower accuracy. In this paper, a multimodal-feature-fusion-based deep learning method for accurate and real-time online diagnosis of rotating machinery is proposed. The proposed method can directly extract the potential frequency of abnormal features involved in the time-domain data. Firstly, multimodal features corresponding to the original data, the slope data, and the curvature data are extracted by three separate deep neural networks. Then, multimodal feature fusion is developed to obtain a new fused feature that can characterize the potential frequency feature involved in the time-domain data. Lastly, the fused feature is used as the input of a Softmax classifier to achieve real-time online diagnosis of frequency-type faults. A simulation experiment and a case study of bearing fault diagnosis confirm the high efficiency of the proposed method.
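The three input modalities named above can be sketched as simple finite differences of the raw time-domain window: the original samples, their slope (first difference), and their curvature (second difference). The deep networks and the fusion layer of the paper are not reproduced here; this only shows how the three modalities relate.

```python
def slope(x):
    """First difference of a sampled window: the 'slope data' modality."""
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def curvature(x):
    """Second difference (difference of slopes): the 'curvature data' modality."""
    s = slope(x)
    return [s[i + 1] - s[i] for i in range(len(s) - 1)]

window = [0, 1, 4, 9]          # toy time-domain window
modalities = (window, slope(window), curvature(window))
```

Slope and curvature emphasize rate-of-change structure in the window, which is one plausible reading of how the fused feature comes to carry frequency-type information that the raw samples alone show weakly.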


2018 ◽  
Vol 8 (1) ◽  
pp. 44
Author(s):  
Lutfiah Ismail Al turk

In this paper, a Nonhomogeneous Poisson Process (NHPP) reliability model based on the two-parameter Log-Logistic (LL) distribution is considered. The model's essential characteristics are derived and represented graphically. The parameters of the model are estimated by the Maximum Likelihood (ML) and Non-linear Least Squares (NLS) estimation methods for the case of time-domain data. An application showing the flexibility of the considered model is conducted based on five real data sets and using three evaluation criteria. We hope this model will serve as an alternative to other useful reliability models for describing real data in the reliability engineering area.
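A finite-failure NHPP built on a log-logistic CDF can be sketched as follows. Here the mean value function is assumed to be m(t) = N · F(t) with F(t) = (t/a)^b / (1 + (t/a)^b); the parameter names, and the choice of the finite-failure form, are assumptions for illustration rather than the exact formulation of the paper.

```python
def ll_cdf(t, a, b):
    """Two-parameter log-logistic CDF with scale a > 0 and shape b > 0."""
    r = (t / a) ** b
    return r / (1.0 + r)

def nhpp_mean(t, n_total, a, b):
    """Expected cumulative number of failures by time t (finite-failure NHPP)."""
    return n_total * ll_cdf(t, a, b)

# Toy evaluation: expected failures at the scale parameter equal half the total
expected_at_scale = nhpp_mean(3.0, 100.0, 3.0, 2.0)
```

Under this form, m(a) = N/2 by construction, and m(t) increases monotonically toward N, matching the saturating behavior expected of a finite-failure reliability model.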

