ALGORITHM FOR DETERMINING FREQUENCY OF A HARMONIC SIGNAL USING STOCHASTIC SAMPLING

Author(s):  
Irina Nikolaevna Zaitseva ◽  
Vitaly Nikolaevich Ugol'kov

The paper deals with the development of an algorithm for determining the frequency of harmonic signals by a probabilistic-statistical method. The main feature of the algorithm is a short access time to the signal under study, much shorter than the signal period, achieved through three integrated sample collections with digital processing. The instantaneous values of the investigated signal in each collection are obtained by stochastic discretization in time according to the uniform distribution law. The main advantages of the algorithm are the short access time and the high accuracy of frequency measurement, which is essential for infralow-frequency signals whose periods are measured in minutes, hours, or days. A numerical experiment has been performed to evaluate the error in determining the frequency of such signals as a function of the accuracy with which they are sampled by real analog-to-digital converters. The paper shows that the frequency error of the developed algorithm amounts to a few hundredths of a percent and depends only weakly on the level-quantization accuracy of the signal; this holds for conversion by 6- to 16-bit analog-to-digital converters. The algorithm may find practical use in the radio-engineering processing of infralow-frequency signals in acoustics, hydroacoustics, seismic acoustics, and underwater and underground communication.
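
The abstract above does not reproduce the algorithm itself, so the following Python sketch only illustrates the ingredients it names: sampling instants drawn from a uniform distribution over a window much shorter than the signal period, quantization by a 6- to 16-bit ADC, and a check of how the bit width affects a frequency estimate. The least-squares fit used here is a generic placeholder, not the authors' processing of three integrated sample collections, and all numbers are invented for the example.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

f_true, amp, phi = 0.01, 1.0, 0.3          # infralow-frequency test signal, Hz (period 100 s)
t_obs = 20.0                               # observation window, much shorter than the period
t = np.sort(rng.uniform(0.0, t_obs, 300))  # uniform-law stochastic sampling instants

def quantize(x, bits, full_scale=1.2):
    """Uniform quantizer emulating an N-bit ADC."""
    lsb = 2.0 * full_scale / 2 ** bits
    return np.clip(np.round(x / lsb) * lsb, -full_scale, full_scale)

def model(t, f, a, p):
    return a * np.sin(2.0 * np.pi * f * t + p)

for bits in (6, 10, 16):
    x = quantize(model(t, f_true, amp, phi), bits)
    popt, _ = curve_fit(model, t, x, p0=[0.012, 1.0, 0.0])   # placeholder estimator
    err = abs(popt[0] - f_true) / f_true * 100.0
    print(f"{bits:2d}-bit ADC: f_hat = {popt[0]:.6f} Hz, error = {err:.4f} %")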

2020 ◽  
Vol 17 (36) ◽  
pp. 213-222
Author(s):  
Irina N. Zaitseva

Determining the parameters of a harmonic signal is one of the most common types of measurement in radio engineering, communication engineering, electronics, and automation systems, so the research and development of new methods for measuring harmonic-signal parameters remain relevant. This work studied the errors of an algorithm for determining the phase shift of harmonic signals using stochastic sampling. The relevance of the study is dictated by the growing requirements for the accuracy and speed of measuring equipment and by the need to shorten the time it takes to decide on the presence of a signal during a search, which makes statistically optimal methods of measuring signal parameters necessary. The aim of the work was to develop the algorithm and to estimate its errors with a view to its practical implementation for processing infra-low-frequency radio signals under stochastic sampling. The instantaneous values of the investigated signals in each sample are obtained by stochastic sampling in time according to the uniform distribution law. Mathematical modeling of the algorithm errors in determining the phase shift was carried out for signals containing harmonics, as a function of the harmonic content relative to the first (main) harmonic, with sampling by real analog-to-digital converters. The obtained values of the algorithm error in determining the phase shift of the main harmonic lie within an acceptable range (30%), and for harmonic amplitudes up to the 3rd harmonic within 20%. The results of the computing experiment on estimating the algorithm errors confirm that high accuracy in determining the phase shift of harmonic signals can be obtained. The algorithm can be used with sufficient accuracy for processing infra-low-frequency radio signals in acoustics, hydroacoustics, seismic acoustics, and underwater and underground communication.
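
The paper's own phase-shift algorithm is not reproduced in the abstract; the sketch below only shows a generic way to recover a phase shift from stochastically sampled data by projecting each channel onto a sin/cos basis, with a 3rd-harmonic contamination added to mimic the harmonic content the study varies. Frequencies, amplitudes, and the estimator itself are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
f = 0.05                                   # known signal frequency, Hz
dphi_true = np.deg2rad(37.0)               # phase shift to be recovered

t = np.sort(rng.uniform(0.0, 60.0, 500))   # uniform-law stochastic sampling instants
u = np.sin(2 * np.pi * f * t)                       # reference channel
v = np.sin(2 * np.pi * f * t + dphi_true)           # shifted channel
v += 0.2 * np.sin(3 * 2 * np.pi * f * t)            # 20% third-harmonic contamination

def phase(sig, t, f):
    """Least-squares projection of sig onto sin/cos at frequency f; returns the phase."""
    basis = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
    a, b = np.linalg.lstsq(basis, sig, rcond=None)[0]
    return np.arctan2(b, a)

dphi_hat = phase(v, t, f) - phase(u, t, f)
print(f"estimated phase shift: {np.rad2deg(dphi_hat):.2f} deg (true value 37.00 deg)")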


2019 ◽  
Vol 214 ◽  
pp. 02006 ◽  
Author(s):  
Nico Madysa

The design of readout electronics for the LAr calorimeters of the ATLAS detector to be operated at the future High-Luminosity LHC (HL-LHC) requires a detailed simulation of the full readout chain in order to find optimal solutions for the analog and digital processing of the detector signals. Due to the long duration of the LAr calorimeter pulses relative to the LHC bunch crossing time, out-of-time signal pileup needs to be taken into account. For this purpose, the simulation framework AREUS has been developed. It models analog-to-digital conversion, gain selection, and digital signal processing at bit precision, including digitization noise and detailed electronics effects. Trigger and object reconstruction algorithms are taken into account in the optimization process. The software implementation of AREUS, the concepts of its main functional blocks, as well as optimization considerations will be presented. Various approaches to introduce parallelism into AREUS will be compared against each other.
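
AREUS itself is ATLAS-internal software, so the toy below is not the framework; it only illustrates, under invented numbers, the two effects the abstract highlights: detector pulses that are long compared with the 25 ns bunch-crossing spacing and therefore overlap (out-of-time pileup), and fixed-point quantization in the ADC. The pulse shape is deliberately simplified (the real LAr pulse is bipolar and much longer), and the 12-bit full scale is an assumption.

import numpy as np

BC = 25e-9                                    # LHC bunch-crossing spacing, s
n_bc = 40
t = np.arange(n_bc) * BC

def pulse(t, t0, tau=50e-9):
    """Simplified unipolar pulse standing in for the detector response."""
    dt = t - t0
    return np.where(dt >= 0, (dt / tau) * np.exp(1 - dt / tau), 0.0)

rng = np.random.default_rng(2)
energies = rng.exponential(0.05, n_bc)         # in-time and out-of-time deposits
energies[10] = 1.0                             # the "signal" crossing
analog = sum(E * pulse(t, i * BC) for i, E in enumerate(energies))

lsb = 1.0 / 2 ** 12                            # assumed 12-bit ADC over a unit full scale
digital = np.round(analog / lsb).astype(int)
print(digital[8:16])                           # ADC codes around the signal crossing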


2017 ◽  
Vol 26 (2) ◽  
pp. 118
Author(s):  
Jelena Dikun ◽  
Emel Onal

The aim of this paper is to point out the advantages of using time-frequency analysis in the digital processing of waveforms recorded in high-voltage impulse tests. Impulse voltage tests are essential for inspecting and testing the insulation integrity of high-voltage apparatus. Generated impulse currents, in turn, are used for various test applications such as the investigation of high-current effects and electromagnetic interference (EMI) testing. The recorded voltage and current waveforms usually contain interference originating from different sources. This interference has to be removed from the original impulse data in order to evaluate the waveform characteristics precisely. When the interference level is high enough, it may not be possible to distinguish the signal parameters in the recorded data. Conventional filtering methods are of little use against some types of interference, such as white noise; in that case, time-frequency filtering methods may be necessary. In this study, wavelet analysis, a powerful time-frequency signal-processing tool, is used to identify the noise in impulse current and voltage records. The noise sources can then be determined by the short-time Fourier transform, and a coherence approach is used to determine the bandwidth of the noise.
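
As a rough illustration of the kind of wavelet-based denoising described above, the sketch below applies a PyWavelets decomposition with soft thresholding to a synthetic double-exponential impulse corrupted by white noise. The wavelet family, decomposition level, and universal-threshold rule are illustrative choices, not the authors' settings.

import numpy as np
import pywt

t = np.linspace(0.0, 100e-6, 4096)                      # 100 us record
clean = np.exp(-t / 68e-6) - np.exp(-t / 0.4e-6)        # 1.2/50 us-like impulse shape
noisy = clean + 0.05 * np.random.default_rng(3).standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db8", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from finest details (MAD)
thr = sigma * np.sqrt(2 * np.log(noisy.size))            # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db8")[: t.size]

print(f"residual rms after denoising: {np.std(denoised - clean):.4f}")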


This paper analyzes the digital pre-distortion (DPD) linearization technique when a low-precision analog-to-digital converter (ADC) is used. Because of its non-linear behavior, the output of a power amplifier exhibits spurious emissions, spectral regrowth, and intermodulation distortion (IMD) products, so linearization becomes mandatory to preserve the amplifier's performance. Digital pre-distortion trains on the distorted output of the power amplifier and generates the inverse of the amplifier's characteristic; cascading the two yields a linear response. In practical systems, the power-amplifier output must pass through an analog-to-digital converter for digital processing, and a low-resolution ADC degrades the feedback signal and hence the DPD performance. At the same time, a low-resolution ADC reduces the computational complexity of the digital processing, lowers power consumption, and costs less because less hardware is required. The aim of this work is to find how far the ADC resolution can be reduced without significantly degrading the DPD performance. The paper evaluates two DPD systems, full-band DPD and sub-band DPD; simulations show that a 1-bit ADC can be used reliably for full-band DPD, while for sub-band DPD an ADC of one to four bits can be used.
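
The trade-off studied above can be sketched numerically as follows: a memoryless polynomial pre-distorter is trained (indirect learning, ordinary least squares) on a PA output that has first been quantized by a low-resolution feedback ADC. The toy PA model, the signal, the quantizer, and the bit widths are assumptions made for the example; the paper's full-band and sub-band DPD architectures are not reproduced here.

import numpy as np

rng = np.random.default_rng(4)
x = (rng.standard_normal(5000) + 1j * rng.standard_normal(5000)) / np.sqrt(2)
x *= 0.5                                                    # input back-off

def pa(u):
    """Toy odd-order nonlinearity standing in for the power amplifier."""
    return u - 0.15 * u * np.abs(u) ** 2 + 0.03 * u * np.abs(u) ** 4

def adc(u, bits):
    """Uniform mid-rise quantizer applied to I and Q (full scale +/-1, illustrative)."""
    lsb = 2.0 / 2 ** bits
    def q(r):
        return np.clip((np.floor(r / lsb) + 0.5) * lsb, -1.0 + lsb / 2, 1.0 - lsb / 2)
    return q(u.real) + 1j * q(u.imag)

def basis(u, order=5):
    """Odd-order memoryless polynomial basis u, u|u|^2, u|u|^4."""
    return np.column_stack([u * np.abs(u) ** (2 * k) for k in range((order + 1) // 2)])

for bits in (1, 4, 12):
    y = adc(pa(x), bits)                                    # quantized feedback observation
    coeffs = np.linalg.lstsq(basis(y), x, rcond=None)[0]    # post-inverse copied as pre-inverse
    x_pd = basis(x) @ coeffs                                # apply the pre-distorter
    nmse = np.linalg.norm(pa(x_pd) - x) / np.linalg.norm(x)
    print(f"{bits:2d}-bit feedback ADC: NMSE after DPD = {20 * np.log10(nmse):.1f} dB")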


10.12737/7654 ◽  
2014 ◽  
Vol 3 (4) ◽  
pp. 73-87 ◽  
Author(s):  
D. Berestin ◽  
M. Zimin ◽  
Gavrilenko ◽  
...

The paper proposes a method for determining whether an object belongs to the class of chaotic systems on the basis of structural risk minimization. Examples are presented that demonstrate the effectiveness of the methodology on model data. The methodology is based on Chebyshev polynomials, which make it possible to choose, in a substantiated way, between the uniform distribution law and some other probability density. The main feature of chaotic systems is a uniform distribution law, which is typical of systems of the third type that underlie the modern theory of chaos and self-organization. It is significant that in the theory of chaos and self-organization the distribution over short time intervals τ is always non-uniform; however, the system cannot be kept in this state for a long time.
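
One ingredient mentioned above, choosing between the uniform law and another density with the help of Chebyshev polynomials, can be sketched as below: a low-degree Chebyshev series is fitted to an empirical density and compared with the uniform value. This is only a reading of the abstract; the structural-risk-minimization machinery of the paper is not reproduced, and the sample and degree are invented.

import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(5)
sample = rng.uniform(-1.0, 1.0, 5000)           # data drawn from a uniform law on [-1, 1]

edges = np.linspace(-1.0, 1.0, 41)              # histogram-based density estimate
density, _ = np.histogram(sample, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

coef = C.chebfit(centers, density, deg=6)       # low-degree Chebyshev fit of the density
fitted = C.chebval(centers, coef)

# deviation of the fitted density from the uniform value 0.5 on [-1, 1]
print(f"max |fitted - uniform| = {np.max(np.abs(fitted - 0.5)):.3f}")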


Author(s):  
A.A. Kichigin ◽  
B.I. Shakhtarin

Reduction of the leakage-signal effect in a short-range radar system is investigated. An LFM signal is used as the probe signal. The structure of the difference-frequency signal and of its operating harmonic is shown for the spectral method of processing the useful signal. To reduce the leakage-signal effect, it is proposed to subsample the useful signal with a variable delay of the clock signal of the analog-to-digital converter. The dependence of the level of the parasitic component of the operating harmonic, caused by the leakage signal, on the delay of the converter's clock signal is given. The clock signal of the analog-to-digital converter is generated by a microcontroller timer initialized in pulse-width-modulation mode; the required clock delay is selected by changing the compare threshold of the timer. For testing the algorithm, a K-LC5 microwave module and a prototype board with an STM32F407 microcontroller are used.
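
The effect exploited above can be illustrated numerically, without the authors' STM32/K-LC5 hardware: a low-frequency leakage component contributes a parasitic term to the operating-harmonic bin of the difference-frequency spectrum, and the size of that contribution varies with the delay of the ADC clock, so a delay can be chosen that minimizes it. All frequencies and amplitudes below are invented, and this is only a toy model of the spectral picture, not the paper's subsampling scheme.

import numpy as np

fs, N = 40e3, 256                        # sampling rate and record length
k_op = 16                                # operating-harmonic bin (target beat tone)
f_op = k_op * fs / N                     # 2.5 kHz target beat frequency
f_leak, a_leak = 300.0, 2.0              # strong low-frequency leakage component

def op_bin_level(delay):
    """Magnitude of the operating-harmonic DFT bin for a given ADC clock delay."""
    t = delay + np.arange(N) / fs        # ADC clock shifted by 'delay' seconds
    x = np.sin(2 * np.pi * f_op * t) + a_leak * np.sin(2 * np.pi * f_leak * t)
    return np.abs(np.fft.rfft(x)[k_op])

delays = np.linspace(0.0, 1.0 / f_leak, 50)
levels = [op_bin_level(d) for d in delays]
best = delays[int(np.argmin(levels))]
print(f"operating-bin level varies {min(levels):.2f} .. {max(levels):.2f}; "
      f"minimum at delay {best * 1e3:.2f} ms")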


Author(s):  
K. K. Christenson

The detectors on modern electron microscopes often have a very high bandwidth, which makes their outputs appear noisy on a short time scale. This noise is not a problem when photographing an image, because the film integrates the signal, but it causes an annoying blurring of the trace in slow y-modulated line scans and can result in large errors if the signal is measured with a fast analog-to-digital converter (ADC) (Figs. 1, 2). The noise in digital measurements can be reduced by reading the detector many times and taking the numerical average of the readings. This requires no extra hardware, but it increases the time required to acquire a line scan or image and ignores the fundamental problem that the ADC wastes most of the information available in the signal by sampling for only a very small portion of its total cycle time.
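
A minimal sketch of the averaging scheme described above: reading a noisy detector output N times and averaging reduces the standard error roughly as the square root of N, at the cost of N times the acquisition time. The detector level and noise figure below are arbitrary.

import numpy as np

rng = np.random.default_rng(6)
true_level, sigma = 0.73, 0.10               # arbitrary detector level and noise

for n_reads in (1, 16, 256):
    readings = true_level + sigma * rng.standard_normal((2000, n_reads))
    est = readings.mean(axis=1)              # numerical average of n_reads samples per point
    print(f"{n_reads:4d} reads per point: rms error = {est.std():.4f}")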

