Noise reduction in linear variable differential transformer data of recoil motion measurement by numerical methods

Author(s):  
R M Bhatnagar

The measurement of recoil distance versus time by various methods, such as the recoil potentiometer, photoelectric transducer, slide wire, accelerometer, revolving drum system, and linear variable differential transformer (LVDT), has been used for gross muzzle brake efficiency measurements and recoil system performance evaluation through the calculation of recoil velocities. For a long-recoil-length gun system, a combination of recoil potentiometer and LVDT is used extensively. In order to dispense with the recoil potentiometer in this combination, the article proposes the use of a least-square-fit-based Richardson extrapolation method together with a mean square velocity calculation for the accurate determination of free recoil velocity. The mean square velocity calculation is based on Parseval's theorem. The proposed method rests on a comparative evaluation of second- and third-order finite difference methods, Richardson's fourth-order method, and the least-square-fit-based Richardson extrapolation. The least-square-fit-based Richardson extrapolation gives the lowest value of residual entropy, because the maximum likelihood estimators for a Gaussian probability distribution and the least-square estimators for the coefficients of the polynomial representing the recoil velocity-time curve coincide. The results of each of the four methods combined with the mean square velocity method were compared, and the least-square-fit-based Richardson extrapolation was found to be accurate and consistent. The method can be used even when a low-pass filter is included in the LVDT circuit for stand-alone use.
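The numerical core of the comparison can be illustrated with a small sketch (not the authors' code; the sampling step and the quadratic test record are assumptions): Richardson extrapolation combines central differences taken at steps h and 2h so that the leading truncation error cancels, giving a fourth-order velocity estimate from sampled displacement data.

```python
import numpy as np

def central_diff(x, t, i, h):
    """Second-order central difference at sample index i, offset h samples."""
    dt = t[1] - t[0]
    return (x[i + h] - x[i - h]) / (2 * h * dt)

def richardson_velocity(x, t, i):
    """Fourth-order velocity estimate: Richardson extrapolation of the
    central differences computed with steps h and 2h."""
    v_h = central_diff(x, t, i, 1)
    v_2h = central_diff(x, t, i, 2)
    return (4 * v_h - v_2h) / 3

# Synthetic displacement record: x(t) = t**2, so the true velocity is 2*t.
t = np.linspace(0.0, 1.0, 101)
x = t ** 2
v = richardson_velocity(x, t, 50)   # true velocity at t = 0.5 is 1.0
```

The paper's least-square-fit variant additionally smooths the sampled record with a polynomial fit before differencing, which suppresses measurement noise that raw finite differences would amplify.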

1986
Vol 55 (1)
pp. 13-22
Author(s):  
H. Querfurth

The present experiments investigated the signal transfer in the isolated frog muscle spindle by using pseudorandom noise (PRN) as the analytical probe. In order to guarantee that the random stimulus covered the entire dynamic range of the receptor, PRN stimuli of different intensities were applied around a constant mean length, or PRN stimuli of the same intensity were used while varying the mean length of the spindle. Subthreshold receptor potentials, local responses, and propagated action potentials were recorded simultaneously from the first Ranvier node of the afferent stem fiber, thus providing detailed insight into the spike-initiating process within a sensory receptor. Relevant features of the PRN stimulus were evaluated by a preresponse averaging technique. Up to tau = 2 ms before each action potential the encoder selected a small set of steeply rising stretch transients. A second component of the preresponse stimulus ensemble (tau = 2-5 ms) opposed the overall stretch bias. Since each steeply rising stretch transient evoked a steeply rising receptor potential that guaranteed the critical slope condition of the encoding site, this stimulus profile was most effective in initiating action potentials. The dynamic range of the muscle spindle receptor extended from resting length, L0, to about L0 + 100 microns. At the lower limit (L0) the encoding membrane was depolarized to its firing level and discharged action potentials spontaneously. When random stretches larger than the upper region of the dynamic range were applied, the spindle discharged at the maximum impulse rate and displayed no depolarization block or "overstretch" phenomenon. Random stretches applied within the dynamic range evoked regular discharge patterns that were firmly coupled to the PRN. 
The afferent discharge rate increased and the precision of phase-locking improved when the intensity of the PRN stimulus was increased around a constant mean stretch, or when the mean prestretch level was raised while the intensity of the PRN stimulus was kept constant. When the PRN stimulus covered the entire dynamic range, the temporal pattern of the afferent discharge remained constant for at least 10 consecutive sequences of PRN. A spectral analysis of the discharge patterns averaged over several sequences of PRN was employed. At the same stimulus intensity the response spectra displayed low-pass filter characteristics with a 10-dB bandwidth of 300 Hz and a high-frequency slope of -12 dB/oct. Increasing the mean intensity of the PRN stimulus or raising the prestretch level increased the response power.
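The preresponse averaging technique described above amounts to a spike-triggered average of the stimulus. A minimal sketch (the white-noise stimulus and the threshold "encoder" are stand-in assumptions, not the spindle data):

```python
import numpy as np

def preresponse_average(stimulus, spike_idx, window):
    """Average the stimulus segments ending at each spike time: the mean
    profile of the stretch transients that were effective in initiating
    action potentials."""
    segs = [stimulus[i - window + 1:i + 1] for i in spike_idx if i >= window - 1]
    return np.mean(segs, axis=0)

rng = np.random.default_rng(0)
stim = rng.standard_normal(10_000)        # stand-in for the PRN stretch signal
spikes = np.where(stim > 2.5)[0]          # toy encoder: spike on a large upstroke
sta = preresponse_average(stim, spikes, window=20)
```

With a real recording, `spikes` would be the detected action potential times, and structure in `sta` at lags up to a few milliseconds would reveal the effective stimulus features, as in the tau = 2 ms component reported above.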


2017
Vol 7 (6)
pp. 2177-2183
Author(s):  
N. Khodabakhshi-Javinani,
H. Askarian Abyaneh

Over the last decades, with the increase in the use of harmonic-source devices, the filtering process has received more attention than ever before. Digital relays operate according to accurate thresholds and precise setting values. In the signal flow graph of a relay, the low-pass filter plays a crucial role in pre-filtering and purifying the waveforms passed to the estimation techniques that compute the expected impedances, currents, voltages, etc. The main processing is carried out in the CPU through methods such as Mann and Morrison, Fourier, Walsh-based techniques, least-square methods, etc. To purify waveforms polluted with low-order harmonics, it is necessary to design a filter with a cut-off frequency in a narrow band, which would be costly. In this article, a technique is presented which is able to eliminate specified harmonics, noise, and DC offset, attenuate all harmonic orders, and hand low-pass-filtered signals to the CPU. The proposed method is evaluated on eight case studies and compared with first- and second-order low-pass filters.
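The harmonic-rejection requirement can be illustrated with the full-cycle Fourier (DFT) phasor estimate commonly used in relays; this is a generic sketch, not the article's proposed technique. Integer harmonics and a constant DC offset fall on DFT bins other than the fundamental, so they are rejected exactly.

```python
import numpy as np

def fundamental_phasor(samples, n):
    """Full-cycle DFT estimate of the fundamental phasor from n samples per
    cycle. Integer harmonics and DC land on other DFT bins and are rejected,
    which is the behaviour a relay pre-filter must provide."""
    k = np.arange(n)
    return np.sum(samples * np.exp(-2j * np.pi * k / n)) * 2 / n

n = 32
k = np.arange(n)
wave = (1.0 * np.cos(2 * np.pi * k / n)        # fundamental, amplitude 1
        + 0.4 * np.cos(2 * np.pi * 3 * k / n)  # 3rd harmonic pollution
        + 0.7)                                 # DC offset
ph = fundamental_phasor(wave, n)               # magnitude recovers 1.0
```

The complex value `ph` carries both magnitude and phase of the 50/60 Hz component, which is what the downstream impedance and current estimators consume.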


Author(s):  
Awoingo Adonijah Maxwell ◽  
Isaac Didi Essi

This study focuses on Monte Carlo methods in parameter estimation of a production function. The ordinary least square (OLS) method is used to estimate the unknown parameters, and Monte Carlo simulation is used for the data-generating process. The Cobb-Douglas production model with a multiplicative error term is fitted to the generated data. From Tables 1.1 to 1.3, the mean square error (MSE) values are 0.007678, 0.001972 and 0.001253, respectively, for sample sizes 20, 40 and 80. Our findings show that the MSE varies with the sum of the powers of the input variables.
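The experiment can be sketched as follows (the parameter values, input design and replication count are illustrative assumptions, not the study's settings): a Cobb-Douglas model with multiplicative error, Q = a·K^alpha·L^beta·e^eps, becomes linear after taking logs, so OLS applies directly, and the MSE of an estimate shrinks as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_and_fit(n, a=2.0, alpha=0.3, beta=0.6, sigma=0.1):
    """One Monte Carlo replication: generate Cobb-Douglas data with a
    multiplicative log-normal error, then estimate (ln a, alpha, beta)
    by OLS on the log-linearized model."""
    K = rng.uniform(1, 10, n)
    L = rng.uniform(1, 10, n)
    eps = rng.normal(0, sigma, n)
    Q = a * K**alpha * L**beta * np.exp(eps)
    X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
    coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
    return coef  # [ln a_hat, alpha_hat, beta_hat]

# MSE of alpha_hat over replications shrinks as the sample size grows.
mse = {n: np.mean([(simulate_and_fit(n)[1] - 0.3)**2 for _ in range(200)])
       for n in (20, 80)}
```

The decreasing MSE with sample size mirrors the pattern in the study's Tables 1.1-1.3.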


Designs
2021
Vol 5 (4)
pp. 65
Author(s):  
Amritha Kodakkal,
Rajagopal Veramalla,
Narasimha Raju Kuthuri,
Surender Reddy Salkuti

A power generating system should be able to generate and feed quality power to the loads connected to it. This paper suggests a very efficient control technique, supported by an effective optimization method, for the control of the voltage and frequency of the electrical output of an isolated wind power harnessing unit. The wind power unit is modelled in MATLAB/SIMULINK. The proposed controller uses the leaky least mean square (LMS) algorithm with a step size. The LMS algorithm is adaptive, working by online modification of the weights: it tunes the filter coefficients such that the mean square value of the error is least. This avoids the use of a low-pass filter to clean the voltage and current signals, which makes the algorithm simpler. An adaptive algorithm generally used in signal processing is thus applied to a power system problem, and the process is further simplified by using optimization techniques, which makes the proposed method unique. The normalized LMS algorithm suffers from a parameter drift problem; the leaky factor is included to counter this drift. Suitable values of the leaky factor and the step size improve the speed of convergence, reduce the steady-state error and improve the stability of the system. In this study, the leaky factor, step size and controller gains are optimized by using optimization techniques. The optimization has made controller tuning easy, where it was previously carried out by trial and error. Different techniques were used for the optimization, and on comparison of results the antlion algorithm was found to be the most effective. The controller efficiency is tested for loads that are linear and nonlinear and for varying wind speeds.
It is found that the controller is very efficient in maintaining the system parameters under normal and faulty conditions. The simulated results are validated experimentally by using dSpace 1104. The laboratory results further confirm the efficiency of the proposed controller.
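The weight update at the heart of the controller can be sketched in a generic system-identification setting (the filter length, step size, leak value and test system below are assumptions, not the paper's tuned values):

```python
import numpy as np

def leaky_lms(x, d, n_taps=4, mu=0.05, leak=1e-3):
    """Leaky LMS: w <- (1 - mu*leak)*w + mu*e*x_vec. The leak term bleeds
    energy out of the weights each step, countering the parameter drift
    seen with the plain normalized LMS update."""
    w = np.zeros(n_taps)
    err = np.zeros(len(x))
    for i in range(n_taps - 1, len(x)):
        x_vec = x[i - n_taps + 1:i + 1][::-1]   # most recent sample first
        y = w @ x_vec                           # filter output
        err[i] = d[i] - y                       # instantaneous error
        w = (1 - mu * leak) * w + mu * err[i] * x_vec
    return w, err

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.8, -0.4, 0.2, 0.1])     # unknown FIR system to identify
d = np.convolve(x, h)[:len(x)]          # desired signal
w, err = leaky_lms(x, d)                # w converges close to h
```

The leak introduces a small, controlled bias toward zero in exchange for bounded weights; choosing `leak` and `mu` well, as the paper does via optimization, trades convergence speed against steady-state error.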


Forests
2020
Vol 11 (12)
pp. 1311
Author(s):  
Mihaela Paun,
Nevine Gunaime,
Bogdan M. Strimbu

Estimation using a suboptimal method can lead to imprecise models, with cascading effects in complex models such as climate change or pollution. The goal of this study is to compare the solutions supplied by different algorithms used to model ozone pollution. Following the Box and Tiao (1975) study, we predicted ozone concentration in Los Angeles with an ARIMA and an autoregressive process. We solved the ARIMA process with three algorithms (i.e., maximum likelihood, as in Box and Tiao, conditional least square, and unconditional least square) and the autoregressive process with four algorithms (i.e., Yule–Walker, iterative Yule–Walker, maximum likelihood, and unconditional least square). Our study shows that Box and Tiao chose the appropriate algorithm according to the AIC but not according to the mean square error. Furthermore, Yule–Walker, the default algorithm in many software packages, has the least reliable results, suggesting that the method of solving complex models can alter the findings. Finally, model selection depends on the technical details and on the applicability of the model, as the ARIMA model is suitable from the AIC perspective but an autoregressive model could be preferred from the mean square error viewpoint. Our study shows that time series analysis should consider not only the model shape but also the model estimation, to ensure valid results.
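For reference, the Yule–Walker approach fits an AR model by equating sample autocovariances to their theoretical counterparts. A minimal AR(2) sketch on synthetic data (not the ozone series):

```python
import numpy as np

def yule_walker_ar2(x):
    """Estimate AR(2) coefficients by solving the Yule-Walker equations
    built from the sample autocovariances (the default in much software)."""
    x = x - x.mean()
    r = np.array([x[:len(x) - k] @ x[k:] for k in range(3)]) / len(x)
    R = np.array([[r[0], r[1]],
                  [r[1], r[0]]])
    return np.linalg.solve(R, r[1:])   # [phi1_hat, phi2_hat]

# Simulate a stationary AR(2) process with known coefficients.
rng = np.random.default_rng(7)
phi = np.array([0.6, 0.2])
n = 20_000
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + e[t]
phi_hat = yule_walker_ar2(x)
```

On long, well-behaved series like this one, Yule–Walker is accurate; the study's point is that on short or complex series its moment-based estimates can diverge noticeably from maximum likelihood.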


Materials
2020
Vol 13 (13)
pp. 3040
Author(s):  
Julie Marteau,
Raphaël Deltombe,
Maxence Bigerelle

Roping or ridging is a visual defect affecting the surface of ferritic stainless steels, assessed by visual inspection of the surfaces. The aim of this study was to quantify the morphological signature of roping, linking roughness results with five levels of roping identified by visual inspection. First, the multiscale analysis of roughness showed that the texture aspect ratio Str, computed with a low-pass filter of 32 µm, gave a clear separation between the acceptable levels of roping and the non-acceptable levels (rejected sheets). To obtain a graded description of roping instead of a binary one, a methodology based on the autocorrelation function was created. It consisted of several steps: low-pass filtering of the autocorrelation function at 150 µm, segmentation of the autocorrelation into four stabilized portions, and finally computation of the isotropy and the root-mean-square roughness Sq on the obtained quarters of the function. The isotropy combined with the root-mean-square roughness Sq led to a clear separation of the five levels of roping: the acceptable levels corresponded to strong isotropy (values larger than 10%) coupled with low root-mean-square roughness Sq. Both methodologies can be used to quantitatively describe the surface morphology of roping in order to improve our understanding of the roping phenomenon.
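The two quantities at the core of the method can be sketched for a generic height map (the random test surface is an assumption; the real analysis also applies the 150 µm low-pass filter and the four-quarter segmentation described above):

```python
import numpy as np

def sq_roughness(z):
    """Root-mean-square roughness Sq of a height map, mean plane removed."""
    z = z - z.mean()
    return np.sqrt(np.mean(z**2))

def autocorrelation(z):
    """Normalized 2-D autocorrelation of a height map via FFT; the basis
    of the isotropy measure used to grade roping (an anisotropic, ridged
    surface decays much more slowly along the ridges than across them)."""
    z = z - z.mean()
    F = np.fft.fft2(z)
    acf = np.fft.ifft2(F * np.conj(F)).real
    acf = acf / acf[0, 0]              # normalize: zero lag = 1
    return np.fft.fftshift(acf)        # put zero lag at the center

rng = np.random.default_rng(3)
z = rng.standard_normal((64, 64))      # isotropic test surface, unit Sq
sq = sq_roughness(z)
acf = autocorrelation(z)               # acf[32, 32] is the zero-lag peak
```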


2020
Vol 41 (Supplement_2)
Author(s):  
S Mehta,
S Niklitschek,
F Fernandez,
C Villagran,
J Avila,
...  

Abstract Background With the sudden advent of Artificial Intelligence (AI), incorporating these technologies into key aspects of our working environment has become an ever more delicate task, especially when dealing with time-sensitive and potentially lethal scenarios such as ST-Elevation Myocardial Infarction (STEMI) management. Building on our successful experience with AI-guided algorithms for STEMI detection, we implemented an innovative ensemble method to improve the algorithm's predictive capabilities. Purpose Through the ensemble method, we combined two ML techniques to boost the accuracy and reliability of our previous experiments. Methods Database: EKG records obtained from the Latin America Telemedicine Infarct Network (Mexico, Colombia, Argentina, and Brazil) from April 2014 to December 2019. Dataset: Two separate datasets were used to train and test two sets of AI algorithms. The first comprised 11,567 records and the second 7,286 records, each composed of 12-lead EKG records of 10-second length with a sampling frequency of 500 Hz, including the following balanced classes: unconfirmed and angiographically confirmed STEMI (first model); angiographically confirmed STEMI only (second model); and, for both models, branch blocks, non-specific ST-T abnormalities, normal, and abnormal (200+ CPT codes, excluding those included in other classes). The label of each record was manually checked by cardiologists to ensure precision (ground truth). Pre-processing: The first and last 250 samples were discarded to avoid a standardization pulse. A fifth-order digital low-pass filter with a 35 Hz cut-off was applied, and for each record the mean was subtracted from each individual lead. Classification: The determined classes were STEMI and Not-STEMI (a combination of randomly sampled normal, branch block, non-specific ST-T abnormality and abnormal records; 25% of each subclass).
Training & Testing: The last dense layer outputs a probability for each record of being STEMI or Not-STEMI. These probabilities were calculated for each model (Model 1 trained with the complete STEMI dataset and Model 2 trained with the confirmed-STEMI-only dataset) and aggregated using the mean to generate the final label for each record. A 1-D Convolutional Neural Network was trained and tested with a dataset proportion of 90%/10%, respectively. Results are reported for both testing datasets (complete and confirmed-STEMI-only records). Results Complete STEMI dataset: accuracy 96.5%, sensitivity 96.2%, specificity 96.9%. Confirmed-STEMI-only dataset: accuracy 98.5%, sensitivity 98.3%, specificity 98.6%. Conclusion(s) While Model 1 and Model 2 achieved similar performances with promising results on their own, combining both through the ensemble model exhibits a clear improvement in performance on both datasets. This provides a blueprint for advanced automated STEMI detection through wearable devices. Funding Acknowledgement Type of funding source: None
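The pre-processing pipeline described above (trim 250 samples at each end, order-5 low-pass at 35 Hz, per-lead mean subtraction) can be sketched for a single lead. The Butterworth design and zero-phase filtering are assumptions, since the abstract does not name the filter family:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500     # sampling frequency, Hz
CUT = 35     # low-pass cut-off, Hz
TRIM = 250   # samples discarded at each end (standardization pulse)

def preprocess_lead(lead):
    """Trim 250 samples at each end, apply an order-5 low-pass at 35 Hz
    (Butterworth, zero-phase -- assumed), and subtract the lead mean."""
    x = lead[TRIM:-TRIM]
    b, a = butter(5, CUT, btype="low", fs=FS)
    x = filtfilt(b, a, x)
    return x - x.mean()

# Toy 10-second, single-lead record: 5 Hz "signal" plus 60 Hz interference
# and a baseline offset; only the 5 Hz component should survive.
t = np.arange(10 * FS) / FS
raw = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t) + 0.3
clean = preprocess_lead(raw)
```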


2005
Vol 20 (1)
pp. 64-73
Author(s):  
Aleksandar Zigic

Two methods were developed to improve classical preset-time count rate meters by using adaptable signal-processing tools. An optimized detection algorithm that senses changes of the mean count rate was implemented in both methods. Three low-pass filters of various structures, with adaptable parameters to control the mean count rate error by suppressing fluctuations in a controllable way, were considered, and one of them was implemented in both methods. In the first method, an adaptation algorithm for preset time interval calculation executed after the low-pass filter was devised and implemented; it makes it possible to obtain shorter preset time intervals at a higher stationary mean count rate. In the second method, the adaptation algorithm for preset time interval calculation is executed before the low-pass filter; this enables sensing of a rapid change of the mean count rate before fluctuation suppression is carried out. Some parameters were fixed to their optimum values after an appropriate optimization procedure. The low-pass filters have a variable number of stationary coefficients depending on the specified error and the mean count rate. The simulated and realized methods, using the developed algorithms, guarantee that the response time does not exceed 2 s for mean count rates higher than 2 s^-1 and that the controllable mean count rate error stays within the range of ±4% to ±10%.
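A first-order low-pass stage of the kind discussed can be sketched as an exponential average of preset-time counts (the preset interval, smoothing constant and Poisson test stream are illustrative assumptions):

```python
import numpy as np

def preset_time_rate_meter(counts, dt, alpha=0.2):
    """Smooth a sequence of preset-time counts with a first-order low-pass
    (exponential average): suppresses statistical fluctuations of the rate
    estimate while still tracking changes in the mean count rate."""
    rate = counts[0] / dt
    out = []
    for c in counts:
        rate = (1 - alpha) * rate + alpha * (c / dt)
        out.append(rate)
    return np.array(out)

rng = np.random.default_rng(5)
dt = 0.5                                    # preset time interval, s
true_rate = 100.0                           # mean count rate, s^-1
counts = rng.poisson(true_rate * dt, 400)   # Poisson counting statistics
smoothed = preset_time_rate_meter(counts, dt)
```

The smoothing constant plays the role of the filter coefficients above: a smaller `alpha` suppresses fluctuations more strongly but lengthens the response time, which is exactly the trade-off the two adaptation algorithms manage.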


2011
Vol 396-398
pp. 1008-1022
Author(s):  
Mahmoud M. Tash,
F. H. Samuel,
Saleh Alkahtani

Abstract Heat-treated Al-Si-Mg and Al-Si-Cu-Mg cast alloys, belonging to the Al-Si alloy system and represented respectively by 356 and 319 alloys containing mainly α-Fe intermetallics, at hardness levels of 100±10 HB, were selected for this machinability study due to the high demand for these alloys in the automobile industry. This paper provides an introduction to the force and moment calculations used to evaluate the drilling processes, as outlined in a previous work [1]. A new technique was developed whereby a low-pass filter was incorporated in the signal-processing algorithm used to calculate the mean cutting force and moment during drilling. All signals were independently monitored, digitized and recorded in LabVIEW. Universal Kistler DynoWare software was used for force measurements and data processing of cutting forces and moments. Matlab programs were developed for data processing and for calculating the mean values of cutting force and moment and their standard deviations in the drilling tests. The raw cutting force data were analysed by applying a low-pass filter and then detecting the points within each cycle of the signal. In the drilling tests, 1600 sample points per cycle were acquired for calculating the mean value of the cutting feed force (Fz) and 1200 sample points per cycle for the other five components of force and moment (Fx, Fy, Mx, My, and Mz) in each signal (115 cycles, or holes, per signal); however, only 200 sample points per cycle were used for standard deviation or peak-to-valley calculations. The low-Mg-content 319 alloys (0.1%) yielded the longest tool life, more than twice that of the 356 alloys (0.3% Mg) and one and a half times longer than that of the high-Mg-content 319 alloys (0.28%). It is customary to rate the machinability of the 319 alloy higher than that of the 356 alloy, and the machinability of the low-Mg-content 319 alloy higher than that of the high-Mg-content one.
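The per-cycle mean calculation described above can be sketched generically (the cycle count, sample counts and toy feed-force signal are assumptions; the real pipeline first applies the low-pass filter and cycle detection):

```python
import numpy as np

def cycle_means(signal, samples_per_cycle):
    """Mean value per cycle of a periodic force signal: reshape the record
    into one row per cycle (hole) and average each row."""
    n_cycles = len(signal) // samples_per_cycle
    trimmed = signal[:n_cycles * samples_per_cycle]
    return trimmed.reshape(n_cycles, samples_per_cycle).mean(axis=1)

# Toy feed-force record: 115 cycles of 1600 samples each, mean 50 N,
# with within-cycle ripple and measurement noise.
rng = np.random.default_rng(9)
n, m = 115, 1600
k = np.arange(n * m)
fz = 50.0 + 5.0 * np.sin(2 * np.pi * k / m) + rng.normal(0, 1.0, n * m)
means = cycle_means(fz, m)    # one mean cutting feed force per hole
```

The same reshape-and-reduce pattern gives the per-cycle standard deviation or peak-to-valley values by swapping `.mean(axis=1)` for `.std(axis=1)` or `.ptp(axis=1)` on the chosen 200-sample windows.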


1976
Vol 31 (3-4)
pp. 357-361
Author(s):  
G. K. Pandey,
H. Dreizler

The ground-state rotational spectra of the 80Se and 78Se species of hexadeutero dimethyl selenide have been measured in the region from 5 to 40 GHz. In both cases, rotational and centrifugal distortion constants have been determined by a least-squares fit to about thirty transition frequencies. For the (CD3)2 80Se molecule, fourteen rotational transitions in the excited torsional states ṽ = l1 and ṽ = l2 were also recorded, of which nine appeared as well-resolved triplets. The potential barrier parameter V3 and the angle α between one of the 'top axes' and the 'b axis' have been determined by a least-squares fit of the mean value of the observed splittings in the ṽ = l1 and l2 states. The methyl-top moment of inertia Iα was kept fixed at 6.35 amu Å², which is half of the observed inertia defect in the molecule.

