Data Rectification and Detection of Trend Shifts in Jet Engine Gas Path Measurements Using Median Filters and Fuzzy Logic

2002 ◽  
Vol 124 (4) ◽  
pp. 809-816 ◽  
Author(s):  
Ranjan Ganguli

Filtering methods are explored for removing noise from data while preserving the sharp edges that may indicate a trend shift in gas turbine measurements. Linear filters are found to have problems removing noise while preserving features in the signal. The nonlinear hybrid median filter is found to accurately reproduce the root signal from noisy data. Simulated faulty data and fault-free gas path measurement data are passed through median filters, and health residuals for the data set are created. The health residual is a scalar norm of the gas path measurement deltas and is used to partition the faulty engine from the healthy engine using fuzzy sets. The fuzzy detection system is developed and tested with noisy data and with filtered data. Tests with simulated fault-free and faulty data show that fuzzy trend shift detection based on filtered data is very accurate, with no false alarms and negligible missed alarms.
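The edge-preservation property that motivates the median filter here can be shown with a minimal sketch (hypothetical illustration, not the paper's implementation): an ideal trend shift is a root signal of the median filter and passes through unchanged, while a linear smoother of the same window length smears the edge.

```python
from statistics import mean, median

def median_filter(x, k=5):
    """Sliding-window median; the ends are padded by repeating edge values."""
    r = k // 2
    padded = [x[0]] * r + list(x) + [x[-1]] * r
    return [median(padded[i:i + k]) for i in range(len(x))]

def moving_average(x, k=5):
    """Linear FIR smoother of the same window length, for comparison."""
    r = k // 2
    padded = [x[0]] * r + list(x) + [x[-1]] * r
    return [mean(padded[i:i + k]) for i in range(len(x))]

# An idealized trend shift in a gas path measurement delta
step = [0.0] * 10 + [1.0] * 10

print(median_filter(step) == step)   # True: the step is a root signal
print(moving_average(step) == step)  # False: the edge is smeared
```

On noisy data the median filter additionally rejects outlier spikes that a linear filter would spread into neighboring samples.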


2020 ◽  
Vol 222 (3) ◽  
pp. 1805-1823 ◽  
Author(s):  
Yangkang Chen ◽  
Shaohuan Zu ◽  
Yufeng Wang ◽  
Xiaohong Chen

SUMMARY In seismic data processing, the median filter is usually applied along the structural direction of seismic data in order to attenuate erratic or spike-like noise. The performance of a structure-oriented median filter depends highly on the accuracy of the local slope estimated from the noisy data. When the local slope contains significant error, which is usually the case for noisy data, the structure-oriented median filter will still cause severe damage to useful energy. We propose a type of structure-oriented median filter that can effectively attenuate spike-like noise even when the local slope is not accurately estimated, which we call the structure-oriented space-varying median filter. It can adaptively squeeze and stretch the window length of the median filter when applied in the locally flattened dimension of the input seismic data in order to deal with the dipping events caused by inaccurate slope estimation. We detail the key differences among different types of median filters and demonstrate the principle of the structure-oriented space-varying median filter method. We apply the method to remove the spike-like blending noise arising from simultaneous-source acquisition. Synthetic and real data examples show that the structure-oriented space-varying median filter can significantly improve signal preservation for curving events in the seismic data. It can also be easily embedded into an iterative deblending procedure based on the shaping regularization framework, helping obtain much improved deblending performance.
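The space-varying idea can be sketched in one dimension (a simplified, hypothetical illustration; the actual method operates on locally flattened seismic gathers): the median window length is allowed to vary from sample to sample, shrinking where the slope estimate is unreliable so that useful energy is not damaged.

```python
from statistics import median

def space_varying_median(trace, window_lengths):
    """1-D median filter whose (odd) window length varies per sample.

    Short windows protect signal where the local slope is unreliable;
    long windows suppress spike-like (e.g. blending) noise more strongly.
    """
    n = len(trace)
    out = []
    for i, k in enumerate(window_lengths):
        r = k // 2
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(median(trace[lo:hi]))
    return out

trace = [0.0, 0.0, 0.0, 9.0, 0.0, 0.0, 0.0]   # one erratic spike
# Window length 3 everywhere removes the spike; length 1 passes data through
print(space_varying_median(trace, [3] * len(trace)))
print(space_varying_median(trace, [1] * len(trace)))
```

In the full method the per-sample window lengths would be chosen from a measure of local slope reliability rather than set by hand.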


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 11
Author(s):  
Domonkos Haffner ◽  
Ferenc Izsák

The localization of multiple scattering objects is performed using scattered waves. As an up-to-date approach, neural networks are used to estimate the corresponding locations. In the scattering phenomenon under investigation, we assume known incident plane waves, fully reflecting balls with known diameters, and measurement data of the scattered wave on one fixed segment. The training data are constructed using the simulation package μ-diff in Matlab. The structure of the neural networks, which are widely used for similar purposes, is further developed. A complex locally connected layer is the main component of the proposed setup. With this and an appropriate preprocessing of the training data set, the number of parameters can be kept at a relatively low level. As a result, using a relatively large training data set, the unknown locations of the objects can be estimated effectively.
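The locally connected idea can be sketched as follows (a hypothetical, real-valued 1-D version; the layer described above is complex-valued and part of a larger network): unlike a convolution, each output position has its own unshared filter, which suits measurement geometries where position along the segment matters.

```python
def locally_connected_1d(x, weights, biases):
    """1-D locally connected layer: a sliding window like a convolution,
    but with a separate (unshared) weight vector at every output position."""
    k = len(weights[0])
    return [
        sum(w * xi for w, xi in zip(weights[p], x[p:p + k])) + biases[p]
        for p in range(len(weights))
    ]

# Three output positions, each with its own length-2 filter
x = [1.0, 2.0, 3.0, 4.0]
weights = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
biases = [0.0, 0.0, 1.0]
print(locally_connected_1d(x, weights, biases))  # [1.0, 3.0, 8.0]
```

The parameter count grows with the number of output positions, which is why the preprocessing described above is needed to keep it at a manageable level.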


2019 ◽  
Vol 22 (13) ◽  
pp. 2907-2921 ◽  
Author(s):  
Xinwen Gao ◽  
Ming Jian ◽  
Min Hu ◽  
Mohan Tanniru ◽  
Shuaiqing Li

With the large-scale construction of urban subways, the detection of tunnel defects becomes particularly important. Due to the complexity of the tunnel environment, it is difficult for traditional tunnel defect detection algorithms to detect such defects quickly and accurately. This article presents a deep learning FCN-RCNN model that can detect multiple tunnel defects quickly and accurately. The algorithm combines a Faster RCNN algorithm, an Adaptive Border ROI boundary layer, and a three-layer FCN structure. The Adaptive Border ROI boundary layer is used to reduce data set redundancy and the difficulty of identifying interference during data set creation. The algorithm is compared with a single FCN algorithm with no Adaptive Border ROI for different defect types. The results show that our defect detection algorithm not only addresses interference due to segment patching, pipeline smears, and obstruction, but also reduces the false detection rates from 0.371, 0.285, and 0.307, respectively, to 0.0502. Finally, with correction by a cylindrical projection model, the false detection rate is further reduced from 0.0502 to 0.0190 and the identification accuracy of water leakage defects is improved.


Author(s):  
Joost den Haan

The aim of the study is to devise a method to conservatively predict tidal power generation from relatively short current profile measurement data sets. Harmonic analysis of a low-quality tidal current profile measurement data set only allowed the reliable estimation of a limited number of constituents, leading to a poor prediction of tidal energy yield. Two novel, but very different, approaches were taken: first, a quasi response function is formulated which combines the current profiles into a single current; second, a three-dimensional vectorial tidal forcing model was developed, aiming to support the harmonic analysis with upfront knowledge of the actual constituents. The response-based approach allowed a reasonable prediction. The vectorial tidal forcing model proved to be a viable start for a full-featured numerical model; even in its initial simplified form it could provide more insight than the conventional tidal potential models.
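At its core, harmonic analysis of a current record is a least-squares fit of known constituent frequencies; a self-contained sketch follows (hypothetical; a real analysis would use a linear-algebra library, nodal corrections, and the full constituent set):

```python
import math

def harmonic_fit(times, values, omegas):
    """Least-squares fit of  a0 + sum_k (b_k cos(w_k t) + c_k sin(w_k t)).

    omegas are the known angular frequencies of the tidal constituents.
    Solves the normal equations by Gaussian elimination with pivoting.
    """
    cols = [[1.0] * len(times)]
    for w in omegas:
        cols.append([math.cos(w * t) for t in times])
        cols.append([math.sin(w * t) for t in times])
    m = len(cols)
    # Normal equations  (A^T A) x = A^T y
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(m)]
         for i in range(m)]
    b = [sum(ci * y for ci, y in zip(cols[i], values)) for i in range(m)]
    for p in range(m):
        piv = max(range(p, m), key=lambda r: abs(A[r][p]))
        A[p], A[piv], b[p], b[piv] = A[piv], A[p], b[piv], b[p]
        for r in range(p + 1, m):
            f = A[r][p] / A[p][p]
            A[r] = [arc - f * apc for arc, apc in zip(A[r], A[p])]
            b[r] -= f * b[p]
    x = [0.0] * m
    for p in range(m - 1, -1, -1):
        x[p] = (b[p] - sum(A[p][c] * x[c] for c in range(p + 1, m))) / A[p][p]
    return x   # [mean, then cos/sin amplitude pairs per constituent]

# Recover an M2-like constituent (12.42 h period) from 200 hourly samples
w_m2 = 2.0 * math.pi / 12.42
t = list(range(200))
u = [0.3 + 1.5 * math.cos(w_m2 * ti) for ti in t]
a0, b1, c1 = harmonic_fit(t, u, [w_m2])
```

The difficulty described in the abstract is that a short, noisy record makes this system ill-conditioned for closely spaced constituent frequencies, which is what the two alternative approaches address.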


1982 ◽  
Vol 72 (1) ◽  
pp. 93-111
Author(s):  
R. E. Habermann

Changes in the rate of occurrence of smaller events have been recognized in the rupture zones of upcoming large earthquakes in several postearthquake studies and one preearthquake study. A data set in which a constant portion of the events in any magnitude band is consistently reported through time is crucial for the recognition of seismicity rate changes which are real (related to some process change in the earth). Such a data set is termed a homogeneous data set. The consistency of reporting of earthquakes in the NOAA Hypocenter Data File (HDF) since 1963 is evaluated by examining the cumulative number of events reported as a function of time for the entire world in eight magnitude bands. It is assumed that the rate of occurrence of events in the entire world is roughly constant on the time scale examined here because of the great size of the worldwide earthquake production system. The rate of reporting of events with magnitudes above mb = 4.5 has been constant or increasing since 1963. Significant decreases in the number of events reported per month in the magnitude bands below mb = 4.4 occurred during 1968 and 1976. These decreases are interpreted as indications of decreases in detection of events for two reasons. First, they occur at times of constant rates of occurrence and reporting of larger events. Second, the decrease during the late 1960s has also been recognized in the teleseismic data reported by the International Seismological Centre (ISC). This suggests that the decrease in the number of small events reported was related to facets of the earthquake reporting system which the ISC and NOAA share. The most obvious candidate is the detection system. During 1968, detection decreased in the United States, Central and South America, and portions of the South Pacific. This decrease is probably due to the closure of the VELA arrays, BMO, TFO, CPO, UBO, and WMO.
During 1976, detection decreased in most of the seismically active regions of the western hemisphere, as well as in the region between Kamchatka and Guam. The cause of this detection decrease is unclear. These detection decreases seriously affect the amount of homogeneous background period available for the study of teleseismic seismicity rate changes. If events below the minimum magnitude of homogeneity are eliminated from the teleseismic data sets, the resulting small numbers of events render many regions unsuitable for study. Many authors have reported seismicity rate decreases as possible precursors to great earthquakes. Few of these authors have considered detection decreases as possible explanations for their results. This analysis indicates that such considerations cannot be avoided in studies of teleseismic data.
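The core diagnostic used above, a drop in the reporting rate for small magnitudes while the rate for large events stays constant, can be sketched with hypothetical helper functions (not the paper's code):

```python
from bisect import bisect_left

def count_in(event_times, start, end):
    """Number of catalogued events with start <= t < end (times sorted)."""
    return bisect_left(event_times, end) - bisect_left(event_times, start)

def mean_rate(event_times, start, end):
    """Average events per unit time over [start, end).  A drop in this rate
    in a small-magnitude band, at constant rate for larger events, points
    to a detection decrease rather than a real seismicity change."""
    return count_in(event_times, start, end) / (end - start)

# Made-up small-magnitude report times: steady, then thinned after month 24
small = [m / 2 for m in range(48)] + [24 + m for m in range(24)]
print(mean_rate(small, 0, 24))    # 2.0 events per month before
print(mean_rate(small, 24, 48))   # 1.0 event per month after
```

Plotting the cumulative count against time makes the same change visible as a kink in the curve's slope.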


1998 ◽  
Vol 10 (3) ◽  
pp. 731-747 ◽  
Author(s):  
Volker Tresp ◽  
Reimar Hofmann

We derive solutions for the problem of missing and noisy data in nonlinear time-series prediction from a probabilistic point of view. We discuss different approximations to the solutions—in particular, approximations that require either stochastic simulation or the substitution of a single estimate for the missing data. We show experimentally that commonly used heuristics can lead to suboptimal solutions. We show how error bars for the predictions can be derived and how our results can be applied to K-step prediction. We verify our solutions using two chaotic time series and the sunspot data set. In particular, we show that for K-step prediction, stochastic simulation is superior to simply iterating the predictor.
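The K-step result can be illustrated on a toy nonlinear model (a hypothetical example in the spirit of the abstract, not the paper's experiments): because the model is nonlinear, iterating the point predictor computes f(E[x]) rather than E[f(x)], and the two differ by a bias that stochastic simulation avoids.

```python
import random

def f(x, noise=0.0):
    """One step of a toy nonlinear AR model: x' = 1 - 1.8 x^2 + noise."""
    return 1.0 - 1.8 * x * x + noise

def iterate_predictor(x0, k):
    """Feed the point prediction back in k times (the common heuristic)."""
    x = x0
    for _ in range(k):
        x = f(x)
    return x

def simulate_predict(x0, k, sigma=0.1, n=20000, seed=1):
    """Monte Carlo estimate of E[x_{t+k} | x_t]: average over noise paths."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = x0
        for _ in range(k):
            x = f(x, rng.gauss(0.0, sigma))
        total += x
    return total / n

# For k = 2 the iterated point prediction is biased:
# E[x2] = 1 - 1.8 * (0.55**2 + sigma**2) = 0.4375, while iteration
# ignores the noise variance and gives 1 - 1.8 * 0.55**2 = 0.4555
print(iterate_predictor(0.5, 2))
print(simulate_predict(0.5, 2))
```

For one-step prediction the two coincide; the gap opens only at k ≥ 2, which is exactly the regime the paper studies.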


1998 ◽  
Vol 120 (3) ◽  
pp. 489-495 ◽  
Author(s):  
S. J. Hu ◽  
Y. G. Liu

Autocorrelation in 100 percent measurement data results in false alarms when traditional control charts, such as X and R charts, are applied in process monitoring. A popular approach proposed in the literature is based on prediction error analysis (PEA), i.e., using time series models to remove the autocorrelation and then applying the control charts to the residuals, or prediction errors. This paper uses a step-function-type mean shift as an example to investigate the effect of prediction error analysis on the speed of mean shift detection. The use of PEA results in two changes in the 100 percent measurement data: (1) a change in the variance, and (2) a change in the magnitude of the mean shift. Both changes affect the speed of mean shift detection. These effects are model-parameter dependent and are obtained quantitatively for AR(1) and ARMA(2,1) models. Simulations and examples from automobile body assembly processes are used to demonstrate these effects. It is shown that depending on the parameters of the ARMA models, the speed of detection could be increased or decreased significantly.
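The effect on a step mean shift can be seen directly for an AR(1) predictor (a hypothetical noise-free illustration of the mechanism, not the paper's derivation): the residual spikes by the full shift δ at the change point, then settles at the much smaller sustained level (1 − φ)δ, which is what slows subsequent detection.

```python
def prediction_errors(x, phi):
    """Residuals of the one-step AR(1) predictor x_hat_t = phi * x_{t-1}."""
    return [x[t] - phi * x[t - 1] for t in range(1, len(x))]

phi, delta = 0.8, 1.0
mean_path = [0.0] * 10 + [delta] * 10      # process mean before/after shift
r = prediction_errors(mean_path, phi)

print(r[9])    # 1.0 -> the full shift delta appears at the change point
print(r[10])   # ~0.2 -> the sustained shift shrinks to (1 - phi) * delta
```

For large positive φ the sustained residual shift is small relative to the residual noise, so a control chart on the residuals may detect the change quickly at the spike or very slowly afterward, consistent with the parameter-dependent speeds reported above.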


Author(s):  
Mochamad Zaeynuri Setiawan ◽  
Fachrudin Hunaini ◽  
Mohamad Mukhsim

A phenomenon that often arises in substations is partial discharge in outgoing cable insulation. Partial discharge is a jump of positive and negative ions that are not supposed to meet, which can cause a spark. If a partial discharge is left too long, it can cause insulation failure, a hissing sound, and ultimately a flashover on the outgoing cable. A partial discharge detection prototype was therefore built for cable insulation, in order to anticipate insulation disturbance in the outgoing cable and to simplify the work of substation operators in checking the reliability of insulation on the outgoing side of each cubicle. The method measures the sound waves caused by partial discharge using a microphone sensor, with an Arduino Mega 2560 module as the microcontroller, a TFT LCD for monitoring, and a MicroSD card module for storage. The microphone sensor has high sensitivity to sound, provides both analog and digital readings, and is easily interfaced with a microcontroller. The quantity measured for partial discharge is sound level in decibels. The prototype is applied to the cubicle by mounting it on the outgoing cubicle cable and measuring from the cable boot connector to the bottom of the outgoing cable at a distance of 1 meter. The measurement results are then monitored on the TFT LCD screen as readings, graphs, and partial discharge categories. The measurement data can be stored on the microSD card so that the handling of partial discharge in outgoing cable insulation can be evaluated.
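The decibel reading itself is a logarithmic pressure ratio; a sketch of the conversion (hypothetical helper, not the prototype's firmware):

```python
import math

def spl_db(rms_pressure_pa, ref_pa=20e-6):
    """Sound pressure level in dB re 20 micropascals, the standard
    reference for airborne sound such as that picked up by a microphone."""
    return 20.0 * math.log10(rms_pressure_pa / ref_pa)

print(spl_db(20e-6))   # 0.0 dB at the reference pressure
print(spl_db(2e-3))    # 40.0 dB for a 100x larger pressure
```

In practice the microcontroller would compute an RMS value from raw ADC samples and calibrate it against a reference source before applying this conversion.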


Agromet ◽  
2011 ◽  
Vol 25 (1) ◽  
pp. 24
Author(s):  
Satyanto Krido Saptomo

<em>An artificial neural network (ANN) approach was used to model the dissipation of energy into sensible heat and latent heat (evapotranspiration) fluxes. The ANN model has 5 inputs: leaf temperature T<sub>l</sub>, air temperature T<sub>a</sub>, net radiation R<sub>n</sub>, wind speed u<sub>c</sub> and actual vapor pressure e<sub>a</sub>. Adjustment of the ANN was conducted using the back-propagation technique, employing measurement data for the input and output parameters of the ANN. The estimation results using the adjusted ANN show its capability to resemble the heat dissipation process, giving outputs of sensible and latent heat fluxes close to the respective measured values when the measured input values are given. The ANN structure presented in this paper is suitable for modeling similar processes over vegetated surfaces, but the adjusted parameters are unique; therefore, an observation data set and adjustment of the ANN are required for each different vegetation.</em>
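A minimal version of such a network, mirroring the five-input, two-output structure described above (a hypothetical sketch trained on made-up data; a real application needs the measured fluxes and input scaling):

```python
import math, random

def train_ann(data, n_hidden=6, lr=0.1, epochs=2000, seed=0):
    """One-hidden-layer network (5 inputs -> sigmoid hidden -> 2 linear
    outputs) trained by plain back-propagation on (x, y) pairs."""
    rng = random.Random(seed)
    n_in, n_out = 5, 2
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    W2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
    b2 = [0.0] * n_out
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))

    def forward(x):
        h = [sig(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
             for j in range(n_hidden)]
        o = [sum(w * hj for w, hj in zip(W2[k], h)) + b2[k]
             for k in range(n_out)]
        return h, o

    for _ in range(epochs):
        for x, y in data:
            h, o = forward(x)
            do = [o[k] - y[k] for k in range(n_out)]              # output error
            dh = [h[j] * (1 - h[j]) * sum(do[k] * W2[k][j] for k in range(n_out))
                  for j in range(n_hidden)]                       # backpropagated
            for k in range(n_out):
                W2[k] = [w - lr * do[k] * hj for w, hj in zip(W2[k], h)]
                b2[k] -= lr * do[k]
            for j in range(n_hidden):
                W1[j] = [w - lr * dh[j] * xi for w, xi in zip(W1[j], x)]
                b1[j] -= lr * dh[j]
    return lambda x: forward(x)[1]

# Made-up stand-ins for (Tl, Ta, Rn, uc, ea) -> (sensible, latent) fluxes
rng = random.Random(42)
data = []
for _ in range(20):
    x = [rng.random() for _ in range(5)]
    data.append((x, [sum(x) / 5.0, x[0] - x[1]]))
predict = train_ann(data)
```

The "adjustment" the abstract refers to corresponds to the weight updates in the training loop; retraining on a new vegetation's data set produces a new, vegetation-specific set of weights.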

