Deep Temporal Filter: An LSTM-Based Approach to Filter Noise from a TDC-Based SPAD Receiver

Author(s):  
Gurjeet Singh ◽  
Sunmiao ◽  
Patrick Chiang
2021 ◽  
Vol 13 (13) ◽  
pp. 2433
Author(s):  
Shu Yang ◽  
Fengchao Peng ◽  
Sibylle von Löwis ◽  
Guðrún Nína Petersen ◽  
David Christian Finger

Doppler lidars are used worldwide for wind monitoring and, more recently, also for aerosol detection. Automatic algorithms that classify retrieved lidar signals are very useful to end-users. In this study, we explore the value of machine learning for classifying backscattered signals from Doppler lidars using data from Iceland. We combined supervised and unsupervised machine learning algorithms with conventional lidar data processing methods and trained two models to filter noise and classify Doppler lidar observations into classes including clouds, aerosols and rain. The results reveal high accuracy for noise identification and for aerosol and cloud classification; precipitation detection, however, is underestimated. The method was tested on data sets from two instruments under different weather conditions, including three dust storms during the summer of 2019. Our results show that the method can provide efficient, accurate and real-time classification of lidar measurements. Accordingly, we conclude that machine learning can open new opportunities for lidar data end-users, such as aviation safety operators, to monitor dust in the vicinity of airports.
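The two-stage pipeline the abstract describes (first filter noise, then assign each observation to a class) can be sketched with a simple rule-plus-nearest-centroid scheme. The SNR threshold, the two features, and the centroid values below are illustrative assumptions, not the models trained in the study:

```python
import numpy as np

# Illustrative two-stage classification of Doppler-lidar range gates:
# stage 1 rejects noise by signal-to-noise ratio (SNR); stage 2 assigns the
# remaining gates to a class by nearest centroid in a two-feature space
# (backscatter intensity, Doppler velocity spread). All numbers are
# hypothetical placeholders.

CLASS_CENTROIDS = {
    "cloud":   np.array([0.9, 0.2]),   # strong return, low velocity spread
    "aerosol": np.array([0.4, 0.1]),   # moderate return, low spread
    "rain":    np.array([0.7, 0.8]),   # strong return, high spread
}
SNR_THRESHOLD = 0.1  # gates below this SNR are treated as noise

def classify_gate(snr, intensity, spread):
    """Return a class label for one range gate, or 'noise'."""
    if snr < SNR_THRESHOLD:
        return "noise"
    feature = np.array([intensity, spread])
    # pick the class whose centroid is closest in feature space
    return min(CLASS_CENTROIDS,
               key=lambda c: np.linalg.norm(feature - CLASS_CENTROIDS[c]))
```

In practice the study combines supervised and unsupervised learning rather than fixed centroids; the sketch only shows where the noise filter sits relative to the classifier.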


Author(s):  
Timothy C. Allison ◽  
J. Jeffrey Moore

The effectiveness of fatigue and life prediction methods depends heavily on accurate knowledge of the static and dynamic stresses acting on a structure. Although stress fields may be calculated from the finite element shape functions if a finite element model is constructed and analyzed, in many cases the cost of constructing and analyzing such a model is prohibitive, and modeling errors can severely affect the accuracy of stress simulations. This paper presents an empirical method for predicting the transient dynamic stress response of a structure from measured load and strain data that can be collected during vibration tests. The method applies the proper orthogonal decomposition to a measured data set to filter noise and reduce the size of the identification problem, and then employs a matrix deconvolution technique to decouple and identify the reduced-coordinate impulse response functions for the structure. The method is applied to simulation data from an axial compressor blade model and produces accurate stress predictions compared to finite element results.
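The reduction step described above (proper orthogonal decomposition used to filter noise and shrink the identification problem) can be sketched via the singular value decomposition of a snapshot matrix; the rank cutoff is an illustrative assumption, and the deconvolution step is not shown:

```python
import numpy as np

def pod_reduce(snapshots, rank):
    """Project measured snapshots (n_sensors x n_times) onto the leading
    POD modes. Discarding low-energy modes filters noise and shrinks the
    subsequent identification problem to `rank` coordinates."""
    # POD modes are the left singular vectors of the snapshot matrix
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :rank]               # dominant spatial modes
    reduced = modes.T @ snapshots     # reduced-coordinate time histories
    reconstructed = modes @ reduced   # de-noised full-field estimate
    return modes, reduced, reconstructed
```

A quick check on synthetic data: a rank-1 strain field corrupted by measurement noise is recovered closely by a rank-1 projection, which is the sense in which POD "filters noise" here.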


2017 ◽  
Vol 33 (3) ◽  
pp. 875-894 ◽  
Author(s):  
Tadahiro Kishida ◽  
Danilo Di Giacinto ◽  
Giuseppe Iaccarino

Numerous time series from small-to-moderate-magnitude (SMM) earthquakes have been recorded in many regions. A uniformly processed ground-motion database is essential for the development of regional ground-motion models, and an automated processing protocol is useful for building such a database, especially when the number of recordings is substantial. This study compares a manual and an automated ground-motion processing method using SMM earthquakes. The manual method was developed by the Pacific Earthquake Engineering Research Center to build a database of time series and associated ground-motion parameters. The automated protocol was developed to build a database of pseudo-spectral acceleration for Kiban-Kyoshin network recordings. Two significant differences were observed when the two methods were applied to identical acceleration time series. First, the methods differed in their criteria for accepting or rejecting a time series into the database. Second, they differed in the high-pass corner frequency used to filter noise from the acceleration time series. The influence of these differences on ground-motion parameters was investigated to assess the quality of ground-motion databases for SMM earthquakes.
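The high-pass step both protocols apply (attenuating low-frequency noise below a chosen corner frequency in the acceleration trace) can be sketched with a first-order recursive filter. Real strong-motion pipelines typically use higher-order acausal Butterworth filters, and the corner frequency below is an illustrative assumption:

```python
import numpy as np

def highpass_first_order(acc, dt, f_corner):
    """First-order recursive high-pass filter for an acceleration time
    series `acc` sampled at interval `dt` (s), with corner frequency
    `f_corner` (Hz). Drift and offsets below the corner are attenuated."""
    rc = 1.0 / (2.0 * np.pi * f_corner)
    alpha = rc / (rc + dt)
    out = np.empty(len(acc), dtype=float)
    out[0] = acc[0]
    for i in range(1, len(acc)):
        # standard discrete-time RC high-pass recursion
        out[i] = alpha * (out[i - 1] + acc[i] - acc[i - 1])
    return out
```

Raising `f_corner` removes more long-period noise but also more real long-period signal, which is exactly why the corner-frequency choice was one of the two key differences between the manual and automated protocols.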


2013 ◽  
Vol 17 (6) ◽  
pp. 2121-2129 ◽  
Author(s):  
N. F. Liu ◽  
Q. Liu ◽  
L. Z. Wang ◽  
S. L. Liang ◽  
J. G. Wen ◽  
...  

Abstract. Land-surface albedo plays a critical role in studies of the earth's radiant energy budget. Satellite remote sensing provides an effective approach to acquiring regional and global albedo observations. Owing to cloud cover, seasonal snow and sensor malfunctions, spatiotemporally continuous albedo datasets are often unavailable. The Global LAnd Surface Satellite (GLASS) project aims to provide a suite of key land-surface parameter datasets with high temporal resolution and high accuracy for global change studies. The GLASS preliminary albedo datasets are global daily land-surface albedo generated by an angular bin algorithm (Qu et al., 2013). Like other products, the GLASS preliminary albedo datasets are affected by large areas of missing data; in addition, sharp fluctuations appear in the preliminary albedo time series due to data noise and algorithm uncertainties. Based on Bayesian theory, a statistics-based temporal filter (STF) algorithm is proposed in this paper to fill data gaps, smooth albedo time series, and generate the GLASS final albedo product. The STF algorithm yields smooth, gapless albedo time series with uncertainty estimates. The performance of the STF method was tested on one tile (H25V05) and at three ground stations. Results show that the STF method greatly improves the integrity and smoothness of the GLASS final albedo product, and seasonal trends in albedo are well depicted. Compared with the MODerate resolution Imaging Spectroradiometer (MODIS) product, the GLASS final albedo product has higher temporal resolution and captures surface albedo variations better. It is recommended that the quality flag always be checked before using the GLASS final albedo product.
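The gap-filling-and-smoothing idea (combining a noisy daily retrieval with information from neighbouring days, and reporting an uncertainty alongside each filled value) can be sketched as a Gaussian-weighted temporal average that skips missing days. The window width and the uncertainty measure are illustrative assumptions, not the STF algorithm itself:

```python
import numpy as np

def temporal_filter(albedo, sigma_days=8.0):
    """Fill gaps (NaN) and smooth a daily albedo series by a Gaussian-
    weighted average over valid neighbouring days. Returns the smoothed
    series and a crude per-day uncertainty (weighted standard deviation)."""
    n = len(albedo)
    days = np.arange(n)
    valid = ~np.isnan(albedo)
    smoothed = np.empty(n)
    uncert = np.empty(n)
    for i in range(n):
        # Gaussian weights in time, zeroed where the retrieval is missing
        w = np.exp(-0.5 * ((days - i) / sigma_days) ** 2) * valid
        wsum = w.sum()
        mean = np.nansum(w * albedo) / wsum
        smoothed[i] = mean
        uncert[i] = np.sqrt(np.nansum(w * (albedo - mean) ** 2) / wsum)
    return smoothed, uncert
```

As in the STF output, the result is a gapless, smooth series with an uncertainty attached to every day, including days that were originally missing.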


Author(s):  
Vladimir F. Telezhkin ◽  
◽  
Bekhruz B. Saidov ◽  

In this paper, we investigate the problem of improving data quality using the Kalman filter in Matlab Simulink. This filter has recently become one of the most common algorithms for filtering and processing data in control systems (including automated control systems) and in software for digitally filtering noise and interference from, for example, speech signals. It is also widely used in many fields of science and technology; thanks to its simplicity and efficiency, it can be found in GPS receivers, in devices for processing sensor readings for various purposes, and elsewhere. One of the important tasks in processing sensor readings is the ability to detect and filter noise: sensor noise leads to unstable measurement data, which ultimately reduces the accuracy and performance of the control device. One approach to the optimal filtering problem is the development of algorithms based on the Kalman and Wiener filters. Filtering can be implemented in two forms, hardware and software. Hardware filtering can be built electronically, but it is less efficient because it requires additional circuitry in the system. To overcome this obstacle, filtering can instead be implemented entirely in software; besides requiring no additional hardware, software filtering can be more accurate because it is performed numerically. The paper analyzes the results of applying the Kalman filter to eliminate errors when measuring the coordinates of a tracked target and to obtain a "smoothed" trajectory, and presents results of applying the filter to electrocardiogram processing. The Kalman filter algorithm is based on recursive estimation of the measured state of the object under study.
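The recursive estimation procedure the paper builds on can be sketched as a scalar Kalman filter for a (nearly) constant state observed with noise; the process and measurement variances below are illustrative assumptions, not values from the paper:

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: at each step, predict (state unchanged,
    uncertainty p grows by process variance q), then correct the
    prediction with the measurement z, weighted by the Kalman gain k.
    r is the measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # predict: constant-state model, uncertainty grows
        p = p + q
        # update: blend prediction and measurement by the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Fed a noisy sequence of readings around a fixed value, the estimate settles near that value with far less scatter than the raw measurements, which is the "smoothing" effect exploited for target trajectories and ECG traces in the paper.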

