Improving Signal Statistics Using a Regression Ground Clutter Filter. Part 1: Theory and Simulations

Author(s):  
J.C. Hubbert ◽  
G. Meymaris ◽  
U. Romatschke ◽  
M. Dixon

Abstract: Ground clutter filtering is an important and necessary step for quality control of ground-based weather radars. In this two-part paper, ground clutter mitigation is addressed using a time-domain regression filter. Clutter filtering is now widely accomplished with spectral processing, where the time series of data corresponding to a radar resolution volume are transformed with a Discrete Fourier Transform, after which the zero and near-zero velocity clutter components are eliminated by setting them to zero. Subsequently, for reflectivity, velocity and spectrum width estimates, interpolation techniques are used to recover some of the power loss due to the clutter filter, which has been shown to reduce bias. The spectral technique requires that the I (in-phase) and Q (quadrature) time series be windowed in order to reduce clutter power leakage away from zero and near-zero velocities. Unfortunately, window functions such as the Hamming, Hann and Blackman attenuate the time series signal by 4.01, 4.19 and 5.23 dB for 64-point time series, respectively, and thereby effectively reduce the number of independent samples available for estimating the radar parameters of any underlying weather echo. Here in Part 1, a regression filtering technique is investigated, via simulated data, which does not require the use of such window functions and thus provides better weather signal statistics. In Part 2 (Hubbert et al. 2021) the technique is demonstrated using both S-Pol and NEXRAD data. It is shown that the regression filter rejects clutter as effectively as the spectral technique but has the distinct advantage that estimates of the radar variables are greatly improved. The technique is straightforward and can be executed in real time.
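As a hedged illustration of the time-domain idea (not the authors' implementation; the polynomial order, sample count, and simulated signal below are assumptions), a regression clutter filter can be sketched by fitting a low-order polynomial to the I and Q series and subtracting the fit, with no window function applied:

```python
import numpy as np

def regression_clutter_filter(iq, order=3):
    """Suppress zero/near-zero velocity clutter by fitting a low-order
    polynomial to the I and Q series and subtracting the fit.
    No window function is applied to the time series."""
    t = np.linspace(-1.0, 1.0, len(iq))
    fit_i = np.polyval(np.polyfit(t, iq.real, order), t)
    fit_q = np.polyval(np.polyfit(t, iq.imag, order), t)
    return iq - (fit_i + 1j * fit_q)

# 64-sample simulation: strong stationary clutter (a constant complex
# offset, i.e. zero Doppler velocity) plus a weaker weather signal.
n = 64
t = np.arange(n)
clutter = 10.0 * np.exp(1j * 0.3)             # ~20 dB above the weather signal
weather = 1.0 * np.exp(2j * np.pi * 0.2 * t)  # nonzero Doppler frequency
filtered = regression_clutter_filter(clutter + weather, order=3)
```

On this simulation the filter removes the zero-velocity component almost entirely while leaving the oscillating weather signal nearly untouched, since a cubic fit cannot track the fast Doppler oscillation.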

Author(s):  
Patricia Melin ◽  
Oscar Castillo

In this article, the evolution in space and in time of the coronavirus pandemic is studied by utilizing a neural network with a self-organizing nature for the spatial analysis of data, and a fuzzy fractal method for capturing the temporal trends of the time series of the countries. Self-organizing neural networks possess the capability of clustering countries in the space domain based on their similar characteristics with respect to their coronavirus cases. In this way, countries having similar behavior can be found, and these can benefit from utilizing the same methods in fighting the virus propagation. To validate the approach, publicly available datasets of coronavirus cases worldwide have been used. In addition, a fuzzy fractal approach is utilized for the temporal analysis of the time series of the countries. Then, a hybrid combination of both the self-organizing maps and the fuzzy fractal approach is proposed for efficient COVID-19 forecasting for the countries. Relevant conclusions have emerged from this study that may be of great help in putting forward the best possible strategies for fighting the virus pandemic. Many of the existing works concerned with the coronavirus have looked at the problem mostly from the temporal viewpoint, which is of course relevant, but we strongly believe that combining both aspects of the problem improves the forecasting ability. The most relevant contribution of this article is the proposal of combining neural networks with a self-organizing nature for clustering countries with high similarity and the fuzzy fractal approach for forecasting the time series and helping to plan control actions for the coronavirus pandemic.
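The authors' fuzzy fractal method is not reproduced here; as a hedged illustration of extracting a fractal descriptor from a case-count time series, a standard alternative is Higuchi's fractal dimension estimator (the choice of Higuchi's method and of kmax are assumptions, not the article's procedure):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D series (Higuchi's method):
    normalized curve lengths L(k) at sub-sampling scale k scale as k**(-FD)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized length of the sub-sampled curve starting at offset m
            dist = np.abs(np.diff(x[idx])).sum()
            lk.append(dist * (n - 1) / ((len(idx) - 1) * k) / k)
        lengths.append(np.mean(lk))
    # slope of log L(k) versus log(1/k) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

line_fd = higuchi_fd(np.arange(1000.0))   # smooth monotone trend: FD near 1
noise_fd = higuchi_fd(np.random.default_rng(0).normal(size=1000))  # FD near 2
```

A smooth epidemic trend yields a dimension near 1, while an erratic series yields a value near 2, which is the kind of temporal-complexity feature a fractal approach can feed into a forecaster.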


Author(s):  
Aloisio S. N. Filho ◽  
Thiago Barros Murari ◽  
Marcelo A. Moret

This paper evaluates the effects on gasoline prices of the liberalization of the Brazilian downstream oil chain in the late 1990s. At that stage the Brazilian government stopped setting the maximum and minimum prices of all fuels. For this purpose, type C gasoline prices were collected from fifteen relevant cities in the five economic regions of Brazil between the years 2005 and 2014. A sequence of computational techniques was applied to these datasets. Stationarity and linearity of the price-variation time series were analyzed for all cities, as were the correlations among all cities, in order to recognize patterns in the time series. Furthermore, the Cumulative Sum (CUSUM) control chart was used to detect smaller parameter shifts in the distribution of the time series. Our results revealed distinct patterns between the middle of 2005 and the middle of 2006, and also between the first months of 2011 and the middle of 2012, reinforcing the idea that Brazilian retail and distribution are governed strongly by exogenous factors. This makes a conventional analysis difficult to apply, since the Brazilian downstream fuel chain appears to be a complex system.
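A minimal tabular CUSUM sketch of the kind of shift detection described above (the allowance k, threshold h, and the synthetic price series are illustrative assumptions, not the paper's settings):

```python
def cusum_alarm(x, target, k=0.5, h=5.0):
    """Tabular CUSUM chart: accumulate deviations beyond the allowance k
    in both directions; return the index of the first alarm, or -1."""
    s_hi = s_lo = 0.0
    for i, v in enumerate(x):
        s_hi = max(0.0, s_hi + (v - target - k))
        s_lo = max(0.0, s_lo + (target - v - k))
        if s_hi > h or s_lo > h:
            return i
    return -1

# A price-variation series whose mean shifts upward by 2 units at index 50:
prices = [0.0] * 50 + [2.0] * 30
alarm = cusum_alarm(prices, target=0.0)   # alarm fires a few samples after 50
```

Because each post-shift sample adds 2 − 0.5 = 1.5 to the upper sum, the chart crosses h = 5 on the fourth post-shift observation, illustrating why CUSUM is effective at catching small, persistent parameter shifts.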


2017 ◽  
Vol 17 (1) ◽  
pp. 7-19
Author(s):  
Mariusz Doszyń

Abstract The main aim of the article is to propose a forecasting procedure that could be useful in the case of randomly distributed zero-inflated time series. Many economic time series are randomly distributed, so it is not possible to estimate any kind of statistical or econometric model such as, for example, a count data regression model. This is why a new forecasting procedure based on stochastic simulation is proposed in the article. Before it is used, the randomness of the time series should be considered. The hypothesis stating the randomness of the time series with regard to both sales sequences and sales levels is verified. Moreover, an ex post forecast error that can also be computed for a zero-inflated time series is proposed in the article. All of the above-mentioned elements were developed by the author. In the empirical example, the described procedure was applied to forecast the sales of products in a company located in the vicinity of Szczecin (Poland), so real data were analysed. The accuracy of the forecast was verified as well.
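The author's exact procedure is not reproduced here; a simple two-part bootstrap sketch of forecasting a zero-inflated sales series by stochastic simulation (the resampling scheme and the toy history below are assumptions for illustration):

```python
import numpy as np

def simulate_forecasts(history, horizon, n_sims=1000, seed=0):
    """Two-part stochastic simulation for a zero-inflated series:
    draw occurrence (sale / no sale) from the empirical nonzero frequency,
    and sale sizes by resampling the historical nonzero values."""
    rng = np.random.default_rng(seed)
    history = np.asarray(history)
    nonzero = history[history > 0]
    p_sale = len(nonzero) / len(history)
    occurs = rng.random((n_sims, horizon)) < p_sale
    sizes = rng.choice(nonzero, size=(n_sims, horizon))
    return occurs * sizes

sales = [0, 0, 3, 0, 1, 0, 0, 2, 0, 0, 4, 0]   # zero-inflated sales history
sims = simulate_forecasts(sales, horizon=6)
point_forecast = sims.mean(axis=0)             # expected sales per period
```

Averaging the simulated paths gives a point forecast, while the simulated distribution itself supports ex post error measures that remain meaningful when most observations are zero.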


2011 ◽  
Vol 7 (4) ◽  
pp. 567-570 ◽  
Author(s):  
JAIME ROS

Abstract: In his comprehensive analysis of the relationship between institutions and economic growth, Ha-Joon Chang, in his article ‘Institutions and Economic Development: Theory, Policy and History’, reviews the empirical evidence on this relationship emphasizing the contrast between the conclusions that one can derive from the time-series evidence and the claims often made in favor of ‘liberalized institutions’ based on the results of cross-section studies. Does the time-series evidence contradict the results of cross-section studies regarding the relationship between institutions and growth? In this comment, I argue that in stressing the contrast between these two kinds of evidence, Chang falls short of a full criticism, consistent with his theoretical analysis, of cross-section studies while at the same time failing to infer what the time-series evidence really shows.


Author(s):  
Mehmet Sayal

A time series is a sequence of data values that are recorded at equal or varying time intervals. Time series data usually include timestamps that indicate the time at which each individual value in the time series was recorded. Time series data are usually transmitted in the form of a data stream, i.e., a continuous flow of data values. The source of time series data can be any system that measures and records data values over the course of time. Some examples of time series data are stock values, the blood pressure of a patient, the temperature of a room, the amount of a product in an inventory, and the amount of precipitation in a region. Proper analysis and mining of time series data may yield valuable knowledge about the underlying characteristics of the data source. Time series analysis and mining have applications in many domains, such as financial, biomedical, and meteorological applications, because time series data may be generated by various sources in different domains.
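Since values may arrive at varying intervals, a common first processing step is resampling an irregular series onto a regular grid; a minimal sketch using linear interpolation (the timestamps and readings are hypothetical):

```python
import numpy as np

# (timestamp_seconds, value) pairs recorded at varying time intervals,
# e.g. temperature readings from a sensor
timestamps = np.array([0.0, 1.0, 2.5, 6.0, 10.0])
values = np.array([20.0, 21.0, 22.0, 19.0, 18.0])

# resample onto a regular 2-second grid by linear interpolation
grid = np.arange(0.0, 10.1, 2.0)
resampled = np.interp(grid, timestamps, values)
```

Regularly spaced values are a prerequisite for most downstream analysis and mining methods, which assume a fixed sampling interval.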


2019 ◽  
Vol 624 ◽  
pp. A106 ◽  
Author(s):  
T. Appourchaux ◽  
T. Corbard

Context. The recent claims of g-mode detection have restarted the search for these potentially extremely important modes. The claimed detection of g modes was obtained from the analysis of the power spectrum of the time series of round-trip travel time of p modes. Aims. The goal of this paper is to reproduce the results on which the claims are based, in order to confirm or invalidate the detection of g modes with the method used to make the claims. Methods. We computed the time series of round-trip travel time using the procedure given in Fossat et al. (2017, A&A, 604, A40), and used different variations of the time series for comparison. We used the recently calibrated GOLF data (published in Paper I) with different sampling, different photomultipliers, and different lengths of data to reproduce the analysis. We also correlated the power spectrum with an asymptotic model of g-mode frequencies in a similar manner to Fossat and Schmider (2018, A&A, 612, L1). We devised a scheme for optimising the correlation both for pure noise and for the GOLF data. Results. We confirm the analysis performed in Fossat et al. (2017) but draw different conclusions. Their claimed detection of g modes cannot be confirmed when changing parameters such as the sampling interval, the length of the time series, or the photomultipliers. Other instruments such as GONG and BiSON do not confirm their detection. We also confirm the analysis performed in Fossat and Schmider (2018), but again draw different conclusions. For GOLF, the correlation of the power spectrum with the asymptotic model of g-mode frequencies for l = 1 and l = 2 shows a high correlation at lag = 0 and at the lag corresponding to the rotational splitting νl, but the same occurs for pure noise, due to the large number of peaks present in the model. In addition, very different parameters defining the asymptotic model also provide a high correlation at these lags. We conclude that the detection performed in Fossat and Schmider (2018) is an artefact of the methodology.
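The correlation test at the heart of this argument can be sketched generically: correlate a power spectrum (here synthetic pure noise, not GOLF data) with a many-peaked model comb over a range of lags. The bin count, comb spacing, and lag range below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 4096
spectrum = rng.exponential(size=n_bins)   # pure-noise power spectrum

# dense comb of model frequencies, standing in for the many peaks of
# an asymptotic g-mode frequency model
comb = np.zeros(n_bins)
comb[::17] = 1.0                          # arbitrary spacing, for illustration

def corr_at_lag(spec, model, lag):
    """Pearson correlation of the spectrum with the model shifted by `lag` bins."""
    shifted = np.roll(model, lag)
    return float(np.corrcoef(spec, shifted)[0, 1])

lags = range(-50, 51)
corrs = [corr_at_lag(spectrum, comb, l) for l in lags]
```

Optimising such a correlation over the model's free parameters is exactly the step the paper shows can produce apparently significant values even for pure noise, because a comb with many peaks offers many chances to align with noise fluctuations.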


2020 ◽  
Author(s):  
Senol Çelik ◽  
Handan Ankarali ◽  
Ozge Pasin

Abstract Background: The aim of this study is to explain the changes in outbreak indicators for coronavirus in China with nonlinear models and time series analysis, and to determine the best mathematical model and the best time series method among the many available. Methods: The data were obtained from Chinese records between January 22 and April 21, 2020. The numbers of total cases and total deaths were used for the calculations. The Weibull, Negative Exponential, Von Bertalanffy, Janoschek, Lundqvist-Korf and Sloboda models were used for nonlinear modelling, and AR, MA, ARMA, Holt, Brown and Damped models were used for the time series. The determination coefficient (R2), pseudo R2 and mean square error were used as criteria for selecting the nonlinear model that best describes the number of cases and the number of total deaths, and BIC (Bayesian Information Criterion) was used for the time series. Results: According to our results, the Sloboda model among the growth curves and the ARIMA (0,2,1) model among the time series models were the most suitable for modelling the number of total cases. In addition, the Lundqvist-Korf model among the growth curves and the Holt linear trend exponential smoothing model among the time series models were the most suitable for modelling the number of total deaths. Our time series models forecast that the number of total cases will be 83,311 on 5 May and the number of total deaths will be 5,273. Conclusions: Because the modelling results provide information on measures to be taken and prior information for subsequent similar situations, modelling outbreak indicators for each country separately is of great importance.
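Holt's linear trend exponential smoothing, one of the time series methods compared above, can be sketched in a few lines (the smoothing constants and initialization below are illustrative choices, not the study's fitted values):

```python
def holt_forecast(x, h, alpha=0.8, beta=0.2):
    """Holt's linear trend exponential smoothing: maintain a smoothed level
    and trend, then extrapolate the last level h steps along the trend."""
    level, trend = x[0], x[1] - x[0]
    for v in x[1:]:
        prev_level = level
        level = alpha * v + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (i + 1) * trend for i in range(h)]

# On a perfectly linear cumulative-count series the forecast is exact:
cases = [100 + 50 * t for t in range(30)]
fc = holt_forecast(cases, h=3)   # continues the +50-per-step trend
```

Unlike simple exponential smoothing, the explicit trend component lets the forecast keep climbing, which is why Holt-type methods suit steadily growing cumulative case or death counts.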


2021 ◽  
Vol 13 (11) ◽  
pp. 2075
Author(s):  
J. David Ballester-Berman ◽  
Maria Rastoll-Gimenez

The present paper focuses on a sensitivity analysis of Sentinel-1 backscattering signatures from oil palm canopies cultivated in Gabon, Africa. We employed one Sentinel-1 image per year during the 2015–2021 period, creating two separate time series for the wet and dry seasons. The first images were acquired almost simultaneously with the initial growth stage of the oil palm plants. The VH and VV backscattering signatures were analysed in terms of their corresponding statistics for each date and compared to those corresponding to tropical forests. The time series for the wet season showed that, in a time interval of 2–3 years after oil palm plantation, the VV/VH ratio in oil palm parcels increases above the one for forests. Backscattering and VV/VH ratio time series for the dry season exhibit similar patterns as for the wet season but with a more stable behaviour. The separability of the oil palm and forest classes was also quantitatively addressed by means of the Jeffries–Matusita distance, which seems to point to the C-band VV/VH ratio as a potential candidate for discrimination between oil palms and natural forests, although further analysis must still be carried out. In addition, issues related to the effect of the number of samples in this particular scenario were also analysed. Overall, the outcomes presented here can contribute to the understanding of the radar signatures of this scenario and potentially improve the accuracy of mapping techniques for this type of ecosystem by using remote sensing. Nevertheless, further research remains to be done, as no classification method was applied owing to the lack of the required geocoded reference map. In particular, a statistical assessment of the radar signatures should be carried out to statistically characterise the observed trends.
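For two classes modelled as Gaussians, the Jeffries–Matusita distance is JM = 2(1 − e^(−B)), with B the Bhattacharyya distance. A one-dimensional sketch (the Gaussian assumption and the example class statistics are illustrative, not the paper's measured values):

```python
import math

def jeffries_matusita(mu1, var1, mu2, var2):
    """JM distance between two 1-D Gaussian class distributions.
    Equals 0 for identical classes and saturates at 2 for full separability."""
    b = ((mu1 - mu2) ** 2 / (4.0 * (var1 + var2))
         + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))
    return 2.0 * (1.0 - math.exp(-b))

# hypothetical VV/VH ratio statistics (in dB) for oil palm vs. forest
jm_close = jeffries_matusita(0.0, 1.0, 0.1, 1.0)   # near 0: hard to separate
jm_far = jeffries_matusita(0.0, 1.0, 10.0, 1.0)    # near 2: well separated
```

The saturation at 2 is what makes JM preferable to raw Bhattacharyya or divergence measures for class-separability ranking in remote sensing.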


2021 ◽  
Vol 13 (15) ◽  
pp. 8295
Author(s):  
Patricia Melin ◽  
Oscar Castillo

In this article, the evolution in both space and time of the COVID-19 pandemic is studied by utilizing a neural network with a self-organizing nature for the spatial analysis of data, and a fuzzy fractal method for capturing the temporal trends of the time series of the countries considered in this study. Self-organizing neural networks possess the capability to cluster countries in the space domain based on their similar characteristics, with respect to their COVID-19 cases. This enables the finding of countries that have a similar behavior, and thus can benefit from utilizing the same methods in fighting the virus propagation. In order to validate the approach, publicly available datasets of COVID-19 cases worldwide have been used. In addition, a fuzzy fractal approach is utilized for the temporal analysis of the time series of the countries considered in this study. Then, a hybrid combination, using fuzzy rules, of both the self-organizing maps and the fuzzy fractal approach is proposed for efficient coronavirus disease 2019 (COVID-19) forecasting for the countries. Relevant conclusions have emerged from this study that may be of great help in putting forward the best possible strategies for fighting the virus pandemic. Many of the existing works concerned with COVID-19 look at the problem mostly from a temporal viewpoint, which is of course relevant, but we strongly believe that the combination of both aspects of the problem is relevant for improving the forecasting ability. The main idea of this article is to combine neural networks with a self-organizing nature for clustering countries with a high similarity and the fuzzy fractal approach for forecasting the time series. Simulation results on COVID-19 data from countries around the world show the ability of the proposed approach to first spatially cluster the countries and then to accurately predict in time the COVID-19 data for different countries with a fuzzy fractal approach.
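A tiny self-organizing map can illustrate the clustering step (a minimal 1-D SOM sketch, not the authors' architecture; the per-country feature vectors, map size, and learning schedule are assumptions):

```python
import numpy as np

def train_som(data, n_nodes=4, epochs=100, lr0=0.5):
    """Train a tiny 1-D self-organizing map: for each sample, move the
    best-matching node and, with a shrinking neighborhood, its neighbors
    toward the sample; learning rate and neighborhood decay over epochs."""
    rng = np.random.default_rng(0)
    w = data[rng.integers(len(data), size=n_nodes)].astype(float).copy()
    for e in range(epochs):
        frac = 1.0 - e / epochs
        sigma = max(0.5, n_nodes / 2.0 * frac)   # neighborhood width decays
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            dist = np.arange(n_nodes) - bmu
            h = np.exp(-dist ** 2 / (2 * sigma ** 2))
            w += (lr0 * frac) * h[:, None] * (x - w)
    return w

def best_node(w, x):
    return int(np.argmin(np.linalg.norm(w - x, axis=1)))

# hypothetical per-country features, e.g. (cases per 100k, growth rate),
# forming two well-separated groups of countries
group_a = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1]])
group_b = np.array([[10.0, 10.0], [9.8, 10.2], [10.3, 9.7]])
weights = train_som(np.vstack([group_a, group_b]))
```

After training, countries from the two groups map to different nodes, giving the spatial clusters that a per-cluster temporal forecaster can then be fitted to.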


2012 ◽  
Vol 8 (1) ◽  
pp. 89-115 ◽  
Author(s):  
V. K. C. Venema ◽  
O. Mestre ◽  
E. Aguilar ◽  
I. Auer ◽  
J. A. Guijarro ◽  
...  

Abstract. The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative.
Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
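Two of the validation metrics can be sketched directly (a hedged illustration; the benchmark's exact definitions, averaging scales, and data are not reproduced here):

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """RMSE after removing each series' own mean, so a constant offset
    (which relative homogenization cannot determine) is not penalized."""
    a = homogenized - np.mean(homogenized)
    b = truth - np.mean(truth)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def trend_error(homogenized, truth):
    """Difference between fitted linear trends (slope per time step)."""
    t = np.arange(len(truth))
    return float(np.polyfit(t, homogenized, 1)[0] - np.polyfit(t, truth, 1)[0])

# a series that differs from the truth only by a constant offset scores
# zero on both metrics, illustrating why the RMSE is "centered"
truth = np.sin(np.linspace(0, 6, 120)) + 0.01 * np.arange(120)
shifted = truth + 3.0
```

Centering matters because a homogenized series shifted by a constant is climatologically equivalent to the truth; only residual variability and trend distortions should count against an algorithm.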

