Optimal algorithm for distinguishing radio navigation signals by a set of “spaced” correlators

2021 ◽  
Vol 8 ◽  
pp. 11-23
Author(s):  
V.N. Kharisov ◽  
D.A. Eremeev

The classical algorithm for signal distinction, detection, and parameter estimation analyzes discrete parameter values using a bank of correlators: the parameter value whose correlator has the maximum absolute output is taken as the estimate. This inevitably costs sensitivity and noise immunity, since the discrete grid values do not coincide exactly with the true parameter values of the real signal; even at large signal-to-noise ratios, the estimation accuracy is limited by the correlator placement interval. It is therefore of interest to use the entire set of correlators optimally for parameter estimation and signal detection. The article derives an algorithm for distinguishing signals by a given parameter using a set of "spaced" correlators. Unlike the classical algorithm, its decision statistic is formed not from one correlator but from a pair of neighboring correlators, offset by the correlation interval. First, the number of the interval between correlators is estimated from the maximum of the decision statistic, and then the parameter value is refined within that interval. The algorithm also yields an estimate of the signal amplitude. The proposed algorithm is compared with the classical one: simulation is used to plot, for both algorithms, the dependence of the average probability of correct distinction on the energy potential. It is shown that the proposed algorithm has a higher probability of correct distinction than the classical algorithm, and that the maximum and average energy losses of the "spaced"-correlator algorithm are smaller than those of the classical one. Thus, the proposed algorithm offers greater noise immunity and higher accuracy in estimating the desired parameter than the classical distinction algorithm.
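The pairwise decision statistic is easy to illustrate. Below is a minimal sketch in Python, assuming an idealized triangular correlation function and correlators spaced by exactly one correlation interval; under those assumptions the sum of two neighboring correlator outputs estimates the amplitude, and their ratio locates the parameter inside the interval. All function names and numbers are illustrative, not taken from the paper.

```python
import numpy as np

def triangle_acf(tau, Tc):
    """Idealized triangular correlation function of a ranging code (support +/- Tc)."""
    return np.maximum(0.0, 1.0 - np.abs(tau) / Tc)

def spaced_pair_estimate(y, grid, delta):
    """Pick the interval whose pair statistic y[k] + y[k+1] is maximal, then
    refine delay and amplitude inside it (valid for the triangular ACF with
    correlator spacing equal to the correlation interval)."""
    pair_stat = y[:-1] + y[1:]                 # decision statistic per interval
    k = int(np.argmax(pair_stat))              # step 1: interval number
    A_hat = pair_stat[k]                       # amplitude estimate
    d = delta * y[k + 1] / max(pair_stat[k], 1e-12)
    return grid[k] + d, A_hat                  # step 2: refined delay

rng = np.random.default_rng(0)
Tc = delta = 1.0                               # chip time; spacing = correlation interval
grid = np.arange(0.0, 10.0, delta)             # correlator placement grid
tau_true, A_true = 4.37, 2.0
y = A_true * triangle_acf(tau_true - grid, Tc) + 0.05 * rng.standard_normal(grid.size)

tau_classical = grid[np.argmax(np.abs(y))]     # classical: argmax over single correlators
tau_pair, A_hat = spaced_pair_estimate(y, grid, delta)
print(f"classical: {tau_classical:.2f}  pair: {tau_pair:.3f}  A_hat: {A_hat:.2f}")
```

With the triangular model the refinement step is exact in the noise-free case, which is why the grid spacing no longer bounds the accuracy at high signal-to-noise ratios.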

Author(s):  
Jacob Laurel ◽  
Sasa Misailovic

Abstract Probabilistic programming offers a concise way to represent stochastic models and perform automated statistical inference. However, many real-world models have discrete or hybrid discrete-continuous distributions, for which existing tools may suffer non-trivial limitations. Inference and parameter estimation can be exceedingly slow for these models because many inference algorithms compute results faster (or exclusively) when the distributions being inferred are continuous. To address this discrepancy, this paper presents Leios, the first approach for systematically approximating arbitrary probabilistic programs that have discrete or hybrid discrete-continuous random variables. The approximate programs have all their variables fully continualized. We show that once we have the fully continuous approximate program, we can perform inference and parameter estimation faster by exploiting the existing support that many languages offer for continuous distributions. Furthermore, we show that the estimates obtained when performing inference and parameter estimation on the continuous approximation remain comparably close both to the true parameter values and to the estimates obtained when performing inference on the original model.
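The paper defines Leios's actual transformation rules; as a rough illustration of what continualization means, the hedged sketch below replaces a Bernoulli switch in a toy hybrid model with a moment-matched Normal so that every variable in the program becomes continuous. The moment-matching rule here is an assumption for illustration only, not Leios's method.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 0.3, 100_000

# hybrid model: a discrete switch feeding a continuous observable
x_disc = rng.binomial(1, p, n)                          # Bernoulli(p)
y_disc = rng.normal(5.0 * x_disc, 1.0)

# continualized version: the Bernoulli is replaced by a moment-matched Normal
# (an illustrative rule only -- NOT the transformation Leios actually applies),
# so every variable in the program is now continuous
x_cont = rng.normal(p, np.sqrt(p * (1.0 - p)), n)
y_cont = rng.normal(5.0 * x_cont, 1.0)

print(f"E[y] hybrid: {y_disc.mean():.3f}  continualized: {y_cont.mean():.3f}")
```

Because the first two moments agree, summary statistics of the observable match across the two programs, while the continualized program is amenable to gradient-based continuous inference.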


2014 ◽  
Vol 26 (3) ◽  
pp. 472-496 ◽  
Author(s):  
Levin Kuhlmann ◽  
Michael Hauser-Raspe ◽  
Jonathan H. Manton ◽  
David B. Grayden ◽  
Jonathan Tapson ◽  
...  

Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM), which is computationally slow and limits the potential for studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initializations of the parameter estimates far from the true parameter values. However, estimation accuracy depends on the range of the true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI, so that one can take advantage of the energy-efficient spike coding of BSNs.


2017 ◽  
Vol 60 (5) ◽  
pp. 1699-1712
Author(s):  
Subodh Acharya ◽  
Melanie Correll ◽  
James W. Jones ◽  
Kenneth J. Boote ◽  
Phillip D. Alderman ◽  
...  

Abstract. Parameter estimation is a critical step in the successful application of dynamic crop models to simulate crop growth and yield under various climatic and management scenarios. Although inverse modeling parameterization techniques significantly improve the predictive capabilities of models, whether these approaches can recover the true parameter values of a specific genotype or cultivar is seldom investigated. In this study, we applied a Markov chain Monte Carlo (MCMC) method to the DSSAT dry bean model to estimate (recover) the genotype-specific parameters (GSPs) of 150 synthetic recombinant inbred lines (RILs) of dry bean. The synthetic parents of the population were assigned contrasting GSP values obtained from a database, and each of these GSPs was associated with several quantitative trait loci. A standard inverse modeling approach that estimated all GSPs simultaneously generated a set of values that could reproduce the original synthetic observations, but many of the estimated GSP values differed significantly from the original values. However, when parameter estimation was carried out sequentially in a stepwise manner, following the genetically controlled plant development process, most of the estimated parameters had values similar to the original ones. Developmental parameters were estimated more accurately than those related to dry mass accumulation. This new approach appears to reduce the problem of equifinality in parameter estimation, and it is especially relevant if attempts are made to relate parameter values to individual genes.
Keywords: Crop models, Equifinality, Genotype-specific parameters, Markov chain Monte Carlo, Parameterization.
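As a hedged illustration of the stepwise idea (not the DSSAT model or the authors' code), the sketch below estimates two parameters of a hypothetical toy crop model sequentially with a random-walk Metropolis sampler: a development parameter is estimated first from phenology data alone, then fixed while a growth parameter is estimated from biomass data.

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis(logpost, theta0, steps=5000, scale=0.1):
    """Random-walk Metropolis sampler for one block of parameters."""
    theta = np.array(theta0, dtype=float)
    lp = logpost(theta)
    draws = np.empty((steps, theta.size))
    for s in range(steps):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:    # accept/reject
            theta, lp = prop, lp_prop
        draws[s] = theta
    return draws

# toy "crop" model (hypothetical, NOT DSSAT): flowering day depends only on a
# development parameter p1; final biomass depends on both p1 and a growth rate p2
p1_true, p2_true = 40.0, 1.8
obs_flower = p1_true + rng.normal(0.0, 1.0)
obs_biomass = p2_true * (120.0 - p1_true) + rng.normal(0.0, 2.0)

# step 1: estimate the development parameter from phenology data alone
lp1 = lambda t: -0.5 * (obs_flower - t[0]) ** 2            # sigma = 1
p1_hat = metropolis(lp1, [30.0], scale=1.0)[2500:].mean()

# step 2: fix p1 and estimate the growth parameter from biomass data
lp2 = lambda t: -0.5 * ((obs_biomass - t[0] * (120.0 - p1_hat)) / 2.0) ** 2
p2_hat = metropolis(lp2, [1.0], scale=0.05)[2500:].mean()

print(f"p1_hat = {p1_hat:.1f} (true {p1_true}), p2_hat = {p2_hat:.2f} (true {p2_true})")
```

Estimating jointly, the two parameters can trade off against each other (equifinality); fixing the development parameter first removes that degree of freedom before the growth parameter is fitted.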


2006 ◽  
Vol 41 (1) ◽  
pp. 72-83 ◽  
Author(s):  
Zhe Zhang ◽  
Eric R. Hall

Abstract Parameter estimation and wastewater characterization are crucial for modelling of the membrane enhanced biological phosphorus removal (MEBPR) process. Prior to determining the values of a subset of kinetic and stoichiometric parameters used in ASM No. 2 (ASM2), the carbon, nitrogen and phosphorus fractions of influent wastewater at the University of British Columbia (UBC) pilot plant were characterized. It was found that the UBC wastewater contained fractions of volatile acids (SA), readily fermentable biodegradable COD (SF) and slowly biodegradable COD (XS) that fell within the ASM2 default value ranges. The contents of soluble inert COD (SI) and particulate inert COD (XI) were somewhat higher than the ASM2 default values. Mixed liquor samples from pilot-scale MEBPR and conventional enhanced biological phosphorus removal (CEBPR) processes, operated under parallel conditions, were then analyzed experimentally to assess the impact of operation in a membrane-assisted mode on the growth yield (YH), decay coefficient (bH) and maximum specific growth rate of heterotrophic biomass (µH). The resulting values for YH, bH and µH were slightly lower for the MEBPR train than for the CEBPR train, but the differences were not statistically significant. It is suggested that MEBPR simulation using ASM2 could be accomplished satisfactorily using parameter values determined for a conventional biological phosphorus removal process, if MEBPR parameter values are not available.
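The paper's pilot-plant procedures are not reproduced here, but two textbook estimation routes for bH and YH can be sketched with hypothetical numbers: bH from the exponential decline of the endogenous oxygen uptake rate in an aerobic digestion test, and YH from a batch COD/oxygen balance.

```python
import numpy as np

# decay coefficient b_H from an aerobic digestion test: endogenous oxygen
# uptake rate (OUR) declines roughly exponentially, so the slope of
# ln(OUR) versus time gives -b_H (hypothetical data below)
t_d = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])        # days
our = np.array([30.0, 27.4, 25.0, 22.8, 19.0, 15.9, 12.1])  # mgO2/(L*h)
b_H = -np.polyfit(t_d, np.log(our), 1)[0]

# growth yield Y_H from a batch COD balance: the fraction of removed COD
# that is NOT oxidized ends up as biomass (hypothetical numbers)
delta_O2, delta_COD = 120.0, 360.0                          # mg/L
Y_H = 1.0 - delta_O2 / delta_COD

print(f"b_H ~ {b_H:.3f} 1/d, Y_H ~ {Y_H:.2f} gCOD/gCOD")
```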


Genetics ◽  
2000 ◽  
Vol 155 (3) ◽  
pp. 1429-1437
Author(s):  
Oliver G Pybus ◽  
Andrew Rambaut ◽  
Paul H Harvey

Abstract We describe a unified set of methods for the inference of demographic history using genealogies reconstructed from gene sequence data. We introduce the skyline plot, a graphical, nonparametric estimate of demographic history. We discuss both maximum-likelihood parameter estimation and demographic hypothesis testing. Simulations are carried out to investigate the statistical properties of maximum-likelihood estimates of demographic parameters. The simulations reveal that (i) the performance of exponential growth model estimates is determined by a simple function of the true parameter values and (ii) under some conditions, estimates from reconstructed trees perform as well as estimates from perfect trees. We apply our methods to HIV-1 sequence data and find strong evidence that subtypes A and B have different demographic histories. We also provide the first (albeit tentative) genetic evidence for a recent decrease in the growth rate of subtype B.
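The classic skyline estimate itself is a one-line formula: during a coalescent interval in which i lineages are present, the expected interval length is 2N/(i(i-1)), so each interval yields the estimate N̂ = i(i-1)t/2. A minimal sketch with hypothetical interval data follows.

```python
def classic_skyline(intervals):
    """Classic skyline plot: each inter-coalescent interval of length t with
    i lineages present yields the estimate N_hat = i*(i-1)*t/2, because the
    expected interval length under the coalescent is 2N / (i*(i-1))."""
    return [(i, t, i * (i - 1) * t / 2.0) for i, t in intervals]

# hypothetical intervals from a reconstructed genealogy with 5 tips:
# (lineage count, interval length in coalescent time units)
intervals = [(5, 0.04), (4, 0.09), (3, 0.21), (2, 0.55)]
for i, t, n_hat in classic_skyline(intervals):
    print(f"{i} lineages, t = {t:.2f} -> N_hat = {n_hat:.2f}")
```

Plotting N̂ against time for each interval gives the stepwise demographic history the paper introduces as the skyline plot.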


2012 ◽  
Vol 44 (3) ◽  
pp. 441-453 ◽  
Author(s):  
Denis A. Hughes ◽  
Evison Kapangaziwiri ◽  
Jane Tanner

The most appropriate scale for hydrological modelling depends on the model structure, the purpose of the results, and the resolution of the data available to quantify parameter values and provide the climatic forcing. There is little consensus amongst the community of model users on the model complexity and number of model parameters needed for satisfactory simulations. These issues are not independent of the modelling scale, the methods used to quantify parameter values, or the purpose for which the simulations are used. This paper reports on an investigation of spatial scale effects in the application of an approach to quantifying the parameter values (with uncertainty) of a rainfall-runoff model with a relatively large number of parameters. The quantification approach uses estimation equations based on physical property data and is applicable to gauged and ungauged basins. Within South Africa, the physical property data are available at a finer spatial resolution than is typically used for hydrological modelling. The results suggest that reducing the model spatial scale offers some advantages. Potential disadvantages relate to the need for some subjective interpretation of the available physical property data, as well as inconsistencies in some of the parameter estimation equations.


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Chao Zhang ◽  
Ru-bin Wang ◽  
Qing-xiang Meng

Parameter optimization for conceptual rainfall-runoff (CRR) models has long been a difficult problem in hydrology: watershed hydrological models are high-dimensional and nonlinear, with multimodal, nonconvex response surfaces and strongly interrelated, complementary parameters. In the research presented here, the shuffled complex evolution (SCE-UA) global optimization method was used to calibrate the Xinanjiang (XAJ) model. We defined ideal data and then applied the method to observed data. Our results show that, in the case of ideal data, the data length did not affect parameter optimization for the hydrological model, and that when the objective function was selected appropriately, the proposed method found the true parameter values. In the case of observed data, we applied the technique to records of different lengths (1, 2, and 3 years) and compared the results with those for ideal data. We found that errors in the data and in the model structure lead to significant uncertainties in the parameter optimization.
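SCE-UA is not bundled with common scientific Python libraries, so the hedged sketch below uses SciPy's differential evolution as a stand-in global optimizer and a one-parameter linear reservoir as a stand-in for the Xinanjiang model; it shows only the shape of a global-search calibration loop, not the study's setup.

```python
import numpy as np
from scipy.optimize import differential_evolution

def linear_reservoir(rain, k):
    """Toy one-parameter rainfall-runoff model (a stand-in, NOT Xinanjiang):
    a single linear reservoir releasing a fraction k of storage each step."""
    s, q = 0.0, []
    for p in rain:
        s += p
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

rng = np.random.default_rng(3)
rain = rng.gamma(0.3, 8.0, 365)                    # hypothetical daily rainfall
q_obs = linear_reservoir(rain, 0.35) + rng.normal(0.0, 0.2, 365)

# calibration objective: sum of squared errors between simulated and observed flow
sse = lambda theta: float(np.sum((linear_reservoir(rain, theta[0]) - q_obs) ** 2))

# differential evolution stands in for SCE-UA purely to show the loop shape
result = differential_evolution(sse, bounds=[(0.01, 0.99)], seed=0)
print(f"k_hat = {result.x[0]:.3f} (true value 0.35)")
```

With noise-free ("ideal") observations the optimizer recovers the true parameter regardless of record length; adding observation noise, as above, is what introduces uncertainty into the calibrated value.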


2019 ◽  
Vol 3 ◽  
Author(s):  
Charlotte Olivia Brand ◽  
James Patrick Ounsley ◽  
Daniel Job Van der Post ◽  
Thomas Joshua Henry Morgan

This paper introduces a statistical technique known as "posterior passing", in which the results of past studies are used to inform the analyses carried out by subsequent studies. We first describe the technique in detail and show how it can be implemented by individual researchers on an experiment-by-experiment basis. We then use a simulation to explore its success in identifying true parameter values compared to current statistical norms (ANOVAs and GLMMs). We find that posterior passing allows the true effect in the population to be found with greater accuracy and consistency than the other analysis types considered. Furthermore, posterior passing performs almost identically to an analysis in which all data from all simulated studies are combined and analysed as one dataset. On this basis, we suggest that posterior passing is a viable means of implementing cumulative science. Moreover, because it prevents the accumulation of large bodies of conflicting literature, it alleviates the need for traditional meta-analyses. Instead, posterior passing cumulatively and collaboratively provides clarity in real time as each new study is produced, and it is thus a strong candidate for a new, cumulative approach to scientific analyses and publishing.
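The study implements posterior passing within a GLMM framework; the core mechanism, however, reduces to using one study's posterior as the next study's prior. The sketch below shows that mechanism in the simplest conjugate setting, a Normal mean with known observation noise (all numbers hypothetical).

```python
import numpy as np

def passed_posterior(prior_mu, prior_var, data, noise_var):
    """Conjugate Normal update with known observation noise: the posterior
    returned here becomes the prior handed to the next study."""
    n, xbar = len(data), float(np.mean(data))
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + n * xbar / noise_var)
    return post_mu, post_var

rng = np.random.default_rng(4)
true_effect, noise_var = 0.4, 1.0
mu, var = 0.0, 100.0                      # vague prior for the first study
for study in range(1, 6):                 # five successive studies of n = 50
    data = rng.normal(true_effect, np.sqrt(noise_var), 50)
    mu, var = passed_posterior(mu, var, data, noise_var)
    print(f"study {study}: posterior mean {mu:.3f}, sd {np.sqrt(var):.3f}")
```

In this conjugate case the final posterior is mathematically identical to analysing all five datasets at once, which mirrors the paper's finding that posterior passing tracks the pooled-data analysis.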


2015 ◽  
Vol 3 (1) ◽  
pp. 1
Author(s):  
Niklas Andersson ◽  
Per-Ola Larsson ◽  
Johan Åkesson ◽  
Niclas Carlsson ◽  
Staffan Skålén ◽  
...  

A polyethylene plant at Borealis AB is modelled in the Modelica language and considered for parameter estimation at grade transitions. Parameters have been estimated for both the steady-state and the dynamic case using the JModelica.org platform, which offers tools for steady-state parameter estimation and supports simulation with parameter sensitivities. The model contains 31 candidate parameters, giving a huge number of possible parameter combinations. The best parameter sets were chosen using a parameter-selection algorithm that identifies parameter sets with poor numerical properties, thereby reducing the number of parameter sets that need to be explored. The steady-state case differs from the dynamic case with respect to parameter selection. Validations of the parameter estimates in the dynamic case show a significant reduction, relative to the nominal reference (in which the nominal parameter values are used), in the objective value used to evaluate the quality of the solution.
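The authors' selection algorithm is their own; as a hedged sketch of one common way to flag parameter sets with poor numerical properties, the snippet below screens candidate subsets of a (synthetic) sensitivity matrix with a Brun-style collinearity index, discarding subsets whose sensitivities are nearly linearly dependent.

```python
import numpy as np
from itertools import combinations

def collinearity_index(S):
    """Brun-style collinearity index: large values mean the columns of the
    column-normalized sensitivity matrix are nearly linearly dependent,
    i.e. the parameter subset is poorly identifiable."""
    Sn = S / np.linalg.norm(S, axis=0)
    lam_min = np.linalg.eigvalsh(Sn.T @ Sn)[0]   # smallest eigenvalue
    return 1.0 / np.sqrt(max(lam_min, 1e-12))

rng = np.random.default_rng(5)
S = rng.standard_normal((200, 6))                # hypothetical sensitivity matrix
S[:, 5] = 0.95 * S[:, 0] + 0.05 * rng.standard_normal(200)   # near-duplicate column

# screen all 3-parameter subsets; keep those below a common threshold (~10-15)
subsets = list(combinations(range(6), 3))
ok = [c for c in subsets if collinearity_index(S[:, list(c)]) < 10.0]
print(f"{len(ok)} of {len(subsets)} candidate subsets pass the threshold")
```

Pre-screening in this way is what keeps the combinatorics of 31 candidate parameters tractable: only the well-conditioned subsets are passed on to the actual estimation runs.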


2021 ◽  
Vol 11 (1) ◽  
pp. 1093-1104
Author(s):  
Enock Michael ◽  
Dominicus Danardono Dwi Prija Tjahjana ◽  
Aditya Rio Prabowo

Abstract This study compared the graphical method (GM) and the standard deviation method (SDM) for efficient Weibull parameter estimation, with the aim of assessing future wind energy potential in the coastline region of Dar es Salaam, Tanzania. The comparison of the two numerical methods was also used to identify cost-effective wind turbines for the study location. The wind speed data for this study were collected by the Tanzania Meteorological Authority Dar es Salaam station over the period 2017 to 2019. Both numerical methods introduced in this study were found to be appropriate for Weibull distribution parameter estimation in the study area; however, the SDM gave higher values of the Weibull parameters than the GM. Furthermore, the performance of five selected commercial wind turbine models was simulated in terms of capacity factor using the SDM, and all exceeded the recommended capacity factor value of 25%. The Polaris P50-500 commercial wind turbine is recommended as suitable for installation in the study area because it achieved the maximum annual capacity factor over the 3 years.
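The study's exact formulations are not reproduced here, but both estimators have widely used textbook forms: the SDM computes k from the coefficient of variation and c from the mean via the gamma function, while the GM linearizes the Weibull CDF and fits a straight line. The sketch below applies both to synthetic wind speeds.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(6)
v = 6.0 * rng.weibull(2.1, 5000)          # synthetic wind speeds: k = 2.1, c = 6 m/s

# standard deviation method: k from the coefficient of variation,
# c from the mean via the gamma function
mu, sigma = v.mean(), v.std(ddof=1)
k_sdm = (sigma / mu) ** -1.086
c_sdm = mu / gamma(1.0 + 1.0 / k_sdm)

# graphical method: linearize the Weibull CDF,
# ln(-ln(1 - F(v))) = k*ln(v) - k*ln(c), and fit a line to the sorted sample
vs = np.sort(v)
F = (np.arange(1, vs.size + 1) - 0.5) / vs.size    # plotting positions in (0, 1)
k_gm, icpt = np.polyfit(np.log(vs), np.log(-np.log(1.0 - F)), 1)
c_gm = np.exp(-icpt / k_gm)

print(f"SDM: k = {k_sdm:.2f}, c = {c_sdm:.2f} | GM: k = {k_gm:.2f}, c = {c_gm:.2f}")
```

Once k and c are fitted, the capacity factor of a candidate turbine follows by integrating its power curve against the Weibull density, which is how the turbine comparison in the study proceeds.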

