A posteriori probability
Recently Published Documents

TOTAL DOCUMENTS: 236 (five years: 49)
H-INDEX: 27 (five years: 3)

2022 ◽  
Vol 1215 (1) ◽  
pp. 012006
Author(s):  
V.V. Bogomolov

Abstract A method is proposed for long-baseline navigation of autonomous underwater vehicles (AUVs) in the case of large a priori position uncertainty. The modified method is based on the iterated Kalman filter (IKF) run from different initial linearization points. The final solution is calculated by clustering and weighting the IKF results. This approach allows position estimates to be determined in accordance with the global maximum of the a posteriori probability density of the coordinates. Test results obtained with three beacons and an underwater vehicle are presented.
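The multi-start idea can be illustrated with a toy static version of the problem: range-only localization from three beacons, relinearized iteratively (Gauss-Newton, the static analogue of the IKF measurement update) from several initial points, with the final estimate chosen by posterior likelihood. The beacon layout, noise level, and starting points below are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical beacon layout and true AUV position (illustrative only).
beacons = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 90.0]])
true_pos = np.array([60.0, 40.0])
sigma = 1.0  # range-measurement noise standard deviation
rng = np.random.default_rng(0)
ranges = np.linalg.norm(beacons - true_pos, axis=1) + rng.normal(0, sigma, 3)

def iterated_update(x0, iters=20):
    """Relinearize the range model repeatedly (IKF-like in the static case)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(beacons - x, axis=1)
        J = (x - beacons) / d[:, None]      # Jacobian of the range model at x
        r = ranges - d                      # measurement residual
        x = x + np.linalg.lstsq(J, r, rcond=None)[0]
    return x

def log_lik(x):
    r = ranges - np.linalg.norm(beacons - x, axis=1)
    return -0.5 * np.sum(r**2) / sigma**2

# Large a priori uncertainty: run the iterated update from several
# linearization points and keep the solution with the highest posterior
# likelihood (the paper clusters and weights the IKF results).
starts = [(10.0, 10.0), (90.0, 10.0), (50.0, 80.0), (60.0, 45.0)]
solutions = [iterated_update(np.array(s)) for s in starts]
best = max(solutions, key=log_lik)
```

With uniform a priori weighting, picking the maximum-likelihood solution among the converged clusters corresponds to taking the global maximum of the a posteriori density.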


2021 ◽  
Vol 51 (5) ◽  
pp. 91-100
Author(s):  
V. K. Kalichkin ◽  
T. A. Luzhnykh ◽  
V. S. Riksen ◽  
N. V. Vasilyeva ◽  
V. A. Shpak

The possibilities and feasibility of using a Bayesian belief network and logistic regression to predict the nitrate-nitrogen content in the 0-40 cm soil layer before sowing were investigated. Data from a long-term multifactor field experiment at the Siberian Research Institute of Farming and Agricultural Chemization of SFSCA RAS for 2013-2018 were used to train the models. The experiment was established on leached chernozem in the central forest-steppe subzone of the Novosibirsk region in 1981. Considering the characteristics of the statistical sample (observation and analysis data), the main predictors affecting the nitrate-nitrogen content in the soil were identified. The Bayesian belief network is constructed as an acyclic graph in which the main (basic) nodes and their relationships are denoted. Network nodes represent qualitative and quantitative plot parameters (soil subtype, forecrop, tillage, weather conditions) with corresponding gradations (events). The network assigns an a posteriori probability to events at the target node (nitrate-nitrogen content in the 0-40 cm soil layer), with experts completing the conditional probability tables informed by the analysis of empirical data. Two scenarios were analyzed to test the stability of the network, with satisfactory results. The logistic regression yields coefficients characterizing the strength of the relationship between the dependent variable and the predictors. Its coefficient of determination is 0.7, indicating that the quality of the model can be considered acceptable for forecasting. A comparative assessment of the predictive capabilities of the trained models is given: the overall proportion of correct predictions is 84% for the Bayesian belief network and 87% for logistic regression.
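The logistic-regression half of such a comparison reduces to fitting a binary classifier ("nitrate content above threshold" vs. below) on plot-level predictors. A minimal sketch with synthetic stand-in data (the real predictors, coefficients, and sample are not public here, so the features and weights below are invented):

```python
import numpy as np

# Synthetic stand-in data: two plot-level predictors (imagine a moisture
# index and a forecrop score) and a binary target "nitrate above threshold".
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
true_w, true_b = np.array([1.5, -1.0]), 0.3
y = (X @ true_w + true_b + rng.normal(0, 0.5, 200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the logistic log-loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Proportion of correct predictions, the metric the abstract reports.
accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
```

The fitted coefficients `w` play the role of the abstract's closeness-of-relationship coefficients; the accuracy corresponds to the "overall proportion of correct predictions".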


2021 ◽  
Vol 2131 (2) ◽  
pp. 022090
Author(s):  
E G Chub ◽  
V A Pogorelov

Abstract The described method identifies the structure of the state vector of a stochastic model of a telecommunication system, based on a posteriori probability density approximation (APDA) by a system of a posteriori moments. Assuming the a posteriori density can be approximated within the Pearson family of distributions yields a closed system of moment equations. Techniques of optimal nonlinear stochastic control helped solve the structural identification problem. Introducing the proposed approach into contemporary telecommunication systems imposes no additional requirements on the computing equipment, making the method well suited to a wide range of applications.
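The core idea, summarizing a density by a few a posteriori moments, can be sketched numerically. The toy posterior below (Gaussian prior times a nonlinear-measurement likelihood, both invented for illustration) is reduced to its first two moments; the paper closes the hierarchy in the richer Pearson family, which also tracks skewness and kurtosis.

```python
import numpy as np

# Toy posterior on a grid: Gaussian prior times the likelihood of a
# hypothetical nonlinear measurement z = x**2 + noise.
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
prior = np.exp(-0.5 * x**2)
z, sig = 1.2, 0.5
lik = np.exp(-0.5 * ((z - x**2) / sig) ** 2)

post = prior * lik
post /= post.sum() * dx                 # normalize the a posteriori density

# First a posteriori moments (the simplest moment-based summary).
m1 = (x * post).sum() * dx              # posterior mean
m2 = ((x - m1) ** 2 * post).sum() * dx  # posterior variance
```

Note the toy posterior is bimodal (the measurement only constrains x**2), which is exactly the situation where a two-moment Gaussian summary is poor and a wider family such as Pearson's is useful.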


Atmosphere ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1573
Author(s):  
Rachel Pelley ◽  
David Thomson ◽  
Helen Webster ◽  
Michael Cooke ◽  
Alistair Manning ◽  
...  

We present a Bayesian inversion method for estimating volcanic ash emissions using satellite retrievals of ash column load and an atmospheric dispersion model. An a priori description of the emissions is used based on observations of the rise height of the volcanic plume and a stochastic model of the possible emissions. Satellite data are processed to give column loads where ash is detected and to give information on where we have high confidence that there is negligible ash. An atmospheric dispersion model is used to relate emissions and column loads. Gaussian distributions are assumed for the a priori emissions and for the errors in the satellite retrievals. The optimal emissions estimate is obtained by finding the peak of the a posteriori probability density under the constraint that the emissions are non-negative. We apply this inversion method within a framework designed for use during an eruption with the emission estimates (for any given emission time) being revised over time as more information becomes available. We demonstrate the approach for the 2010 Eyjafjallajökull and 2011 Grímsvötn eruptions. We apply the approach in two ways, using only the ash retrievals and using both the ash and clear sky retrievals. For Eyjafjallajökull we have compared with an independent dataset not used in the inversion and have found that the inversion-derived emissions lead to improved predictions.
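With Gaussian priors and observation errors, the a posteriori peak under a non-negativity constraint is a constrained least-squares problem. A minimal sketch with an invented source-receptor matrix (standing in for the dispersion-model sensitivities) and projected gradient descent as the solver; the sizes, noise levels, and prior below are all illustrative:

```python
import numpy as np

# Hypothetical sizes: 4 emission elements mapped to 6 column-load
# observations by a source-receptor matrix A (from dispersion-model runs).
rng = np.random.default_rng(2)
A = rng.uniform(0, 1, (6, 4))
x_true = np.array([2.0, 0.0, 1.0, 3.0])
b = A @ x_true + rng.normal(0, 0.05, 6)   # noisy ash column loads
x_prior = np.ones(4)                      # a priori emissions (plume-height model)
sig_obs, sig_pr = 0.05, 2.0               # observation / prior std devs

# MAP objective: ||A x - b||^2 / sig_obs^2 + ||x - x_prior||^2 / sig_pr^2,
# minimized subject to x >= 0 by projected gradient descent
# (gradient step, then clip to the non-negative orthant).
x = np.maximum(x_prior, 0.0)
lr = 1e-4
for _ in range(20000):
    grad = (2 * A.T @ (A @ x - b) / sig_obs**2
            + 2 * (x - x_prior) / sig_pr**2)
    x = np.maximum(x - lr * grad, 0.0)
```

The clipping step enforces the physical constraint that emissions cannot be negative; clear-sky retrievals would enter as additional rows of `A` and `b` with near-zero column loads.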


2021 ◽  
Vol 43 ◽  
pp. 111-122
Author(s):  
Xue Ping Fan ◽  
Sen Wang ◽  
Yue Fei Liu

Existing bridges are subject to time-variant loading and resistance degradation. How to update resistance probability distribution functions using a resistance degradation model and proof-load effects has become one of the research hotspots in bridge engineering. To address this issue, this paper proposes general particle-simulation algorithms for the complex Bayesian formulas used in bridge resistance updating. First, the complex Bayesian formulas for updating the resistance probability model are built. To overcome the difficulty of evaluating these formulas analytically, general particle-simulation methods are provided to draw particles from them; then, with an improved expectation-maximization optimization algorithm combining the K-means and Expectation Maximization (EM) algorithms, the simulated particles are used to estimate the a posteriori probability density functions of the resistance probability model; finally, a numerical example illustrates the feasibility and application of the proposed algorithms.


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5351
Author(s):  
Mohammed Jajere Adamu ◽  
Li Qiang ◽  
Rabiu Sale Zakariyya ◽  
Charles Okanda Nyatega ◽  
Halima Bello Kawuwa ◽  
...  

This paper addresses the crucial aspects of physical (PHY) layer channel coding in uplink NB-IoT systems. The Long-Term Evolution (LTE) channel coding adopted for uplink NB-IoT incurs high decoding complexity, power consumption, and error-floor phenomena, and its performance degrades for short block lengths. Such a design considerably increases overall system complexity and is difficult to implement; therefore, the existing LTE turbo codes are not recommended for NB-IoT, and new channel coding algorithms need to be employed to meet LPWA specifications. We propose LTE-based turbo decoding and frequency-domain turbo equalization algorithms: a simplified maximum a posteriori probability (MAP) decoder is modified, and minimum mean square error (MMSE) turbo equalization is applied to different Narrowband Physical Uplink Shared Channel (NPUSCH) subcarriers for interference cancellation. These methods aim to reduce the complexity of realizing the traditional MAP turbo decoder and MMSE estimators within the new NB-IoT PHY-layer features. We compare system performance in terms of block error rate (BLER) and computational complexity.
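The frequency-domain equalization component reduces, per subcarrier, to the one-tap MMSE rule X_hat[k] = conj(H[k]) Y[k] / (|H[k]|^2 + sigma^2). A minimal sketch on an invented 2-tap channel with QPSK symbols (cyclic prefix assumed, so the channel acts as circular convolution); this is the textbook MMSE step, not the paper's full turbo-equalization loop:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64
bits = rng.integers(0, 2, (N, 2))
# Gray-mapped QPSK: bit 1 -> +1 on the corresponding axis.
x = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

h = np.array([0.9, 0.4 + 0.2j])    # hypothetical 2-tap channel
H = np.fft.fft(h, N)               # per-subcarrier channel response
sigma2 = 0.01                      # noise variance relative to unit symbol power

# Circular convolution (cyclic prefix assumed) plus complex Gaussian noise.
y = np.fft.ifft(np.fft.fft(x) * H)
y = y + (rng.normal(0, np.sqrt(sigma2 / 2), N)
         + 1j * rng.normal(0, np.sqrt(sigma2 / 2), N))

# One-tap frequency-domain MMSE equalizer per subcarrier.
Y = np.fft.fft(y)
X_hat = np.conj(H) * Y / (np.abs(H) ** 2 + sigma2)
x_eq = np.fft.ifft(X_hat)

# Hard decisions back to bits.
bits_hat = np.stack([x_eq.real > 0, x_eq.imag > 0], axis=1).astype(int)
```

In the turbo-equalization setting, the equalizer output would instead be converted to soft LLRs and exchanged iteratively with the MAP decoder.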


2021 ◽  
Author(s):  
Nithya Ramakrishnan ◽  
Sibi Raj B Pillai ◽  
Ranjith Padinhateeri

Beyond the genetic code, there is another layer of information encoded as chemical modifications on histone proteins positioned along the DNA. Maintaining these modifications is crucial for the survival and identity of cells. How the information encoded in the histone marks is inherited, given that only half the parental nucleosomes are transferred to each daughter chromatin, is a puzzle. Mapping DNA replication and the reconstruction of modifications to equivalent problems in the communication of information, we ask how well enzymes could recover the parental modifications if they were ideal computing machines. Studying a parameter regime where realistic enzymes can function, our analysis predicts that, pragmatically, enzymes may implement a threshold-k filling algorithm, which fills unmodified regions of length at most k. This algorithm, motivated by communication theory, is derived from maximum a posteriori probability (MAP) decoding, which identifies the most probable modification sequence based on the available observations. Simulations using our method produce modification patterns similar to those observed in recent experiments. We also show that our results extend naturally to explain the inheritance of spatially distinct antagonistic modifications.
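The threshold-k filling rule itself is simple to state in code: scan the post-replication mark pattern and fill any interior run of unmodified nucleosomes of length at most k that is flanked by modified ones. A minimal sketch (the 0/1 encoding and the flanking condition are our reading of the rule, not the authors' exact implementation):

```python
# Threshold-k filling: fill any run of unmodified nucleosomes (0s) of
# length at most k that is flanked on both sides by modified ones (1s).
def threshold_k_fill(marks, k):
    filled = list(marks)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] == 0:
            j = i
            while j < n and filled[j] == 0:
                j += 1                      # j is one past the end of the gap
            interior = i > 0 and j < n      # gap flanked by marks on both sides
            if interior and (j - i) <= k:
                for t in range(i, j):
                    filled[t] = 1
            i = j
        else:
            i += 1
    return filled

# The length-2 gap is filled with k=2; the length-3 gap is left alone.
print(threshold_k_fill([1, 0, 0, 1, 0, 0, 0, 1], k=2))
# -> [1, 1, 1, 1, 0, 0, 0, 1]
```

Larger k recovers more of the parental pattern but also risks over-filling regions that were genuinely unmodified, which is the trade-off the MAP derivation balances.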


PLoS ONE ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. e0249269
Author(s):  
Hasnain Raza ◽  
Syed Azhar Ali Zaidi ◽  
Aamir Rashid ◽  
Shafiq Haider

Area-efficient, high-speed forward error correction decoders are demanded by many high-speed next-generation communication standards. This paper explores a low-complexity decoding algorithm for low-density parity-check codes, the min-sum iterative-construction a posteriori probability (MS-IC-APP) algorithm, for this purpose. We analyzed the error performance of MS-IC-APP for a (648, 1296) regular QC-LDPC code and propose an area- and throughput-optimized hardware implementation of MS-IC-APP. We use layered scheduling of MS-IC-APP and perform further architecture-level optimizations to reduce the area and increase the throughput of the decoder. Synthesis results show 6.95 times less area and 4 times higher throughput compared to the standard min-sum decoder. The area and throughput are also comparable to improved variants of hard-decision bit-flipping (BF) decoders, while simulations show a coding gain of 2.5 over the best BF decoder implementation in terms of error performance.
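The hardware savings of min-sum decoding come from its check-node update: instead of the sum-product tanh rule, the outgoing message on each edge takes the sign of the product of the other incoming LLR signs and the minimum of their magnitudes. A minimal reference sketch of that update (the generic min-sum rule, not the MS-IC-APP variant or its layered schedule):

```python
import numpy as np

# Min-sum check-node update: for each edge i, combine all OTHER incoming
# LLRs by sign product and minimum magnitude. This replaces the exact
# sum-product rule 2*atanh(prod tanh(L/2)) with cheap compare/select logic.
def min_sum_check_node(llrs):
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

print(min_sum_check_node([2.0, -1.5, 0.5]))
# -> [-0.5  0.5 -1.5]
```

In hardware, only the overall sign, the smallest magnitude, the second-smallest magnitude, and the argmin need to be stored per check node, which is what makes min-sum and its variants so area-efficient.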

