a posteriori probability
Recently Published Documents

TOTAL DOCUMENTS: 196 (FIVE YEARS: 39)
H-INDEX: 22 (FIVE YEARS: 3)

2021, Vol 51 (5), pp. 91-100
Author(s): V. K. Kalichkin, T. A. Luzhnykh, V. S. Riksen, N. V. Vasilyeva, V. A. Shpak

The possibilities and feasibility of using a Bayesian belief network and logistic regression to predict the nitrate nitrogen content in the 0-40 cm soil layer before sowing have been investigated. Data from a long-term multifactor field experiment at the Siberian Research Institute of Farming and Agricultural Chemization of SFSCA RAS for 2013-2018 were used to train the models. The experiment was established on leached chernozem in the central forest-steppe subzone of the Novosibirsk region in 1981. Taking into account the characteristics of the statistical sample (observation and analysis data), the main predictors affecting the nitrate nitrogen content in the soil were identified. The Bayesian belief network is constructed as an acyclic graph in which the main (basic) nodes and their relationships are defined. Network nodes represent qualitative and quantitative plot parameters (soil subtype, forecrop, tillage, weather conditions) with corresponding gradations (events). The network assigns a posteriori probabilities of events for the target node (nitrate nitrogen content in the 0-40 cm soil layer) once experts complete the conditional probability tables informed by the analysis of the empirical data. Two scenarios were analyzed to test the stability of the network, with satisfactory results. The output of the logistic regression is a set of coefficients characterizing the strength of the relationship between the dependent variable and the predictors. The coefficient of determination of the logistic regression is 0.7, which indicates that the quality of the model can be considered acceptable for forecasting. A comparative assessment of the predictive capabilities of the trained models is given: the overall proportion of correct predictions is 84% for the Bayesian belief network and 87% for the logistic regression.
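As a rough illustration of the regression side of such a comparison, the sketch below fits a logistic-regression classifier to one-hot-encoded categorical plot parameters and reports the overall proportion of correct predictions. The predictor names, labels, and data are hypothetical stand-ins, not the experimental dataset described above.

```python
# Minimal sketch (not the authors' model): logistic regression on categorical
# plot parameters predicting a binary "adequate nitrate nitrogen" label.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "forecrop": rng.choice(["fallow", "wheat", "pea"], n),
    "tillage": rng.choice(["ploughing", "minimal"], n),
    "moisture": rng.choice(["dry", "normal", "wet"], n),  # weather-condition proxy
})
# Hypothetical rule used only to generate a training label for the sketch.
y = ((X["forecrop"] == "fallow") | (X["moisture"] == "wet")).astype(int)

X_enc = pd.get_dummies(X)                        # one-hot encode the predictors
clf = LogisticRegression(max_iter=1000).fit(X_enc, y)
print("overall proportion of correct predictions:", clf.score(X_enc, y))
```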


2021, Vol 2131 (2), pp. 022090
Author(s): E G Chub, V A Pogorelov

The described method for identifying the structure of the state vector of a stochastic telecommunication-system model is based on a posteriori probability density approximation (APDA) by a system of a posteriori moments. Assuming that the a posteriori density can be approximated by a member of the Pearson family of distributions yields a closed system of moment equations. Techniques of optimal non-linear stochastic control are then used to solve the structural identification problem. Introducing the proposed approach into contemporary telecommunication systems imposes no additional requirements on the computing hardware, which makes the method well suited to a wide range of applications.
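For context, one common form of the Pearson-family assumption behind such a moment closure is sketched below in generic notation; this is not necessarily the authors' exact formulation.

```latex
% Sketch: a density p(x) belongs to the Pearson family if it satisfies
\[
  \frac{1}{p(x)}\,\frac{dp(x)}{dx} \;=\; \frac{a - x}{c_0 + c_1 x + c_2 x^2},
\]
% where the four parameters a, c_0, c_1, c_2 are fixed by the first four
% a posteriori moments, so the moment hierarchy closes after the fourth moment.
```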


Atmosphere, 2021, Vol 12 (12), pp. 1573
Author(s): Rachel Pelley, David Thomson, Helen Webster, Michael Cooke, Alistair Manning, ...

We present a Bayesian inversion method for estimating volcanic ash emissions using satellite retrievals of ash column load and an atmospheric dispersion model. An a priori description of the emissions is used based on observations of the rise height of the volcanic plume and a stochastic model of the possible emissions. Satellite data are processed to give column loads where ash is detected and to give information on where we have high confidence that there is negligible ash. An atmospheric dispersion model is used to relate emissions and column loads. Gaussian distributions are assumed for the a priori emissions and for the errors in the satellite retrievals. The optimal emissions estimate is obtained by finding the peak of the a posteriori probability density under the constraint that the emissions are non-negative. We apply this inversion method within a framework designed for use during an eruption with the emission estimates (for any given emission time) being revised over time as more information becomes available. We demonstrate the approach for the 2010 Eyjafjallajökull and 2011 Grímsvötn eruptions. We apply the approach in two ways, using only the ash retrievals and using both the ash and clear sky retrievals. For Eyjafjallajökull we have compared with an independent dataset not used in the inversion and have found that the inversion-derived emissions lead to improved predictions.
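A minimal numerical sketch of the constrained peak-finding step is given below, assuming diagonal (independent) error covariances so that maximizing the Gaussian a posteriori density under the non-negativity constraint reduces to a bounded least-squares problem. The matrix M, observations y, and prior values are synthetic stand-ins, not the dispersion model or real satellite retrievals.

```python
# Minimal sketch (not the operational inversion code): non-negative MAP estimate
# for a linear source-receptor problem with Gaussian prior and observation errors.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
n_obs, n_src = 50, 10
M = rng.random((n_obs, n_src))                  # dispersion-model source-receptor matrix
x_true = np.abs(rng.normal(1.0, 0.5, n_src))    # "true" emissions (for the synthetic data)
y = M @ x_true + rng.normal(0.0, 0.1, n_obs)    # synthetic ash column loads
x_prior = np.full(n_src, 1.0)                   # a priori emission estimate
sigma_obs, sigma_prior = 0.1, 0.5               # assumed error standard deviations

# Stack observation and prior terms: minimizing ||A x - b||^2 subject to x >= 0
# is exactly the peak of the Gaussian a posteriori density under non-negativity.
A = np.vstack([M / sigma_obs, np.eye(n_src) / sigma_prior])
b = np.concatenate([y / sigma_obs, x_prior / sigma_prior])
x_map = lsq_linear(A, b, bounds=(0, np.inf)).x
print(np.round(x_map, 2))
```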


Sensors, 2021, Vol 21 (16), pp. 5351
Author(s): Mohammed Jajere Adamu, Li Qiang, Rabiu Sale Zakariyya, Charles Okanda Nyatega, Halima Bello Kawuwa, ...

This paper addresses the key aspects of physical (PHY) layer channel coding in uplink NB-IoT systems. In uplink NB-IoT, the channel coding adopted from Long-Term Evolution (LTE) presents a serious challenge: high decoding complexity, high power consumption, error-floor phenomena, and performance degradation for short block lengths. Such a design considerably increases overall system complexity and is difficult to implement. The existing LTE turbo codes are therefore not recommended for NB-IoT, and new channel coding algorithms need to be employed to meet LPWA specifications. We propose LTE-based turbo decoding and frequency-domain turbo equalization algorithms in which a simplified maximum a posteriori probability (MAP) decoder is modified and minimum mean square error (MMSE) turbo equalization is applied to the Narrowband Physical Uplink Shared Channel (NPUSCH) subcarriers for interference cancellation. These methods aim to minimize the complexity of realizing the traditional MAP turbo decoder and MMSE estimators within the new NB-IoT PHY-layer features. We compare system performance in terms of block error rate (BLER) and computational complexity.
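As an illustration of the per-subcarrier equalization step, the sketch below applies a one-tap MMSE frequency-domain equalizer to a single NB-IoT resource block of QPSK symbols. The channel response, noise level, and symbols are synthetic stand-ins; this is not the proposed decoder.

```python
# Minimal sketch (not the paper's scheme): one-tap MMSE equalization per subcarrier,
# of the kind applied to NPUSCH subcarriers before turbo decoding.
import numpy as np

rng = np.random.default_rng(2)
n_sc = 12                                        # one NB-IoT resource block: 12 subcarriers
H = rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)                 # channel frequency response
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_sc) / np.sqrt(2)  # unit-energy QPSK symbols
noise_var = 0.05
noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))
r = H * x + noise                                # received frequency-domain samples

# MMSE coefficient per subcarrier: w_k = conj(H_k) / (|H_k|^2 + noise_var)
w = np.conj(H) / (np.abs(H) ** 2 + noise_var)
x_hat = w * r                                    # equalized symbols fed to the decoder
print(np.round(x_hat, 2))
```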


2021
Author(s): Nithya Ramakrishnan, Sibi Raj B Pillai, Ranjith Padinhateeri

Beyond the genetic code, there is another layer of information encoded as chemical modifications on histone proteins positioned along the DNA. Maintaining these modifications is crucial for the survival and identity of cells. How the information encoded in the histone marks is inherited, given that only half of the parental nucleosomes are transferred to each daughter chromatin, is a puzzle. By mapping DNA replication and the reconstruction of modifications onto equivalent problems in the communication of information, we ask how well enzymes could recover the parental modifications if they were ideal computing machines. Studying a parameter regime in which realistic enzymes can function, our analysis predicts that, pragmatically, enzymes may implement a threshold-k filling algorithm, which fills unmodified regions of length at most k. This algorithm, motivated by communication theory, is derived from maximum a posteriori probability (MAP) decoding, which identifies the most probable modification sequence based on the available observations. Simulations using our method produce modification patterns similar to those observed in recent experiments. We also show that our results extend naturally to explain the inheritance of spatially distinct antagonistic modifications.
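A minimal sketch of the threshold-k filling rule described above, applied to an illustrative binary sequence of modified (1) and unmodified (0) nucleosomes; the sequence and parameter are made up for the example.

```python
# Minimal sketch: fill every interior run of unmodified nucleosomes (0s) whose
# length is at most k, leaving longer runs and boundary runs untouched.
def threshold_k_fill(marks, k):
    filled = list(marks)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] == 0:
            j = i
            while j < n and filled[j] == 0:
                j += 1                       # j is the first index after the gap
            if (j - i) <= k and i > 0 and j < n:
                for t in range(i, j):        # short interior gap: restore the marks
                    filled[t] = 1
            i = j
        else:
            i += 1
    return filled

daughter = [1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1]
print(threshold_k_fill(daughter, k=2))
# -> [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
```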


PLoS ONE, 2021, Vol 16 (3), pp. e0249269
Author(s): Hasnain Raza, Syed Azhar Ali Zaidi, Aamir Rashid, Shafiq Haider

Area-efficient, high-speed decoders for forward error correcting codes are in demand for many high-speed next-generation communication standards. To this end, this paper explores a low-complexity decoding algorithm for low-density parity-check (LDPC) codes, called min-sum iterative construction a posteriori probability (MS-IC-APP). We analyze the error performance of MS-IC-APP for a (648,1296) regular QC-LDPC code and propose an area- and throughput-optimized hardware implementation of MS-IC-APP. We use layered scheduling of MS-IC-APP and perform further architecture-level optimizations to reduce the area and increase the throughput of the decoder. Synthesis results show 6.95 times less area and 4 times higher throughput compared to the standard min-sum decoder. The area and throughput are also comparable to improved variants of hard-decision bit-flipping (BF) decoders, while simulation results show a coding gain of 2.5 over the best BF decoder implementation in terms of error performance.
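For reference, the sketch below implements the standard min-sum check-node update on which min-sum-based LDPC decoders build; it is a plain software illustration, not the paper's MS-IC-APP algorithm or its hardware architecture.

```python
# Minimal sketch: min-sum check-node update. For each edge, the outgoing message
# takes the product of the signs and the minimum magnitude of the other incoming
# variable-to-check LLRs.
import numpy as np

def min_sum_check_update(llrs):
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

print(min_sum_check_update([-1.2, 0.4, 2.5, -0.9]))   # -> [-0.4  0.9  0.4 -0.4]
```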


Author(s): Yu. I. Buryak, A. A. Skrynnikov

The article deals with the problem of reducing the volume of testing of complex systems by using a priori data on the reliability of their elements. At the preliminary stage, the a priori distribution of the failure probability of the system as a whole is determined. To do this, the results of element tests are processed and the parameters of the a posteriori probability distribution of element failure are determined using the Bayesian procedure. The form of the distribution law (a beta distribution) is chosen from the conjugacy condition. Statistical modeling of the failure probability of a system with a known structural-logical reliability scheme is performed for random values of the failure probabilities of each element, drawn in accordance with the obtained distribution law. The distribution law of the system failure probability is formed as a mixture of beta distributions; the advantages of this law are a fairly accurate description of the simulation data and conjugacy to the binomial distribution. The parameters of the mixture of beta distributions are determined using the EM (Expectation-Maximization) algorithm. The quality of the fitted density is checked using the nonparametric Kolmogorov criterion. When testing the system, the a posteriori probability density is recalculated after each experiment; it is represented as a mixture of beta distributions with a constant proportion of components, whose parameters are easily determined from the results of the experiment. The Bayesian point estimate is taken as the mean of the a posteriori distribution, and the confidence interval for a given confidence level is found as the central interval. An example is given, and the possibility of minimizing the number of tests is shown.
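A minimal sketch of the element-level conjugate update and the Monte Carlo step is shown below for a hypothetical two-element series system; the priors, test counts, and system structure are illustrative only and do not reproduce the authors' procedure.

```python
# Minimal sketch: Beta is conjugate to the binomial, so each element's posterior
# after n trials with k failures is Beta(a + k, b + n - k). Sampling element
# failure probabilities then gives the system failure-probability distribution.
import numpy as np

rng = np.random.default_rng(3)

# prior Beta(a, b) and test results (n trials, k failures) per element
elements = {"el1": {"a": 1, "b": 9, "n": 50, "k": 2},
            "el2": {"a": 1, "b": 9, "n": 40, "k": 1}}

samples = {}
for name, e in elements.items():
    a_post = e["a"] + e["k"]
    b_post = e["b"] + e["n"] - e["k"]
    samples[name] = rng.beta(a_post, b_post, size=100_000)

# series system: it fails if any element fails
q_sys = 1.0 - (1.0 - samples["el1"]) * (1.0 - samples["el2"])
print("posterior mean of system failure probability:", q_sys.mean())
print("central 90% interval:", np.quantile(q_sys, [0.05, 0.95]))
```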


Author(s): S.G. Vorona, S.N. Bulychev

The article deals with the stealth of radio-electronic systems, in both energy and structural terms, and with radio-electronic masking and the ways it can be implemented. It considers the structure and parameters of the signal that are unknown to a reconnaissance receiver, as well as the a posteriori probability of each signal, expressed through the a priori probability and the likelihood function, and the cases in which it can be evaluated. The advantages and disadvantages of the broadband signals used in modern radars and their characteristics are examined. The following conclusions are drawn: the LFM radio pulse and the single phase-code-modulated (FCM) pulse used in target tracking modes provide high resolution in range and radial velocity; the autocorrelation function (ACF) of the FCM pulse has side lobes that raise the target detection threshold, as a result of which targets with a weak echo signal can be missed; and the considered signals do not provide energy or structural stealth of radar operation.
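To illustrate the side-lobe issue for phase-coded pulses, the sketch below computes the aperiodic autocorrelation of a 13-chip Barker code, a classic phase-code example chosen here only for illustration (the article does not specify a code): the main lobe is 13 while the side lobes have magnitude at most 1, a peak-to-sidelobe ratio of about 22 dB.

```python
# Minimal sketch: aperiodic autocorrelation of the Barker-13 phase code.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")
print(acf)                                   # main lobe 13, side lobes 0 or +/-1
print("peak-to-sidelobe ratio, dB:", 20 * np.log10(13 / 1))
```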

