SATCOM Jamming Resiliency under Non-Uniform Probability of Attacks

Author(s):  
Lan K. Nguyen ◽  
Duy H. N. Nguyen ◽  
Nghi H. Tran ◽  
Clayton Bosler ◽  
David Brunnenmeyer
2018 ◽  
pp. 239-251
Author(s):  
Richard W. Hamming

2020 ◽  
Vol 19 (3) ◽  
Author(s):  
Per Nilsson

This study examines informal hypothesis testing in the context of drawing inferences about underlying probability distributions. Through a small-scale teaching experiment of three lessons, the study explores how fifth-grade students distinguish a non-uniform probability distribution from uniform probability distributions in a data-rich learning environment, and what role processes of data production play in their investigations. The study outlines aspects of students' informal understanding of hypothesis testing. It shows how students with no formal training in hypothesis testing can follow the logic that a small difference between samples can be the effect of randomness, while a large difference implies a real difference in the underlying process. The students distinguish the mode and the size of differences in frequencies as signals in data and use these signals to give data-based reasons in processes of informal hypothesis testing. The study also highlights the role of data production and points to a need for further research on the role of data production in an informal approach to the teaching and learning of statistical inference. First published December 2020 in Statistics Education Research Journal: Archives.
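The students' informal logic, that a small frequency difference is plausible under pure randomness while a persistently large one signals a genuinely non-uniform process, can be illustrated with a short simulation. The spinner probabilities, sample sizes, and spread statistic below are illustrative assumptions, not taken from the study:

```python
import random
from collections import Counter

def frequency_spread(probs, n_draws, seed=None):
    """Draw n_draws outcomes and return max frequency minus min frequency."""
    rng = random.Random(seed)
    outcomes = rng.choices(range(len(probs)), weights=probs, k=n_draws)
    counts = Counter(outcomes)
    freqs = [counts.get(i, 0) for i in range(len(probs))]
    return max(freqs) - min(freqs)

uniform = [0.25] * 4               # fair four-colour spinner (assumed example)
skewed = [0.40, 0.30, 0.20, 0.10]  # non-uniform spinner (assumed example)

# Average the spread over many repeated samples: under the uniform
# distribution the spread stays small relative to the sample size,
# while the skewed distribution produces a persistently large gap.
n, reps = 400, 200
avg_uniform = sum(frequency_spread(uniform, n, seed=s) for s in range(reps)) / reps
avg_skewed = sum(frequency_spread(skewed, n, seed=s) for s in range(reps)) / reps
print(avg_uniform, avg_skewed)
```

Repeating the sampling is the key move: a single large spread could be luck, but a spread that stays large across many samples is the informal signal of a real difference.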


2016 ◽  
Vol 6 (1) ◽  
pp. 1
Author(s):  
Ali Parsian

Let \(S\) be a nonempty set and let \(F\) be the set of all \(Z_{2}\)-valued characteristic functions defined on \(S\). We first introduce a ring with underlying set \(F\) that is isomorphic to \((P(S),\triangle,\cap)\). Then, assuming a finitely additive function \(m\) defined on \(P(S)\), we turn \(P(S)\) into a pseudometric space \((P(S),d_{m})\) whose pseudometric is defined by \(m\). Among other things, we investigate the concepts of convergence and continuity in the induced pseudometric space. Moreover, a theorem on the measure of certain kinds of elements in \((P(S),m)\) is established. Finally, as an application in probability theory, the probability of some events in the space of permutations with uniform probability is determined. Illustrative examples are included to show the usefulness and applicability of the results.
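For a finite \(S\) the isomorphism can be made concrete: a \(Z_{2}\)-valued characteristic function on \(S=\{0,\dots,n-1\}\) is just a bitmask, pointwise addition mod 2 is XOR (matching \(\triangle\)) and pointwise multiplication is AND (matching \(\cap\)). A minimal Python sketch, taking \(m\) to be the counting measure purely as one illustrative finitely additive choice:

```python
# Subsets of S = {0, ..., n-1} encoded as bitmasks: bit i set <=> i in the
# subset, i.e. the bitmask IS the Z_2 characteristic function. Then XOR
# realizes symmetric difference and AND realizes intersection, and the
# (assumed, illustrative) counting measure m induces the pseudometric
# d_m(A, B) = m(A triangle B).

def d_m(a: int, b: int) -> int:
    """Pseudometric d_m(A, B) = m(A symmetric-difference B), m = counting measure."""
    return bin(a ^ b).count("1")

A = 0b01101   # subset {0, 2, 3}
B = 0b10101   # subset {0, 2, 4}

sym_diff = A ^ B    # characteristic-function addition  <-> A triangle B = {3, 4}
intersect = A & B   # characteristic-function product   <-> A intersect B = {0, 2}

print(bin(sym_diff), bin(intersect), d_m(A, B))
```

Here \(d_{m}(A,B)\) counts the elements on which the two characteristic functions disagree; it is a genuine metric for the counting measure, but only a pseudometric for a general finitely additive \(m\) (distinct sets can be at distance zero).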


2011 ◽  
Vol 2011 ◽  
pp. 1-18
Author(s):  
You Gao ◽  
Huafeng Yu

A new construction of multireceiver authentication codes with arbitration from singular symplectic geometry over finite fields is given. The parameters are computed. Assuming that the encoding rules are chosen according to a uniform probability distribution, the probabilities of success for different types of deception are also computed.


2017 ◽  
Vol 09 (04) ◽  
pp. 717-738 ◽  
Author(s):  
Sourav Chatterjee

Uniform probability distributions on [Formula: see text] balls and spheres have been studied extensively and are known to behave like product measures in high dimensions. In this note we consider the uniform distribution on the intersection of a simplex and a sphere. Certain new and interesting features, such as phase transitions and localization phenomena, emerge.


Author(s):  
Andrea Berdondini

ABSTRACT: This article describes an optimization method for entropy encoding applicable to a source of independent and identically distributed random variables. The algorithm can be explained with the following example: take a source of i.i.d. random variables X with uniform probability density and cardinality 10. With this source, we generate messages of length 1000, which will be encoded in base 10. We call XG the set containing all messages that can be generated by the source. According to Shannon's first theorem, if the average entropy of X, calculated on the set XG, is H(X)≈0.9980, the average length of the encoded messages will be 1000·H(X)=998. Now, we increase the message length by one and calculate the average entropy over the 10% of sequences of length 1001 having the least entropy; we call this set XG10. The average entropy of X10, calculated on the set XG10, is H(X10)≈0.9964; consequently, the average length of the encoded messages will be 1001·H(X10)=997.4. Taking the difference between the average lengths of the encoded sequences belonging to the two sets (XG and XG10) gives 998.0−997.4=0.6. Therefore, if we use the set XG10, we reduce the average length of the encoded message by 0.6 digits in base ten. Consequently, the average information per symbol becomes 997.4/1000=0.9974, which is less than the average entropy of X, H(X)≈0.998. We can use the set XG10 instead of the set XG because we can create a one-to-one correspondence between all the possible sequences generated by our source and the ten percent of least-entropy sequences among the messages of length 1001. In this article, we show that this transformation can be performed by applying random variations to the sequences generated by the source.
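The core quantities in the example can be reproduced at toy scale. The abstract's alphabet of 10 symbols and length 1000 are infeasible to enumerate, so the sketch below uses an assumed ternary alphabet and length 6; the resulting numbers are illustrative only, not the paper's:

```python
from itertools import product
from math import log

def empirical_entropy(seq, base):
    """Per-symbol plug-in entropy of a sequence, in the given base."""
    n = len(seq)
    counts = {s: seq.count(s) for s in set(seq)}
    return -sum((c / n) * log(c / n, base) for c in counts.values())

# Scaled-down stand-in for XG: every message of the chosen length.
alphabet, length = (0, 1, 2), 6
all_msgs = list(product(alphabet, repeat=length))
avg_H = sum(empirical_entropy(m, 3) for m in all_msgs) / len(all_msgs)

# Stand-in for XG10: the 10% of length-(L+1) sequences with least entropy.
longer = sorted(product(alphabet, repeat=length + 1),
                key=lambda m: empirical_entropy(m, 3))
low_tenth = longer[: len(longer) // 10]
avg_H_low = sum(empirical_entropy(m, 3) for m in low_tenth) / len(low_tenth)

print(avg_H, avg_H_low)
```

Restricting to the least-entropy tenth of the slightly longer sequences lowers the average per-symbol entropy, which is the gap the proposed encoding exploits.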


2010 ◽  
Vol DMTCS Proceedings vol. AN,... (Proceedings) ◽  
Author(s):  
Kento Nakada ◽  
Shuji Okamura

The purpose of this paper is to present an algorithm which generates linear extensions of a generalized Young diagram, in the sense of D. Peterson and R. A. Proctor, with uniform probability. This gives a proof of D. Peterson's hook formula for the number of reduced decompositions of a given minuscule element.
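The algorithm itself is beyond the scope of an abstract, but the objects involved can be shown at toy scale: for the ordinary 2×2 Young diagram, brute-force enumeration recovers the classical hook length count 4!/(3·2·2·1) = 2, and drawing from the exhaustive list gives a uniform linear extension. Everything below is a standard small example, not the paper's generalized-diagram algorithm:

```python
import random
from itertools import permutations

def linear_extensions_2x2():
    """Standard fillings of the 2x2 Young diagram: cells (row, col) get the
    labels 1..4, increasing along each row and down each column, i.e.
    linear extensions of the diagram's cell poset."""
    cells = [(0, 0), (0, 1), (1, 0), (1, 1)]
    exts = []
    for labels in permutations(range(1, 5)):
        f = dict(zip(cells, labels))
        if (f[(0, 0)] < f[(0, 1)] and f[(0, 0)] < f[(1, 0)]
                and f[(0, 1)] < f[(1, 1)] and f[(1, 0)] < f[(1, 1)]):
            exts.append(f)
    return exts

exts = linear_extensions_2x2()
print(len(exts))               # hook length formula predicts 4!/(3*2*2*1) = 2
tableau = random.choice(exts)  # uniform, because the list is exhaustive
```

Exhaustive enumeration is exponential in the diagram size; the point of hook-walk-style algorithms like the one in the paper is to sample uniformly without ever listing all extensions.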

