Ensuring the secrecy of the radar

Author(s):  
S.G. Vorona ◽  
S.N. Bulychev

The article deals with the stealth of radio-electronic means, energy and structural radio-electronic masking, and ways of implementing it. The structure of a signal unknown to reconnaissance and its parameters are considered, as well as the a posteriori probability of each signal, which is related to the a priori likelihood function, and the cases in which it can be evaluated. The advantages and disadvantages of the broadband signals used in modern radars, and their characteristics, are examined, on the basis of which the following conclusions are drawn: the LFM radio pulse and the single FCM pulse, used in target tracking modes, have high range and radial-velocity resolution; the ACF of the FCM pulse has side lobes that raise the target detection threshold, as a result of which radar targets with a weak echo signal can be missed; and the considered signals provide neither energy nor structural stealth of radar operation.
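The side-lobe behaviour of a phase-coded pulse's ACF can be illustrated numerically. A minimal sketch, using the Barker-13 code as an assumed example (the article does not specify which phase code is used):

```python
import numpy as np

# Barker-13 phase code: its aperiodic autocorrelation has a main lobe of 13
# and side lobes of magnitude at most 1.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Full aperiodic autocorrelation function (ACF) of the pulse.
acf = np.correlate(barker13, barker13, mode="full")

peak = acf.max()                                         # main-lobe level: 13
side = np.abs(np.delete(acf, len(barker13) - 1)).max()   # worst side lobe: 1

# Side lobes sit only ~22 dB below the peak, so a detection threshold set
# above them can mask a weak echo arriving near a strong one.
print(peak, side, 20 * np.log10(side / peak))
```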

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3991
Author(s):  
Lorenzo Mucchi ◽  
Luca Simone Ronga ◽  
Sara Jayousi

Reducing energy consumption is one of the most important tasks of the approaching Internet of Things (IoT) paradigm. Existing communication standards, such as 3G/4G, use complex protocols (active mode, sleep modes) to address the waste of energy. These protocols are forced to transmit even when a frame is only partially filled with information symbols. The difficulty of adapting the power-saving mode, with low latency, to the discontinuity of the source is mainly due to the fact that the receiver cannot know a priori when the source has something to transmit. In this paper, we propose a modified signalling/constellation scheme that saves energy by mapping a zero-energy symbol into the information source. This paper addresses the fundamentals of this new technique: the maximum a posteriori probability (MAP) criterion, the probability of error, the (energy) entropy, the (energy) capacity, and the energy cost of the proposed technique are derived for the binary signalling case.
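MAP detection for a binary constellation containing a zero-energy symbol can be sketched as follows; the amplitude, noise level, and prior are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

A, sigma = 1.0, 0.3   # non-zero symbol amplitude and noise std (illustrative)
p0 = 0.8              # a priori probability of the zero-energy symbol

# MAP threshold for Gaussian noise: the midpoint A/2 shifted by the log prior
# ratio, so the detector favours the more likely zero-energy symbol.
threshold = A / 2 + (sigma**2 / A) * np.log(p0 / (1 - p0))

bits = rng.random(100_000) > p0                    # True -> transmit A, False -> 0
rx = bits * A + rng.normal(0.0, sigma, bits.size)  # AWGN channel
decisions = rx > threshold

error_rate = np.mean(decisions != bits)
print(error_rate)
```

Because the zero-energy symbol carries information while consuming no transmit power, the average energy per bit drops as p0 grows, at the cost of an asymmetric error probability.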


1984 ◽  
Vol 23 (03) ◽  
pp. 147-153
Author(s):  
B. Schneider

Summary: The concept of Bayesian statistics, based on the model of random parameters with appropriate a priori distributions, is discussed and applied to the analysis of clinical studies (i.e. treatment comparisons). It is shown that assumptions about the a priori distribution can be eliminated if the analysis is restricted to the class of conjugate prior distributions and the amount of data is sufficiently large. For treatment comparisons the concept of “preferences” is introduced, i.e. the a posteriori probability of particular rankings of the effect parameters. This concept is an alternative to hypothesis testing and error probabilities, which are meaningless in Bayesian models. With this concept it is not necessary to formulate the hypotheses before the study, or to fix the sample size or stopping rules in advance. It is also not necessary to restrict the analysis to the test of one or a few hypotheses. On the other hand, the physician will not get error probabilities for his statements but “preferences” for the relevant rankings of the treatments.
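A "preference" of this kind can be computed directly from the a posteriori distributions. A minimal sketch for two treatments with conjugate Beta priors on binomial success rates; the counts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Conjugate Beta(1, 1) priors; observed successes/failures per treatment
# (illustrative data, not from the article).
a_succ, a_fail = 18, 12
b_succ, b_fail = 10, 20

# A posteriori distributions of the two effect parameters.
theta_a = rng.beta(1 + a_succ, 1 + a_fail, 200_000)
theta_b = rng.beta(1 + b_succ, 1 + b_fail, 200_000)

# "Preference": a posteriori probability of the ranking theta_a > theta_b,
# estimated by Monte Carlo over the two posteriors.
preference_a = np.mean(theta_a > theta_b)
print(preference_a)
```

No significance level or stopping rule enters the calculation; the preference can be updated after every new patient.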


2018 ◽  
pp. 45-49
Author(s):  
P. S. Galkin ◽  
V. N. Lagutkin

An algorithm for estimating and compensating the influence of the ionosphere on the measurement of the motion parameters of space objects in a two-position radar system is developed, taking into account radio-physical effects that depend on the elevation angles and the operating frequency. It is assumed that the observed space object is a tracked object whose orbital parameters are well known, including the dependence of the velocity on the point of the orbit, and that the uncertainty in the current coordinates of the object is caused mainly by the forecast error of its position in orbit (longitudinal error). To estimate the true position of the space object in orbit and the parameter determining the influence of the ionosphere, a joint optimal processing of the range measurements to the object obtained by the two separated radars is performed, taking into account the relevant ionospheric propagation delays and the available a priori data on the observed object's trajectory. Estimates of the unknown parameters are obtained on the basis of the criterion of maximum a posteriori probability density for these parameters, taking into account the measured and a priori data. The search for the maximum of the a posteriori probability density is reduced to the search for the minimum of a weighted sum of squares, for whose solution a cascade iterative algorithm is implemented in the work. The accuracy of estimating the position of space objects in orbit after compensation of the ionosphere's influence has been studied by the Monte Carlo method. Dependencies of the mean square error of the position estimate on the elevation angles, operating frequency, and solar activity have been obtained. It is shown that the effectiveness of the algorithm increases with the spatial base of the measurements (for a fixed orbit of the object).
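The reduction of MAP estimation to a weighted sum of squares has a closed form in the linear-Gaussian case. A toy scalar sketch (the paper's model is nonlinear and solved iteratively; all numbers below are assumptions for illustration):

```python
import numpy as np

# MAP estimation of a scalar parameter x from two range measurements reduces,
# under Gaussian errors, to minimising the weighted sum of squares
#   J(x) = sum_i (r_i - h_i * x)^2 / s_i^2 + (x - x0)^2 / s0^2,
# where the last term carries the a priori trajectory information.

h = np.array([1.0, 0.8])   # measurement sensitivities of the two radars (assumed)
s = np.array([0.5, 0.7])   # measurement error standard deviations
x0, s0 = 2.0, 1.0          # a priori estimate of x and its standard deviation
r = np.array([2.4, 1.7])   # measured ranges (illustrative)

# Closed-form minimiser of the quadratic J(x): set dJ/dx = 0.
w = h / s**2
x_map = (np.sum(w * r) + x0 / s0**2) / (np.sum(h * w) + 1 / s0**2)
print(x_map)
```

Each measurement and the prior are weighted by their inverse variances, which is why a longer spatial base (better-conditioned sensitivities) tightens the estimate.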


2000 ◽  
Vol 08 (02) ◽  
pp. 259-270 ◽  
Author(s):  
CHRISTOPH F. MECKLENBRÄUKER ◽  
PETER GERSTOFT

Selection of a suitable objective function is an integral part of the inverse problem, and poor selection can have a strong influence on the inverse result. Objective functions are here derived for many practical situations, such as single-frequency and broadband processing, with and without knowledge of the source strength, and with and without the received signal phase. These objective functions are all derived from a unified approach based on maximum likelihood and additive Gaussian noise models. The assumptions behind each objective function are thus clear, and the resulting estimator has good properties. From a Bayesian point of view, the solution to the inverse problem is the a posteriori probability distribution of the unknown parameters, which can be found from the likelihood function. Thus, using objective functions based on likelihood functions facilitates computing the a posteriori distributions.
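The case of unknown source strength illustrates the unified approach: the nuisance parameter is concentrated out of the Gaussian likelihood, leaving a matched-field style misfit. A minimal sketch under assumed data (the function name and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Observed complex field at 8 receivers (illustrative synthetic data).
d = rng.normal(size=8) + 1j * rng.normal(size=8)

def objective(model, data):
    # Under additive Gaussian noise with unknown complex source strength s,
    # maximum likelihood fits s by least squares, s = m^H d / m^H m, and the
    # objective becomes the residual power of the best-fitting scaled replica.
    s = np.vdot(model, data) / np.vdot(model, model)
    return np.linalg.norm(data - s * model) ** 2

# A perfect model replica gives zero misfit regardless of source strength.
print(objective(d, d))
```

Because the objective is a (negative log) likelihood, exponentiating and normalising it over the parameter grid directly yields the a posteriori distribution.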


Author(s):  
J. H. Pacheco-Sánchez ◽  
R. D. Vera-Torres ◽  
R. Alejo

Bayesian learning is applied to two-class systems. Partitioning a large sample made up of many elements of two classes of indistinguishable objects, we consider from 2 to 5 training sets, called hypotheses in the probability field, each with a plausible rate of objects of each class. Objects are taken one by one from the sample. The basic aim is to predict the type of object that an agent will draw from the original sample on the next occasion to test it. We obtain the graph of the a posteriori probability of each hypothesis for one of the object types; a prediction that the following object is specifically of that type is acquired from one probability curve by means of the training previously accomplished. This methodology is applied to the manufacture of glass bottles of two classes: good or crashed. The main interest is to predict which machine produced a detected crashed bottle, because the bottles are indistinguishable once they are reviewed. This is solved by fixing a priori probabilities and taking into account all possible probability distribution combinations in the classes.
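The sequential update over a handful of hypotheses can be sketched directly. The five hypothesis rates, uniform priors, and observation sequence below are illustrative assumptions, not the article's data:

```python
import numpy as np

# Five hypotheses about the source of the bottles: each hypothesis h assigns a
# rate P(crashed | h). Priors and rates are illustrative assumptions.
rates = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # P(crashed bottle | hypothesis)
posterior = np.full(5, 0.2)                     # uniform a priori probabilities

observations = [1, 1, 0, 1, 1]   # 1 = crashed bottle drawn, 0 = good bottle

for obs in observations:
    likelihood = rates if obs else (1.0 - rates)
    posterior = likelihood * posterior
    posterior /= posterior.sum()                # Bayes update after each object

# A posteriori probability of each hypothesis, and the predictive probability
# that the next bottle drawn is crashed (averaged over hypotheses).
p_next_crashed = np.sum(rates * posterior)
print(posterior, p_next_crashed)
```

The extreme hypotheses (rates 0 and 1) are eliminated as soon as a contradicting observation arrives; the prediction then follows the dominant surviving curve.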


2002 ◽  
Vol 12 ◽  
pp. 255-256 ◽  
Author(s):  
J. Virtanen ◽  
K. Muinonen ◽  
E. Bowell

Abstract: We consider the initial determination of orbits for trans-Neptunian objects (TNOs), a topical theme because of the rapidly growing TNO population and the challenges in recovering lost TNOs. We apply the method of initial phase-space ranging of orbits to poorly observed TNOs. The rigorous a posteriori probability density of the TNO orbital elements is examined using a Monte Carlo technique by varying the TNO topocentric ranges corresponding to the observation dates. We can optionally adopt a Bayesian approach to select the region of phase space containing the most plausible orbits; this is accomplished by incorporating the semimajor axes, eccentricities, inclinations, and absolute magnitudes of multi-apparition TNOs as a priori information. The resulting a posteriori distributions permit ephemeris and ephemeris-uncertainty prediction for TNO recovery observations.


2019 ◽  
pp. 64-75
Author(s):  
Ирина Карловна Васильева ◽  
Анатолий Владиславович Попов

The subject matter of the article is methods for the automatic clustering of remote sensing data under conditions of a priori uncertainty regarding the number of observed object classes and the statistical characteristics of the class signatures. The aim is to develop a method for approximating multimodal empirical distributions of observational data in order to construct decision rules for pixel-by-pixel statistical classification procedures, and to investigate the effectiveness of this method for automatically classifying objects in synthesized and real images. The tasks to be solved are: to develop and implement a procedure for splitting a mixture of basic distributions that meets the following requirements: no preliminary data-analysis stage for selecting optimal initial approximations; good convergence; and the ability to automatically refine the list of classes by combining indistinguishable or poorly distinguishable components of the mixture into a single cluster; to synthesize test images with a specified number of objects and known data distributions for each object; to evaluate the effectiveness of the developed method of automatic classification by the criterion of the probability of correct recognition; and to evaluate the results of automatic clustering of real images. The methods used are stochastic simulation, approximation of empirical distributions, statistical recognition, and methods of probability theory and mathematical statistics. The following results have been obtained. A method for automatically splitting a mixture of Gaussian distributions to construct decision thresholds according to the maximum a posteriori probability criterion is proposed. The results of automatically forming the list of classes and their probabilistic descriptions are given, as well as the results of clustering both test and satellite images. It is shown that the developed method is quite effective and can be used to determine the number of object classes, as well as a mathematical description of their stochastic characteristics, for pattern recognition and cluster analysis tasks. Conclusions. The scientific novelty of the results is that the proposed approach makes it possible, directly during the “unsupervised” training procedure, to evaluate the distinguishability of classes and to exclude indistinguishable objects from the list of classes.
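The core step, splitting a Gaussian mixture and classifying by the larger a posteriori probability, can be sketched with a plain EM iteration on synthesized 1-D data. This is a minimal stand-in for the article's procedure, which additionally merges weak components; the data and component count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthesized test data: a mixture of two Gaussian classes with known parameters.
x = np.concatenate([rng.normal(0.0, 1.0, 2000), rng.normal(5.0, 1.0, 2000)])

# Plain EM for a two-component 1-D Gaussian mixture.
mu = np.array([x.min(), x.max()])      # crude initial approximations
s = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(100):
    pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    resp = pdf / pdf.sum(axis=1, keepdims=True)   # a posteriori class probabilities
    nk = resp.sum(axis=0)
    w = nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / nk
    s = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

# MAP decision rule: assign each pixel to the class with the larger
# a posteriori probability.
labels = resp.argmax(axis=1)
print(np.sort(mu))   # recovered means, close to the true values 0 and 5
```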


Author(s):  
A. A. Lobaty ◽  
A. Y. Bumai

The problem of evaluating the information present in random signals from various measurement sources (meters) is considered. It is assumed that, in the mathematical description of the problem, both the random process being evaluated and the meter output are vector random processes; the dimension of the measurement vector can be larger than the dimension of the vector of the process being evaluated. An analysis is carried out of analytical estimation methods and algorithms based on determining the main probabilistic characteristics of a random process by both a priori and a posteriori methods with various optimality criteria. On this basis, the problem of fusing (complexing) the meters of the random process is considered according to the proposed criterion of maximum posterior likelihood, which combines the maximum likelihood criterion and the maximum a posteriori probability criterion, and a general fusion methodology is developed. The given example of fusion shows the efficiency of the proposed method. This approach to constructing estimation algorithms for multidimensional random processes makes it possible to increase the accuracy of estimation, since it takes additional information into account and processes it jointly.
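The accuracy gain from fusing meters has a simple form in the scalar Gaussian case: the optimal combination weights each meter by its inverse error variance, and the fused variance is smaller than either meter's alone. A minimal sketch with invented numbers:

```python
import numpy as np

# Fusing (complexing) two meters of the same random quantity: under Gaussian
# errors the ML/MAP combination weights each meter by its inverse variance.
z = np.array([10.3, 9.8])     # measurements from meter 1 and meter 2 (illustrative)
var = np.array([0.04, 0.16])  # their error variances

w = (1 / var) / np.sum(1 / var)   # normalised inverse-variance weights
fused = np.sum(w * z)
fused_var = 1 / np.sum(1 / var)

# The fused variance is below min(var): the extra meter always helps.
print(fused, fused_var)
```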


Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. E89-E101 ◽  
Author(s):  
Jieyi Zhou ◽  
André Revil ◽  
Abderrahim Jardani

Inverse modeling of geophysical data involves the recovery of a subsurface structural model and the distribution of petrophysical properties. Independent information regarding the subsurface structure is usually available, with some uncertainty, from the expertise of a geologist, possibly accounting for sedimentary and tectonic processes. We have used the available structural information to construct a model covariance matrix and to perform a structure-constrained inversion of the geophysical data to obtain a geophysical tomogram [Formula: see text]. We have considered that the geologic models [Formula: see text] were built from random variables and were described by an a priori probability density function in the Bayesian framework. We have explored the a posteriori probability density of the geologic models (i.e., the structure of the guiding image) with the Markov-chain Monte Carlo method and, at the same time, inverted the geophysical data in a deterministic framework. The sampling of the geologic models was performed in a stochastic framework, and each geologic model [Formula: see text] was used to invert the geophysical model [Formula: see text] using image-guided inversion. The adaptive Metropolis algorithm was used to find the proposal distributions of [Formula: see text] reproducing the geophysical data and the geophysical information. In other words, we have tried to find a compromise between the a priori geologic information and the geophysical data to get, as end products, an updated geologic model and a geophysical tomogram. To demonstrate our approach, we used electrical resistivity tomography as a technique to identify a correct geologic model and its a posteriori probability density. The approach was tested using one synthetic example (with three horizontal layers displaced by a normal fault) and one field case corresponding to a sinkhole in a three-layer structure. In both cases, we were able to select the most plausible geologic models that agreed with the a priori information and the geophysical data.
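The stochastic half of such a scheme, sampling the a posteriori density of a structural parameter under a geologic prior and a data misfit, can be sketched with a plain (non-adaptive) Metropolis sampler; the one-parameter forward model and all numbers are toy assumptions, not the paper's resistivity physics:

```python
import numpy as np

rng = np.random.default_rng(4)

def forward(m):
    # Toy geophysical forward model: a linear response to one structural
    # parameter m (e.g. a layer-interface depth); assumed for illustration.
    return 1.5 * m

def log_post(m):
    prior = -0.5 * ((m - 2.0) / 0.5) ** 2             # a priori geologic information
    misfit = -0.5 * ((forward(m) - 3.0) / 0.1) ** 2   # fit to an observed datum
    return prior + misfit

# Plain Metropolis random walk over m (the paper uses adaptive Metropolis,
# which tunes the proposal covariance on the fly).
m, chain = 2.0, []
for _ in range(20_000):
    prop = m + rng.normal(0.0, 0.2)
    if np.log(rng.random()) < log_post(prop) - log_post(m):
        m = prop
    chain.append(m)

print(np.mean(chain[5000:]))   # posterior mean: compromise of prior and data
```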


Author(s):  
FREDRIK EKDAHL ◽  
PER PERSSON ◽  
PIA SANDVIK WIKLUND

Unreplicated factorial designs are widely used for designed experimentation in industry. In the analysis of designed experiments, the experimental factors influencing the response must be identified and separated from those that do not. An abundance of procedures intended to perform this selection have been introduced in the literature. A recent study indicated that the procedure due to Box and Meyer outperforms the other selection procedures in terms of efficiency and robustness. The procedure of Box and Meyer rests on a quasi-Bayesian foundation and utilizes generic domain knowledge, in the form of a common-for-all-factors a priori probability that a factor significantly influences the response, to calculate an a posteriori probability for each factor. This paper suggests a strategy for introducing more elaborate domain knowledge about the experimental factors into the procedure of Box and Meyer, aiming to further improve its performance.
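The flavour of the Box–Meyer calculation can be shown with a simplified per-factor version: an inactive factor's estimated effect is modelled as N(0, s), an active one as N(0, k·s), and Bayes' rule converts the common prior alpha into an a posteriori probability per factor. The full procedure averages over all subsets of active factors; the values of alpha, k, s, and the effects below are illustrative assumptions:

```python
import numpy as np

def normal_pdf(x, s):
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Common-for-all-factors a priori probability of activity, inflation factor,
# and noise scale (illustrative assumptions).
alpha, k, s = 0.2, 10.0, 1.0
effects = np.array([0.3, -0.5, 8.2, 0.9, -7.1])   # estimated factor effects

# Bayes' rule per factor: posterior odds of "active" vs "inactive".
post = (alpha * normal_pdf(effects, k * s) /
        (alpha * normal_pdf(effects, k * s) + (1 - alpha) * normal_pdf(effects, s)))
print(np.round(post, 3))   # large effects get a posteriori probability near 1
```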

