Best Attack Position Model for BVR Multi-Target Air Combat

2014 ◽  
Vol 1016 ◽  
pp. 511-515
Author(s):  
Rong Yang ◽  
Fang Ming Huang ◽  
Hua Jun Gong

Drawing on the characteristics of BVR (beyond-visual-range) air combat and multi-target attack for fourth-generation fighters, this paper constructs and computes two models: the probability distribution within the multi-target kill zone, and the best attack path/attack position for a multi-target attack. The kill-zone probability model considers the target's heading angle and approach angle, the distance between fighter and target, the maximum off-boresight launch angle, and the kill angle. The best attack path/attack position model considers the damage probability of missiles against targets, the threat the targets pose to the fighter, and the threat posed by residual targets. The paper computes simulation data from these models and analyzes the kill-zone probability distributions of the missiles and the best attack path/attack position. The models and simulation results show that the kill-zone probability method can improve damage probability and reduce the threat from enemy targets.
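As a rough illustration of how a kill-zone probability might combine the geometric quantities listed above (fighter-target distance, off-boresight angle, and target aspect), here is a minimal Python sketch. Every functional form and threshold in it is a notional assumption for illustration, not the paper's model.

```python
import numpy as np

def kill_zone_probability(distance_km, off_boresight_deg, aspect_deg,
                          r_max_km=50.0, r_opt_km=25.0, obs_max_deg=60.0):
    """Notional kill probability from launch geometry; all parameter
    values and functional forms are illustrative, not from the paper."""
    if distance_km > r_max_km or abs(off_boresight_deg) > obs_max_deg:
        return 0.0  # outside the launch envelope
    # Range term: peaks at the optimal launch range, decays toward the edge.
    p_range = np.exp(-((distance_km - r_opt_km) / (r_max_km - r_opt_km)) ** 2)
    # Off-boresight term: 1 on the boresight, 0 at the seeker limit.
    p_obs = np.cos(0.5 * np.pi * abs(off_boresight_deg) / obs_max_deg)
    # Aspect term: head-on approaches (aspect 0 deg) score highest.
    p_aspect = 0.5 * (1.0 + np.cos(np.radians(aspect_deg)))
    return p_range * p_obs * p_aspect

# One candidate firing position: 30 km out, 20 deg off boresight, near head-on.
print(f"P_kill ~ {kill_zone_probability(30.0, 20.0, 15.0):.2f}")
```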

2021 ◽  
Vol 2 (2) ◽  
pp. 60-67
Author(s):  
Rashidul Hasan

The estimation of a suitable probability model depends mainly on the features of the available temperature data at a particular place. As a result, existing probability distributions must be evaluated to establish an appropriate probability model that can deliver precise temperature estimation. The study set out to identify the best-fitted probability model for the monthly maximum temperature at the Sylhet station in Bangladesh from January 2002 to December 2012 using several statistical analyses. Ten continuous probability distributions (Exponential, Gamma, Log-Gamma, Beta, Normal, Log-Normal, Erlang, Power Function, Rayleigh, and Weibull) were fitted for this task using the maximum likelihood technique. To determine each model's fit to the temperature data, several goodness-of-fit tests were applied, including the Kolmogorov-Smirnov test, Anderson-Darling test, and Chi-square test. The Beta distribution is found to be the best-fitted probability distribution based on the largest overall score derived from the three specified goodness-of-fit tests for the monthly maximum temperature data at the Sylhet station.
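The fit-and-score workflow the abstract describes can be sketched with scipy.stats: fit each candidate by maximum likelihood, apply a goodness-of-fit test, and rank. The sketch below uses synthetic data in place of the Sylhet series and scores only the Kolmogorov-Smirnov test (scipy's Anderson-Darling test covers only a few distributions); the candidate set and parameters are illustrative.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for 11 years of monthly maximum temperatures (deg C);
# the actual Sylhet series is not reproduced here.
rng = np.random.default_rng(0)
temps = rng.normal(31.0, 2.5, size=132)

candidates = {
    "Normal": stats.norm,
    "Gamma": stats.gamma,
    "Log-Normal": stats.lognorm,
    "Weibull": stats.weibull_min,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(temps)                      # maximum likelihood fit
    ks = stats.kstest(temps, dist.cdf, args=params)
    results.append((name, ks.statistic, ks.pvalue))

# Smaller K-S statistic = better fit; the paper combines scores from the
# K-S, Anderson-Darling, and chi-square tests before ranking.
for name, stat, p in sorted(results, key=lambda r: r[1]):
    print(f"{name:10s} KS = {stat:.3f}  p = {p:.3f}")
```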


1982 ◽  
Vol 14 (01) ◽  
pp. 68-94 ◽  
Author(s):  
D. Gary Harlow ◽  
S. Leigh Phoenix

The focus of this paper is on obtaining a conservative but tight bound on the probability distribution for the strength of a fibrous material. The model is the chain-of-bundles probability model, and local load sharing is assumed for the fiber elements in each bundle. The bound is based upon the occurrence of two or more adjacent broken fiber elements in a bundle. This event is necessary but not sufficient for failure of the material. The bound is far superior to a simple weakest link bound based upon the failure of the weakest fiber element. For large materials, the upper bound is a Weibull distribution, which is consistent with experimental observations. The upper bound is always conservative, but its tightness depends upon the variability in fiber element strength and the volume of the material. In cases where the volume of material and the variability in fiber strength are both small, the bound is believed to be virtually the same as the true distribution function for material strength. Regarding edge effects on composite strength, only when the number of fibers is very small is a correction necessary to reflect the load-sharing irregularities at the edges of the bundle.
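To see numerically why a bound based on two adjacent broken elements is far tighter than the weakest-link bound, consider this simplified Monte Carlo sketch: elements fail independently below an applied load (ignoring the paper's local load redistribution), and we compare the probability of two adjacent failures against the weakest-link probability. All parameters are notional.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fibers, n_trials = 50, 20_000
shape, scale = 5.0, 1.0  # Weibull element-strength parameters (notional)

def prob_two_adjacent(load):
    """Monte Carlo estimate of P(>= 2 adjacent elements weaker than load),
    the necessary-event bound, with load redistribution ignored."""
    strengths = rng.weibull(shape, (n_trials, n_fibers)) * scale
    broken = strengths < load
    adjacent = broken[:, :-1] & broken[:, 1:]
    return adjacent.any(axis=1).mean()

def weakest_link(load):
    """Simple weakest-link bound: P(weakest of n_fibers elements < load)."""
    f = 1.0 - np.exp(-(load / scale) ** shape)
    return 1.0 - (1.0 - f) ** n_fibers

for load in (0.3, 0.4, 0.5):
    print(f"load={load}: two-adjacent ~ {prob_two_adjacent(load):.4f}, "
          f"weakest-link = {weakest_link(load):.4f}")
```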


2009 ◽  
Vol 24 (6) ◽  
pp. 1573-1591 ◽  
Author(s):  
Mark DeMaria ◽  
John A. Knaff ◽  
Richard Knabb ◽  
Chris Lauer ◽  
Charles R. Sampson ◽  
...  

Abstract The National Hurricane Center (NHC) Hurricane Probability Program (HPP) was implemented in 1983 to estimate the probability that the center of a tropical cyclone would pass within 60 n mi of a set of specified points out to 72 h. Other than periodic updates of the probability distributions, the HPP remained unchanged through 2005. Beginning in 2006, the HPP products were replaced by those from a new program that estimates probabilities of winds of at least 34, 50, and 64 kt, and incorporates uncertainties in the track, intensity, and wind structure forecasts. This paper describes the new probability model and a verification of the operational forecasts from the 2006–07 seasons. The new probabilities extend to 120 h for all tropical cyclones in the Atlantic and eastern, central, and western North Pacific to 100°E. Because of the interdependence of the track, intensity, and structure forecasts, a Monte Carlo method is used to generate 1000 realizations by randomly sampling from the operational forecast center track and intensity forecast error distributions from the past 5 yr. The extents of the 34-, 50-, and 64-kt winds for the realizations are obtained from a simple wind radii model and its underlying error distributions. Verification results show that the new probability model is relatively unbiased and skillful as measured by the Brier skill score, where the skill baseline is the deterministic forecast from the operational centers converted to a binary probabilistic forecast. The model probabilities are also well calibrated and have high confidence based on reliability diagrams.
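A stripped-down version of the Monte Carlo scheme might look as follows: perturb one forecast point with stand-in track and intensity error distributions, apply a fixed wind-radii assumption, and count exceedances over the 1000 realizations. The Gaussian errors and fixed radii below are simplifying assumptions; the operational model resamples actual 5-yr error statistics and uses a wind radii model with its own error distributions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_real = 1000  # realizations, as in the operational model

# Notional official forecast at one lead time: cyclone center offset from
# the point of interest (n mi) and forecast intensity (kt).
fcst_dist_nmi, fcst_intensity_kt = 80.0, 70.0

# Stand-ins for the historical error distributions (assumed Gaussian here).
track_err = rng.normal(0.0, 60.0, n_real)    # track error, n mi
intens_err = rng.normal(0.0, 15.0, n_real)   # intensity error, kt

dist = np.abs(fcst_dist_nmi + track_err)
vmax = fcst_intensity_kt + intens_err

# Simple wind-radii stand-in: 34-kt winds assumed to extend ~100 n mi,
# shrinking for the higher thresholds.
radii = {34: 100.0, 50: 60.0, 64: 35.0}
for thresh, r in radii.items():
    p = np.mean((vmax >= thresh) & (dist <= r))
    print(f"P(winds >= {thresh} kt) ~ {p:.2f}")
```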


2004 ◽  
Vol 94 (9) ◽  
pp. 1027-1030 ◽  
Author(s):  
A. L. Mila ◽  
A. L. Carriquiry

Bayesian methods are currently much discussed and applied in several disciplines from molecular biology to engineering. Bayesian inference is the process of fitting a probability model to a set of data and summarizing the results via probability distributions on the parameters of the model and on unobserved quantities such as predictions for new observations. In this paper, after a short introduction to Bayesian inference, we present the basic features of Bayesian methodology using examples from sequencing genomic fragments, analyzing microarray gene-expression levels, reconstructing disease maps, and designing experiments.
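The "fit a probability model, summarize via posterior distributions" loop can be shown in a few lines with a conjugate Beta-Binomial example (hypothetical plant-disease counts; not one of the paper's case studies):

```python
from scipy import stats

# Prior belief about an infection rate theta: Beta(2, 8), prior mean 0.2.
a, b = 2, 8
# Data: 7 diseased plants out of 50 sampled (hypothetical numbers).
k, n = 7, 50

# Conjugacy gives the posterior in closed form: Beta(a + k, b + n - k).
posterior = stats.beta(a + k, b + n - k)
print(f"posterior mean = {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval = ({lo:.3f}, {hi:.3f})")
```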


2018 ◽  
Vol 6 (1) ◽  
pp. 49-75 ◽  
Author(s):  
Ronda Strauch ◽  
Erkan Istanbulluoglu ◽  
Sai Siddhartha Nudurupati ◽  
Christina Bandaragoda ◽  
Nicole M. Gasparini ◽  
...  

Abstract. We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates on a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km2. The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
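A single-cell sketch of the Monte Carlo core, pairing the infinite-slope factor of safety with randomly drawn soil and wetness parameters, is given below. All distributions and values are notional stand-ins for the gridded inputs, and the wetness term is a simplified steady-state proxy, not Landlab's implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000  # Monte Carlo iterations

# Notional parameter distributions for one grid cell (the model draws
# these from soil/vegetation grids and a macroscale hydrologic model).
g, rho_s, rho_w = 9.81, 2000.0, 1000.0          # m/s^2, kg/m^3
theta = np.radians(35.0)                        # slope angle
H = rng.uniform(0.5, 2.0, n)                    # soil depth, m
phi = np.radians(rng.uniform(30.0, 40.0, n))    # friction angle
C = rng.uniform(2e3, 8e3, n)                    # total cohesion, Pa
w = np.clip(rng.normal(0.6, 0.2, n), 0.0, 1.0)  # relative wetness h/H

# Infinite-slope factor of safety with a steady-state wetness term.
FS = (C / (rho_s * g * H * np.sin(theta) * np.cos(theta))
      + np.tan(phi) / np.tan(theta) * (1.0 - w * rho_w / rho_s))

print(f"annual P(landslide initiation) ~ {np.mean(FS < 1.0):.3f}")
```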


2021 ◽  
Author(s):  
Alexander Tye ◽  
Aaron Wolf ◽  
Nathan Niemi

Populations of detrital zircons are shaped by geologic factors such as sediment transport, erosion mechanisms, and the zircon fertility of source areas. Zircon U-Pb age datasets are influenced both by these geologic factors and by the statistical effects of sampling. Such statistical effects introduce significant uncertainty into the inference of parent population age distributions from detrital zircon samples. This uncertainty must be accounted for in order to understand which features of sample age distributions are attributable to earth processes and which are sampling effects. Sampling effects are likely to be significant at a range of common detrital zircon sample sizes (particularly when n < 300). In order to more accurately account for the uncertainty in estimating parent population age distributions, we introduce a new method to infer probability model ensembles (PMEs) from detrital zircon samples. Each PME represents a set of the potential parent populations that are likely to have produced a given zircon age sample. PMEs form the basis of a new metric of correspondence between two detrital zircon samples, Bayesian Population Correlation (BPC), which is shown in a suite of numerical experiments to be unbiased with respect to sample size. BPC uncertainties can be directly estimated for a specific sample comparison, and BPC results conform to analytical predictions when comparing populations with known proportions of shared ages. We implement all of these features in a set of MATLAB® scripts made freely available as open-source code and as a standalone application. The robust uncertainties, lack of sample size bias, and predictability of BPC are desirable features that differentiate it from existing detrital zircon correspondence metrics. Additionally, analysis of other sample-limited datasets with complex probability distributions may benefit from our approach.
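The sampling effect the authors highlight is easy to reproduce: repeated draws from a fixed parent population give age-mode proportions whose spread shrinks roughly as 1/sqrt(n). The Python sketch below (the authors' own code is MATLAB) uses a made-up three-mode parent population:

```python
import numpy as np

rng = np.random.default_rng(3)

# Notional parent population: three age modes (Ma) with fixed proportions.
def draw_parent(n):
    modes = rng.choice([1100.0, 1700.0, 2700.0], size=n, p=[0.5, 0.3, 0.2])
    return rng.normal(modes, 50.0)

# Repeatedly sample and record the observed proportion of the 2700 Ma mode;
# the spread of that proportion across repeats is the sampling effect that
# BPC is built to account for.
for n in (60, 300, 1000):
    props = [np.mean(draw_parent(n) > 2400.0) for _ in range(500)]
    print(f"n={n:4d}: proportion of ~2700 Ma grains = "
          f"{np.mean(props):.2f} +/- {np.std(props):.2f}")
```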


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Alamgir Khalil ◽  
Abdullah Ali H. Ahmadini ◽  
Muhammad Ali ◽  
Wali Khan Mashwani ◽  
Shokrya S. Alshqaq ◽  
...  

In this paper, a new approach for deriving continuous probability distributions is developed by incorporating an extra parameter into existing distributions. The Frechet distribution is used as a submodel for illustration, yielding a new continuous probability model termed the modified Frechet (MF) distribution. Several important statistical properties such as moments, order statistics, the quantile function, the stress-strength parameter, the mean residual life function, and the mode have been derived for the proposed distribution. To estimate the parameters of the MF distribution, the maximum likelihood estimation (MLE) method is used. To evaluate the performance of the proposed model, two real datasets are considered. Simulation studies have been carried out to investigate the performance of the parameter estimates. The results based on the real datasets and simulation studies provide evidence of the better performance of the suggested distribution.
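Since the paper's MF density is not reproduced here, the sketch below fits only the baseline Frechet (scipy's invweibull) by maximum likelihood to synthetic data and reports the log-likelihood, which is the quantity one would compare against an extended model via AIC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Synthetic data from a Frechet (inverse Weibull) distribution; the
# paper's modified Frechet adds an extra parameter to this baseline.
data = stats.invweibull.rvs(c=2.5, scale=3.0, size=500, random_state=rng)

# Maximum likelihood fit of the baseline Frechet (location fixed at 0).
c_hat, loc_hat, scale_hat = stats.invweibull.fit(data, floc=0.0)
print(f"shape = {c_hat:.2f}, scale = {scale_hat:.2f}")

# Log-likelihood of the fitted baseline, for model comparison via AIC.
ll = np.sum(stats.invweibull.logpdf(data, c_hat, loc_hat, scale_hat))
print(f"log-likelihood = {ll:.1f}")
```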


Author(s):  
Vinti Dhaka ◽  
Chandra K. Jaggi ◽  
Sarla Pareek ◽  
Piyush Kant Rai

Modern inventory systems are governed by random cause-and-effect phenomena, which makes random models the natural tool in this area, and Bayesian probability models serve that need. The present study applies basic Bayesian theory to the development of several inventory models, e.g., the inventory model for deteriorating items and the design of the classical (s, Q) model. The motivation for using Bayes' theorem is to test the efficacy of the optimal design of these models when demand is assumed to be random and to follow standard probability distributions. In this regard, we discuss the inventory model for deteriorating items and the (s, Q) model, together with their mathematical solutions under the Bayesian approach.
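As a concrete instance of Bayes-updated demand feeding an inventory decision, the sketch below uses a conjugate Gamma-Poisson model to update a demand rate and then sets the reorder point s from the posterior predictive lead-time demand. All numbers are hypothetical, and this is a generic textbook construction, not the paper's specific formulation.

```python
from scipy import stats

# Gamma prior on the Poisson demand rate (units/day); hypothetical values.
a0, b0 = 5.0, 1.0  # prior shape, rate

# Observed daily demands over two weeks (hypothetical).
demands = [4, 6, 5, 7, 3, 5, 6, 4, 5, 8, 6, 5, 4, 7]
a_post = a0 + sum(demands)
b_post = b0 + len(demands)

# With a Gamma-distributed rate, the posterior predictive of demand over a
# lead time of L days is negative binomial with r = a_post, p = b/(b + L).
L = 5
lead_demand = stats.nbinom(a_post, b_post / (b_post + L))

# Reorder point s: smallest stock level covering 95% of lead-time demand.
s = int(lead_demand.ppf(0.95))
print(f"posterior mean rate = {a_post / b_post:.2f}/day, reorder point s = {s}")
```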

