Nonparametric statistical analysis of the reliability of satellites and satellite subsystems

Author(s):  
А.А. Брусков

Reliability has long been recognized as a critical attribute of spacecraft systems. Unfortunately, the literature contains only limited data on on-orbit failures and statistical analyses of satellite reliability. To fill this gap, a nonparametric reliability analysis was carried out for satellites in near-Earth orbit. In this work, I extend the statistical analysis of satellite reliability and investigate the reliability of spacecraft subsystems. Because the data set is censored, I make extensive use of the Kaplan-Meier estimator to calculate reliability functions and derive confidence intervals for the nonparametric reliability results for each satellite subsystem.
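The Kaplan-Meier estimator mentioned above handles right-censored lifetimes, which is exactly the situation with satellites that are retired while still healthy. A minimal sketch with toy data (the failure/censoring times are illustrative, not the study's data set):

```python
# Minimal Kaplan-Meier estimator for right-censored lifetimes.
# Data are (time, event) pairs; event=1 is an observed failure,
# event=0 a censored observation (e.g. a satellite retired while healthy).

def kaplan_meier(samples):
    """Return [(t, S(t))] survival-curve steps for right-censored data."""
    samples = sorted(samples)
    n_at_risk = len(samples)
    survival = 1.0
    steps = [(0.0, 1.0)]
    i = 0
    while i < len(samples):
        t = samples[i][0]
        deaths = 0
        removed = 0
        # group all observations tied at time t
        while i < len(samples) and samples[i][0] == t:
            deaths += samples[i][1]
            removed += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
            steps.append((t, survival))
        n_at_risk -= removed
    return steps

# toy data: years on orbit; 1 = failure, 0 = censored
data = [(1, 1), (2, 0), (3, 1), (4, 0), (5, 1), (6, 0)]
curve = kaplan_meier(data)
```

Censored units leave the risk set without forcing the survival curve down, which is why the estimator avoids the pessimistic bias of treating retirements as failures. Confidence intervals like those in the study would typically come from Greenwood's variance formula on top of this curve.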

2021 ◽  
Vol 28 ◽  
pp. 146-150
Author(s):  
L. A. Atramentova

Using data obtained in a cytogenetic study as an example, we consider the typical errors made when performing statistical analysis. Widespread but flawed statistical practice inevitably produces biased results and increases the likelihood of incorrect scientific conclusions. Errors occur when the study design and the structure of the analyzed data are not taken into account. The article shows how numerical imbalance in a data set biases the result and, using an example dataset, explains how to balance the complex. It demonstrates the advantage of presenting sample estimates with confidence intervals rather than standard errors, and draws attention to the need to account for the size of the analyzed proportions when choosing a statistical method. It also shows how the same data set can be analyzed in different ways depending on the purpose of the study. The algorithm of a correct statistical analysis and the tabular form for presenting the results are described. Keywords: data structure, numerically unbalanced complex, confidence interval.
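The recommendation above, reporting proportions with confidence intervals rather than bare standard errors, can be illustrated with the Wilson score interval, which behaves well for the small proportions typical of cytogenetic aberration counts. A minimal sketch; the counts are hypothetical, not the article's data:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score confidence interval (default 95 %) for a proportion k/n."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# hypothetical example: 12 aberrant cells among 200 scored
lo, hi = wilson_ci(12, 200)
```

Unlike the naive p ± 1.96·SE interval, the Wilson interval never extends below 0 or above 1, which matters precisely when the analyzed proportions are small.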


1982 ◽  
Vol 61 (s109) ◽  
pp. 34-34
Author(s):  
Samuel J. Agronow ◽  
Federico C. Mariona ◽  
Frederick C. Koppitch ◽  
Kazutoshi Mayeda

2020 ◽  
Vol 499 (4) ◽  
pp. 5641-5652
Author(s):  
Georgios Vernardos ◽  
Grigorios Tsagkatakis ◽  
Yannis Pantazis

ABSTRACT Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional smooth-modelling approaches or treated as well-localized massive perturbers. In this work, we propose a deep learning approach to quantify the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need for any lens modelling. Our training data consist of mock lensed images assuming perturbing Gaussian Random Fields permeating the smooth overall lens potential and, for the first time, using images of real galaxies as the lensed source. We employ a novel deep neural network that can handle arbitrary uncertainty intervals associated with the training-set labels as input, provides probability distributions as output, and adopts a composite loss function. The method succeeds not only in accurately estimating the actual parameter values, but also reduces the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without having access to the actual ground-truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can quantitatively and robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
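The perturbing Gaussian Random Fields used above to build the mock training images can be drawn with the standard Fourier-filtering recipe: filter white noise with the square root of the desired power spectrum. This is a generic sketch, not the authors' pipeline; the grid size and power-law slope are illustrative assumptions:

```python
import numpy as np

def gaussian_random_field(n, slope=-4.0, seed=0):
    """Draw an n x n Gaussian random field with power spectrum P(k) ~ k**slope."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))          # white noise: flat spectrum
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                                # avoid division by zero at k = 0
    amplitude = k ** (slope / 2.0)               # sqrt of the power spectrum
    amplitude[0, 0] = 0.0                        # zero the mean (DC) mode
    field = np.fft.ifft2(np.fft.fft2(noise) * amplitude).real
    return field / field.std()                   # normalize to unit variance

field = gaussian_random_field(64)
```

A steeper (more negative) slope concentrates power on large scales, giving smoother perturbation maps; the field would then be added to a smooth lens potential to produce the mocks.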


2020 ◽  
Vol 72 (1) ◽  
Author(s):  
Ryuho Kataoka

Abstract Statistical distributions are investigated for magnetic storms, sudden commencements (SCs), and substorms to identify the possible amplitude of one-in-100-year and one-in-1000-year events from a limited data set spanning less than 100 years. The lists of magnetic storms and SCs are provided by Kakioka Magnetic Observatory, while the lists of substorms are obtained from SuperMAG. It is found that the majority of events essentially follow a log-normal distribution, as expected from the random output of a complex system. However, it is uncertain whether large-amplitude events follow the same log-normal distributions; they may instead follow power-law distributions. Based on the statistical distributions, the probable amplitudes of the 100-year (1000-year) events can be estimated for magnetic storms, SCs, and substorms as approximately 750 nT (1100 nT), 230 nT (450 nT), and 5000 nT (6200 nT), respectively. The possible origins of these statistical distributions are also discussed with reference to other space weather phenomena such as solar flares, coronal mass ejections, and solar energetic particles.
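Estimating a 100-year amplitude from a fitted log-normal distribution amounts to reading off a high quantile set by the event rate. A minimal sketch on synthetic amplitudes; the log-normal parameters and the assumed event rate are illustrative, not values from the paper:

```python
import math
import random
from statistics import NormalDist

# Synthetic storm amplitudes (nT), log-normally distributed by construction.
random.seed(1)
amplitudes = [math.exp(random.gauss(4.0, 0.8)) for _ in range(2000)]

# Fit the log-normal via the mean and standard deviation of log-amplitudes.
logs = [math.log(a) for a in amplitudes]
mu = sum(logs) / len(logs)
sigma = (sum((x - mu) ** 2 for x in logs) / (len(logs) - 1)) ** 0.5

# With an assumed ~20 events per year, the 100-year event is the quantile
# exceeded on average once in 100 * 20 = 2000 events.
events_per_year = 20
p = 1.0 - 1.0 / (100 * events_per_year)
amp_100yr = math.exp(NormalDist(mu, sigma).inv_cdf(p))
```

The paper's caveat shows up directly here: the estimate depends entirely on the distribution assumed for the tail, and the same data refitted with a power law would give a substantially larger 100-year amplitude.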


2021 ◽  
Vol 70 (10) ◽  
Author(s):  
Kazuyoshi Gotoh ◽  
Makoto Miyoshi ◽  
I Putu Bayu Mayura ◽  
Koji Iio ◽  
Osamu Matsushita ◽  
...  

The options available for treating infections with carbapenemase-producing Enterobacteriaceae (CPE) are limited; with the increasing threat of these infections, new treatments are urgently needed. Biapenem (BIPM) is a carbapenem, and only limited data confirming its in vitro killing effect against CPE are available. In this study, we examined the minimum inhibitory concentrations (MICs) and minimum bactericidal concentrations (MBCs) of BIPM for 14 IMP-1-producing Enterobacteriaceae strains isolated from the Okayama region in Japan. The MICs against almost all the isolates were lower than 0.5 µg ml−1, indicating susceptibility to BIPM, while for approximately half of the isolates the effect of BIPM was confirmed to be bacteriostatic. However, initial killing to a 99.9 % reduction was observed in seven out of eight strains in a time–kill assay. Despite the small data set, we concluded that the in vitro efficacy of BIPM suggests that the drug could be a new therapeutic option against infection with IMP-producing CPE.
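The 99.9 % reduction endpoint used in the time–kill assay above is, by convention, a 3-log10 drop in viable counts relative to the starting inoculum. A minimal sketch with hypothetical CFU values:

```python
import math

def log_kill(cfu_start, cfu_t):
    """log10 reduction in viable counts (CFU/ml) between time 0 and time t."""
    return math.log10(cfu_start / cfu_t)

def is_bactericidal(cfu_start, cfu_t):
    """Bactericidal endpoint: >= 99.9 % reduction, i.e. a >= 3-log10 kill."""
    return log_kill(cfu_start, cfu_t) >= 3.0

# hypothetical counts: inoculum of 1e6 CFU/ml falling to 5e2 CFU/ml
killed = is_bactericidal(1e6, 5e2)
```

The same arithmetic underlies the MBC definition: the lowest drug concentration achieving a 99.9 % reduction of the initial inoculum.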


2021 ◽  
Author(s):  
Monique B. Sager ◽  
Aditya M. Kashyap ◽  
Mila Tamminga ◽  
Sadhana Ravoori ◽  
Christopher Callison-Burch ◽  
...  

BACKGROUND Reddit, the fifth most popular website in the United States, boasts a large and engaged user base on its dermatology forums, where users crowdsource free medical opinions. Unfortunately, much of the advice provided is unvalidated and could lead to inappropriate care. Initial testing has shown that artificially intelligent bots can detect misinformation on Reddit forums and may be able to produce responses to posts containing misinformation. OBJECTIVE To analyze the ability of bots to find and respond to health misinformation on Reddit’s dermatology forums in a controlled test environment. METHODS Using natural language processing techniques, we trained bots to target misinformation using relevant keywords and to post pre-fabricated responses. We compared performance by evaluating different model architectures on a held-out test set. RESULTS Our models yielded test accuracies ranging from 95% to 100%, with a fine-tuned BERT model achieving the highest test accuracy. Bots were then able to post corrective pre-fabricated responses to misinformation. CONCLUSIONS Using a limited data set, bots had near-perfect ability to detect these examples of health misinformation within Reddit dermatology forums. Given that these bots can then post pre-fabricated responses, this technique may allow for interception of misinformation. Providing correct information, even instantly, however, does not mean users will be receptive or find such interventions persuasive. Further work should investigate this strategy’s effectiveness to inform future deployment of bots as a technique in combating health misinformation. CLINICALTRIAL N/A
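The detect-then-respond loop described above can be sketched at its simplest as keyword matching followed by a canned reply. This is a toy baseline, not the study's fine-tuned BERT model, and the keyword list and response text are hypothetical:

```python
# Hypothetical keyword list; the study's actual targeting terms are not given here.
MISINFO_KEYWORDS = {"miracle cure", "detox your skin", "toothpaste cures acne"}

CANNED_RESPONSE = ("This claim is not supported by dermatological evidence; "
                   "please consult a board-certified dermatologist.")

def flag_post(text):
    """Return a corrective pre-fabricated reply if the post matches a
    misinformation keyword, otherwise None."""
    lowered = text.lower()
    for phrase in MISINFO_KEYWORDS:
        if phrase in lowered:
            return CANNED_RESPONSE
    return None

reply = flag_post("I heard toothpaste cures acne overnight!")
```

A transformer classifier replaces the keyword test with a learned decision, but the surrounding bot logic, scan posts, classify, post a pre-fabricated correction, is the same.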


2014 ◽  
Vol 3 (4) ◽  
pp. 130
Author(s):  
NI MADE METTA ASTARI ◽  
NI LUH PUTU SUCIPTAWATI ◽  
I KOMANG GDE SUKARSA

Statistical analysis that aims to model a linear relationship between an independent variable and a dependent variable is known as regression analysis. The method commonly used to estimate the parameters in a regression analysis is Ordinary Least Squares (OLS). However, the OLS assumptions are often violated; in particular, the normality assumption fails in the presence of outliers. As a result, the parameter estimators produced by OLS will be biased. The residual bootstrap is a bootstrap method in which the resampling process is applied to the residuals. The results showed that the residual bootstrap method is able to overcome the bias only when the proportion of outliers is 5%, with 99% confidence intervals. The parameter estimates produced by the residual bootstrap approach the initial OLS estimates, which also shows that the bootstrap is an accurate estimation tool.
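The residual bootstrap described above refits the regression on data rebuilt from the fitted values plus resampled residuals. A minimal sketch for simple regression with synthetic data; the sample size, true slope, and replicate count are illustrative assumptions:

```python
import random

def ols_slope(xs, ys):
    """OLS slope and intercept for simple regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return b, my - b * mx

def residual_bootstrap_slopes(xs, ys, n_boot=1000, seed=0):
    """Resample OLS residuals to build a bootstrap distribution of the slope."""
    rng = random.Random(seed)
    b, a = ols_slope(xs, ys)
    fitted = [a + b * x for x in xs]
    resid = [y - f for y, f in zip(ys, fitted)]
    slopes = []
    for _ in range(n_boot):
        # rebuild the response: fitted values + residuals drawn with replacement
        ys_star = [f + rng.choice(resid) for f in fitted]
        slopes.append(ols_slope(xs, ys_star)[0])
    return sorted(slopes)

# synthetic data with true slope 2.0
rng = random.Random(42)
xs = list(range(20))
ys = [2.0 * x + 1.0 + rng.gauss(0, 1) for x in xs]

slopes = residual_bootstrap_slopes(xs, ys)
ci = (slopes[24], slopes[-25])  # 95% percentile interval from 1000 replicates
```

Because only residuals are resampled, the design points stay fixed, which is the standard choice when the x values are regarded as non-random; with heavy outliers the resampled residuals inherit the contamination, which is consistent with the limited robustness reported above.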


Instruksional ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 9
Author(s):  
Nirwana Nirwana

Effect of the macro role-playing method on children's speaking ability in group B at Nurul Rohmah Kindergarten, Bekasi. This research is motivated by children's poor speaking ability, which arises because the learning process tends to rely on conventional methods. The macro role-playing method was chosen so that children can be motivated and interested in learning, stimulating their speaking ability. The population of this study was all children of group B in Nurul Rohmah Kindergarten; the sample was 11 children of group B3 as the experimental group and 11 children of group B1 as the control group. Data were collected through tests and observation, and analyzed using descriptive statistics and nonparametric statistics. Based on the Wilcoxon test, the calculated T value is 66 against a table value of 11, so T count (66) > T table (11); H1 is accepted and H0 is rejected, meaning the macro role-playing method affects children's speaking ability. Likewise, the calculated Z value is 2.93 against a table value of 1.645, so Z count (2.93) > Z table (1.645); H1 is accepted and H0 is rejected, which again means the macro role-playing method affects children's speaking ability. These results indicate a change in children's speaking ability before and after learning based on the macro role-playing method.
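The Z value reported above follows from the standard normal approximation to the Wilcoxon signed-rank statistic, Z = (T − n(n+1)/4) / √(n(n+1)(2n+1)/24). A sketch reproducing it from the abstract's T = 66 with n = 11 pairs (note that 66 = 11·12/2 is the maximum possible rank sum, i.e. every child's score moved in the same direction):

```python
import math

def wilcoxon_z(t_stat, n):
    """Normal approximation to the Wilcoxon signed-rank statistic:
    Z = (T - mean) / sd with mean = n(n+1)/4, var = n(n+1)(2n+1)/24."""
    mean = n * (n + 1) / 4.0
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    return (t_stat - mean) / sd

z = wilcoxon_z(66, 11)  # the abstract's T = 66 with n = 11 pairs
```

The result, about 2.93, exceeds the one-sided 5 % critical value of 1.645, matching the abstract's decision to reject H0.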

