standard normal distribution
Recently Published Documents


TOTAL DOCUMENTS

193
(FIVE YEARS 62)

H-INDEX

11
(FIVE YEARS 2)

Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2139
Author(s):  
Xiuqiong Chen ◽  
Jiayi Kang ◽  
Mina Teicher ◽  
Stephen S.-T. Yau

Nonlinear filtering is of great significance in industry. In this work, we develop a new linear regression Kalman filter for discrete nonlinear filtering problems. Under the linear regression Kalman filter framework, the key step is minimizing the Kullback–Leibler divergence between the standard normal distribution and its Dirac mixture approximation formed by symmetric samples, so that we obtain a set of samples that captures the information of the reference density. The samples representing the conditional densities evolve deterministically, and therefore we need fewer samples than the particle filter, as our method has less variance. The numerical results show that the new algorithm is more efficient than the widely used extended Kalman filter, unscented Kalman filter, and particle filter.
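The abstract's central idea — deterministic symmetric samples standing in for a Gaussian, propagated through the system dynamics — can be sketched in a few lines. The KL-optimised symmetric Dirac mixture of the paper is not reproduced here; as a stand-in, this sketch uses classic unscented-transform points, which are also symmetric deterministic samples, to perform one linear-regression-Kalman-filter-style prediction step in the scalar case:

```python
import math

def symmetric_sigma_points(mean, var, kappa=2.0):
    """Symmetric deterministic samples for a scalar Gaussian.

    Classic unscented-transform points, used here only as a stand-in for
    the paper's KL-optimised symmetric Dirac mixture (the optimisation
    itself is not reproduced).
    """
    spread = math.sqrt((1.0 + kappa) * var)
    points = [mean, mean - spread, mean + spread]
    w0 = kappa / (1.0 + kappa)
    wi = 0.5 / (1.0 + kappa)
    return points, [w0, wi, wi]

def propagate(f, mean, var):
    """One deterministic prediction step in the linear regression
    Kalman filter style: push the samples through f, refit a Gaussian."""
    points, weights = symmetric_sigma_points(mean, var)
    ys = [f(x) for x in points]
    m = sum(w * y for w, y in zip(weights, ys))
    v = sum(w * (y - m) ** 2 for w, y in zip(weights, ys))
    return m, v

# For a linear map the Gaussian moments are reproduced exactly.
m, v = propagate(lambda x: 2.0 * x + 1.0, mean=0.0, var=1.0)
```

Because the samples are deterministic, every run gives the same result — the source of the reduced variance (and reduced sample count) the abstract claims over the particle filter.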


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Shu-min Fan ◽  
Bei Xia ◽  
Wei-xiang Liu ◽  
Wei Yu ◽  
Zhi-xia Wu ◽  
...  

Abstract Background Z score utility is emphasized in classifying coronary artery lesions in Kawasaki disease patients. The present study is the largest multicenter Chinese pediatric study to date on coronary artery diameter reference values and a Z score regression equation, and is useful in Chinese pediatric echocardiography. Methods A multicenter cohort of 852 healthy children between 1 month and 17 years of age was assembled; ten children were excluded because their ultrasound images were not clear or they were lost to follow-up. Diameters of the right coronary artery, left coronary artery, and left anterior descending coronary artery were assessed using echocardiography. Data were body surface area (BSA)-corrected using BSA calculated via either the Stevenson formula (BSAste) or the Haycock formula (BSAhay). Coronary artery diameter reference values and Z score regression equations were established for use in the Chinese pediatric population. Results No difference was observed between coronary artery diameter data corrected using BSAste and BSAhay. Of the five assessed regression models, the exponential model exhibited the best fit and was therefore selected as the basis for derivation of the SZ method. The Z scores produced by the SZ method conformed to the standard normal distribution, while those produced by the D method did not. In addition, there was a statistically significant difference between Z scores produced by the SZ and D methods (P < 0.05). Conclusions Coronary artery diameter reference values for echocardiography were successfully established for the Chinese pediatric population, and a Z score regression equation more suitable for clinical use in this population was developed.
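The workflow the abstract describes — compute BSA, predict an expected diameter from a BSA-based exponential regression, report the deviation in standard-deviation units — can be sketched as follows. The coefficients A, B and SD below are purely hypothetical placeholders, not the study's fitted values; only the Haycock BSA formula is a known standard:

```python
import math

# Hypothetical coefficients for an exponential regression of coronary
# artery diameter (mm) on body surface area; the study's fitted values
# are NOT reproduced here.
A, B, SD = 1.0, 0.8, 0.15  # predicted diameter = A * exp(B * BSA)

def haycock_bsa(weight_kg, height_cm):
    """Haycock body surface area formula (m^2)."""
    return 0.024265 * weight_kg ** 0.5378 * height_cm ** 0.3964

def z_score(diameter_mm, bsa_m2):
    """Z score of a measured diameter relative to the exponential model."""
    predicted = A * math.exp(B * bsa_m2)
    return (diameter_mm - predicted) / SD
```

A Z score near 0 means the measured diameter matches the model's prediction for that body size; lesion-classification cut-offs are then applied to this normalized value.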


Author(s):  
Hime Oliveira

This work addresses the problem of sampling from Gaussian probability distributions by means of uniform samples obtained deterministically and directly from space-filling curves (SFCs), a purely topological concept. To that end, the well-known inverse cumulative distribution function method is used with the help of the probit function, which is the inverse of the cumulative distribution function of the standard normal distribution. Mainly due to the central limit theorem, the Gaussian distribution plays a fundamental role in probability theory and related areas, which is why it was chosen for study in the present paper. Numerical distributions (histograms) obtained with the proposed method, at several levels of granularity, are compared to the theoretical normal PDF, along with other established sampling methods, all using the cited probit function. Final results are validated with the Kullback–Leibler and two other divergence measures, making it possible to draw conclusions about the adequacy of the presented paradigm. As is amply known, the generation of uniform random numbers is a deterministic simulation of randomness using numerical operations; sequences resulting from this kind of procedure are therefore not truly random. Even so, and to be consistent with the literature, the expression "random number" is used throughout the text to mean "pseudo-random number".
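The inverse-CDF (probit) method the abstract relies on is easy to demonstrate. The space-filling-curve uniforms of the paper are not reproduced; as an illustrative stand-in, this sketch feeds a deterministic van der Corput sequence through Python's built-in normal quantile function:

```python
from statistics import NormalDist

def van_der_corput(n, base=2):
    """Deterministic low-discrepancy sequence in (0, 1), a simple
    stand-in for the paper's space-filling-curve uniforms."""
    seq = []
    for i in range(1, n + 1):
        q, denom, x = i, 1.0, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        seq.append(x)
    return seq

# Inverse-CDF (probit) transform: deterministic uniforms -> normal samples.
probit = NormalDist().inv_cdf
samples = [probit(u) for u in van_der_corput(1023)]
sample_mean = sum(samples) / len(samples)
sample_var = sum(s * s for s in samples) / len(samples)
```

The symmetry of the uniform sequence makes the empirical mean essentially zero, and the empirical variance approaches 1 (slightly below it here, since the deterministic grid never reaches the extreme tails).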


Water ◽  
2021 ◽  
Vol 13 (16) ◽  
pp. 2274
Author(s):  
Guido Leone ◽  
Pasquale Clemente ◽  
Libera Esposito ◽  
Francesco Fiorillo

Debris flows that have occurred in the area of San Martino Valle Caudina (Campania, Southern Italy) are described by geomorphological and hydrological analyses, focusing on the recent event of December 2019. This area can be considered a key example for studying debris-flow phenomena involving the pyroclastic mantle that covers the karstified bedrock along steep slopes. A hydrological analysis of the time series of maximum annual rainfall, for durations of 1, 3, 6, 12 and 24 h, was carried out based on a new approach to assess rainstorm magnitude. Magnitude was quantified by measuring the deviation of the rainfall intensity from normal conditions within a specified time period. As time series of annual maxima are typically skewed, a preliminary transformation is needed to normalize the distribution; to obtain the Z-value of the standard normal distribution, with mean µ = 0 and standard deviation σ = 1, different probability distribution functions were fitted to the actual data. A specific boxplot was used, with box width Z = ±1 and whisker length Z = ±2; deviations from these values indicate the performance of the distribution fits. For the normalized time series, the rates shown by the trends and their significance were investigated for the available time series of 11 rain gauges covering the Western–Central Campania region. The most critical condition for debris-flow initiation appears to occur when a severe or extreme rainfall has a duration ≥ 12 h. The trend analysis did not detect statistically significant increases in the intensity of rainfall of duration ≥ 6 h.
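The normalization step — fit a skew-tolerant distribution to the annual maxima, then map each observation through the fitted CDF and the standard normal quantile function — can be sketched with a Gumbel fit by the method of moments (one plausible candidate; the paper compares several distributions, none of whose fitted parameters are reproduced here):

```python
import math
from statistics import NormalDist, mean, stdev

def gumbel_z_scores(annual_maxima):
    """Transform a skewed series of annual rainfall maxima to standard
    normal Z values: fit a Gumbel distribution by the method of moments,
    then map each value through the fitted CDF and the normal quantile."""
    m, s = mean(annual_maxima), stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi          # Gumbel scale
    mu = m - 0.5772156649 * beta               # Gumbel location (Euler gamma)
    inv = NormalDist().inv_cdf
    # Gumbel CDF: F(x) = exp(-exp(-(x - mu) / beta)); Z = Phi^-1(F(x)).
    return [inv(math.exp(-math.exp(-(x - mu) / beta))) for x in annual_maxima]

# Example: 1-h annual maxima (mm) -- illustrative values only.
z = gumbel_z_scores([22.0, 35.5, 28.4, 60.2, 31.0, 26.8, 44.1, 30.2])
```

Because the transform is monotone, the ranking of wet years is preserved; severe events then appear as Z values beyond the ±1 box or ±2 whiskers of the boxplot described in the abstract.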


Author(s):  
Faiz Marikar

The key factor of an assessment is to minimize errors by ensuring good reliability and validity of the assessment yardstick. To achieve a high score in a test, the examinee must be aware of the assessment cycle and use it appropriately; post-examination analysis of the results can then serve as constructive feedback in any given program. This cross-sectional study was conducted at the Department of Biochemistry, University of Rajarata. Multiple choice questions, structured essay questions, an objective structured practical examination, and continuous assessment were used in this study. A total of 180 students were assessed for difficulty index, discrimination index, reliability, and standard error of measurement. For the analysis, the examiner divides students into 'high' and 'low' groups according to each student's score. This is commonly done incorrectly, by taking the high and low clusters as 25% each, i.e., the upper and lower quartiles. In this study we compared this practice with the standard normal distribution curve, where the high and low groups each comprise 16% of examinees, which is the standard. There was no significant difference between the two clusterings, and we recommend using the standard 16% for the high and low groups in post-examination analysis. Keywords: difficulty index, post-examination analysis, reliability of the examination, standard error of measurement
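The two item-analysis statistics the abstract discusses can be computed directly from a score matrix. This is a minimal sketch (variable names and the toy data are illustrative, not from the study), parameterized by the tail fraction so the quartile-based 25% and the recommended 16% grouping can both be tried:

```python
def item_analysis(scores_by_item, totals, frac=0.16):
    """Difficulty and discrimination indices using tail groups.

    scores_by_item[i][s] is 1 if student s answered item i correctly;
    totals[s] is student s's total score.  frac is the tail fraction
    defining the 'high' and 'low' groups; the study recommends 0.16
    (the one-SD tails of the normal curve) rather than quartile-based 0.25.
    """
    order = sorted(range(len(totals)), key=lambda s: totals[s])
    k = max(1, round(frac * len(totals)))
    low, high = order[:k], order[-k:]
    results = []
    for item in scores_by_item:
        p_high = sum(item[s] for s in high) / k
        p_low = sum(item[s] for s in low) / k
        difficulty = sum(item) / len(totals)   # proportion correct overall
        results.append((difficulty, p_high - p_low))
    return results

# One item answered correctly only by the five strongest of ten students:
results = item_analysis([[0] * 5 + [1] * 5], totals=list(range(10)))
```

Here the single item has difficulty 0.5 and a perfect discrimination index of 1.0, since the high group always answers it and the low group never does.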


Author(s):  
Ahmad Hanandeh ◽  
Omar Eidous

This paper deals with a new, simple one-term approximation to the cumulative distribution function (c.d.f.) of the standard normal distribution, which does not have a closed-form representation. The accuracy of the proposed approximation is measured using the maximum absolute error, and the same criterion is used to compare this approximation with the existing one-term approximations available in the literature. Our approximation has a maximum absolute error of about 0.0016, an accuracy sufficient for most practical applications.
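The evaluation criterion — maximum absolute error against the exact CDF — is easy to reproduce. The paper's own formula is not given in the abstract, so this sketch instead measures a classical one-term approximation from the literature, the logistic form with k = 1.702 (whose maximum error, around 0.0095, is several times larger than the 0.0016 claimed above):

```python
import math

def phi(x):
    """Exact standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_logistic(x):
    """A classical one-term approximation (logistic with k = 1.702).
    This is NOT the paper's new formula; it only illustrates how
    one-term approximations are evaluated and compared."""
    return 1.0 / (1.0 + math.exp(-1.702 * x))

# Maximum absolute error over a fine grid -- the accuracy criterion
# used in the paper.
grid = [i / 100.0 for i in range(-500, 501)]
mae = max(abs(phi(x) - phi_logistic(x)) for x in grid)
```

Swapping `phi_logistic` for any candidate formula gives its maximum absolute error under the same criterion.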


2021 ◽  
Author(s):  
Salam Al-Juboori ◽  
Xavier Fernando

Accurate detection of white spaces is crucial to protect the primary user against interference from the secondary user. Multipath fading and correlation among diversity branches represent essential challenges in Cognitive Radio Network Spectrum Sensing (CRNSS). This dissertation investigates the problem of correlation among multiple diversity receivers in wireless communications in the presence of multipath fading. The work has two parts: analysis and solution. In the analysis part, the dissertation implements a unified approach to performance analysis for cognitive spectrum sensing. It considers a more realistic sensing scenario in which non-independent multipath fading channels with diversity combining are assumed. Maximum Ratio Combining (MRC), Equal Gain Combining (EGC), Selection Combining (SC) and Switch and Stay Combining (SSC) techniques are employed. Dual, triple and L-branch correlated Nakagami-m fading with arbitrary, constant and exponential correlation models is investigated. We derive novel closed-form expressions for the average detection probability in each sensing scenario, along with simpler and more general alternative expressions. Our numerical analysis reveals the deterioration in detection probability due to correlation, especially in deep fading. Consequently, the interference rate between the primary and secondary user increases to three times the rate observed when independent fading branches are assumed. However, results also show that this effect can be compensated for by employing the appropriate diversity technique and by increasing the number of diversity branches. Therefore, correlation cannot be overlooked in deep fading, whereas in low fading it can be ignored to reduce complexity and computation. Furthermore, at low fading, low false alarm probability and high correlation, EGC, a simpler scheme, performs as well as the more complex MRC. Similar results are observed for SC and SSC.
For the solution part, and toward combating the impact of correlation on wireless systems, a decorrelator implemented at the receiver is very beneficial. We propose such a decorrelator scheme, which significantly alleviates the correlation effect. We derive closed-form expressions for the decorrelator receiver detection statistics, including the Probability Density Function (PDF), from fundamental principles, considering a dual-antenna SC receiver in Nakagami-m fading channels. Numerical results show that the PDF of the bivariate difference can be perfectly represented by a semi-standard normal distribution with zero mean and a constant variance depending on the bivariate's parameters. This observation significantly simplifies the design of the decorrelator receiver. The derived statistics can be used in the problem of self-interference for multicarrier systems. Results also show that the outage probability is improved twofold thanks to the decorrelator.
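The core scenario — correlated fading branches feeding a selection combiner — can be simulated in a few lines. The dissertation's closed-form Nakagami-m analysis is not reproduced; this sketch uses the simplest Nakagami case (m = 1, i.e. Rayleigh) with a single Gaussian correlation parameter as an illustrative stand-in:

```python
import math
import random

def correlated_rayleigh_pairs(rho, n, seed=1):
    """n pairs of correlated Rayleigh envelopes (Nakagami-m with m = 1,
    the simplest case) built from correlated Gaussians; rho is the
    underlying Gaussian correlation, an illustrative stand-in for the
    dissertation's Nakagami-m branch correlation models."""
    rng = random.Random(seed)
    c = math.sqrt(1.0 - rho ** 2)
    pairs = []
    for _ in range(n):
        # Each envelope has in-phase and quadrature Gaussian components.
        x1, y1 = rng.gauss(0, 1), rng.gauss(0, 1)
        x2 = rho * x1 + c * rng.gauss(0, 1)
        y2 = rho * y1 + c * rng.gauss(0, 1)
        pairs.append((math.hypot(x1, y1), math.hypot(x2, y2)))
    return pairs

def selection_combining(pairs):
    """SC picks the stronger of the two branch envelopes."""
    return [max(a, b) for a, b in pairs]

pairs = correlated_rayleigh_pairs(0.7, 20000)
branch_mean = sum(a for a, _ in pairs) / len(pairs)   # ~ sqrt(pi/2)
sc_mean = sum(selection_combining(pairs)) / len(pairs)
# Correlation shrinks the SC gain toward the single-branch mean,
# mirroring the detection-probability loss the dissertation quantifies.
```

Raising `rho` toward 1 makes `sc_mean` approach `branch_mean` — the diversity benefit evaporates as the branches become copies of each other, which is the loss a decorrelator is meant to recover.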

