Based on Knowledge Recognition and Using Binomial Distribution Function to Establish a Mathematical Model of Random Selection of Test Questions in the Test Bank

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Yuan Chang

With the deepening of social reform, making enterprise online examinations scientific has become increasingly urgent and important. The key to scientific examinations is the automation and rationalization of test-question setting, which makes the construction and implementation of a test question bank all the more important. In implementing a test question bank, a central requirement is to select satisfactory questions at random from a large pool so that the average difficulty, discrimination, and reliability of the resulting test are acceptable. Random question selection is therefore one of the main difficulties in realizing a test question bank. To address it, the author combines experience in constructing test question banks with discrete binomial random variables to establish a mathematical model for question selection. By specifying the form of the test questions and the distribution of question difficulty, and then selecting questions with a random function, better results can be achieved.
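The selection idea can be illustrated with a short sketch (not the paper's code): difficulty levels are drawn from a discrete binomial random variable, and a question is then picked at random from the matching difficulty band. The bank layout, MAX_LEVEL, and the Binomial(MAX_LEVEL, p) target are illustrative assumptions.

```python
import random
from collections import defaultdict

# Hypothetical question bank: each question is (question_id, difficulty_level),
# with difficulty_level an integer in 0..MAX_LEVEL.
MAX_LEVEL = 5

def select_questions(bank, num_questions, p=0.5, rng=None):
    """Randomly select questions so that the difficulty levels of the chosen
    questions approximately follow a Binomial(MAX_LEVEL, p) distribution.
    For simplicity, the same question may be drawn more than once."""
    rng = rng or random.Random()
    by_level = defaultdict(list)
    for qid, level in bank:
        by_level[level].append(qid)

    selected = []
    while len(selected) < num_questions:
        # Draw a target difficulty level as a sum of MAX_LEVEL Bernoulli(p)
        # trials, i.e. a Binomial(MAX_LEVEL, p) random variable.
        level = sum(rng.random() < p for _ in range(MAX_LEVEL))
        candidates = by_level.get(level)
        if candidates:                      # skip empty difficulty bands
            selected.append(rng.choice(candidates))
    return selected

# Toy bank of 60 questions with difficulties spread over all levels.
bank = [(i, i % (MAX_LEVEL + 1)) for i in range(60)]
print(select_questions(bank, num_questions=10))
```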

2020 ◽  
pp. 9-13
Author(s):  
A. V. Lapko ◽  
V. A. Lapko

An original technique is justified for fast selection of the bandwidths of kernel functions in a nonparametric estimate of a multidimensional probability density of the Rosenblatt–Parzen type. The proposed method significantly increases the computational efficiency of the optimization procedure for kernel probability density estimates under conditions of large-volume statistical data in comparison with traditional approaches. The approach is based on analysis of the formula for the optimal bandwidths of a multidimensional kernel probability density estimate. Dependences are found between a nonlinear functional of the probability density and its derivatives up to the second order, inclusive, and the antikurtosis coefficients of the random variables. The bandwidth for each random variable is represented as the product of an undetermined parameter and its standard deviation. The influence of the error in recovering the established functional dependences on the approximation properties of the kernel probability density estimate is determined. The results are implemented as a method for the synthesis and analysis of fast bandwidth selection for a kernel estimate of the two-dimensional probability density of independent random variables, using data on the quantitative characteristics of a family of lognormal distribution laws.
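As a rough illustration of the structure described here (bandwidth = common parameter × per-coordinate standard deviation), the following sketch uses a Scott-type factor n^(-1/(d+4)) as a stand-in for the paper's antikurtosis-based dependence, which is not reproduced; the function names and the lognormal test sample are assumptions.

```python
import numpy as np

def fast_bandwidths(data, scale_rule=None):
    """Sketch of the 'bandwidth = parameter * standard deviation' structure.
    `data` is an (n_samples, n_dims) array.  The common scale factor below
    uses a Scott-type rule n**(-1/(d+4)) as a stand-in; the paper instead
    derives it from a functional of the density via antikurtosis
    coefficients, which is not reproduced here."""
    n, d = data.shape
    sigma = data.std(axis=0, ddof=1)           # per-variable standard deviation
    c = scale_rule(n, d) if scale_rule else n ** (-1.0 / (d + 4))
    return c * sigma                            # one bandwidth per coordinate

def parzen_density(x, data, bandwidths):
    """Rosenblatt-Parzen estimate at point x with a product Gaussian kernel."""
    z = (x - data) / bandwidths                 # broadcast over samples
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return np.mean(np.prod(kernels / bandwidths, axis=1))

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=0.5, size=(500, 2))
h = fast_bandwidths(sample)
print(h, parzen_density(np.array([1.0, 1.0]), sample, h))
```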


2003 ◽  
Vol 17 (1) ◽  
pp. 1-14 ◽  
Author(s):  
Peggy A. Hite ◽  
John Hasseldine

This study analyzes a random selection of Internal Revenue Service (IRS) office audits from October 1997 to July 1998, the type of audit that concerns most taxpayers. Taxpayers engage paid preparers in order to avoid this type of audit and to avoid any resulting tax adjustments. The study examines whether there are more audit adjustments and penalty assessments on tax returns prepared with paid-preparer assistance than on returns prepared without such assistance. By comparing the frequency of adjustments in IRS office audits, the study finds significantly fewer tax adjustments on paid-preparer returns than on self-prepared returns. Moreover, CPA-prepared returns resulted in fewer audit adjustments than non-CPA-prepared returns.


Author(s):  
RONALD R. YAGER

We look at the issue of obtaining a variance-like measure associated with probability distributions over ordinal sets; we call these dissonance measures. We specify some general properties desired of these dissonance measures. The centrality of the cumulative distribution function in formulating the concept of dissonance is pointed out. We introduce some specific examples of measures of dissonance.
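As a concrete, hypothetical illustration of a CDF-based, variance-like quantity for an ordinal distribution, one could use the sum of F_j(1 - F_j) over the cumulative probabilities F_j; this is only an example in the spirit of the abstract, not necessarily one of the measures introduced in the paper.

```python
def dissonance(probabilities):
    """One plausible CDF-based, variance-like 'dissonance' measure for a
    probability distribution over an ordered (ordinal) set: it is zero when
    all mass sits on a single category and grows as mass spreads toward the
    two extremes.  This illustrates the CDF-centric idea in the abstract,
    not necessarily a measure Yager defines."""
    cdf, total = [], 0.0
    for p in probabilities:
        total += p
        cdf.append(total)
    # F_j * (1 - F_j) is largest when the CDF lingers near 1/2, i.e. when
    # the distribution splits its mass across distant categories.
    return sum(f * (1.0 - f) for f in cdf[:-1])

print(dissonance([1.0, 0.0, 0.0]))       # 0.0: all mass on one category
print(dissonance([0.5, 0.0, 0.5]))       # 0.5: maximal split across extremes
print(dissonance([1/3, 1/3, 1/3]))       # ~0.44: intermediate spread
```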


1987 ◽  
Vol 102 (2) ◽  
pp. 329-349 ◽  
Author(s):  
Philip S. Griffin ◽  
William E. Pruitt

Let X, X_1, X_2, … be a sequence of non-degenerate i.i.d. random variables with common distribution function F. For 1 ≤ j ≤ n, let m_n(j) be the number of X_i satisfying either |X_i| > |X_j|, 1 ≤ i ≤ n, or |X_i| = |X_j|, 1 ≤ i ≤ j, and let (r)X_n = X_j if m_n(j) = r. Thus (r)X_n is the rth largest random variable in absolute value from among X_1, …, X_n, with ties broken according to the order in which the random variables occur. Set (r)S_n = (r+1)X_n + … + (n)X_n and write S_n for (0)S_n. We will refer to (r)S_n as a trimmed sum.
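A minimal sketch, not from the paper, showing how the tie-breaking rank m_n(j) and the trimmed sum (r)S_n could be computed for a finite sample:

```python
def trimmed_sum(xs, r):
    """Compute the trimmed sum (r)S_n: discard the r observations that are
    largest in absolute value (ties broken in favour of the observation that
    occurs first) and sum the rest.  trimmed_sum(xs, 0) is the ordinary sum S_n."""
    n = len(xs)

    def m(j):
        # m_n(j): count i with |X_i| > |X_j| (any i), plus i <= j with
        # |X_i| == |X_j|.  m(j) == 1 means X_j is the largest in absolute value.
        return (sum(1 for i in range(n) if abs(xs[i]) > abs(xs[j]))
                + sum(1 for i in range(j + 1) if abs(xs[i]) == abs(xs[j])))

    # (r)S_n sums the terms ranked r+1, ..., n, i.e. those with m_n(j) > r.
    return sum(xs[j] for j in range(n) if m(j) > r)

xs = [3.0, -5.0, 5.0, 1.0]
print(trimmed_sum(xs, 0))   # ordinary sum S_n: 4.0
print(trimmed_sum(xs, 1))   # drop -5.0, the first of the tied |5|'s: 9.0
print(trimmed_sum(xs, 2))   # also drop 5.0: 4.0
```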


2012 ◽  
Vol 22 (03) ◽  
pp. 1250007 ◽  
Author(s):  
PEDRO RODRÍGUEZ ◽  
MARÍA CECILIA RIVARA ◽  
ISAAC D. SCHERSON

A novel parallelization of the Lepp-bisection algorithm for triangulation refinement on multicore systems is presented. Randomization and careful use of the memory hierarchy are shown to substantially improve algorithm performance. Given a list of selected triangles to be refined, random selection of candidates, together with pre-fetching of Lepp-submeshes, leads to a scalable and efficient multicore parallel implementation. The quality of the refinement is shown to be preserved.
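The randomization idea can be sketched as follows. This is an illustration only: it shows random ordering of the candidate list and the local longest-edge bisection step; the Lepp propagation over neighbouring triangles, the submesh pre-fetching, and the actual multicore execution described in the paper are omitted.

```python
import math
import random

def longest_edge_bisection(tri):
    """Bisect a triangle (three (x, y) vertices) at the midpoint of its
    longest edge, returning the two children.  Only the local step is shown;
    Lepp propagation and conformity handling are omitted from this sketch."""
    a, b, c = tri
    edges = [((a, b), c), ((b, c), a), ((c, a), b)]
    (p, q), opposite = max(edges, key=lambda e: math.dist(e[0][0], e[0][1]))
    mid = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    return [(p, mid, opposite), (mid, q, opposite)]

def refine_randomized(selected, rounds=3, rng=None):
    """Process the selected-triangle list in random order, mimicking the
    random candidate selection used to spread work across cores."""
    rng = rng or random.Random(0)
    for _ in range(rounds):
        rng.shuffle(selected)               # random selection of candidates
        next_level = []
        for tri in selected:
            next_level.extend(longest_edge_bisection(tri))
        selected = next_level
    return selected

mesh = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
print(len(refine_randomized(mesh)))         # 8 triangles after 3 rounds
```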


2017 ◽  
Vol 20 (5) ◽  
pp. 939-951
Author(s):  
Amal Almarwani ◽  
Bashair Aljohani ◽  
Rasha Almutairi ◽  
Nada Albalawi ◽  
Alya O. Al Mutairi

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kent McFadzien ◽  
Lawrence W. Sherman

Purpose: The purpose of this paper is to demonstrate a "maintenance pathway" for ensuring a low false negative rate in closing investigations unlikely to lead to a clearance (detection).
Design/methodology/approach: A randomised controlled experiment testing solvability factors for non-domestic cases of minor violence.
Findings: A random selection of 788 cases, of which 428 would have been screened out, was sent forward for full investigation. The number of cases actually detected was 22; 19 of these were from the 360 recommended for allocation. This represents an improvement in accuracy over the original tests of the model three years earlier.
Research limitations/implications: This study shows how the safety of an investigative triage tool can be checked on a continuous basis for accuracy in predicting the cases unlikely to be solved if referred for full investigation.
Practical implications: This safety-check pathway means that many more cases can be closed after preliminary investigations, saving substantial time for working on cases more likely to yield a detection if sufficient time is put into them.
Social implications: More offenders may be caught and brought to justice by using triage with a safety backstop for accurate forecasting.
Originality/value: This is the first published study of a maintenance pathway based on a random selection of cases that would otherwise not have been investigated. If widely applied, it could yield far greater time for police to pursue high-harm, serious violence.

