equal difficulty
Recently Published Documents


TOTAL DOCUMENTS: 20 (FIVE YEARS: 4)
H-INDEX: 7 (FIVE YEARS: 0)

Author(s):  
Gabriel Anzer ◽  
Pascal Bauer

Passes are by far football's (soccer's) most frequent event, yet surprisingly little meaningful research has been devoted to quantifying them. With the increasing availability of so-called positional data, which describe the positioning of players and ball at every moment of the game, our work aims to determine the difficulty of every pass by calculating its success probability based on its surrounding circumstances. As most experts will agree, not all passes are of equal difficulty; most traditional metrics, however, count them as such. With our work we can quantify how well players execute passes, assess their risk profiles, and even compute completion probabilities for hypothetical passes by combining physical and machine learning models. Our model uses the first 0.4 seconds of a ball trajectory and the movement vectors of all players to predict the intended target of a pass with an accuracy of 93.0% for successful and 72.0% for unsuccessful passes, much higher than any previously published work. Our extreme gradient boosting model can then quantify the likelihood of a successful pass completion towards the identified target with an area under the curve (AUC) of 93.4%. Finally, we discuss several potential applications, such as player scouting and evaluating pass decisions.
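The AUC figure quoted in the abstract is a standard evaluation metric for probabilistic classifiers. A minimal pure-Python sketch of how it is computed from predicted completion probabilities and ground-truth outcomes (the labels and scores below are illustrative, not from the paper's data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a randomly chosen positive example is scored
    higher than a randomly chosen negative one (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Illustrative completion probabilities for six passes (1 = completed).
labels = [1, 1, 1, 0, 0, 1]
scores = [0.9, 0.5, 0.7, 0.6, 0.2, 0.95]
print(auc(labels, scores))  # 0.875: one negative outranks one positive
```

An AUC of 93.4% therefore means that, given one completed and one failed pass drawn at random, the model assigns the higher completion probability to the completed one about 93% of the time.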


Religions ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 687
Author(s):  
James D. Madden

Paul Draper argues that the central issue in the debate over the problem of suffering is not whether the theist can offer a probable explanation of suffering, but whether theism or naturalism can give a better explanation for the facts regarding the distribution of pain as we find them. He likewise maintains that, on a comparison of relative probabilities in light of the facts of suffering, atheological naturalism is to be preferred. This essay proceeds in two phases: (a) It will be argued that mainstream positions in naturalistic philosophy of mind make it difficult to take pain as anything but epiphenomenal, and therefore not subject to evolutionary explanation. While the distribution of suffering is a difficulty for the theist, the naturalist has equal difficulty explaining the fact that there is any suffering at all in the first place. Thus, the facts of suffering offer no advantage to the atheist. (b) Phenomenologists suggest that there is an intrinsic connection between animal life, pain, and normativity (including a summum bonum). The mere occurrence of life and normativity is, at least prima facie, more likely on the assumption of theism than atheism, so the theist may have a probabilistic advantage relative to the atheist. Phases (a) and (b) together support the overall conclusion that the facts of pain as we find them in the world (including that there is any pain at all) are at least as great a challenge, if not greater, for the atheist as they are for the theist.


2021 ◽  
Author(s):  
David White ◽  
Daniel Guilbert ◽  
Victor Perrone de Lima Varela ◽  
Rob Jenkins ◽  
Mike Burton

We present an expanded version of a widely used measure of unfamiliar face matching ability, the Glasgow Face Matching Test (GFMT). The GFMT2 is created using the same source database as the original test but makes five key improvements. First, the test items include variation in head angle, pose, expression and subject-to-camera distance, making the new test more difficult and more representative of challenges in everyday face identification tasks. Second, short and long versions of the test each contain two forms that are calibrated to be of equal difficulty, allowing repeat tests to be performed to examine effects of training interventions. Third, the short form tests contain no repeating face identities, thereby removing any confounding effects of familiarity that may have been present in the original test. Fourth, separate short versions are created to target exceptionally high performing or exceptionally low performing individuals using established psychometric principles. Fifth, all tests are implemented in an executable program, allowing them to be administered automatically. All tests are available free for scientific use via www.gfmt2.org.


2019 ◽  
Vol 19 (3) ◽  
pp. 25-44
Author(s):  
M. Premalatha ◽  
V. Viswanathan

Abstract Choice Based Course Selection (CBCS) allows students to select courses in their preferred sequence. This preference in selection is normally bounded by constraints set by a university, such as prerequisites and a minimum and maximum number of credits registered per semester. Unplanned course sequence selection affects the performance of the students and may prolong the time to complete the degree. The course Difficulty Index (DI) also contributes to the decline in the performance of the students. To overcome these difficulties, we propose a new Subset Sum Approximation Problem (SSAP) that aims to distribute courses across semesters with approximately equal difficulty levels, using a Maximum Prerequisite Weightage (MPW) algorithm, a Difficulty Approximation (DA) algorithm, and an Adaptive Genetic Algorithm (AGA). The three algorithms have been tested on our university's academic dataset, and the DA algorithm outperforms the MPW and AGA algorithms, achieving 98% accuracy during course distribution.
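The abstract does not specify the internals of the MPW, DA, or AGA algorithms, but the balancing idea behind SSAP can be sketched with a simple greedy heuristic: place each course in the least-loaded semester that comes after all of its prerequisites. This is an illustrative assumption of ours, not the paper's method, and the course names and difficulty values are hypothetical:

```python
# Hedged sketch of the SSAP balancing goal: roughly equal per-semester
# difficulty totals, with every course placed after its prerequisites.
# Greedy heuristic only -- NOT the paper's MPW, DA, or AGA algorithm.

def distribute(courses, prereqs, n_semesters):
    """courses: {name: difficulty index}; prereqs: {name: [prereq names]}.
    Assumes prerequisite chains fit within n_semesters (else ValueError)."""
    semester_of = {}
    load = [0.0] * n_semesters  # cumulative difficulty per semester

    def place(course):
        if course in semester_of:
            return semester_of[course]
        # A course must come strictly after its latest prerequisite.
        earliest = max((place(p) + 1 for p in prereqs.get(course, [])),
                       default=0)
        # Among feasible semesters, pick the least-loaded one.
        sem = min(range(earliest, n_semesters), key=lambda s: load[s])
        semester_of[course] = sem
        load[sem] += courses[course]
        return sem

    for c in courses:
        place(c)
    return semester_of, load

courses = {"A": 3, "B": 2, "C": 4, "D": 1}   # hypothetical difficulty indices
prereqs = {"C": ["A"]}                        # C requires A
semester_of, load = distribute(courses, prereqs, n_semesters=2)
print(semester_of, load)
```

The spread between the largest and smallest per-semester load measures how close the distribution comes to the "equal difficulty" target that the paper's three algorithms optimize.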


2013 ◽  
Vol 12 (1) ◽  
pp. 73-79 ◽  
Author(s):  
Andis Klegeris ◽  
Manpreet Bahniwal ◽  
Heather Hurren

Problem-based learning (PBL) was originally introduced in medical education programs as a form of small-group learning, but its use has now spread to large undergraduate classrooms in various other disciplines. Introduction of new teaching techniques, including PBL-based methods, needs to be justified by demonstrating the benefits of such techniques over classical teaching styles. Previously, we demonstrated that introduction of tutor-less PBL in a large third-year biochemistry undergraduate class increased student satisfaction and attendance. The current study assessed the generic problem-solving abilities of students from the same class at the beginning and end of the term, and compared student scores with similar data obtained in three classes not using PBL. Two generic problem-solving tests of equal difficulty were administered such that students took different tests at the beginning and the end of the term. Blinded marking showed a statistically significant 13% increase in the test scores of the biochemistry students exposed to PBL, while no trend toward significant change in scores was observed in any of the control groups not using PBL. Our study is among the first to demonstrate that use of tutor-less PBL in a large classroom leads to statistically significant improvement in generic problem-solving skills of students.


2003 ◽  
Vol 27 (2) ◽  
pp. 109-115 ◽  
Author(s):  
Joël Bradmetz ◽  
Claire Bonnefoy-Claudet

The conceptual meaning and linguistic use of to know are usually considered to emerge earlier than those of to believe. However, the data supporting this claim do not take into account some sources of variation: the difference between assessment of comprehension and assessment of production, and the link established between action and representation in standard tasks like that of Wimmer and Perner (1983). The authors counter this claim and attempt to demonstrate a developmental parallelism between the two epistemic operators to know and to believe. This parallelism would be due to the absence of a link between belief and action in a first phase, both developing in a modular system but linked to implicit or explicit access to information, contrary to the usual conception in the literature. Three experiments are reported. The first and second showed an equal difficulty level between to know and to believe in comprehension, in both a declarative and a procedural false belief task, and, by contrast, a lag between the comprehension of to believe and the prediction of a declaration or an action based on a false belief. The third demonstrated that earlier success in attributing a false belief to another was not a false positive.


NeuroImage ◽  
2002 ◽  
Vol 15 (1) ◽  
pp. 16-25 ◽  
Author(s):  
Tobias H. Donner ◽  
Andreas Kettermann ◽  
Eugen Diesch ◽  
Florian Ostendorf ◽  
Arno Villringer ◽  
...  

1993 ◽  
Vol 1993 (2) ◽  
pp. i-37 ◽  
Author(s):  
Nancy L. Allen ◽  
Paul W. Holland ◽  
Dorothy T. Thayer

1993 ◽  
Vol 2 (1) ◽  
pp. 54-59 ◽  
Author(s):  
Gordon L. Cluff ◽  
Chaslav V. Pavlovic ◽  
Gary J. Overson

The existing speech corpora are of limited value in assessing the combined effects of hearing loss, hearing protection, and noise on speech intelligibility in the workplace. One reason for this is the difference between the acoustic characteristics typical of the elevated vocal effort used in the workplace and those typical of the normal vocal effort used in recording the existing speech corpora. The other reason is the difference in nonacoustic factors (linguistic structure, predictability of the messages, etc.) between the available speech corpora and the actual messages used in the workplace. In this study, a speech test designed for use in industrial workplaces was developed. The test material consists of eight 20-phrase lists. The lists are phonetically balanced and of approximately equal difficulty. The variability among lists is greatest at very low sensation levels and decreases progressively as the sensation level increases.

