binary event
Recently Published Documents

TOTAL DOCUMENTS: 34 (FIVE YEARS 12)
H-INDEX: 7 (FIVE YEARS 2)

2022
Author(s): Kenneth C. Lichtendahl, Yael Grushka-Cockayne, Victor Richmond Jose, Robert L. Winkler

Many organizations combine forecasts of probabilities of binary events to support critical business decisions, such as the approval of credit or the recommendation of a drug. To aggregate individual probabilities, we offer a new method based on Bayesian principles that can help identify why and when combined probabilities need to be extremized. Extremizing is typically viewed as shifting the average probability farther from one half; we emphasize that it is more suitable to define extremizing as shifting it farther from the base rate. We introduce the notion of antiextremizing, cases in which it might be beneficial to make average probabilities less extreme. Analytically, we find that our Bayesian ensembles often extremize the average forecast but sometimes antiextremize instead. On several publicly available data sets, we demonstrate that our Bayesian ensemble performs well and antiextremizes in anywhere from 18% to 73% of cases. Antiextremizing is required more often when the probabilities being aggregated bracket the base rate than when they do not.
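The distinction the abstract draws between extremizing toward one half and extremizing away from the base rate can be sketched in log-odds space. The transform below is a generic illustration, not the paper's Bayesian ensemble: the simple log-odds average and the scaling exponent `alpha` are assumptions. With `alpha > 1` the average forecast is pushed away from the base rate (extremizing); with `alpha < 1` it is pulled toward it (antiextremizing).

```python
import math

def extremize(probs, base_rate, alpha):
    """Shift the average of `probs` away from (alpha > 1) or toward
    (alpha < 1) the base rate, working in log-odds space.

    Illustrative sketch only; the paper derives when to choose
    alpha-like behavior from Bayesian principles."""
    logit = lambda p: math.log(p / (1.0 - p))
    base = logit(base_rate)
    # average forecast in log-odds space
    avg = sum(logit(p) for p in probs) / len(probs)
    # scale the deviation from the base rate's log-odds
    shifted = base + alpha * (avg - base)
    return 1.0 / (1.0 + math.exp(-shifted))
```

Note that if every forecast already equals the base rate, the transform leaves it unchanged for any `alpha`, which is exactly the property a "distance from one half" definition lacks.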


2022
Author(s): Alex Luiz Ferreira, Yujing Gong, Arie Eskenazi Gozluklu
Keyword(s):

2021
Vol 15
Author(s): Youngeun Kim, Priyadarshini Panda

Spiking Neural Networks (SNNs) have recently emerged as an alternative to deep learning owing to their sparse, asynchronous, binary event-driven (or spike-driven) processing, which can yield large energy-efficiency benefits on neuromorphic hardware. However, SNNs convey temporally varying spike activations through time, which can induce large variation in forward activations and backward gradients and result in unstable training. To address this training issue in SNNs, we revisit Batch Normalization (BN) and propose a temporal Batch Normalization Through Time (BNTT) technique. Unlike previous BN techniques for SNNs, we find that varying the BN parameters at every time-step allows the model to better learn the time-varying input distribution. Specifically, our proposed BNTT decouples the parameters in a BNTT layer along the time axis to capture the temporal dynamics of spikes. We demonstrate BNTT on CIFAR-10, CIFAR-100, Tiny-ImageNet, event-driven DVS-CIFAR10, and Sequential MNIST datasets and show near state-of-the-art performance. We conduct a comprehensive analysis of the temporal characteristics of BNTT and showcase interesting benefits for robustness against random and adversarial noise. Further, by monitoring the learnt parameters of BNTT, we find that we can perform temporal early exit: that is, we can reduce the inference latency by ~5-20 time-steps relative to the original training latency. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/BNTT-Batch-Normalization-Through-Time.
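The core idea, a separate set of BN parameters at every time-step, can be sketched in a few lines. This is a simplified NumPy illustration under stated assumptions (class and argument names are placeholders, and training-time running statistics, spiking dynamics, and the released implementation's details are all omitted):

```python
import numpy as np

class BNTT:
    """Sketch of Batch Normalization Through Time: one (gamma, beta)
    pair per time-step, so the affine transform can track the
    time-varying input distribution."""
    def __init__(self, num_features, num_timesteps, eps=1e-5):
        # parameters are decoupled along the time axis
        self.gamma = np.ones((num_timesteps, num_features))
        self.beta = np.zeros((num_timesteps, num_features))
        self.eps = eps

    def __call__(self, x, t):
        # x: (batch, features) activation at time-step t
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        # time-step-specific affine transform
        return self.gamma[t] * x_hat + self.beta[t]
```

A conventional BN layer would share one `(gamma, beta)` pair across all time-steps; indexing them by `t` is the only structural change, which is what makes the learnt per-step parameters observable for the temporal-early-exit trick the abstract mentions.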


Author(s): Raimondas Zemblys, Diederick C. Niehorster, Kenneth Holmqvist
Keyword(s):

2020
Vol 21 (2), pp. 123-132
Author(s): Misbakhul Munir, Hehe Wang, Paula Agudelo, Daniel J. Anco

We evaluated the occurrence of fungicide resistance phenotypes in populations of Nothopassalora personata, the causal agent of late leaf spot (LLS), sampled from South Carolina peanut fields in 2018 using a modified detached leaf assay (DLA). Spore suspensions obtained from each of 72 groups of LLS-symptomatic leaves collected from nine counties were used in the DLA to examine phenotypic resistance to four site-specific fungicides (azoxystrobin, benzovindiflupyr, prothioconazole, and thiophanate-methyl) and one multisite contact fungicide (chlorothalonil). Lesion development was measured as a binary event in which presence of a lesion indicated control failure (phenotypic resistance) and absence of a lesion indicated successful control (sensitivity). Phenotypic resistance probabilities of N. personata samples to each site-specific fungicide were compared against the control fungicide (chlorothalonil). Variation in phenotypic resistance probabilities to the four single-site fungicides was observed from most counties (6 of 9), with frequencies of phenotypic resistance having differed among counties. The DLA facilitated rapid evaluation of N. personata phenotypic resistance to azoxystrobin, benzovindiflupyr, prothioconazole, and thiophanate-methyl. This assay has potential to be used as an alternative method for routine monitoring of phenotypic resistance, identification of populations to test for the presence of resistance genes, and improving reliability of current fungicide recommendations.
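The binary lesion outcome lends itself to a simple tally. The sketch below estimates a phenotypic-resistance probability per county and fungicide from such binary DLA outcomes; the record layout and names are assumptions for illustration, and the study's comparison against the chlorothalonil control involves more than this raw frequency.

```python
from collections import defaultdict

def resistance_by_fungicide(records):
    """records: iterable of (county, fungicide, lesion) tuples, where
    lesion is 1 if a lesion developed (phenotypic resistance / control
    failure) and 0 if control succeeded.

    Returns {(county, fungicide): estimated resistance probability}."""
    counts = defaultdict(lambda: [0, 0])  # [resistant, total]
    for county, fungicide, lesion in records:
        cell = counts[(county, fungicide)]
        cell[0] += lesion
        cell[1] += 1
    return {key: resistant / total
            for key, (resistant, total) in counts.items()}
```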


Author(s): Nils Andersson

This chapter provides a brief survey of gravitational-wave astronomy, including the recent breakthrough detections. It sets the stage for the rest of the book via simple back-of-the-envelope estimates for different sets of sources. The chapter also describes the first detection of a black hole merger (GW150914) as well as the first observed neutron star binary event (GW170817) and introduces some of the ideas required to understand these breakthroughs.


2019
Vol 50 (5), pp. 572-597
Author(s): Hajime Mizuyama, Seiyu Yamaguchi, Mizuho Sato

Background. Knowledge sharing among the members of an organization is crucial for enhancing the organization’s performance. However, knowing how to motivate and direct members to effectively and efficiently share their relevant private knowledge concerning the organization’s activities is not entirely straightforward. Aim. This study proposes a gamified approach not only for motivating truthful sharing and collective evaluation of knowledge among the members of an organization but also for properly directing those actions so as to maximize the usefulness of the shared knowledge. A case study is also conducted to understand how the proposed approach works in a live business scenario. Method. A prediction market game is devised on a binary event: whether the specified activity will be completed successfully. The game utilizes an original comment aggregation and evaluation system through which relevant knowledge can be shared verbally and evaluated collectively by the players themselves. Players’ behavior is steered in a desirable direction by an incentive framework realized through three game scores. Results. The proposed gamified approach was implemented as a web application and verified with a laboratory experiment. The game was also played by four participants who deliberated on an actual sales proposal in a real company. The various valuable knowledge elements collected from the participants could be utilized to refine the sales proposal. Conclusions. The game induced motivation through gamification, and some of the designed game scores worked in directing the players’ behavior as desired. The players learned from others’ comments, which brought about a snowball effect and enriched collective knowledge. Future research directions include how to transform this knowledge into an easy-to-comprehend representation.
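A standard mechanism for running such a binary-event prediction market is the logarithmic market scoring rule (LMSR). The sketch below is that generic mechanism, not the paper's comment-aggregation game or its three-score incentive framework; the liquidity parameter `b` and function names are assumptions.

```python
import math

def lmsr_cost(q_yes, q_no, b):
    """LMSR cost function for a binary event; q_yes/q_no are the
    outstanding shares of each outcome, b the liquidity parameter."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b):
    """Current market probability of the YES outcome."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def trade_cost(q_yes, q_no, delta_yes, b):
    """Amount a player pays to buy delta_yes shares of YES."""
    return lmsr_cost(q_yes + delta_yes, q_no, b) - lmsr_cost(q_yes, q_no, b)
```

Buying YES shares raises the YES price toward the buyer's belief, which is what lets a market of this kind aggregate the players' private estimates of whether the activity will succeed.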


Entropy
2019
Vol 21 (6), pp. 585
Author(s): Giulio Bottazzi, Daniele Giachini

We consider a repeated betting market populated by two agents who wager on a binary event according to generic betting strategies. We derive new simple criteria, based on the difference of relative entropies, to establish the relative wealth of the two agents in the long run. Little information about agents’ behavior is needed to apply the criteria: it is sufficient to know the odds each trader believes fair and how much each would bet when the odds equal those the other agent believes fair. Using our criteria, we show that for a large class of betting strategies, it is generically possible that the ultimate winner is decided only by luck. As an example, we apply our conditions to Constant Relative Risk Averse (CRRA) and quantal response betting.
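For the special case of Kelly-style bettors, the relative-entropy comparison behind such criteria can be written down directly: the agent whose belief has the smaller Kullback-Leibler divergence from the true event probability accumulates wealth in the long run. The sketch below illustrates that classical heuristic; the paper's criteria generalize well beyond Kelly betting, so this is background, not the paper's result.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence D(Bernoulli(p) || Bernoulli(q))."""
    return (p * math.log(p / q)
            + (1.0 - p) * math.log((1.0 - p) / (1.0 - q)))

def entropy_gap(p_true, belief_a, belief_b):
    """Difference of relative entropies of the two agents' beliefs
    from the truth: negative favours agent A's long-run wealth under
    Kelly betting, positive favours agent B."""
    return kl_bernoulli(p_true, belief_a) - kl_bernoulli(p_true, belief_b)
```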


2019
Vol 35 (2), pp. 609-621
Author(s): Sarah Gold, Edward White, William Roeder, Mike McAleenan, Christine Schubert Kabban, ...

Abstract The 45th Weather Squadron (45 WS) records daily rain and lightning probabilistic forecasts and the associated binary event outcomes. Subsequently, they evaluate forecast performance and determine necessary adjustments with an established verification process. For deterministic outcomes, weather forecast analysis typically utilizes a traditional contingency table (TCT) for verification; however, the 45 WS uses an alternative tool, the probabilistic contingency table (PCT). Using the TCT for verification requires a threshold, typically 50%, to dichotomize probabilistic forecasts. The PCT retains the information contained in the probabilities and verifies the forecasts as actually issued. Simulated forecasts and outcomes, as well as 2015–18 45 WS data, are utilized to compare forecast performance metrics produced from the TCT and PCT to determine which verification tool better reflects the quality of forecasts. Comparisons of frequency bias and other statistical metrics computed from both dichotomized and continuous forecasts reveal misrepresentative performance metrics from the TCT as well as a loss of information necessary for verification. PCT bias better reflects forecast quality than TCT bias, which can suggest forecasts are suboptimal when in fact they are accurate.
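The contrast between the two tables can be made concrete. The TCT dichotomizes each probabilistic forecast at a threshold, while one common construction of a PCT credits each forecast's probability mass directly to the table cells. The code below is an illustrative construction under that assumption (function names are placeholders), not the 45 WS implementation.

```python
def tct(forecast_probs, outcomes, threshold=0.5):
    """Traditional contingency table: dichotomize each probability at
    the threshold. Returns (hits, misses, false_alarms, corr_negs)."""
    h = m = f = c = 0
    for p, y in zip(forecast_probs, outcomes):
        warned = p >= threshold
        if warned and y: h += 1
        elif not warned and y: m += 1
        elif warned and not y: f += 1
        else: c += 1
    return h, m, f, c

def pct(forecast_probs, outcomes):
    """Probabilistic contingency table (one common construction): each
    forecast contributes probability p to the 'forecast yes' cells and
    1 - p to the 'forecast no' cells, so no information is lost to
    thresholding."""
    h = sum(p for p, y in zip(forecast_probs, outcomes) if y)
    m = sum(1 - p for p, y in zip(forecast_probs, outcomes) if y)
    f = sum(p for p, y in zip(forecast_probs, outcomes) if not y)
    c = sum(1 - p for p, y in zip(forecast_probs, outcomes) if not y)
    return h, m, f, c

def frequency_bias(h, m, f, c):
    """Forecast-yes frequency over observed-yes frequency."""
    return (h + f) / (h + m)
```

Because `pct` keeps the fractional probabilities, two forecast sets that dichotomize identically at 50% can still produce different PCT cells, which is the information loss the abstract attributes to the TCT.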

