strong assumption
Recently Published Documents

TOTAL DOCUMENTS: 25 (FIVE YEARS: 11)
H-INDEX: 6 (FIVE YEARS: 1)

Mathematics ◽ 2022 ◽ Vol 10 (2) ◽ pp. 228
Author(s): Pablo Pincheira ◽ Nicolas Hardy ◽ Andrea Bentancor

We show that a straightforward modification of a trading-based test for predictability displays interesting advantages over the Excess Profitability (EP) test proposed by Anatolyev and Gerko when testing the Driftless Random Walk Hypothesis. Our statistic, called the Straightforward Excess Profitability (SEP) test, avoids the calculation of a term that should be zero under the null of no predictability but may be sizable in practice. In addition, our test does not require the strong assumption of independence used to derive the EP test. We claim that dependence is the rule and not the exception. We show via Monte Carlo simulations that the SEP test outperforms the EP test in terms of size and power. Finally, we illustrate the use of our test in an empirical application within the context of the commodity-currencies literature.
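
As a rough illustration of the kind of trading-based statistic the abstract refers to (not the authors' SEP statistic; the forecast rule, sample size, and normalization below are assumptions made for the sketch), one can check the size of a directional-trading t-type test under the driftless random walk null by Monte Carlo:

import numpy as np

def trading_tstat(y, y_hat):
    # Trade in the direction of the forecast; under a driftless random walk
    # the per-period trading return sign(y_hat) * y has mean zero.
    r = np.sign(y_hat) * y
    return np.sqrt(len(r)) * r.mean() / r.std(ddof=1)

rng = np.random.default_rng(0)
T, reps, rejections = 500, 2000, 0
for _ in range(reps):
    eps = rng.standard_normal(T + 1)
    y = eps[1:]          # returns of a driftless random walk
    y_hat = eps[:-1]     # naive forecast: yesterday's return (no true predictability)
    rejections += abs(trading_tstat(y, y_hat)) > 1.96
print("empirical size at a nominal 5% level:", rejections / reps)

Under the null the empirical rejection rate should stay near 5%; comparing such rejection rates (and the corresponding rates under predictable alternatives) across competing statistics is what the size and power exercise in the paper measures.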


Stroke ◽ 2022
Author(s): Eva A. Mistry ◽ Sharon D. Yeatts ◽ Pooja Khatri ◽ Akshitkumar M. Mistry ◽ Michelle Detry ◽ ...

The National Institutes of Health Stroke Scale (NIHSS), measured a few hours to days after stroke onset, is an attractive outcome measure for stroke research. NIHSS at the time of presentation (baseline NIHSS) strongly predicts the follow-up NIHSS. Because of the need to account for baseline NIHSS when analyzing follow-up NIHSS as an outcome measure, a common and intuitive approach is to define the study outcome as the change in NIHSS from baseline to follow-up (ΔNIHSS). However, this approach has important limitations. Analyzing ΔNIHSS implies a very strong assumption about the relationship between baseline and follow-up NIHSS that is unlikely to be satisfied, drawing into question the validity of the resulting statistical analysis. It also reduces the precision of treatment-effect estimates and the power of clinical trials that use this analytic approach. Analysis of covariance (ANCOVA) treats follow-up NIHSS as the dependent variable while adjusting for baseline NIHSS as a covariate, and it addresses several challenges of analyzing the ΔNIHSS outcome with simple bivariate comparisons (eg, a t test, Wilcoxon rank-sum, or linear regression without adjustment for baseline) in stroke research. In this article, we use clinical trial simulations to illustrate that the variability of the NIHSS outcome is lower when follow-up NIHSS is adjusted for baseline than when ΔNIHSS is used, and that this reduction in variability improves power. We outline additional important clinical and statistical arguments supporting the superiority of ANCOVA on the final NIHSS measurement adjusted for baseline, and we caution against the simple bivariate comparison of absolute NIHSS change (ie, delta).
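
A minimal simulation in the spirit of the trial simulations described above (all numbers, including the baseline mean and SD, the slope, the sample size, and the treatment effect, are hypothetical, and a continuous normal outcome is used as a stylized stand-in for NIHSS) shows why adjusting for baseline yields more power than an unadjusted analysis of the change score:

import numpy as np

def ols_tstat(X, y, j):
    # Ordinary least squares of y on X; return the t statistic of coefficient j.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[j] / np.sqrt(cov[j, j])

rng = np.random.default_rng(1)
n, reps = 200, 2000
effect, slope = -1.5, 0.6          # hypothetical treatment effect and baseline slope (< 1)
hits_delta = hits_ancova = 0
for _ in range(reps):
    trt = rng.integers(0, 2, n).astype(float)
    base = rng.normal(12.0, 5.0, n)                        # hypothetical baseline score
    follow = 4.0 + slope * base + effect * trt + rng.normal(0.0, 4.0, n)
    ones = np.ones(n)
    # Change-score analysis: regress (follow - base) on treatment only.
    hits_delta += abs(ols_tstat(np.column_stack([ones, trt]), follow - base, 1)) > 1.96
    # ANCOVA: regress follow-up on treatment, adjusting for baseline.
    hits_ancova += abs(ols_tstat(np.column_stack([ones, trt, base]), follow, 1)) > 1.96

print("power, change-score analysis:", hits_delta / reps)
print("power, ANCOVA               :", hits_ancova / reps)

Because the true slope of follow-up on baseline is below 1 in this sketch, subtracting the full baseline over-corrects and inflates the outcome variance, which is the mechanism the article describes.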


Author(s): Woodrow Z. Wang ◽ Mark Beliaev ◽ Erdem Bıyık ◽ Daniel A. Lazar ◽ Ramtin Pedarsani ◽ ...

Coordination is often critical to forming prosocial behaviors, that is, behaviors that increase the overall sum of rewards received by all agents in a multi-agent game. However, state-of-the-art reinforcement learning algorithms often suffer from converging to socially less desirable equilibria when multiple equilibria exist. Previous works address this challenge with explicit reward shaping, which requires the strong assumption that agents can be forced to be prosocial. We propose using a less restrictive peer-rewarding mechanism, gifting, that guides the agents toward more socially desirable equilibria while allowing them to remain selfish and decentralized. Gifting allows each agent to give some of their reward to other agents. We employ a theoretical framework that captures the benefit of gifting in converging to the prosocial equilibrium by characterizing the equilibria's basins of attraction in a dynamical system. With gifting, we demonstrate increased convergence of high-risk, general-sum coordination games to the prosocial equilibrium, both via numerical analysis and experiments.
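
As a toy illustration of the basin-of-attraction argument (the payoff numbers and the transfer rule below are hypothetical and are not the paper's gifting mechanism), consider a symmetric stag hunt in which a player who defects to Hare against a Stag partner passes part of the resulting reward to that partner; the set of beliefs from which a myopic best responder chooses Stag grows with the size of the gift:

import numpy as np

def stag_basin(gift):
    # Fraction of initial beliefs p = P(partner plays Stag) from which the
    # best response is Stag, when a Hare player facing a Stag partner
    # transfers `gift` units of reward to that partner (illustration only).
    # Row payoffs: u(Stag|Stag)=4, u(Stag|Hare)=0+gift, u(Hare|Stag)=3-gift, u(Hare|Hare)=2
    p = np.linspace(0.0, 1.0, 100001)
    stag = 4 * p + gift * (1 - p)
    hare = (3 - gift) * p + 2 * (1 - p)
    return (stag >= hare).mean()

for g in (0.0, 0.5, 1.0):
    print(f"gift={g:.1f}: Stag basin ~= {stag_basin(g):.2f}")

The basin of the payoff-dominant (Stag, Stag) equilibrium grows from about one third with no gift to about two thirds with a unit gift, which is the qualitative effect the paper formalizes.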


Author(s): Takeshi Nakai ◽ Yuto Misawa ◽ Yuuki Tokushige ◽ Mitsugu Iwamoto ◽ Kazuo Ohta

Card-based cryptography, introduced by den Boer, aims to realize multiparty computation (MPC) by using physical cards. We propose several efficient card-based protocols for the millionaires' problem by introducing a new operation called Private Permutation (PP) instead of the shuffle used in most existing card-based protocols. Shuffling is a useful randomization technique that exploits the physical properties of cards, but from the viewpoint of arithmetic MPC it requires a strong assumption, namely that public randomization is possible. On the other hand, private randomness can be used in PPs, which enables us to design card-based protocols that draw on ideas from arithmetic MPC. Indeed, we show that Yao's millionaires' protocol can easily be transformed into a card-based protocol by using PPs, which is not straightforward with shuffles because Yao's protocol relies on private randomness. Furthermore, we propose entirely novel and efficient card-based millionaires' protocols based on PPs that securely update bitwise comparisons between two numbers, which reveals the power of PPs. As a further point of interest, we point out that these protocols have a deep connection to the well-known logical puzzle "The fork in the road."
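
The bitwise-comparison idea behind such millionaire protocols can be sketched in the clear (this shows only the plaintext logic that a protocol would have to evaluate obliviously, not the card operations or their security guarantees):

def greater_than(a: int, b: int, n_bits: int) -> bool:
    # Scan bits from most to least significant, carrying forward a single
    # "already decided" flag together with the current verdict of a > b.
    result, decided = False, False
    for i in reversed(range(n_bits)):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        if not decided and ai != bi:
            result, decided = ai > bi, True
    return result

print(greater_than(13, 11, n_bits=4))  # True: the millionaires' comparison in the clear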


2020
Author(s): Paolo Benettin

The separation of runoff into different components, typically some "event" (or "new") water as opposed to some "baseflow" (or "old") water, is a task that has attracted hydrologists for decades. The ability to separate runoff sources has implications for our understanding of hydrological processes and for predicting changes due to, e.g., deforestation or urbanization. Although the methodology has evolved notably over the years, the most conventional and widespread application involves a two-component separation achieved through stable isotope or electrical conductivity measurements. Use of this approach rests on a strong assumption that is difficult to test in the field: the signatures of the two end-members either do not change during the event or their variations can be taken into account. Using extensive numerical tests, this contribution explores the limits of this assumption. Results highlight the importance of considering the time-varying contribution of soil water, which is neither event water nor baseflow, and show that the method can easily lead to incorrect estimates when the above assumption is not met.
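
For reference, the conventional two-component separation mentioned above reduces to a simple tracer mass balance; the sketch below (with made-up electrical-conductivity values) makes the role of the fixed end-member signatures explicit:

def event_water_fraction(c_stream, c_event, c_pre):
    # Two-component hydrograph separation:
    #   Q_event / Q = (C_stream - C_pre) / (C_event - C_pre)
    # This assumes both end-member signatures stay constant during the event,
    # which is precisely the strong assumption examined in this contribution.
    if c_event == c_pre:
        raise ValueError("end-members must have distinct signatures")
    return (c_stream - c_pre) / (c_event - c_pre)

# Hypothetical electrical conductivity values (uS/cm)
print(event_water_fraction(c_stream=180.0, c_event=40.0, c_pre=250.0))  # about 0.33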


2019 ◽ Vol 112 (04) ◽ pp. 491-516
Author(s): Daniel H. Weiss

AbstractThis article seeks to break the scholarly deadlock regarding attitudes toward war and bloodshed held by early Christian thinkers. I argue that, whereas previous studies have attempted to fit early Christian stances into one or another “unitary-ethic” framework, the historical-textual data can be best accounted for by positing that many early Christian writers held to a “dual-ethic” orientation. In the latter, certain actions would be viewed as forbidden for Christians but as legitimate for non-Christians in the Roman Empire. Moreover, this dual-ethic stance can be further illuminated by viewing it in connection with the portrayal in the Hebrew Bible of the relation between Levites and the other Israelite tribes. This framing enables us to gain a clearer understanding not only of writers like Origen and Tertullian, who upheld Christian nonviolence while simultaneously praising Roman imperial military activities, but also of writers such as Augustine, whose theological-ethical framework indicates a strong assumption of a dual-ethic stance in his patristic predecessors.


Author(s): Yu Han ◽ Jie Tang ◽ Qian Chen

Network embedding has been extensively studied in recent years. In addition to work on static networks, some researchers have proposed new models for evolving networks. However, most of these dynamic network embedding models are still not realistic, since they rely on the strong assumption that all changes in the whole network can be observed, whereas this is impossible in some real-world networks, such as web networks and large social networks. In this paper, we therefore study a novel and challenging problem, namely network embedding under partial monitoring for evolving networks. We propose a model for dynamic networks in which not all structural changes can be perceived. We analyze our model theoretically and give a bound on the error between the results of our model and the potential optimal cases. We evaluate the performance of our model from two aspects. The experimental results on real-world datasets show that our model outperforms the baseline models by a large margin.
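
A minimal sketch of the partial-monitoring setting (the monitoring rate, the update rule, and the dimensions are assumptions made for illustration, not the paper's model): at each snapshot only a fraction of the edge changes is revealed, and only the embeddings of the affected nodes are updated.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, lr = 100, 16, 0.05
emb = 0.1 * rng.standard_normal((n_nodes, dim))   # current node embeddings

def observe(changes, monitor_rate=0.2):
    # Partial monitoring: only a random fraction of the edge changes that
    # actually occurred in this snapshot is visible to the model.
    k = max(1, int(monitor_rate * len(changes)))
    idx = rng.choice(len(changes), size=k, replace=False)
    return [changes[i] for i in idx]

def update(emb, observed_edges):
    # Toy incremental update: pull the embeddings of newly linked nodes
    # closer together (a stand-in objective, not the paper's).
    for u, v in observed_edges:
        diff = emb[u] - emb[v]
        emb[u] -= lr * diff
        emb[v] += lr * diff
    return emb

# One evolution step with hypothetical edge additions
changes = [tuple(rng.choice(n_nodes, size=2, replace=False)) for _ in range(50)]
emb = update(emb, observe(changes))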


Author(s): Manish Ravula ◽ Shani Alkoby ◽ Peter Stone

As autonomous AI agents proliferate in the real world, they will increasingly need to cooperate with each other to achieve complex goals without always being able to coordinate in advance. This kind of cooperation, in which agents have to learn to cooperate on the fly, is called ad hoc teamwork. Many previous works investigating this setting assumed that teammates behave according to one of many predefined types that is fixed throughout the task. This assumption of stationary behavior is a strong one that cannot be guaranteed in many real-world settings. In this work, we relax this assumption and investigate settings in which teammates can change their types during the course of the task. This adds complexity to the planning problem, as the agent now needs to recognize that a change has occurred in addition to figuring out the new type of the teammate it is interacting with. In this paper, we present a novel Convolutional-Neural-Network-based Change point Detection (CPD) algorithm for ad hoc teamwork. When evaluating our algorithm on the modified predator-prey domain, we find that it outperforms existing Bayesian CPD algorithms.
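
To make the change point detection task concrete, here is a simple frequentist stand-in: a sliding-window divergence test on a teammate's action frequencies. The window size, threshold, and action space are hypothetical, and this is not the paper's CNN-based detector.

import numpy as np

def detect_change(actions, window=20, threshold=0.5, n_actions=4):
    # Flag a possible teammate type switch when the action distributions in
    # two adjacent sliding windows diverge (symmetric KL with smoothing).
    def hist(seq):
        h = np.bincount(seq, minlength=n_actions).astype(float) + 1.0  # Laplace smoothing
        return h / h.sum()
    for t in range(window, len(actions) - window):
        p, q = hist(actions[t - window:t]), hist(actions[t:t + window])
        skl = np.sum((p - q) * (np.log(p) - np.log(q)))
        if skl > threshold:
            return t
    return None

rng = np.random.default_rng(0)
before = rng.choice(4, size=60, p=[0.7, 0.1, 0.1, 0.1])   # hypothetical "type A" behavior
after = rng.choice(4, size=60, p=[0.1, 0.1, 0.1, 0.7])    # teammate switches to "type B"
print(detect_change(np.concatenate([before, after])))     # first flagged index (true switch is at 60)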


2019
Author(s): Fareed Hameed Al-Hindawi ◽ Mariam D. Saffah

The present study aims to present a thorough account of the field termed literary pragmatics, which emerges as a consequence of applying different pragmatic approaches to the study and analysis of literary genres. Additionally, it attempts to explore and shed some light on the relationship between the two domains, pragmatics and literature, in order to reveal their commonalities. There is a strong assumption that the two have something in common, as both have to do with language users and how meaning is conveyed. Despite the fact that the various pragmatic approaches, including speech act theory, conversational implicature, politeness theory, and relevance theory, were developed mainly in relation to spoken interaction, the study has revealed that they offer invaluable insights into the study of literary texts. Moreover, the process of analyzing literary texts has led to the development and explication of the pragmatic approaches themselves.


2019 ◽ Vol 19 (06) ◽ pp. 2050117
Author(s): Tianya Cao ◽ Wei Ren

First, we compare the bounded derived categories with respect to the pure-exact and the usual exact structures and, under a fairly strong assumption on the ring, describe the bounded derived category in terms of pure-projective modules. Then, we study the Verdier quotient of the bounded pure derived category modulo the bounded homotopy category of pure-projective modules, which we call a pure singularity category since we show that it reflects the finiteness of the pure-global dimension of the ring. Moreover, we study the invariance of pure singularity categories in a recollement of bounded pure derived categories.
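
In symbols, and with notation chosen here only for illustration (the paper's own symbols may differ), the pure singularity category of a ring $R$ is the Verdier quotient described above:

\[
  \mathbf{D}_{\mathrm{psg}}(R) \;:=\; \mathbf{D}^{\mathrm{b}}_{\mathrm{pur}}(R)\,\big/\,\mathbf{K}^{\mathrm{b}}(\mathrm{PPrj}\,R),
\]

where $\mathbf{D}^{\mathrm{b}}_{\mathrm{pur}}(R)$ denotes the bounded pure derived category and $\mathbf{K}^{\mathrm{b}}(\mathrm{PPrj}\,R)$ the bounded homotopy category of pure-projective modules.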

