context sensitivity
Recently Published Documents

TOTAL DOCUMENTS: 347 (five years: 78)
H-INDEX: 29 (five years: 3)

Synthese, 2021
Author(s): Anne Bosse

Abstract: This paper is about an underappreciated aspect of generics: their non-specificity. Many uses of generics, utterances like 'Seagulls swoop down to steal food', express non-specific generalisations which do not specify their quantificational force or flavour. I consider whether this non-specificity arises as a by-product of context-sensitivity or semantic incompleteness but argue instead that generics semantically express non-specific generalisations by default as a result of quantifying existentially over more specific ones.
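Read as a formula (my gloss, not the paper's own notation), the proposal is that a generic is true just in case some more specific generalisation, with some quantificational force Q and some flavour f, is true:

\[
  \mathrm{GEN}(K, F) \;\equiv\; \exists Q\, \exists f\;\; Q_f(K, F)
\]

On this gloss, 'Seagulls swoop down to steal food' comes out true if, for instance, 'Most seagulls characteristically swoop down to steal food' is true, without the generic itself fixing the force or flavour.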


2021, Vol 5 (OOPSLA), pp. 1-27
Author(s): Tian Tan, Yue Li, Xiaoxing Ma, Chang Xu, Yannis Smaragdakis

Traditional context-sensitive pointer analysis is hard to scale for large and complex Java programs. To address this issue, a series of selective context-sensitivity approaches have been proposed and exhibit promising results. In this work, we move one step further towards producing highly precise pointer analyses for hard-to-analyze Java programs by presenting the Unity-Relay framework, which takes selective context sensitivity to the next level. Briefly, Unity-Relay is a one-two punch: given a set of different selective context-sensitivity approaches, say S = S1, ..., Sn, Unity-Relay first provides a mechanism (called Unity) to combine and maximize the precision of all components of S. When Unity fails to scale, Unity-Relay offers a scheme (called Relay) to pass and accumulate the precision from one approach Si in S to the next, Si+1, leading to an analysis that is more precise than all approaches in S. As a proof of concept, we instantiate Unity-Relay into a tool called Baton and extensively evaluate it on a set of hard-to-analyze Java programs, using general precision metrics and popular clients. Compared with the state of the art, Baton achieves the best precision for all metrics and clients for all evaluated programs. The difference in precision is often dramatic: up to 71% of alias pairs reported by previously-best algorithms are found to be spurious and eliminated.
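The Unity and Relay schemes can be read as two composition strategies over context-selection heuristics. The Python sketch below is an illustrative reconstruction under that reading, not code from Baton; the selector functions, depth encoding, and run_analysis hook are all hypothetical.

# An illustrative sketch, not the Baton implementation: composing several
# selective context-sensitivity approaches. Each selector is a hypothetical
# function mapping a method to the context depth it requests for that method
# (0 = analyze context-insensitively).

def unity(selectors, method):
    """Unity-style combination: analyze the method with the deepest context
    any individual selector asks for, maximizing the combined precision."""
    return max(selector(method) for selector in selectors)

def relay(selectors, methods, run_analysis):
    """Relay-style combination: run the analysis once per selector, passing
    each stage's precision facts on to guide the next stage."""
    facts = {}
    for selector in selectors:
        depths = {m: selector(m) for m in methods}
        facts = run_analysis(depths, facts)  # hypothetical analysis driver
    return facts

On this toy reading, Unity always takes the most demanding choice (precise but potentially unscalable), while Relay keeps each stage roughly as cheap as a single approach and accumulates precision across stages, matching the abstract's description.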


2021, Vol 41, pp. 185-190
Author(s): Colin F Camerer, Xiaomin Li

2021, Vol 159, pp. 106236
Author(s): Mikiko Oono, Koji Kitamura, Yoshifumi Nishida, Tatsuhiro Yamanaka

2021, pp. 13-20
Author(s): R. B. N. Sinha, N. Lakshmi

2021
Author(s): Hame Park, Christoph Kayser

Whether two sensory cues interact during perceptual judgments depends on their immediate properties but, as suggested by Bayesian models, also on the observer's a priori belief that they originate from a common source. While in many experiments this a priori belief is considered fixed, in real life it must adapt to the momentary context or environment. To understand the adaptive nature of human multisensory perception, we investigated the context-sensitivity of spatial judgments in a ventriloquism paradigm. We exposed observers to audio-visual stimuli whose discrepancy varied over either a wider (±46°) or a narrower (±26°) range and hypothesized that exposure to a wider range of discrepancies would facilitate multisensory binding by increasing participants' a priori belief in a common source for a given discrepancy. Our data support this hypothesis by revealing an enhanced integration (ventriloquism) bias in the wider context, which was echoed in Bayesian causal inference models fit to participants' data: these models assigned a stronger a priori integration tendency in the wider context. Interestingly, the immediate ventriloquism aftereffect, a multisensory response bias obtained following a multisensory test trial, was not affected by the contextual manipulation, although participants' confidence in their spatial judgments differed between contexts for both integration and recalibration trials. These results highlight the context-sensitivity of multisensory binding and suggest that the immediate ventriloquism aftereffect is not a purely sensory-level consequence of the multisensory integration process.
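Bayesian causal inference models of the kind the abstract mentions are commonly formulated along the lines of the sketch below; this is a standard textbook-style formulation, not the authors' code, and the cue values, noise parameters, and priors are purely illustrative. A larger prior p_common corresponds to a stronger a priori belief in a common source, the quantity the wider context is hypothesized to raise.

import math

# Standard-style Bayesian causal-inference sketch for audio-visual spatial
# judgments (illustrative, not the authors' implementation). x_a and x_v are
# noisy auditory and visual location estimates in degrees.

def posterior_common(x_a, x_v, sigma_a, sigma_v, sigma_prior, p_common):
    """Posterior probability that both cues come from a single source,
    assuming Gaussian cue noise and a zero-mean Gaussian spatial prior."""
    var_a, var_v, var_p = sigma_a ** 2, sigma_v ** 2, sigma_prior ** 2
    # Likelihood of the cue pair under a common source (C = 1)
    var1 = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = (math.exp(-0.5 * ((x_a - x_v) ** 2 * var_p
                                + x_a ** 2 * var_v
                                + x_v ** 2 * var_a) / var1)
               / (2 * math.pi * math.sqrt(var1)))
    # Likelihood under two independent sources (C = 2)
    like_c2 = (math.exp(-0.5 * (x_a ** 2 / (var_a + var_p)
                                + x_v ** 2 / (var_v + var_p)))
               / (2 * math.pi * math.sqrt((var_a + var_p) * (var_v + var_p))))
    return (like_c1 * p_common
            / (like_c1 * p_common + like_c2 * (1 - p_common)))

# The same 20-degree discrepancy is more likely attributed to one source,
# and hence more strongly integrated, when the prior belief is higher.
print(posterior_common(10, -10, sigma_a=8, sigma_v=3, sigma_prior=30, p_common=0.3))
print(posterior_common(10, -10, sigma_a=8, sigma_v=3, sigma_prior=30, p_common=0.7))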


2021, pp. 103146
Author(s): Yunfei Li, Bin Zhou, Manon Glockmann, Jürgen P. Kropp, Diego Rybski

2021, Vol 54 (6), pp. 1-37
Author(s): Swati Jaiswal, Uday P. Khedker, Alan Mycroft

Context-sensitive methods of program analysis increase the precision of interprocedural analysis by achieving the effect of call inlining. These methods have been defined using different formalisms and hence appear as algorithms that are very different from each other. Some methods traverse a call graph top-down, whereas others traverse it bottom-up first and then top-down. Some define contexts explicitly, whereas others do not. Some directly compute data flow values, while others first compute summary functions and then use them to compute data flow values. Further, different methods place different kinds of restrictions on the data flow frameworks they support. As a consequence, it is difficult to compare the ideas behind these methods, even though they solve essentially the same problem. We argue that these incomparable views are like those of the proverbial blind men describing an elephant (the elephant here being context sensitivity) and make it difficult for a non-expert reader to form a coherent picture of context-sensitive data flow analysis. We bring out this whole-elephant view of context sensitivity in program analysis by proposing a unified model that provides a clean separation between the computation of contexts and the computation of data flow values. Our model captures the essence of context sensitivity and defines simple soundness and precision criteria for context-sensitive methods. It facilitates declarative specifications of context-sensitive methods, insightful comparisons between them, and reasoning about their soundness and precision. We demonstrate this by instantiating our model to many known context-sensitive methods.
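To illustrate the kind of separation the abstract argues for, here is a minimal Python sketch (my reconstruction, not the paper's formal model): a pluggable context policy decides which contexts are distinguished, while a generic fixed-point loop computes data flow values per (procedure, context) pair. Procedure names, the call-edge format, and the transfer/join hooks are placeholders, and return-flow back to callers is omitted for brevity.

from collections import defaultdict

def k_limited_call_strings(k):
    """One possible context policy: keep only the last k call sites."""
    def extend(ctx, call_site):
        return (ctx + (call_site,))[-k:]
    return extend

def analyze(procedures, calls, transfer, entry, init, extend_context, join):
    """Generic context-sensitive analysis skeleton.

    calls:          list of (caller, call_site, callee) edges
    transfer:       transfer(proc, value) -> value at proc's exit
    extend_context: the pluggable context policy
    join:           lattice join of two data flow values
    """
    values = defaultdict(lambda: None)      # (proc, context) -> value
    worklist = [(entry, (), init)]          # start in the empty context
    while worklist:
        proc, ctx, incoming = worklist.pop()
        old = values[(proc, ctx)]
        merged = incoming if old is None else join(old, incoming)
        if merged == old:
            continue                        # already stable at this context
        values[(proc, ctx)] = merged
        out = transfer(proc, merged)
        for caller, call_site, callee in calls:
            if caller == proc:              # propagate into callees
                worklist.append((callee, extend_context(ctx, call_site), out))
    return values

Swapping k_limited_call_strings for a different policy (say, an object-sensitive or summary-based scheme) changes which contexts exist without touching the value computation, which is the kind of clean separation the unified model aims to make explicit.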

