general setting
Recently Published Documents


TOTAL DOCUMENTS: 658 (FIVE YEARS: 222) ◽  H-INDEX: 26 (FIVE YEARS: 5)

2022 ◽  
Vol 6 (POPL) ◽  
pp. 1-29
Author(s):  
Minseok Jeon ◽  
Hakjoo Oh

In this paper, we challenge the commonly accepted wisdom in static analysis that object sensitivity is superior to call-site sensitivity for object-oriented programs. In static analysis of object-oriented programs, object sensitivity has been established as the dominant flavor of context sensitivity thanks to its outstanding precision. Call-site sensitivity, on the other hand, has been regarded as unsuitable, and its use in practice has been constantly discouraged for object-oriented programs. In this paper, however, we claim that call-site sensitivity is generally a superior context abstraction because it is practically possible to transform object sensitivity into more precise call-site sensitivity. Our key insight is that the previously known superiority of object sensitivity holds only in the traditional k-limited setting, where the analysis is forced to keep the most recent k context elements. It no longer holds in a recently proposed, more general setting with context tunneling. With context tunneling, where the analysis is free to choose an arbitrary k-length subsequence of context strings, we show that call-site sensitivity can simulate object sensitivity almost completely, but not vice versa. To support this claim, we present a technique, called Obj2CFA, for transforming arbitrary context-tunneled object sensitivity into more precise, context-tunneled call-site sensitivity. We implemented Obj2CFA in Doop and used it to derive a new call-site-sensitive analysis from a state-of-the-art object-sensitive pointer analysis. Experimental results confirm that the resulting call-site sensitivity outperforms object sensitivity in precision and scalability for real-world Java programs. Remarkably, our results show that even 1-call-site sensitivity can be more precise than the conventional 3-object-sensitive analysis.
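The contrast between k-limiting and context tunneling can be sketched in a few lines of Python (an illustrative toy, not Doop or Obj2CFA code; the call-site names are invented):

```python
from itertools import combinations

def k_limited(call_string, k):
    """Traditional k-call-site sensitivity: keep only the k most recent call sites."""
    return tuple(call_string[-k:])

def tunneled_candidates(call_string, k):
    """Context tunneling: any k-length subsequence of the call string may
    serve as the context, giving the analysis far more freedom."""
    if len(call_string) <= k:
        return {tuple(call_string)}
    return set(combinations(call_string, k))

calls = ["c1", "c2", "c3", "c4"]
print(k_limited(calls, 2))                 # ('c3', 'c4'): the single forced context
print(len(tunneled_candidates(calls, 2)))  # 6 candidate contexts to choose from
```

The point of the transformation described above is that a tunneled call-site analysis may pick whichever subsequence best mimics the allocation-site strings an object-sensitive analysis would build.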


Author(s):  
Naoki Saito ◽  
Yiqun Shao

Abstract Extending computational harmonic analysis tools from the classical setting of regular lattices to the more general setting of graphs and networks is very important, and much research has been done recently. The generalized Haar–Walsh transform (GHWT) developed by Irion and Saito (2014) is a multiscale transform for signals on graphs, which is a generalization of the classical Haar and Walsh–Hadamard transforms. We propose the extended generalized Haar–Walsh transform (eGHWT), which is a generalization of the adapted time–frequency tilings of Thiele and Villemoes (1996). The eGHWT examines not only the efficiency of graph-domain partitions but also that of “sequency-domain” partitions simultaneously. Consequently, the eGHWT and its associated best-basis selection algorithm for graph signals significantly improve the performance of the previous GHWT at a similar computational cost, $$O(N \log N)$$, where N is the number of nodes of an input graph. While the GHWT best-basis algorithm seeks the most suitable orthonormal basis for a given task among more than $$(1.5)^N$$ possible orthonormal bases in $$\mathbb {R}^N$$, the eGHWT best-basis algorithm can find a better one by searching through more than $$0.618\cdot (1.84)^N$$ possible orthonormal bases in $$\mathbb {R}^N$$. This article describes the details of the eGHWT best-basis algorithm and demonstrates its superiority using several examples, including genuine graph signals as well as conventional digital images viewed as graph signals. Furthermore, we also show how the eGHWT can be extended to 2D signals and matrix-form data by viewing them as a tensor product of graphs generated from their columns and rows, and demonstrate its effectiveness on applications such as image approximation.
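A quick numeric check of the two search-space sizes quoted in the abstract (the formulas are taken from the abstract; N = 64 is an illustrative choice):

```python
def ghwt_bases(N):
    """Lower bound on bases searched by the GHWT best-basis algorithm: (1.5)^N."""
    return 1.5 ** N

def eghwt_bases(N):
    """Lower bound on bases searched by the eGHWT best-basis algorithm: 0.618*(1.84)^N."""
    return 0.618 * 1.84 ** N

N = 64
ratio = eghwt_bases(N) / ghwt_bases(N)
print(f"For N={N}, eGHWT searches ~{ratio:.2e}x more bases at the same O(N log N) cost")
```

Even at this modest N, the eGHWT dictionary is several orders of magnitude larger, which is why a better basis can be found without increasing the asymptotic cost.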


Author(s):  
Nicolas Boutry ◽  
Rocio Gonzalez-Diaz ◽  
Maria-Jose Jimenez ◽  
Eduardo Paluzo-Hildago

Abstract In this paper, we define a new flavour of well-composedness, called strong Euler well-composedness. In the general setting of regular cell complexes, a regular cell complex of dimension n is strongly Euler well-composed if the Euler characteristic of the link of each boundary cell is 1, which is the Euler characteristic of an $$(n-1)$$-dimensional ball. Working in the particular setting of cubical complexes canonically associated with $$n$$D pictures, we formally prove in this paper that strong Euler well-composedness implies digital well-composedness in any dimension $$n\ge 2$$ and that the converse is not true when $$n\ge 4$$.
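The Euler-characteristic test at the heart of the definition is easy to illustrate (a toy computation, not the authors' code): the Euler characteristic of a finite cell complex is the alternating sum of its cell counts per dimension, and an (n-1)-dimensional ball has Euler characteristic 1.

```python
# Toy illustration: Euler characteristic as the alternating sum of cell counts.
def euler_characteristic(cells_per_dim):
    """cells_per_dim[d] = number of d-dimensional cells."""
    return sum((-1) ** d * count for d, count in enumerate(cells_per_dim))

# A filled triangle, a 2-dimensional ball: 3 vertices, 3 edges, 1 face.
print(euler_characteristic([3, 3, 1]))  # 1, the value required of each boundary link
# Its boundary circle (3 vertices, 3 edges) is not a ball:
print(euler_characteristic([3, 3]))     # 0
```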


Nonlinearity ◽  
2021 ◽  
Vol 35 (1) ◽  
pp. 658-680
Author(s):  
Xueting Tian ◽  
Weisheng Wu

Abstract In this paper we define unstable topological entropy for arbitrary subsets (not necessarily compact or invariant) in partially hyperbolic systems as a Carathéodory–Pesin dimension characteristic, motivated by the work of Bowen, Pesin, and others. We then establish some basic results in dimension theory for Bowen unstable topological entropy, including an entropy distribution principle and a variational principle in a general setting. As applications of this new concept, we study the unstable topological entropy of saturated sets and extend some results in Bowen (1973 Trans. Am. Math. Soc. 184 125–36) and Pfister and Sullivan (2007 Ergod. Theor. Dynam. Syst. 27 929–56). Our results give new insight into the multifractal analysis of partially hyperbolic systems.


Nonlinearity ◽  
2021 ◽  
Vol 35 (1) ◽  
pp. 567-588
Author(s):  
Rui Zou ◽  
Yongluo Cao ◽  
Yun Zhao

Abstract Let A = {A_1, A_2, …, A_k} be a finite collection of contracting affine maps; the corresponding pressure function P(A, s) plays a fundamental role in the study of the dimension of self-affine sets. The zero of the pressure function always gives an upper bound on the dimension of a self-affine set, and is exactly the dimension of ‘typical’ self-affine sets. In this paper, we consider an expanding base dynamical system and establish the continuity of the pressure with the singular value function of a Hölder continuous matrix cocycle. This extends Feng and Shmerkin’s result in (Feng and Shmerkin 2014 Geom. Funct. Anal. 24 1101–1128) to a general setting.
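For context, the pressure function referred to above is typically defined via Falconer's singular value function $$\varphi^s$$ (this is the standard form, not quoted from the paper; the sum runs over all length-n words in the k maps):

$$P(\mathsf{A}, s) = \lim_{n\to\infty} \frac{1}{n}\,\log \sum_{i_1,\dots,i_n=1}^{k} \varphi^s\!\left(A_{i_1} A_{i_2} \cdots A_{i_n}\right)$$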


2021 ◽  
pp. 1-6
Author(s):  
Supriya Malla ◽  
Ganesh Malla

Background: Arguably the most frequently used term in science, particularly in mathematics and statistics, is linear. However, confusion arises from the various meanings of linearity taught at different levels of mathematical courses. The definition of linearity taught in high school is less correct than the one learned in a linear algebra class. The correlation coefficient of two quantitative variables is a numerical measure of the affinity, not only the linearity, of two variables. However, every statistics book loosely says it is a measure of linear relationship. This clearly shows that there is some confusion between use of the terms linear function and affine function. Objective: This article aims at clarifying the confusion between use of the terms linear function and affine function. It also provides more generalized forms of the gradient in different branches of mathematics and shows their equivalence. Materials and Methods: We have used purely analytical deductive methods to prove the statements. Results: We have clearly presented that the gradient is the measure of affinity, not just linearity. It becomes a special case of the derivative in calculus and of the least-squares estimate of the regression coefficient in statistics and matrix theory. The gradient can be seen in terms of the inverse of the information matrix in the most general setting of linear model estimation. Conclusion: The article has been written to show the distinction between linear and affine functions in a concise and unambiguous manner. We hope that readers will clearly see the various generalizations of the gradient and that the article itself is a simple exposition, enlightening and fun to read.
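The distinction the abstract draws can be sketched directly (illustrative code with invented data, not from the article): a linear map in the linear-algebra sense is additive and homogeneous, an affine map f(x) = ax + b with b ≠ 0 is not, yet the correlation coefficient rates both relationships as perfect.

```python
# Sketch: linear (additive + homogeneous) vs affine, and what correlation sees.
def is_linear(f, xs, tol=1e-9):
    """Linear in the linear-algebra sense: f(x+y)=f(x)+f(y) and f(c*x)=c*f(x)."""
    additive = all(abs(f(x + y) - (f(x) + f(y))) < tol for x in xs for y in xs)
    homogeneous = all(abs(f(c * x) - c * f(x)) < tol for c in xs for x in xs)
    return additive and homogeneous

f = lambda x: 3 * x        # linear (and affine)
g = lambda x: 3 * x + 2    # affine but NOT linear, since g(0) != 0

pts = [0.0, 1.0, -2.5]
print(is_linear(f, pts))   # True
print(is_linear(g, pts))   # False

# Yet the correlation coefficient cannot tell them apart: r = 1 for both.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

data = [1.0, 2.0, 3.0, 4.0]
print(round(pearson_r(data, [f(v) for v in data]), 6))  # 1.0
print(round(pearson_r(data, [g(v) for v in data]), 6))  # 1.0
```

This is exactly the sense in which r measures affinity rather than linearity.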


2021 ◽  
Author(s):  
Edith Gabriel ◽  
Francisco Rodriguez-Cortes ◽  
Jérôme Coville ◽  
Jorge Mateu ◽  
Joël Chadoeuf

Abstract Seismic networks provide data that are used as a basis both for public safety decisions and for scientific research. Their configuration affects data completeness, which in turn critically affects several seismological scientific targets (e.g., earthquake prediction, seismic hazard, …). In this context, a key question is how to map earthquake density in seismogenic areas from censored data, or even in areas that are not covered by the network. We propose to predict the spatial distribution of earthquakes from the knowledge of presence locations and geological relationships, taking into account any interactions between records. Namely, in a more general setting, we aim to estimate the intensity function of a point process, conditional on its censored realization, as in geostatistics for continuous processes. We define a predictor as the best linear unbiased combination of the observed point pattern. We show that the weight function associated with the predictor is the solution of a Fredholm equation of the second kind. Both the kernel and the source term of the Fredholm equation are related to the first- and second-order characteristics of the point process through the intensity and the pair correlation function. Results are presented and illustrated on simulated non-stationary point processes and on real data for mapping Hellenic seismicity in a region with unreliable and incomplete records.
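Fredholm equations of the second kind, w(x) - ∫ K(x, y) w(y) dy = s(x), are routinely solved numerically by discretization. The toy Nyström-style solver below illustrates the mechanics only; the paper's actual kernel and source term involve the intensity and pair correlation function, whereas the K and s here are placeholders chosen so the exact solution is known.

```python
# Toy Nystrom discretization of w(x) - integral K(x,y) w(y) dy = s(x) on [a, b].
def solve_fredholm(K, s, a, b, n):
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]  # midpoint-rule nodes
    # Assemble (I - h*K) w = s, then solve by Gaussian elimination with pivoting.
    A = [[(1.0 if i == j else 0.0) - h * K(xs[i], xs[j]) for j in range(n)]
         for i in range(n)]
    rhs = [s(x) for x in xs]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            fac = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= fac * A[col][c]
            rhs[r] -= fac * rhs[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (rhs[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return xs, w

# With K = 0.5 and s = 1 on [0, 1], the exact solution is the constant w = 2.
xs, w = solve_fredholm(lambda x, y: 0.5, lambda x: 1.0, 0.0, 1.0, 50)
print(max(abs(v - 2.0) for v in w))  # numerically ~0
```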


Author(s):  
Bismark Singh ◽  
Oliver Rehberg ◽  
Theresa Groß ◽  
Maximilian Hoffmann ◽  
Leander Kotzur ◽  
...  

Abstract We present an algorithm to solve capacity extension problems that frequently occur in energy system optimization models. Such models describe a system where certain components can be installed to reduce future costs and achieve carbon reduction goals; however, the choice of these components requires the solution of a computationally expensive combinatorial problem. In our proposed algorithm, we solve a sequence of linear programs that serve to tighten a budget—the maximum amount we are willing to spend towards reducing overall costs. Our proposal applies in the general setting where optional investment decisions provide an enhanced portfolio over the original setting while maintaining feasibility. We present computational results on two model classes and demonstrate computational savings of up to 96% on certain instances.


Symmetry ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 2269
Author(s):  
Donal O’Regan

In this paper, we present a variety of existence theorems for maximal type elements in a general setting. We consider multivalued maps with continuous selections and multivalued maps which are admissible with respect to Gorniewicz, and our existence theory is based on the author’s old and new coincidence theory. In particular, the second section presents a collective coincidence coercive-type result for different classes of maps. The third section considers majorized maps and presents a variety of new maximal element type results. Coincidence theory is motivated by real-world physical models where symmetry and asymmetry play a major role.


2021 ◽  
Vol 2022 (1) ◽  
pp. 253-273
Author(s):  
Josh Smith ◽  
Hassan Jameel Asghar ◽  
Gianpaolo Gioiosa ◽  
Sirine Mrabet ◽  
Serge Gaspers ◽  
...  

Abstract We show that the ‘optimal’ use of the parallel composition theorem corresponds to finding the size of the largest subset of queries that ‘overlap’ on the data domain, a quantity we call the maximum overlap of the queries. It has previously been shown that a certain instance of this problem, formulated in terms of determining the sensitivity of the queries, is NP-hard, but also that it is possible to use graph-theoretic algorithms, such as finding the maximum clique, to approximate query sensitivity. In this paper, we consider a significant generalization of the aforementioned instance which encompasses both a wider range of differentially private mechanisms and a broader class of queries. We show that for a particular class of predicate queries, determining if they are disjoint can be done in time polynomial in the number of attributes. For this class, we show that the maximum overlap problem remains NP-hard as a function of the number of queries. However, we show that efficient approximate solutions exist by relating maximum overlap to the clique and chromatic numbers of a certain graph determined by the queries. The link to chromatic number allows us to use more efficient approximate algorithms, which cannot be done for the clique number as it may underestimate the privacy budget. Our approach is defined in the general setting of f-differential privacy, which subsumes standard pure differential privacy and Gaussian differential privacy. We prove the parallel composition theorem for f-differential privacy. We evaluate our approach on synthetic and real-world data sets of queries. We show that the approach can scale to large domain sizes (up to 10^20000), and that its application can reduce the noise added to query answers by up to 60%.
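The graph-based approximation described above can be sketched as follows (illustrative only, not the paper's implementation): queries are modeled as 1-D range predicates, two queries "overlap" when their ranges intersect, and a greedy coloring of the resulting overlap graph yields a color count that upper-bounds the maximum overlap. The bound is safe for the privacy budget, whereas the clique number may underestimate it.

```python
# Sketch: bound the maximum overlap of range queries via greedy graph coloring.
def overlaps(q1, q2):
    """Two closed ranges overlap iff their intersection is nonempty."""
    (a1, b1), (a2, b2) = q1, q2
    return a1 <= b2 and a2 <= b1

def greedy_coloring(queries):
    """Greedy coloring of the overlap graph; the number of colors used
    upper-bounds the maximum number of mutually overlapping queries."""
    colors = {}
    for i, q in enumerate(queries):
        used = {colors[j] for j in range(i) if overlaps(q, queries[j])}
        colors[i] = min(c for c in range(len(queries) + 1) if c not in used)
    return max(colors.values()) + 1

queries = [(0, 10), (5, 15), (12, 20), (30, 40)]
# The true maximum overlap here is 2 (e.g. the point 7 lies in the first two).
print(greedy_coloring(queries))  # 2: a safe upper bound on the overlap
```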

