Partitioning the Input Domain for Classification

Author(s):  
Adrian Rechy Romero ◽  
Srimal Jayawardena ◽  
Mark Cox ◽  
Paulo Vinicius Koerich Borges
Author(s):  
DANIEL A. SPIELMAN ◽  
SHANG-HUA TENG ◽  
ALPER ÜNGÖR

We present a parallel Delaunay refinement algorithm for generating well-shaped meshes in both two and three dimensions. Like its sequential counterparts, the parallel algorithm iteratively improves the quality of a mesh by inserting new points, the Steiner points, into the input domain while maintaining the Delaunay triangulation. The Steiner points are carefully chosen from a set of candidates that includes the circumcenters of poorly shaped triangular elements. We introduce a notion of independence among the possible Steiner points that can be inserted simultaneously during Delaunay refinement, and show that such a set of independent points can be constructed efficiently and that the number of parallel iterations is O(log² Δ), where Δ is the spread of the input, i.e., the ratio of the longest to the shortest pairwise distance among input features. In addition, we show that the parallel insertion of such a set of points can be realized by sequential Delaunay refinement algorithms, such as Ruppert's algorithm in two dimensions and Shewchuk's algorithm in three dimensions. Therefore, our parallel Delaunay refinement algorithm provides the same shape-quality and mesh-size guarantees as these sequential algorithms. For generating quasi-uniform meshes, such as those produced by Chew's algorithms, the number of parallel iterations is in fact O(log Δ). To the best of our knowledge, our algorithm is the first provably polylog(Δ)-time parallel Delaunay refinement algorithm that generates well-shaped meshes of size within a constant factor of the best possible.
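A minimal Python sketch (using NumPy/SciPy) of one parallel round of the candidate-selection idea: circumcenters of skinny triangles are gathered, and a far-apart subset is kept for simultaneous insertion. The radius-edge quality test follows Ruppert; the greedy distance rule and the sep factor are illustrative stand-ins for the paper's independence criterion, not its actual definition.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcenter_radius(t):
    """Circumcenter and circumradius of a 2-D triangle given as a 3x2 array."""
    a, b, c = t
    d = 2.0 * ((a[0] - c[0]) * (b[1] - c[1]) - (b[0] - c[0]) * (a[1] - c[1]))
    ux = ((a @ a - c @ c) * (b[1] - c[1]) - (b @ b - c @ c) * (a[1] - c[1])) / d
    uy = ((b @ b - c @ c) * (a[0] - c[0]) - (a @ a - c @ c) * (b[0] - c[0])) / d
    center = np.array([ux, uy])
    return center, float(np.linalg.norm(center - a))

def one_parallel_round(points, ratio_bound=np.sqrt(2), sep=2.0):
    """One round: collect circumcenters of poorly shaped triangles, then keep
    a far-apart ('independent') subset that is safe to insert simultaneously."""
    tri = Delaunay(points)
    candidates = []
    for simplex in tri.simplices:
        t = points[simplex]
        center, r = circumcenter_radius(t)
        shortest = min(np.linalg.norm(t[i] - t[(i + 1) % 3]) for i in range(3))
        if r / shortest > ratio_bound:      # radius-edge ratio test (as in Ruppert)
            candidates.append((center, r))
    candidates.sort(key=lambda cr: -cr[1])  # handle coarse (large-radius) candidates first
    chosen = []
    for c, r in candidates:
        # Illustrative independence rule: keep a candidate only if it is far
        # from every candidate already chosen in this round.
        if all(np.linalg.norm(c - c2) > sep * max(r, r2) for c2, r2 in chosen):
            chosen.append((c, r))
    if not chosen:
        return points
    return np.vstack([points, np.array([c for c, _ in chosen])])
```

A real refiner also respects domain boundaries (encroachment) and iterates rounds until no skinny triangles remain; the function above is only the per-round candidate selection.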


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-23
Author(s):  
Robert Rabe ◽  
Anastasiia Izycheva ◽  
Eva Darulova

Efficient numerical programs are required for the proper functioning of many systems. Today's tools offer a variety of optimizations to generate efficient floating-point implementations that are specific to a program's input domain. However, sound optimizations are “all or nothing” with respect to this input domain: if an optimizer cannot improve a program on the entire specified input domain, it concludes that no optimization is possible. In general, though, different parts of the input domain exhibit different rounding errors and thus have different optimization potential. We present the first regime inference technique for sound optimizations that automatically infers an effective subdivision of a program's input domain such that individual sub-domains can be optimized more aggressively. Our algorithm is general; we have instantiated it with mixed-precision tuning and rewriting optimizations to improve performance and accuracy, respectively. Our evaluation on a standard benchmark set shows that with our inferred regimes we can, on average, improve performance by 65% and accuracy by 54% relative to whole-domain optimizations.
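A minimal illustrative sketch of regime inference in Python. The sampling-based error estimate, the binary subdivision, and the two-precision choice are simplifying assumptions; the paper's technique relies on sound static rounding-error analysis rather than sampling.

```python
import numpy as np

def max_error_float32(f, lo, hi, samples=1000):
    """Estimate the worst rounding error of evaluating f in float32 on
    [lo, hi], using float64 as the reference (a heuristic stand-in for
    a sound error analysis)."""
    xs = np.linspace(lo, hi, samples)
    ref = f(xs.astype(np.float64))
    approx = f(xs.astype(np.float32)).astype(np.float64)
    return float(np.max(np.abs(ref - approx)))

def infer_regimes(f, lo, hi, err_bound, depth=6):
    """Recursively split [lo, hi]: keep float32 on sub-domains where the
    estimated error meets err_bound, otherwise subdivide, falling back to
    float64 at maximum depth. Returns a list of (lo, hi, precision) regimes."""
    if max_error_float32(f, lo, hi) <= err_bound:
        return [(lo, hi, "float32")]
    if depth == 0:
        return [(lo, hi, "float64")]
    mid = 0.5 * (lo + hi)
    return (infer_regimes(f, lo, mid, err_bound, depth - 1)
            + infer_regimes(f, mid, hi, err_bound, depth - 1))

# Mixed-precision regimes for a polynomial on [0, 100]: small inputs can run
# in float32, large ones need float64.
print(infer_regimes(lambda x: x * x * x - 2.0 * x + 1.0, 0.0, 100.0, 1e-3))
```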


Author(s):  
KWOK PING CHAN ◽  
TSONG YUEH CHEN ◽  
DAVE TOWEY

Restricted Random Testing (RRT) is a new method of testing software that improves upon traditional Random Testing (RT) techniques. Research has indicated that failure patterns (portions of an input domain which, when executed, cause the program to fail or reveal an error) can influence the effectiveness of testing strategies. For certain types of failure patterns, it has been found that a widespread and even distribution of test cases in the input domain can be significantly more effective at detecting failure compared with ordinary RT. Testing methods based on RT, but which aim to achieve even and widespread distributions, have been called Adaptive Random Testing (ART) strategies. One implementation of ART is RRT. RRT uses exclusion zones around executed, but non-failure-causing, test cases to restrict the regions of the input domain from which subsequent test cases may be drawn. In this paper, we introduce the motivation behind RRT, explain the algorithm and detail some empirical analyses carried out to examine the effectiveness of the method. Two versions of RRT are presented: Ordinary RRT (ORRT) and Normalized RRT (NRRT). The two versions share the same fundamental algorithm, but differ in their treatment of non-homogeneous input domains. Investigations into the use of alternative exclusion shapes are outlined, and a simple technique for reducing the computational overheads of RRT, prompted by the alternative exclusion shape investigations, is also explained. The performance of RRT is compared with RT and another ART method based on maximized minimum test case separation (DART), showing excellent improvement over RT and a very favorable comparison with DART.
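A minimal Python sketch of the exclusion-zone idea: candidates are drawn at random and rejected if they fall inside the exclusion zone of any previously executed, non-failure-causing test case. The coverage-based radius below follows the usual RRT description of dividing a target exclusion ratio among all executed tests; the ORRT/NRRT treatment of non-homogeneous domains is simplified away.

```python
import math
import random

def exclusion_radius(n_executed, domain, target_ratio=1.5):
    """Radius such that the n exclusion discs together cover target_ratio
    times the domain area (discs may overlap or spill past the boundary,
    so the ratio can exceed 1)."""
    (x_lo, x_hi), (y_lo, y_hi) = domain
    area = (x_hi - x_lo) * (y_hi - y_lo)
    return math.sqrt(target_ratio * area / (n_executed * math.pi))

def rrt_next(executed, domain, max_tries=100000):
    """Draw candidates uniformly from the 2-D domain until one lies outside
    every exclusion zone around the executed test cases."""
    (x_lo, x_hi), (y_lo, y_hi) = domain
    r = exclusion_radius(len(executed), domain)
    for _ in range(max_tries):
        cand = (random.uniform(x_lo, x_hi), random.uniform(y_lo, y_hi))
        if all(math.dist(cand, t) > r for t in executed):
            return cand
    raise RuntimeError("no candidate found; lower target_ratio")

# Usage: start with one random test case, then grow the set.
dom = ((0.0, 1.0), (0.0, 1.0))
tests = [(random.random(), random.random())]
for _ in range(9):
    tests.append(rrt_next(tests, dom))
```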


1994 ◽  
Vol 8 (4) ◽  
pp. 591-609
Author(s):  
Gabriele Danninger ◽  
Walter J. Gutjahr

We describe a model for a random failure set in a fixed interval of the real line. (Failure sets are considered in input-domain-based theories of software reliability.) The model is based on an extended binary splitting process. Within the described model, we investigate the problem of how to select k test points such that the probability of finding at least one point of the failure set is maximized. It turns out that for values k > 2, the objective functions to be maximized are closely related to solutions of the Poisson-Euler-Darboux partial differential equation. Optimal test points are determined for arbitrary k in an asymptotic case where the failure set is, in a certain sense, “small” and “intricate,” which is the relevant case for practical applications.
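The objective being maximized can be illustrated with a Monte-Carlo sketch in Python; the single-random-subinterval failure model below is only a placeholder for the paper's extended binary splitting process.

```python
import random

def detection_probability(test_points, draw_failure_set, trials=100000):
    """Estimate the probability that at least one test point lies in a
    randomly drawn failure set. draw_failure_set returns a membership
    predicate for one sampled failure set."""
    hits = 0
    for _ in range(trials):
        in_failure = draw_failure_set()
        if any(in_failure(x) for x in test_points):
            hits += 1
    return hits / trials

def random_subinterval(length=0.01):
    """Placeholder model of a 'small' failure set: one short subinterval of
    [0, 1] with a uniformly random position."""
    start = random.uniform(0.0, 1.0 - length)
    return lambda x: start <= x <= start + length

# Under this placeholder model, equally spaced points are a natural choice;
# the paper derives optimal placements under its splitting-process model.
k = 5
even = [(i + 0.5) / k for i in range(k)]
print(detection_probability(even, random_subinterval))
```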


2014 ◽  
Vol 2014 ◽  
pp. 1-16 ◽  
Author(s):  
Rubing Huang ◽  
Jinfu Chen ◽  
Yansheng Lu

Random testing (RT) is a fundamental testing technique for assessing software reliability, which simply selects test cases at random from the whole input domain. As an enhancement of RT, adaptive random testing (ART) has better failure-detection capability and has been widely applied in different scenarios, such as numerical programs, some object-oriented programs, and mobile applications. However, not much work has been done on the effectiveness of ART for programs with combinatorial input domains (i.e., sets of categorical data). To extend ART to combinatorial input domains, we have adopted different similarity measures that are widely used for categorical data in data mining and have proposed two similarity measures based on interaction coverage. We then propose ART-CID, an extension of ART for combinatorial input domains, which selects as the next test case the element of the categorical data with the lowest similarity to the already generated test cases. Experimental results show that ART-CID generally performs better than RT with respect to different evaluation metrics.
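A minimal Python sketch of the selection rule: from a pool of random candidates over the categorical parameters, pick the one least similar to every already-executed test case. The Hamming-style measure below is one simple categorical measure; the paper's interaction-coverage-based measures are not reproduced here.

```python
import random

def dissimilarity(a, b):
    """Number of positions at which two test cases (tuples of categorical
    values) differ -- a simple Hamming-style measure."""
    return sum(x != y for x, y in zip(a, b))

def art_cid_next(executed, parameters, n_candidates=10):
    """Select the candidate whose closest match among the executed test
    cases is most dissimilar."""
    pool = [tuple(random.choice(values) for values in parameters)
            for _ in range(n_candidates)]
    return max(pool, key=lambda c: min(dissimilarity(c, t) for t in executed))

# Two categorical parameters: browser and operating system.
params = [("chrome", "firefox", "safari"), ("linux", "mac", "win")]
history = [("chrome", "linux")]
print(art_cid_next(history, params))
```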


2018 ◽  
Author(s):  
Shi-qi An ◽  
Julie Murtagh ◽  
Kate B. Twomey ◽  
Manoj K. Gupta ◽  
Timothy P. O’Sullivan ◽  
...  

The opportunistic pathogen Pseudomonas aeruginosa can participate in inter-species communication through signaling by cis-2-unsaturated fatty acids of the diffusible signal factor (DSF) family. Sensing these signals involves the histidine kinase PA1396 and leads to altered biofilm formation and increased tolerance to various antibiotics. Here, we show that the membrane-associated sensory input domain of PA1396 has five trans-membrane helices, two of which are required for DSF sensing. DSF binding is associated with enhanced auto-phosphorylation of PA1396 incorporated into liposomes. Further, we examined the ability of synthetic DSF analogues to modulate or inhibit PA1396 activity. Several of these analogues block the ability of DSF to trigger auto-phosphorylation and gene expression, whereas others act as inverse agonists, reducing biofilm formation and antibiotic tolerance both in vitro and in murine infection models. These analogues may thus represent lead compounds for novel adjuvants to improve the efficacy of existing antibiotics.

