Threads and or-parallelism unified

2010 ◽  
Vol 10 (4-6) ◽  
pp. 417-432 ◽  
Author(s):  
VíTOR SANTOS COSTA ◽  
INÊS DUTRA ◽  
RICARDO ROCHA

Abstract: One of the main advantages of Logic Programming (LP) is that it provides an excellent framework for the parallel execution of programs. In this work we investigate novel techniques to efficiently exploit parallelism from real-world applications on low-cost multi-core architectures. To achieve these goals, we revive and redesign the YapOr system to exploit or-parallelism based on a multi-threaded implementation. Our new approach takes full advantage of the state-of-the-art, fast and optimized YAP Prolog engine and shares the underlying execution environment, scheduler and most of the data structures used to support YapOr's model. Initial experiments with our new approach consistently achieve almost linear speedups for most of the applications, making it a good alternative for exploiting implicit parallelism on currently available low-cost multi-core architectures.
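
To make the idea concrete, here is a minimal sketch of or-parallelism: the alternatives at a choice point of a search tree are explored by concurrent workers instead of by sequential backtracking. It is written in Python purely for illustration; YapOr itself operates inside the YAP Prolog engine with a dedicated scheduler, and the 4-queens tree and worker-pool setup below are illustrative assumptions.

```python
# A minimal illustration of or-parallelism, not YapOr itself: alternatives at
# a choice point are explored by concurrent workers instead of by backtracking.
from concurrent.futures import ProcessPoolExecutor

N = 4  # board size (illustrative)

def safe(partial, col):
    row = len(partial)
    return all(c != col and abs(c - col) != row - r
               for r, c in enumerate(partial))

def solve(partial):
    """Sequential depth-first search below one branch of the tree."""
    if len(partial) == N:
        return [partial]
    return [sol for col in range(N) if safe(partial, col)
            for sol in solve(partial + [col])]

if __name__ == "__main__":
    # Or-parallelism: the N alternatives of the first choice point are
    # handed to separate workers; each explores its subtree independently.
    with ProcessPoolExecutor() as pool:
        branches = pool.map(solve, ([col] for col in range(N)))
    solutions = [s for branch in branches for s in branch]
    print(solutions)  # [[1, 3, 0, 2], [2, 0, 3, 1]]
```

Because the subtrees are independent, no communication is needed beyond collecting results, which is why or-parallel systems can approach linear speedups on search-heavy programs.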

Author(s):  
Mahalingam Ramkumar

Approaches for securing digital assets of information systems can be classified as active approaches based on attack models, and passive approaches based on system models. Passive approaches are inherently superior to active ones. However, taking full advantage of passive approaches calls for a rigorous standard for a low-complexity, high-integrity execution environment for security protocols. We sketch the broad outlines of mirror network (MN) modules as a candidate for such a standard. Their utility in assuring real-world information systems is illustrated with examples.


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigate different real-world applications that have exhibited biases in various ways, and we list different sources of bias that can affect AI applications. We then create a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examine different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. Many future directions and solutions remain for mitigating the problem of bias in AI systems. We hope that this survey motivates researchers to tackle these issues in the near future by building on existing work in their respective fields.
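
As a concrete example of what such fairness definitions quantify, the sketch below computes two definitions commonly catalogued in surveys of this kind: statistical (demographic) parity and equal opportunity. The binary-label setting, variable names, and synthetic data are illustrative assumptions, not the survey's own code.

```python
# A hedged sketch of two common fairness definitions for binary classifiers.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(Y_hat=1 | group=0) - P(Y_hat=1 | group=1)."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Toy usage with synthetic predictions and a binary protected attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
print(statistical_parity_difference(y_pred, group))   # near 0 = parity
print(equal_opportunity_difference(y_true, y_pred, group))
```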


2020 ◽  
Vol 68 ◽  
pp. 311-364
Author(s):  
Francesco Trovo ◽  
Stefano Paladino ◽  
Marcello Restelli ◽  
Nicola Gatti

Multi-Armed Bandit (MAB) techniques have been successfully applied to many classes of sequential decision problems in the past decades. However, non-stationary settings -- very common in real-world applications -- have received little attention so far, and theoretical guarantees on the regret are known only for some frequentist algorithms. In this paper, we propose an algorithm, namely Sliding-Window Thompson Sampling (SW-TS), for non-stationary stochastic MAB settings. Our algorithm is based on Thompson Sampling and exploits a sliding-window approach to tackle, in a unified fashion, two different forms of non-stationarity studied separately so far: abruptly changing and smoothly changing. In the former, the reward distributions are constant during sequences of rounds, and their changes may be arbitrary and happen at unknown rounds, while, in the latter, the reward distributions smoothly evolve over rounds according to unknown dynamics. Under mild assumptions, we provide upper bounds on the dynamic pseudo-regret of SW-TS for the abruptly changing environment, for the smoothly changing one, and for the setting in which both forms of non-stationarity are present. Furthermore, we empirically show that SW-TS dramatically outperforms state-of-the-art algorithms even when the forms of non-stationarity are taken separately, as previously studied in the literature.
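
The core mechanism is easy to sketch: standard Thompson Sampling, but with the posterior computed only from the last tau rounds, so stale evidence is forgotten. The Bernoulli-reward Beta-posterior setting and the window size below are assumptions for illustration, not the paper's exact formulation or tuning.

```python
# A minimal sketch of Sliding-Window Thompson Sampling for Bernoulli arms,
# under the usual Beta-Bernoulli conjugacy; tau is an illustrative choice.
import random
from collections import deque

class SlidingWindowTS:
    def __init__(self, n_arms, tau=200):
        self.n_arms = n_arms
        self.window = deque(maxlen=tau)  # holds (arm, reward) pairs

    def select_arm(self):
        # Beta(1 + successes, 1 + failures) computed only over the window,
        # so evidence older than tau rounds is forgotten.
        samples = []
        for arm in range(self.n_arms):
            wins = sum(r for a, r in self.window if a == arm)
            pulls = sum(1 for a, r in self.window if a == arm)
            samples.append(random.betavariate(1 + wins, 1 + pulls - wins))
        return max(range(self.n_arms), key=samples.__getitem__)

    def update(self, arm, reward):
        self.window.append((arm, reward))

# Toy run on an abruptly changing environment: arm means swap at round 500.
bandit = SlidingWindowTS(n_arms=2, tau=200)
for t in range(1000):
    means = (0.8, 0.2) if t < 500 else (0.2, 0.8)
    arm = bandit.select_arm()
    bandit.update(arm, 1 if random.random() < means[arm] else 0)
```

After the change point, the evidence for the formerly best arm ages out of the window within tau rounds, which is what lets the sampler re-explore and track the new best arm.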


Author(s):  
Kelly S. Moreira ◽  
Diana Lermen ◽  
Leandra P. dos Santos ◽  
Fernando Galembeck ◽  
Thiago A. L. Burgo

Converting humidity into useful electrical energy was only recently demonstrated, and the improvements presented in this work are not only highly energy efficient but also contribute to the development of scalable, real-world applications.


2020 ◽  
Vol 10 (12) ◽  
pp. 353
Author(s):  
Shaya Wolf ◽  
Andrea Carneal Burrows ◽  
Mike Borowczak ◽  
Mason Johnson ◽  
Rafer Cooley ◽  
...  

Research on innovative, integrated outreach programs guided three separate week-long outreach camps held across two summers (2018 and 2019). These camps introduced computer science through real-world applications and hands-on activities, each dealing with cybersecurity principles. The camps utilized low-cost hardware and free software to provide a total of 84 students (aged 10 to 18 years) a unique learning experience. Based on feedback from the 2018 camp, a new pre/post survey was developed to assess changes in participant knowledge and interest. Student participants in the 2019 iteration showed drastic changes in their cybersecurity content recall (33% pre vs. 96% post) and cybersecurity concept identification within real-world scenarios, and exhibited an increased ability to recognize potential cybersecurity threats in their everyday lives (22% pre vs. 69% post). Finally, students' self-reported interest levels before and after the camp showed a positive increase across all student participants, with the number of students who were highly interested in cybersecurity more than doubling from 31% pre-camp to 65% post-camp. Implications for educators are significant, as these activities and experiences can be interwoven into traditional schooling as well as less formal camps, whether as pure computer science or through integrated STEM.


Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 407 ◽  
Author(s):  
Dominik Weikert ◽  
Sebastian Mai ◽  
Sanaz Mostaghim

In this article, we present a new algorithm called Particle Swarm Contour Search (PSCS), a Particle Swarm Optimisation inspired algorithm to find object contours in 2D environments. Currently, most contour-finding algorithms are based on image processing and require a complete view of the search space in which the contour is to be found; for real-world applications, such complete knowledge may not always be feasible or possible to obtain. The proposed algorithm removes this requirement and relies only on the local information of the particles to accurately identify a contour. Particles search for the contour of an object and then traverse along it, using their accumulated knowledge of positions inside and outside the object. Our experiments show that the proposed PSCS algorithm delivers results comparable to the state-of-the-art.
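
The key constraint, local information only, can be illustrated with a toy contour follower: the only question a particle may ask is whether a given point lies inside the object. The unit-disk oracle, the bisection refinement, and the tangent step below are illustrative assumptions; PSCS itself coordinates a whole swarm with PSO-style dynamics.

```python
# A hedged toy illustration of contour search from local queries only; this
# is not the PSCS algorithm itself, which uses a swarm of particles.

def inside(x, y):                      # local oracle: unit disk
    return x * x + y * y < 1.0

def find_boundary(p_in, p_out, iters=40):
    """Bisect between an inside and an outside point to pin the contour."""
    for _ in range(iters):
        mid = tuple((a + b) / 2 for a, b in zip(p_in, p_out))
        if inside(*mid):
            p_in = mid
        else:
            p_out = mid
    return p_in

# Trace the contour: find one boundary point, then repeatedly step along an
# estimated tangent and re-bisect back onto the boundary.
point = find_boundary((0.0, 0.0), (2.0, 0.0))
contour = [point]
for _ in range(100):
    x, y = point
    # The tangent of a circle at (x, y) is (-y, x); a real particle would
    # estimate this direction from its recent inside/outside samples.
    step = 0.1
    nx, ny = x - step * y, y + step * x
    inward = (nx * 0.9, ny * 0.9)       # a nearby point biased inside
    outward = (nx * 1.1, ny * 1.1)      # and one biased outside
    point = find_boundary(inward, outward)
    contour.append(point)
```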


2008 ◽  
Vol 8 (5-6) ◽  
pp. 545-580 ◽  
Author(s):  
WOLFGANG FABER ◽  
GERALD PFEIFER ◽  
NICOLA LEONE ◽  
TINA DELL'ARMI ◽  
GIUSEPPE IELPA

Abstract: Disjunctive logic programming (DLP) is a very expressive formalism. It allows for expressing every property of finite structures that is decidable in the complexity class Σ₂^P (= NP^NP). Despite this high expressiveness, there are some simple properties, often arising in real-world applications, which cannot be encoded in a simple and natural manner. In particular, properties that require applying arithmetic operators (like sum, times, or count) to a set or multiset of elements satisfying some conditions cannot be naturally expressed in classic DLP. To overcome this deficiency, we extend DLP by aggregate functions in a conservative way. In particular, we avoid the introduction of constructs with disputed semantics by requiring aggregates to be stratified. We formally define the semantics of the extended language (called DLP^A), and illustrate how it can be profitably used for representing knowledge. Furthermore, we analyze the computational complexity of DLP^A, showing that the addition of aggregates does not bring a higher cost in that respect. Finally, we provide an implementation of DLP^A in DLV, a state-of-the-art DLP system, and report on experiments which confirm the usefulness of the proposed extension also for the efficiency of computation.
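
To see what an aggregate buys, consider a team-building-style constraint: reject any candidate assignment whose total salary exceeds a budget. In an aggregate-extended DLP this is a single declarative #sum constraint (sketched schematically in a comment below, not verbatim from the paper); the Python check is its procedural counterpart, shown here only as a hedged illustration.

```python
# A hedged illustration of what an aggregate atom expresses. Schematically,
# a DLP constraint of the form
#   :- not #sum{ S, E : assigned(E), salary(E, S) } <= 10000.
# rejects answer sets whose assigned employees' salaries exceed the budget.
salary = {"ann": 4000, "bob": 3500, "carol": 4500}

def satisfies_budget(assigned, budget=10000):
    # #sum over the multiset of salaries of assigned employees
    return sum(salary[e] for e in assigned) <= budget

print(satisfies_budget({"ann", "bob"}))           # True  (7500 <= 10000)
print(satisfies_budget({"ann", "bob", "carol"}))  # False (12000 > 10000)
```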


2018 ◽  
Author(s):  
Aditi Kathpalia ◽  
Nithin Nagaraj

Causality testing methods are being widely used in various disciplines of science. Model-free methods for causality estimation are very useful, as the underlying model generating the data is often unknown. However, existing model-free measures assume separability of cause and effect at the level of individual samples of measurements and, unlike model-based methods, do not perform any intervention to learn causal relationships. These measures can thus only capture causality that manifests as the associational occurrence of ‘cause’ and ‘effect’ in well-separated samples. In real-world processes, ‘cause’ and ‘effect’ are often inherently inseparable or become inseparable in the acquired measurements. We propose a novel measure that uses an adaptive interventional scheme to capture causality which is not merely associational. The scheme is based on characterizing the complexities associated with the dynamical evolution of processes on short windows of measurements. The formulated measure, Compression-Complexity Causality, is rigorously tested on simulated and real datasets and its performance is compared with that of existing measures such as Granger Causality and Transfer Entropy. The proposed measure is robust to the presence of noise, long-term memory, filtering and decimation, low temporal resolution (including aliasing), non-uniform sampling, finite length signals and the presence of common driving variables. Our measure outperforms existing state-of-the-art measures, establishing itself as an effective tool for causality testing in real-world applications.
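
For orientation, here is how one of the baselines mentioned above, Granger causality, is typically run in practice. The synthetic coupled series and the lag choice are illustrative, and Compression-Complexity Causality itself is not part of statsmodels.

```python
# A hedged example of a Granger causality test on synthetic data where
# y is driven by lagged x, so x should be found to Granger-cause y.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()  # x causes y with lag 1

# Column order matters: the test asks whether the second column
# Granger-causes the first.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)
# Small p-values for the F-tests indicate that x Granger-causes y.
```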


Author(s):  
Jie Wen ◽  
Zheng Zhang ◽  
Yong Xu ◽  
Bob Zhang ◽  
Lunke Fei ◽  
...  

Multi-view clustering aims to partition data collected from diverse sources based on the assumption that all views are complete. However, this assumption is rarely satisfied in many real-world applications, resulting in the incomplete multi-view learning problem. Existing attempts on this problem still have the following limitations: 1) the underlying semantic information of the missing views is commonly ignored; 2) the local structure of the data is not well explored; 3) the importance of different views is not effectively evaluated. To address these issues, this paper proposes a Unified Embedding Alignment Framework (UEAF) for robust incomplete multi-view clustering. In particular, a locality-preserved reconstruction term is introduced to infer the missing views such that all views can be naturally aligned. A consensus graph is adaptively learned and embedded via reverse graph regularization to guarantee the common local structure of multiple views, which in turn can further align the incomplete views and the inferred views. Moreover, an adaptive weighting strategy is designed to capture the importance of different views. Extensive experimental results show that the proposed method can significantly improve the clustering performance in comparison with some state-of-the-art methods.
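
Of the three ingredients, the adaptive weighting is the simplest to sketch. A common heuristic, assumed here purely for illustration and not necessarily UEAF's exact rule, is to weight each view inversely to its reconstruction loss:

```python
# A hedged toy sketch of adaptive view weighting; the inverse-loss rule is
# a generic heuristic, not taken from the UEAF paper.
import numpy as np

def adaptive_view_weights(reconstruction_losses, eps=1e-8):
    """Give views with lower reconstruction loss a larger weight."""
    inv = 1.0 / (np.asarray(reconstruction_losses) + eps)
    return inv / inv.sum()

# Toy usage: view 3 reconstructs poorly, so it is down-weighted.
print(adaptive_view_weights([0.2, 0.25, 1.5]))  # roughly [0.52, 0.41, 0.07]
```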


2020 ◽  
Vol 8 ◽  
pp. 539-555
Author(s):  
Marina Fomicheva ◽  
Shuo Sun ◽  
Lisa Yankovskaya ◽  
Frédéric Blain ◽  
Francisco Guzmán ◽  
...  

Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it aims to inform the user of the quality of the MT output at test time. Existing approaches require large amounts of expert-annotated data, computation, and time for training. As an alternative, we devise an unsupervised approach to QE where no training or access to additional resources besides the MT system itself is required. Unlike most current work, which treats the MT system as a black box, we explore useful information that can be extracted from the MT system as a by-product of translation. By utilizing methods for uncertainty quantification, we achieve very good correlation with human judgments of quality, rivaling state-of-the-art supervised QE models. To evaluate our approach, we collect the first dataset that enables work on both black-box and glass-box approaches to QE.
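
One glass-box signal of the kind such approaches exploit is the decoder's own token probabilities, which every NMT system produces as a by-product of translation. The sketch below is a hedged illustration; real toolkits expose these scores differently (e.g., as hypothesis scores in beam-search output), and the numbers are made up.

```python
# A hedged sketch of sentence-level confidence from token log-probabilities,
# one simple uncertainty signal available without any QE training data.
import math

def sentence_confidence(token_logprobs):
    """Length-normalized log-probability of the output sentence."""
    return sum(token_logprobs) / len(token_logprobs)

# Toy usage: log-probs for each generated token of one MT hypothesis.
hyp = [-0.1, -0.3, -2.4, -0.2]             # one low-confidence token
print(sentence_confidence(hyp))            # -0.75; closer to 0 = more confident
print(math.exp(sentence_confidence(hyp)))  # ~0.47 as a per-token probability
```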

