Planning with Preferences

AI Magazine ◽  
2008 ◽  
Vol 29 (4) ◽  
pp. 25 ◽  
Author(s):  
Jorge A. Baier ◽
Sheila A. McIlraith

Automated planning is a long-established area of AI that focuses on developing techniques for finding, as quickly as possible, a plan that achieves a given goal from a given set of initial states. In most real-world applications, users of planning systems have preferences over the multitude of plans that achieve a given goal. These preferences make it possible to distinguish plans that are more desirable from those that are less desirable. Planning systems should therefore be able to construct high-quality plans, or at the very least plans of reasonably good quality given the resources available. In the last few years we have seen a significant amount of research focused on developing rich and compelling languages for expressing preferences over plans. In parallel, we have seen the development of planning techniques that aim at finding high-quality plans quickly, exploiting some of the ideas developed for classical planning. In this paper we review the latest developments in automated preference-based planning, covering the main approaches to preference representation as well as the main practical planning approaches developed so far.
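
As a toy illustration of preferences over plans (not taken from the article; the plan encoding and scoring function below are invented for the example), a planner can rank candidate plans that all achieve the goal by a user-supplied quality measure and return the most preferred one:

```python
# Minimal sketch: given several plans that all achieve the goal, rank them by a
# user-supplied preference score so the planner returns the most desirable one.
# The plan representation and the scoring rule are purely illustrative.

def plan_cost(plan):
    """Example preference: prefer shorter plans that avoid 'risky' actions."""
    return len(plan) + 10 * sum(1 for action in plan if action.startswith("risky_"))

def best_plan(plans):
    """Return the most preferred plan among those achieving the goal."""
    return min(plans, key=plan_cost)

candidate_plans = [
    ["load", "risky_shortcut", "unload"],
    ["load", "drive", "drive", "unload"],
]
print(best_plan(candidate_plans))  # -> ['load', 'drive', 'drive', 'unload']
```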

Algorithms ◽  
2021 ◽  
Vol 14 (7) ◽  
pp. 197
Author(s):  
Ali Seman ◽  
Azizian Mohd Sapawi

In the conventional k-means framework, seeding is the first step toward optimization before the objects are clustered. Random seeding raises two main issues: the clustering results may be less than optimal, and different clustering results may be obtained on every run. In real-world applications, optimal and stable clustering is highly desirable. This paper introduces a new clustering algorithm, the zero k-approximate modal haplotype (Zk-AMH) algorithm, which uses a simple and novel seeding mechanism known as zero-point multidimensional spaces. The Zk-AMH provides cluster optimality and stability, thereby resolving the aforementioned issues. Notably, the Zk-AMH algorithm yielded identical mean, maximum, and minimum scores across 100 runs, with zero standard deviation, demonstrating its stability. Additionally, when applied to eight datasets, the Zk-AMH algorithm achieved the highest mean scores for four datasets, produced an approximately equal score for one dataset, and yielded marginally lower scores for the other three. With its optimality and stability, the Zk-AMH algorithm could be a suitable alternative for developing future clustering tools.
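
The instability that motivates the work can be reproduced in a few lines. The sketch below does not implement Zk-AMH or its zero-point seeding; it only contrasts random seeding, whose objective varies across runs, with a deterministic seeding rule, which yields identical results every time:

```python
# Minimal sketch of the motivation (not the Zk-AMH algorithm itself): random
# seeding can give a different k-means result on every run, whereas any
# deterministic seeding rule gives identical results across runs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))

# Random seeding: the objective (inertia) may differ from run to run.
random_runs = [KMeans(n_clusters=3, init="random", n_init=1,
                      random_state=seed).fit(X).inertia_ for seed in range(5)]

# Deterministic seeding (here: the first k points, purely illustrative).
fixed_init = X[:3]
fixed_runs = [KMeans(n_clusters=3, init=fixed_init, n_init=1).fit(X).inertia_
              for _ in range(5)]

print("spread with random seeding:", np.std(random_runs))  # typically > 0
print("spread with fixed seeding :", np.std(fixed_runs))   # exactly 0
```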


Author(s):  
Kannan Balasubramanian ◽  
Mala K.

Zero-knowledge protocols provide a way of proving that a statement is true without revealing anything other than the correctness of the claim. They have practical applications in cryptography and are used in many settings; while some applications exist only at the specification level, an active line of research has produced real-world deployments. Zero-knowledge protocols, also referred to as zero-knowledge proofs, are protocols in which one party, called the prover, tries to convince the other party, called the verifier, that a given statement is true. Sometimes the statement is that the prover possesses a particular piece of information; this special case is called a zero-knowledge proof of knowledge. Formally, a zero-knowledge proof is a type of interactive proof.
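
As a concrete illustration, the sketch below runs one round of a Schnorr-style identification protocol, a standard zero-knowledge proof of knowledge of a discrete logarithm, with toy parameters (real systems use large standardized groups):

```python
# Minimal sketch of a zero-knowledge proof of knowledge: the prover convinces
# the verifier that it knows the discrete logarithm x of y = g^x mod p without
# revealing x. Toy parameters only; not a production implementation.
import secrets

p, q, g = 23, 11, 2          # toy group: g has prime order q modulo p
x = secrets.randbelow(q)     # prover's secret
y = pow(g, x, p)             # public value; the statement is "I know log_g(y)"

# 1. Commitment: the prover picks a random nonce and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: the verifier sends a random c.
c = secrets.randbelow(q)

# 3. Response: the prover answers with s = r + c*x (mod q); s reveals nothing
#    about x because r is uniformly random.
s = (r + c * x) % q

# 4. Verification: accept iff g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verifier accepts")
```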


1977 ◽  
Vol 7 (4) ◽  
pp. 285-293 ◽  
Author(s):  
Robert D. Dycus

The effect of proposal appearance on technical evaluation scoring was examined experimentally. Two mock proposals were prepared—one from the A Corporation and the other from the B Corporation. Each proposal was prepared in two versions—a “nice” appearing version (stylized “logoed” pages, offset two-color printing, heavy paper stock, plastic 19-ring spiral binding) and a “poor” appearing version (single-spaced typed pages, xerox reproduction, cheap transparent plastic cover, staple binding). The proposals were scored against a set of eight evaluation questions by twenty-eight experienced government evaluators in a 2 × 2 factorial design experiment. No statistically significant effects of appearance on evaluation scoring were detected. A general model is presented that describes impression in terms of proposal appearance versus proposal thought content. The experiment is interpreted in terms of this model, and “real-world” applications of the model are discussed.


2015 ◽  
Vol 24 (03) ◽  
pp. 1550003 ◽  
Author(s):  
Armin Daneshpazhouh ◽  
Ashkan Sami

The task of semi-supervised outlier detection is to find the instances that are exceptional relative to the rest of the data, using some labeled examples. This task is particularly important in applications such as fraud detection and intrusion detection. Most existing techniques are unsupervised; semi-supervised approaches, in contrast, use both negative and positive instances to detect outliers. However, in many real-world applications very few positive labeled examples are available. This paper proposes an innovative approach to address this problem. The proposed method works as follows: first, some reliable negative instances are extracted by a kNN-based algorithm; then, fuzzy clustering using both negative and positive examples is used to detect outliers. Experimental results on real data sets demonstrate that the proposed approach outperforms previous state-of-the-art unsupervised methods in detecting outliers.
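
A minimal sketch of this two-stage idea is given below; the specific distance rules, cluster counts, and thresholds are illustrative assumptions rather than the paper's exact algorithm:

```python
# Minimal sketch: extract "reliable negative" (inlier) instances far from the few
# labeled outliers, then run fuzzy c-means and treat membership in the cluster
# seeded near the labeled outliers as an outlier score.
import numpy as np

rng = np.random.default_rng(1)
inliers = rng.normal(0.0, 1.0, size=(200, 2))
unlabeled_outliers = rng.normal(6.0, 1.0, size=(10, 2))
X = np.vstack([inliers, unlabeled_outliers])
positives = unlabeled_outliers[:3]            # the few labeled outlier examples

# Stage 1: take the points farthest from the labeled outliers as reliable negatives.
dist_to_pos = np.linalg.norm(X[:, None, :] - positives[None, :, :], axis=2).mean(axis=1)
reliable_negatives = X[np.argsort(dist_to_pos)[-50:]]

# Stage 2: fuzzy c-means with one center seeded at the negatives and one at the
# positives, so membership in the second cluster serves as an outlier score.
centers = np.vstack([reliable_negatives.mean(axis=0), positives.mean(axis=0)])
m = 2.0                                        # fuzzifier
for _ in range(30):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)          # fuzzy memberships
    um = u ** m
    centers = (um.T @ X) / um.sum(axis=0)[:, None]    # update cluster centers

outlier_score = u[:, 1]
print("points flagged as outliers:", int((outlier_score > 0.5).sum()))
```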


2021 ◽  
Vol 2078 (1) ◽  
pp. 012045
Author(s):  
Xiaomeng Guo ◽  
Li Yi ◽  
Hang Zou ◽  
Yining Gao

Most existing face super-resolution (SR) methods are developed under the assumption that the degradation is fixed and known (e.g., bicubic downsampling). However, these methods suffer a severe performance drop under the various unknown degradations encountered in real-world applications. Previous methods usually rely on facial priors, such as facial geometry priors or reference priors, to restore realistic face details. Nevertheless, low-quality inputs cannot provide accurate geometric priors, and high-quality references are often unavailable, which limits the use of face super-resolution in real-world scenes. In this work, we propose GPLSR, which uses the rich priors encapsulated in a pre-trained face GAN to perform blind face super-resolution. This generative facial prior is incorporated into the super-resolution process through a channel squeeze-and-excitation spatial feature transform layer (SE-SFT), which allows our method to achieve a good balance between realness and fidelity. Moreover, GPLSR can restore facial details in a single forward pass thanks to the powerful generative facial prior. Extensive experiments show that, at a magnification factor of 16, the method achieves better performance than existing techniques on both synthetic and real datasets.
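
The sketch below shows one plausible way to combine channel squeeze-and-excitation with a spatial feature transform in PyTorch; the exact layer layout in GPLSR is not specified in the abstract, so this arrangement is an assumption for illustration only:

```python
# Minimal sketch (assumed layout, not the paper's exact layer) of a channel
# squeeze-and-excitation + spatial feature transform (SE-SFT) block: GAN-prior
# features modulate the SR features with per-pixel scale/shift, and a channel
# attention vector re-weights the result.
import torch
import torch.nn as nn

class SESFT(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Spatial feature transform: predict scale (gamma) and shift (beta)
        # maps from the prior features.
        self.to_gamma = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_beta = nn.Conv2d(channels, channels, 3, padding=1)
        # Channel squeeze-and-excitation applied to the modulated features.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, feat, prior):
        gamma, beta = self.to_gamma(prior), self.to_beta(prior)
        modulated = feat * (1 + gamma) + beta     # spatial modulation
        return modulated * self.se(modulated)     # channel re-weighting

feat = torch.randn(1, 64, 32, 32)    # SR branch features
prior = torch.randn(1, 64, 32, 32)   # features from the pre-trained face GAN
print(SESFT(64)(feat, prior).shape)  # torch.Size([1, 64, 32, 32])
```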


2017 ◽  
Vol 29 (2) ◽  
pp. 226-254 ◽  
Author(s):  
Susumu Shikano ◽  
Michael F Stoffel ◽  
Markus Tepe

The relationship between legislatures and bureaucracies is typically modeled as a principal–agent game. Legislators can acquire information about the (non-)compliance of bureaucrats at some specific cost. Previous studies treat the information obtained from oversight as perfect, which contradicts most real-world applications. We therefore provide a model in which the information includes random noise. The quality of the provided goods usually increases with information accuracy, while less oversight is required at the same time. However, bureaucrats never provide high quality if information accuracy falls below a specific threshold. We assess the empirical validity of our predictions in a lab experiment. Our data show that information accuracy is indeed an important determinant of both legislator and bureaucrat decision-making.
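
The role of noisy oversight information can be illustrated with a small simulation; the sketch below is not the authors' game-theoretic model (payoffs and strategies are omitted) and only shows how signal accuracy limits the detection of non-compliance:

```python
# Minimal sketch (payoffs omitted; not the authors' model): oversight returns a
# noisy binary signal that matches the bureaucrat's true behaviour with
# probability `accuracy`, so low accuracy makes shirking hard to detect.
import random

def oversight_signal(compliant: bool, accuracy: float) -> bool:
    """Observed signal equals the truth with probability `accuracy`."""
    return compliant if random.random() < accuracy else not compliant

def detection_rate(accuracy: float, trials: int = 10_000) -> float:
    """Share of non-compliant bureaucrats correctly flagged by the signal."""
    return sum(not oversight_signal(False, accuracy) for _ in range(trials)) / trials

for accuracy in (0.5, 0.7, 0.9):   # 0.5 = pure noise, 1.0 = perfect oversight
    print(f"accuracy={accuracy:.1f}  detection rate≈{detection_rate(accuracy):.2f}")
```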


2016 ◽  
Vol 39 (1) ◽  
pp. 97-108 ◽  
Author(s):  
Jorge Iván Vélez ◽  
Fernando Marmolejo-Ramos ◽  
Juan Carlos Correa

We propose and illustrate a new graphical method for performing diagnostic analyses in two-way contingency tables. In this method, one observation is added to or removed from each cell at a time, whilst the other cells are held constant, and the resulting change in a test statistic of interest is represented graphically. The method provides a very simple way of determining how robust our model (and hence our conclusions) is to small changes introduced to the data. We illustrate how the method works via four examples, three of them from real-world applications.
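
The perturbation step is straightforward to implement; the sketch below (with an invented 2 × 2 table and the chi-square statistic as the test statistic of interest) records the change produced by adding or removing one observation per cell, leaving the plotting to the reader:

```python
# Minimal sketch of the diagnostic idea: add or remove one observation from each
# cell in turn (holding the others fixed) and record how the chi-square test
# statistic changes. The table is illustrative; plotting is omitted.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[20, 15],
                  [10, 25]])
base_stat = chi2_contingency(table)[0]

for (i, j), _ in np.ndenumerate(table):
    for delta in (+1, -1):
        perturbed = table.copy()
        perturbed[i, j] += delta
        stat = chi2_contingency(perturbed)[0]
        print(f"cell ({i},{j}) {delta:+d}: chi2 changes by {stat - base_stat:+.3f}")
```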


2020 ◽  
Author(s):  
Youming Zhang ◽  
Ruofei Zhu ◽  
Zhengzhou Zhu ◽  
Qun Guo ◽  
Lei Pang

Click-through rate (CTR) prediction is a core problem in many real-world applications such as online advertising and recommendation systems. Effective prediction relies on high-order combinatorial features, which are often hand-crafted by experts. Limited by human experience and high implementation costs, combinatorial features cannot be captured manually in a thorough and comprehensive way. There have been efforts to construct such features automatically by designing feature-generating models such as FMs, DCN, and so on. Despite the great success of these structures, most existing models cannot distinguish high-quality feature interactions from the huge number of useless ones, which can easily impair their performance. In this paper, we propose a Higher-Order Attentional Network (HOAN) to select high-quality combinatorial features. HOAN is a hierarchical structure whose multiple crossing layers can learn feature interactions of any order in an end-to-end manner. Inside each crossing layer, every interaction term has its own weight, computed with consideration of global information, so that useless features are eliminated and high-quality features are selected. Besides, HOAN maintains the integrity of the individual feature embeddings and offers interpretable feedback on the calculation process. Furthermore, we combine DNN and HOAN into a Deep & Attentional Crossing Network (DACN), which comprehensively models feature interactions from different perspectives. Experiments on real-world data show that HOAN and DACN outperform state-of-the-art models.
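
The sketch below illustrates the general idea of attention-weighted feature interactions (in the spirit of attentional factorization machines) rather than the exact HOAN/DACN architecture, which is not fully specified in the abstract:

```python
# Minimal sketch of attention over feature interactions (not the exact HOAN/DACN
# architecture): pairwise products of field embeddings are scored by a small
# attention net, so low-quality interactions receive small weights.
import torch
import torch.nn as nn

class AttentiveInteractions(nn.Module):
    def __init__(self, dim, attn_dim=16):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, attn_dim), nn.ReLU(),
                                  nn.Linear(attn_dim, 1))

    def forward(self, emb):                           # emb: (batch, fields, dim)
        i, j = torch.triu_indices(emb.size(1), emb.size(1), offset=1)
        pairs = emb[:, i] * emb[:, j]                 # all pairwise interactions
        weights = torch.softmax(self.attn(pairs), dim=1)
        return (weights * pairs).sum(dim=1)           # weighted interaction vector

emb = torch.randn(8, 10, 32)                # 10 feature fields, 32-dim embeddings
print(AttentiveInteractions(32)(emb).shape) # torch.Size([8, 32])
```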


2017 ◽  
Author(s):  
Martin N. Hebart ◽  
Chris I. Baker

Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded in univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions: one reflecting a mixture of multivariate decoding for prediction and multivariate decoding for interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and what is noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including a potential departure from multivariate decoding methods for the study of brain function.
Highlights:
We highlight two sources of confusion that affect the interpretation of multivariate decoding results.
One confusion arises from the dual use of multivariate decoding for predictions in real-world applications and for interpretation in terms of brain function.
The other confusion arises from the different statistical and conceptual frameworks underlying classical univariate analysis and multivariate decoding.
We highlight six differences between classical univariate analysis and multivariate decoding, as well as differences in the interpretation of signal and noise.
These confusions are illustrated in four examples revealing assumptions and limitations of multivariate decoding for interpretation.
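
For readers unfamiliar with the two frameworks being contrasted, the sketch below (simulated data, not from the paper) runs a voxel-wise univariate test next to a cross-validated multivariate decoder on the same patterns:

```python
# Minimal sketch contrasting classical univariate analysis with multivariate
# decoding on simulated two-condition "activity patterns".
import numpy as np
from scipy.stats import ttest_ind
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 100, 50
X = rng.normal(size=(2 * n_trials, n_voxels))
y = np.repeat([0, 1], n_trials)
X[y == 1, :5] += 0.3                     # weak effect confined to a few voxels

# Classical univariate analysis: test each voxel separately.
t_vals, p_vals = ttest_ind(X[y == 0], X[y == 1], axis=0)
print("voxels significant at p < 0.05:", int((p_vals < 0.05).sum()))

# Multivariate decoding: predict the condition from the whole pattern.
accuracy = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print("cross-validated decoding accuracy:", round(float(accuracy), 2))
```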


2020 ◽  
Author(s):  
Maurizio Petrelli ◽  
Luca Caricchi ◽  
Diego Perugini

Clinopyroxene-based thermometers and barometers are widely used tools for estimating the temperature and pressure conditions under which magmas are stored before eruptions.
Several studies have reported the development and application of clinopyroxene–liquid geothermobarometers in many different volcanic environments, also warning of the potential pitfalls of using overly complex models [e.g., 1 and references therein]. The main drawback of models with a large number of parameters is potential overfitting of the calibration data, yielding poor accuracy in real-world applications. On the other hand, simpler models cannot account for the complexity of natural magmatic systems and require different calibrations for different magma chemistries [e.g., 2, 3].
In the present study, we report on the development of clinopyroxene and clinopyroxene–liquid thermometers and barometers over a wide range of P-T-X conditions using machine learning (ML) algorithms. To avoid overfitting and to demonstrate the robustness of the different methods, we randomly split the dataset into training and validation portions and repeat this procedure up to 10000 times to trace the performance of each algorithm. We compare the performance of the ML algorithms with classical, established clinopyroxene and clinopyroxene–liquid thermometers and barometers using local and global calibrations. Finally, we apply the obtained thermometers and barometers to real study cases.
[1] K. D. Putirka, Thermometers and barometers for volcanic systems, Minerals, Inclusions and Volcanic Processes, 69, 61–120, 2008.
[2] D. A. Neave, K. D. Putirka, Am. Mineral., 2017, DOI:10.2138/am-2017-5968.
[3] M. Masotta, S. Mollo, C. Freda, M. Gaeta, G. Moore, Contrib. Mineral. Petrol., 2013, DOI:10.1007/s00410-013-0927-9.
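
The repeated random-split validation strategy can be sketched as follows; the regressor choice and the synthetic placeholder data are assumptions for illustration, not the study's calibration dataset or final models:

```python
# Minimal sketch of the validation strategy described above: repeatedly split a
# calibration dataset at random into training and validation portions and track
# the error of an ML regressor. Data and model are placeholders, not the study's
# clinopyroxene-liquid calibration set.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # stand-in for mineral/melt chemistry
T = 1000 + 50 * X[:, 0] - 30 * X[:, 1] + rng.normal(0, 10, 500)   # "temperature"

errors = []
for split in range(100):                         # the study repeats up to 10000 times
    X_tr, X_va, T_tr, T_va = train_test_split(X, T, test_size=0.2, random_state=split)
    model = ExtraTreesRegressor(n_estimators=200, random_state=split).fit(X_tr, T_tr)
    errors.append(mean_squared_error(T_va, model.predict(X_va)) ** 0.5)

print(f"RMSE over {len(errors)} random splits: "
      f"{np.mean(errors):.1f} ± {np.std(errors):.1f} °C")
```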

