Private Bayesian Persuasion with Sequential Games

2020 ◽  
Vol 34 (02) ◽  
pp. 1886-1893
Author(s):  
Andrea Celli ◽  
Stefano Coniglio ◽  
Nicola Gatti

We study an information-structure design problem (a.k.a. a persuasion problem) with a single sender and multiple receivers whose actions have a priori unknown types, independently drawn from action-specific marginal probability distributions. As in the standard Bayesian persuasion model, the sender has access to additional information regarding the action types, which she can exploit when committing to a (noisy) signaling scheme through which she sends a private signal to each receiver. The novelty of our model is in considering the much more expressive case in which the receivers interact in a sequential game with imperfect information, with utilities depending on the game outcome and the realized action types. After formalizing the notions of ex ante and ex interim persuasiveness (which differ in the time at which the receivers commit to following the sender's signaling scheme), we investigate the continuous optimization problem of computing a signaling scheme that maximizes the sender's expected revenue. We show that computing an optimal ex ante persuasive signaling scheme is NP-hard when there are three or more receivers. In contrast with previous hardness results for ex interim persuasion, however, we show that, for games with two receivers, an optimal ex ante persuasive signaling scheme can be computed in polynomial time via a novel algorithm based on the ellipsoid method.
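The mechanics of a persuasive signaling scheme can be sketched in a far simpler setting than the paper's multi-receiver sequential game: a single receiver, a binary state, and a binary action (the textbook persuasion example). The prior, threshold, and closed-form solution below are illustrative and are not taken from the paper.

```python
def optimal_scheme(prior, threshold):
    """Sender prefers action 1; the receiver plays it iff the posterior
    probability of state 1 is at least `threshold`.
    Returns (P(signal=1 | state=1), P(signal=1 | state=0), sender value)."""
    if prior >= threshold:
        # the receiver is persuaded by the prior alone: always recommend
        return 1.0, 1.0, 1.0
    # recommend action 1 always in state 1, and in state 0 just often
    # enough that the posterior after the recommendation hits the threshold
    p0 = prior * (1 - threshold) / ((1 - prior) * threshold)
    value = prior + (1 - prior) * p0
    return 1.0, p0, value

# illustrative numbers: prior 0.3, obedience threshold 0.5
x1, x0, v = optimal_scheme(prior=0.3, threshold=0.5)
```

With a prior of 0.3, the scheme recommends the sender-preferred action with probability 3/7 in the bad state, pushing the sender's value from 0.3 (full disclosure) up to 0.6.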

Author(s):  
Niklas Hahn ◽  
Martin Hoefer ◽  
Rann Smorodinsky

We study an information-structure design problem (i.e., a Bayesian persuasion problem) in an online scenario. Inspired by the classic gambler's problem, consider a set of candidates who arrive sequentially and are evaluated by one agent (the sender). This agent learns the value of hiring the candidate for herself as well as the value for another agent, the receiver. The sender provides a signal to the receiver, who, in turn, makes an irrevocable decision on whether or not to hire the candidate. A priori, each agent's valuation distribution is independent across candidates but need not be identical. We design good online signaling schemes for the sender. To assess performance, we compare the expected utility to that of an optimal offline scheme run by a prophet sender who knows all candidate realizations in advance. We show an optimal prophet inequality for online Bayesian persuasion, with a 1/2-approximation when the instance satisfies a "satisfactory-status-quo" assumption. Without this assumption, there are instances that admit no finite approximation factor. We extend the results to combinatorial domains and obtain prophet inequalities for matching with multiple hires and multiple receivers.
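The prophet benchmark used here can be illustrated with the classic fixed-threshold rule: accept the first candidate whose value reaches half the prophet's expected maximum. This is a generic prophet-inequality simulation, not the paper's signaling scheme, and the three value distributions are invented.

```python
import random

# three made-up candidate value distributions (assumptions, not from the paper)
dists = [lambda r: r.uniform(0, 1),
         lambda r: r.uniform(0, 2),
         lambda r: r.expovariate(1.0)]

rng = random.Random(0)

# estimate the prophet's expected maximum by Monte Carlo,
# then fix the classic threshold T = E[max] / 2
samples = [max(d(rng) for d in dists) for _ in range(20000)]
T = sum(samples) / len(samples) / 2

def run(n_trials=20000):
    tot_alg = tot_pro = 0.0
    for _ in range(n_trials):
        vals = [d(rng) for d in dists]      # candidates arrive in order
        tot_pro += max(vals)                # the prophet takes the best
        tot_alg += next((v for v in vals if v >= T), 0.0)  # first above T
    return tot_alg / n_trials, tot_pro / n_trials

alg, pro = run()
ratio = alg / pro   # the threshold rule guarantees at least ~1/2 in expectation
```

Empirically the ratio typically lands well above the worst-case 1/2 bound for benign distributions like these.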


This review article discusses the possibilities of using fractal mathematical analysis to solve scientific and applied problems in modern biology and medicine. The authors show that only such an approach, rooted in nonlinear mechanics, allows the chaotic component of the structure and function of living systems to be quantified; this yields important additional a priori information and expands, in particular, the possibilities of diagnostics, differential diagnosis, and prediction of the course of physiological and pathological processes. A number of examples demonstrate the specific advantages of using fractal analysis for these purposes. The authors conclude that expanded use of fractal analysis methods in medical and biological research is promising.
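As one concrete instance of fractal analysis, a minimal box-counting dimension estimator can be sketched; the straight-line sanity check and all parameters are illustrative, not drawn from the review.

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point set
    by regressing log N(eps) on log(1/eps)."""
    logs = []
    for eps in sizes:
        boxes = {(int(x / eps), int(y / eps)) for x, y in points}
        logs.append((math.log(1.0 / eps), math.log(len(boxes))))
    # least-squares slope of log N versus log(1/eps)
    k = len(logs)
    mx = sum(x for x, _ in logs) / k
    my = sum(y for _, y in logs) / k
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))

# sanity check: points on a straight segment should give dimension ~1;
# a chaotic physiological signal would give a non-integer dimension
line = [(t / 10000.0, t / 10000.0) for t in range(10000)]
dim = box_count_dimension(line, sizes=[0.1, 0.05, 0.02, 0.01])
```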


Robotica ◽  
2000 ◽  
Vol 18 (3) ◽  
pp. 299-303 ◽  
Author(s):  
Carl-Henrik Oertel

Machine-vision-based sensing enables automatic hover stabilization of helicopters. The evaluation of image data produced by a camera looking straight down at the ground yields a drift-free, autonomous, on-board position measurement system. No additional information about the appearance of the scenery seen by the camera (e.g. landmarks) is needed. The technique applied combines the 4D approach with two-dimensional template tracking of a priori unknown features.
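A minimal sketch of the template-tracking idea, reduced to exhaustive sum-of-squared-differences matching on a toy intensity grid; the paper's 4D approach and real image handling are far richer than this.

```python
def match_template(image, tmpl):
    """Exhaustive search for the offset minimizing the sum of squared
    differences (SSD) between the template and an image patch."""
    th, tw = len(tmpl), len(tmpl[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            ssd = sum((image[r + i][c + j] - tmpl[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# toy "ground texture" with all-distinct intensities; the template is the
# 3x3 patch cut out at row 2, column 3, so the tracker must recover (2, 3)
img = [[i * 8 + j for j in range(8)] for i in range(8)]
tmp = [row[3:6] for row in img[2:5]]
pos = match_template(img, tmp)
```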


2016 ◽  
Vol 12 (S325) ◽  
pp. 145-155
Author(s):  
Fionn Murtagh

This work emphasizes that heterogeneity, diversity, discontinuity, and discreteness in data are to be exploited in classification and regression problems. A global a priori model may not be desirable. For data analytics in cosmology, this is motivated by the variety of cosmological objects, such as elliptical, spiral, active, and merging galaxies at a wide range of redshifts. Our aim is matching and similarity-based analytics that take account of discrete relationships in the data. The information structure of the data is represented by a hierarchy or tree, where the branch structure, rather than just the proximity, is important. The representation is related to p-adic number theory. The clustering or binning of the data values, related to the precision of the measurements, has a central role in this methodology. When used for regression, our approach is a method of cluster-wise regression, generalizing nearest-neighbour regression. Both to exemplify this analytics approach and to demonstrate its computational benefits, we address the well-known photometric redshift, or ‘photo-z’, problem, seeking to match Sloan Digital Sky Survey (SDSS) spectroscopic and photometric redshifts.
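The nearest-neighbour regression that the cluster-wise method generalizes can be sketched directly; the "photometry to redshift" training pairs below are invented for illustration, not SDSS data.

```python
def knn_regress(train, query, k=3):
    """Nearest-neighbour regression: average the targets of the k training
    points closest to the query in feature space."""
    def dist2(a, b):
        return sum((u - w) ** 2 for u, w in zip(a, b))
    nearest = sorted(train, key=lambda p: dist2(p[0], query))[:k]
    return sum(z for _, z in nearest) / k

# invented (colour features, spectroscopic redshift) training pairs
train = [((0.10, 0.20), 0.05), ((0.15, 0.25), 0.07),
         ((0.50, 0.60), 0.30), ((0.55, 0.65), 0.33),
         ((0.90, 1.10), 0.62), ((0.95, 1.00), 0.60)]
z_hat = knn_regress(train, query=(0.92, 1.05), k=2)  # photometric estimate
```

Cluster-wise regression would first bin the training data (reflecting measurement precision) and regress within the matched cluster rather than over raw neighbours.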


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Kelin Lu ◽  
K. C. Chang ◽  
Rui Zhou

This paper addresses the problem of distributed fusion when the conditional independence assumptions on sensor measurements or local estimates are not met. A new data fusion algorithm, called Copula fusion, is presented. The proposed method is grounded in Copula statistical modeling and Bayesian analysis. The primary advantage of the Copula-based methodology is that it can reveal the unknown correlation, allowing one to build joint probability distributions with potentially arbitrary underlying marginals and a desired intermodal dependence. The proposed fusion algorithm requires no a priori knowledge of communication patterns or network connectivity. The simulation results show that Copula fusion yields a consistent estimate over a wide range of process noises.


2021 ◽  
Vol 9 (2) ◽  
pp. T585-T598
Author(s):  
Abidin B. Caf ◽  
John D. Pigott

Extensive dolomitization is prevalent in the platform and periplatform carbonates of the Lower-Middle Permian strata in the Midland and greater Permian Basin. Early workers found that the platform and shelf-top carbonates were dolomitized, whereas slope and basinal carbonates remained calcitic, and proposed a reflux dolomitization model as the likely diagenetic mechanism. More importantly, they underlined that this dolomitization pattern controls the porosity and forms an updip seal. These studies were predominantly conducted using well logs, cores, and outcrop analogs; although such determinations exhibit high vertical resolution, they are laterally sparse. We used supervised Bayesian classification and probabilistic neural networks (PNN) on a 3D seismic volume to create a petrophysically constrained estimate of the most probable distribution of dolomite and limestone within a 3D subsurface volume. Combining this lithologic information with porosity, we then illuminate the diagenetic effects at seismic scale. Our workflow starts by deriving lithology classes from well-log crossplots of neutron porosity and acoustic impedance, determining the a priori proportions of each lithology and calculating a probability density function for each lithology type. We then applied these probability distributions and a priori proportions to 3D seismic volumes of acoustic impedance and predicted neutron porosity to create a lithology volume and a probability volume for each lithology type. The acoustic impedance volume was obtained by model-based poststack inversion, and the neutron porosity volume was obtained with the PNN. Our results best support a regional reflux dolomitization model, in which porosity increases from shelf to slope while dolomitization decreases, with sea-level forcing.
With this study, we determined that diagenesis and the corresponding reservoir quality in these platform and periplatform strata can be directly imaged and mapped at seismic scale by quantitative seismic interpretation and supervised classification methods.
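The supervised Bayesian classification step can be sketched in one dimension: class posteriors proportional to a priori proportions times Gaussian likelihoods of acoustic impedance. The priors, means, and standard deviations below are hypothetical, not the study's calibrated values.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)))

def classify(ai, classes):
    """Supervised Bayesian classification: posterior proportional to the
    a priori proportion times the class-conditional likelihood."""
    post = {name: prior * gaussian_pdf(ai, mu, sd)
            for name, (prior, mu, sd) in classes.items()}
    total = sum(post.values())
    return {name: p / total for name, p in post.items()}

# hypothetical a priori proportions and acoustic-impedance statistics
classes = {"dolomite":  (0.4, 14.0, 1.0),   # (prior, mean AI, std AI)
           "limestone": (0.6, 11.0, 1.5)}
post = classify(13.5, classes)              # one voxel's impedance value
best = max(post, key=post.get)
```

Applied voxel by voxel to the inverted impedance and predicted porosity volumes, this rule produces exactly the kind of lithology and probability volumes described above.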


2021 ◽  
Author(s):  
Natacha Galmiche ◽  
Nello Blaser ◽  
Morten Brun ◽  
Helwig Hauser ◽  
Thomas Spengler ◽  
...  

Probability distributions based on ensemble forecasts are commonly used to assess uncertainty in weather prediction. However, interpreting these distributions is not trivial, especially in the case of multimodality with distinct likely outcomes. The conventional summary employs the mean and standard deviation across ensemble members, which works well for unimodal, Gaussian-like distributions. In the case of multimodality, however, this summary is misleading and discards crucial information.

We aim to combine previously developed clustering algorithms from machine learning and topological data analysis to extract useful information, such as the number of clusters in an ensemble. Given the chaotic behaviour of the atmosphere, machine learning techniques can provide relevant results even if little or no a priori information about the data is available. In addition, topological methods that analyse the shape of the data can make the results explainable.

Given an ensemble of univariate time series, a graph is generated whose edges and vertices represent clusters of members, including additional information for each cluster such as the members belonging to it, its uncertainty, and its relevance according to the graph. In the case of multimodality, this approach provides relevant, quantitative information beyond the commonly used mean-and-standard-deviation approach and helps to further characterise predictability.
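A crude stand-in for the clustering machinery, counting histogram modes in a univariate ensemble, illustrates why the mean and standard deviation miss multimodality; the ensemble values and thresholds are invented.

```python
def count_modes(samples, bins=10, min_frac=0.1):
    """Crude multimodality check: histogram the ensemble members and count
    local maxima holding at least min_frac of the members."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for s in samples:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    thresh = min_frac * len(samples)
    modes = 0
    for i, c in enumerate(counts):
        left = counts[i - 1] if i > 0 else 0
        right = counts[i + 1] if i < bins - 1 else 0
        if c >= thresh and c > left and c >= right:
            modes += 1
    return modes

# toy bimodal ensemble: two distinct likely outcomes near 10-12 and 25-27;
# the overall mean (~18.4) describes neither outcome
ensemble = ([10.0 + 0.1 * i for i in range(20)]
            + [25.0 + 0.1 * i for i in range(20)])
n = count_modes(ensemble)
```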


Author(s):  
Natalia D. Nikolova ◽  
Kiril I. Tenekedjiev

The chapter focuses on the analysis of scaling constants when constructing a utility function over multi-dimensional prizes. Due to fuzzy rationality, those constants are elicited in interval form. It is assumed that the decision maker has provided additional information describing the uncertainty of the scaling constants' values within their uncertainty intervals. The non-uniform method is presented to find point estimates of the interval scaling constants and to test whether they sum to one. An analytical solution of the procedure to construct the distribution of the interval scaling constants is provided, along with its numerical realization. A numerical procedure to estimate the p-value of the statistical test is also presented. The method allows the uncertainty of the constants to be described through different types of probability distributions and fuzzy sets.
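A Monte-Carlo sketch of a unit-sum check on interval scaling constants can illustrate the flavour of the numerical procedure; unlike the chapter's non-uniform method, it simply assumes uniform distributions over hypothetical intervals.

```python
import random

def unit_sum_probability(intervals, n=20000, seed=0):
    """Monte-Carlo sketch: sample each scaling constant uniformly within
    its elicited interval and estimate P(sum of constants <= 1)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if sum(rng.uniform(lo, hi) for lo, hi in intervals) <= 1.0)
    return hits / n

# hypothetical interval estimates for three scaling constants
p = unit_sum_probability([(0.2, 0.4), (0.1, 0.3), (0.3, 0.5)])
```

With midpoints summing to 0.9 and half-widths of 0.1, the sum stays at or below one in roughly 5/6 of the draws.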


Author(s):  
Hector Florez ◽  
Mario Sanchez ◽  
Jorge Villalobos

Enterprise models are created to analyze, document, and communicate the state of an enterprise from multiple perspectives. In addition to being large and complex, these models present several construction difficulties: first, they require information provided by sources that may be inaccurate, incomplete, or even obsolete; second, although they should be structured, it is not possible to completely define their metamodel a priori. To support this construction process, the use of enterprise model drafts is proposed; such drafts should be able to conform to changing metamodels and should also accommodate incomplete or imperfect information. Unfortunately, current frameworks and tools have limitations in supporting these two features. Therefore, a set of strategies for building modeling environments that can properly handle drafts is also proposed. These strategies include support for metamodel flexibility during the modeling process and an approach to modeling imperfect information.
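The notion of a draft that tolerates attributes not yet declared in the metamodel can be sketched with a small data structure; the element type, attributes, and conformance rule are all hypothetical, not the authors' framework.

```python
class DraftElement:
    """A draft model element that tolerates attributes not (yet) declared
    in the metamodel; names and rules here are hypothetical."""
    def __init__(self, etype, **attrs):
        self.etype = etype
        self.attrs = dict(attrs)       # may contain undeclared attributes

    def conforms_to(self, metamodel):
        declared = metamodel.get(self.etype, set())
        return set(self.attrs) <= declared

metamodel = {"Server": {"name", "ip"}}
draft = DraftElement("Server", name="app01", ip="10.0.0.1", owner="?")
ok = draft.conforms_to(metamodel)      # 'owner' is not declared yet
metamodel["Server"].add("owner")       # the metamodel evolves mid-process
ok2 = draft.conforms_to(metamodel)     # the same draft now conforms
```

The draft keeps the extra attribute rather than rejecting it, so conformance can be re-checked as the metamodel evolves, which is the flexibility the abstract argues for.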


2000 ◽  
Vol 02 (02n03) ◽  
pp. 229-248 ◽  
Author(s):  
JOSEF SHINAR ◽  
TAL SHIMA ◽  
VALERY Y. GLIZER

A linear pursuit-evasion game with first-order acceleration dynamics and bounded controls is considered. In this game, the pursuer has to estimate the state variables of the game, including the lateral acceleration of the evader, based on noise-corrupted measurements of the relative position vector. The estimation process inherently involves some delay, rendering the pursuer's information structure imperfect. If the pursuer implements the optimal strategy of the perfect-information game, an evader with perfect information can take advantage of the estimation delay. However, the performance degradation is minimised if the pursuer compensates for its own estimation delay by implementing the optimal strategy derived from the solution of the imperfect (delayed) information game. In this paper, the analytical solution of the delayed-information game, which allows the value of the game to be predicted, is presented. The theoretical results are tested in a noise-corrupted scenario by Monte Carlo simulations, using a Kalman-filter-type estimator. The simulation results confirm the substantial improvement achieved by the new pursuer strategy.
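The Kalman-filter-type estimator can be illustrated with the simplest scalar case, filtering a constant signal from noisy measurements; the paper's estimator tracks the full game state, including evader acceleration, which this sketch does not attempt.

```python
import random

def kalman_1d(zs, q=0.01, r=1.0):
    """Scalar Kalman filter: a nearly constant state observed in white
    noise. q = process-noise variance, r = measurement-noise variance."""
    x, p = zs[0], 1.0          # initial state estimate and variance
    out = []
    for z in zs:
        p += q                 # predict: state variance grows
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the innovation
        p *= (1 - k)
        out.append(x)
    return out

rng = random.Random(2)
truth = 5.0                                        # hidden constant state
zs = [truth + rng.gauss(0, 1.0) for _ in range(200)]
est = kalman_1d(zs)

# after convergence the filtered error sits far below the raw noise level
mse_est = sum((e - truth) ** 2 for e in est[100:]) / 100
mse_z = sum((z - truth) ** 2 for z in zs[100:]) / 100
```

The lag visible in how `x` responds to the innovation is a toy analogue of the estimation delay the pursuer must compensate for.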

