Leadership in Singleton Congestion Games

Author(s):  
Alberto Marchesi ◽  
Stefano Coniglio ◽  
Nicola Gatti

We study Stackelberg games where the underlying structure is a congestion game. While leadership in 2-player games has been widely investigated, only a few results are known when the number of players is three or more; among them is the intractability of finding a Stackelberg equilibrium (SE) in normal-form and polymatrix games. In this paper, we focus on congestion games in which each player can choose a single resource (a.k.a. singleton congestion games) and one player acts as leader. We show that, without further assumptions, finding an SE when the followers break ties in favor of the leader is not in Poly-APX, unless P = NP. Instead, under the assumption that every player has access to the same resources and that the cost functions are monotonic, we show that an SE can be computed efficiently whether the followers break ties in favor of or against the leader.
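
The objects involved in the tractable case can be illustrated with a small best-response sketch. The instance (linear, monotonic resource costs shared by all players) and the brute-force search over the leader's pure strategies are illustrative assumptions, not the paper's algorithm:

```python
# Illustrative singleton congestion game: every player (leader + followers)
# picks exactly one resource; the cost of resource r with k users is
# slopes[r] * k, which is monotonic as the efficient case requires.
slopes = [1.0, 2.0, 3.0]     # hypothetical per-resource cost slopes
n_followers = 3

def loads(choices):
    """Number of users on each resource."""
    l = [0] * len(slopes)
    for r in choices:
        l[r] += 1
    return l

def follower_equilibrium(leader_r):
    """Best-response dynamics; converges because singleton congestion
    games are exact potential games."""
    choice = [0] * n_followers
    improved = True
    while improved:
        improved = False
        for i in range(n_followers):
            l = loads(choice + [leader_r])
            current = slopes[choice[i]] * l[choice[i]]
            for r in range(len(slopes)):
                # Cost of deviating to r: r's load would grow by one
                if r != choice[i] and slopes[r] * (l[r] + 1) < current:
                    choice[i] = r
                    l = loads(choice + [leader_r])
                    current = slopes[r] * l[r]
                    improved = True
    return choice

def leader_cost(leader_r):
    eq = follower_equilibrium(leader_r)
    return slopes[leader_r] * loads(eq + [leader_r])[leader_r]

# Brute force over the leader's pure strategies (illustration only)
best_resource = min(range(len(slopes)), key=leader_cost)
```

In this toy instance, committing to the mid-cost resource steers the followers onto the cheap resource and leaves the leader sharing with nobody.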

2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Kehan Si ◽  
Zhen Wu

Abstract. This paper studies a controlled backward-forward linear-quadratic-Gaussian (LQG) large-population system in a Stackelberg setting. The leader agent has a backward state and the follower agents have forward states. The leader agent is dominating, as its state enters those of the follower agents; on the other hand, the state-average of all follower agents affects the cost functional of the leader agent. In reality, the leader and the followers may represent two typical types of participants in market price formation: the supplier and the producers. This differs from the standard MFG literature, mainly because of the Stackelberg structure here. By variational analysis, the consistency condition system can be represented by fully coupled backward-forward stochastic differential equations (BFSDEs) with a high-dimensional block structure in an open-loop sense. Next, we discuss the well-posedness of this BFSDE system by means of the contraction mapping method. Consequently, we obtain decentralized strategies for the leader and follower agents, which are proved to satisfy the ε-Nash equilibrium property.
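
The contraction mapping (Banach fixed-point) argument used for well-posedness can be illustrated on a toy scalar fixed-point problem; the map below is an arbitrary example, not the BFSDE system itself:

```python
import math

# Picard iteration x_{n+1} = F(x_n) for F(x) = 0.5*cos(x).
# |F'(x)| = 0.5*|sin(x)| <= 0.5 < 1, so F is a contraction on R and
# Banach's fixed-point theorem guarantees a unique fixed point,
# reached by iterating from any starting point.
def F(x):
    return 0.5 * math.cos(x)

x = 0.0
for _ in range(60):
    x = F(x)

residual = abs(x - F(x))   # at machine precision by now
```

The well-posedness argument in the paper runs the same scheme on the full high-dimensional BFSDE system, with the contraction constant controlled by the coefficients.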


2010 ◽  
Vol 56 (No. 5) ◽  
pp. 201-208 ◽  
Author(s):  
M. Beranová ◽  
D. Martinovičová

Cost functions are mentioned mostly in relation to break-even analysis, where they are presented in linear form, but several different types and forms of cost functions exist. First of all, it is necessary to distinguish between short-run and long-run cost functions, both of which are important tools of managerial decision making, even though each is used at a different level of management. Several methods for estimating the parameters of cost functions are elaborated in the literature. However, all of these methods are based on past data taken from financial accounting, which is unable to separate fixed and variable costs and, in many companies, is strongly adjusted to taxation. As a tool supporting managerial decision making, cost functions should provide a vision of the future, where many factors of risk and uncertainty influence economic results. Consequently, these random factors should be considered in the construction of cost functions, especially in the long run. In order to quantify the influence of these risks and uncertainties, the authors propose an application of Bayes' theorem.
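
As a minimal sketch of how Bayes' theorem can blend prior, forward-looking judgment with noisy accounting data in a cost function estimate, consider a conjugate normal update of the variable cost per unit; all figures are hypothetical and this is not the authors' specific model:

```python
import statistics

# Prior belief about variable cost per unit (expert judgment): N(50, 10^2)
prior_mu, prior_var = 50.0, 10.0 ** 2
# Observed unit costs from past accounting data, assumed noisy with sd 5
noise_var = 5.0 ** 2
observations = [46.0, 44.5, 47.2, 45.1]

n = len(observations)
xbar = statistics.mean(observations)

# Standard conjugate normal-normal posterior update (Bayes' theorem)
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mu = post_var * (prior_mu / prior_var + n * xbar / noise_var)

def expected_total_cost(volume, fixed_cost=12_000.0):
    """Short-run linear cost function built on the Bayesian unit-cost estimate."""
    return fixed_cost + post_mu * volume
```

The posterior unit cost lands between the accounting sample mean and the prior, with the balance set by the relative precisions, and `post_var` quantifies the remaining uncertainty that a purely backward-looking estimate would hide.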


Energies ◽  
2021 ◽  
Vol 14 (10) ◽  
pp. 2885
Author(s):  
Daniel Losada ◽  
Ameena Al-Sumaiti ◽  
Sergio Rivera

This article presents the development, simulation and validation of uncertainty cost functions for a commercial building with climate-dependent controllable loads, located in Florida, USA. For their development, statistical data on the energy consumption of the building in 2016 were used, along with a kernel density estimator to characterize its probabilistic behavior. For validation of the uncertainty cost functions, the Monte Carlo simulation method was used to compare the analytical results with those obtained by simulation. The cost functions showed errors of less than 1% relative to the Monte Carlo results. This yields an analytical approach to the uncertainty costs of the building that can be used in the development of optimal energy dispatches, as well as a complementary method for the probabilistic characterization of the stochastic behavior of agents in the electricity sector.
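
The validation step — comparing an analytical expected cost against a Monte Carlo estimate — can be sketched as follows, with a quadratic cost and a normal load model standing in for the building's actual KDE-based model (all parameters invented):

```python
import random

random.seed(0)

# Assumed load model: hourly consumption P ~ N(mu, sigma^2), in kWh
mu, sigma = 120.0, 15.0
# Assumed quadratic cost of serving P kWh, in $
a, b = 0.002, 0.12

# Analytical expected cost: E[a*P^2 + b*P] = a*(mu^2 + sigma^2) + b*mu
analytic = a * (mu ** 2 + sigma ** 2) + b * mu

# Monte Carlo estimate of the same expectation
samples = [random.gauss(mu, sigma) for _ in range(200_000)]
mc = sum(a * p * p + b * p for p in samples) / len(samples)

rel_err = abs(mc - analytic) / analytic
```

With 200,000 draws the two estimates agree to well under 1%, mirroring the sub-1% differential errors reported for the building's cost functions.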


Author(s):  
João P. Hespanha

This chapter discusses several classes of potential games that are common in the literature and how to derive the Nash equilibrium for such games. It first considers identical interests games and dummy games before turning to decoupled games and bilateral symmetric games. It then describes congestion games, in which all players are equal, in the sense that the cost associated with each resource only depends on the total number of players using that resource and not on which players use it. It also presents other potential games, including the Sudoku puzzle, and goes on to analyze the distributed resource allocation problem, the computation of Nash equilibria for potential games, and fictitious play. It concludes with practice exercises and their corresponding solutions, along with additional exercises.
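
The congestion-game construction described here can be made concrete in a few lines: best-response dynamics monotonically decreases Rosenthal's exact potential and therefore terminates at a Nash equilibrium. The two-resource instance is invented for illustration:

```python
# costs[r][k-1] = cost per user when k users share resource r
# (hypothetical numbers; each cost depends only on the load, not on who plays)
costs = {'A': [1, 3, 6], 'B': [2, 4, 5]}
players = 3

def load(profile, r):
    return sum(1 for x in profile if x == r)

def player_cost(profile, i):
    r = profile[i]
    return costs[r][load(profile, r) - 1]

def potential(profile):
    """Rosenthal's exact potential: sum_r sum_{k=1}^{load(r)} c_r(k)."""
    return sum(sum(costs[r][:load(profile, r)]) for r in costs)

profile = ['A'] * players
pots = [potential(profile)]
changed = True
while changed:
    changed = False
    for i in range(players):
        for r in costs:
            alt = profile[:i] + [r] + profile[i + 1:]
            # Move only on a strict improvement; each move lowers the potential
            if costs[r][load(alt, r) - 1] < player_cost(profile, i):
                profile, changed = alt, True
                pots.append(potential(profile))
```

Here a single improving move takes the all-on-A profile to a Nash equilibrium with loads 2 on A and 1 on B, and the recorded potential values are strictly decreasing along the way.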


2014 ◽  
Vol 26 (12) ◽  
pp. 2669-2691 ◽  
Author(s):  
Terence D. Sanger

Human movement differs from robot control because of its flexibility in unknown environments, robustness to perturbation, and tolerance of unknown parameters and unpredictable variability. We propose a new theory, risk-aware control, in which movement is governed by estimates of risk based on uncertainty about the current state and knowledge of the cost of errors. We demonstrate the existence of a feedback control law that implements risk-aware control and show that this control law can be directly implemented by populations of spiking neurons. Simulated examples of risk-aware control for time-varying cost functions as well as learning of unknown dynamics in a stochastic risky environment are provided.
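
A minimal numerical sketch of the idea — choosing the command that minimizes expected cost under state uncertainty and an asymmetric error cost — might look as follows (the belief, the cost asymmetry, and the candidate grid are all assumptions, not the paper's neural implementation):

```python
import random

random.seed(1)

# Belief about the current tracking error: x ~ N(0, 0.5^2) (assumed posterior),
# represented by samples
belief = [random.gauss(0.0, 0.5) for _ in range(20_000)]

def cost(err):
    # Asymmetric risk: overshoot (err > 0) is ten times as costly as undershoot
    return 10.0 * err ** 2 if err > 0 else err ** 2

def expected_cost(u):
    """Expected cost of issuing correction u, averaged over the belief."""
    return sum(cost(x + u) for x in belief) / len(belief)

# Risk-aware command: grid search over candidate corrections
candidates = [i / 100 for i in range(-100, 101)]
u_star = min(candidates, key=expected_cost)
```

A risk-neutral controller facing a symmetric cost would pick u = 0 here; the risk-aware one biases the command negative to stay clear of the expensive overshoot region — the qualitative behavior the theory predicts.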


2008 ◽  
Vol 83 (2) ◽  
pp. 198-223 ◽  
Author(s):  
Robert Costrell ◽  
Eric Hanushek ◽  
Susanna Loeb



2018 ◽  
Vol 11 (1) ◽  
pp. 429-439 ◽  
Author(s):  
Marcin L. Witek ◽  
Michael J. Garay ◽  
David J. Diner ◽  
Michael A. Bull ◽  
Felix C. Seidel

Abstract. A new method for retrieving aerosol optical depth (AOD) and its uncertainty from Multi-angle Imaging SpectroRadiometer (MISR) observations over dark water is outlined. MISR's aerosol retrieval algorithm calculates cost functions between observed and pre-simulated radiances for a range of AODs (from 0.0 to 3.0) and a prescribed set of aerosol mixtures. The previous version 22 (V22) operational algorithm considered only the AOD that minimized the cost function for each aerosol mixture and then used a combination of these values to compute the final, “best estimate” AOD and associated uncertainty. The new approach considers the entire range of cost functions associated with each aerosol mixture. The uncertainty of the reported AOD depends on a combination of (a) the absolute values of the cost functions for each aerosol mixture, (b) the widths of the cost function distributions as a function of AOD, and (c) the spread of the cost function distributions among the ensemble of mixtures. A key benefit of the new approach is that, unlike the V22 algorithm, it does not rely on empirical thresholds imposed on the cost function to determine the success or failure of a particular mixture. Furthermore, a new aerosol retrieval confidence index (ARCI) is established that can be used to screen high-AOD retrieval blunders caused by cloud contamination or other factors. Requiring ARCI ≥0.15 as a condition for retrieval success is supported through statistical analysis and outperforms the thresholds used in the V22 algorithm. The described changes to the MISR dark water algorithm will become operational in the new MISR aerosol product (V23), planned for release in 2017.
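
The flavor of the new approach — keeping every mixture's full cost curve over the AOD grid and turning it into a weight, rather than keeping only each curve's minimum — can be sketched generically. The mixtures, cost shapes, and the exp(−χ²/2) weighting below are illustrative assumptions, not the operational V23 formulas:

```python
import math

aods = [i * 0.05 for i in range(61)]       # AOD grid, 0.0 .. 3.0

# Hypothetical chi-square cost curves for three aerosol mixtures
mixture_cost = {
    'dust':     lambda t: 40 * (t - 0.30) ** 2 + 0.5,
    'smoke':    lambda t: 25 * (t - 0.35) ** 2 + 0.8,
    'maritime': lambda t: 15 * (t - 0.55) ** 2 + 3.0,
}

# Convert every (mixture, AOD) cost into a relative weight, so that both the
# depth and the width of each cost curve influence the final estimate
weights = {m: [math.exp(-f(t) / 2) for t in aods]
           for m, f in mixture_cost.items()}

total = [sum(w[j] for w in weights.values()) for j in range(len(aods))]
z = sum(total)
best_aod = sum(t * w for t, w in zip(aods, total)) / z
sigma_aod = (sum((t - best_aod) ** 2 * w
                 for t, w in zip(aods, total)) / z) ** 0.5
```

Note that no per-mixture success/failure threshold is needed: a mixture whose cost is uniformly high (here, the maritime one) simply contributes little weight to the best estimate and its uncertainty.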


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Haifeng Zhao ◽  
Bin Lin ◽  
Wanqing Mao ◽  
Yang Ye

Cooperation among all the members of a supply chain plays an important role in logistics service. The service integrator can encourage cooperation from service suppliers by sharing their cost during the service, which we assume increases sales by building up the reputation of the supply chain. A differential game model is established for a logistics service supply chain consisting of one service integrator and one supplier. We derive the optimal solutions of the Nash equilibrium without a cost sharing contract and of the Stackelberg equilibrium with the integrator as the leader who partially shares the cost of the supplier's efforts. The results make explicit the benefits of the cost sharing contract in increasing the profits of both players as well as of the whole supply chain, showing that cost sharing is an effective coordination mechanism in the long-term relationship among the members of a logistics service supply chain.
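
The economics of the result can be illustrated with a static surrogate of the differential game: the integrator (leader) commits to sharing a fraction φ of the supplier's effort cost, the supplier (follower) best-responds with its effort level, and both profits are compared against the no-sharing benchmark. All functional forms and numbers are invented for illustration:

```python
# Integrator revenue a*e, supplier revenue b*e, supplier effort cost c*e^2;
# the integrator pays the share phi of that cost (hypothetical static model)
a, b, c = 4.0, 1.0, 1.0

def follower_effort(phi):
    # Supplier maximizes b*e - (1 - phi)*c*e^2  =>  e* = b / (2c(1 - phi))
    return b / (2 * c * (1 - phi))

def profits(phi):
    e = follower_effort(phi)
    leader = a * e - phi * c * e ** 2
    supplier = b * e - (1 - phi) * c * e ** 2
    return leader, supplier

# Leader's Stackelberg problem: grid search over the sharing fraction
grid = [i / 100 for i in range(0, 91)]        # phi in [0, 0.90]
phi_star = max(grid, key=lambda p: profits(p)[0])

leader_ns, supplier_ns = profits(0.0)          # no-sharing benchmark
leader_s, supplier_s = profits(phi_star)
```

In this toy model the optimal sharing fraction is strictly positive and raises the profit of the leader, the follower, and hence the whole chain — the qualitative conclusion of the paper.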


Author(s):  
Lingfeng Yang ◽  
Tonghai Wu ◽  
Kunpeng Wang ◽  
Hongkun Wu ◽  
Ngaiming Kwok

Online ferrography, because of its nondestructive and real-time capability, has been increasingly applied in monitoring machine wear states. However, online ferrography images are usually degraded as a result of undesirable image acquisition conditions, which eventually leads to inaccurate identification. A restoration method focusing on color correction and contrast enhancement is developed to provide high-quality images for subsequent processing. Based on the formation of a degraded image, a model describing the degradation is constructed. Then, cost functions consisting of colorfulness, contrast, and information loss are formulated. An optimal restored image is obtained by minimizing the cost functions, in which the parameters are determined using the Lagrange multiplier method. Experiments are carried out on a collection of online ferrography images, and the results show that the proposed method can effectively improve the images both qualitatively and quantitatively.
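
The structure of the optimization — trading contrast gain against information loss from saturation — can be sketched with a single scalar gain parameter; the toy pixel values, the weight, and the grid search (in place of a Lagrange-multiplier solution) are all assumptions:

```python
pixels = [30, 45, 60, 80, 100, 120, 140, 160, 180, 200]  # toy gray levels

def contrast(g):
    """Standard deviation of the gain-adjusted (and clipped) image."""
    vals = [min(255.0, p * g) for p in pixels]
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5

def info_loss(g):
    """Fraction of pixels saturated (information destroyed by clipping)."""
    return sum(1 for p in pixels if p * g > 255) / len(pixels)

lam = 200.0                                 # weight balancing the two terms
gains = [1 + i * 0.05 for i in range(41)]   # candidate gains 1.0 .. 3.0
g_star = min(gains, key=lambda g: -contrast(g) + lam * info_loss(g))
```

Here the optimum lands at the largest gain that saturates no pixel: pushing further adds a little contrast but pays the much larger information-loss penalty, which is exactly the trade-off the combined cost functions encode.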


1999 ◽  
Vol 09 (01) ◽  
pp. 135-146
Author(s):  
GAGAN AGRAWAL

An important component in compiling for distributed memory machines is data partitioning. While a number of automatic analysis techniques have been proposed for this phase, none of them is applicable to irregular problems. In this paper, we present compile-time analysis for determining data partitioning for such applications. We have developed a set of cost functions for determining communication and redistribution costs in irregular codes. We first determine the appropriate distributions for a single data-parallel statement, and then use the cost functions with a greedy algorithm for computing distributions for the full program. Initial performance results on a 16-processor IBM SP-2 are also presented.
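
The greedy pass over statements can be sketched as follows; the candidate distributions, per-statement communication costs, and redistribution cost are toy numbers, not the paper's actual cost functions:

```python
# comm[s][d]: communication cost of data-parallel statement s when the
# main array is distributed according to d (all numbers hypothetical)
comm = [
    {'block': 4, 'cyclic': 9},
    {'block': 8, 'cyclic': 3},
    {'block': 5, 'cyclic': 4},
]
REDIST = 2   # cost of redistributing the array between two statements

def greedy_plan(comm, redist):
    """Pick a distribution per statement, charging a switch penalty
    whenever consecutive statements use different distributions."""
    plan, total = [], 0
    for statement_costs in comm:
        def step_cost(d):
            switch = redist if plan and plan[-1] != d else 0
            return statement_costs[d] + switch
        d = min(statement_costs, key=step_cost)
        total += step_cost(d)
        plan.append(d)
    return plan, total

plan, total = greedy_plan(comm, REDIST)
```

On this instance the greedy pass pays one redistribution and totals 13, beating the best single fixed distribution ('block' throughout costs 4 + 8 + 5 = 17); being greedy, it is locally optimal per statement rather than globally optimal.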

