adaptive allocation
Recently Published Documents

TOTAL DOCUMENTS: 131 (five years: 24)
H-INDEX: 19 (five years: 4)

Author(s):  
Andrew Schauf ◽  
Poong Oh

Abstract Communities that share common-pool resources (CPRs) often coordinate their actions to sustain resource quality more effectively than if they were regulated by some centralized authority. Networked models of CPR extraction suggest that the flexibility of individual agents to selectively allocate extraction effort among multiple resources plays an important role in maximizing their payoffs. However, empirical evidence suggests that real-world CPR appropriators may often de-emphasize issues of allocation, for example by responding to the degradation of a single resource by reducing extraction from multiple resources, rather than by reallocating extraction effort away from the degraded resource. Here, we study the population-level consequences that emerge when individuals are constrained to apply an equal amount of extraction effort to all CPRs that are available to them within an affiliation network linking agents to resources. In systems where all resources have the same capacity, this uniform-allocation constraint leads to reduced collective wealth compared to unconstrained best-response extraction, but it can produce more egalitarian wealth distributions. The differences are more pronounced in networks that have higher degree heterogeneity among resources. In the case that the capacity of each CPR is proportional to its number of appropriators, the uniform-allocation constraint can lead to more efficient collective extraction since it serves to distribute the burden of over-extraction more evenly among the network’s CPRs. Our results reinforce the importance of adaptive allocation in self-regulation for populations who share linearly degrading CPRs; although uniform-allocation extraction habits can help to sustain higher resource quality than does unconstrained extraction, in general this does not improve collective benefits for a population in the long term.
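The effect of the uniform-allocation constraint can be illustrated with a toy computation. This sketch is not the paper's model: the affiliation network, capacities, and effort budget below are made-up illustrative values, and quality degrades linearly with total extraction as in the linear-degradation setting the abstract describes.

```python
import numpy as np

# Toy bipartite affiliation network: A[i, r] = 1 if agent i can
# extract from common-pool resource r (illustrative, not from the paper).
A = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 1],
])
n_agents, n_resources = A.shape
capacity = np.full(n_resources, 10.0)   # equal-capacity scenario
total_effort = 2.0                      # each agent's fixed effort budget

def payoffs(effort):
    """Linear degradation: quality falls with aggregate extraction;
    agent i's payoff is its effort-weighted sum of resource qualities."""
    load = effort.sum(axis=0)                  # total extraction per resource
    quality = np.maximum(capacity - load, 0.0) # linearly degrading quality
    return (effort * quality).sum(axis=1)

# Uniform-allocation constraint: equal effort on every affiliated resource.
uniform = total_effort * A / A.sum(axis=1, keepdims=True)

w = payoffs(uniform)
print("collective wealth:", w.sum())
print("wealth spread (max - min):", w.max() - w.min())
```

Swapping in other allocation rules for `uniform` (e.g. shifting effort toward less-loaded resources) lets one compare collective wealth and inequality under the same degradation model.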


2021 ◽  
Author(s):  
Vincent Graber ◽  
Eugenio Schuster

Abstract ITER will be the first tokamak to sustain a fusion-producing, or burning, plasma. If the plasma temperature were to inadvertently rise in this burning regime, the positive correlation between temperature and the fusion reaction rate would establish a destabilizing positive feedback loop. Careful regulation of the plasma’s temperature and density, or burn control, is required to prevent these potentially reactor-damaging thermal excursions, neutralize disturbances and improve performance. In this work, a Lyapunov-based burn controller is designed using a full zero-dimensional nonlinear model. An adaptive estimator manages destabilizing uncertainties in the plasma confinement properties and the particle recycling conditions (caused by plasma-wall interactions). The controller regulates the plasma density with requests for deuterium and tritium particle injections. In ITER-like plasmas, the fusion-born alpha particles will primarily heat the plasma electrons, resulting in different electron and ion temperatures in the core. By considering separate response models for the electron and ion energies, the proposed controller can independently regulate the electron and ion temperatures by requesting that different amounts of auxiliary power be delivered to the electrons and ions. These two commands for a specific control effort (electron and ion heating) are sent to an actuator allocation module that optimally maps them to the heating actuators available to ITER: an electron cyclotron heating system (20 MW), an ion cyclotron heating system (20 MW), and two neutral beam injectors (16.5 MW each). Two different actuator allocators are presented in this work. The first actuator allocator finds the optimal mapping by solving a convex quadratic program that includes actuator saturation and rate limits. It is nonadaptive and assumes that the mapping between the commanded control efforts and the allocated actuators (i.e., the effector model) contains no uncertainties. 
The second actuator allocation module has an adaptive estimator to handle uncertainties in the effector model. This uncertainty includes actuator efficiencies, the fractions of neutral beam heating that are deposited into the plasma electrons and ions, and the tritium concentration of the fueling pellets. Furthermore, the adaptive allocator considers actuator dynamics (actuation lag) that contain uncertainty. This adaptive allocation algorithm is more computationally efficient than the aforementioned nonadaptive allocator because it is computed using dynamic update laws so that finding the solution to a static optimization problem is not required at every time step. A simulation study assesses the performance of the proposed adaptive burn controller augmented with each of the actuator allocation modules.
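The nonadaptive allocation step can be sketched as a small box-constrained least-squares problem: find actuator powers whose delivered electron/ion heating best matches the two commanded control efforts, subject to saturation limits. The effector matrix `B`, the 50/50 neutral-beam deposition split, and the projected-gradient solver below are illustrative assumptions, not ITER's actual effector model or the paper's QP formulation (which also includes rate limits).

```python
import numpy as np

# Rows: fraction of each actuator's power deposited to electrons / ions.
# Columns: EC heating, IC heating, NBI 1, NBI 2 (deposition split assumed).
B = np.array([
    [1.0, 0.0, 0.5, 0.5],
    [0.0, 1.0, 0.5, 0.5],
])
u_max = np.array([20.0, 20.0, 16.5, 16.5])  # saturation limits (MW)

def allocate(v_cmd, iters=2000, lr=0.1):
    """min_u ||B u - v_cmd||^2  s.t.  0 <= u <= u_max,
    solved by projected gradient descent."""
    u = np.zeros(B.shape[1])
    for _ in range(iters):
        grad = 2.0 * B.T @ (B @ u - v_cmd)
        u = np.clip(u - lr * grad, 0.0, u_max)  # project onto the box
    return u

u = allocate(np.array([15.0, 25.0]))  # request 15 MW electron, 25 MW ion
print("actuator powers (MW):", u)
print("delivered (e-, ion):", B @ u)
```

When the request is feasible, the residual goes to zero and the solver returns one of the many power splits that realize the commanded heating; an adaptive allocator would instead update its estimate of `B` online.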


2021 ◽  
Author(s):  
Kentaro Matsuura ◽  
Junya Honda ◽  
Imad El Hanafi ◽  
Takashi Sozu ◽  
Kentaro Sakamaki

2020 ◽  
Author(s):  
Yunyun Jiang ◽  
Wenle Zhao ◽  
Valerie Durkalski-Mauldin

Abstract
Background: In clinical trials with a large sample size, a time-trend in response rates can affect the performance of Bayesian response adaptive randomization (BRAR).
Methods: To evaluate this impact, we utilize data from a previously completed randomized controlled trial that used fixed 1:1 allocation. Subject response data from this study demonstrate a clear time-trend in the control group, but not in the treatment group. In this simulation study, we re-assign patients to treatment groups based on a BRAR algorithm to examine the performance of BRAR as measured by treatment effect estimation, the probability of early stopping, and the shift in adaptive allocation.
Results: Results from the simulated study scenarios show that in the presence of a time-trend, the timing of the first interim analysis is critical for efficacy/futility decision making. Compared with fixed equal allocation, BRAR results in a higher probability of premature early stopping when a time-trend effect exists. The magnitude of this impact varies among BRAR algorithms.
Conclusions: Influential factors such as time-trend should be considered when planning the implementation of BRAR in large trials.
Trial Registration: URL: https://www.clinicaltrials.gov. Unique identifier: NCT00235495. Registered on October 10, 2005.
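One common BRAR rule can be sketched as follows; this is an illustrative algorithm, not necessarily the one studied in the trial. After a fixed 1:1 run-in, each patient is allocated to treatment with probability equal to the posterior probability (under Beta-Binomial updating) that the treatment arm has the higher response rate, while the control response rate drifts over time.

```python
import numpy as np

rng = np.random.default_rng(1)

def brar_trial(n_patients, p_treat, p_ctrl_fn, burn_in=40, draws=2000):
    """Simulate BRAR with Beta(1,1) priors; arm 0 = control, arm 1 = treatment."""
    s = np.ones(2)  # posterior successes + 1
    f = np.ones(2)  # posterior failures + 1
    arms = []
    for t in range(n_patients):
        if t < burn_in:
            arm = t % 2                       # fixed 1:1 run-in
        else:
            theta = rng.beta(s, f, size=(draws, 2))
            p_best = (theta[:, 1] > theta[:, 0]).mean()
            arm = int(rng.random() < p_best)  # adaptive allocation
        p = p_treat if arm == 1 else p_ctrl_fn(t)
        y = rng.random() < p                  # observe binary response
        s[arm] += y
        f[arm] += 1 - y
        arms.append(arm)
    return np.array(arms)

# Control response rate drifts upward over the trial (the time-trend).
arms = brar_trial(400, p_treat=0.5, p_ctrl_fn=lambda t: 0.3 + 0.4 * t / 400)
print("fraction allocated to treatment:", arms.mean())
```

Because the time-trend erodes the apparent treatment advantage late in the trial, allocation probabilities fixed by early data can drift away from what concurrent responses would justify, which is the distortion the abstract examines.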


2020 ◽  
Vol 68 (6) ◽  
pp. 1625-1647 ◽  
Author(s):  
Daniel Russo

This paper considers the optimal adaptive allocation of measurement effort for identifying the best among a finite set of options or designs. An experimenter sequentially chooses designs to measure and observes noisy signals of their quality with the goal of confidently identifying the best design after a small number of measurements. Just as the multiarmed bandit problem crystallizes the tradeoff between exploration and exploitation, this “pure exploration” variant crystallizes the challenge of rapidly gathering information before committing to a final decision. The paper proposes several simple Bayesian algorithms for allocating measurement effort and, by characterizing fundamental asymptotic limits on the performance of any algorithm, formalizes a sense in which these seemingly naive algorithms are the best possible.
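One of the simple Bayesian rules the paper proposes, top-two Thompson sampling, can be sketched as follows. The Gaussian reward model, flat-prior posterior, arm means, and `beta=0.5` tuning below are illustrative assumptions: each round, a Thompson draw selects a leader, and with probability `1 - beta` the draw is repeated until a different design tops it, so measurement effort keeps splitting between the top two candidates.

```python
import numpy as np

rng = np.random.default_rng(2)

def tt_thompson(means, n_rounds=3000, beta=0.5):
    """Top-two Thompson sampling with unit-variance Gaussian observations."""
    k = len(means)
    n = np.zeros(k)       # measurement counts per design
    total = np.zeros(k)   # summed observations per design
    for _ in range(n_rounds):
        # Posterior under a flat prior: Normal(total/n, 1/n);
        # unsampled designs get a very diffuse posterior.
        post_mean = np.where(n > 0, total / np.maximum(n, 1), 0.0)
        post_sd = 1.0 / np.sqrt(np.maximum(n, 1e-6))
        arm = int(np.argmax(rng.normal(post_mean, post_sd)))
        if rng.random() > beta:
            # Resample until a different design tops the draw (the challenger).
            challenger = arm
            while challenger == arm:
                challenger = int(np.argmax(rng.normal(post_mean, post_sd)))
            arm = challenger
        total[arm] += rng.normal(means[arm], 1.0)  # noisy quality signal
        n[arm] += 1
    best = int(np.argmax(np.where(n > 0, total / np.maximum(n, 1), -np.inf)))
    return best, n

best, counts = tt_thompson(np.array([0.0, 0.2, 1.0]))
print("identified best design:", best, "measurements per design:", counts)
```

The resampling step is what distinguishes this from ordinary Thompson sampling: without it, nearly all effort eventually concentrates on the apparent best design, which is good for reward but slow for confidently separating it from the runner-up.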

