utility functions
Recently Published Documents


TOTAL DOCUMENTS: 1115 (FIVE YEARS: 168)
H-INDEX: 57 (FIVE YEARS: 5)

2022 ◽  
Vol 8 (1) ◽  
pp. 235-256
Author(s):  
Holger Hopp

Second language (L2) sentence processing research studies how adult L2 learners understand sentences in real time. I review how L2 sentence processing differs from monolingual first-language (L1) processing and outline major findings and approaches. Three interacting factors appear to mandate L1–L2 differences: (a) capacity restrictions in the ability to integrate information in an L2; (b) L1–L2 differences in the weighting of cues, the timing of their application, and the efficiency of their retrieval; and (c) variation in the utility functions of predictive processing. Against this backdrop, I outline a novel paradigm of interlanguage processing, which examines bilingual features of L2 processing, such as bilingual language systems, nonselective access to all grammars, and processing to learn an L2. Interlanguage processing goes beyond the traditional framing of L2 sentence processing as an incomplete form of monolingual processing and reconnects the field with current approaches to grammar acquisition and the bilingual mental lexicon.


2022 ◽  
Vol 7 ◽  
pp. 14
Author(s):  
Paul Schneider ◽  
Ben van Hout ◽  
Marike Heisen ◽  
John Brazier ◽  
Nancy Devlin

Introduction: Standard valuation methods, such as TTO and DCE, are inefficient: they require data from hundreds if not thousands of participants to generate value sets. Here, we present the Online elicitation of Personal Utility Functions (OPUF) tool, a new type of online survey for valuing EQ-5D-5L health states using more efficient, compositional elicitation methods, which even allow estimating value sets at the individual level. The aims of this study are to report on the development of the tool and to test the feasibility of using it to obtain individual-level value sets for the EQ-5D-5L.
Methods: We applied an iterative design approach to adapt the PUF method, previously developed by Devlin et al., for use as a standalone online tool. Five rounds of qualitative interviews and one quantitative pre-pilot were conducted to get feedback on the different tasks. After each round, the tool was refined and re-evaluated. The final version was piloted in a sample of 50 participants from the UK. A demo of the EQ-5D-5L OPUF survey is available at: https://eq5d5l.me
Results: On average, participants took about seven minutes to complete the OPUF tool. Based on the responses, we were able to construct a personal EQ-5D-5L value set for each of the 50 participants. These value sets predicted a participant's choices in a discrete choice experiment with an accuracy of 80%. Overall, the results revealed that health state preferences vary considerably at the individual level. Nevertheless, we were able to estimate a group-level value set for all 50 participants with reasonable precision.
Discussion: We successfully piloted the OPUF tool and showed that it can be used to derive group-level as well as personal value sets for the EQ-5D-5L. Although the development of the online tool is still at an early stage, there are multiple potential avenues for further research.
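For readers unfamiliar with compositional valuation, the sketch below illustrates how a personal value set of the kind described above can be assembled from dimension weights and level ratings. The weights, level ratings, and anchor value are made-up example responses, and the scoring formula is a generic compositional scheme, not the published OPUF algorithm.

```python
# Illustrative sketch of a compositional (OPUF-style) personal value set.
# All numbers below are hypothetical example responses, not study data.

EQ5D_DIMENSIONS = ["mobility", "self-care", "usual activities",
                   "pain/discomfort", "anxiety/depression"]

# Hypothetical swing weights (importance of moving each dimension from its
# best to its worst level), normalised to sum to 1.
weights = {"mobility": 0.30, "self-care": 0.10, "usual activities": 0.15,
           "pain/discomfort": 0.25, "anxiety/depression": 0.20}

# Hypothetical level ratings per dimension: disutility of levels 1..5,
# scaled so level 1 (no problems) = 0 and level 5 (extreme problems) = 1.
level_ratings = {d: [0.0, 0.2, 0.45, 0.75, 1.0] for d in EQ5D_DIMENSIONS}

# Hypothetical anchor: value of the worst state 55555 on the dead = 0 scale.
value_of_pits = -0.2


def personal_value(state: str) -> float:
    """Value of an EQ-5D-5L state such as '21345' under this personal value set."""
    assert len(state) == 5
    # Weighted disutility on a 0 (best state 11111) to 1 (worst state 55555) scale.
    disutility = sum(weights[d] * level_ratings[d][int(level) - 1]
                     for d, level in zip(EQ5D_DIMENSIONS, state))
    # Rescale so that 11111 maps to 1 and 55555 maps to value_of_pits.
    return 1.0 - disutility * (1.0 - value_of_pits)


print(personal_value("11111"))            # 1.0
print(personal_value("55555"))            # -0.2
print(round(personal_value("21345"), 3))  # an intermediate state
```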


2021 ◽  
Author(s):  
Min Dai ◽  
Steven Kou ◽  
Shuaijie Qian ◽  
Xiangwei Wan

The problems of nonconcave utility maximization appear in many areas of finance and economics, such as in behavioral economics, incentive schemes, aspiration utility, and goal-reaching problems. Existing literature solves these problems using the concavification principle. We provide a framework for solving nonconcave utility maximization problems, where the concavification principle may not hold, and the utility functions can be discontinuous. We find that adding portfolio bounds can offer distinct economic insights and implications consistent with existing empirical findings. Theoretically, by introducing a new definition of viscosity solution, we show that a monotone, stable, and consistent finite difference scheme converges to the value functions of the nonconcave utility maximization problems. This paper was accepted by Agostino Capponi, finance.
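As a concrete illustration of the concavification principle the abstract refers to, the sketch below computes the concave envelope of a discretised nonconcave utility via an upper-hull scan; the goal-reaching utility with a jump is illustrative, not the paper's model. Classical approaches would then maximise expected utility under this concavified function.

```python
import numpy as np

def concave_envelope(x, u):
    """Upper concave envelope of points (x, u(x)); x must be sorted increasing."""
    hull = []  # indices of the envelope's kink points
    for i in range(len(x)):
        # Pop the last point while it lies on or below the chord joining its
        # predecessor to the new point (slopes must be strictly decreasing).
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            if (u[i2] - u[i1]) * (x[i] - x[i2]) <= (u[i] - u[i2]) * (x[i2] - x[i1]):
                hull.pop()
            else:
                break
        hull.append(i)
    # Interpolate the piecewise-linear envelope back onto the full grid.
    return np.interp(x, x[hull], u[hull])

# Illustrative nonconcave "goal-reaching" utility: a bonus once wealth exceeds 1.
x = np.linspace(0.0, 2.0, 401)
u = np.sqrt(x) + (x >= 1.0)          # discontinuous jump at the goal
u_hat = concave_envelope(x, u)       # concavified utility used by classical methods
```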


2021 ◽  
Vol 6 (1) ◽  
pp. 101-112
Author(s):  
Ram Orzach ◽  
Miron Stano

This paper highlights the limitations and applicability of results developed by Chao & Nahata (2015) for nonlinear pricing. Although Chao and Nahata appear to provide necessary and sufficient conditions for general utility functions, we show that one of their results leads only to a restatement of two constraints, and another result may not be valid when consumers can freely dispose of the good. Their model allows for the possibility that higher quantities will have a lower price than smaller quantities. We provide conditions under free disposal that preclude this anomaly. Our analysis suggests that further research on violations of the single-crossing condition should be encouraged.
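The quantity-price anomaly and the role of free disposal can be made concrete with a small example; the menu below is illustrative and not taken from Chao & Nahata (2015). With free disposal, a buyer who wants q units can purchase any larger bundle and discard the surplus, so the price effectively faced for q units is the cheapest price of any bundle of at least q units and cannot decrease in quantity.

```python
# Illustrative sketch: posted menu where the 2-unit bundle is cheaper than the
# 1-unit bundle (the anomaly). Menu values are made up for illustration.

menu = {1: 10.0, 2: 8.0, 3: 12.0}   # quantity -> posted total price

def effective_prices(menu):
    """Price actually paid for each quantity when buyers can discard surplus units."""
    quantities = sorted(menu)
    return {q: min(menu[r] for r in quantities if r >= q) for q in quantities}

print(effective_prices(menu))
# {1: 8.0, 2: 8.0, 3: 12.0}: the anomaly at q=1 disappears under free disposal
```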


2021 ◽  
Author(s):  
Arindam Bhakta

Humans and many animals can selectively sample important parts of their visual surroundings to carry out daily activities such as foraging or finding prey or mates. Selective attention allows them to use the brain's limited resources efficiently by deploying the sensory apparatus to collect data believed to be pertinent to the organism's current task in hand.

Robots and other computational agents operating in dynamic environments are similarly exposed to a wide variety of stimuli, which they must process with limited sensory and computational resources. Developing computational models of visual attention has long been of interest, as such models enable artificial systems to select necessary information from complex and cluttered visual environments, reducing the data-processing burden.

Biologically inspired computational saliency models have previously been used to selectively sample a visual scene, but they have limited capacity to deal with dynamic environments and no capacity to reason about uncertainty when planning their scene-sampling strategy. These models typically treat contrast in colour, shape, or orientation as salient and sample locations of a visual scene in descending order of salience. After each observation, the area around the sampled location is blocked by an inhibition-of-return mechanism to keep it from being revisited.

This thesis generalises the traditional saliency model by using an adaptive Kalman filter estimator to model an agent's understanding of the world and a utility-function-based approach to describe what the agent cares about in the visual scene. This allows the agent to adopt a richer set of perceptual strategies than is possible with the classical winner-take-all mechanism of the traditional saliency model. In contrast with the traditional approach, inhibition of return is achieved without implementing an extra mechanism on top of the underlying structure.

The thesis demonstrates five utility functions that encapsulate the perceptual state valued by the agent. Each utility function produces a distinct perceptual behaviour matched to particular scenarios. The resulting visual attention distributions of the five proposed utility functions are demonstrated on five real-life videos. In most of the experiments, pixel intensity is used as the source of the saliency map. As the proposed approach is independent of the saliency map used, it can be combined with other, more complex saliency-map-building models. Moreover, the underlying structure of the model is sufficiently general and flexible that it can serve as the basis of a new range of more sophisticated gaze control systems.
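A minimal sketch of this kind of mechanism is given below: one scalar Kalman filter per image cell tracks pixel intensity, and at each step the agent fixates the cell whose utility is currently highest. The grid size, noise variances, and the uncertainty-reduction utility are illustrative choices, not the thesis's exact model; the point is that inhibition of return emerges because observing a cell lowers its variance and hence its utility.

```python
import numpy as np

GRID = 16                       # 16x16 grid of trackable locations (illustrative)
Q, R = 0.01, 0.05               # process and measurement noise variances (illustrative)

mean = np.full((GRID, GRID), 0.5)   # per-cell intensity estimates
var = np.full((GRID, GRID), 1.0)    # per-cell estimate variances

def step(frame, mean, var):
    """One predict/observe cycle; returns the fixated cell and updated beliefs."""
    var = var + Q                          # predict: uncertainty grows everywhere
    utility = var                          # example utility: reduce uncertainty
    # (other utility functions, e.g. expected intensity change, give other behaviours)
    iy, ix = np.unravel_index(np.argmax(utility), utility.shape)
    # Update only the fixated cell; inhibition of return falls out naturally,
    # because observing a cell shrinks its variance and hence its utility.
    k = var[iy, ix] / (var[iy, ix] + R)    # Kalman gain
    mean[iy, ix] += k * (frame[iy, ix] - mean[iy, ix])
    var[iy, ix] *= (1.0 - k)
    return (iy, ix), mean, var

rng = np.random.default_rng(0)
for _ in range(100):
    frame = rng.random((GRID, GRID))       # stand-in for a video frame
    fixation, mean, var = step(frame, mean, var)
```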


2021 ◽  
Vol. XIV, Issue 1-2 (Symposium: How economists are...) ◽
Author(s):  
Tiago Cardão-Pito

Contemporary mainstream economics cannot be seen as disconnected from philosophical concerns. On the contrary, it should be understood as a defence of a specific philosophy, namely crude quantitative hedonism, in which money would measure pleasure and pain. Disguised within a great mathematical apparatus of utility functions, supply, and demand lies a specific hedonist philosophy that is taught every year to thousands of economics and business students around the world. This hedonist philosophy is much less sophisticated than that of ancient hedonist philosophers such as Epicurus or Lucretius. Furthermore, it does not solve any of the systematic difficulties regularly faced by hedonist philosophy. However, the argument that economics is detached from philosophy works as a rhetorical artifice to protect its dominant underlying philosophy: philosophical disputes would have to be addressed within the biased mathematical apparatus of quantitative hedonism. Economists and business students must learn to identify the underlying philosophy in mainstream economics and alternative philosophical systems.


2021 ◽  
Author(s):  
Marco Rogna ◽  
Carla Vogt

Impact assessment models are a tool widely used to investigate the benefits of reducing polluting emissions and limiting the anthropogenic mean temperature rise. However, they have often been criticised for suggesting low levels of abatement. Countries and regions, which are generally the actors in these models, are usually depicted as having standard concave utility functions in consumption. This, however, disregards a potentially important aspect of environmental negotiations, namely their distributive implications. The present paper tries to fill this gap by assuming that countries/regions have Fehr and Schmidt (1999) (F&S) utility functions, which are specifically tailored to include inequality aversion. We thereby propose a new method for the empirical estimation of the inequality aversion parameters by establishing a link between the well-known concept of the elasticity of marginal utility of consumption and the F&S utility functions, accounting for the heterogeneity of countries/regions. Adopting the RICE model, we compare its standard results with those obtained by introducing F&S utility functions, showing that, under optimal cooperation, the temperature rise is significantly lower in the latter scenario. In particular, in the last year of the simulation, the optimal temperature rise is 2.1 °C. Furthermore, we show that stable coalitions are easier to achieve when F&S preferences are assumed, even if the advantageous inequality aversion parameter (altruism) is assumed to have a very low value. However, self-sustaining coalitions are far from reaching the environmental target of limiting the mean temperature rise to below 2 °C despite the adoption of F&S utility functions.
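For reference, the Fehr and Schmidt (1999) utility augments own consumption with penalties for disadvantageous inequality (weight α) and advantageous inequality (weight β, the "altruism" parameter). The sketch below shows the standard n-player form; the payoff and parameter values are illustrative, not the calibrations estimated in the paper.

```python
# Standard n-player Fehr-Schmidt (1999) utility:
#   U_i = x_i - alpha/(n-1) * sum_j max(x_j - x_i, 0)
#             - beta/(n-1)  * sum_j max(x_i - x_j, 0)
# alpha penalises disadvantageous inequality, beta advantageous inequality.
# Parameter and payoff values below are illustrative only.

def fehr_schmidt_utility(payoffs, i, alpha=1.0, beta=0.25):
    n = len(payoffs)
    x_i = payoffs[i]
    envy = sum(max(x_j - x_i, 0.0) for x_j in payoffs) / (n - 1)
    guilt = sum(max(x_i - x_j, 0.0) for x_j in payoffs) / (n - 1)
    return x_i - alpha * envy - beta * guilt

regions = [10.0, 6.0, 2.0]   # hypothetical per-region consumption levels
print([round(fehr_schmidt_utility(regions, i), 2) for i in range(3)])
# [8.5, 3.5, -4.0]: inequality lowers utility for rich and poor regions alike
```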

