On Risk Assessment and a Value Function under Extreme Uncertainty Based on the Concept of Regular Risk Structure

1997 ◽  
Vol 33 (8) ◽  
pp. 819-824
Author(s):  
Hiroyuki TAMURA ◽  
Satoru TAKAHASHI ◽  
Itsuo HATONO ◽  
Motohide UMANO
foresight ◽  
2018 ◽  
Vol 20 (5) ◽  
pp. 554-570
Author(s):  
Boyan Christov Ivantchev

Purpose: The purpose of this study is to research the latest quantitative and qualitative transformations of money and its interaction with the market economy and societies, in terms of their influence on the inner nature of money and its transformation from a simple tool into an aim per se, i.e. postmoney. The transformation of the perception of the intrinsic value and “soul” of money into postmoney, influenced by rising longevity and the widespread expectation that human life can be scientifically prolonged, will be discussed. This transformation will be confirmed by analysing the results of a nationally representative sociological survey (panel study with sample size n = 1,000).
Design/methodology/approach: The author uses the following philosophical methodological approaches: comparative-constructive, phenomenological, cognitive and deconstructive analysis.
Findings: The objective and qualitative reasons offered by postmoney theory (PMT) for the transformation of money into postmoney relate to the being of temporality, as well as to technologization and the sixth factor of production, scientific exponentiality and mental changes in the human being. The current postmoney survey gives a strong basis to believe that the perception of an intrinsic value of postmoney changes the shape of the value function, from logarithmic to linear or even stochastic. This is the reason to believe that increasing the quantity of postmoney will lead to a qualitative transformation and a psychological increase in postmoney sensitivity.
Research limitations/implications: The author intends to expand the postmoney survey to the international level so as to confirm the local findings.
Practical implications: The postmoney survey might be used as a powerful tool in creating and legalizing non-monistic money based on blockchain technologies, and in philosophical and socio-economic research on the postmoney issue.
Social implications: The future of money is of great importance for the exponentiality of the socio-economic environment and societies. The social impact of money will inevitably rise in the domain of postmoney perception.
Originality/value: The author of this paper coined the notion of postmoney and is now expanding the research by developing PMT. To the best of the author's knowledge, the shape of the value function curve has not previously been questioned, and the author believes this might help to better understand the money phenomenon.
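To make the claimed change of shape concrete, the following minimal sketch contrasts a logarithmic value function, which exhibits diminishing sensitivity to additional money, with a linear one, which does not; the functional forms and parameters are illustrative assumptions, not estimates from the survey.

```python
import numpy as np

# Illustrative value functions (assumed functional forms, not survey estimates):
# a classical logarithmic value function with diminishing sensitivity, and a
# linear "postmoney" value function with constant sensitivity.
def v_log(x, scale=1.0):
    return np.log1p(x / scale)

def v_linear(x, alpha=0.001):
    return alpha * x

for x in (1_000, 10_000, 100_000, 1_000_000):
    dv_log = v_log(x + 1) - v_log(x)        # marginal value shrinks as x grows
    dv_lin = v_linear(x + 1) - v_linear(x)  # marginal value stays constant
    print(f"x={x:>9,}  marginal log value={dv_log:.2e}  marginal linear value={dv_lin:.2e}")
```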


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Masimba Aspinas Mutakaya ◽  
Eriyoti Chikodza ◽  
Edward T. Chiyaka

This paper considers an exchange rate problem in Lévy markets, where the Central Bank has to intervene. We assume that, in the absence of control, the exchange rate evolves according to a Brownian motion with a jump component. The Central Bank is allowed to intervene in order to keep the exchange rate as close as possible to a prespecified target value. The interventions by the Central Bank are associated with costs. We formulate the situation as an impulse control problem, in which the objective of the bank is to minimize the intervention costs. In particular, the paper extends the model of Huang (2009) to incorporate a jump component. We formulate and prove a verification theorem for the optimal impulse control. We then propose an impulse control, construct a value function, and verify that they solve the quasi-variational inequalities. Our results suggest that if the expected number of jumps is high, the Central Bank will intervene more frequently and with larger intervention amounts, and hence the intervention costs will be high.
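A minimal simulation sketch of the kind of setting described above: a jump-diffusion exchange rate with a simple band-type impulse policy, under which the bank resets the rate to the target whenever it leaves a band, accumulating fixed plus proportional intervention costs. The dynamics, parameters and policy below are illustrative assumptions, not the authors' calibrated model or the optimal control.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the paper's calibration).
T, n = 1.0, 1_000                    # horizon and number of time steps
dt = T / n
mu, sigma = 0.02, 0.10               # drift and volatility of the diffusion part
lam, jump_sd = 5.0, 0.03             # jump intensity and jump-size std dev
target, band = 1.00, 0.05            # target rate and intervention band
fixed_cost, prop_cost = 0.002, 0.5   # fixed + proportional intervention costs

x = target                           # exchange rate, started at the target
total_cost = 0.0
for _ in range(n):
    # Jump-diffusion step: Brownian increment plus compound Poisson jumps.
    dW = rng.normal(0.0, np.sqrt(dt))
    jumps = rng.normal(0.0, jump_sd, rng.poisson(lam * dt)).sum()
    x += mu * x * dt + sigma * x * dW + jumps
    # Band-type impulse control: intervene only when the rate leaves the band.
    if abs(x - target) > band:
        total_cost += fixed_cost + prop_cost * abs(x - target)
        x = target

print(f"simulated intervention cost: {total_cost:.4f}")
```

With a higher jump intensity `lam`, the rate leaves the band more often, so interventions and costs increase, which is consistent with the qualitative conclusion of the abstract.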


2010 ◽  
Vol 3 (1) ◽  
pp. 35-50 ◽  
Author(s):  
Bibiana Alarcon ◽  
Antonio Aguado ◽  
Resmundo Manga ◽  
Alejandro Josa

2017 ◽  
Vol 114 (11) ◽  
pp. 2860-2864 ◽  
Author(s):  
Maria Chikina ◽  
Alan Frieze ◽  
Wesley Pegden

We present a statistical test to detect that a presented state of a reversible Markov chain was not chosen from a stationary distribution. In particular, given a value function for the states of the Markov chain, we would like to show rigorously that the presented state is an outlier with respect to the values, by establishing a p value under the null hypothesis that it was chosen from a stationary distribution of the chain. A simple heuristic used in practice is to sample ranks of states from long random trajectories on the Markov chain and compare these with the rank of the presented state; if the presented state is a 0.1% outlier compared with the sampled ranks (its rank is in the bottom 0.1% of sampled ranks), then this observation should correspond to a p value of 0.001. This significance is not rigorous, however, without good bounds on the mixing time of the Markov chain. Our test is the following: given the presented state in the Markov chain, take a random walk from the presented state for any number of steps. We prove that observing that the presented state is an ε-outlier on the walk is significant at p = √(2ε) under the null hypothesis that the state was chosen from a stationary distribution. We assume nothing about the Markov chain beyond reversibility and show that significance at p ≈ √ε is best possible in general. We illustrate the use of our test with a potential application to the rigorous detection of gerrymandering in Congressional districting.
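A minimal sketch of the test on a toy example: a lazy random walk on a path graph (reversible, with uniform stationary distribution), where a state's value is taken to be its index. Starting from the presented state, the walk is run for some number of steps, the observed outlier fraction ε is computed, and the bound p = √(2ε) is reported. The chain, value function and walk length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy reversible chain: lazy random walk on the path {0, 1, ..., N-1}.
# The value of a state is simply its index, so low-index states are "extreme".
N = 1_000

def step(state):
    if rng.random() < 0.5:                   # laziness
        return state
    move = rng.choice([-1, 1])
    return min(max(state + move, 0), N - 1)  # reflect at the boundaries

def outlier_test(presented_state, k, value=lambda s: s):
    """Walk k steps from the presented state; return (epsilon, significance bound)."""
    states = [presented_state]
    s = presented_state
    for _ in range(k):
        s = step(s)
        states.append(s)
    vals = np.array([value(t) for t in states])
    # epsilon = fraction of visited states whose value is at least as extreme.
    eps = np.mean(vals <= value(presented_state))
    return eps, np.sqrt(2 * eps)             # p = sqrt(2 * epsilon)

eps, p = outlier_test(presented_state=3, k=100_000)
print(f"epsilon = {eps:.4f}, significance bound p = sqrt(2*eps) = {p:.4f}")
```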


2018 ◽  
Vol 21 (1) ◽  
Author(s):  
Helena Holmström Olsson ◽  
Jan Bosch

Today, connected software-intensive products permeate virtually every aspect of our lives, and the amount of customer and product data collected by companies across domains is exploding. By revealing what products we use, when and how we use them, and how the products perform, this data has the potential to help companies optimize existing products, prioritize among features and evaluate new innovations. However, despite advanced data collection and analysis techniques, companies struggle to effectively extract value from the data they collect and experience difficulties in defining what values to optimize for. As a result, the impact of data is low and companies run the risk of sub-optimization due to misalignment of the values they optimize for. In this paper, based on multi-case study research in the embedded systems and online domains, we explore the data collection and analysis practices of companies in these two domains. In particular, we look into how the value that is delivered to customers can be expressed as a value function that combines the different factors that are of importance to customers. By expressing customer value as a value function, companies can increase their awareness of key value factors and establish agreement on what to optimize for. Based on our findings, we see that companies in the embedded systems domain suffer from vague and confusing value functions, while companies in the online domain use simple and straightforward value functions to inform development. Ideally, and as proposed in this paper, companies should strive for a comprehensive value function that includes all relevant factors without being vague or overly simple, as was the case in the companies we studied. To achieve this, and to address the difficulties many companies experience, we present a systematic approach to value modelling that provides detailed guidance on how to quantify feature value in such a way that it can be systematically validated over time, helping to avoid sub-optimization that would harm the company in the long run.
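As a rough illustration of what an explicit value function could look like, the sketch below expresses feature value as a weighted sum of normalised value factors; the factors, weights and normalisation bounds are hypothetical and are not taken from the studied companies or from the paper's model.

```python
from dataclasses import dataclass

@dataclass
class ValueFactor:
    name: str
    weight: float      # agreed relative importance; weights sum to 1
    measured: float    # measured level for the feature
    worst: float       # worst acceptable level, used for normalisation
    best: float        # best achievable level

def normalise(f: ValueFactor) -> float:
    """Map the measured level onto [0, 1] between worst and best."""
    return max(0.0, min(1.0, (f.measured - f.worst) / (f.best - f.worst)))

def feature_value(factors: list[ValueFactor]) -> float:
    """Weighted sum of normalised factors: one explicit, agreed value function."""
    return sum(f.weight * normalise(f) for f in factors)

# Hypothetical factors for a single feature.
factors = [
    ValueFactor("conversion rate", 0.4, measured=0.032, worst=0.02, best=0.05),
    ValueFactor("task completion time (s)", 0.3, measured=40, worst=120, best=20),
    ValueFactor("support tickets per week", 0.3, measured=12, worst=50, best=0),
]
print(f"feature value: {feature_value(factors):.2f}")
```

Because the weights and bounds are written down explicitly, the same function can be re-evaluated as new measurements arrive, which is the kind of systematic validation over time the paper argues for.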


Author(s):  
Michael Robin Mitchley

Reinforcement learning is a machine learning framework whereby an agent learns to perform a task by maximising its total reward received for selecting actions in each state. The policy mapping states to actions that the agent learns is either represented explicitly, or implicitly through a value function. It is common in reinforcement learning to discretise a continuous state space using tile coding or binary features. We prove an upper bound on the performance of discretisation for direct policy representation or value function approximation.
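A minimal sketch of the kind of discretisation referred to here: tile coding maps a continuous state to a small set of active binary features, over which a linear value function can be defined. The tiling layout, offsets and dimensions are illustrative assumptions, not the construction analysed in the thesis.

```python
import numpy as np

def tile_indices(state, low, high, tiles_per_dim=8, num_tilings=4):
    """Return one active tile index per tiling for a continuous state."""
    state = np.asarray(state, dtype=float)
    low, high = np.asarray(low, dtype=float), np.asarray(high, dtype=float)
    scaled = (state - low) / (high - low)           # normalise each dimension to [0, 1]
    active = []
    for t in range(num_tilings):
        offset = t / (num_tilings * tiles_per_dim)  # shift each tiling slightly
        coords = np.floor((scaled + offset) * tiles_per_dim).astype(int)
        coords = np.clip(coords, 0, tiles_per_dim - 1)
        # Flatten per-dimension tile coordinates into a single index.
        idx = 0
        for c in coords:
            idx = idx * tiles_per_dim + c
        active.append(t * tiles_per_dim ** len(coords) + idx)
    return active

def value(state, weights, low, high):
    """Linear value-function approximation over the binary tile features."""
    return sum(weights[i] for i in tile_indices(state, low, high))

# Hypothetical 2-D continuous state space.
low, high = [0.0, -1.0], [10.0, 1.0]
weights = np.zeros(4 * 8 ** 2)                      # one weight per tile
print(tile_indices([2.5, 0.3], low, high))
print(value([2.5, 0.3], weights, low, high))
```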


2021 ◽  
Vol 59 (1) ◽  
pp. 77-107

Political risk concerns the profits and investment plans of international business (MNCs, FDI). The Social Dimensions of Political Risk (SDPR) are an uncharted territory of political risk. Consequently, on the basis of an analysis of theories of risk, political risk, systems, values and globalization, the concept of SDPR is generated. This concept rests on the following basic assumptions: 1) society is a system whose elements are subsystems; 2) the societal subsystem is at the core of society; 3) the relation between the societal subsystem and society is that of element to system; 4) political risk is systemic; 5) values are axial to the system, and their carrier is the societal subsystem; 6) laws are an artificial construct that has only a value function but is not a value; 7) the incommensurability between values and this artificial construct generates SDPRs that are relevant to the risk for society. A formal theoretical and analytical model of SDPR is introduced, together with a value triangle and a conceptual index of SDPR based on it. Key conclusions pertain to the following: the need to reconsider the paradigm of democracy; greater participation of the societal subsystem; and the need for the subsystems' mutual restraint based on the principle of restraint of authorities.


2020 ◽  
Author(s):  
Vicki Osborne ◽  
Miranda Davies ◽  
Sandeep Dhanda ◽  
Debabrata Roy ◽  
Samantha Lane ◽  
...  

Objectives: Given the current pandemic, there is an urgent need to identify effective, safe treatments for COVID-19 (coronavirus disease). A systematic benefit-risk assessment was designed and conducted to strengthen the ongoing monitoring of the benefit-risk balance for chloroquine (CQ) and hydroxychloroquine (HCQ) in COVID-19 treatment.
Methods: The overall benefit-risk of the use of chloroquine or hydroxychloroquine as a treatment for COVID-19, compared to standard of care, placebo or other treatments, was assessed using the Benefit-Risk Action Team (BRAT) framework. We searched PubMed and Google Scholar to identify literature reporting clinical outcomes in patients taking chloroquine or hydroxychloroquine for COVID-19. A value tree was constructed and key benefits and risks were ranked by two clinicians in order of considered importance.
Results: Several potential key benefits and risks were identified for the use of hydroxychloroquine or chloroquine in COVID-19 treatment. Currently available results did not show an improvement in mortality risk; the Cox proportional hazard ratio (HR) for death between patients who received HCQ alone vs. neither hydroxychloroquine nor azithromycin was 1.08 (95% CI 0.63-1.85). A further study compared the incidence of intubation or death (composite outcome) in a time-to-event analysis between patients who received HCQ and those who did not (adjusted Cox proportional HR 1.00, 95% CI 0.76-1.32). The risk of cardiac arrest, abnormal electrocardiogram (ECG) and QT prolongation was greater among patients taking HCQ (with or without azithromycin) compared to standard of care in the same study.
Conclusions: Overall, based on the available data, there does not appear to be a favourable benefit-risk profile for chloroquine or hydroxychloroquine compared to standard of care in the treatment of severe COVID-19. As further data on these benefits and risks become available from clinical trials and real-world use, they can be incorporated into the framework for an ongoing benefit-risk assessment.
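As a rough structural illustration of a value tree, the sketch below arranges benefits and risks mentioned in the abstract as nodes under a benefit-risk root; the tree layout and the importance ranks are hypothetical and are not those assigned by the clinicians in the assessment.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    rank: int | None = None          # clinician-assigned importance rank (hypothetical here)
    children: list["Node"] = field(default_factory=list)

# Purely illustrative value tree; the ranks are invented for the example.
value_tree = Node("Benefit-risk of CQ/HCQ in COVID-19", children=[
    Node("Benefits", children=[
        Node("Mortality", rank=1),
        Node("Need for intubation", rank=2),
    ]),
    Node("Risks", children=[
        Node("QT prolongation", rank=1),
        Node("Cardiac arrest", rank=2),
    ]),
])

def print_tree(node: Node, depth: int = 0) -> None:
    label = f" (rank {node.rank})" if node.rank is not None else ""
    print("  " * depth + node.name + label)
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(value_tree)
```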

