Parametric Uncertainty and Fat Tails

Author(s):  
Christian Gollier

This chapter shows how the probability distribution of economic growth is subject to parametric uncertainty. Data on the dynamics of economic growth are limited, and the absence of a sufficiently large data set to estimate the economy's long-term growth process implies that its parameters are uncertain and subject to learning in the future. The problem is particularly acute when the parameters are unstable, or when the dynamic process entails low-probability extreme events: the rarer the event, the less precise the estimate of its likelihood. This builds a bridge between the problem of parametric uncertainty and that of extreme events.

2020 ◽  
Vol 72 (1) ◽  
Author(s):  
Ryuho Kataoka

Abstract Statistical distributions of magnetic storms, sudden commencements (SCs), and substorms are investigated to identify the possible amplitudes of one-in-100-year and one-in-1000-year events from a limited data set spanning less than 100 years. The lists of magnetic storms and SCs are provided by Kakioka Magnetic Observatory, while the lists of substorms are obtained from SuperMAG. It is found that the majority of events essentially follow a log-normal distribution, as expected for the random output of a complex system. However, it is uncertain whether large-amplitude events follow the same log-normal distributions; they appear instead to follow power-law distributions. Based on these statistical distributions, the probable amplitudes of the 100-year (1000-year) events are estimated for magnetic storms, SCs, and substorms as approximately 750 nT (1100 nT), 230 nT (450 nT), and 5000 nT (6200 nT), respectively. The possible origin of these statistical distributions is also discussed with reference to other space weather phenomena such as solar flares, coronal mass ejections, and solar energetic particles.
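As a toy illustration of this kind of return-level estimate, the sketch below fits a log-normal distribution to synthetic storm amplitudes and extrapolates a 100-year value. All numbers (the distribution parameters, the 50-year record length) are made-up assumptions for illustration, not the paper's data or method details.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic storm amplitudes (nT): a log-normal bulk standing in for an
# observatory catalogue; the parameters here are illustrative only.
amplitudes = rng.lognormal(mean=4.5, sigma=0.6, size=500)

# Fit a log-normal to the data (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(amplitudes, floc=0)

# With N events observed over 50 years, the 100-year event is the amplitude
# exceeded with per-event probability (50 / 100) / N.
years_of_data, n_events = 50, len(amplitudes)
p_exceed_100yr = (years_of_data / 100) / n_events
amp_100yr = stats.lognorm.ppf(1 - p_exceed_100yr, shape, loc=loc, scale=scale)

print(f"estimated 100-year amplitude: {amp_100yr:.0f} nT")
```

As the abstract notes, the largest events may instead follow a power law, in which case a log-normal fit to the bulk would underestimate the extreme tail.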


2016 ◽  
Vol 42 (4) ◽  
pp. 637-660 ◽  
Author(s):  
Germán Kruszewski ◽  
Denis Paperno ◽  
Raffaella Bernardi ◽  
Marco Baroni

Logical negation is a challenge for distributional semantics, because predicates and their negations tend to occur in very similar contexts, and consequently their distributional vectors are very similar. Indeed, it is not even clear what properties a “negated” distributional vector should possess. However, when linguistic negation is considered in its actual discourse usage, it often performs a role that is quite different from straightforward logical negation. If someone states, in the middle of a conversation, that “This is not a dog,” the negation strongly suggests a restricted set of alternative predicates that might hold true of the object being talked about. In particular, other canids and middle-sized mammals are plausible alternatives, birds are less likely, skyscrapers and other large buildings virtually impossible. Conversational negation acts like a graded similarity function, of the sort that distributional semantics might be good at capturing. In this article, we introduce a large data set of alternative plausibility ratings for conversationally negated nominal predicates, and we show that simple similarity in distributional semantic space provides an excellent fit to subject data. On the one hand, this fills a gap in the literature on conversational negation, proposing distributional semantics as the right tool to make explicit predictions about potential alternatives of negated predicates. On the other hand, the results suggest that negation, when addressed from a broader pragmatic perspective, far from being a nuisance, is an ideal application domain for distributional semantic methods.
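The "graded similarity" idea can be sketched with hand-made toy vectors (a real experiment would use vectors trained on a large corpus): candidate alternatives for "This is not a dog" are ranked by cosine similarity to "dog".

```python
import numpy as np

# Toy distributional vectors, hand-made and purely illustrative.
vectors = {
    "dog":        np.array([0.9, 0.8, 0.1, 0.0]),
    "wolf":       np.array([0.8, 0.7, 0.2, 0.0]),
    "bird":       np.array([0.4, 0.3, 0.6, 0.1]),
    "skyscraper": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank candidate alternatives for the negated predicate by similarity.
ranked = sorted(
    (w for w in vectors if w != "dog"),
    key=lambda w: cosine(vectors["dog"], vectors[w]),
    reverse=True,
)
print(ranked)  # → ['wolf', 'bird', 'skyscraper']
```

The ordering mirrors the intuition in the abstract: other canids are the most plausible alternatives, birds less so, and large buildings virtually impossible.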


Author(s):  
Christian Gollier

This chapter aims to provide a unified theoretical foundation for the term structure of discount rates. To do this, the chapter develops a benchmark model based on two assumptions: individual preferences toward risk, and the nature of the uncertainty over economic growth. Previously, it was shown that constant relative risk aversion, combined with a random walk for the growth of log consumption, yields a flat term structure of efficient discount rates. In this chapter, these two assumptions are relaxed by using a stochastic dominance approach. Stochastic models of economic growth with mean-reversion, Markov switches, and parametric uncertainty all exhibit some form of positive statistical dependence of successive growth rates. Because this tends to magnify long-term risk, it is the driving force behind the decreasing shape of the term structure.
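A minimal numerical sketch of the parametric-uncertainty mechanism, assuming CRRA utility and log consumption following a random walk with an uncertain trend (all parameter values below are illustrative assumptions, not the chapter's): the efficient discount rate r_t = delta - (1/t) ln E[(c_t/c_0)^(-gamma)] declines with maturity toward the rate implied by the lowest-growth scenario.

```python
import numpy as np

# Illustrative parameters: CRRA coefficient gamma, pure time preference
# delta, annual volatility sigma, and a trend mu that is either 1% or 3%
# per year with equal probability.
gamma, delta, sigma = 2.0, 0.0, 0.02
mus, probs = np.array([0.01, 0.03]), np.array([0.5, 0.5])

def discount_rate(t):
    """Efficient rate r_t = delta - (1/t) ln E[(c_t/c_0)^(-gamma)] when
    log consumption is a random walk with an uncertain trend mu."""
    m = np.exp((-gamma * mus + 0.5 * gamma**2 * sigma**2) * t)
    return delta - np.log(probs @ m) / t

for t in (1, 10, 100, 500):
    print(f"r_{t} = {discount_rate(t):.4%}")
```

At short maturities the rate is close to the average-growth value; at long maturities the low-growth scenario dominates the expectation, producing the decreasing term structure the chapter describes.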


2020 ◽  
pp. 004728752095820
Author(s):  
Andrea Guizzardi ◽  
Marcello M. Mariani

This study introduces a new method, named the Dynamic Destination Satisfaction Method (DDSME), to model tourists' satisfaction with a destination (and its attributes), breaking it down into an individual-level component (linked to specific individual tourists' perceptions) and a system-level, time-related component (common to all tourists). Moreover, this work develops an "entropy/trend accuracy" matrix that destination managers can use to understand the extent to which managing a specific attribute has increased tourists' satisfaction with the destination over multiyear time spans. We test the method on a large data set covering the period 1997-2015 and including almost 0.8 million observations. By doing so, we analyze tourists' satisfaction with tourism-related sectors and attributes of Italy as an inbound tourism destination, and we use the matrix to map destination attributes over time. The findings indicate that courtesy, art, and food are strategic attributes for enhancing satisfaction in the long term.


2012 ◽  
Vol 03 (02) ◽  
pp. 1250009
Author(s):  
CHARLES KENNY

Robert Solow's model of "exogenous" economic growth driven by the global diffusion of technology is out of fashion because it is contradicted by empirical evidence of income divergence. Today, economic growth is considered "endogenous", and institutions are seen as central to the long-term growth process. At the same time, non-income measures of quality of life show strong patterns of global growth and convergence. This suggests that institutions may be less important for achieving progress in broader quality of life, while a larger and more important role belongs to the factors that drive exogenous change, including the flow of technology and ideas.


2014 ◽  
Vol 931-932 ◽  
pp. 1353-1359
Author(s):  
Sutheetutt Vacharaskunee ◽  
Sarun Intakosum

Processing of a large data set, known today as big data processing, remains a problem without a well-defined solution. The data can be both structured and unstructured. For the structured part, eXtensible Markup Language (XML) is a major tool that freely allows document owners to describe and organize their data using their own markup tags. One major problem behind this freedom, however, lies in the retrieval process. The same or similar information described using different tags or different structures may not be retrieved if the query statement contains keywords different from those used in the markup tags. The best way to solve this problem is to specify a standard set of markup tags for each problem domain. Creating such a standard set manually requires a great deal of work and is time-consuming. In addition, it may be hard to define terms that are acceptable to all people. This research proposes a model for a new technique, XML Tag Recommendation (XTR), that aims to solve this problem. The technique applies the idea of Case-Based Reasoning (CBR) by collecting the most frequently used tags in each domain as a case. These tags come from the collection of related words in WordNet; WordCount, a web site that reports word frequencies, is used to choose the most common variant. The input (problem) to the XTR system is an XML document containing the tags specified by the document owner. The solution is a set of recommended tags (the most frequently used ones) for the problem domain of the document. Document owners are free to change or keep the tags in their documents, and can provide feedback to the XTR system.
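A minimal sketch of the recommendation idea, with a hand-coded synonym map standing in for WordNet and made-up usage counts standing in for the case base (none of this reflects the paper's actual implementation):

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical case base for one domain: how often each tag has been seen
# in prior documents, plus a synonym map from tag to canonical form.
synonyms = {"cost": "price", "fee": "price", "price": "price"}
tag_usage = Counter({"price": 120, "cost": 30, "fee": 5})

def recommend_tags(xml_text):
    """Return {original_tag: recommended_tag} for every tag that has a
    more frequently used synonym in the case base."""
    root = ET.fromstring(xml_text)
    out = {}
    for el in root.iter():
        canon = synonyms.get(el.tag)
        if canon and canon != el.tag and tag_usage[canon] > tag_usage[el.tag]:
            out[el.tag] = canon
    return out

doc = "<item><cost>9.99</cost><fee>1.00</fee></item>"
print(recommend_tags(doc))  # → {'cost': 'price', 'fee': 'price'}
```

As in the XTR proposal, the owner stays in control: the system only suggests replacements, and accepted or rejected suggestions could feed back into the usage counts.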


2021 ◽  
Vol 25 (4) ◽  
pp. 2223-2237
Author(s):  
William Rust ◽  
Mark Cuthbert ◽  
John Bloomfield ◽  
Ron Corstanje ◽  
Nicholas Howden ◽  
...  

Abstract. An understanding of multi-annual behaviour in streamflow allows for better estimation of the risks associated with hydrological extremes. This can enable improved preparedness for streamflow-dependent services, such as freshwater ecology, drinking water supply and agriculture. Recently, efforts have focused on detecting relationships between long-term hydrological behaviour and oscillatory climate systems (such as the North Atlantic Oscillation – NAO). For instance, the approximately 7-year periodicity of the NAO has been detected in groundwater-level records in the North Atlantic region, providing potential improvements to preparedness for future water resource extremes owing to its repetitive, periodic nature. However, the extent to which these 7-year, NAO-like signals are propagated to streamflow, and the catchment processes that modulate this propagation, are currently unknown. Here, we show statistically significant evidence that these 7-year periodicities are present in streamflow, by applying multi-resolution analysis to a large data set of streamflow and associated catchment rainfall across the UK. Our results provide new evidence for spatial patterns of NAO periodicities in UK rainfall, with the areas of greatest NAO signal found in southwest England, south Wales, Northern Ireland and central Scotland, and show that NAO-like periodicities account for a greater proportion of streamflow variability in these areas. Furthermore, we find that catchments with greater subsurface pathway contribution, as characterised by the baseflow index (BFI), generally show increased NAO-like signal strength, and that subsurface response times (as characterised by groundwater response time – GRT) of between 4 and 8 years show a greater signal presence.
Our results provide a foundation of understanding for the screening and use of streamflow teleconnections for improving the practice and policy of long-term streamflow resource management.
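The detection idea can be sketched on synthetic data: a simple periodogram (rather than the study's multi-resolution wavelet analysis) recovers an embedded 7-year cycle from a noisy monthly series. The record length, cycle amplitude and noise level below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly streamflow anomaly: 63 years with an embedded 7-year
# cycle plus noise. Purely illustrative -- the study applies multi-resolution
# (wavelet) analysis to observed UK streamflow and rainfall records.
n_months = 63 * 12
t = np.arange(n_months)
signal = np.sin(2 * np.pi * t / (7 * 12)) + rng.normal(0, 0.5, n_months)

# Periodogram: locate the dominant period, skipping the zero frequency.
freqs = np.fft.rfftfreq(n_months, d=1 / 12)   # cycles per year
power = np.abs(np.fft.rfft(signal)) ** 2
peak = np.argmax(power[1:]) + 1
print(f"dominant period: {1 / freqs[peak]:.1f} years")
```

A wavelet-based multi-resolution analysis, as used in the paper, additionally localises such periodicities in time, which matters when the signal strength varies over the record.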


2012 ◽  
Vol 12 (1) ◽  
pp. 817-868 ◽  
Author(s):  
J. K. Carman ◽  
D. L. Rossiter ◽  
D. Khelif ◽  
H. H. Jonsson ◽  
I. C. Faloona ◽  
...  

Abstract. Aircraft sampling of the stratocumulus-topped boundary layer (STBL) during the Physics of Stratocumulus Top (POST) experiment was primarily achieved using sawtooth flight patterns, during which the atmospheric layer 100 m above and below cloud top was sampled once every 2 min. The large data set that resulted from each of the 16 flights documents the complex structure and variability of this interfacial region in a variety of conditions. In this study, we first describe some properties of the entrainment interface layer (EIL), where strong gradients in turbulent kinetic energy (TKE), potential temperature and moisture can be found. We find that defining the EIL by the first two properties tends to yield similar results, but that moisture can be a misleading tracer of the EIL. These results are consistent with studies using large-eddy simulations. We next utilize the POST data to shed light on and constrain processes relevant to entrainment, a key process in the evolution of the STBL that to date is not well represented even by high-resolution models. We define "entrainment efficiency" as the ratio of the TKE consumed by entrainment to that generated within the STBL (primarily by cloud-top cooling). We find values for the entrainment efficiency that vary by 1.5 orders of magnitude, which is even greater than the one order of magnitude that previous modeling results have suggested. Our analysis also demonstrates that the entrainment efficiency depends on the strength of the stratification of the EIL, but not on the TKE in the cloud-top region. The relationships between entrainment efficiency and other STBL parameters serve as novel observational constraints for simulations of entrainment in such systems.


Author(s):  
Mo Pak Hung

In this study, the empirical contents of various income inequality measures are compared under an identical framework with a well-tested data set. Our study suggests that long-term income inequality has a strong negative effect on Gross Domestic Product (GDP) growth under different measurements. Moreover, besides focusing on the Gini coefficient, as they predominantly do now, governments should also investigate changes in the income size of the middle class as an indicator of potential changes in social stability, investment, and GDP growth.


2017 ◽  
Vol 63 (2) ◽  
pp. 287-316 ◽  
Author(s):  
Daniel Druckman ◽  
Lynn Wagner

Attaining durable peace (DP) after a civil war has proven to be a major challenge, as many negotiated agreements lapse into violence. How can negotiations to terminate civil wars be conducted, and peace agreements formulated, to contribute to lasting peace? This question is addressed in this study with a novel data set. Focusing on justice, we assess relationships between process justice (procedural justice, PJ) and outcome justice (distributive justice, DJ) on the one hand, and stable agreements (SA) and DP on the other. Analyses of fifty peace agreements reached between 1957 and 2008 showed a path from PJ to DJ to SA to DP: the justice variables were instrumental in enhancing both short- and long-term peace. These variables had a stronger impact on DP than a variety of contextual and case-related factors. The empirical link between justice and peace has implications for the way peace negotiations are structured.

