Tourist Choice Processing: Evaluating Decision Rules and Methods of Their Measurement

2016 · Vol 56 (6) · pp. 699-711
Author(s): Chunxiao Li, Scott McCabe, Haiyan Song

A detailed understanding of decision rules is essential to better explain consumption behavior, yet the variety of decision rules consumers use has been somewhat neglected in tourism research. This study adopts an innovative method, greedoid analysis, to estimate a noncompensatory type of decision rule known as lexicographic by aspect (LBA), which differs markedly from the weighted additive (WADD) model commonly assumed in tourism studies. Using an experimental research design, this study evaluates the two types of decision rules in terms of their predictive and explanatory power. Additionally, we introduce a novel evaluation indicator (“cost”), which allows further investigation of heterogeneity in the use of decision rules. The results suggest that although the LBA model's out-of-sample accuracy is lower than the WADD model's, it better explains respondents' preference orders. Moreover, the different perspective provided by the LBA model is useful for deriving managerial implications.
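The contrast between the two rule types can be sketched in a few lines. This is an illustrative toy example, not the authors' greedoid estimation: the hotel profiles, attribute names, weights, and aspect order are all invented for the sketch.

```python
# WADD: rank by a weighted sum of all attributes (compensatory).
def wadd_rank(alternatives, weights):
    score = lambda alt: sum(weights[a] * v for a, v in alt.items() if a != "name")
    return [alt["name"] for alt in sorted(alternatives, key=score, reverse=True)]

# LBA: rank lexicographically by aspect priority (noncompensatory) --
# sort on the most important aspect first, breaking ties with the next one.
def lba_rank(alternatives, aspect_order):
    key = lambda alt: tuple(-alt[a] for a in aspect_order)
    return [alt["name"] for alt in sorted(alternatives, key=key)]

hotels = [
    {"name": "A", "price": 3, "location": 1, "service": 2},
    {"name": "B", "price": 2, "location": 3, "service": 1},
    {"name": "C", "price": 2, "location": 2, "service": 3},
]

print(wadd_rank(hotels, {"price": 0.5, "location": 0.3, "service": 0.2}))
print(lba_rank(hotels, ["location", "price", "service"]))
```

Under the weighted rule a strong price score can compensate for a weak location; under LBA the location-dominant alternative wins outright, so the two rules can produce different preference orders over the same profiles.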

2019 · Vol 50 (4) · pp. 1405-1417
Author(s): Drew Bowlsby, Erica Chenoweth, Cullen Hendrix, Jonathan D. Moyer

Previous research by Goldstone et al. (2010) generated a highly accurate predictive model of state-level political instability. Notably, this model identifies political institutions – and partial democracy with factionalism, specifically – as the most compelling factors explaining when and where instability events are likely to occur. This article reassesses the model’s explanatory power and makes three related points: (1) the model’s predictive power varies substantially over time; (2) its predictive power peaked in the period used for out-of-sample validation (1995–2004) in the original study; and (3) the model performs relatively poorly in the more recent period. The authors find that this decline is not simply due to the Arab Uprisings, instability events that occurred in autocracies. Similar issues arise in attempts to predict nonviolent uprisings (Chenoweth and Ulfelder 2017) and armed conflict onset and continuation (Hegre et al. 2013). These results support two conclusions: (1) the drivers of instability are not constant over time, and (2) care must be exercised in interpreting prediction exercises as evidence in favor of, or dispositive of, theoretical mechanisms.
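The article's central check — that a model validated in one period may degrade later — amounts to evaluating accuracy in successive time windows rather than a single holdout. A minimal sketch, with synthetic labels and a deliberately naive fixed rule standing in for the real model:

```python
def windowed_accuracy(y_true, y_pred, window):
    """Classification accuracy computed in consecutive time windows."""
    accs = []
    for start in range(0, len(y_true), window):
        t = y_true[start:start + window]
        p = y_pred[start:start + window]
        accs.append(sum(a == b for a, b in zip(t, p)) / len(t))
    return accs

# Synthetic labels: the fixed rule "predict no instability" degrades as
# instability events (1s) become more common in later periods.
y_true = [0] * 18 + [1] * 2 + [0] * 12 + [1] * 8
y_pred = [0] * 40
print(windowed_accuracy(y_true, y_pred, 20))  # [0.9, 0.6]
```

A single pooled accuracy number would hide exactly the kind of temporal decline the authors document.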


Author(s): David Easley, Marcos López de Prado, Maureen O’Hara, Zhibai Zhang

Understanding modern market microstructure phenomena requires large amounts of data and advanced mathematical tools. We demonstrate how machine learning can be applied to microstructural research. We find that microstructure measures continue to provide insights into the price process in current complex markets. Some microstructure features with high explanatory power exhibit low predictive power, while others with less explanatory power have more predictive power. We find that some microstructure-based measures are useful for out-of-sample prediction of various market statistics, leading to questions about market efficiency. We also show how microstructure measures can have important cross-asset effects. Our results are derived using 87 liquid futures contracts across all asset classes.
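The explanatory-versus-predictive distinction the abstract draws can be made concrete by comparing a feature's in-sample R² (fit on the data used for estimation) with its out-of-sample R² (fit applied to held-out data). The sketch below uses simulated data, not the authors' futures data:

```python
import numpy as np

def r2(y, y_hat):
    """Coefficient of determination."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_slope(x, y):
    """Univariate least-squares intercept and slope."""
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    a = y.mean() - b * x.mean()
    return a, b

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.5, size=200)
x_tr, y_tr, x_te, y_te = x[:100], y[:100], x[100:], y[100:]

a, b = fit_slope(x_tr, y_tr)
print(round(r2(y_tr, a + b * x_tr), 2))  # in-sample (explanatory power)
print(round(r2(y_te, a + b * x_te), 2))  # out-of-sample (predictive power)
```

A feature can score well on the first number and poorly on the second, which is precisely the asymmetry the paper reports for some microstructure measures.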


2009 · Vol 37 (6) · pp. 767-780
Author(s): Hsiu-Li Chen

The aim was to establish a causal structural model to examine consumers' addictive consumption decisions about tobacco. It was found that consumers form their risk perceptions based on three information sources. Moreover, a consumer's risk perception can directly influence his/her attitude toward cigarette smoking and indirectly influence his/her intention to start smoking. From this study, managerial implications can be drawn for public health professionals and for tobacco manufacturers. For the former, it was found that: (i) antismoking advertising should focus intensively on heightening consumers' risk perception and should be targeted toward males, the elderly, and persons with less education; and (ii) antismoking advertising and campaigns should be directed towards encouraging less addicted smokers to cease smoking. For the latter, tobacco manufacturers should employ social marketing techniques that encourage people not to smoke in public areas and discourage young people from smoking.


Economies · 2021 · Vol 9 (3) · pp. 118
Author(s): Pyung Kun Chu

Extending earlier research on forecasting recessions with financial variables, I examine the importance of additional financial variables and temporal dependence for recession prediction. I show that both the additional financial variables (in particular, the Treasury bill spread, default yield spread, and stock return volatility) and temporal cubic terms, which account for temporal dependence, independently help to improve not only in-sample but also out-of-sample recession prediction. I also find that the additional financial variables and the temporal cubic terms complement each other in enhancing the predictability of recessions, increasing explanatory power and decreasing prediction error beyond their individual performance.
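The temporal-cubic idea is to augment the financial predictors with t, t², and t³, where t counts periods since the last recession event, so a logit or probit can pick up duration dependence. A sketch of the feature construction only; the variable layout is illustrative, not the paper's exact specification:

```python
import numpy as np

def time_since_event(events):
    """Periods elapsed since the last event in a 0/1 indicator series."""
    out, t = [], 0
    for e in events:
        t = 0 if e else t + 1
        out.append(t)
    return out

def add_cubic_terms(X, events):
    """Append t, t^2, t^3 columns to a predictor matrix X."""
    t = np.array(time_since_event(events), dtype=float)
    return np.column_stack([X, t, t ** 2, t ** 3])

events = [0, 0, 1, 0, 0, 0, 1, 0]   # toy recession indicator
X = np.ones((8, 2))                 # placeholder financial variables
Xc = add_cubic_terms(X, events)
print(time_since_event(events))     # [1, 2, 0, 1, 2, 3, 0, 1]
print(Xc.shape)                     # (8, 5)
```

The augmented matrix then feeds whatever binary-response model is being estimated; the cubic terms are what let predicted recession probability vary smoothly with time elapsed since the last episode.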


2020 · Vol 117 (9) · pp. 4571-4577
Author(s): Efstathios D. Gennatas, Jerome H. Friedman, Lyle H. Ungar, Romain Pirracchio, Eric Eaton, ...

Machine learning is proving invaluable across disciplines. However, its success is often limited by the quality and quantity of available data, while its adoption is limited by the level of trust afforded by given models. Human vs. machine performance is commonly compared empirically to decide whether a certain task should be performed by a computer or an expert. In reality, the optimal learning strategy may involve combining the complementary strengths of humans and machines. Here, we present expert-augmented machine learning (EAML), an automated method that guides the extraction of expert knowledge and its integration into machine-learned models. We used a large dataset of intensive-care patient data to derive 126 decision rules that predict hospital mortality. Using an online platform, we asked 15 clinicians to assess the relative risk of the subpopulation defined by each rule compared to the total sample. We compared the clinician-assessed risk to the empirical risk and found that, while clinicians agreed with the data in most cases, there were notable exceptions where they overestimated or underestimated the true risk. Studying the rules with greatest disagreement, we identified problems with the training data, including one miscoded variable and one hidden confounder. Filtering the rules based on the extent of disagreement between clinician-assessed risk and empirical risk, we improved performance on out-of-sample data and were able to train with less data. EAML provides a platform for automated creation of problem-specific priors, which help build robust and dependable machine-learning models in critical applications.
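The filtering step described above reduces to a simple operation: discard the rules where expert-assessed and empirical relative risk disagree beyond a tolerance. A minimal sketch with invented rule records and an invented threshold:

```python
def filter_rules(rules, max_disagreement):
    """Keep rules where clinician-assessed and empirical relative risk agree
    within the given tolerance; large gaps may signal data problems."""
    return [r for r in rules
            if abs(r["expert_risk"] - r["empirical_risk"]) <= max_disagreement]

rules = [
    {"id": 1, "expert_risk": 1.8, "empirical_risk": 1.7},
    {"id": 2, "expert_risk": 0.5, "empirical_risk": 2.1},  # flagged: possible miscoding
    {"id": 3, "expert_risk": 1.1, "empirical_risk": 1.0},
]
kept = filter_rules(rules, max_disagreement=0.5)
print([r["id"] for r in kept])  # [1, 3]
```

In the paper the high-disagreement rules served double duty: removing them improved out-of-sample performance, and inspecting them surfaced the miscoded variable and hidden confounder.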


Symmetry · 2020 · Vol 12 (6) · pp. 888
Author(s): Qin Xiao, Fan Luo, Yapeng Li

Seaplanes have become popular tourism and transportation tools, with the ability to take off from and land on water. Recent seaplane accidents highlight the need for safety analysis of the seaplane operation process, which comprises the sequential stages of water-taxiing, take-off, flight, and landing. This paper proposes a novel approach to modeling the safety risk of seaplane operations using a Bayesian network (BN). Candidate risk factors that may cause seaplane accidents are identified from historical data, a literature review, and interviews with experts. Based on this identification, a risk evaluation indicator system is constructed and screened using the Delphi method. The structure of the proposed BN is derived from the indicator system, and its parameters are obtained from expert experience and parameter learning on statistical data. The BN model is validated with an out-of-sample test, demonstrating nearly 95% prediction accuracy for the accident severity level. The model is then applied to diagnostic inference and sensitivity analysis to identify the key risk factors for seaplane operation accidents. The results show that the four most critical risk factors are mental barriers, mechanical failure, visibility, and improper emergency disposal. These findings provide early warning so that appropriate preventive and mitigation measures can be taken to enhance the overall safety of the seaplane operation process.
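The diagnostic inference the paper performs is, at its core, Bayes' rule over the network's conditional probability tables. A two-node sketch with invented numbers (not the paper's model) showing both a forward query and a diagnostic query for one risk factor:

```python
# Hypothetical CPT: probability of a severe accident given visibility.
p_low_visibility = 0.2
p_severe_given = {"low_vis": 0.30, "good_vis": 0.05}

# Forward inference: marginal probability of a severe accident
# (law of total probability over the visibility node).
p_severe = (p_severe_given["low_vis"] * p_low_visibility
            + p_severe_given["good_vis"] * (1 - p_low_visibility))

# Diagnostic inference via Bayes' rule: given a severe accident occurred,
# how likely was low visibility the prevailing condition?
p_lowvis_given_severe = p_severe_given["low_vis"] * p_low_visibility / p_severe

print(round(p_severe, 3))              # 0.1
print(round(p_lowvis_given_severe, 3)) # 0.6
```

Scaling the same computation across many parent nodes is what lets the full BN rank risk factors by their posterior contribution to accident severity.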


Mathematics · 2020 · Vol 8 (9) · pp. 1627
Author(s): Lucas Schneider, Johannes Stübinger

This paper develops a dispersion trading strategy based on a statistical index subsetting procedure and applies it to the S&P 500 constituents from January 2000 to December 2017. In particular, our selection process determines appropriate subset weights by exploiting a principal component analysis to specify the individual index explanatory power of each stock. In the following out-of-sample trading period, we trade the most suitable stocks using a hedged and unhedged approach. Within the large-scale back-testing study, the trading frameworks achieve statistically and economically significant returns of 14.52 and 26.51 percent p.a. after transaction costs, as well as a Sharpe ratio of 0.40 and 0.34, respectively. Furthermore, the trading performance is robust across varying market conditions. By benchmarking our strategies against a naive subsetting scheme and a buy-and-hold approach, we find that our statistical trading systems possess superior risk-return characteristics. Finally, a deep dive analysis shows synchronous developments between the chosen number of principal components and the S&P 500 index.
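The subset-weighting step can be sketched as follows: run PCA on the stock-return matrix and read each stock's loading on the dominant component as a proxy for its index explanatory power. This is an illustrative reconstruction on simulated returns with invented market exposures, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(size=250)                     # common index factor
betas = np.array([1.2, 1.0, 0.8, 0.2])            # hypothetical exposures
returns = np.outer(market, betas) + rng.normal(scale=0.3, size=(250, 4))

cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues ascending
pc1 = eigvecs[:, -1]                              # loadings on largest PC
weights = np.abs(pc1) / np.abs(pc1).sum()         # normalized subset weights
print(weights.round(2))
```

Stocks that load heavily on the first principal component track the index most closely, so ranking by these weights identifies the "most suitable" candidates for the replicating subset.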


2018 · Vol 53 (6) · pp. 2525-2558
Author(s): Jun Duanmu, Alexey Malakhov, William R. McCumber

We reconsider whether hedge funds’ time-varying risk factor exposures are predictive of superior performance. We construct an overall measure of fund managers’ beta activity (BA) and present evidence that top beta active managers deliver superior long-term out-of-sample performance compared to top alpha active managers. BA captures the time-varying nature of beta exposures and can be interpreted as a common factor of both systematic risk (SR) and (1 - R2) measures. BA also compares favorably to extant measures of market timing, capturing the explanatory power of such measures of hedge fund performance.
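One simple way to see what "beta activity" means is to compare the dispersion of a fund's rolling market betas: a fund that shifts its exposure over time shows more dispersion than a constant-beta fund. This is a hedged illustration on simulated returns, not the paper's BA construction:

```python
import numpy as np

def rolling_betas(fund, market, window):
    """Market beta estimated in consecutive non-overlapping windows."""
    betas = []
    for s in range(0, len(fund) - window + 1, window):
        f, m = fund[s:s + window], market[s:s + window]
        betas.append(np.cov(f, m, bias=True)[0, 1] / np.var(m))
    return np.array(betas)

rng = np.random.default_rng(2)
market = rng.normal(size=240)
static_fund = 0.8 * market + rng.normal(scale=0.1, size=240)
timing_fund = (np.where(np.arange(240) < 120, 1.5, 0.2) * market
               + rng.normal(scale=0.1, size=240))

ba_static = rolling_betas(static_fund, market, 60).std()
ba_timing = rolling_betas(timing_fund, market, 60).std()
print(ba_timing > ba_static)  # True
```

A single full-sample beta would look ordinary for both funds; only the rolling view exposes the exposure shifts that a beta-activity measure is designed to reward.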


2015 · Vol 22 (5) · pp. 685-714
Author(s): Kao-Yi SHEN, Gwo-Hshiung TZENG

This study proposes a combined method that integrates soft computing techniques and multiple criteria decision making (MCDM) methods to guide semiconductor companies in improving financial performance (FP) based on logical reasoning. The complex and imprecise patterns of FP changes are explored with the dominance-based rough set approach (DRSA) to find decision rules associated with FP changes. A company may identify its underperforming criterion (gap) and conduct formal concept analysis (FCA), via implication rules, to explore the source criteria behind the underperformance. The source criteria are then analysed with the decision making trial and evaluation laboratory (DEMATEL) technique to explore the cause-effect relationships among them and guide improvements; next, the DEMATEL-based analytical network process (DANP) provides the influential weights that form an evaluation model for selecting or ranking improvement plans. To illustrate the proposed method, the financial data of a real semiconductor company are used as an example to show the processes involved, from performance gap identification to the selection among five assumed improvement plans. Moreover, the obtained implication rules can be integrated with the DEMATEL analysis to explore directional influences among the critical criteria, which may provide rich insights and managerial implications in practice.
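The DEMATEL step has a compact standard form: normalize the expert direct-relation matrix A and compute the total-relation matrix T = X(I - X)^(-1); row sums minus column sums of T then separate cause criteria from effect criteria. A sketch with a toy 3x3 matrix, not the study's actual criteria:

```python
import numpy as np

# Hypothetical direct-relation matrix: A[i][j] is the rated influence
# of criterion i on criterion j.
A = np.array([[0, 3, 2],
              [1, 0, 3],
              [1, 2, 0]], dtype=float)

X = A / A.sum(axis=1).max()               # normalize by the largest row sum
T = X @ np.linalg.inv(np.eye(3) - X)      # total-relation matrix X(I-X)^-1

D = T.sum(axis=1)                         # influence dispatched by each criterion
R = T.sum(axis=0)                         # influence received by each criterion
print((D - R).round(2))                   # positive => cause criterion
```

The (D - R) vector is what guides improvement in the combined method: effort goes to the cause criteria, whose influence propagates to the effect criteria downstream.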


2020 · Vol 53 (4) · pp. 513-554
Author(s): Daniel V. Fauser, Andreas Gruener

This paper examines the prediction accuracy of various machine learning (ML) algorithms for firm credit risk. It marks the first attempt to leverage data on corporate social irresponsibility (CSI) to better predict credit risk in an ML context. Even though the literature on default and credit risk is vast, the potential explanatory power of CSI for firm credit risk prediction remains unexplored. Previous research has shown that CSI may jeopardize firm survival and thus potentially comes into play in predicting credit risk. We find that prediction accuracy varies considerably between algorithms, with advanced machine learning algorithms (e.g., random forests) outperforming traditional ones (e.g., linear regression). Random forest regression achieves an out-of-sample adjusted R2 of 89.75%, owing to its ability to capture non-linearity and complex interaction effects in the data. We further show that including information on CSI in firm credit risk prediction does not consistently increase prediction accuracy. One possible interpretation of this result is that CSI does not (yet) seem to be systematically reflected in credit ratings, despite prior literature indicating that CSI increases credit risk. Our study contributes to improving firm credit risk predictions using a machine learning design and to exploring how CSI is reflected in credit risk ratings.
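For reference, the reported metric adjusts R² for model size: adjusted R² = 1 - (1 - R²)(n - 1)/(n - p - 1), where n is the sample size and p the number of predictors. The sample values below are illustrative, not the paper's:

```python
def adjusted_r2(r2, n, p):
    """R^2 penalized for the number of predictors p given n observations."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# A raw R^2 of 0.90 with 50 predictors and 1000 observations (hypothetical)
print(round(adjusted_r2(0.90, n=1000, p=50), 4))  # 0.8947
```

The penalty matters for comparing a high-capacity model such as a random forest against a sparse linear benchmark, since raw R² alone would favor the model with more effective parameters.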

