Predicting similarity judgments in intertemporal choice with machine learning

2017 ◽ Vol 25 (2) ◽ pp. 627-635 ◽ Author(s): Jeffrey R. Stevens, Leen-Kiat Soh

2020 ◽ Author(s): Jeffrey R. Stevens, Alexis Saltzman, Tanner Rasmussen, Leen-Kiat Soh

Intertemporal choices involve assessing options with different reward amounts available at different time delays. The similarity approach to intertemporal choice focuses on judging how similar the amounts and delays are, yet we do not fully understand the cognitive process by which these judgments are made. Here, we use machine-learning algorithms to predict similarity judgments in order to (1) investigate which algorithms best predict these judgments, (2) assess which predictors are most useful in predicting participants' judgments, and (3) determine the minimum number of judgments required to accurately predict future judgments. We applied eight algorithms to similarity judgments for reward amounts and time delays made by participants in two data sets. We found that neural network, random forest, and support vector machine algorithms generated the highest out-of-sample accuracy. Although neural networks and support vector machines offer little insight into a possible process for making similarity judgments, random forest algorithms generate decision trees that can mimic the cognitive computations of human judgment-making. We also found that the numerical difference between amount values or delay values was the most important predictor of these judgments, replicating previous work. Finally, the best-performing algorithms, such as random forest, can make highly accurate predictions of judgments with relatively small sample sizes (~15), which will help minimize the number of judgments required to extrapolate to new value pairs. In summary, machine-learning algorithms provide both theoretical improvements to our understanding of the cognitive computations involved in similarity judgments and intertemporal choices and practical improvements in designing better ways of collecting data.
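As a rough illustration of the kind of pipeline described in the abstract above, the following Python sketch fits a random forest to simulated similarity judgments built from reward-amount pairs, reports out-of-sample accuracy, and inspects feature importances. The simulated data, feature set, and judgment rule are assumptions for demonstration only, not the authors' data or code.

```python
# Illustrative sketch only: simulated similarity judgments for reward-amount pairs,
# predicted with a random forest (scikit-learn). Data and judgment rule are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Simulated pairs of reward amounts (small < large)
small = rng.integers(1, 20, size=n)
large = small + rng.integers(1, 20, size=n)
difference = large - small
ratio = small / large

# Hypothetical judgment rule: "similar" when the ratio is high, with ~10% response noise
base = (ratio > 0.7).astype(int)
flip = (rng.random(n) < 0.1).astype(int)
judged_similar = np.abs(base - flip)

X = np.column_stack([small, large, difference, ratio])
y = judged_similar

model = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())  # out-of-sample accuracy
model.fit(X, y)
print("Feature importances:", model.feature_importances_)  # e.g., difference vs. ratio
```

In a sketch like this, the feature importances are what would single out the numerical difference (or ratio) as the dominant predictor, and the fitted trees can be inspected as candidate decision rules.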


2002 ◽ Vol 40 (4) ◽ pp. 574-581 ◽ Author(s): Jonathan W. Leland

2021 ◽ Author(s): Jeffrey R. Stevens, Tyler Cully, Francine W. Goh

Similarity models provide an alternative approach to intertemporal choice. Instead of calculating an overall value for each option, decision makers compare the similarity of option attributes and choose based on that similarity. Similarity judgments for reward amounts and time delays depend on both the numerical difference (x2-x1) and the ratio (x1/x2) of the quantitative values. Changing the units of these attribute values (e.g., days vs. weeks) can alter the numerical difference while maintaining the ratio. For example, framing a pair of delays as weeks (1 vs. 2) or as days (7 vs. 14) yields the same ratio of 1/2, yet the numerical difference between the delays depends on the unit (1 for weeks and 7 for days). Here we had participants make similarity judgments and intertemporal choices with amounts framed as dollars or cents and delays framed as days or weeks. We predicted that the units of amounts and delays would influence similarity judgments, which would in turn influence intertemporal choices. We found that participants judged amounts framed as cents to be less similar than amounts framed as dollars, which resulted in more patient intertemporal choices. Additionally, they judged delays framed as weeks to be more similar than delays framed as days, but this framing did not influence choice. These findings suggest that the units in which amounts and delays are framed can influence similarity judgments, which can in turn shape intertemporal choices. These unit effects may guide stakeholders in framing aspects of intertemporal choices in different units to nudge decision makers toward either more impulsive or more patient choices.
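To make the unit effect concrete, here is a minimal Python sketch (the function name and example values are illustrative, not taken from the study) showing how re-framing the same quantities in different units changes the numerical difference while leaving the ratio untouched.

```python
# Minimal illustration (not the authors' model): re-framing the same delays or amounts
# in different units changes the numerical difference but preserves the ratio.
def attribute_comparison(x1, x2):
    """Return the difference and ratio that similarity judgments are assumed to track."""
    return {"difference": x2 - x1, "ratio": x1 / x2}

# Delays of 1 vs. 2 weeks, re-framed as 7 vs. 14 days
print(attribute_comparison(1, 2))    # {'difference': 1, 'ratio': 0.5}
print(attribute_comparison(7, 14))   # {'difference': 7, 'ratio': 0.5}

# Amounts of $3 vs. $4, re-framed as 300 vs. 400 cents
print(attribute_comparison(3, 4))      # {'difference': 1, 'ratio': 0.75}
print(attribute_comparison(300, 400))  # {'difference': 100, 'ratio': 0.75}
```

Because only the difference changes across framings, any effect of units on judged similarity points to the difference, rather than the ratio, as the lever being moved.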


2020 ◽ Vol 190 ◽ pp. 105097 ◽ Author(s): Fabrizio Adriani, Silvia Sonderegger

2020 ◽ Vol 43 ◽ Author(s): Myrthe Faber

Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.
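As a loose, assumed illustration of the data-augmentation analogy (not part of the commentary itself), the Python snippet below generates several perturbed views of one underlying item; the variability across views is what lets a learner abstract away incidental details.

```python
# Assumed illustration of data augmentation: many perturbed views of one underlying item.
import numpy as np

rng = np.random.default_rng(1)
item = np.arange(9, dtype=float).reshape(3, 3)  # stand-in for a single experience

augmented_views = [
    np.rot90(item, k=int(rng.integers(4))) + rng.normal(0, 0.1, size=item.shape)
    for _ in range(5)
]
# Each view varies in orientation and noise, yet all share the same underlying structure,
# analogous to mind wandering supplying varied content from which abstractions can form.
```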


2020 ◽ Author(s): Mohammed J. Zaki, Wagner Meira, Jr

2020 ◽ Author(s): Marc Peter Deisenroth, A. Aldo Faisal, Cheng Soon Ong
