Sequential Decisions from Sampling: Inductive Generation of Stopping Decisions Using Instance-Based Learning Theory

2021 ◽  
Author(s):  
Cleotilde Gonzalez ◽  
Palvi Aggarwal

Sequential decisions from sampling are common in daily life: we often explore alternatives sequentially, decide when to stop that exploration, and use the experience acquired during sampling to choose what we expect to be the best option. In decisions from experience, theories of sampling and experiential choice are unable to explain when people stop the sequential exploration of alternatives. In this chapter, we propose a mechanism that generates stopping decisions inductively, and we demonstrate its plausibility on a large and diverse human data set from the binary choice sampling paradigm. The proposed stopping mechanism relies on the choice process of a theory of experiential choice, Instance-Based Learning Theory (IBLT). It tracks the relative prediction errors of the two options during sampling and stops when the difference between them is close to zero. Our simulation results accurately predict the distributions of human stopping decisions in the data set. The model provides an integrated theoretical account of decisions from experience in which stopping decisions are generated inductively from the sampling process itself.
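A minimal sketch of this kind of stopping rule is given below, assuming a simple alternating sampling policy and incremental running means in place of IBLT's full blended-value machinery; the variable names and the error signal used here are illustrative and are not the authors' implementation.

```python
import random

def sample_until_stop(sample_a, sample_b, threshold=0.05, max_samples=100):
    """Sample two options in alternation, track each option's most recent
    prediction error, and stop when the two error signals are nearly equal.
    sample_a and sample_b are callables returning one observed outcome."""
    est = {"A": 0.0, "B": 0.0}    # running outcome estimates (stand-in for IBL blended values)
    err = {"A": None, "B": None}  # most recent absolute prediction errors
    n = {"A": 0, "B": 0}
    for t in range(max_samples):
        opt = "A" if t % 2 == 0 else "B"              # simple alternating exploration policy
        outcome = sample_a() if opt == "A" else sample_b()
        err[opt] = abs(outcome - est[opt])            # prediction error for the sampled option
        n[opt] += 1
        est[opt] += (outcome - est[opt]) / n[opt]     # incremental mean update
        if None not in err.values() and abs(err["A"] - err["B"]) < threshold:
            break                                     # errors have converged: stop sampling
    choice = "A" if est["A"] >= est["B"] else "B"     # exploit the higher-valued option
    return t + 1, choice

# Example: two risky options (1.0 with p = .6, else 0) vs. (0.8 with p = .7, else 0)
n_samples, choice = sample_until_stop(lambda: 1.0 if random.random() < 0.6 else 0.0,
                                      lambda: 0.8 if random.random() < 0.7 else 0.0)
```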

2020 ◽  
Author(s):  
Kate Ergo ◽  
Luna De Vilder ◽  
Esther De Loof ◽  
Tom Verguts

Recent years have witnessed a steady increase in the number of studies investigating the role of reward prediction errors (RPEs) in declarative learning. Specifically, in several experimental paradigms RPEs drive declarative learning, with larger and more positive RPEs enhancing it. However, it is unknown whether the RPE must derive from the participant's own response, or whether any RPE is sufficient to produce the learning effect. To test this, we generated RPEs in the same experimental paradigm under both an agency and a non-agency condition. We observed no interaction between RPE and agency, suggesting that any RPE (irrespective of its source) can drive declarative learning. This result holds implications for declarative learning theory.
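For reference, the RPE in question is the standard quantity from reinforcement learning, the signed difference between the obtained and the expected reward on a trial; this is the textbook definition, not the authors' exact operationalization.

```latex
% Textbook definition of a reward prediction error on trial t (assumed form,
% not the paper's exact operationalization):
\[ \delta_t = r_t - \hat{r}_t \]
```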


2020 ◽  
Author(s):  
Robson Borges de Lima ◽  
Cinthia Pereira de Oliveira ◽  
Rinaldo Luiz Caraciolo Ferreira ◽  
José Antônio Aleixo da Silva ◽  
Emanuel Araújo Silva ◽  
...  

Abstract Background: Dry tropical forests in arid lands cover large areas of Brazil, but few studies report total biomass stocks, show the importance of height measurements, or apply and compare local and pan-tropical biomass prediction models for the range of trees and shrubs found in that environment. Here, we use a biomass data set of 500 trees and shrubs, covering 15 species harvested under a management plan in the state of Pernambuco, Brazil. We seek to develop local models and compare them with the equations traditionally applied to dry forests, showing the importance of tree height measurements. Because the relationships with the independent tree variables are non-linear, we fitted the models by nonlinear least squares and adopted a cross-validation procedure. Model selection was based on likelihood measures (AIC), total explained variation (R2) and prediction error (RSE, RMSE and bias). Results: In summary, our above-ground biomass data set is best represented by the Schumacher-Hall equation, exp[3.5336 + 1.9126 × log(D) + 1.2438 × log(Ht)], which shows that height measurements are essential for accurate biomass estimates. The large prediction errors observed when testing pan-tropical models on our data demonstrate the importance of developing new local models and indicate that careful consideration is needed before generic “pan-tropical” models without height measurements are applied to dry forests in Brazil. Conclusions: Local equations can thus be used for carbon accounting in REDD+ and in sustainable-incentive projects that promote the development of dry forests and assess ecosystem services.
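As a sketch of this fitting procedure, the snippet below fits a Schumacher-Hall model by nonlinear least squares with scipy; the placeholder data, starting values and variable names are assumptions for illustration, not the authors' data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

def schumacher_hall(X, b0, b1, b2):
    """AGB = exp(b0 + b1*ln(D) + b2*ln(Ht)), the form selected in the study."""
    d, h = X
    return np.exp(b0 + b1 * np.log(d) + b2 * np.log(h))

# Placeholder observations: diameter D (cm), total height Ht (m), biomass (kg)
dbh = np.array([5.2, 8.1, 12.4, 15.0, 20.3])
height = np.array([3.1, 4.5, 6.0, 6.8, 8.2])
agb = np.array([4.0, 12.5, 38.0, 70.2, 160.0])

params, _ = curve_fit(schumacher_hall, (dbh, height), agb, p0=(-2.0, 2.0, 1.0))
rmse = np.sqrt(np.mean((agb - schumacher_hall((dbh, height), *params)) ** 2))
```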


2008 ◽  
Vol 71 (2) ◽  
pp. 279-285 ◽  
Author(s):  
M. J. STASIEWICZ ◽  
B. P. MARKS ◽  
A. ORTA-RAMIREZ ◽  
D. M. SMITH

Traditional models for predicting the thermal inactivation rate of bacteria are state dependent, considering only the current state of the product. In this study, the potential for previous sublethal thermal history to increase the thermotolerance of Salmonella in ground turkey was determined, a path-dependent model for thermal inactivation was developed, and the path-dependent predictions were tested against independent data. Weibull-Arrhenius parameters for Salmonella inactivation in ground turkey thigh were determined via isothermal tests at 55, 58, 61, and 63°C. Two sets of nonisothermal heating tests were also conducted. The first included five linear heating rates (0.4, 0.9, 1.7, 3.5, and 7.0 K/min) and three holding temperatures (55, 58, and 61°C); the second also included sublethal holding periods at 40, 45, and 50°C. When the standard Weibull-Arrhenius model was applied to the complete nonisothermal validation data set, the root mean squared error of prediction was 2.5 log CFU/g, with fail-dangerous residuals as large as 4.7 log CFU/g. However, using a modified path-dependent model for inactivation reduced the prediction errors for independent data by 56%. Under actual thermal processing conditions, use of the path-dependent model would reduce error in thermal lethality predictions for slowly cooked products.
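For orientation, a minimal state-dependent Weibull-Arrhenius survival calculation over a nonisothermal profile is sketched below; the functional forms and all parameter values are generic assumptions rather than the fitted values from this study, and the sketch deliberately omits the path-dependent (sublethal-history) modification that the study shows is needed.

```python
import numpy as np

R = 8.314        # gas constant, J/(mol K)
EA = 4.0e5       # assumed activation energy, J/mol
K_REF = 0.5      # assumed Weibull rate parameter at the reference temperature, 1/min**BETA
T_REF = 331.15   # reference temperature, K (58 °C)
BETA = 0.8       # assumed Weibull shape parameter

def log_reduction(times_min, temps_c):
    """Approximate log10(N/N0) over a time-temperature profile by integrating
    the instantaneous Weibull rate with an Arrhenius temperature dependence."""
    temps_k = np.asarray(temps_c) + 273.15
    k = K_REF * np.exp(-EA / R * (1.0 / temps_k - 1.0 / T_REF))   # Arrhenius secondary model
    dt = np.diff(times_min, prepend=times_min[0])
    t_cum = np.cumsum(dt)
    rate = k * BETA * np.maximum(t_cum, 1e-9) ** (BETA - 1.0)     # d/dt of k * t**BETA
    return -np.sum(rate * dt)

# Linear come-up from 25 °C to 61 °C over 20 min, followed by a 10 min hold at 61 °C
t = np.linspace(0.0, 30.0, 301)
T = np.where(t < 20.0, 25.0 + (61.0 - 25.0) * t / 20.0, 61.0)
print(log_reduction(t, T))   # predicted log10(N/N0), a negative value
```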


Geofluids ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Krzysztof Sowiżdżał ◽  
Tomasz Słoczyński ◽  
Weronika Kaczmarczyk

The paper discusses oil-in-place estimation for liquid-saturated shales in the Lower Paleozoic (Silurian and Ordovician) organic-rich formations of the Baltic Basin in northern Poland. The authors adopted a geochemical method based on Rock Eval results, which directly measure the hydrocarbon content of rock samples. Applying it to a real data set required correction procedures to account for oil compounds that were lost before the Rock Eval measurements were taken or that are not recorded in the S1 parameter. This was accomplished by introducing two correction coefficients: c1, for evaporation loss, and c2, for the underestimation of heavier compounds. The first was approximated from published results and known crude oil properties, while the second was addressed with a laboratory experimental procedure that combines Rock Eval pyrolysis and rock sample extraction with organic solvents. The calculation formulas were implemented in a 3D geological model of the shale formations, reproducing their geometry as well as the spatial variability of petrophysical and geochemical properties. Consequently, the oil-in-place estimates were also available as 3D models, ready for visualization and for interpretation in terms of delineating the most favorable zones or placing wells. The adopted geochemical method and the oil-in-place estimates it produced were compared with the standard volumetric method. Although both are volumetric methods, their results depend on different sets of rock properties, which is an advantage for comparison purposes. The study revealed that the geochemical method of oil-in-place estimation in liquid-rich shales, after appropriate adjustment for shale-formation and reservoir-fluid conditions, can provide reliable results and can be implemented at an early stage of shale exploration when production data are unavailable.
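A toy version of the corrected oil-in-place calculation is sketched below; the simple multiplicative use of c1 and c2 and every numeric value are assumptions for illustration only, since the abstract does not give the authors' exact correction formulas.

```python
def cell_oil_in_place(s1_mg_per_g, bulk_density_kg_m3, cell_volume_m3,
                      c1_evaporation=1.3, c2_heavy_ends=1.4):
    """Oil mass (kg) in one 3D-model cell from a corrected Rock Eval S1 value.
    The multiplicative correction and the default coefficients are assumed."""
    corrected_s1 = s1_mg_per_g * c1_evaporation * c2_heavy_ends   # mg HC per g rock
    rock_mass_kg = bulk_density_kg_m3 * cell_volume_m3
    # 1 mg/g equals 1 g/kg, so divide by 1000 to express the result in kg
    return corrected_s1 * rock_mass_kg / 1000.0

# Example: one 50 m x 50 m x 1 m cell with S1 = 2.5 mg HC per g rock
oil_kg = cell_oil_in_place(2.5, bulk_density_kg_m3=2500.0, cell_volume_m3=50 * 50 * 1)
```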


1998 ◽  
Vol 52 (10) ◽  
pp. 1339-1347 ◽  
Author(s):  
Mark R. Riley ◽  
Mark A. Arnold ◽  
David W. Murhammer

A novel method is introduced for developing calibration models for the spectroscopic measurement of chemical concentrations in an aqueous environment. To demonstrate this matrix-enhanced calibration procedure, we developed calibration models to quantitate glucose and glutamine concentrations in an insect cell culture medium, a complex mixture of more than 20 components, three of which show significant concentration changes. Accurate calibration models were generated for glucose and glutamine by using a calibration data set of 60 samples containing the analytes dissolved in an aqueous buffer along with as few as two samples of the analytes dissolved in culture medium. Standard errors of prediction were 1.0 mM for glucose and 0.35 mM for glutamine. The matrix-enhanced method was also applied to culture medium samples collected during the course of a second bioreactor run. Adding three culture medium samples to a buffer calibration reduced glucose prediction errors from 3.8 mM to 1.0 mM; adding two culture medium samples reduced glutamine prediction errors from 1.6 mM to 0.76 mM. Results from this study suggest that spectroscopic calibration models can be developed from a relatively simple set of samples, provided that some account is taken of variations in the sample matrix.
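A minimal sketch of the matrix-enhanced idea is given below, using PLS regression from scikit-learn as a generic multivariate calibration model; the paper's actual chemometric details (model type, preprocessing, number of factors) are not assumed here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def matrix_enhanced_model(buffer_spectra, buffer_conc, medium_spectra, medium_conc,
                          n_components=8):
    """Fit a calibration on buffer samples augmented with a few matrix samples."""
    X = np.vstack([buffer_spectra, medium_spectra])   # spectra (samples x wavelengths)
    y = np.concatenate([buffer_conc, medium_conc])    # analyte concentration (mM)
    return PLSRegression(n_components=n_components).fit(X, y)

# Usage (shapes only; real data would come from the spectrometer):
# model = matrix_enhanced_model(X_buffer, y_buffer, X_medium[:2], y_medium[:2])
# rmsep = np.sqrt(np.mean((y_test - model.predict(X_test).ravel()) ** 2))
```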


2020 ◽  
Vol 44 (6) ◽  
pp. 923-930
Author(s):  
I.A. Rodin ◽  
S.N. Khonina ◽  
P.G. Serafimovich ◽  
S.B. Popov

In this work, we trained convolutional neural networks to recognize the types of aberrations corresponding to single Zernike functions from the intensity pattern of the point spread function (PSF). PSF intensity patterns in the focal plane were modeled using a fast Fourier transform algorithm. When training the network, the learning rate and the number of epochs were selected empirically for a data set of a given size. Average prediction errors of the neural network for each type of aberration were obtained for a set of 15 Zernike functions from a data set of 15 thousand PSF images. As a result of training, averaged absolute errors in the range of 0.012 – 0.015 were obtained for most types of aberrations. Determining the aberration coefficient (magnitude), however, requires additional research and data, for example, calculating the PSF in an extrafocal plane.
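A minimal sketch of how a single-aberration PSF intensity image of this kind can be generated with an FFT is shown below; the pupil sampling, the choice of defocus as the example Zernike term, and the aberration amplitude are illustrative choices.

```python
import numpy as np

N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
rho, theta = np.sqrt(X**2 + Y**2), np.arctan2(Y, X)
pupil = (rho <= 1.0).astype(float)                     # circular aperture

coeff = 0.15                                           # aberration coefficient (waves), assumed
zernike_defocus = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)  # Z_2^0 (defocus), written out explicitly
phase = 2.0 * np.pi * coeff * zernike_defocus

field = pupil * np.exp(1j * phase)                     # complex pupil function
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
psf /= psf.max()                                       # normalised focal-plane PSF intensity
```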


2019 ◽  
Vol 2019 ◽  
pp. 1-12
Author(s):  
Ding-Jian Wang ◽  
Huiming Tang ◽  
Peiwu Shen ◽  
Yi Cai

It is of great significance to develop a failure criterion that can describe the orientation-dependent behavior of transversely isotropic rocks. This paper presents a simplified parabolic model that successfully predicts rock strength under different confining pressures and bedding angles. The model is a modified version of the normal parabolic criterion for intact rocks. Its two orientation-dependent parameters (σcβ and kβ) show trigonometric relationships with the bedding angle and can be readily determined through uniaxial and triaxial compression tests. The shape of the failure envelope is determined by kβ, while σcβ only affects the level of rock strength. Applied to 446 experimental data points, the predictions of the parabolic criterion are highly consistent with the experimental data, and its predictive capacity is better than those of the McLamore-Gray and Tien-Kuo criteria. In addition, the prediction errors for the high-confining-pressure condition and the bedding-sliding failure mode are smaller than those for low confining pressure and non-bedding-sliding failure. Moreover, the prediction error remains almost steady as the data set is reduced, indicating that the proposed criterion retains high precision even when the experimental data are limited.
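The sketch below illustrates the structure of such an orientation-dependent parabolic criterion; both the particular parabolic form, σ1 = σ3 + sqrt(σcβ² + kβ·σcβ·σ3), and the cosine interpolation of σcβ and kβ with bedding angle are generic assumptions for illustration, not the paper's exact expressions.

```python
import numpy as np

def trig_parameter(beta_deg, value_max, value_min, beta_min_deg=30.0):
    """Assumed cosine interpolation of an orientation-dependent parameter,
    minimal at beta_min_deg and largest 90 degrees away from it."""
    beta, beta_min = np.radians(beta_deg), np.radians(beta_min_deg)
    mean, amp = 0.5 * (value_max + value_min), 0.5 * (value_max - value_min)
    return mean - amp * np.cos(2.0 * (beta - beta_min))

def peak_strength(sigma3, beta_deg):
    """Predicted axial strength sigma1 (MPa) from an assumed parabolic envelope."""
    sigma_cb = trig_parameter(beta_deg, 120.0, 60.0)   # assumed UCS variation, MPa
    k_b = trig_parameter(beta_deg, 8.0, 4.0)           # assumed envelope-shape parameter
    return sigma3 + np.sqrt(sigma_cb**2 + k_b * sigma_cb * sigma3)

# Strength at 10 MPa confinement across bedding angles 0–90 degrees
print([round(float(peak_strength(10.0, b)), 1) for b in range(0, 91, 15)])
```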


2017 ◽  
Vol 25 (4) ◽  
pp. 267-277 ◽  
Author(s):  
Harpreet Kaur ◽  
Rainer Künnemeyer ◽  
Andrew McGlone

Comparisons are reported for developing predictive models of dry matter across a wide variety of fruits with near infrared spectroscopy instrumentation, using several commercially available hand-held portable instruments (NIRVANA by Integrated Spectronics, F-750 by Felix Instruments, H-100C by Sunforest and SCiO by Consumer Physics) and an in-house laboratory-based instrument (Benchtop). Three intrinsic (same fruit type) data sets and a combined (all fruit types) data set were created from two separate batches of fruit populations. The first batch (Lot I) consisted of 205 ripe fruits from three main fruit types (apples, kiwifruit and summerfruit) and 12 distinct fruit sub-categories. The second batch (Lot II) consisted of 91 ripe fruits from two fruit types (apples and kiwifruit) and seven distinct fruit sub-categories. The laboratory-based Benchtop instrument performed best overall, with typically higher prediction r2 values (>0.92). The hand-held instruments delivered moderate to high r2 values between 0.8 and 0.95. Results obtained with the intrinsic data sets revealed typically lower root mean square errors of prediction for apples and kiwifruit (0.32% to 0.73%) and larger prediction errors for summerfruit (0.53% to 0.82%). Some large performance variations between instruments of the same type were observed, suggesting caution in evaluating the relative performance of different instrument types or formats on the basis of data generated with just a single instrument and/or data set. However, performance differences between the different hand-held portable instruments, on the same data sets, were often not statistically significant (p < 0.05). Instrument choice for any particular application will likely come down to matters not considered here, such as ease and accuracy during in-field operation and overall reliability.
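For reference, the prediction statistics quoted above (r2 and root mean square error of prediction) are the usual validation-set quantities; a minimal computation is shown here.

```python
import numpy as np

def prediction_stats(y_true, y_pred):
    """Validation-set r^2 and RMSEP for dry matter predictions (% fresh weight)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y_true - y_pred
    rmsep = np.sqrt(np.mean(resid**2))                 # root mean square error of prediction
    r2 = 1.0 - np.sum(resid**2) / np.sum((y_true - y_true.mean())**2)
    return rmsep, r2
```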


2018 ◽  
Vol 18 (2) ◽  
pp. 599-611 ◽  
Author(s):  
Marinella Passarella ◽  
Evan B. Goldstein ◽  
Sandro De Muro ◽  
Giovanni Coco

Abstract. We use genetic programming (GP), a type of machine learning (ML) approach, to predict total and infragravity swash excursion using previously published data sets that have been used extensively in swash prediction studies. Data from three further published works, covering a range of new conditions, are added to extend the range of measured swash conditions. Using this newly compiled data set, we demonstrate that a ML approach can reduce prediction errors compared with well-established parameterizations and may therefore improve coastal hazard assessment (e.g. of coastal inundation). Predictors obtained using GP can also be physically sound and replicate the functionality and dependencies of previously published formulas. Overall, we show that ML techniques are capable of both improving predictability (compared to classical regression approaches) and providing physical insight into coastal processes.
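A minimal sketch of a GP regression of this kind is given below, using the third-party gplearn library as a stand-in for the authors' GP setup; the synthetic predictors (offshore wave height, period, beach slope), the synthetic target and all GP settings are assumptions for illustration.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
# synthetic predictors: Hs (m), Tp (s), beach slope (-)
X = rng.uniform([0.5, 5.0, 0.01], [4.0, 16.0, 0.15], size=(200, 3))
# synthetic swash target with noise, only to make the example runnable
y = 0.4 * np.sqrt(X[:, 0]) * X[:, 1] * X[:, 2] + rng.normal(0.0, 0.05, 200)

gp = SymbolicRegressor(population_size=500, generations=10,
                       function_set=("add", "sub", "mul", "div", "sqrt"),
                       random_state=0)
gp.fit(X, y)
print(gp._program)   # evolved formula, to be inspected for physical soundness
```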

