A Randomness Perspective on Intelligence Processes

2021 ◽  
Author(s):  
Inhan Kang ◽  
Paul De Boeck ◽  
Ivailo Partchev

We study intelligence processes using a diffusion IRT model with random variability in cognitive model parameters: variability in drift rate (the trend of information accumulation toward a correct or incorrect response) and variability in starting point (where the information accumulation starts). The random variation concerns randomness across person-item pairs and cannot be accounted for by individual differences or inter-item differences. Interestingly, the models explain the conditional dependencies between response accuracy and response time found in previous studies on cognitive ability tests, leading us to formulate a randomness perspective on intelligence processes. For an empirical test, we analyzed verbal analogies data and matrix reasoning data using diffusion IRT models with different variability assumptions. The results indicate that 1) models with random variability fit better than models without, with implications for the conditional dependencies in both types of tasks; 2) for verbal analogies, random variation in drift rate seems to exist, which can be explained by person-by-item differences in word knowledge; and 3) for both types of tasks, variation in starting point was also established, in line with the inductive nature of the tasks, which requires a sequential hypothesis testing process. Finally, the correlation of individual differences in drift rate with SAT suggests a meta-strategic choice by respondents to focus on accuracy rather than speed when they have a higher cognitive capacity and when the task is one for which investing time pays off. This seems to be primarily the case for matrix reasoning and less so for verbal analogies.
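As an illustration of how such across-pair randomness can enter a diffusion process, the sketch below simulates a two-boundary random walk in which each person-item pair draws its own drift rate and starting point. This is a toy stand-in, not the authors' diffusion IRT estimation model, and all parameter values are assumed for illustration.

```python
import math
import random

def diffusion_trial(v_mean, sv, z_mean, sz, a=1.0, dt=0.001):
    """One two-boundary diffusion trial with pair-specific randomness:
    drift v ~ Normal(v_mean, sv), starting point z ~ Uniform(z_mean +/- sz)."""
    v = random.gauss(v_mean, sv)              # pair-specific drift rate
    x = z_mean + random.uniform(-sz, sz)      # pair-specific starting point
    t = 0.0
    while 0.0 < x < a:                        # accumulate until a boundary is hit
        x += v * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    return x >= a, t                          # (correct?, response time)

random.seed(1)
trials = [diffusion_trial(v_mean=1.5, sv=1.0, z_mean=0.5, sz=0.2)
          for _ in range(2000)]
accuracy = sum(correct for correct, _ in trials) / len(trials)
```

With `sv` and `sz` set to zero the simulation reduces to a plain diffusion model; nonzero values induce the kind of conditional dependence between accuracy and response time discussed above.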

2020 ◽  
Author(s):  
Quynh Nhu Nguyen ◽  
Pamela Reinagel

When observers make rapid, difficult sensory decisions, their response time is highly variable from trial to trial. We previously compared humans and rats performing the same visual motion discrimination task. Their response time distributions were similar, but for humans accuracy was negatively correlated with response time, whereas for rats it was positively correlated. This is of interest because different mathematical theories of decision-making differ in their predictions regarding the correlation of accuracy with response time. On the premise that sensory decision-making mechanisms are likely to be conserved in mammals, our objective is to reconcile these results within a common theoretical framework. A bounded drift diffusion model (DDM) with stochastic parameters is a strong candidate, because it is known to be able to produce either late errors, like humans, or early errors, like rats. We consider here such a model with seven free parameters: the evidence accumulator’s starting point z, drift rate v, non-decision time t, threshold separation a, and three noise terms σz, σv and σt. We fit the model parameters to data from both rats and humans. Trial data simulated by the model recapitulate quantitative details of the relationship between accuracy and response time in both species. Under this model, the species difference can be explained by greater variability in the starting point of the diffusion process (σz) in rats, and greater variability in the drift rate (σv) in humans.
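A minimal simulation along these lines shows the qualitative effect: starting-point noise yields fast errors (the rat-like pattern) while drift-rate noise yields slow errors (the human-like pattern). This is a toy re-implementation, not the fitted seven-parameter model; σt is omitted and all parameter values are assumed.

```python
import math
import random
import statistics

def ddm_trial(v, sv, z, sz, a=1.0, t0=0.3, dt=0.001):
    # stochastic-parameter DDM: fresh drift and starting point each trial
    vi = random.gauss(v, sv)
    x = z + random.uniform(-sz, sz)
    t = t0
    while 0.0 < x < a:
        x += vi * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    return x >= a, t

def err_minus_corr_rt(sv, sz, n=3000):
    """Mean error RT minus mean correct RT under one noise regime."""
    random.seed(0)
    out = [ddm_trial(v=1.0, sv=sv, z=0.5, sz=sz) for _ in range(n)]
    rt_corr = [t for ok, t in out if ok]
    rt_err = [t for ok, t in out if not ok]
    return statistics.mean(rt_err) - statistics.mean(rt_corr)

# high starting-point noise -> errors faster than corrects (rat-like)
fast_err = err_minus_corr_rt(sv=0.0, sz=0.45)
# high drift-rate noise -> errors slower than corrects (human-like)
slow_err = err_minus_corr_rt(sv=1.5, sz=0.0)
```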


2017 ◽  
Vol 65 (4) ◽  
pp. 479-488 ◽  
Author(s):  
A. Boboń ◽  
A. Nocoń ◽  
S. Paszek ◽  
P. Pruski

The paper presents a method for determining electromagnetic parameters of different synchronous generator models based on dynamic waveforms measured at power rejection. Such a test can be performed safely under normal operating conditions of a generator working in a power plant. A generator model was investigated, expressed by reactances and time constants of the steady, transient, and subtransient states in the d and q axes, as well as circuit models (types (3,3) and (2,2)) expressed by resistances and inductances of the stator, excitation, and equivalent rotor damping circuit windings. All these models approximately take into account the influence of magnetic core saturation. The least squares method was used for parameter estimation: the objective function, defined as the mean square error between the measured waveforms and the waveforms calculated from the mathematical models, was minimized. A method of determining the initial values of those state variables which also depend on the searched parameters is presented. To minimize the objective function, a gradient optimization algorithm finding local minima from a selected starting point was used. To get closer to the global minimum, the calculations were repeated many times, taking into account inequality constraints on the searched parameters. The paper presents the parameter estimation results and a comparison of the waveforms measured and calculated with the final parameters for 200 MW and 50 MW turbogenerators.
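The multi-start least-squares idea can be sketched as follows. The generator waveforms and model are replaced here by a toy exponential transient with made-up parameter values, and the gradient optimizer by a crude numerical-gradient descent; only the overall structure (mean-square-error objective, constrained local searches from many starting points, best result kept) mirrors the description above.

```python
import math
import random

def model(t, A, T):
    # toy transient: amplitude A decaying with time constant T
    return A * math.exp(-t / T)

ts = [i * 0.05 for i in range(100)]
measured = [model(t, 2.0, 0.8) for t in ts]   # synthetic "measured" waveform

def mse(p):
    A, T = p
    return sum((model(t, A, T) - m) ** 2 for t, m in zip(ts, measured)) / len(ts)

def local_search(p, step=0.1, iters=200):
    # crude descent on a numerical gradient (stand-in for the gradient optimizer)
    p = list(p)
    for _ in range(iters):
        grad = []
        for i in range(len(p)):
            q = p[:]
            q[i] += 1e-5
            grad.append((mse(q) - mse(p)) / 1e-5)
        p = [pi - step * gi for pi, gi in zip(p, grad)]
        p[1] = max(p[1], 1e-3)   # inequality constraint: time constant > 0
    return p

random.seed(3)
starts = [(random.uniform(0.5, 3.0), random.uniform(0.1, 2.0)) for _ in range(8)]
fits = [local_search(s) for s in starts]
best = min(fits, key=mse)        # keep the local minimum closest to global
```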


2021 ◽  
Vol 11 (9) ◽  
pp. 3827
Author(s):  
Blazej Nycz ◽  
Lukasz Malinski ◽  
Roman Przylucki

The article presents the results of multivariate calculations for a levitation metal melting system. The research had two main goals. The first was to find the relationship between the basic electrical and geometric parameters of the selected calculation model and the maximum electromagnetic buoyancy force and the maximum power dissipated in the charge. The second was to find quasi-optimal conditions for levitation. The choice of the model with the highest melting efficiency is very important because electromagnetic levitation is essentially a low-efficiency process. Despite this low efficiency, the method is worth pursuing because it is one of the few that allow melting and obtaining alloys of refractory reactive metals. The research was limited to analysis of the electromagnetic field modeled three-dimensionally. From among the 245 variants considered in the article, the most promising one, characterized by the highest efficiency, was selected. This variant will be the starting point for further work using optimization methods.
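The variant-selection step amounts to an exhaustive search over a parameter grid. The sketch below picks the highest-efficiency point from a 5 × 7 × 7 = 245-point grid; the parameter ranges and the efficiency function are invented placeholders, since the real objective comes from three-dimensional electromagnetic field computations.

```python
import itertools

def efficiency(current, freq_khz, turns):
    # hypothetical stand-in for the FEM-derived melting efficiency
    return current * freq_khz / (1.0 + (turns - 6) ** 2) / (1.0 + 0.01 * current ** 2)

grid = list(itertools.product([200, 400, 600, 800, 1000],   # coil current, A
                              [5, 10, 20, 30, 40, 50, 60],  # frequency, kHz
                              range(2, 9)))                 # coil turns
best = max(grid, key=lambda p: efficiency(*p))              # most promising variant
```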


2004 ◽  
Vol 5 (1) ◽  
pp. 43-58
Author(s):  
Jeffrey S. Galko

The ontological question of what there is, from the perspective of common sense, is intricately bound to what can be perceived. This observation, combined with the fact that nouns within language can be divided between those that admit counting, such as ‘pen’ or ‘human’, and those that do not, such as ‘water’ or ‘gold’, provides the starting point for the following investigation into the foundations of our linguistic and conceptual phenomena. The purpose of this paper is to claim that such phenomena are facilitated by, on the one hand, an intricate cognitive capacity, and on the other, by the complex environment within which we live. We are, in a sense, cognitively equipped to perceive discrete instances of matter such as bodies of water. This equipment is related to, but also differs from, that devoted to the perception of objects such as this computer. Beneath this difference in cognitive equipment lies a rich ontology, the beginnings of which lie in the distinction between matter and objects. The following paper is an attempt to make explicit the relationship between matter and objects and also to provide a window onto our cognition of such entities.


2021 ◽  
Vol 17 (9) ◽  
pp. e1009332
Author(s):  
Fredrik Allenmark ◽  
Ahu Gokce ◽  
Thomas Geyer ◽  
Artyom Zinchenko ◽  
Hermann J. Müller ◽  
...  

In visual search tasks, repeating features or the position of the target results in faster response times. Such inter-trial ‘priming’ effects occur not just for repetitions from the immediately preceding trial but also from trials further back. A paradigm known to produce particularly long-lasting inter-trial effects (of the target-defining feature, the target position, and the response feature) is the ‘priming of pop-out’ (PoP) paradigm, which typically uses sparse search displays and random swapping across trials of target- and distractor-defining features. However, the mechanisms underlying these inter-trial effects are still not well understood. To address this, we applied a modeling framework combining an evidence accumulation (EA) model with different computational updating rules of the model parameters (i.e., the drift rate and starting point of EA) for different aspects of stimulus history to data from a (previously published) PoP study that had revealed significant inter-trial effects from several trials back for repetitions of the target color, the target position, and the (response-critical) target feature. By performing a systematic model comparison, we aimed to determine which EA model parameter and which updating rule for that parameter best account for each inter-trial effect and the associated n-back temporal profile. We found that, in general, our modeling framework could accurately predict the n-back temporal profiles. Further, target color- and position-based inter-trial effects were best understood as arising from redistribution of a limited-capacity weight resource which determines the EA rate. In contrast, response-based inter-trial effects were best explained by a bias of the starting point towards the response associated with a previous target; this bias appeared largely tied to the position of the target. These findings elucidate how our cognitive system continually tracks, and updates an internal predictive model of, a number of separable stimulus and response parameters in order to optimize task performance.
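To make the "limited-capacity weight resource" idea concrete, here is one possible updating rule, sketched with assumed values (the update rate `rho` and the two-color weight vector are illustrative, not the rule fitted in the study): on each trial a fixed fraction of every other feature's weight is shifted to the current target feature, so the total capacity is conserved while repeated targets accumulate weight.

```python
def update_weights(weights, target_feature, rho=0.3):
    """Limited-capacity weight redistribution (one candidate updating rule):
    shift a fraction rho of each other feature's weight toward the current
    target feature; the total weight (capacity) stays constant."""
    freed = {f: w * rho for f, w in weights.items() if f != target_feature}
    new = dict(weights)
    for feature, gain in freed.items():
        new[feature] -= gain
        new[target_feature] += gain
    return new

w = {"red": 0.5, "green": 0.5}
history = ["red", "red", "green", "red"]   # target color on successive trials
for target in history:
    w = update_weights(w, target)
```

Because the update is multiplicative, the influence of a repetition decays geometrically over subsequent trials, which is one way such a rule can reproduce an n-back temporal profile.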


2020 ◽  
Author(s):  
Catherine Manning ◽  
Eric-Jan Wagenmakers ◽  
Anthony Norcia ◽  
Gaia Scerif ◽  
Udo Boehm

Children make faster and more accurate decisions about perceptual information as they get older, but it is unclear how different aspects of the decision-making process change with age. Here, we used hierarchical Bayesian diffusion models to decompose performance in a perceptual task into separate processing components, testing age-related differences in model parameters and links to neural data. We collected behavioural and EEG data from 96 six- to twelve-year-olds and 20 adults completing a motion discrimination task. We used a component decomposition technique to identify two response-locked EEG components with ramping activity preceding the response in children and adults: one with activity that was maximal over centro-parietal electrodes and one that was maximal over occipital electrodes. Younger children had lower drift rates (reduced sensitivity), wider boundary separation (increased response caution) and longer non-decision times than older children and adults. Yet model comparisons suggested that the best model of children’s data included age effects only on drift rate and boundary separation (not non-decision time). Next, we extracted the slope of ramping activity in our EEG components and covaried it with drift rate. The slopes of both EEG components related positively to drift rate, but the best model with EEG covariates included only the centro-parietal component. By decomposing performance into distinct components and relating them to neural markers, diffusion models have the potential to identify the reasons why children with developmental conditions perform differently from typically developing children, and to uncover processing differences that are not apparent in the response time and accuracy data alone.
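The EEG-covariate step, relating the slope of pre-response ramping activity to drift rate, can be illustrated with synthetic data. The linear ramp, noise level, and per-subject values below are all assumed; the point is only the mechanics of extracting a least-squares slope per subject and correlating it with drift rate.

```python
import random

def lsq_slope(xs, ys):
    # ordinary least-squares slope of ys against xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(7)
drifts = [random.uniform(1.0, 3.0) for _ in range(30)]   # per-subject drift rates

def ramp_slope(drift):
    # toy response-locked EEG component: ramp steepness scales with drift
    times = [i * 0.01 for i in range(50)]                # 500 ms before response
    signal = [drift * t + random.gauss(0.0, 0.05) for t in times]
    return lsq_slope(times, signal)

slopes = [ramp_slope(v) for v in drifts]
r = pearson(drifts, slopes)   # positive slope-drift association, as in the study
```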


2017 ◽  
Vol 145 (3) ◽  
pp. 751-772 ◽  
Author(s):  
Michael D. Toy ◽  
Ramachandran D. Nair

An energy and potential enstrophy conserving finite-difference scheme for the shallow-water equations is derived in generalized curvilinear coordinates. This is an extension of a scheme formulated by Arakawa and Lamb for orthogonal coordinate systems. The starting point for the present scheme is the shallow-water equations cast in generalized curvilinear coordinates, and tensor analysis is used to derive the invariant conservation properties. Preliminary tests on a flat plane with doubly periodic boundary conditions are presented. The scheme is shown to possess similar order-of-convergence error characteristics using a nonorthogonal coordinate compared to Cartesian coordinates for a nonlinear test of flow over an isolated mountain. A linear normal mode analysis shows that the discrete form of the Coriolis term provides stationary geostrophically balanced modes for the nonorthogonal coordinate and no unphysical computational modes are introduced. The scheme uses centered differences and averages, which are formally second-order accurate. An empirical test with a steady geostrophically balanced flow shows that the convergence rate of the truncation errors of the discrete operators is second order. The next step will be to adapt the scheme for use on the cubed sphere, which will involve modification at the lateral boundaries of the cube faces.
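The second-order behaviour of centered differences that the empirical convergence test relies on can be checked in a few lines. This is a generic one-dimensional convergence check, not the shallow-water scheme itself: halving the grid spacing should divide the truncation error by about four.

```python
import math

def centered_diff(f, x, h):
    # centered difference: formally second-order accurate in h
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.0
exact = math.cos(x)                             # d/dx sin(x)
hs = [0.1, 0.05, 0.025]
errors = [abs(centered_diff(math.sin, x, h) - exact) for h in hs]
# observed order of accuracy from successive grid refinements
orders = [math.log(errors[i] / errors[i + 1], 2) for i in range(len(hs) - 1)]
```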


Author(s):  
Helen Steingroever ◽  
Dominik Wabersich ◽  
Eric-Jan Wagenmakers

The shifted-Wald model is a popular analysis tool for one-choice reaction-time tasks. In its simplest version, the shifted-Wald model assumes a constant trial-independent drift rate parameter. However, the presence of endogenous processes (fluctuation in attention and motivation, fatigue and boredom) suggests that drift rate might vary across experimental trials. Here we show how across-trial variability in drift rate can be accounted for by assuming a trial-specific drift rate parameter that is governed by a positive-valued distribution. We consider two candidate distributions: the truncated normal distribution and the gamma distribution. For the resulting distributions of first-arrival times, we derive analytical and sampling-based solutions, and implement the models in a Bayesian framework. Recovery studies and an application to a data set comprising 1469 participants suggest that (1) both mixture distributions yield similar results; (2) all model parameters can be recovered accurately except for the drift variance parameter; (3) despite poor recovery, the presence of the drift variance parameter facilitates accurate recovery of the remaining parameters; and (4) the shift, threshold, and drift mean parameters are correlated.
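A sampling-based version of the gamma-drift mixture can be sketched as follows. The inverse-Gaussian draw uses the standard Michael-Schucany-Haas algorithm; the threshold, shift, and gamma parameters are assumed for illustration, with the diffusion coefficient fixed at 1 so the Wald mean is threshold/drift and the shape is threshold squared.

```python
import random
import statistics

def wald_sample(mu, lam):
    """One draw from an inverse-Gaussian (Wald) distribution with mean mu
    and shape lam (Michael-Schucany-Haas algorithm)."""
    y = random.gauss(0.0, 1.0) ** 2
    x = (mu + (mu * mu * y) / (2.0 * lam)
         - (mu / (2.0 * lam)) * (4.0 * mu * lam * y + (mu * y) ** 2) ** 0.5)
    return x if random.random() <= mu / (mu + x) else mu * mu / x

def shifted_wald_rt(threshold=1.0, shift=0.2, shape=4.0, rate=4.0):
    # trial-specific drift drawn from a gamma distribution (mean shape/rate = 1)
    v = random.gammavariate(shape, 1.0 / rate)
    mu, lam = threshold / v, threshold ** 2    # Wald mean and shape (sigma = 1)
    return shift + wald_sample(mu, lam)

random.seed(5)
rts = [shifted_wald_rt() for _ in range(5000)]
mean_rt = statistics.mean(rts)
```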


2015 ◽  
Vol 2 (12) ◽  
pp. 150499 ◽  
Author(s):  
Aidan C. Daly ◽  
David J. Gavaghan ◽  
Chris Holmes ◽  
Jonathan Cooper

As cardiac cell models become increasingly complex, a correspondingly complex ‘genealogy’ of inherited parameter values has also emerged. The result has been the loss of a direct link between model parameters and experimental data, limiting both reproducibility and the ability to re-fit to new data. We examine the ability of approximate Bayesian computation (ABC) to infer parameter distributions in the seminal action potential model of Hodgkin and Huxley, for which an immediate and documented connection to experimental results exists. The ability of ABC to produce tight posteriors around the reported values for the gating rates of sodium and potassium ion channels validates the precision of this early work, while the highly variable posteriors around certain voltage-dependency parameters suggest that voltage clamp experiments alone are insufficient to constrain the full model. Despite this, Hodgkin and Huxley's estimates are shown to be competitive with those produced by ABC, and the variable behaviour of posterior-parametrized models under complex voltage protocols suggests that with additional data the model could be fully constrained. This work will provide the starting point for a full identifiability analysis of commonly used cardiac models, as well as a template for informative, data-driven parametrization of newly proposed models.
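The rejection flavour of ABC can be illustrated on a deliberately simple problem, inferring an exponential rate rather than Hodgkin-Huxley gating parameters; the prior, tolerance, and sample sizes below are all assumed. Draws from the prior are kept only when a summary statistic of the simulated data lands close to the observed one, and the kept draws approximate the posterior.

```python
import random
import statistics

random.seed(11)
true_rate = 2.0
observed = [random.expovariate(true_rate) for _ in range(200)]
obs_mean = statistics.mean(observed)

# ABC rejection: draw from the prior, simulate, keep draws whose summary
# statistic falls within a tolerance of the observed one
posterior = []
for _ in range(10000):
    rate = random.uniform(0.1, 5.0)                 # uniform prior on the rate
    simulated = [random.expovariate(rate) for _ in range(200)]
    if abs(statistics.mean(simulated) - obs_mean) < 0.02:
        posterior.append(rate)
```

Tightening the tolerance sharpens the posterior at the cost of a lower acceptance rate, which is the basic trade-off the more sophisticated ABC schemes manage automatically.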


2017 ◽  
Vol 12 (4) ◽  
Author(s):  
Yousheng Chen ◽  
Andreas Linderholt ◽  
Thomas J. S. Abrahamsson

Correlation and calibration using test data are natural ingredients in the process of validating computational models. Model calibration for the important subclass of nonlinear systems which consists of structures dominated by linear behavior with the presence of local nonlinear effects is studied in this work. The experimental validation of a nonlinear model calibration method is conducted using a replica of the École Centrale de Lyon (ECL) nonlinear benchmark test setup. The calibration method is based on the selection of uncertain model parameters and the data that form the calibration metric together with an efficient optimization routine. The parameterization is chosen so that the expected covariances of the parameter estimates are made small. To obtain informative data, the excitation force is designed to be multisinusoidal and the resulting steady-state multiharmonic frequency response data are measured. To shorten the optimization time, plausible starting seed candidates are selected using the Latin hypercube sampling method. The candidate parameter set giving the smallest deviation to the test data is used as a starting point for an iterative search for a calibration solution. The model calibration is conducted by minimizing the deviations between the measured steady-state multiharmonic frequency response data and the analytical counterparts that are calculated using the multiharmonic balance method. The resulting calibrated model's output corresponds well with the measured responses.
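The seed-selection step can be sketched with a small stdlib Latin hypercube sampler: one sample per stratum per dimension, with the best seed (smallest deviation) used to start the calibration search. The two-parameter deviation function is a made-up stand-in for the multiharmonic frequency-response metric, which requires the benchmark model itself.

```python
import random

def latin_hypercube(n, bounds):
    """Simple Latin hypercube sample: n points, one per stratum per dimension."""
    columns = []
    for lo, hi in bounds:
        strata = [lo + (hi - lo) * (i + random.random()) / n for i in range(n)]
        random.shuffle(strata)                  # pair strata randomly across dims
        columns.append(strata)
    return list(zip(*columns))

def deviation(p):
    # hypothetical stand-in for the deviation between measured and
    # multiharmonic-balance frequency responses
    k, c = p
    return (k - 2.0) ** 2 + (c - 0.5) ** 2

random.seed(4)
seeds = latin_hypercube(20, bounds=[(0.0, 5.0), (0.0, 2.0)])
start = min(seeds, key=deviation)   # best seed starts the iterative calibration
```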

