The Stochastic Early Reaction, Inhibition, and Late Action (SERIA) Model for Antisaccades

2017 ◽  
Author(s):  
Eduardo A. Aponte ◽  
Dario Schoebi ◽  
Klaas E. Stephan ◽  
Jakob Heinzle

Abstract
The antisaccade task is a classic paradigm used to study the voluntary control of eye movements. It requires participants to suppress a reactive eye movement to a visual target and to concurrently initiate a saccade in the opposite direction. Although several models have been proposed to explain error rates and reaction times in this task, no formal model comparison has yet been performed. Here, we describe a Bayesian modeling approach to the antisaccade task that allows us to formally compare different models on the basis of their evidence. First, we provide a formal likelihood function of actions (pro- and antisaccades) and reaction times based on previously published models. Second, we introduce the Stochastic Early Reaction, Inhibition, and late Action model (SERIA), a novel model postulating two different mechanisms that interact in the antisaccade task: an early GO/NO-GO race decision process and a late GO/GO decision process. Third, we apply these models to a data set from an experiment with three mixed blocks of pro- and antisaccade trials. Bayesian model comparison demonstrates that the SERIA model explains the data better than competing models that do not incorporate a late decision process. Moreover, we show that the race decision processes postulated by the SERIA model are, to a large extent, insensitive to the cue presented on a single trial. Finally, we use parameter estimates to demonstrate that changes in reaction time and error rate due to the probability of a trial type (prosaccade or antisaccade) are best explained by faster or slower inhibition and the probability of generating late voluntary prosaccades.

Author summary
One widely replicated finding in schizophrenia research is that patients tend to make more errors in the antisaccade task, a psychometric paradigm in which participants are required to look in the opposite direction of a visual cue. 
This deficit has been suggested to be an endophenotype of schizophrenia, as first-order relatives of patients tend to show similar but milder deficits. Currently, most models applied to experimental findings in this task are limited to fitting average reaction times and error rates. Here, we propose a novel statistical model that fits experimental data from the antisaccade task beyond summary statistics. The model is inspired by the hypothesis that antisaccades are the result of several competing decision processes that interact nonlinearly with each other. In applying this model to a relatively large experimental data set, we show that mean reaction times and error rates do not fully reflect the complexity of the processes that are likely to underlie experimental findings. In the future, our model could help to understand the nature of the deficits observed in schizophrenia by providing a statistical tool to study their biological underpinnings.
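The two-stage race described in this abstract can be illustrated with a toy simulation: an early reactive GO unit races against an inhibitory NO-GO unit, and if inhibition wins, a late GO/GO race between voluntary pro- and antisaccade units decides the action. This is a minimal sketch of the idea only; the distributions and all parameter values below are invented for illustration, not the fitted SERIA likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def seria_style_trial(rng):
    """Simulate one antisaccade trial under a simplified SERIA-style race.
    All unit parameters are illustrative, not fitted values."""
    early = rng.gamma(shape=4.0, scale=0.05)      # early reactive GO unit
    stop = rng.gamma(shape=3.0, scale=0.05)       # inhibitory NO-GO unit
    late_pro = rng.gamma(shape=9.0, scale=0.04)   # late voluntary prosaccade unit
    late_anti = rng.gamma(shape=8.0, scale=0.04)  # late antisaccade unit
    if early < stop and early < late_anti:
        # The reactive saccade escaped inhibition: fast prosaccade error.
        return "prosaccade_error", early
    # Otherwise the late GO/GO race decides the action.
    if late_anti < late_pro:
        return "antisaccade", late_anti
    return "prosaccade_error", late_pro           # late (high-latency) error

trials = [seria_style_trial(rng) for _ in range(10_000)]
err = np.mean([action == "prosaccade_error" for action, _ in trials])
print(f"simulated error rate: {err:.2f}")
```

Note how the sketch produces both fast errors (inhibition failures) and high-latency errors (late prosaccades winning the GO/GO race), the two error types the model distinguishes.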

2018 ◽  
Vol 120 (6) ◽  
pp. 3001-3016 ◽  
Author(s):  
Eduardo A. Aponte ◽  
Dominic G. Tschan ◽  
Klaas E. Stephan ◽  
Jakob Heinzle

In the antisaccade task participants are required to saccade in the opposite direction of a peripheral visual cue (PVC). This paradigm is often used to investigate inhibition of reflexive responses as well as voluntary response generation. However, it is not clear to what extent different versions of this task probe the same underlying processes. Here, we explored with the Stochastic Early Reaction, Inhibition, and late Action (SERIA) model how the delay between task cue and PVC affects reaction time (RT) and error rate (ER) when pro- and antisaccade trials are randomly interleaved. Specifically, we contrasted a condition in which the task cue was presented before the PVC with a condition in which the PVC also served as the task cue. Summary statistics indicate that ERs and RTs are reduced and contextual effects largely removed when the task is signaled before the PVC appears. The SERIA model accounts for RT and ER in both conditions, and does so better than other candidate models. Modeling demonstrates that voluntary pro- and antisaccades are frequent in both conditions. Moreover, early task cue presentation results in better control of reflexive saccades, leading to fewer fast antisaccade errors and more rapid correct prosaccades. Finally, high-latency errors are shown to be prevalent in both conditions. In summary, SERIA provides an explanation for the differences in the delayed and nondelayed antisaccade task.

NEW & NOTEWORTHY In this article, we use a computational model to study the mixed antisaccade task. We contrast two conditions in which the task cue is presented either before or concurrently with the saccadic target. Modeling provides a highly accurate account of participants’ behavior and demonstrates that a significant number of prosaccades are voluntary actions. Moreover, we provide a detailed quantitative analysis of the types of error that occur in pro- and antisaccade trials.


2019 ◽  
Author(s):  
Eduardo A. Aponte ◽  
Dario Schöbi ◽  
Klaas E. Stephan ◽  
Jakob Heinzle

Abstract
Background
Patients with schizophrenia make more errors than healthy subjects on the antisaccade task. In this paradigm, participants are required to inhibit a reflexive saccade to a target and to select the correct action (a saccade in the opposite direction). While the precise origin of this deficit is not clear, it has been connected to aberrant dopaminergic and cholinergic neuromodulation.

Methods
To study the impact of dopamine and acetylcholine on inhibitory control and action selection, we administered two selective drugs (levodopa 200 mg / galantamine 8 mg) to healthy volunteers (N = 100) performing the antisaccade task. A computational model (SERIA) was employed to separate the contributions of inhibitory control and action selection to empirical reaction times and error rates.

Results
Modeling suggested that levodopa improved action selection (at the cost of increased reaction times) but did not have a significant effect on inhibitory control. By contrast, according to our model, galantamine affected inhibitory control in a dose-dependent fashion, reducing inhibition failures at low doses and increasing them at higher levels. These effects were sufficiently specific that the computational analysis allowed for identifying the drug administered to an individual with 70% accuracy.

Conclusions
Our results do not support the hypothesis that elevated tonic dopamine strongly impairs inhibitory control. Rather, levodopa improved the ability to select correct actions, while inhibitory control was modulated by cholinergic drugs. This approach may provide a starting point for future computational assays that differentiate neuromodulatory abnormalities in heterogeneous diseases like schizophrenia.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 275-275
Author(s):  
A Mokler ◽  
B Fischer

In an antisaccade task subjects are required to generate a voluntary saccade to the side opposite to a small visual stimulus. With fixation-point offset preceding stimulus onset (gap), subjects produce some involuntary saccades to the stimulus and correct them by a second saccade. We wanted to know whether the subjects recognised their errors and whether a recognised sequence (error followed by correction) differs from an unrecognised sequence. To test access to the correction mechanism, subjects were asked in subsequent experiments to produce the error-correction sequence voluntarily (voluntary sequence). We used the gap = 200 ms condition. A valid cue was presented 100 ms before stimulus onset. This manipulation increased the error rate (Fischer and Weber, 1996, Experimental Brain Research 109: 507 – 512). Subjects indicated errors by key-press. The rates of recognised and unrecognised errors, saccadic size, reaction times (SRT), and correction times (CRT) were determined. Altogether 93 data sets (400 trials each) from 38 subjects were analysed. The mean error rate was 20%, of which 62% went unrecognised. In sessions with high error rates the fraction of unrecognised errors was high. The SRT of the errors ranged from 80 to 170 ms with a strong mode of express saccades at 100 ms. Both types of errors had the same mean SRT of 117 – 119 ms. The unrecognised errors were 0.4 deg smaller. They were corrected after a mean CRT of 95 ms. The recognised errors were corrected after 127 ms; in the voluntary sequence the correction occurred after 217 ms. The CRT distributions differ from each other, with the unrecognised errors having an extra peak around 45 ms, suggesting different modes of correction to which perception has different access. These results raise the question of why such large and long-lasting changes of the retinal image escape conscious perception so often.


2020 ◽  
Vol 501 (2) ◽  
pp. 1663-1676
Author(s):  
R Barnett ◽  
S J Warren ◽  
N J G Cross ◽  
D J Mortlock ◽  
X Fan ◽  
...  

ABSTRACT We present the results of a new, deeper, and complete search for high-redshift 6.5 < z < 9.3 quasars over 977 deg2 of the VISTA Kilo-Degree Infrared Galaxy (VIKING) survey. This exploits a new list-driven data set providing photometry in all bands Z, Y, J, H, Ks, for all sources detected by VIKING in J. We use the Bayesian model comparison (BMC) selection method of Mortlock et al., producing a ranked list of just 21 candidates. The sources ranked 1, 2, 3, and 5 are the four known z > 6.5 quasars in this field. Additional observations of the other 17 candidates, primarily DESI Legacy Survey photometry and ESO FORS2 spectroscopy, confirm that none is a quasar. This is the first complete sample from the VIKING survey, and we provide the computed selection function. We include a detailed comparison of the BMC method against two other selection methods: colour cuts and minimum-χ2 SED fitting. We find that: (i) BMC produces eight times fewer false positives than colour cuts, while also reaching 0.3 mag deeper; (ii) the minimum-χ2 SED-fitting method is extremely efficient but reaches 0.7 mag less deep than the BMC method, and selects only one of the four known quasars. We show that BMC candidates, rejected because their photometric SEDs have high χ2 values, include bright examples of galaxies with very strong [O iii] λλ4959,5007 emission in the Y band, identified in fainter surveys by Matsuoka et al. This is a potential contaminant population in Euclid searches for faint z > 7 quasars that has not previously been accounted for and requires better characterization.
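The core of a BMC-style selection can be illustrated with a toy two-model comparison: score each candidate's band fluxes under a quasar template and a contaminant (cool-dwarf) template, weight by a prior reflecting quasar rarity, and rank by the posterior probability of being a quasar. The templates, error model, and prior below are invented for illustration and bear no relation to the fitted models of the paper.

```python
import numpy as np

# Toy templates over the VIKING bands (Z, Y, J, H, Ks), in arbitrary flux units.
quasar_tpl = np.array([0.10, 1.00, 1.00, 0.90, 0.80])  # sharp Z-band dropout
dwarf_tpl = np.array([0.60, 0.90, 1.00, 1.10, 1.20])   # red, but no dropout
sigma = 0.15           # assumed common photometric error (invented)
prior_quasar = 1e-4    # quasars are rare relative to contaminants (invented)

def log_gauss(obs, tpl):
    """Gaussian log-likelihood of observed fluxes under a template (up to a constant)."""
    return -0.5 * np.sum(((obs - tpl) / sigma) ** 2)

def posterior_prob_quasar(obs):
    """Posterior probability that a source is a quasar under the two-model comparison."""
    lq = log_gauss(obs, quasar_tpl) + np.log(prior_quasar)
    ld = log_gauss(obs, dwarf_tpl) + np.log(1.0 - prior_quasar)
    m = max(lq, ld)  # subtract the max for numerical stability
    return np.exp(lq - m) / (np.exp(lq - m) + np.exp(ld - m))

candidate = np.array([0.12, 0.95, 1.02, 0.88, 0.82])  # dropout-like source
print(posterior_prob_quasar(candidate))
```

Ranking all sources by this posterior, rather than applying hard colour cuts, is what lets the method trade a small prior probability against a strong likelihood and reach fainter magnitudes at a fixed false-positive rate.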


1988 ◽  
Vol 13 (1) ◽  
pp. 25-32 ◽  
Author(s):  
T V Seshadri ◽  
N Kinra

Who, in the organization, buys the computer system? How are various departments involved in the organizational decision process? T V Seshadri and N Kinra analyse the decision processes of 30 organizations that had bought a computer system—mini, mainframe, or micro. Based on a questionnaire study and factor analysis, the authors conclude that the EDP department and Board of Directors are critical in the buying grids of the purchasing organizations. They draw implications of their findings for managers marketing computer systems.


1989 ◽  
Vol 69 (3_suppl) ◽  
pp. 1083-1089 ◽  
Author(s):  
Michael P. Rastatter ◽  
Gail Scukanec ◽  
Jeff Grilliot

Lexical decision vocal reaction times (RT) were obtained for a group of Chinese subjects to unilateral tachistoscopically presented pictorial, single, and combination Chinese characters. The RT showed a significant right visual-field advantage, with significant correlations of performance between the visual fields for each type of character. Error analysis gave a significant interaction between visual fields and error type—significantly more false positive errors occurred following left visual-field inputs. These results suggest that the left hemisphere was responsible for processing each type of character, possibly reflecting superior postaccess lexical-decision processes.


1983 ◽  
Vol 53 (3) ◽  
pp. 775-778 ◽  
Author(s):  
Richard W. Millard ◽  
Ian M. Evans

A sample of 12 clinical psychologists and 12 graduate students in clinical psychology performed an analogue task to investigate decision processes with respect to the judged salience of criteria for social validity. Six child cases were considered by all; each case card contained information describing a dangerous behavior, information accompanied by an explicit normative reference, the same information without a normative reference, or unrelated filler comments. Non-parametric analyses indicated that subjects consistently evaluated information about dangerous behavior as being more serious than any other concern; dangerousness was ranked first 94.4% of the time. Subjects did not distinguish between information with explicit normative referents and the same information without any such referents. Students and clinicians did not differ in their response to these categories of information. The results demonstrate the application of a fixed-order problem-solving method to study the clinical-decision process and suggest the importance of criteria for social validity in this sequence.


Author(s):  
David A. Atchison ◽  
Carol A. Pedersen ◽  
Stephen J. Dain ◽  
Joanne M. Wood

We investigated the effect of color-vision deficiency on reaction times and accuracy of identification of traffic light signals. Participants were 20 color-normal and 49 color-deficient males, the latter divided into subgroups of different severity and type. Participants performed a tracking task. At random intervals, stimuli simulating standard traffic light signals were presented against a white background at 5° to right or left. Participants identified stimulus color (red/yellow/green) by pressing an appropriate response button. Mean response times for color normals were 525, 410, and 450 ms for red, yellow, and green lights, respectively. For color deficients, response times to red lights increased with increase in severity of color deficiency, with deutans performing worse than protans of similar severity: response times of deuteranopes and protanopes were 53% and 35% longer than those of color normals. A similar pattern occurred for yellow lights, with deuteranopes and protanopes having increased response times of 85% and 53%, respectively. For green lights, response times of all groups were similar. Error rates showed patterns similar to those of response times. Contrary to previous studies, deutans performed much worse than protans of similar severity. Actual or potential applications of this research include traffic signal design and driver licensing.


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e2184 ◽  
Author(s):  
Jim Lumsden ◽  
Andy Skinner ◽  
Andy T. Woods ◽  
Natalia S. Lawrence ◽  
Marcus Munafò

Computerised cognitive assessments are a vital tool in the behavioural sciences, but participants often view them as effortful and unengaging. One potential solution is to add gamelike elements to these tasks in order to make them more intrinsically enjoyable, and some researchers have posited that a more engaging task might produce higher quality data. This assumption, however, remains largely untested. We investigated the effects of gamelike features and test location on the data and enjoyment ratings from a simple cognitive task. We tested three gamified variants of the Go-No-Go task, delivered both in the laboratory and online. In the first version of the task participants were rewarded with points for performing optimally. The second version of the task was framed as a cowboy shootout. The third version was a standard Go-No-Go task, used as a control condition. We compared reaction time, accuracy and subjective measures of enjoyment and engagement between task variants and study location. We found points to be a highly suitable game mechanic for gamified cognitive testing because they did not disrupt the validity of the data collected but increased participant enjoyment. However, we found no evidence that gamelike features could increase engagement to the point where participant performance improved. We also found that while participants enjoyed the cowboy themed task, the difficulty of categorising the gamelike stimuli adversely affected participant performance, increasing No-Go error rates by 28% compared to the non-game control. Responses collected online vs. in the laboratory had slightly longer reaction times but were otherwise very similar, supporting other findings that online crowdsourcing is an acceptable method of data collection for this type of research.
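The two outcome measures compared across the three task variants, No-Go (commission) error rate and correct Go reaction time, reduce to a simple scoring routine. A minimal sketch, with invented trial data purely for illustration:

```python
# Score a Go-No-Go session: commission error rate on No-Go trials and
# mean RT on correct Go trials. Trial records below are invented examples.
trials = [
    {"type": "go",   "responded": True,  "rt": 0.41},
    {"type": "go",   "responded": True,  "rt": 0.39},
    {"type": "go",   "responded": False, "rt": None},  # omission error
    {"type": "nogo", "responded": True,  "rt": 0.35},  # commission error
    {"type": "nogo", "responded": False, "rt": None},  # correct withhold
]

nogo = [t for t in trials if t["type"] == "nogo"]
commission_rate = sum(t["responded"] for t in nogo) / len(nogo)

go_rts = [t["rt"] for t in trials if t["type"] == "go" and t["responded"]]
mean_go_rt = sum(go_rts) / len(go_rts)

print(commission_rate, round(mean_go_rt, 3))
```

The 28% increase in No-Go error rate reported for the cowboy-themed variant is a difference in exactly this commission-rate statistic between task versions.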


2014 ◽  
Vol 7 (4) ◽  
pp. 5447-5464 ◽  
Author(s):  
S. Tilmes ◽  
M. J. Mills ◽  
U. Niemeier ◽  
H. Schmidt ◽  
A. Robock ◽  
...  

Abstract. A new Geoengineering Model Intercomparison Project (GeoMIP) experiment "G4 specified stratospheric aerosols" (short name: G4SSA) is proposed to investigate the impact of stratospheric aerosol geoengineering on atmospheric composition, climate, and the environment. In contrast to the earlier G4 GeoMIP experiment, which requires an emission of sulphur dioxide (SO2) into the model, a prescribed aerosol forcing file is provided to the community, to be consistently applied to future model experiments between 2020 and 2100. This stratospheric aerosol distribution, with a total burden of about 2 Tg S, has been derived using the ECHAM5-HAM microphysical model, based on a continuous annual tropical emission of 8 Tg SO2 year−1. A ramp-up of geoengineering in 2020 and a ramp-down in 2070, each over a period of two years, are included in the distribution, while a background aerosol burden should be used for the last 3 decades of the experiment. The performance of this experiment using climate and chemistry models in a multi-model comparison framework will allow us to better understand the significance of the impact of geoengineering, and of its abrupt termination after 50 years, on climate and composition of the atmosphere in a changing environment. The zonal and monthly mean stratospheric aerosol input dataset is available at https://www2.acd.ucar.edu/gcm/geomip-g4-specified-stratospheric-aerosol-data-set.
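The forcing schedule described above (ramp-up from 2020, full forcing until 2070, two-year ramp-down, background thereafter) can be sketched as a per-year scaling factor. The linear ramp shape and the exact year boundaries used here are assumptions for illustration; the experiment itself prescribes the full zonal and monthly mean aerosol distribution, not a single scalar.

```python
def g4ssa_scale(year):
    """Illustrative scaling of geoengineering forcing over 2020-2100,
    following the ramp scheme described for G4SSA. Two-year linear
    ramps are an assumption; only the ramp years come from the text."""
    if year < 2020 or year >= 2072:
        return 0.0                       # background aerosol only
    if year < 2022:
        return (year - 2020) / 2.0       # ramp-up over two years from 2020
    if year < 2070:
        return 1.0                       # full geoengineering forcing
    return 1.0 - (year - 2070) / 2.0     # ramp-down over two years from 2070

print([g4ssa_scale(y) for y in (2019, 2021, 2045, 2070, 2071, 2090)])
```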

