S61. COMPUTATIONAL MODELLING OF VISUAL MOTION PERCEPTION AND ITS ASSOCIATION WITH SCHIZOTYPAL TRAITS

2020, Vol 46 (Supplement_1), pp. S56-S56
Author(s): Kinga Farkas, Zsófia Pálffy, Bertalan Polner

Background: Psychotic symptoms might be explained by disturbances of information processing due to errors of inference during neural coding, and hierarchical models could advance our understanding of how impaired functioning at different levels of the processing hierarchy is associated with psychotic symptoms. However, in order to examine to what extent such alterations are temporary or stable, the psychometric reliability and validity of the measurements need to be established. Individual differences in visual perception were measured by responses to uncertain stimuli presented during a probabilistic associative learning task. Our novel contributions are the measurement of cross-modal (visual and acoustic) associative learning and the assessment of the psychometric properties of indicators derived from a perceptual decision task: we evaluate its internal consistency, test-retest reliability, and external validity as shown by associations with schizotypal traits.

Methods: Participants (32 healthy individuals, 13 men, age (SD) = 27.4 (9.4) years) performed a perceptual decision task twice with a one-week delay. They were asked to indicate the direction of perceived motion of unambiguous and ambiguous visual stimuli (640 trials), which were preceded by visual and acoustic cues that were probabilistically associated with the motion direction and were congruent (both cues predict the same motion) or incongruent (the cues predict different motion). Schizotypal traits were measured with the short version of the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE) questionnaire, which showed good internal consistency and test-retest reliability (Cronbach's alpha: 0.71–0.83 for subscales; test-retest correlation for Cognitive Disorganization: r = 0.84, and for Unusual Experiences: r = 0.79).

Results: We found a significant difference in reaction times between stimuli with high and low probability (t = -2.037; p = 0.044). Acoustic cues predicted decisions significantly better for ambiguous stimuli in both sessions (session 1: t = 4.19, p < 0.001; session 2: t = 3.46, p = 0.002). Congruency of the visual and acoustic cue pairs had no significant effect on response times for ambiguous stimuli. Reaction times and the bias towards reliance on auditory cues during perceptual decision making under uncertainty were stable over the two sessions (test-retest rho's ranging from 0.56 to 0.72). Cognitive Disorganization scores showed a weak negative correlation with response time under uncertainty (session 1: r = -0.24; session 2: r = -0.28), and Unusual Experiences scores showed a weak negative correlation with the bias towards reliance on auditory cues (session 1: r = -0.21; session 2: r = -0.19). We did not find a relationship between general response speed and any O-LIFE subscale score.

Discussion: The results show some intraindividual stability of individual differences in perceptual decision making as measured by our paradigm. In this small healthy sample, participants with higher schizotypal scores tended to show slower responses under uncertainty and a greater bias towards reliance on auditory cues, which implies that measuring these variables in clinical populations might be useful for evaluating the effectiveness of therapeutic interventions or illness progression in follow-up studies. The preliminary results presented here are derived from descriptive statistics of the behavioral data. Our research group is currently fitting a trial-by-trial hierarchical computational model that includes the representation of uncertainty, in order to capture more detailed individual differences, e.g. the time course of parameter changes during learning in a visual perception task.
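As a rough illustration of the idea of a trial-by-trial model that represents uncertainty (not the authors' actual hierarchical model), the Python sketch below uses a simple Kalman-filter learner that updates a cue-outcome association and its uncertainty on every trial; the noise parameters and the 75% cue validity are assumptions made for the example.

```python
import numpy as np

def kalman_cue_learner(outcomes, process_noise=0.01, obs_noise=0.1):
    """Track a cue-outcome association (mu) and its uncertainty (sigma2)
    trial by trial with a Kalman-filter update. Illustrative only; not the
    hierarchical model the authors are fitting."""
    mu, sigma2 = 0.5, 1.0              # prior belief and prior variance
    mus, sigmas = [], []
    for y in outcomes:                 # y = 1 if the cue matched the motion direction, else 0
        sigma2 += process_noise        # the association may drift between trials
        gain = sigma2 / (sigma2 + obs_noise)   # uncertainty-weighted learning rate
        mu += gain * (y - mu)          # prediction-error update
        sigma2 *= (1.0 - gain)         # uncertainty shrinks after each observation
        mus.append(mu)
        sigmas.append(sigma2)
    return np.array(mus), np.array(sigmas)

# Example: a cue that is 75% predictive of the perceived motion direction (assumed value)
rng = np.random.default_rng(0)
outcomes = (rng.random(640) < 0.75).astype(float)   # 640 trials, as in the task
beliefs, uncertainties = kalman_cue_learner(outcomes)
```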

2017, Vol 114 (10), pp. 2771-2776
Author(s): Hildward Vandormael, Santiago Herce Castañón, Jan Balaguer, Vickie Li, Christopher Summerfield

Humans move their eyes to gather information about the visual world. However, saccadic sampling has largely been explored in paradigms that involve searching for a lone target in a cluttered array or natural scene. Here, we investigated the policy that humans use to overtly sample information in a perceptual decision task that required information from across multiple spatial locations to be combined. Participants viewed a spatial array of numbers and judged whether the average was greater or smaller than a reference value. Participants preferentially sampled items that were less diagnostic of the correct answer (“inlying” elements; that is, elements closer to the reference value). This preference to sample inlying items was linked to decisions, enhancing the tendency to give more weight to inlying elements in the final choice (“robust averaging”). These findings contrast with a large body of evidence indicating that gaze is directed preferentially to deviant information during natural scene viewing and visual search, and suggest that humans may sample information “robustly” with their eyes during perceptual decision-making.
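As a rough illustration of the "robust averaging" idea described above (over-weighting inlying elements when judging whether an array's average exceeds a reference), the Python sketch below applies a compressive power-law transform to each element's deviation from the reference before summing. The exponent k and the power-law form are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

def robust_average_decision(samples, reference, k=0.5):
    """Decide whether the array average is 'greater' or 'smaller' than the
    reference after compressing each deviation (|d|**k with k < 1), so that
    inlying elements carry relatively more weight than outliers."""
    d = np.asarray(samples, dtype=float) - reference
    compressed = np.sign(d) * np.abs(d) ** k
    return "greater" if compressed.sum() > 0 else "smaller"

# Example: eight elements judged against a reference value of 50
print(robust_average_decision([43, 47, 49, 52, 55, 58, 61, 90], reference=50))
```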


2007, Vol 21 (2), pp. 169-189
Author(s): Peter Borkenau, Nadine Mauer

The trait–congruency hypothesis predicts that persons high in positive or negative trait affect more readily process pleasant or unpleasant stimuli, respectively. In two studies, participants were administered measures of personality and affect. In addition, a yes/no lexical decision task with pleasant, unpleasant, and neutral words was administered in Study 1, whereas a go/no-go task was used in Study 2. Several methods for increasing the reliability of reaction-time difference scores are explored. Correlations of personality and trait-affect measures with decision times were mostly consistent with the trait–congruency hypothesis, particularly for decision times in the go/no-go task, which measured individual differences in valence-specific decision times more reliably. The findings suggest that trait-related concept accessibility is one source of trait congruity.
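The abstract mentions methods for increasing the reliability of reaction-time difference scores. One generic way to quantify such reliability (assumed here for illustration, not necessarily the authors' procedure) is an odd/even split-half correlation of the difference score with a Spearman-Brown correction, as sketched below in Python.

```python
import numpy as np

def split_half_reliability(rt_pleasant, rt_unpleasant):
    """Odd/even split-half reliability of a valence-specific RT difference
    score, corrected with the Spearman-Brown formula. rt_pleasant and
    rt_unpleasant are (n_participants, n_trials) arrays of decision times.
    A generic sketch, not necessarily the authors' exact procedure."""
    n_trials = rt_pleasant.shape[1]
    odd, even = np.arange(1, n_trials, 2), np.arange(0, n_trials, 2)

    def diff_score(idx):
        # unpleasant minus pleasant mean RT per participant, within one half of the trials
        return rt_unpleasant[:, idx].mean(axis=1) - rt_pleasant[:, idx].mean(axis=1)

    r_half = np.corrcoef(diff_score(odd), diff_score(even))[0, 1]
    return 2 * r_half / (1 + r_half)     # Spearman-Brown step-up to full test length
```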


2011, Vol 23 (9), pp. 2147-2158
Author(s): Simone Kühn, Florian Schmiedek, Björn Schott, Roger Ratcliff, Hans-Jochen Heinze, ...

Perceptual decision-making performance depends on several cognitive and neural processes. Here, we fit Ratcliff's diffusion model to accuracy data and reaction-time distributions from one numerical and one verbal two-choice perceptual-decision task to deconstruct these performance measures into the rate of evidence accumulation (i.e., drift rate), response criterion setting (i.e., boundary separation), and peripheral aspects of performance (i.e., nondecision time). These theoretical processes are then related to individual differences in brain activation by means of multiple regression. The sample consisted of 24 younger and 15 older adults performing the task in fMRI before and after 100 daily 1-hr behavioral training sessions in a multitude of cognitive tasks. Results showed that individual differences in boundary separation were related to striatal activity, whereas differences in drift rate were related to activity in the inferior parietal lobe. These associations were not significantly modified by adult age or perceptual expertise. We conclude that the striatum is involved in regulating response thresholds, whereas the inferior parietal lobe might represent decision-making evidence related to letters and numbers.
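For readers unfamiliar with the diffusion model, the Python sketch below forward-simulates a basic two-boundary drift-diffusion process to show how drift rate, boundary separation, and non-decision time jointly shape accuracy and reaction-time distributions. It is a didactic simulation with arbitrary parameter values, not the fitting procedure used in the study.

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n_trials=2000, dt=0.001, noise=1.0, seed=0):
    """Forward-simulate a basic two-boundary drift-diffusion model.
    A didactic sketch with arbitrary parameters, not the model fitting
    reported in the study."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = boundary / 2.0, 0.0                 # start midway between the boundaries
        while 0.0 < x < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)                        # add peripheral (non-decision) time
        correct.append(x >= boundary)              # upper boundary = correct response
    return np.array(rts), float(np.mean(correct))

rts, accuracy = simulate_ddm(drift=1.0, boundary=1.5, ndt=0.3)
print(f"mean RT = {rts.mean():.3f} s, accuracy = {accuracy:.2f}")
```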


2021, Vol 15
Author(s): Clara Saleri Lunazzi, Amélie J. Reynaud, David Thura

Recent theories and data suggest that adapted behavior involves economic computations during which multiple trade-offs between reward value, accuracy requirement, energy expenditure, and elapsing time are solved so as to obtain rewards as soon as possible while spending the least possible amount of energy. However, the relative impact of movement energy and duration costs on perceptual decision-making and movement initiation is poorly understood. Here, we tested 31 healthy subjects on a perceptual decision-making task in which they executed reaching movements to report probabilistic choices. In distinct blocks of trials, the reaching duration (“Time” condition) and energy (“Effort” condition) costs were independently varied compared to a “Reference” block, while decision difficulty was kept similar across blocks. Participants also performed a simple delayed-reaching (DR) task aimed at estimating movement initiation duration in each motor condition. Results in the DR task show that long-duration movements extended reaction times (RTs) in most subjects, whereas energy-consuming movements led to mixed effects on RTs. In the decision task, about half of the subjects decreased their decision durations (DDs) in the Time condition, while the impact of energy on DDs was again mixed across subjects. Decision accuracy was overall similar across motor conditions. These results indicate that movement duration and, to a lesser extent, energy expenditure, idiosyncratically affect perceptual decision-making and action initiation. We propose that subjects who shortened their choices in the time-consuming condition of the decision task did so to limit a drop in reward rate.
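A simple way to see why lengthening movements should pressure subjects to shorten their decisions is to write reward rate as expected reward divided by total trial time. The Python sketch below does this with placeholder durations and reward values (assumptions for illustration, not the study's measured quantities).

```python
def reward_rate(p_correct, reward=1.0, decision_s=0.8, movement_s=0.4, iti_s=2.0):
    """Expected reward per unit of total trial time. All durations and the
    reward value are placeholders for illustration, not the study's values."""
    return p_correct * reward / (decision_s + movement_s + iti_s)

# Lengthening the movement (as in the "Time" condition) lowers the reward rate
# unless the decision is shortened to compensate:
baseline    = reward_rate(0.85, movement_s=0.4)
long_move   = reward_rate(0.85, movement_s=1.0)
compensated = reward_rate(0.80, decision_s=0.5, movement_s=1.0)
print(baseline, long_move, compensated)
```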


2019
Author(s): Deborah A. Barany, Ana Gómez-Granados, Margaret Schrayer, Sarah A. Cutts, Tarkeshwar Singh

Visual processing in parietal areas of the dorsal stream facilitates sensorimotor transformations for rapid movement. This action-related visual processing is hypothesized to play a distinct functional role from the perception-related processing in the ventral stream. However, it is unclear how the two streams interact when perceptual identification is a prerequisite to executing an accurate movement. In the current study, we investigated how perceptual decision-making involving the ventral stream influences arm and eye movement strategies. Participants (N = 26) moved a robotic manipulandum using right whole-arm movements to rapidly reach a stationary object or intercept a moving object on an augmented-reality display. On some blocks of trials, participants needed to identify the shape of the object (circle or ellipse) as a cue to either hit the object (circle) or move to a pre-defined location away from the object (ellipse). We found that during perceptual decision-making, there was an increased urgency to act during interception movements relative to reaching, which was associated with more decision errors. Faster hand reaction times were correlated with a strategy to adjust the movement post-initiation, and this strategy was more prominent during interception. Saccadic reaction times were faster and initial gaze lags and gains greater during decisions, suggesting that eye movements adapt to perceptual demands for guiding limb movements. Together, our findings suggest that the integration of ventral stream information with visuomotor planning depends on imposed (or perceived) task demands.

New and Noteworthy: Visual processing for perception and for action is thought to be mediated by two specialized neural pathways. Using a visuomotor decision-making task, we show that participants differentially utilized online perceptual decision-making in reaching and interception, and that eye movements necessary for perception influenced motor decision strategies. These results provide evidence that task complexity modulates how pathways processing perception versus action information interact during the visual control of movement.
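Gaze gain and lag, two of the eye-movement measures mentioned above, are commonly estimated from eye and target traces as a velocity ratio and a best-aligning temporal shift. The Python sketch below illustrates one generic way to compute them; it is an assumption-laden example, not the authors' analysis pipeline.

```python
import numpy as np

def gaze_gain_and_lag(eye_pos, target_pos, dt=0.001, max_lag_s=0.3):
    """Estimate pursuit gain (median eye velocity / median target velocity)
    and gaze lag (the shift that best aligns the eye trace to the target).
    A generic, assumption-laden sketch; eye_pos and target_pos are 1-D
    position traces sampled every dt seconds."""
    eye_vel = np.gradient(eye_pos, dt)
    tgt_vel = np.gradient(target_pos, dt)
    gain = np.median(eye_vel) / np.median(tgt_vel)

    max_shift = int(max_lag_s / dt)
    def alignment(shift):
        # correlation between the delayed eye trace and the target trace
        return np.corrcoef(eye_vel[shift:], tgt_vel[:len(tgt_vel) - shift])[0, 1]
    lag_samples = max(range(max_shift), key=alignment)
    return gain, lag_samples * dt
```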


NeuroImage, 2006, Vol 33 (3), pp. 1016-1027
Author(s): Katja Mériau, Isabell Wartenburger, Philipp Kazzer, Kristin Prehn, Claas-Hinrich Lammers, ...

2013, Vol 34 (1), pp. 41-47
Author(s): Patricia Lockwood, Abigail Millings, Erica Hepper, Angela C. Rowe

Crying is a powerful solicitation of caregiving, yet little is known about the cognitive processes underpinning caring responses to crying others. This study examined (1) whether crying (compared to sad and happy) faces differentially elicited semantic activation of caregiving, and (2) whether individual differences in cognitive and emotional empathy moderated this activation. Ninety participants completed a lexical decision task in which caregiving words, neutral words, and nonwords were presented after subliminal exposure (24 ms) to crying, sad, and happy faces. Individuals low in cognitive empathy had slower reaction times to caregiving (vs. neutral) words after exposure to crying faces, but not after sad or happy faces. Results are discussed with respect to the role of empathy in responding to crying others.


2016, Vol 115 (2), pp. 915-930
Author(s): Matthew A. Carland, Encarni Marcos, David Thura, Paul Cisek

Perceptual decision making is often modeled as perfect integration of sequential sensory samples until the accumulated total reaches a fixed decision bound. In that view, the buildup of neural activity during perceptual decision making is attributed to temporal integration. However, an alternative explanation is that sensory estimates are computed quickly with a low-pass filter and combined with a growing signal reflecting the urgency to respond, and it is the latter that is primarily responsible for the buildup of neural activity. These models are difficult to distinguish empirically because they make similar predictions for tasks in which sensory information is constant within a trial, as in most previous studies. Here we presented subjects with a variant of the classic constant-coherence motion discrimination (CMD) task in which we inserted brief motion pulses. We examined the effect of these pulses on reaction times (RTs) in two conditions: 1) when the CMD trials were blocked and subjects responded quickly, and 2) when the same CMD trials were interleaved among trials of a variable-motion coherence task that motivated slower decisions. In the blocked condition, early pulses had a strong effect on RTs but late pulses did not, consistent with both models. However, when subjects slowed their decision policy in the interleaved condition, later pulses now became effective while early pulses lost their efficacy. This last result contradicts models based on perfect integration of sensory evidence and implies that motion signals are processed with a strong leak, equivalent to a low-pass filter with a short time constant.
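To make the contrast between the two accounts concrete, the Python sketch below computes a decision variable under perfect integration and under a low-pass filter multiplied by a growing urgency signal, for a constant-coherence stimulus with a late motion pulse. The filter time constant and urgency slope are illustrative assumptions, not fitted parameters.

```python
import numpy as np

def perfect_integrator(evidence, dt=0.01):
    """Decision variable under perfect temporal integration of the evidence."""
    return np.cumsum(evidence) * dt

def urgency_gating(evidence, dt=0.01, tau=0.1, urgency_slope=1.0):
    """Low-pass filtered evidence multiplied by a linearly growing urgency
    signal. tau and urgency_slope are illustrative values, not fitted parameters."""
    filtered = np.zeros_like(evidence, dtype=float)
    for i in range(1, len(evidence)):
        filtered[i] = filtered[i - 1] + (dt / tau) * (evidence[i] - filtered[i - 1])
    t = np.arange(len(evidence)) * dt
    return urgency_slope * t * filtered

# Constant-coherence motion with a brief pulse inserted late in the trial:
t = np.arange(0, 2.0, 0.01)
evidence = 0.2 * np.ones_like(t)
evidence[150:160] += 1.0                     # late pulse around 1.5 s
dv_integrator = perfect_integrator(evidence) # pulse adds only a small constant offset
dv_urgency = urgency_gating(evidence)        # pulse is amplified by the high late urgency
```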

