stimulus presentation
Recently Published Documents


TOTAL DOCUMENTS: 828 (five years: 195)

H-INDEX: 55 (five years: 4)

Mathematics (2022), Vol. 10 (2), p. 267
Author(s): Richard Schweickert, Xiaofang Zheng

A Multinomial Processing Tree (MPT) is a directed tree with a probability associated with each arc and with partitioned terminal vertices. We consider an additional parameter on each arc, a measure such as time. Each vertex represents a process; an arc descending from a vertex represents selection of one of that process's outcomes. A source vertex represents processing beginning with stimulus presentation, and a terminal vertex represents a response. An experimental factor selectively influences a vertex if changing the factor level changes parameter values on arcs descending from that vertex and no others. Earlier work shows that if each of two factors selectively influences a different vertex in an arbitrary MPT, the tree is equivalent to one of two simple MPTs; which of the two applies depends on whether or not the factors order the two selectively influenced vertices. A special case, the Standard Binary Tree for Ordered Processes, arises if the vertices are ordered and the factor selectively influencing the first vertex changes parameter values on only two arcs. We derive necessary and sufficient conditions, testable by bootstrapping, for this case. Parameter values are not unique, so we give admissible transformations for them, and we calculate the degrees of freedom needed for goodness-of-fit tests.
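To illustrate the MPT formalism described above, here is a minimal sketch (the tree, its parameter values, and all names are invented for illustration, not taken from the paper): each arc carries a selection probability and a time measure, the probability of a root-to-terminal path is the product of its arc probabilities, and its duration is the sum of its arc times.

```python
# Minimal sketch of a Multinomial Processing Tree (MPT). Structure and
# parameter values are hypothetical. A non-terminal vertex is a dict mapping
# an outcome label to (probability, time, subtree); a terminal vertex is a
# response label (a string).

def enumerate_paths(node, p=1.0, t=0.0):
    """Yield (response, path_probability, path_time) for every root-to-terminal path."""
    if isinstance(node, str):        # terminal vertex: a response category
        yield (node, p, t)
        return
    for prob, time, child in node.values():
        yield from enumerate_paths(child, p * prob, t + time)

# Toy detection tree: detect the stimulus (prob .8, 100 ms), else guess.
tree = {
    "detect": (0.8, 0.100, {"respond": (1.0, 0.050, "hit")}),
    "no-detect": (0.2, 0.150, {
        "guess-yes": (0.5, 0.080, "hit"),
        "guess-no":  (0.5, 0.080, "miss"),
    }),
}

paths = list(enumerate_paths(tree))
p_hit = sum(p for r, p, _ in paths if r == "hit")  # 0.8 + 0.2 * 0.5 = 0.9
```

Selective influence can then be probed by changing a factor level, re-fitting the arc parameters, and checking which vertex's descending arcs changed.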


2021, Vol. 12
Author(s): Chloe Callahan-Flintoft, Christian Barentine, Jonathan Touryan, Anthony J. Ries

Using head-mounted displays (HMDs) in conjunction with virtual reality (VR), vision researchers are able to capture more naturalistic vision in an experimentally controlled setting: eye movements can be accurately tracked as they occur in concert with head movements while subjects navigate virtual environments. A benefit of this approach is that, unlike other mobile eye-tracking (ET) set-ups in unconstrained settings, the experimenter retains precise control over the location and timing of stimulus presentation, making it easier to compare findings between HMD studies and those that use monitor displays, which account for the bulk of previous work in eye-movement research and the vision sciences more generally. Here, a visual discrimination paradigm is presented as a proof of concept to demonstrate the applicability of collecting eye- and head-tracking data from an HMD in VR for vision research. The current work's contribution is threefold. First, results demonstrate both the strengths and the weaknesses of recording and classifying eye- and head-tracking data in VR. Second, a highly flexible graphical user interface (GUI), used to generate the current experiment, is offered to lower the software development start-up cost for future researchers transitioning to a VR space. Finally, the dataset analyzed here, comprising behavioral, eye-, and head-tracking data synchronized with environmental variables from a task specifically designed to elicit a variety of eye and head movements, could be an asset for testing future eye-movement classification algorithms.


2021
Author(s): Alex Miklashevsky

Previous research has demonstrated a close bidirectional relationship between spatial attention and the manual motor system. However, it is unclear whether an explicit hand movement is necessary for this relationship to appear. A novel method with high temporal resolution, bimanual grip force registration, sheds light on this issue. Participants held two grip force sensors while being presented with lateralized stimuli (exogenous attentional shifts, Experiment 1), left- or right-pointing central arrows (endogenous attentional shifts, Experiment 2), or the words "left" or "right" (endogenous attentional shifts, Experiment 3). There was an early interaction between presentation side (or arrow direction) and grip force: lateralized objects and central arrows led to an increase in ipsilateral force and a decrease in contralateral force. Surprisingly, words led to the opposite pattern: increased force in the contralateral hand and decreased force in the ipsilateral hand. The effect was stronger and appeared earlier for lateralized objects (60 ms after stimulus presentation) than for arrows (100 ms) or words (250 ms). Thus, processing visuospatial information automatically activates the manual motor system, but the timing and direction of this effect vary with the type of stimulus.


Author(s): Luis M. Gómez-Laplaza, Robert Gerlai

Video playback is a widely used technique for the presentation of visual stimuli in animal behavior research. In the analysis of behavioral responses to social cues, presentation of video recordings of live conspecifics represents a consistently reproducible stimulus. However, video recordings do not interact with the experimental subject, so this stimulus may be inferior in a social context. Here, we evaluated how angelfish (Pterophyllum scalare) respond to a video playback of conspecifics versus a live shoal of conspecifics. In binary choice tests, subjects were presented with different pairs of stimuli, and time spent close to one versus the other stimulus was taken as an index of preference. We found that angelfish prefer a live shoal of conspecifics to an empty tank, and also the video playback of a shoal of conspecifics to a blank screen, although the level of preference in the latter case was lower than in the former. These results indicate that video playback of live conspecifics may be appropriate in angelfish, thus allowing manipulation of the specific cues that angelfish may use in quantity discrimination. However, when we directly contrasted a live and a video-recorded shoal with the same number of members, experimental fish preferred the live shoal, and when the choice was between a live shoal of four conspecifics and a video playback of a shoal of nine conspecifics, no clear preference emerged. These results imply that video playback has disadvantages in quantity-discrimination studies with angelfish. Exploring procedural and/or technological parameters will verify the suitability of video-recording-based stimulus presentation for future use in angelfish.


Author(s): Larah Maunder, Nina Micanovic, Jackie S. Huberman, Meredith L. Chivers

According to the Incentive Motivation Model (IMM) of sexual response, the rewarding and pleasurable aspects of a sexual act strengthen its incentive value and its capacity to trigger sexual motivation. One such sexual reward is orgasm consistency, the percentage of occasions on which orgasm is experienced during a given sex act. Orgasm consistency may thus influence the incentive value of a sexual behavior. We tested this tenet of the IMM by examining whether orgasm consistency predicted women's sexual responses to films depicting various sex acts. Data were collected from four separate studies examining women's genital and subjective sexual response. Participants (N = 144, age range 18–65) were presented with neutral and erotic film stimuli while their genital arousal was assessed using vaginal photoplethysmography or thermography. Participants reported their sexual arousal level before, during, and after each stimulus presentation, and completed questionnaires assessing sexual history and experiences, sexual interests, and sexual functioning. Orgasm consistency during penile–vaginal intercourse (PVI) significantly predicted genital arousal to films depicting PVI, but similar relationships were not observed between genital or self-reported arousal and orgasm consistency during receptive oral sex or masturbation. The findings suggest that increasing orgasm consistency for a sex act may increase its incentive value, thereby triggering greater genital response to depictions of that act. A lack of consistent orgasm, or of generally pleasurable and rewarding sex, may limit the capacity of sex acts to trigger sexual motivation in future sexual encounters, thus contributing to low sexual arousal and desire in women.


eNeuro (2021), pp. ENEURO.0538-20.2021
Author(s): Elena Plog, Martin I. Antov, Philipp Bierwirth, Andreas Keil, Ursula Stockhorst

2021, Vol. 11 (1)
Author(s): Andreas Strube, Michael Rose, Sepideh Fazeli, Christian Büchel

Processing of negative affective pictures typically leads to desynchronization of alpha-to-beta frequencies (ERD) and synchronization of gamma frequencies (ERS). Given that, in predictive coding, higher frequencies have been associated with prediction errors while lower frequencies have been linked to expectations, we tested the hypothesis that the alpha-to-beta ERD and gamma ERS induced by aversive pictures are associated with expectations and prediction errors, respectively. We recorded EEG while volunteers performed a probabilistically cued affective picture task using three different negative valences to produce expectations and prediction errors. Our data show that alpha-to-beta band activity after stimulus presentation was related to the expected valence of the stimulus as predicted by the cue. The absolute mismatch between the expected and actual valence, which denotes an absolute prediction error, was related to increases in alpha, beta, and gamma band activity. This demonstrates that top-down predictions and bottom-up prediction errors are represented in the typical spectral patterns associated with affective picture processing. This study provides direct experimental evidence that negative affective picture processing can be described by neuronal predictive-coding computations.
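The expectation and absolute-prediction-error quantities this abstract refers to can be sketched numerically (the valence coding and cue probabilities below are invented for illustration): the cue induces a probability-weighted expected valence, and the prediction error is the absolute mismatch between that expectation and the presented valence.

```python
# Illustrative sketch of cue-based expectation and absolute prediction error.
# Valence levels and cue probabilities are hypothetical.
valences = [1, 2, 3]  # three negative valence levels (1 = mild, 3 = strong)

def expected_valence(cue_probs):
    """Expectation given a probabilistic cue: probability-weighted valence."""
    return sum(p * v for p, v in zip(cue_probs, valences))

def abs_prediction_error(cue_probs, actual):
    """Absolute mismatch between expected and presented valence."""
    return abs(actual - expected_valence(cue_probs))

cue = [0.7, 0.2, 0.1]                    # cue mostly predicts mild valence
exp = expected_valence(cue)              # 0.7*1 + 0.2*2 + 0.1*3 = 1.4
pe = abs_prediction_error(cue, actual=3)  # |3 - 1.4| = 1.6
```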


2021, Vol. 12 (1)
Author(s): Lluís Hernández-Navarro, Ainhoa Hermoso-Mendizabal, Daniel Duque, Jaime de la Rocha, Alexandre Hyafil

Standard models of perceptual decision-making postulate that a response is triggered, in reaction to stimulus presentation, when the accumulated stimulus evidence reaches a decision threshold. This framework, however, excludes the possibility that informed responses are generated proactively, at a time independent of the stimulus. Here we find that, in a free-reaction-time auditory task in rats, reactive and proactive responses coexist, suggesting that choice selection and motor initiation, commonly viewed as serial processes, are in general decoupled. We capture this behavior with a novel model in which proactive and reactive responses are triggered whenever either of two competing processes, respectively Action Initiation or Evidence Accumulation, reaches a bound. In both types of response, the choice is ultimately informed by the Evidence Accumulation process. The Action Initiation process readily explains premature responses, contributes to urgency effects at long reaction times, and mediates the slowing of responses as animals become satiated and tired during sessions. Moreover, it successfully predicts reaction-time distributions when the stimulus was delayed, advanced, or omitted. Overall, these results fundamentally extend standard models of evidence accumulation in decision making by showing that proactive and reactive processes compete for the generation of responses.
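The race between the two processes described in this abstract can be sketched as a simulation (all parameters and the specific process dynamics below are invented, not the authors' fitted model): a response fires when either a noisy Evidence Accumulation process or a stimulus-independent Action Initiation ramp reaches its bound, and in both cases the choice is read from the accumulated evidence.

```python
import random

# Illustrative simulation of a race between Evidence Accumulation (EA) and
# Action Initiation (AI); all parameter values are hypothetical.
def trial(drift=0.1, noise=1.0, bound=20.0, ai_rate=0.005, dt=1.0, max_t=2000):
    """Return (reaction_time_ms, choice, kind) for one simulated trial."""
    ea, ai, t = 0.0, 0.0, 0.0
    while t < max_t:
        t += dt
        # EA: drift-diffusion toward +/- bound, driven by the stimulus
        ea += drift * dt + noise * random.gauss(0, 1) * dt ** 0.5
        # AI: deterministic, stimulus-independent ramp toward action
        ai += ai_rate * dt
        if abs(ea) >= bound:                 # reactive: evidence hit its bound
            return t, ea > 0, "reactive"
        if ai >= 1.0:                        # proactive: action ramp hit its bound
            return t, ea > 0, "proactive"    # choice still read from the evidence
    return max_t, ea > 0, "timeout"

random.seed(1)
trials = [trial() for _ in range(500)]
kinds = {k for _, _, k in trials}            # both response types coexist
```

With these parameters the AI ramp caps reaction times near 200 ms, so premature (proactive) and evidence-driven (reactive) responses both occur, mirroring the coexistence the authors report.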


2021, Vol. 12
Author(s): Alison M. Harris

Visual snow syndrome, characterized by persistent flickering dots throughout the visual field, has been hypothesized to arise from abnormal neuronal responsiveness in visual processing regions. Previous research has reported a lack of the typical visual evoked potential (VEP) habituation to repeated stimulus presentation in patients with visual snow. Yet these studies generally used pattern-reversal paradigms, which are suboptimal for measuring cortical responses to the onset of foveal stimulation. Such responses are better indexed by the C2, a pattern-onset VEP peaking 100–120 ms after stimulus onset. In this case study, we analyzed the C2 and its adaptation profile in data previously collected from a single patient with visual snow using a "double-pulse" presentation paradigm. In controls, shorter intervals between stimulus pairs were associated with greater attenuation of the C2 VEP, with recovery from adaptation at longer stimulus onset asynchronies (SOAs). The visual snow patient, however, showed the opposite pattern: reduced C2 amplitude at longer SOAs despite distinct C2 peaks at the shortest SOAs. These results stand in contrast not only to the pattern of C2 VEP attenuation in controls, but also to the lack of adaptation previously reported for the pattern-onset P1 VEP in this patient. Exploratory source localization using equivalent-current-dipole fitting further suggested that the P1 and C2 VEPs in the visual snow patient arose from distinct sources in extrastriate visual cortex. While preliminary, these results support differential patterns of VEP attenuation and potentiation within the same individual, potentially pointing toward multiple mechanisms of abnormal neuronal responsiveness in visual snow syndrome.


2021, Vol. 17 (11), pp. e1008877
Author(s): Fangfang Hong, Stephanie Badde, Michael S. Landy

To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense, by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on their perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy and localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after this recalibration phase. We compared participants' behavior to the predictions of three models of recalibration: (a) reliability-based: each modality is recalibrated according to its relative reliability, so less reliable cues are recalibrated more; (b) fixed-ratio: the degree of recalibration for each modality is fixed; (c) causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues and on the inference about how likely the two cues are to derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or the fixed-ratio model; only the causal-inference model captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly, by determining the perception of audiovisual stimuli.
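As a point of comparison, the simplest of the three models, the reliability-based rule, can be sketched as follows (all numbers and the specific weighting scheme are illustrative, not the paper's fitted model): each modality shifts toward the other in proportion to the other cue's relative reliability, so the less reliable cue is recalibrated more.

```python
# Illustrative sketch of a reliability-based recalibration rule; the rate and
# sigma values are hypothetical. Reliability = 1 / variance of each cue.
def reliability_based_shift(discrepancy, sigma_v, sigma_a, rate=0.5):
    """Split a total recalibration between vision and audition.

    Each modality shifts in proportion to the *other* modality's relative
    reliability, so the less reliable cue is recalibrated more.
    """
    r_v, r_a = 1 / sigma_v**2, 1 / sigma_a**2
    shift_v = rate * discrepancy * r_a / (r_v + r_a)  # vision moves toward audition
    shift_a = rate * discrepancy * r_v / (r_v + r_a)  # audition moves toward vision
    return shift_v, shift_a

# Reliable vision (sigma 1 deg) vs. noisy audition (sigma 4 deg), 10 deg apart:
v, a = reliability_based_shift(10.0, sigma_v=1.0, sigma_a=4.0)
# audition absorbs most of the shift (a >> v), as this rule predicts
```

A rule of this form predicts that auditory recalibration grows monotonically as visual reliability decreases, which is exactly the pattern the authors report only the causal-inference model can deviate from.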

