visual priming
Recently Published Documents


TOTAL DOCUMENTS: 93 (last five years: 24)
H-INDEX: 14 (last five years: 1)

2022 ◽  
Vol 9 ◽  
Author(s):  
Kelly Ann Schmidtke ◽  
Navneet Aujla ◽  
Tom Marshall ◽  
Abid Hussain ◽  
Gerard P. Hodgkinson ◽  
...  

Background: Research conducted in the United States suggests that two primes (citrus smells and pictures of a person's eyes) can increase hand-gel dispenser use on the day they are introduced in hospital. The current study, conducted at a hospital in the United Kingdom, evaluated the effectiveness of these primes, both in isolation and in combination, at the entryway to four separate wards, over a longer duration than the previous work.
Methods: A crossover randomized controlled trial was conducted. Four wards were allocated for 6 weeks of observation to each of four conditions: "control," "olfactory," "visual," or "both" (i.e., "olfactory" and "visual" combined). It was hypothesized that hand hygiene compliance would be greater in all priming conditions relative to the control condition. The primary outcome was whether people used the gel dispenser when they entered the wards. After the trial, a follow-up survey of staff at the same hospital assessed the barriers to, and facilitators of, hand hygiene compliance. The trial data were analyzed using regression techniques and the survey data were analyzed using descriptive statistics.
Results: The total number of individuals observed in the trial was 9,811 (female = 61%), with similar numbers across conditions: "control" N = 2,582, "olfactory" N = 2,700, "visual" N = 2,488, and "both" N = 2,141. None of the priming conditions consistently increased hand hygiene. The lowest percentage compliance was observed in the "both" condition (7.8%), and the highest was observed in the "visual" condition (12.7%). The survey was completed by 97 staff (female = 81%). "Environmental resources" and "social influences" were the greatest barriers to staff cleaning their hands.
Conclusions: Taken together, the current findings suggest that the olfactory and visual priming interventions investigated do not influence hand hygiene consistently. To increase the likelihood of such interventions succeeding, future research should focus on prospectively determined mechanisms of action.
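
A minimal sketch of the kind of regression analysis the abstract describes, assuming a simple logistic model of the binary compliance outcome; the per-condition Ns follow the abstract, but the "control" and "olfactory" compliance rates are illustrative placeholders:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per observed ward entry: priming condition and whether the
# hand-gel dispenser was used (1) or not (0). Ns follow the abstract;
# the "control" and "olfactory" rates are invented placeholders.
rows = []
for condition, n, rate in [("control", 2582, 0.10),
                           ("olfactory", 2700, 0.11),
                           ("visual", 2488, 0.127),
                           ("both", 2141, 0.078)]:
    used = round(n * rate)
    rows += [{"condition": condition, "used_gel": 1}] * used
    rows += [{"condition": condition, "used_gel": 0}] * (n - used)
df = pd.DataFrame(rows)

# Logistic regression of dispenser use on condition, with "control" as reference.
model = smf.logit("used_gel ~ C(condition, Treatment('control'))", data=df).fit()
print(model.summary())
```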


2021 ◽  
Vol 15 ◽  
Author(s):  
Jarrod Hollis ◽  
Glyn W. Humphreys ◽  
Peter M. Allen

Evidence is presented for intermediate, wholistic visual representations of objects and non-objects that are computed online and independently of visual attention. Short-term visual priming was examined between visually similar shapes, with targets falling either at the (valid) location cued by primes or at another (invalid) location. Object decision latencies were facilitated when the overall shapes of the stimuli were similar, irrespective of whether the location of the prime was valid or invalid, and the effects were equally large for object and non-object targets. In addition, the effects were based on the overall outlines of the stimuli and low spatial frequency components, not on local parts. In conclusion, wholistic shape representations based on outline form are rapidly computed online during object recognition. Moreover, activation of common wholistic shape representations primes the processing of subsequent objects and non-objects, irrespective of whether they appear at attended or unattended locations. Rapid derivation of wholistic form provides a key intermediate stage of object recognition.
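
The priming effect reported here is typically quantified as the latency difference between shape-dissimilar and shape-similar prime-target pairs, computed separately for valid and invalid prime locations. A minimal sketch on simulated (hypothetical) object-decision latencies, not the authors' data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical decision latencies (ms) for a 2 x 2 design:
# prime-target shape similarity (similar/dissimilar) x prime location (valid/invalid).
trials = []
for similarity in ("similar", "dissimilar"):
    for validity in ("valid", "invalid"):
        base = 570 if similarity == "similar" else 600   # similar shapes assumed faster
        trials += [{"similarity": similarity, "validity": validity,
                    "rt": rng.normal(base, 40)} for _ in range(200)]
df = pd.DataFrame(trials)

# Priming effect = mean RT(dissimilar) - mean RT(similar), per validity condition;
# comparable effects for valid and invalid locations would mirror the abstract.
means = df.groupby(["validity", "similarity"])["rt"].mean().unstack()
print(means["dissimilar"] - means["similar"])
```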


2021 ◽  
Author(s):  
Elena Pyatigorskaya ◽  
Matteo Maran ◽  
Emiliano Zaccarella

Language comprehension proceeds at a very fast pace. It has been argued that context influences the speed of language comprehension by providing informative cues for the correct processing of the incoming linguistic input. Priming studies investigating the role of context in language processing have shown that humans quickly recognise target words that share orthographic, morphological, or semantic information with their preceding primes. How syntactic information influences the processing of incoming words is, however, less well known. Early syntactic priming studies reported faster recognition of noun and verb targets (e.g., apple or sing) following primes with which they form grammatical phrases or sentences (the apple, he sings). These studies, however, leave open a number of questions about the reported effect, including the degree of automaticity of syntactic priming, its facilitative versus inhibitory nature, and the specific mechanism underlying the priming effect, that is, the type of syntactic information primed on the target word. Here we employed a masked syntactic priming paradigm in four behavioural experiments in German to test whether masked primes automatically facilitate the categorisation of nouns and verbs presented as briefly flashed visual words. Overall, we found robust syntactic priming effects with masked primes, suggesting that the process is highly automatic, but only when verbs were morpho-syntactically marked (er kau-t; he chew-s). Furthermore, we found that, compared to baseline, primes slow down target categorisation when the prime-target relationship is syntactically incorrect, rather than speeding it up when the prime-target relationship is syntactically correct. This argues in favour of an inhibitory nature of syntactic priming. Overall, the data indicate that humans automatically extract abstract syntactic features from briefly flashed visual words, which affects the speed of successful language processing during comprehension.
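
The facilitation-versus-inhibition contrast described above is typically assessed by comparing categorisation latencies after syntactically correct and incorrect primes against a neutral baseline. A minimal sketch with simulated (hypothetical) latencies whose pattern mirrors the abstract, not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical categorisation latencies (ms) per prime condition. The pattern
# mirrors the abstract: incorrect primes slow responses relative to baseline,
# while correct primes sit close to baseline (inhibition rather than facilitation).
rt = {
    "baseline": rng.normal(650, 60, 300),               # e.g., a neutral masked prime
    "syntactically_correct": rng.normal(648, 60, 300),
    "syntactically_incorrect": rng.normal(680, 60, 300),
}

baseline_mean = rt["baseline"].mean()
for condition in ("syntactically_correct", "syntactically_incorrect"):
    effect = rt[condition].mean() - baseline_mean
    direction = "inhibition (slower)" if effect > 0 else "facilitation (faster)"
    print(f"{condition}: {effect:+.1f} ms vs. baseline -> {direction}")
```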


Author(s):  
Andreas Opitz ◽  
Denisa Bordag

Abstract Previous research has shown that orthographic marking may have a function beyond identifying orthographic word forms. In two visual priming experiments with native speakers and advanced learners of German (Czech natives), we tested the hypothesis that orthography can convey word-class cues comparable to morphological marking. We examined the effect of the initial-letter capitalization of nouns (a specific property of German orthography) on the processing of five homonymous and grammatically ambiguous forms. Both populations showed the same pattern of results: deverbal nouns (conversions) patterned together with countable nouns, whereas in a previous study (with orthographic word-class cues eliminated) they patterned together with infinitives. Together, the findings suggest that orthographic cues can trigger word-class-specific lexical retrieval/access. They also suggest a lexical entry structure in which conversion nouns, infinitives, and inflected verbal forms share a category-neutral parent node, with specified subnodes accessed only when specifying cues are available and/or necessary for processing.


2021 ◽  
Vol 17 (9) ◽  
pp. e1009415
Author(s):  
Giulio Matteucci ◽  
Benedetta Zattera ◽  
Rosilari Bellacosa Marotti ◽  
Davide Zoccolan

Computing the global motion direction of extended visual objects is a hallmark of primate high-level vision. Although neurons selective for global motion have also been found in mouse visual cortex, it remains unknown whether rodents can combine multiple motion signals into global, integrated percepts. To address this question, we trained two groups of rats to discriminate either gratings (G group) or plaids (i.e., superpositions of gratings with different orientations; P group) drifting horizontally along opposite directions. After the animals learned the task, we applied a visual priming paradigm, in which presentation of the target stimulus was preceded by the brief presentation of either a grating or a plaid. The extent to which rat responses to the targets were biased by such prime stimuli provided a measure of the spontaneous, perceived similarity between primes and targets. We found that gratings and plaids, when used as primes, were equally effective at biasing the perception of plaid direction for the rats of the P group. Conversely, for the G group, only the gratings acted as effective prime stimuli, while the plaids failed to alter the perception of grating direction. To interpret these observations, we simulated a decision neuron reading out the representations of gratings and plaids, as conveyed by populations of either component or pattern cells (i.e., local or global motion detectors). We concluded that the findings for the P group are highly consistent with the existence of a population of pattern cells, playing a functional role similar to that demonstrated in primates. We also explored different scenarios that could explain the failure of the plaid stimuli to elicit a sizable priming effect in the G group. These simulations yielded testable predictions about the properties of motion representations in rodent visual cortex at the single-cell and circuit level, thus paving the way for future neurophysiology experiments.
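
The decision-neuron simulation mentioned above can be sketched as a weighted readout of direction-tuned population responses; the tuning model, unit counts, and readout weights below are illustrative assumptions, not the authors' actual model:

```python
import numpy as np

def population_response(stimulus_dirs, preferred_dirs, kappa=2.0):
    """Von-Mises-like tuning: each unit responds to the motion directions
    present in the stimulus (component-cell-like coding)."""
    resp = np.zeros(len(preferred_dirs))
    for d in stimulus_dirs:
        resp += np.exp(kappa * np.cos(preferred_dirs - d))
    return resp / len(stimulus_dirs)

n_units = 72
preferred = np.linspace(0, 2 * np.pi, n_units, endpoint=False)

# Component cells see the individual grating directions of a plaid;
# pattern cells see only the single, global (pattern) direction.
plaid_components = [np.deg2rad(30), np.deg2rad(-30)]  # gratings +/-30 deg around horizontal
plaid_pattern = [np.deg2rad(0)]                        # global rightward motion

r_component = population_response(plaid_components, preferred)
r_pattern = population_response(plaid_pattern, preferred)

# Decision neuron: weights each unit by the cosine of its preferred direction,
# so positive evidence means "rightward" and negative means "leftward".
readout_weights = np.cos(preferred)
for name, r in [("component population", r_component), ("pattern population", r_pattern)]:
    print(f"{name}: rightward evidence = {readout_weights @ r:.2f}")
```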


Author(s):  
Denisa Bordag ◽  
Andreas Opitz

Abstract In two visual priming experiments, we investigated the relation between form-identical word forms with different grammatical functions in L1 and L2 German. Four different grammatical types (inflected verbs, infinitives, deverbal conversion forms, and countable nouns) were used as primes, and their influence on the processing of form-identical inflected verbs as targets was compared. Results revealed full priming for inflected verbs, but only partial priming for conversion forms and infinitives. No priming was observed for semantically related countable nouns, suggesting that they have a separate lexical entry. The findings provide the first psycholinguistic evidence for typological claims that deverbal conversion nouns and infinitives fall into the category of nonfinites. They also support accounts assuming representations with a basic lexical entry and word-category-specific subentries. The same priming pattern was observed in L1 and L2, suggesting that the representation and processing of the studied complex forms are not fundamentally different in the two populations.


2021 ◽  
Author(s):  
Giulio Matteucci ◽  
Benedetta Zattera ◽  
Rosilari Bellacosa Marotti ◽  
Davide Zoccolan

Abstract Computing the global motion direction of extended visual objects is a hallmark of primate high-level vision. Although neurons selective for global motion have also been found in mouse visual cortex, it remains unknown whether rodents can combine multiple motion signals into global, integrated percepts. To address this question, we trained two groups of rats to discriminate either gratings (G group) or plaids (i.e., superpositions of gratings with different orientations; P group) drifting horizontally along opposite directions. After the animals learned the task, we applied a visual priming paradigm, in which presentation of the target stimulus was preceded by the brief presentation of either a grating or a plaid. The extent to which rat responses to the targets were biased by such prime stimuli provided a measure of the spontaneous, perceived similarity between primes and targets. We found that gratings and plaids, when used as primes, were equally effective at biasing the perception of plaid direction for the rats of the P group. Conversely, for the G group, only the gratings acted as effective prime stimuli, while the plaids failed to alter the perception of grating direction. To interpret these observations, we simulated a decision neuron reading out the representations of gratings and plaids, as conveyed by populations of either component or pattern cells (i.e., local or global motion detectors). We concluded that the findings for the P group are highly consistent with the existence of a population of pattern cells, playing a functional role similar to that demonstrated in primates. We also explored different scenarios that could explain the failure of the plaid stimuli to elicit a sizable priming effect in the G group. These simulations yielded testable predictions about the properties of motion representations in rodent visual cortex at the single-cell and circuit level, thus paving the way for future neurophysiology experiments.


2020 ◽  
Vol 20 (11) ◽  
pp. 1592
Author(s):  
Patrick Little ◽  
Chaz Firestone

2020 ◽  
Vol 26 (1) ◽  
pp. 256-276
Author(s):  
Patrick A. Stewart ◽  
Austin D. Eubanks ◽  
Nicholas Hersom ◽  
Cooper A. Hearn

The 2020 Democratic presidential primary debates provide a unique opportunity to systematically evaluate network visual production choices in a multicandidate context. The joint decision of the Democratic National Committee and NBC to include an expansive field of twenty contenders through "prime-time" debates on consecutive nights (June 26 and 27, 2019) provided a natural experiment, with equal numbers of top- and second-tier candidates randomly assigned to each night. In this preregistered study, we evaluate whether candidates are treated differently, based on electoral status, in the amount of camera time they receive (visual priming) and the types of camera shots they appear in (visual framing). We replicate a study of the initial two 2016 presidential primary debates for each party (Democratic and Republican), which found that the top-two candidates received substantially better visual coverage than all others. We confirm and extend these findings by evaluating different operationalizations of electoral status (top-two, top-tier, stage position, and poll standing). Findings suggest that, when visual priming is considered, stage position outperforms the other electoral status indicators in explaining variance in total camera time and average fixation time. In terms of visual framing, head-and-shoulder "one-shots" are better predicted by top-tier status, whereas public opinion poll standing predicts increased time spent in multiple-candidate shots. Finally, appearances in "two-shots" (side-by-side and split-screen portrayals) were not significantly explained by electoral status, likely due to the paucity of such depictions.
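
A minimal sketch of how the different operationalizations of electoral status might be compared as predictors of camera time, using ordinary least squares on hypothetical candidate-level data (all values are invented placeholders, not the article's measurements):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical data for one ten-candidate debate night: total camera time (seconds)
# plus several operationalizations of electoral status.
df = pd.DataFrame({
    "camera_time": rng.normal(500, 120, 10),
    "top_two": [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    "top_tier": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "stage_position": list(range(1, 11)),        # 1 = centre stage, 10 = far edge
    "poll_standing": rng.uniform(0.5, 25, 10),   # pre-debate poll percentage
})

# Compare how much variance in camera time each status indicator explains,
# via separate simple OLS fits (cf. the article's comparison of indicators).
for predictor in ["top_two", "top_tier", "stage_position", "poll_standing"]:
    fit = smf.ols(f"camera_time ~ {predictor}", data=df).fit()
    print(f"{predictor}: R^2 = {fit.rsquared:.3f}")
```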


2020 ◽  
Author(s):  
F. Cervantes Constantino ◽  
T. Sánchez-Costa ◽  
G.A. Cipriani ◽  
A. Carboni

Abstract Surroundings continually propagate audiovisual (AV) signals, and by attending we make clear and precise sense of those that matter at any given time. In such cases, parallel visual and auditory contributions may jointly serve as a basis for selection. It is unclear what hierarchical effects arise when initial selection criteria are unimodal, or involve uncertainty. Uncertainty in sensory information is a factor considered in computational models of attention that propose precision weighting as a primary mechanism for selection. Here, the effects of visuospatial selection on auditory processing were investigated with electroencephalography (EEG). We examined the encoding of random tone pips probabilistically associated with spatially attended visual changes, via a temporal response function (TRF) model of the auditory EEG time series. AV precision, or temporal uncertainty, was manipulated across stimuli while participants sustained endogenous visuospatial attention. The TRF data showed that cross-modal modulations were dominated by the AV precision between auditory and visual onset times. The roles of unimodal (visuospatial and auditory) uncertainties, each a consequence of non-synchronous AV presentations, were further investigated. The TRF data demonstrated that visuospatial uncertainty in attended sector size determines transfer effects by enabling the visual priming of tones when relevant for auditory segregation, in line with top-down processing timescales. Auditory uncertainty in distractor proportion, on the other hand, determined the susceptibility of early tone encoding to automatic change driven by incoming visual update processing. The findings provide a hierarchical account of the role of unimodal and cross-modal sources of uncertainty in the neural encoding of sound dynamics in a multimodal attention task.
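
A temporal response function of this kind is commonly estimated as a regularized linear mapping from a stimulus time series (e.g., tone-pip onsets) to the EEG. A minimal ridge-regression sketch on simulated data; the sampling rate, lag window, and response kernel are assumptions, not parameters from the study:

```python
import numpy as np

def estimate_trf(stimulus, eeg, fs, tmin=-0.1, tmax=0.4, ridge=1.0):
    """Estimate a temporal response function by ridge regression of the EEG
    on time-lagged copies of the stimulus (a simplified mTRF-style model)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    # Design matrix: one column per lag of the stimulus time series.
    X = np.column_stack([np.roll(stimulus, lag) for lag in lags])
    w = np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w

fs = 128                      # assumed sampling rate (Hz)
n = fs * 60                   # one minute of simulated data
rng = np.random.default_rng(3)

stimulus = np.zeros(n)
stimulus[rng.choice(n, 120, replace=False)] = 1.0        # random tone-pip onsets
true_kernel = np.hanning(int(0.2 * fs))                  # assumed evoked-response shape
eeg = np.convolve(stimulus, true_kernel, mode="full")[:n] + rng.normal(0, 0.5, n)

times, trf = estimate_trf(stimulus, eeg, fs)
print(f"Estimated TRF peaks at {times[np.argmax(trf)] * 1000:.0f} ms after tone onset")
```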

