Effects of Prime Task on Affective Priming by Facial Expressions of Emotion

2007 ◽  
Vol 10 (2) ◽  
pp. 209-217 ◽  
Author(s):  
Luis Aguado ◽  
Ana Garcia-Gutierrez ◽  
Ester Castañeda ◽  
Cristina Saugar

Priming of affective word evaluation by pictures of faces showing positive and negative emotional expressions was investigated in two experiments that used a double-task procedure in which participants were asked to respond to the prime or to the target on different trials. The experiments varied, between subjects, the prime task assignment and the prime-target interval (SOA, stimulus onset asynchrony). Significant congruency effects (that is, faster word evaluation when prime and target had the same valence than when they were of opposite valence) were observed in both experiments. When the prime task oriented the subjects to an affectively irrelevant property of the faces (their gender), priming was observed at an SOA of 300 ms but not at an SOA of 1000 ms (Experiment 1). However, when the prime task explicitly oriented the subjects to the valence of the face, priming was observed at both SOA durations (Experiment 2). These results show, first, that affective priming by pictures of facial emotion can be obtained even when the subject has an explicit goal to process a non-affective property of the prime. Second, the sensitivity of the priming effect to SOA duration seems to depend on whether it is mediated by intentional or unintentional activation of the valence of the face prime.
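The congruency effect described here is a simple difference of mean reaction times between incongruent and congruent trials. As an illustrative sketch (the trial tuple format below is an assumption, not the authors' data format), it can be computed as:

```python
# Minimal sketch of a congruency-effect computation. The trial format
# (prime_valence, target_valence, rt_ms) is hypothetical.

def congruency_effect(trials):
    """Mean RT on incongruent trials minus mean RT on congruent trials (ms).

    A positive value indicates priming: faster evaluation when prime and
    target share valence."""
    congruent = [rt for prime, target, rt in trials if prime == target]
    incongruent = [rt for prime, target, rt in trials if prime != target]
    return (sum(incongruent) / len(incongruent)
            - sum(congruent) / len(congruent))

trials = [
    ("positive", "positive", 540),  # congruent
    ("negative", "negative", 550),  # congruent
    ("positive", "negative", 590),  # incongruent
    ("negative", "positive", 600),  # incongruent
]
print(congruency_effect(trials))  # → 50.0
```

A per-SOA comparison of this quantity is what distinguishes the 300 ms and 1000 ms conditions in the experiments above.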

2012 ◽  
Vol 24 (7) ◽  
pp. 1806-1821 ◽
Author(s):  
Bernard M. C. Stienen ◽  
Konrad Schindler ◽  
Beatrice de Gelder

Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although feedback arising from inferotemporal cortex is theoretically likely to be blocked at an SOA of 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.


2020 ◽  
Vol 13 (5) ◽  
Author(s):  
Jacob G. Martin ◽  
Charles E. Davis ◽  
Maximilian Riesenhuber ◽  
Simon J. Thorpe

Here, we provide an analysis of the microsaccades that occurred during continuous visual search and targeting of small faces that we pasted either into cluttered background photos or into a simple gray background. Subjects continuously used their eyes to target single 3-degree upright or inverted faces in changing scenes. As soon as the participant's gaze reached the target face, a new face was displayed in a different, random location. Regardless of the experimental context (e.g., background scene, no background scene) or target eccentricity (from 4 to 20 degrees of visual angle), we found that the microsaccade rate dropped to near-zero levels within only 12 milliseconds after stimulus onset. There were almost never any microsaccades after stimulus onset and before the first saccade to the face. One subject completed 118 consecutive trials without a single microsaccade. However, in about 20% of the trials, there was a single microsaccade that occurred almost immediately after the preceding saccade's offset. These microsaccades were task oriented, because their facial landmark targeting distributions matched those of saccades within both the upright and inverted face conditions. Our findings show that a single feedforward pass through the visual hierarchy for each stimulus is likely all that is needed to sustain prolonged continuous visual search. In addition, we provide evidence that microsaccades can serve perceptual functions such as correcting saccades or effectuating task-oriented goals during continuous visual search.
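The rate analysis described here bins microsaccade onsets relative to stimulus onset. A minimal sketch, with assumed event times, window, and bin width (not the authors' parameters or data format):

```python
# Hedged sketch of a stimulus-locked microsaccade rate analysis.
# Event times, window, and bin width below are illustrative assumptions.

def rate_curve(ms_onsets, stim_onsets, window=(-200, 200), bin_ms=50):
    """Microsaccade rate (events/s) in bins time-locked to stimulus onset.

    ms_onsets and stim_onsets are event times in ms on a common clock."""
    lo, hi = window
    nbins = (hi - lo) // bin_ms
    counts = [0] * nbins
    for s in stim_onsets:
        for m in ms_onsets:
            rel = m - s  # microsaccade time relative to this stimulus onset
            if lo <= rel < hi:
                counts[(rel - lo) // bin_ms] += 1
    # counts -> events per second, averaged over trials
    return [c * 1000.0 / (len(stim_onsets) * bin_ms) for c in counts]

# Two trials; each has one microsaccade shortly before its stimulus onset and
# none after, mirroring the near-zero post-onset rate reported above.
print(rate_curve([900, 2950], [1000, 3000]))
# → [0.0, 0.0, 10.0, 10.0, 0.0, 0.0, 0.0, 0.0]
```

The post-onset bins staying at zero is the signature the abstract describes; a real pipeline would first detect microsaccades from eye-velocity traces.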


1985 ◽  
Vol 60 (3) ◽  
pp. 995-998 ◽  
Author(s):  
Tamotsu Sohmiya ◽  
Kazuko Sohmiya

A method for analyzing the temporal suppression mechanism in binocular rivalry is described. A test pattern was presented to one eye and a suppressing pattern to the other eye after varying time intervals. The subject was instructed to report the frequency of nonsuppression phases of the test pattern immediately after presentation of the suppressing pattern. Analysis indicated that the test pattern was never suppressed at the 0-msec. stimulus onset asynchrony and that the nonsuppression probabilities decreased as the onset asynchrony increased. Moreover, resistance to contralateral suppression was greater when the test pattern was projected to the dominant eye.


2016 ◽  
Author(s):  
Rasa Gulbinaite ◽  
Barkin İlhan ◽  
Rufin VanRullen

The modulatory role of spontaneous brain oscillations on perception of threshold-level stimuli is well established. Here, we provide evidence that alpha-band (7-14 Hz) oscillations not only modulate but also can drive perception. We used the “triple-flash” illusion: Occasional perception of three flashes when only two spatially-coincident veridical ones are presented, separated by ~100 ms. The illusion was proposed to result from superposition of two hypothetical oscillatory impulse response functions (IRF) generated in response to each flash (Bowen, 1989). In Experiment 1, we varied stimulus onset asynchrony (SOA) and validated Bowen's theory: the optimal SOA for the illusion to occur was correlated, across subjects, with the subject-specific IRF period. Experiment 2 revealed that pre-stimulus parietal alpha EEG phase and power, as well as post-stimulus alpha phase-locking, together determine the occurrence of the illusion on a trial-by-trial basis. Thus, oscillatory reverberations create something out of nothing – a third flash where there are only two.
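Bowen's superposition account can be sketched numerically: two damped oscillatory IRFs offset by the SOA are summed, and an SOA matching the IRF period makes them add in phase, yielding extra suprathreshold response peaks. The IRF shape and all parameters below are assumptions for illustration only, not fitted values:

```python
import math

# Illustrative sketch of Bowen's superposition account. The damped-cosine
# IRF, its period/decay, and the peak threshold are hypothetical choices.

def irf(t, period=100.0, decay=120.0):
    """Hypothetical damped-cosine impulse response (t in ms); zero pre-onset."""
    if t < 0:
        return 0.0
    return math.exp(-t / decay) * math.cos(2 * math.pi * t / period)

def response(t, soa):
    """Superposed response to two flashes separated by soa ms."""
    return irf(t) + irf(t - soa)

def count_peaks(soa, t_max=400, dt=1.0, thresh=0.2):
    """Count suprathreshold local maxima of the superposed response."""
    vals = [response(i * dt, soa) for i in range(int(t_max / dt))]
    return sum(
        1 for i in range(1, len(vals) - 1)
        if vals[i - 1] < vals[i] > vals[i + 1] and vals[i] > thresh
    )

# In-phase SOA (one IRF period) produces more suprathreshold peaks than an
# anti-phase SOA -- a candidate correlate of the illusory third flash.
assert count_peaks(100) > count_peaks(50)
```

This mirrors the Experiment 1 logic: the SOA that best elicits the illusion should track each subject's IRF period.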


2021 ◽  
Vol 11 ◽  
Author(s):  
Zhe Shang ◽  
Yingying Wang ◽  
Taiyong Bi

It has long been suggested that emotion, especially threatening emotion, facilitates early visual perception to promote adaptive responses to potential threats in the environment. Here, we tested whether and how fearful emotion affects the basic visual ability of visual acuity. An adapted version of Posner's spatial cueing task was employed, with fearful and neutral faces as cues and a Vernier discrimination task as the probe. The time course of the emotional attention effect was examined by varying the stimulus onset asynchrony (SOA) between cue and probe. Two independent experiments (Experiments 1 and 3) consistently demonstrated that the brief presentation of a fearful face increased visual acuity at its location. The facilitation of perceptual sensitivity was detected at an SOA of around 300 ms when the face cues were presented for both 250 ms (Experiment 1) and 150 ms (Experiment 3). This effect cannot be explained by physical differences between the fearful and neutral faces, because no improvement was found when the faces were presented inverted (Experiment 2). In the last experiment (Experiment 4), the face cues were flashed very briefly (17 ms), and we did not find any improvement induced by the fearful face. Overall, we provide evidence that emotion interacts with attention to affect basic visual functions.


2007 ◽  
Vol 35 (1) ◽  
pp. 95-106 ◽  
Author(s):  
Adriaan Spruyt ◽  
Dirk Hermans ◽  
Jan De Houwer ◽  
Heleen Vandromme ◽  
Paul Eelen

Perception ◽  
1982 ◽  
Vol 11 (4) ◽  
pp. 415-426 ◽  
Author(s):  
Adam Reeves

Different underlying processes account for the descending and ascending portions of the metacontrast U-shaped function obtained in the flanking-masks paradigm. One or another process is dominant on each trial. Each process is monotonic with stimulus onset asynchrony in the region in which it can be measured. The two processes may be isolated by asking the subject to report on each trial not only target visibility but also whether target and mask appear simultaneous or not. Standard U-shaped functions could be obtained only as an artifact of averaging across these different types of trials.
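The averaging argument above can be made concrete with a toy model, under assumed (not fitted) functions for the two processes and the probability of which one dominates a trial: each component is monotonic in SOA, yet the cross-trial mean is U-shaped.

```python
# Hedged toy model of the averaging artifact: two monotonic processes plus an
# SOA-dependent mixing probability. All numbers are illustrative assumptions.

soas = [20, 40, 60, 80, 100]  # hypothetical SOAs in ms

def visibility_A(soa):
    """Descending process: target visibility falls monotonically with SOA."""
    return 1.0 - soa / 125.0

def visibility_B(soa):
    """Ascending process: target visibility rises monotonically with SOA."""
    return soa / 125.0

def p_A(soa):
    """Probability that process A dominates a given trial at this SOA."""
    return max(0.0, 1.0 - soa / 100.0)

def mean_visibility(soa):
    """Average visibility across trials, mixing the two trial types."""
    p = p_A(soa)
    return p * visibility_A(soa) + (1 - p) * visibility_B(soa)

curve = [round(mean_visibility(s), 3) for s in soas]
print(curve)  # → [0.704, 0.536, 0.496, 0.584, 0.8]
```

The printed curve dips at intermediate SOA although neither component does, which is Reeves's point: sorting trials by the simultaneity report isolates the monotonic pieces.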


2020 ◽  
Author(s):  
Abimael Francisco do Nascimento

The general objective of this study is to analyze the postulate of the ethics of otherness as first philosophy, presented by Emmanuel Levinas. It is a proposal that runs through Levinas' thinking from its theoretical foundations to his philosophical criticism. Levinas' thought presents itself as a new way of thinking, as a critique of ontology and transcendental philosophy. For him, the preoccupation with knowledge and with being caused the other to be forgotten, subsuming the other into totality. Levinas proposes the ethics of otherness as sensitivity to the other. The subject says "here I am," making itself responsible for the other in an infinite way, in a transcendence without return to itself, becoming hostage to the other, under an irrefutable responsibility. The idea of the infinite, present in the face of the other, points to a responsibility such that the more one assumes it, the more responsible one becomes, up to the point of substitution for the other.


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic, the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.

