Reward Rapidly Enhances Visual Perception

2021 ◽  
pp. 095679762110218
Author(s):  
Phillip (Xin) Cheng ◽  
Anina N. Rich ◽  
Mike E. Le Pelley

Rewards exert a deep influence on our cognition and behavior. Here, we used a paradigm in which reward information was provided at either encoding or retrieval of a brief, masked stimulus to show that reward can also rapidly modulate perceptual encoding of visual information. Experiment 1 (n = 30 adults) showed that participants’ response accuracy was enhanced when a to-be-encoded grating signaled high reward relative to low reward, but only when the grating was presented very briefly and participants reported that they were not consciously aware of it. Experiment 2 (n = 29 adults) showed that there was no difference in participants’ response accuracy when reward information was instead provided at the stage of retrieval, ruling out an explanation of the reward-modulation effect in terms of differences in motivated retrieval. Taken together, our findings provide behavioral evidence consistent with a rapid reward modulation of visual perception, which may not require consciousness.

2020 ◽  
Author(s):  
Phillip Cheng ◽  
Anina N. Rich ◽  
Mike Le Pelley

Rewards exert a deep influence on our cognition and behaviour. Here, we used a paradigm in which reward information was provided at either encoding or retrieval of a brief, masked stimulus to show that reward can also rapidly modulate early neural processing of visual information, prior to consciousness. Experiment 1 showed enhanced response accuracy when a to-be-encoded grating signalled high reward relative to low reward, but only when the grating was presented very briefly and participants were not consciously aware of it. Experiment 2 showed no difference in response accuracy when reward information was instead provided at the stage of retrieval, ruling out an explanation of the reward-modulation effect in terms of differences in motivated retrieval. Taken together, our findings provide the first behavioural evidence for a rapid reward-modulation of visual perception, which does not seem to require consciousness.


2010 ◽  
Vol 33 (2-3) ◽  
pp. 61-83 ◽  
Author(s):  
Joseph Henrich ◽  
Steven J. Heine ◽  
Ara Norenzayan

Behavioral scientists routinely publish broad claims about human psychology and behavior in the world's top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggest that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.


2021 ◽  
Vol 12 ◽  
Author(s):  
Nao Kokaji ◽  
Masashi Nakatani

Among the senses involved in eating, our subjective sense of taste is significantly influenced by visual perception. Previous research in appetite science has reported that when we estimate the quality of food in daily life, we rely considerably on visual information. This study focused on the multimodal mental imagery evoked by the visual information of food served on a plate and examined the effect of the peripheral visual information of a garnish on the sensory impression of the main dish. A sensory evaluation experiment was conducted to evaluate impressions of food photographs, and multivariate analysis was used to structure the sensory values. We found that the appearance of a garnish placed on the plate close to the main dish acts as a visual appetite stimulant, and that color, moisture, and taste attributes (sourness and spiciness) play a major role in the acceptability of food. To stimulate appetite, it is important that the main dish appears warm. These results can be used to modulate the eating experience and stimulate appetite, for example by superimposing visual information with augmented-reality technology or by presenting appropriate real garnishes.
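The abstract does not name the specific multivariate method used to structure the sensory ratings. As a minimal illustrative sketch only, the example below applies principal component analysis to a small, fabricated ratings matrix; the attribute names and values are assumptions for illustration, not data from the study.

```python
# Minimal sketch: structuring sensory-evaluation ratings with PCA.
# The attributes and ratings below are fabricated illustrations,
# not the study's data or its actual analysis pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

attributes = ["warmth", "colorfulness", "moisture", "sourness", "spiciness", "appetite"]

# Rows = food photographs, columns = mean ratings on a 7-point scale (example values).
ratings = np.array([
    [6.1, 5.4, 3.2, 2.1, 2.8, 5.9],
    [3.0, 4.8, 5.5, 4.2, 1.9, 4.1],
    [5.2, 3.1, 2.8, 1.5, 4.6, 5.0],
    [2.4, 6.0, 6.1, 3.8, 2.2, 3.6],
    [4.9, 4.4, 4.0, 2.9, 3.1, 4.8],
])

# Standardize attributes so each contributes equally, then extract two components.
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(ratings))
print(scores)  # 2-D coordinates summarizing each photograph's sensory profile

# Loadings on the first component indicate which attributes drive the structure.
for name, loading in zip(attributes, pca.components_[0]):
    print(f"{name}: {loading:+.2f}")
```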


2018 ◽  
Vol 231 ◽  
pp. 01017 ◽  
Author(s):  
Piotr Tomczuk ◽  
Marcin Chrzanowicz

Advertising media are located so that they are visible to road users, and their visual information reaches the driver regardless of whether the medium is located in or outside the lane. Road managers have limited possibilities to influence the technical parameters of media placed outside the roadway; however, the light emitted from an advertising medium toward the driver may be subject to limitations. The subject literature indicates the necessity of considering emission parameters in relation to the technical parameters of the media. The current requirements in Poland specify only the maximum luminance values of advertising media; a number of other relevant light-emission parameters, which may degrade the driver's visual perception of the road environment, are not taken into account. The need to introduce guidelines for the installation of roadside advertising requires the presentation and discussion of specific technical parameters concerning light emission from the advertising medium. The article discusses the emission parameters of advertising media and gives examples of measurements of individual lighting parameters that can be registered in field conditions.


Author(s):  
Ralph Schumacher

The aim of this paper is to defend a broad concept of visual perception, according to which it is a sufficient condition for visual perception that subjects receive visual information in a way which enables them to give reliably correct answers about the objects presented to them. According to this view, blindsight, non-epistemic seeing, and conscious visual experience count as proper types of visual perception. This leads to two consequences concerning the role of the phenomenal qualities of visual experiences. First, phenomenal qualities are not necessary in order to see something, because in the case of blindsight, subjects can see objects without experiencing phenomenal qualities. Second, they cannot be intentional properties, since they are not essential properties of visual experiences, and because the content of visual experiences cannot be constituted by contingent properties.


2019 ◽  
Vol 3 (1) ◽  
pp. 17
Author(s):  
Ramya Akula ◽  
Ivan Garibay

Social networking platforms connect people from all around the world. Because of their user-friendliness and easy accessibility, their traffic is increasing drastically. Such active participation has caught the attention of many research groups focusing on understanding human behavior to study the dynamics of these social networks. Perceiving these networks is often hard, mainly due to either the large size of the data involved or the ineffective use of visualization strategies. This work introduces VizTract to ease the visual perception of complex social networks. VizTract is a two-level graph-abstraction visualization tool designed to visualize both hierarchical and adjacency information in a tree structure. We use the Facebook dataset from the Stanford Network Analysis Project (SNAP). In these data, social groups are referred to as circles, social network users as nodes, and interactions as edges between the nodes. Our approach is to present a visual overview that represents the interactions between circles, then let the user navigate this overview and select nodes in the circles to obtain more information on demand. VizTract aims to reduce visual clutter without any loss of information during visualization, enhancing the visual perception of complex social networks to help better understand the dynamics of the network structure. Within a single frame, VizTract not only reduces complexity but also avoids node redundancy and reduces rendering time. The visualization techniques used in VizTract are force-directed layout, circle packing, cluster dendrograms, and hierarchical edge bundling. Furthermore, to enhance visual information perception, VizTract provides interaction techniques such as selection, path highlighting, mouse-hover, and bundling strength. This approach helps social network researchers display large networks in a visually effective way that eases interpretation and analysis. We conducted a study to evaluate the user experience of the system and collected information about participants' perception via a survey. The goal of the study was to understand how humans interpret the network when it is visualized using different methods. Our results indicate that users heavily prefer visualization techniques that aggregate the information and the connectivity within a given space, such as hierarchical edge bundling.
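VizTract's own code is not included here. As a rough sketch of just one of the techniques named above, a force-directed layout of the SNAP Facebook edge list, the example below uses networkx and matplotlib; the local file name and the styling choices are assumptions, and the dataset can be downloaded from https://snap.stanford.edu/data/ego-Facebook.html.

```python
# Rough sketch: force-directed layout of the SNAP Facebook graph
# (one technique the abstract names); this is not VizTract's implementation.
# Assumes "facebook_combined.txt" has been downloaded and unpacked locally.
import networkx as nx
import matplotlib.pyplot as plt

# Each line of the edge list is "node_a node_b".
G = nx.read_edgelist("facebook_combined.txt", nodetype=int)

# Fruchterman-Reingold (force-directed) layout; fixed seed for reproducibility.
pos = nx.spring_layout(G, seed=42)

plt.figure(figsize=(8, 8))
nx.draw_networkx_edges(G, pos, alpha=0.05)   # faint edges to reduce clutter
nx.draw_networkx_nodes(G, pos, node_size=5)  # small nodes for a dense graph
plt.axis("off")
plt.show()
```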


2003 ◽  
Vol 15 (2) ◽  
pp. 259-275 ◽  
Author(s):  
ANN E. BIGELOW

There is little documentation of how and when joint attention emerges in blind infants because the study of this ability has been predominantly reliant on visual information. Ecological self-knowledge, which is necessary for joint attention, is impaired in blind infants; its emergence is evidenced by their reaching for objects in response to external cues, which also marks the beginning of their Stage 4 understanding of space and objects. Entry into Stage 4 should therefore occur before joint attention emerges in these infants. In a case study of two totally blind infants, the development of joint attention was examined longitudinally during Stage 4 in monthly sessions involving interactions with objects and familiar adults. The interactions were scored for behavior preliminary to joint attention, behavior liberally construed as joint attention, and behavior conservatively construed as joint attention. Behavior preliminary to joint attention occurred throughout Stage 4; behavior suggestive of joint attention by both liberal and conservative standards emerged initially in Stage 4 and became prevalent by mid to late Stage 4. The findings are discussed in terms of how they inform our thinking about the development of joint attention with respect to the importance of vision, cognition, social context, language, and early self-knowledge.


2018 ◽  
Author(s):  
Rachel N. Denison ◽  
Shlomit Yuval-Greenberg ◽  
Marisa Carrasco

Our visual input is constantly changing, but not all moments are equally relevant. Temporal attention, the prioritization of visual information at specific points in time, increases perceptual sensitivity at behaviorally relevant times. The dynamic processes underlying this increase are unclear. During fixation, humans make small eye movements called microsaccades, and inhibiting microsaccades improves perception of brief stimuli. Here we asked whether temporal attention changes the pattern of microsaccades in anticipation of brief stimuli. Human observers (female and male) judged brief stimuli presented within a short sequence. They were given either an informative precue to attend to one of the stimuli, which was likely to be probed, or an uninformative (neutral) precue. We found strong microsaccadic inhibition before the stimulus sequence, likely due to its predictable onset. Critically, this anticipatory inhibition was stronger when the first target in the sequence (T1) was precued (task-relevant) than when the precue was uninformative. Moreover, the timing of the last microsaccade before T1 and the first microsaccade after T1 shifted, such that both occurred earlier when T1 was precued than when the precue was uninformative. Finally, the timing of the nearest pre- and post-T1 microsaccades affected task performance. Directing voluntary temporal attention therefore impacts microsaccades, helping to stabilize fixation at the most relevant moments, over and above the effect of predictability. Just as saccading to a relevant stimulus can be an overt correlate of the allocation of spatial attention, precisely timed gaze stabilization can be an overt correlate of the allocation of temporal attention.

Significance statement: We pay attention at moments in time when a relevant event is likely to occur. Such temporal attention improves our visual perception, but how it does so is not well understood. Here we discovered a new behavioral correlate of voluntary, or goal-directed, temporal attention. We found that the pattern of small fixational eye movements called microsaccades changes around behaviorally relevant moments in a way that stabilizes the position of the eyes. Microsaccades during a brief visual stimulus can impair perception of that stimulus. Therefore, such fixation stabilization may contribute to the improvement of visual perception at attended times. This link suggests that in addition to cortical areas, subcortical areas mediating eye movements may be recruited with temporal attention.

