visual bias
Recently Published Documents


TOTAL DOCUMENTS: 43 (five years: 12)
H-INDEX: 9 (five years: 1)

2021, Vol 8 (3), pp. 351-375
Author(s): Vanessa A. D. Wilson, Carolin Kade, Julia Fischer

Visual bias in social cognition studies is often interpreted to indicate preference, yet it is difficult to elucidate whether this translates to social preference. Moreover, visual bias is often framed in terms of surprise or recognition. It is thus important to examine whether an interpretation of preference is warranted in looking time studies. Here, using touchscreen training, we examined (1) looking time to non-social images in an image viewing task, and (2) preference for non-social images in a paired choice task, in captive long-tailed macaques, Macaca fascicularis. In a touchscreen test phase, we examined (3) looking time to social images in a viewing task, and (4) preference for social images in a paired choice task. Finally, we examined (5) looking time to social images in a non-test environment. For social content, the monkeys did not exhibit clear preferences for any category (conspecific/heterospecific, in-group/out-group, kin/non-kin, young/old) in the explicit choice paradigm, nor did they differentiate between images in the viewing tasks, thus hampering our interpretation of the data. Post-hoc analysis of the training data, however, revealed a visual bias towards images of food and objects over landscapes in the viewing task. Similarly, across choice-task training sessions, food and object images were chosen more frequently than landscapes. This suggests that the monkeys' gaze may indeed indicate preference, but this only became apparent for non-social stimuli. Why these monkeys showed no biases in the social domain remains enigmatic. To better answer questions about attention to social stimuli, we encourage future research to examine behavioral measures alongside looking time.


PLoS ONE, 2021, Vol 16 (4), e0249950
Author(s): Rebecca Scheurich, Caroline Palmer, Batu Kaya, Caterina Agostino, Signy Sheldon

Although it is understood that episodic memories of everyday events involve encoding a wide array of perceptual and non-perceptual information, it is unclear how these distinct types of information are recalled. To address this knowledge gap, we examined how perceptual (visual versus auditory) and non-perceptual details described within a narrative, a proxy for everyday event memories, were retrieved. Based on previous work indicating a bias for visual content, we hypothesized that participants would be most accurate at recalling visually described details and would tend to falsely recall non-visual details with visual descriptors. In Study 1, participants watched videos of a protagonist telling narratives of everyday events under three conditions: with visual, auditory, or audiovisual details. All narratives contained the same non-perceptual content. Participants' free recall of these narratives under each condition was scored for the type of details recalled (perceptual, non-perceptual) and whether the detail was recalled with gist or verbatim memory. We found that participants were more accurate at gist and verbatim recall for visual perceptual details. This visual bias was also evident in the errors made during recall: participants tended to incorrectly recall details with visual information, but not with auditory information. Study 2 tested for this pattern of results when the narratives were presented in an auditory-only format. Results conceptually replicated Study 1 in that there was still a persistent visual bias in what was recollected from the complex narratives. Together, these findings indicate a bias for recruiting visualizable content to construct complex multi-detail memories.


2021, pp. 147035722097406
Author(s): Jürgen Maier, Isabella Glogger, Lukas P Otto, Jennifer Bast

Media professionals make use of various production techniques in the visual portrayal of politicians on television. A large body of literature indicates that these techniques exert varying influence on, for example, how these actors are evaluated, raising the question of whether politicians are depicted equally. Focusing on televised debates, this content analysis of five German debates aims to determine whether there is a visual bias in the portrayal of candidates depending on party affiliation, gender and role. Among other forms of bias, the authors find differences in the use of camera movements and angles depending on the candidate's gender and party affiliation.


2020, Vol 17 (2), pp. 145-162
Author(s): Jaqueline Vasconcelos Braga, Tiago Barros Pontes e Silva, Virgínia Tiradentes Souto

The contemporary world is characterized by the large volume of information it produces. Selecting and reading that information through research reports or news stories, however, remains a challenge. Among the obstacles, information biases stand out: they arise from the way journalists or researchers treat the data, or are even introduced intentionally to subvert the representation of reality built from the data. This study therefore discusses the interpretation of visual information in graphical representations of statistical calculations, in order to contextualize some of the main visual devices through which research is biased. To that end, it reviews the main modes of research bias arising from statistical representations and data visualization, and identifies some of the steps at which bias is translated into visual information. Based on this survey, it is suggested that a visual understanding of data visualization resources can at least prompt readers to question possible bias.
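One of the most widely discussed ways bias is translated into visual information is axis truncation, where a shortened value axis exaggerates small differences. The sketch below is purely illustrative and is not taken from the paper; the data and labels are invented.

```python
import matplotlib.pyplot as plt

# Invented survey results: the underlying difference is only 2 percentage points
groups = ["Group A", "Group B"]
values = [50.0, 52.0]

fig, (ax_full, ax_cut) = plt.subplots(1, 2, figsize=(8, 3))

# Unbiased presentation: the axis starts at zero, so the bars look nearly equal
ax_full.bar(groups, values)
ax_full.set_ylim(0, 60)
ax_full.set_title("Full axis")

# Biased presentation: truncating the axis makes the same gap look dramatic
ax_cut.bar(groups, values)
ax_cut.set_ylim(49.5, 52.5)
ax_cut.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```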


Author(s): Shaobo Min, Hantao Yao, Hongtao Xie, Chaoqun Wang, Zheng-Jun Zha, ...

Leonardo, 2020, Vol 53 (3), pp. 263-267
Author(s): Cécile Chevalier, Chris Kiefer

As augmented reality (AR) quickly evolves with new technological practice, there is a growing need to question and reevaluate its potential as a medium for creative expression. The authors discuss AR within computational art, framed in terms of AR as a medium, AR aesthetics and applications. The Forum for Augmented Reality Immersive Instruments (ARImI), a two-day event on AR, highlights both possibilities and fundamental concerns for continuing artwork in this field, including visual bias, sensory modalities, interactivity and performativity. The authors offer a new definition of AR as real-time, computationally mediated perception.


2020, Vol 10 (2), pp. 100
Author(s): Satoshi Nobusako, Taeko Tsujimoto, Ayami Sakai, Takashi Shuto, Emi Furukawa, ...

Although the media can have both negative and positive effects on children's cognitive and motor functions, its influence on their perceptual bias and manual dexterity is unclear. We therefore investigated the associations between media viewing time, media preference level, perceptual bias and manual dexterity in 100 school-aged children. Questionnaires completed by children and their parents were used to ascertain media viewing time and preference levels. Perceptual bias and manual dexterity were measured using a visual-tactile temporal order judgment task and the Movement Assessment Battery for Children (2nd edition), respectively. There were significant positive correlations between age and media viewing time and between media viewing time and media preference level. There was also a significant negative correlation between visual bias and manual dexterity. Hierarchical multiple regression analysis revealed that increasing visual bias was a significant predictor of decreasing manual dexterity. Further, children with low manual dexterity showed a significantly greater visual bias than children with high manual dexterity matched for age and gender. The present results demonstrate that, in school-aged children, media viewing was not associated with perceptual bias or manual dexterity, but there was a significant association between perceptual bias and manual dexterity.
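For readers unfamiliar with the analysis named in the abstract, a hierarchical (blockwise) multiple regression enters covariates first and the predictor of interest second, then inspects the increment in explained variance. The sketch below is a generic illustration under assumed column names (age, gender coded 0/1, media_time, visual_bias, dexterity), not the authors' actual data or code.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data file and column names; none of this comes from the study
df = pd.read_csv("children_data.csv")

# Step 1: covariates only
X1 = sm.add_constant(df[["age", "gender", "media_time"]])
step1 = sm.OLS(df["dexterity"], X1).fit()

# Step 2: add the predictor of interest (visual bias)
X2 = sm.add_constant(df[["age", "gender", "media_time", "visual_bias"]])
step2 = sm.OLS(df["dexterity"], X2).fit()

# The change in R^2 shows how much variance visual bias explains over and
# above the covariates; the sign of its coefficient gives the direction.
delta_r2 = step2.rsquared - step1.rsquared
print(f"Step 1 R^2 = {step1.rsquared:.3f}, Step 2 R^2 = {step2.rsquared:.3f}")
print(f"Delta R^2 = {delta_r2:.3f}")
print(f"visual_bias coefficient = {step2.params['visual_bias']:.3f}")
```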


2019
Author(s): Sudha Sharma, Sharba Bandyopadhyay

Abstract: In a dynamic environment with rapidly changing contingencies, the orbitofrontal cortex (OFC) guides flexible behavior through coding of stimulus value. Although stimulus-evoked responses in the OFC are known to convey outcome, baseline sensory response properties in the mouse OFC are poorly understood. To understand the mechanisms involved in stimulus value/outcome encoding, it is important to know the response properties of single neurons in the mouse OFC purely from a sensory perspective. Ruling out effects of behavioral state, memory and other factors, we studied anesthetized mouse OFC responses to auditory, visual and audiovisual/multisensory stimuli, multisensory associations and the sensory-driven input organization of the OFC. Almost all OFC single neurons were found to be multisensory in nature, with sublinear to supralinear integration of the component unisensory stimuli. With a novel multisensory oddball stimulus set, we show that the OFC receives both unisensory and multisensory inputs, further corroborated by retrograde tracers showing labeling in secondary auditory and visual cortices, which we find to also show similar multisensory integration and responses. With long audiovisual pairing/association, we show rapid plasticity in OFC single neurons, with a strong visual bias, leading to a strong depression of auditory responses and an effective enhancement of visual responses. Such rapid multisensory association-driven plasticity is absent in the auditory and visual cortices, suggesting its emergence in the OFC. Based on the above results, we propose a hypothetical local circuit model in the OFC that integrates auditory and visual information and participates in computing stimulus value in dynamic multisensory environments.

Significance Statement: Properties and modification of sensory responses of neurons in the orbitofrontal cortex (OFC) involved in flexible behavior through stimulus value/outcome encoding are poorly understood. Such responses are critical in providing the framework for the encoding of stimulus value based on behavioral context while also directing plastic changes in sensory regions. The mouse OFC is found to be primarily multisensory with varied nonlinear interactions, explained by unisensory and multisensory inputs. Audiovisual associations cause rapid plastic changes in the OFC, which bias visual responses while suppressing auditory responses. Similar plasticity was absent in the sensory cortices. Thus, the observed intrinsic visual bias in the OFC weighs visual stimuli more than associated auditory stimuli in value encoding in a dynamic multisensory environment.
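The terms "sublinear" and "supralinear" in the abstract are conventionally defined relative to the sum of the unisensory responses; the paper may use a different index, so the snippet below is only a generic illustration with invented firing rates.

```python
def additivity_index(r_audio: float, r_visual: float, r_audiovisual: float) -> float:
    """Compare a multisensory response to the linear sum of its unisensory parts.

    Positive values indicate supralinear integration, negative values sublinear,
    and values near zero additive integration. This is one common convention,
    not necessarily the metric used in the paper.
    """
    linear_sum = r_audio + r_visual
    return (r_audiovisual - linear_sum) / linear_sum

# Invented firing rates (spikes/s) for two hypothetical OFC neurons
print(additivity_index(5.0, 8.0, 20.0))  # > 0: supralinear
print(additivity_index(5.0, 8.0, 9.0))   # < 0: sublinear
```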


2019, Vol 6 (1), pp. 161-182
Author(s): George Jaramillo, Lynne Mennie

Textile patterns, whether printed, knitted, woven or embroidered, tend to be inspired by and created in response to the visual environment. The soundscape is a significant component of the embodied multisensory landscape, from the buzz of fluorescent tube lights in an office to the intermittent roar of water flowing in a river; no space is ever silent (Schafer 1994). Attunement to the environmental soundscape provides inspiration in music, art and, in this case, the creation of textile patterns, challenging the visual bias of pattern creation. In this ongoing study, audio sources ranging from bird song to horses galloping are visualized as spectrograms, forming contemporary landscape-inspired textile patterns. Spectrograms are a visualization of an audio spectrum in which the intensity of multiple frequencies is displayed across time, rather than simply the pitch and amplitude of the sound source. These spectrograms are then transformed into textile patterns through the interaction between a maker's existing skill set and digital software. By sharing this process with a group of textile practitioners, this sound-to-visual approach forms the foundation of a co-created textile pattern design. In this way, the process of soundscape-inspired design challenges the visual bias of existing textile patterns, contributing to the sensory ethnography of the contemporary landscape. Here we explore key insights that emerged from the project (experimenting, collaborating and disrupting) through the imagery of process and pattern making, as well as through the narratives and reflections of the practitioners, presenting a collective visual encounter. In the end, the project opens dialogues to collaboratively understand and relate to the local soundscape as a source of inspiration for pattern making, and begins to formalize a design narrative based on the non-visual environment.
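As a companion to the abstract's description of spectrograms, the sketch below shows one common way to compute and plot a spectrogram with SciPy and Matplotlib. The file name and analysis parameters are placeholders, not taken from the project.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Load a field recording (placeholder file name) and collapse stereo to mono
sample_rate, audio = wavfile.read("birdsong.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)

# Rows of `power` are frequency bins, columns are short time windows
freqs, times, power = spectrogram(audio, fs=sample_rate, nperseg=1024)

# Plot on a decibel scale so quieter harmonics remain visible as pattern detail
plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram of a soundscape recording")
plt.show()
```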

