Evidence for a visual bias when recalling complex narratives

PLoS ONE, 2021, Vol 16 (4), pp. e0249950
Author(s): Rebecca Scheurich, Caroline Palmer, Batu Kaya, Caterina Agostino, Signy Sheldon

Although it is understood that episodic memories of everyday events involve encoding a wide array of perceptual and non-perceptual information, it is unclear how these distinct types of information are recalled. To address this knowledge gap, we examined how perceptual (visual versus auditory) and non-perceptual details described within a narrative, a proxy for everyday event memories, were retrieved. Based on previous work indicating a bias for visual content, we hypothesized that participants would be most accurate at recalling visually described details and would tend to falsely recall non-visual details with visual descriptors. In Study 1, participants watched videos of a protagonist telling narratives of everyday events under three conditions: with visual, auditory, or audiovisual details. All narratives contained the same non-perceptual content. Participants’ free recall of these narratives under each condition was scored for the type of details recalled (perceptual, non-perceptual) and whether each detail was recalled with gist or verbatim memory. We found that participants were more accurate at both gist and verbatim recall for visually described perceptual details. This visual bias was also evident in the errors made during recall: participants tended to incorrectly recall details with visual information, but not with auditory information. Study 2 tested for this pattern of results when the narratives were presented in an auditory-only format. Results conceptually replicated Study 1 in that there was still a persistent visual bias in what was recollected from the complex narratives. Together, these findings indicate a bias for recruiting visualizable content to construct complex multi-detail memories.

2020
Author(s): John J Shaw, Zhisen Urgolites, Padraic Monaghan

Visual long-term memory has a large and detailed storage capacity for individual scenes, objects, and actions. However, memory for combinations of actions and scenes is poorer, suggesting difficulty in binding this information together. Sleep can enhance declarative memory, but whether sleep can also boost memory for binding information, and whether the effect is general across different types of information, is not yet known. Experiments 1 to 3 tested the effects of sleep on binding actions and scenes, and Experiments 4 and 5 tested the binding of objects and scenes. Participants viewed composites and were tested 12 hours later, after a delay consisting of sleep (9 pm-9 am) or wake (9 am-9 pm), on an alternative forced-choice recognition task. For action-scene composites, memory was relatively poor, with no significant effect of sleep. For object-scene composites, sleep did improve memory. Sleep can therefore promote binding in memory, depending on the type of information to be combined.


Electronics, 2021, Vol 10 (6), pp. 741
Author(s): Yuseok Ban, Kyungjae Lee

Many researchers have suggested using recommender systems to improve user retention on digital platforms. Recent studies show that there are many potential ways to help users find interesting items, other than high-precision rating predictions. In this paper, we study how the different types of information suggested to a user can influence their behavior. The types are divided into visual information, evaluative information, categorical information, and narrational information. Based on our experimental results, we analyze how the different types of supplementary information affect the performance of a recommender in terms of encouraging users to click more items or spend more time on the platform.
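
As a rough illustration of the kind of engagement analysis this abstract describes, the sketch below computes a click-through rate and average time spent per information type from a simulated interaction log. The log format, field names, and numbers are invented for illustration and are not taken from the paper.

```python
from collections import defaultdict

# Hypothetical interaction log: (information_type, clicked, seconds_spent).
# These records are invented for illustration only.
logs = [
    ("visual",      True,  42.0),
    ("evaluative",  False,  8.5),
    ("categorical", True,  15.0),
    ("narrational", True,  30.5),
    ("visual",      False,  5.0),
]

stats = defaultdict(lambda: {"impressions": 0, "clicks": 0, "time": 0.0})
for info_type, clicked, seconds in logs:
    s = stats[info_type]
    s["impressions"] += 1
    s["clicks"] += int(clicked)
    s["time"] += seconds

for info_type, s in sorted(stats.items()):
    ctr = s["clicks"] / s["impressions"]      # click-through rate
    avg_time = s["time"] / s["impressions"]   # mean time spent per impression
    print(f"{info_type:12s} CTR={ctr:.2f} avg_time={avg_time:.1f}s")
```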


2017, Vol 14 (2), pp. 234-252
Author(s): Emilia Christie Picelli Sanches, Claudia Mara Scudelari Macedo, Juliana Bueno

Accessibility in the education of blind people is a right that must be fulfilled. Considering that information design aims to transmit information effectively to the receiver, and that a static image needs to be adapted so that a blind student can access its visual content, a way to translate visual information into tactile information is proposed. The purpose of this paper is to present a model for translating static two-dimensional images into three-dimensional tactile images. It starts with a brief literature review on blindness, tactile perception, and tactile images. It then presents the translation model in three parts: (1) recommendations from the literature; (2) structure; and (3) a preliminary model for testing. Next, it describes the test of the model carried out with two designers with digital modelling skills (potential users). The tests yielded two distinct models, one using elevation and the other using textures, but both participants successfully completed the intended task. The test results also revealed flaws in the model that need to be adjusted for the next stages of the research.


2019, Vol 27, pp. 165-173
Author(s): Jung-Hun Kim, Ji-Eun Park, In-Hee Ji, Chul-Ho Won, Jong-Min Lee, ...

2020, pp. 002383091989888
Author(s): Luma Miranda, Marc Swerts, João Moraes, Albert Rilliard

This paper presents the results of three perceptual experiments investigating the role of the auditory and visual channels in the identification of statements and echo questions in Brazilian Portuguese. Ten Brazilian speakers (five male) were video-recorded (frontal view of the face) while they produced a sentence (“Como você sabe”), either as a statement (meaning “As you know.”) or as an echo question (meaning “As you know?”). Experiments were set up including the two different intonation contours. Stimuli were presented in conditions with clear and degraded audio, as well as with congruent and incongruent information from the two channels. Results show that Brazilian listeners were able to distinguish statements and questions both prosodically and visually, with auditory cues being dominant over visual ones. In noisy conditions, the visual channel robustly improved the interpretation of prosodic cues, but degraded it in conditions where the visual information was incongruent with the auditory information. This study shows that auditory and visual information are integrated during speech perception, including when applied to prosodic patterns.


1975, Vol 69 (5), pp. 226-233
Author(s): Sally Rogow

The blind child builds his perceptions from tactual (haptic) and auditory information. Assumptions on the part of professionals that tactual and visual data are identical can result in misconceptions that may lead to delayed development and distortions of cognitive process in blind children. A review of research on the perception of form and spatial relationships suggests that differences between tactual and visual information result in differences in perceptual organization. However, studies indicate that blind children reach developmental milestones (e.g., conservation) at approximately the same ages as sighted children.


2012, Vol 25 (0), pp. 148
Author(s): Marcia Grabowecky, Emmanuel Guzman-Martinez, Laura Ortega, Satoru Suzuki

Watching moving lips facilitates auditory speech perception when the mouth is attended. However, recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We investigated whether lip movements suppressed from visual awareness can facilitate speech perception. We used a word categorization task in which participants listened to spoken words and determined as quickly and accurately as possible whether or not each word named a tool. While participants listened to the words, they watched a visual display that presented a video clip of the speaker synchronously speaking the auditorily presented words, or the same speaker articulating different words. Critically, the speaker’s face was either visible (the aware trials) or suppressed from awareness using continuous flash suppression. Aware and suppressed trials were randomly intermixed. A secondary probe-detection task ensured that participants attended to the mouth region regardless of whether the face was visible or suppressed. On the aware trials, responses to the tool targets were no faster with synchronous than with asynchronous lip movements, perhaps because the visual information was inconsistent with the auditory information on 50% of the trials. However, on the suppressed trials, responses to the tool targets were significantly faster with synchronous than with asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are processed by the visual system with sufficiently high temporal resolution to facilitate speech perception.


2020, Vol 31 (01), pp. 030-039
Author(s): Aaron C. Moberly, Kara J. Vasil, Christin Ray

Adults with cochlear implants (CIs) are believed to rely more heavily on visual cues during speech recognition tasks than their normal-hearing peers. However, the relationship between auditory and visual reliance during audiovisual (AV) speech recognition is unclear and may depend on an individual’s auditory proficiency, duration of hearing loss (HL), age, and other factors. The primary purpose of this study was to examine whether visual reliance during AV speech recognition depends on auditory function for adult CI candidates (CICs) and adult experienced CI users (ECIs). Participants included 44 ECIs and 23 CICs. All participants were postlingually deafened and had met clinical candidacy requirements for cochlear implantation. Participants completed City University of New York sentence recognition testing. Three separate lists of twelve sentences each were presented: the first in the auditory-only (A-only) condition, the second in the visual-only (V-only) condition, and the third in combined AV fashion. Each participant’s amount of “visual enhancement” (VE) and “auditory enhancement” (AE) was computed (i.e., the benefit to AV speech recognition of adding visual or auditory information, respectively, relative to what could potentially be gained). The relative reliance on VE versus AE was also computed as a VE/AE ratio. The VE/AE ratio was predicted inversely by A-only performance. Visual reliance was not significantly different between ECIs and CICs. Duration of HL and age did not account for additional variance in the VE/AE ratio. A shift toward visual reliance may be driven by poor auditory performance in ECIs and CICs. The restoration of auditory input through a CI does not necessarily facilitate a shift back toward auditory reliance. Findings suggest that individual listeners with HL may rely on both auditory and visual information during AV speech recognition, to varying degrees based on their own performance and experience, to optimize communication performance in real-world listening situations.
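
For readers unfamiliar with enhancement scores, the minimal sketch below computes VE, AE, and the VE/AE ratio under one common convention: the observed audiovisual gain normalized by the maximum possible gain over the unimodal score. The exact formula used in the study and the example scores here are assumptions for illustration, not data from the paper.

```python
def enhancement(av_score: float, unimodal_score: float) -> float:
    """Normalized gain from adding a second modality.

    Scores are proportions correct in [0, 1]. Returns the observed
    audiovisual benefit relative to the maximum possible improvement:
    (AV - unimodal) / (1 - unimodal).
    """
    if unimodal_score >= 1.0:
        return 0.0  # at ceiling: no room left for improvement
    return (av_score - unimodal_score) / (1.0 - unimodal_score)


# Hypothetical scores for one listener (not data from the study).
a_only, v_only, av = 0.40, 0.25, 0.70

ve = enhancement(av, a_only)  # visual enhancement: benefit of adding vision
ae = enhancement(av, v_only)  # auditory enhancement: benefit of adding audition
ratio = ve / ae               # relative reliance on visual vs. auditory input
print(f"VE = {ve:.2f}, AE = {ae:.2f}, VE/AE = {ratio:.2f}")
```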


2019, Vol 32 (2), pp. 87-109
Author(s): Galit Buchs, Benedetta Heimler, Amir Amedi

Visual-to-auditory Sensory Substitution Devices (SSDs) are a family of non-invasive devices for visual rehabilitation that aim to convey whole-scene visual information through the intact auditory modality. Although SSDs have proven effective in lab environments, their use has yet to be systematically tested in real-life situations. To start filling this gap, in the present work we tested the ability of expert SSD users to filter out irrelevant background noise while focusing on the relevant audio information. Specifically, nine blind expert users of the EyeMusic visual-to-auditory SSD performed a series of identification tasks via the SSD (shape, color, and the conjunction of the two features). Their performance was compared across two separate conditions: a silent baseline, and irrelevant background sounds from real-life situations, using the same stimuli in a pseudo-random balanced design. Although the participants described the background noise as disturbing, no significant performance differences emerged between the two conditions (noisy vs. silent) for any of the tasks. In the conjunction task (shape and color) we found a non-significant trend toward a disturbing effect of the background noise on performance. These findings suggest that visual-to-auditory SSDs can indeed be used successfully in noisy environments and that users can still focus on relevant auditory information while inhibiting irrelevant sounds. Our findings take a step toward the actual use of SSDs in real-life situations, with potential impact on the rehabilitation of sensory-deprived individuals.


2020, pp. 095679762095485
Author(s): Mathieu Landry, Jason Da Silva Castanheira, Jérôme Sackur, Amir Raz

Suggestions can cause some individuals to miss or disregard existing visual stimuli, but can they infuse sensory input with nonexistent information? Although several prominent theories of hypnotic suggestion propose that mental imagery can change our perceptual experience, data to support this stance remain sparse. The present study addressed this lacuna, showing how suggesting the presence of physically absent, yet critical, visual information transforms an otherwise difficult task into an easy one. Here, we show how adult participants who are highly susceptible to hypnotic suggestion successfully hallucinated visual occluders on top of moving objects. Our findings support the idea that, at least in some people, suggestions can add perceptual information to sensory input. This observation adds meaningful weight to theoretical, clinical, and applied aspects of the brain and psychological sciences.

