Effects of Order and Sensory Modality in Stiffness Perception

2012 ◽  
Vol 21 (3) ◽  
pp. 295-304 ◽  
Author(s):  
Maria Korman ◽  
Kinneret Teodorescu ◽  
Adi Cohen ◽  
Miriam Reiner ◽  
Daniel Gopher

The stiffness properties of an environment are perceived during active manual manipulation primarily by processing force cues and position-based tactile, kinesthetic, and visual information. Using a two-alternative forced-choice (2AFC) stiffness discrimination task, we tested how the perceiver integrates stiffness-related information based on sensory feedback from one or two modalities, and examined the origins of within-session shifts in stiffness discrimination ability. Two factors were investigated: practice and the amount of available sensory information. Subjects discriminated between the stiffness of two targets that were presented either haptically or visuohaptically in two consecutive blocks. Our results show that prior experience in a unisensory haptic stiffness discrimination block greatly improved performance when visual feedback was subsequently provided along with haptic feedback. This improvement could not be attributed to effects induced by practice or multisensory stimulus presentation alone. Our findings suggest that optimal integration theories of multisensory perception need to account for past sensory experience, which may affect current perception of the task even within a single session.
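The optimal-integration account invoked at the end of this abstract is usually formalized as maximum-likelihood cue combination, in which the bimodal discrimination threshold is predicted from the unimodal ones. A minimal sketch of that standard prediction follows; the threshold values are hypothetical and are not taken from the study.

```python
import numpy as np

def mle_combined_sigma(sigma_h: float, sigma_v: float) -> float:
    """Predicted noise (discrimination threshold) of the visuohaptic estimate
    under maximum-likelihood (reliability-weighted) cue integration."""
    return np.sqrt((sigma_h**2 * sigma_v**2) / (sigma_h**2 + sigma_v**2))

def mle_weights(sigma_h: float, sigma_v: float) -> tuple[float, float]:
    """Relative weights given to the haptic and visual stiffness cues,
    proportional to their reliabilities (inverse variances)."""
    r_h, r_v = 1 / sigma_h**2, 1 / sigma_v**2
    return r_h / (r_h + r_v), r_v / (r_h + r_v)

# Hypothetical unimodal discrimination thresholds (arbitrary stiffness units).
sigma_haptic, sigma_visual = 0.12, 0.20
print("predicted visuohaptic threshold:", mle_combined_sigma(sigma_haptic, sigma_visual))
print("haptic/visual weights:", mle_weights(sigma_haptic, sigma_visual))
```

Because this prediction depends only on the two momentary reliabilities, it contains no term for block order; the reported benefit of a preceding haptic-only block is precisely the kind of history effect such a formulation misses.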

2018 ◽  
Vol 5 (2) ◽  
pp. 171785 ◽  
Author(s):  
Martin F. Strube-Bloss ◽  
Wolfgang Rössler

Flowers attract pollinating insects like honeybees by sophisticated compositions of olfactory and visual cues. Using honeybees as a model to study olfactory–visual integration at the neuronal level, we focused on mushroom body (MB) output neurons (MBON). From a neuronal circuit perspective, MBONs represent a prominent level of sensory-modality convergence in the insect brain. We established an experimental design allowing electrophysiological characterization of olfactory, visual, as well as olfactory–visual induced activation of individual MBONs. Despite the obvious convergence of olfactory and visual pathways in the MB, we found numerous unimodal MBONs. However, a substantial proportion of MBONs (32%) responded to both modalities and thus integrated olfactory–visual information across MB input layers. In these neurons, representation of the olfactory–visual compound was significantly increased compared with that of single components, suggesting an additive, but nonlinear integration. Population analyses of olfactory–visual MBONs revealed three categories: (i) olfactory, (ii) visual and (iii) olfactory–visual compound stimuli. Interestingly, no significant differentiation was apparent regarding different stimulus qualities within these categories. We conclude that encoding of stimulus quality within a modality is largely completed at the level of MB input, and information at the MB output is integrated across modalities to efficiently categorize sensory information for downstream behavioural decision processing.
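To make the "additive but nonlinear" claim concrete, the sketch below computes a simple multisensory enhancement index from hypothetical MBON firing rates; the rates, baseline, and neuron counts are invented for illustration and are not the recorded dataset or the authors' analysis code.

```python
import numpy as np

# Hypothetical trial-averaged MBON firing rates (spikes/s) for three example neurons.
rate_odor = np.array([12.0, 8.5, 15.2])      # olfactory-only responses
rate_light = np.array([6.0, 9.0, 4.5])       # visual-only responses
rate_compound = np.array([15.5, 14.0, 17.0]) # olfactory-visual compound responses

baseline = 3.0  # spontaneous rate assumed common to all conditions

# Linear prediction: sum of the two evoked (baseline-subtracted) components.
linear_sum = (rate_odor - baseline) + (rate_light - baseline) + baseline

# Enhancement > 0 means the compound exceeds the strongest single component;
# a compound below the linear sum indicates sub-additive (nonlinear) mixing.
enhancement = rate_compound - np.maximum(rate_odor, rate_light)
sublinearity = linear_sum - rate_compound

print("multisensory enhancement:", enhancement)
print("shortfall relative to linear sum:", sublinearity)
```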


2014 ◽  
Vol 27 (3-4) ◽  
pp. 247-262 ◽  
Author(s):  
Emiliano Ricciardi ◽  
Leonardo Tozzi ◽  
Andrea Leo ◽  
Pietro Pietrini

Cross-modal responses in occipital areas appear to be essential for sensory processing in visually deprived subjects. However, it is still unclear whether this functional recruitment depends on the sensory channel conveying the information. In order to characterize brain areas showing task-independent, but sensory-specific, cross-modal responses in blind individuals, we pooled distinct functional brain imaging studies into a single meta-analysis, grouping them only according to the modality conveying the experimental stimuli (auditory or tactile). Our approach revealed a specific functional cortical segregation according to the sensory modality conveying the non-visual information, irrespective of the cognitive features of the tasks. In particular, dorsal and posterior subregions of the occipital and superior parietal cortex showed higher cross-modal recruitment across tactile tasks in blind as compared to sighted individuals. On the other hand, auditory stimuli activated more medial and ventral clusters within early visual areas, the lingual and inferior temporal cortex. These findings suggest a modality-specific functional modification of cross-modal responses within different portions of the occipital cortex of blind individuals. Cross-modal recruitment can thus be specifically influenced by the intrinsic features of sensory information.
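A coordinate-based meta-analysis of this kind pools the activation foci reported across studies and asks where they cluster for each stimulus modality. The toy sketch below illustrates the core idea on a small artificial grid; the grid size, smoothing kernel, and foci coordinates are invented, and the authors' actual meta-analytic pipeline will have differed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy 3D grid standing in for standard brain space.
GRID = (40, 48, 40)

def foci_density_map(foci_voxels, sigma_vox=3.0):
    """Smooth a set of reported activation foci into a density map,
    a crude stand-in for coordinate-based meta-analytic pooling."""
    vol = np.zeros(GRID)
    for x, y, z in foci_voxels:
        vol[x, y, z] += 1.0
    return gaussian_filter(vol, sigma=sigma_vox)

# Hypothetical foci pooled across tactile vs. auditory studies in blind participants.
tactile_foci = [(10, 40, 30), (12, 38, 32), (11, 41, 29)]
auditory_foci = [(20, 10, 8), (22, 12, 9), (19, 11, 10)]

tactile_map = foci_density_map(tactile_foci)
auditory_map = foci_density_map(auditory_foci)

# Positive values: voxels recruited more across tactile studies; negative: auditory.
modality_contrast = tactile_map - auditory_map
print("peak tactile > auditory voxel:", np.unravel_index(modality_contrast.argmax(), GRID))
```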


2020 ◽  
Author(s):  
Madeline S. Cappelloni ◽  
Sabyasachi Shivkumar ◽  
Ralf M. Haefner ◽  
Ross K. Maddox

The brain combines information from multiple sensory modalities to interpret the environment. Multisensory integration is often modeled by ideal Bayesian causal inference, a model proposing that perceptual decisions arise from a statistical weighting of information from each sensory modality based on its reliability and relevance to the observer’s task. However, ideal Bayesian causal inference fails to describe human behavior in a simultaneous auditory spatial discrimination task in which spatially aligned visual stimuli improve performance despite providing no information about the correct response. This work tests the hypothesis that humans weight auditory and visual information in this task based on their relative reliabilities, even though the visual stimuli are task-uninformative, carrying no information about the correct response, and should be given zero weight. Listeners perform an auditory spatial discrimination task with relative reliabilities modulated by the stimulus durations. By comparing conditions in which task-uninformative visual stimuli are spatially aligned with auditory stimuli or centrally located (control condition), listeners are shown to have a larger multisensory effect when their auditory thresholds are worse. Even in cases in which visual stimuli are not task-informative, the brain combines sensory information that is scene-relevant, especially when the task is difficult due to unreliable auditory information.
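The hypothesis under test, that listeners apply reliability weighting even to a visual cue whose ideal weight is zero, can be written in a few lines. The noise values below are hypothetical and only illustrate how shortening the stimuli (worse auditory reliability) should increase the visual pull.

```python
import numpy as np

def reliability_weight(sigma_aud: float, sigma_vis: float) -> float:
    """Weight given to the visual position under forced reliability weighting;
    an ideal observer performing this task would assign it zero weight instead."""
    r_a, r_v = 1 / sigma_aud**2, 1 / sigma_vis**2
    return r_v / (r_a + r_v)

# Hypothetical sensory noise for short vs. long stimulus durations (degrees).
for duration, sigma_aud in [("short", 8.0), ("long", 3.0)]:
    w_vis = reliability_weight(sigma_aud, sigma_vis=1.5)
    print(f"{duration} stimuli: visual weight = {w_vis:.2f}")

# The weight, and hence the predicted pull toward the aligned visual stimulus,
# grows as auditory reliability worsens, mirroring the reported larger
# multisensory effect at worse auditory thresholds.
```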


Electronics ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 953
Author(s):  
Uran Oh ◽  
Hwayeon Joh ◽  
YunJung Lee

A number of studies have been conducted to improve the accessibility of images on touchscreen devices for screen reader users. In this study, we conducted a systematic review of 33 papers to gain a holistic understanding of existing approaches and to suggest a research road map addressing the identified gaps. As a result, we identified the types of images, visual information, input devices, and feedback modalities that have been studied for improving image accessibility using touchscreen devices. Findings also revealed that there has been little study of how the generation of image-related information can be automated. Moreover, we confirmed that the involvement of screen reader users is mostly limited to evaluations, whereas input from target users during the design process is particularly important for the development of assistive technologies. We then introduce two of our recent studies on the accessibility of artwork and comics, AccessArt and AccessComics, respectively. Based on the identified key challenges, we suggest a research agenda for improving image accessibility for screen reader users.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the past decade the idea that it reflects a mixture of both has emerged. In order to investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and their phenomenologies were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds, as their performance in sound identification was influenced by the simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing also finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
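The Stroop-like measure in this paradigm is an interference index: the cost of identifying a sound when the concurrent visual distractor is incongruent rather than congruent. The sketch below computes that index before and after training from hypothetical reaction times (values invented for illustration; not the study's data).

```python
import numpy as np

def interference_effect(rt_incongruent, rt_congruent):
    """Crossmodal interference index: extra time needed to identify a sound
    when the simultaneous visual distractor mismatches it."""
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

# Hypothetical median reaction times (ms) in the sound-identification task.
pre_training = interference_effect(rt_incongruent=[612, 598, 640],
                                   rt_congruent=[605, 601, 633])
post_training = interference_effect(rt_incongruent=[655, 690, 671],
                                    rt_congruent=[598, 610, 604])

print(f"interference before training: {pre_training:.1f} ms")
print(f"interference after training:  {post_training:.1f} ms")
# A larger post-training effect would indicate that visual distractors now
# intrude on sound identification, i.e., processes shared with vision.
```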


Perception ◽  
2017 ◽  
Vol 46 (12) ◽  
pp. 1412-1426 ◽  
Author(s):  
Elmeri Syrjänen ◽  
Marco Tullio Liuzza ◽  
Håkan Fischer ◽  
Jonas K. Olofsson

Disgust is a core emotion that evolved to detect and avoid the ingestion of poisonous food as well as contact with pathogens and other harmful agents. Previous research has shown that multisensory presentation of olfactory and visual information may strengthen the processing of disgust-relevant information. However, it is not known whether these findings extend to dynamic facial stimuli that change from neutral to emotionally expressive, or whether individual differences in trait body odor disgust may influence the processing of disgust-related information. In this preregistered study, we tested whether the classification of dynamic facial expressions as happy or disgusted, and the emotional evaluation of these facial expressions, would be affected by individual differences in body odor disgust sensitivity and by exposure to a sweat-like, negatively valenced odor (valeric acid), as compared with a soap-like, positively valenced odor (lilac essence) or a no-odor control. Using Bayesian hypothesis testing, we found evidence that odors do not affect the recognition of emotion in dynamic faces, even when body odor disgust sensitivity was used as a moderator. However, an exploratory analysis suggested that an unpleasant odor context may cause faster RTs for faces, independent of their emotional expression. Our results further our understanding of the scope and limits of odor effects on the perception of facial expressions, and suggest that further studies should focus on reproducibility, specifying the experimental circumstances under which odor effects on facial expressions may be present versus absent.
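Bayesian hypothesis testing of an odor effect like this can be summarized with a Bayes factor comparing the null against the alternative. A minimal sketch follows, using simulated accuracies and assuming the pingouin package, whose paired t-test reports a JZS Bayes factor; this illustrates the logic only and is not the study's preregistered analysis.

```python
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)

# Hypothetical per-participant emotion-classification accuracy (proportion correct)
# under the unpleasant-odor and no-odor contexts; not the study's data.
acc_valeric = rng.normal(0.86, 0.05, size=30)
acc_no_odor = rng.normal(0.86, 0.05, size=30)

# Paired t-test; pingouin's output includes a JZS Bayes factor (BF10).
res = pg.ttest(acc_valeric, acc_no_odor, paired=True)
bf10 = float(res["BF10"].iloc[0])
print(f"BF10 = {bf10:.2f}")
# BF10 well below 1 (e.g., < 1/3) counts as evidence for the null: odors do not
# change emotion recognition, in line with the abstract's conclusion.
```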


1974 ◽  
Vol 38 (3_suppl) ◽  
pp. 1271-1274
Author(s):  
Robert M. Alworth

This research investigated the difficulty experienced by retarded readers in acquiring associations between auditory and visual information. First- and second-grade above- and below-average readers (ns = 41, 42) were presented paired-associate tasks involving: (a) simultaneous and delayed stimulus presentation, (b) visual-visual and visual-auditory stimuli, and (c) stimuli in which the within-stimulus element sequence was and was not relevant in determining the associated response. Inferior paired-associate learning was noted in below-average readers, delayed-presentation tasks, and sequence-relevant tasks. No significant interactions were noted.


2012 ◽  
Vol 25 (0) ◽  
pp. 111
Author(s):  
Shuichi Sakamoto ◽  
Gen Hasegawa ◽  
Akio Honda ◽  
Yukio Iwaya ◽  
Yôiti Suzuki ◽  
...  

High-definition multimodal displays are necessary to advance information and communications technologies. Such systems mainly present audio–visual information because this sensory information carries rich spatiotemporal content. Recently, not only audio–visual information but also other sensory information, for example touch, smell, and vibration, has come to be presented easily, expanding the potential of high-definition multimodal displays. We specifically examined the effects of full-body vibration information on the perceived reality of audio–visual content. As indexes of perceived reality, we used the sense of presence and the sense of verisimilitude. The latter reflects the appreciative role of foreground components in multimodal content, whereas the former is related more closely to background components of a scene. Our previous report described differences in the characteristics of both senses for audio–visual contents (Kanda et al., IMRF2011). In the present experiments, various amounts of full-body vibration were presented with an audio–visual movie, which was recorded via a camera and microphone mounted on a wheelchair. Participants reported the amounts of perceived presence and verisimilitude. Results revealed that the intensity of full-body vibration characterized the two senses differently: the sense of presence increased linearly with the intensity of full-body vibration, while the sense of verisimilitude showed a nonlinear tendency. These results suggest that not only audio–visual information but also full-body vibration is important for developing high-definition multimodal displays.
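The linear-versus-nonlinear contrast between the two ratings can be illustrated by comparing polynomial fits of increasing order. The sketch below uses invented rating values chosen to mimic the described pattern; it is not the experimental data.

```python
import numpy as np

# Hypothetical mean ratings (7-point scale) at five full-body vibration intensities.
intensity = np.array([0.0, 0.25, 0.5, 0.75, 1.0])     # normalized vibration amplitude
presence = np.array([3.1, 3.8, 4.5, 5.2, 5.9])         # roughly linear growth
verisimilitude = np.array([3.0, 4.6, 5.1, 5.2, 4.9])   # saturating / nonlinear trend

def fit_rss(x, y, degree):
    """Residual sum of squares of a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

for name, ratings in [("presence", presence), ("verisimilitude", verisimilitude)]:
    linear, quadratic = fit_rss(intensity, ratings, 1), fit_rss(intensity, ratings, 2)
    print(f"{name}: linear RSS = {linear:.3f}, quadratic RSS = {quadratic:.3f}")
# Presence is captured about as well by the linear fit, whereas verisimilitude
# gains substantially from the curved term, echoing the reported dissociation.
```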


2021 ◽  
Vol 11 (11) ◽  
pp. 1506
Author(s):  
Annalisa Tosoni ◽  
Emanuele Cosimo Altomare ◽  
Marcella Brunetti ◽  
Pierpaolo Croce ◽  
Filippo Zappasodi ◽  
...  

One fundamental principle of the brain's functional organization is the elaboration of sensory information for the specification of action plans that are most appropriate for interaction with the environment. Using an incidental go/no-go priming paradigm, we have previously shown a facilitation effect for the execution of a walking-related action in response to far vs. near objects/locations in the extrapersonal space, and this effect has been called “macro-affordance” to reflect the role of locomotion in the coverage of extrapersonal distance. Here, we investigated the neurophysiological underpinnings of this effect by recording scalp electroencephalography (EEG) from 30 human participants during the same paradigm. The results of a whole-brain analysis indicated a significant modulation of the event-related potentials (ERPs) during both prime and target stimulus presentation. Specifically, consistent with a mechanism of action anticipation and automatic activation of affordances, a stronger ERP was observed in response to prime images framing the environment from a far vs. near distance, and this modulation was localized in dorso-medial motor regions. In addition, an inversion of polarity for far vs. near conditions was observed during the subsequent target period in dorso-medial parietal regions associated with spatially directed foot-related actions. These findings were interpreted within the framework of embodied models of brain functioning as arising from a mechanism of motor anticipation and subsequent prediction error, guided by the preferential affordance relationship between the distant large-scale environment and locomotion. More generally, our findings reveal a sensory-motor mechanism for the processing of walking-related environmental affordances.
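The core ERP computation, averaging epochs within the far and near prime conditions and examining their difference, can be sketched with plain NumPy. The array sizes and simulated data below are placeholders; the study's actual preprocessing and source localization are not reproduced here.

```python
import numpy as np

# Hypothetical epoched EEG: trials x channels x time samples, baseline-corrected.
rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 120, 64, 500
epochs = rng.normal(0.0, 1.0, size=(n_trials, n_channels, n_times))
is_far = rng.random(n_trials) < 0.5   # condition labels: far vs. near prime images

# Event-related potentials: average over trials within each condition.
erp_far = epochs[is_far].mean(axis=0)
erp_near = epochs[~is_far].mean(axis=0)

# The far-minus-near difference wave is the kind of modulation the authors
# localized to dorso-medial motor regions during the prime period.
difference_wave = erp_far - erp_near
peak_channel, peak_sample = np.unravel_index(np.abs(difference_wave).argmax(),
                                             difference_wave.shape)
print("largest |far - near| difference at channel", peak_channel, "sample", peak_sample)
```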


2019 ◽  
Author(s):  
David A. Tovar ◽  
Micah M. Murray ◽  
Mark T. Wallace

Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate versus inanimate. Many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantage for animate objects was not evident in a multisensory context, owing to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

Significance Statement: Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses together to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects which were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
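Representational similarity analysis of EEG reduces to building representational dissimilarity matrices (RDMs) from multichannel patterns and comparing them with a model RDM, here one encoding animacy. The sketch below uses random patterns and an invented exemplar set purely to show the mechanics; it is not the authors' decoding pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical single-time-point EEG patterns: objects x channels, one row per exemplar.
rng = np.random.default_rng(2)
n_objects, n_channels = 12, 64
patterns_av = rng.normal(size=(n_objects, n_channels))  # audiovisual presentations
patterns_v = rng.normal(size=(n_objects, n_channels))   # visual-only presentations

# Representational dissimilarity matrices (condensed form), correlation distance.
rdm_av = pdist(patterns_av, metric="correlation")
rdm_v = pdist(patterns_v, metric="correlation")

# Model RDM encoding the animate/inanimate distinction (1 = different category).
animate = np.array([1] * 6 + [0] * 6)
model_rdm = pdist(animate[:, None], metric="hamming")

# Spearman correlation between neural and model RDMs, per condition;
# a larger value for the audiovisual RDM would indicate enhanced encoding.
print("AV vs. animacy model:", spearmanr(rdm_av, model_rdm).correlation)
print("V  vs. animacy model:", spearmanr(rdm_v, model_rdm).correlation)
```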

