How do blind people know that blue is cold? Distributional semantics encode color-adjective associations.

2021 ◽  
Author(s):  
Jeroen van Paridon ◽  
Qiawen Liu ◽  
Gary Lupyan

Certain colors are strongly associated with certain adjectives (e.g. red is hot, blue is cold). Some of these associations are grounded in visual experiences like seeing hot embers glow red. Surprisingly, many congenitally blind people show similar color associations, despite lacking all visual experience of color. Presumably, they learn these associations via language. Can we detect these associations in the statistics of language? And if so, what form do they take? We apply a projection method to word embeddings trained on corpora of spoken and written text to identify color-adjective associations as they are represented in language. We show that these projections are predictive of color-adjective ratings collected from blind and sighted people, and that the effect size depends on the training corpus. Finally, we examine how color-adjective associations might be represented in language by training word embeddings on corpora from which various sources of color-semantic information are removed.
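The projection method described above can be sketched in a few lines: choose a color axis in embedding space and project adjective vectors onto it. The toy vectors and vocabulary below are illustrative stand-ins for trained embeddings, not values from the paper.

```python
import numpy as np

# Toy word vectors standing in for trained embeddings (e.g. word2vec/fastText).
# These values are illustrative assumptions, not data from the paper.
vectors = {
    "red":  np.array([0.9, 0.1, 0.3]),
    "blue": np.array([0.1, 0.9, 0.2]),
    "hot":  np.array([0.8, 0.2, 0.4]),
    "cold": np.array([0.2, 0.8, 0.1]),
}

def unit(v):
    return v / np.linalg.norm(v)

def projection_score(adjective, color_a, color_b):
    """Project an adjective vector onto the axis running from color_b
    to color_a; positive scores lean toward color_a."""
    axis = unit(vectors[color_a] - vectors[color_b])
    return float(np.dot(unit(vectors[adjective]), axis))

# "hot" projects toward red, "cold" toward blue, on the red-blue axis.
print(projection_score("hot", "red", "blue"))   # positive
print(projection_score("cold", "red", "blue"))  # negative
```

With real embeddings, scores like these can then be correlated with human color-adjective ratings, which is the kind of comparison the abstract describes.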

2018 ◽  
Author(s):  
Virginie Crollen ◽  
Tiffany Spruyt ◽  
Pierre Mahau ◽  
Roberto Bottini ◽  
Olivier Collignon

Recent studies have proposed that the use of internal and external coordinate systems may be more flexible in congenitally blind than in sighted individuals. To investigate this hypothesis further, we asked congenitally blind and sighted people to perform, with the hands uncrossed and crossed over the body midline, a tactile temporal order judgment (TOJ) task and an auditory Simon task. Crucially, both tasks were carried out under task instructions favoring the use of either an internal (left vs. right hand) or an external (left vs. right hemispace) frame of reference. In the internal condition of the TOJ task, our results replicated previous findings (Röder et al., 2004) showing that hand crossing impaired only sighted participants' performance, suggesting that blind people did not activate a (conflicting) external frame of reference by default. However, under external instructions, a decrease in performance was observed in both groups, suggesting that even blind people activated an external coordinate system in this condition. In the Simon task, and in contrast with a previous study (Röder et al., 2007), both groups responded more efficiently when the sound was presented from the same side as the response (the "Simon effect"), independently of hand position. This was true under both the internal and external conditions, suggesting that blind and sighted participants alike activated an external coordinate system by default in this task. Altogether, these data demonstrate how visual experience shapes the default weight attributed to internal and external coordinate systems for action and perception, depending on task demands.


Perception ◽  
10.1068/p3340 ◽  
2002 ◽  
Vol 31 (10) ◽  
pp. 1263-1274 ◽  
Author(s):  
Morton A Heller ◽  
Deneen D Brackett ◽  
Kathy Wilson ◽  
Keiko Yoneyama ◽  
Amanda Boyer ◽  
...  

We examined the effect of visual experience on the haptic Müller-Lyer illusion. Subjects made size estimates of raised lines by using a sliding haptic ruler. Independent groups of blindfolded-sighted, late-blind, congenitally blind, and low-vision subjects judged the sizes of wings-in and wings-out stimuli, plain lines, and lines with short vertical ends. An illusion was found, since the wings-in stimuli were judged as shorter than the wings-out patterns and all of the other stimuli. Subjects generally underestimated the lengths of lines. In a second experiment we found a nonsignificant difference between length judgments of raised lines as opposed to smooth wooden dowels. The strength of the haptic illusion depends upon the angles of the wings, with a much stronger illusion for more acute angles. The effect of visual status was nonsignificant, suggesting that spatial distortion in the haptic Müller-Lyer illusion does not depend upon visual imagery or visual experience.


2017 ◽  
Vol 43 (3) ◽  
pp. 593-617 ◽  
Author(s):  
Sascha Rothe ◽  
Hinrich Schütze

We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.
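The encoding idea behind AutoExtend can be illustrated with a simplified sketch: each word vector is treated as the sum of lexeme vectors (one per sense), and a synset vector as the sum of its lexemes. The uniform split across senses below stands in for AutoExtend's learned encoding/decoding weights, and the vocabulary and synset membership are hypothetical stand-ins for WordNet data.

```python
import numpy as np

# Illustrative word vectors (assumptions, not trained embeddings).
word_vecs = {
    "bank":  np.array([0.5, 0.5]),
    "shore": np.array([0.1, 0.9]),
    "fund":  np.array([0.9, 0.1]),
}

# Hypothetical synset membership, a stand-in for a semantic resource.
synsets = {
    "bank.n.01": ["bank", "shore"],   # sloping land sense
    "bank.n.02": ["bank", "fund"],    # financial institution sense
}

def synset_embedding(synset_id):
    """Distribute each member word's vector equally across that word's
    synsets (its lexemes), then sum the lexeme vectors for this synset.
    AutoExtend learns these weights; the equal split is a simplification."""
    total = np.zeros(2)
    for word in synsets[synset_id]:
        n_senses = sum(word in members for members in synsets.values())
        total += word_vecs[word] / n_senses
    return total

print(synset_embedding("bank.n.01"))
```

The key property the sketch preserves is that synset embeddings live in the same vector space as the input word embeddings, so the two can be compared directly.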


1983 ◽  
Vol 77 (4) ◽  
pp. 161-166 ◽  
Author(s):  
James F. Herman ◽  
Steven P. Chatman ◽  
Steven F. Roth

Examines the spatial ability of sighted, blindfolded-sighted, and congenitally blind subjects. Subjects walked through an unfamiliar, large-scale space in which target locations could not be seen simultaneously; they were then taken to each target location and asked to indicate the positions of the other locations. Results indicate that past visual experience helps individuals acquire spatial information from large-scale environments.


2019 ◽  
Author(s):  
Ceren Battal ◽  
Valeria Occelli ◽  
Giorgia Bertonati ◽  
Federica Falagiarda ◽  
Olivier Collignon

Vision is thought to scaffold the development of spatial abilities in the other senses. How, then, does spatial hearing develop in people lacking visual experience? We comprehensively addressed this question by investigating auditory localization abilities in 17 congenitally blind and 17 sighted individuals using a psychophysical minimum audible angle task free of sensorimotor confounds. Participants were asked to compare the relative position of two sound sources located in central and peripheral, horizontal and vertical, and frontal and rear spaces. We observed unequivocal enhancement of spatial hearing abilities in congenitally blind people, irrespective of the field of space assessed. Our results conclusively demonstrate that visual experience is not a mandatory prerequisite for developing optimal spatial hearing abilities and that, in striking contrast, the lack of vision leads to a ubiquitous enhancement of auditory spatial skills.


2020 ◽  
Author(s):  
Irene Togoli ◽  
Virginie Crollen ◽  
Roberto Arrighi ◽  
Olivier Collignon

Humans share with other animals a number sense, a system allowing a rapid and approximate estimate of the number of items in a scene. Recently it has been shown that numerosity is shared between action and perception as the number of repetitions of self-produced actions affects the perceived numerosity of subsequent visual stimuli presented around the area where actions occurred. Here we investigate whether this interplay between action and perception for numerosity depends on visual input and visual experience. We measured the effects of adaptation to motor routines (finger tapping) on numerical estimates of auditory sequences in sighted and congenitally blind people. In both groups, our results show a consistent adaptation effect with relative under- or over-estimation of perceived auditory numerosity following rapid or slow tapping adaptation, respectively. Moreover, adaptation occurred around the tapping area irrespective of the hand posture (crossed or uncrossed hands), indicating that motor adaptation was coded using external (not hand centred) coordinates in both groups. Overall, these results support the existence of a generalized interaction between action and perception for numerosity that occurs in external space and manifests independently of visual input or even visual experience.


2020 ◽  
Vol 31 (9) ◽  
pp. 1129-1139
Author(s):  
Ceren Battal ◽  
Valeria Occelli ◽  
Giorgia Bertonati ◽  
Federica Falagiarda ◽  
Olivier Collignon

Vision is thought to support the development of spatial abilities in the other senses. If this is true, how does spatial hearing develop in people lacking visual experience? We comprehensively addressed this question by investigating auditory-localization abilities in 17 congenitally blind and 17 sighted individuals using a psychophysical minimum-audible-angle task that lacked sensorimotor confounds. Participants were asked to compare the relative position of two sound sources located in central and peripheral, horizontal and vertical, or frontal and rear spaces. We observed unequivocal enhancement of spatial-hearing abilities in congenitally blind people, irrespective of the field of space that was assessed. Our results conclusively demonstrate that visual experience is not a prerequisite for developing optimal spatial-hearing abilities and that, in striking contrast, the lack of vision leads to a general enhancement of auditory-spatial skills.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maria J. S. Guerreiro ◽  
Madita Linke ◽  
Sunitha Lingareddy ◽  
Ramesh Kekunnaya ◽  
Brigitte Röder

Lower resting-state functional connectivity (RSFC) between 'visual' and non-'visual' neural circuits has been reported as a hallmark of congenital blindness. In sighted individuals, RSFC between visual and non-visual brain regions has been shown to increase during rest with eyes closed relative to rest with eyes open. To determine the role of visual experience in the modulation of RSFC by resting-state condition, as well as to evaluate the effect of resting-state condition on group differences in RSFC, we compared RSFC between visual and somatosensory/auditory regions in congenitally blind individuals (n = 9) and sighted participants (n = 9) during eyes-open and eyes-closed conditions. In the sighted group, we replicated the increase of RSFC between visual and non-visual areas during rest with eyes closed relative to rest with eyes open. This was not the case in the congenitally blind group, resulting in lower RSFC between 'visual' and non-'visual' circuits relative to sighted controls only in the eyes-closed condition. These results indicate that visual experience is necessary for the modulation of RSFC by resting-state condition and highlight the importance of considering whether sighted controls should be tested with eyes open or closed in studies of functional brain reorganization as a consequence of blindness.


2012 ◽  
Vol 25 (0) ◽  
pp. 222 ◽  
Author(s):  
Michael J. Proulx ◽  
Achille Pasqualotto ◽  
Shuichiro Taya

The topographic representation of space interacts with the mental representation of number. Evidence for such number–space relations has been reported in both synaesthetic and non-synaesthetic participants. Thus far, most studies have only examined related effects in sighted participants. For example, the mental number line increases in magnitude from left to right in sighted individuals (Loetscher et al., 2008, Curr. Biol.). What is unclear is whether this association arises from innate mechanisms or requires visual experience early in life to develop in this way. Here we investigated the role of visual experience in the left-to-right spatial numerical association using a random number generation task in congenitally blind, late blind, and blindfolded sighted participants. Participants orally generated numbers at random whilst turning their head to the left and right. Sighted participants generated smaller numbers when they turned their head to the left than to the right, consistent with past results. In contrast, congenitally blind participants generated smaller numbers when they turned their head to the right than to the left, exhibiting the opposite effect. The results of the late blind participants showed an intermediate profile between that of the sighted and congenitally blind participants. Visual experience early in life is therefore necessary for the development of the spatial numerical association of the mental number line.


2020 ◽  
Vol 8 ◽  
pp. 311-329
Author(s):  
Kushal Arora ◽  
Aishik Chakraborty ◽  
Jackie C. K. Cheung

In this paper, we propose LexSub, a novel approach to unifying lexical and distributional semantics. We inject knowledge about lexical-semantic relations into distributional word embeddings by defining subspaces of the distributional vector space in which a lexical relation should hold. Our framework can handle symmetric attract and repel relations (e.g., synonymy and antonymy, respectively), as well as asymmetric relations (e.g., hypernymy and meronymy). In a suite of intrinsic benchmarks, we show that our model outperforms previous approaches on relatedness tasks and on hypernymy classification and detection, while being competitive on word similarity tasks. It also outperforms previous systems on extrinsic classification tasks that benefit from exploiting lexical relational cues. We perform a series of analyses to understand the behaviors of our model. Code is available at https://github.com/aishikchakraborty/LexSub.
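The subspace idea can be sketched with hinge-style losses: a linear map projects word vectors into a relation subspace, where attract pairs (e.g., synonyms) are pulled together and repel pairs (e.g., antonyms) are pushed apart. The projection matrix, margins, and vectors below are illustrative assumptions, not the paper's learned parameters.

```python
import numpy as np

# A random linear map into a lower-dimensional relation subspace.
# In LexSub this map is learned; here it is a fixed illustrative stand-in.
rng = np.random.default_rng(0)
dim, sub_dim = 8, 3
W = rng.normal(size=(sub_dim, dim))

def attract_loss(u, v, margin=0.1):
    """Attract pairs (synonyms): projections should be close (within margin)."""
    d = np.linalg.norm(W @ u - W @ v)
    return max(0.0, d - margin)

def repel_loss(u, v, margin=1.0):
    """Repel pairs (antonyms): projections should be far apart (beyond margin)."""
    d = np.linalg.norm(W @ u - W @ v)
    return max(0.0, margin - d)

u, v = rng.normal(size=dim), rng.normal(size=dim)
print(attract_loss(u, v), repel_loss(u, v))
```

Because each relation gets its own subspace, symmetric and asymmetric constraints can coexist without distorting the original distributional space, which is the design choice the abstract emphasizes.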

