Large-Scale Slow Feature Analysis Using Spark for Visual Object Recognition

Author(s): Da Li, Zhang Zhang, Tieniu Tan
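For context on the technique named in the title: slow feature analysis (SFA) finds the projections of a multivariate time series whose outputs vary most slowly over time. The sketch below is a minimal single-machine linear SFA in NumPy; it is not the paper's distributed Spark implementation, and the function name and toy data are purely illustrative.

```python
import numpy as np

def linear_sfa(X, n_features=1):
    """Minimal linear slow feature analysis on a (T, D) time series."""
    # Center and whiten so every projected direction has unit variance.
    X = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (eigvecs / np.sqrt(eigvals))

    # Slowness objective: minimize the variance of the temporal difference.
    d_eigvals, d_eigvecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))

    # eigh sorts eigenvalues ascending, so the first columns are the slowest.
    return Z @ d_eigvecs[:, :n_features]

# Toy usage: a slow sine mixed with a fast oscillation plus noise.
t = np.linspace(0, 20 * np.pi, 4000)
X = np.column_stack([np.sin(0.05 * t) + 0.1 * np.sin(9 * t),
                     np.cos(9 * t)]) + 0.05 * np.random.randn(4000, 2)
slow_signal = linear_sfa(X)  # should track the slow sine component
```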
2019
Author(s): Jeroen van Paridon, Markus Ostarek, Mrudula Arunkumar, Falk Huettig

Human cultural inventions, such as written language, are far too recent for dedicated neural infrastructure to have evolved in their service. Newly acquired cultural skills (e.g. reading) thus ‘recycle’ evolutionarily older circuits that originally evolved for different but related functions (e.g. visual object recognition). The destructive competition hypothesis predicts that this neuronal recycling has detrimental effects on the cognitive functions a cortical network originally evolved for. The converse possibility is that learning to read fine-tunes general object recognition mechanisms, resulting in improved recognition across categories. In a large-scale behavioral study with literate, low-literate, and illiterate participants from the same socioeconomic background, we find that even after adjusting for cognitive ability and test-taking familiarity, literacy is associated with an increase, rather than a decrease, in object recognition abilities across object categories. These results are incompatible with the claim that neuronal recycling results in destructive competition.
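The pivotal analytic step in this abstract is comparing groups after adjusting for covariates. As a hedged illustration of that kind of analysis (not the authors' actual model; the data frame, variable names, and values below are invented), one could fit an ordinary least-squares model with literacy group as a categorical predictor and the two covariates as adjustments:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout; the study's real variables and coding will differ.
df = pd.DataFrame({
    "recognition":       [0.71, 0.55, 0.62, 0.80, 0.47, 0.66, 0.74, 0.58],
    "literacy":          ["literate", "illiterate", "low", "literate",
                          "illiterate", "low", "literate", "illiterate"],
    "cognitive_ability": [98, 91, 95, 104, 88, 100, 102, 90],
    "test_familiarity":  [3, 1, 2, 4, 1, 3, 4, 2],
})

# Literacy effect on object recognition, adjusted for both covariates.
model = smf.ols(
    "recognition ~ C(literacy) + cognitive_ability + test_familiarity",
    data=df,
).fit()
print(model.params)  # positive literacy coefficients would match the reported finding
```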


2021
Author(s): David Miralles, Guillem Garrofé, Carlota Parés, Alejandro González, Gerard Serra, ...

Abstract: The cognitive connection between the senses of touch and vision is probably the best-known case of cross-modality. Recent discoveries suggest that the mapping between the two senses is learned rather than innate. This evidence opens the door to a dynamic cross-modality that allows individuals to adapt to their environment as they develop. Mimicking this aspect of human learning, we propose a new cross-modal mechanism that allows artificial cognitive systems (ACS) to adapt quickly to unforeseen perceptual anomalies generated by the environment or by the system itself. In this context, visual recognition systems have advanced remarkably in recent years thanks to the creation of large-scale datasets together with the advent of deep learning algorithms. No comparable advances have occurred in the haptic modality, however, mainly due to the lack of two-handed dexterous datasets that would allow learning systems to process the tactile information of human object exploration. This data imbalance limits the creation of synchronized multimodal datasets that would enable the development of cross-modality in ACS during object exploration. In this work, we use a recently generated multimodal dataset in which tactile sensors placed on a collection of objects capture haptic data from human manipulation, together with the corresponding visual counterpart. Using these data, we build a cross-modal learning transfer mechanism that can detect both sudden and permanent anomalies in the visual channel while maintaining visual object recognition performance, by retraining the visual modality for a few minutes using haptic information. Here we show the importance of cross-modality for perceptual awareness and its ecological capacity to self-adapt to different environments.
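One plausible reading of the mechanism described above, sketched schematically in PyTorch (the module interfaces, the disagreement threshold, and the training budget are all assumptions, not the authors' architecture): monitor how often the visual and haptic classifiers disagree, and when the disagreement spikes, fine-tune the visual network on haptic pseudo-labels.

```python
import torch
import torch.nn.functional as F

def disagreement_rate(v_logits, h_logits):
    """Fraction of samples where the two modalities predict different classes."""
    return (v_logits.argmax(1) != h_logits.argmax(1)).float().mean().item()

def monitor_and_recover(visual_net, haptic_net, loader, threshold=0.3, epochs=3):
    """Detect a visual-channel anomaly and recover via haptic supervision.

    `loader` is assumed to yield (image_batch, touch_batch) pairs for the
    same objects; both nets are assumed to output class logits.
    """
    visual_net.eval(); haptic_net.eval()
    with torch.no_grad():
        v = torch.cat([visual_net(img) for img, _ in loader])
        h = torch.cat([haptic_net(tch) for _, tch in loader])

    if disagreement_rate(v, h) < threshold:
        return False  # visual channel looks healthy; nothing to do

    # Anomaly detected: briefly retrain the visual net on haptic predictions.
    optimizer = torch.optim.Adam(visual_net.parameters(), lr=1e-4)
    visual_net.train()
    for _ in range(epochs):
        for images, touch in loader:
            with torch.no_grad():
                pseudo = haptic_net(touch).argmax(1)  # haptic pseudo-labels
            loss = F.cross_entropy(visual_net(images), pseudo)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return True
```

The design mirrors the abstract's two claims: anomaly detection falls out of cross-modal disagreement, and recovery is a brief retraining of the impaired modality using the intact one as a teacher.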


2015
Vol 45 (11), pp. 2425-2436
Author(s): Wai Keung Wong, Zhihui Lai, Yong Xu, Jiajun Wen, Chu Po Ho

2007
Author(s): K. Suzanne Scherf, Marlene Behrmann, Kate Humphreys, Beatriz Luna
