A Deep Learning Account of How Language Affects Thought

2021 ◽  
Author(s):  
Xiaoliang Luo ◽  
Nicholas J. Sexton ◽  
Bradley C. Love

How can words shape meaning? Shared labels highlight commonalities between concepts whereas contrasting labels make differences apparent. To address such findings, we propose a deep learning account that spans perception to decision (i.e., labelling). The model takes photographs as input, transforms them to semantic representations through computations that parallel the ventral visual stream, and finally determines the appropriate linguistic label. The underlying theory is that minimising error on two prediction tasks (predicting the meaning and label of a stimulus) requires a compromise in the network's semantic representations. Thus, differences in label use, whether across languages or levels of expertise, manifest in differences in the semantic representations that support label discrimination. We confirm these predictions in simulations involving fine-grained and coarse-grained labels. We hope these and allied efforts which model perception, semantics, and labelling at scale will advance developmental and neurocomputational accounts of concept and language learning.
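A minimal sketch of the dual-objective idea described above, assuming a PyTorch-style implementation: a shared encoder must serve both a semantic-prediction head and a label-classification head, so the label vocabulary shapes the learned semantics. The backbone, dimensions, and loss weighting are illustrative and not taken from the paper.

```python
# Illustrative sketch only: a shared encoder whose representation must serve two
# prediction tasks (semantic features and a linguistic label). Architecture and
# loss weights are hypothetical, not drawn from Luo, Sexton & Love (2021).
import torch
import torch.nn as nn

class LabelSemanticsModel(nn.Module):
    def __init__(self, semantic_dim=300, n_labels=100):
        super().__init__()
        # stand-in for ventral-stream-like visual processing
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.semantic_head = nn.Linear(64, semantic_dim)  # predict meaning
        self.label_head = nn.Linear(64, n_labels)         # predict label

    def forward(self, images):
        h = self.encoder(images)
        return self.semantic_head(h), self.label_head(h)

model = LabelSemanticsModel()
images = torch.randn(8, 3, 224, 224)
target_semantics = torch.randn(8, 300)       # e.g. word-embedding targets
target_labels = torch.randint(0, 100, (8,))  # fine- or coarse-grained labels

pred_sem, pred_lab = model(images)
# Minimising both errors forces a compromise in the shared representation:
loss = nn.functional.mse_loss(pred_sem, target_semantics) \
     + nn.functional.cross_entropy(pred_lab, target_labels)
loss.backward()
```

Changing the label set (e.g. coarse vs. fine labels) changes the classification term and therefore the representation the two heads share, which is the mechanism the abstract describes.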

2021 ◽  
Author(s):  
Hayley E Pickering ◽  
Jessica L Peters ◽  
Sheila Crewther

Literature examining the role of visual memory in vocabulary development during childhood is limited, despite it being well known that preverbal infants rely on their visual abilities to form memories and learn new words. Hence, this systematic review and meta-analysis adopted a cognitive neuroscience perspective to examine the association between visual memory and vocabulary development, including moderators such as age and task selection, in neurotypical children aged 2 to 12 years. Visual memory tasks were classified as spatio-temporal span tasks, visuo-perceptual or spatial concurrent array tasks, and executive judgment tasks. Visuo-perceptual concurrent array tasks, expected to rely on ventral visual stream processing, showed a moderate association with vocabulary, whereas tasks measuring spatio-temporal spans, expected to depend on dorsal visual stream processing, and executive judgment tasks (central executive) showed only weak correlations with vocabulary. These findings have important implications for health professionals and researchers interested in language, as they can support the development of more targeted language-learning interventions that engage ventral visual stream processing.


2021 ◽  
Author(s):  
Loris Naspi ◽  
Paul Hoffman ◽  
Barry Devereux ◽  
Alexa Morcom

When encoding new episodic memories, visual and semantic processing are proposed to make distinct contributions to accurate memory and memory distortions. Here, we used functional magnetic resonance imaging (fMRI) and representational similarity analysis to uncover the representations that predict true and false recognition of unfamiliar objects. Two semantic models captured coarse-grained taxonomic categories and specific object features, respectively, while two perceptual models embodied low-level visual properties. Twenty-eight female and male participants encoded images of objects during fMRI scanning, and later had to discriminate studied objects from similar lures and novel objects in a recognition memory test. Both perceptual and semantic models predicted true memory. When studied objects were later identified correctly, neural patterns corresponded to low-level visual representations of these object images in the early visual cortex, lingual, and fusiform gyri. In a similar fashion, alignment of neural patterns with fine-grained semantic feature representations in the fusiform gyrus also predicted true recognition. However, emphasis on coarser taxonomic representations predicted forgetting more anteriorly in ventral anterior temporal lobe, left perirhinal cortex, and left inferior frontal gyrus. In contrast, false recognition of similar lure objects was associated with weaker visual analysis posteriorly in early visual and left occipitotemporal cortex. The results implicate multiple perceptual and semantic representations in successful memory encoding and suggest that fine-grained semantic as well as visual analysis contributes to accurate later recognition, while processing visual image detail is critical for avoiding false recognition errors.
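The core analytic step in this study is representational similarity analysis (RSA): comparing the dissimilarity structure of neural patterns with that of a perceptual or semantic model. A schematic sketch is below; the array shapes, variable names, and random data are placeholders, not the study's actual features or regions of interest.

```python
# Illustrative RSA step: correlate the dissimilarity structure of neural
# patterns with that of a model (visual or semantic) feature space.
# Data shapes and names are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_items = 40
neural_patterns = np.random.randn(n_items, 500)  # voxels in an ROI (e.g. fusiform)
model_features = np.random.randn(n_items, 300)   # e.g. semantic feature vectors

# Representational dissimilarity matrices (condensed upper triangles)
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# Second-order correlation: how well the model explains the neural geometry
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RSA: rho={rho:.3f}, p={p:.3g}")
```

In the study's framing, this model-neural alignment is computed per region and related to subsequent true and false recognition, rather than reported as a single correlation.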


Symmetry ◽  
2019 ◽  
Vol 11 (12) ◽  
pp. 1440 ◽  
Author(s):  
Erhu Zhang ◽  
Bo Li ◽  
Peilin Li ◽  
Yajun Chen

Deep learning has been successfully applied to classification tasks in many fields due to its good performance in learning discriminative features. However, the application of deep learning to printing defect classification is rare, and there is almost no research on classification methods for printing defects with imbalanced samples. In this paper, we present a deep convolutional neural network model to extract deep features directly from printed image defects. Furthermore, because the numbers of samples of the different defect types are unbalanced, seven over-sampling methods were investigated to determine the most effective one. To verify the practical applicability of the proposed deep model and the effectiveness of the extracted features, a large dataset of printing defect samples was built. All samples were collected from practical printing products in the factory. The dataset includes a coarse-grained dataset with four types of printing samples and a fine-grained dataset with eleven types of printing samples. The experimental results show that the proposed deep model achieves a 96.86% classification accuracy on the coarse-grained dataset without over-sampling, which is higher than that of well-known deep models based on transfer learning. Moreover, combining the proposed deep model with the SVM-SMOTE over-sampling method improves accuracy on the fine-grained dataset by more than 20% compared with the method without over-sampling.
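A hedged sketch of the imbalance-handling step named above, using imbalanced-learn's SVMSMOTE on features assumed to come from a CNN feature extractor. The feature arrays, class sizes, and downstream classifier are illustrative stand-ins, not the paper's pipeline.

```python
# Sketch: oversample minority defect classes in a deep-feature space with
# SVM-SMOTE, then train a classifier. The CNN feature extractor is stubbed out
# with random features; SVMSMOTE is one of several over-sampling variants.
import numpy as np
from imblearn.over_sampling import SVMSMOTE
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical deep features for an imbalanced defect set (1000 vs. 100 samples)
rng = np.random.default_rng(0)
features = rng.normal(size=(1100, 256))
labels = np.concatenate([np.zeros(1000, dtype=int), np.ones(100, dtype=int)])

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, stratify=labels, test_size=0.2, random_state=0)

# Balance the training set only, never the test set
X_bal, y_bal = SVMSMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = SVC().fit(X_bal, y_bal)
print("test accuracy:", clf.score(X_te, y_te))
```

The key design point is that synthetic minority samples are generated only within the training split, so the reported test accuracy reflects the original class distribution.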


2020 ◽  
Author(s):  
Tyler Bonnen ◽  
Daniel L.K. Yamins ◽  
Anthony D. Wagner

The medial temporal lobe (MTL) supports a constellation of memory-related behaviors. Its involvement in perceptual processing, however, has been subject to an enduring debate. This debate centers on perirhinal cortex (PRC), an MTL structure at the apex of the ventral visual stream (VVS). Here we leverage a deep learning approach that approximates visual behaviors supported by the VVS. We first apply this approach retroactively, modeling 29 published concurrent visual discrimination experiments: excluding misclassified stimuli, there is a striking correspondence between VVS-modeled and PRC-lesioned behavior, while both are outperformed by PRC-intact participants. We corroborate these results using high-throughput psychophysics experiments: PRC-intact participants outperform a linear readout of electrophysiological recordings from the macaque VVS. Finally, in silico experiments suggest that PRC enables out-of-distribution visual behaviors at rapid timescales. By situating these lesion, electrophysiological, and behavioral results within a shared computational framework, this work resolves decades of seemingly inconsistent experimental findings surrounding PRC involvement in perception.
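The "linear readout" benchmark mentioned above can be sketched as fitting a linear decoder on VVS-like features (model activations or recorded unit responses) and scoring it on the discrimination task, then comparing that score with human performance. The arrays, task coding, and decoder below are assumptions for illustration.

```python
# Sketch of a linear readout benchmark: a linear decoder over VVS-like features,
# cross-validated on a visual discrimination task. Features here are placeholders
# for model-layer activations or electrophysiological responses per image.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 1024))   # e.g. IT-like unit responses per trial
is_target = rng.integers(0, 2, size=200)  # match / non-match in the oddity task

readout = LogisticRegression(max_iter=1000)
acc = cross_val_score(readout, features, is_target, cv=5).mean()
print(f"linear readout accuracy: {acc:.2f}")  # compared against PRC-intact behavior
```

If PRC-intact participants reliably exceed this ceiling while PRC-lesioned participants do not, the extra performance is attributed to computations beyond a linear readout of the VVS, which is the logic of the abstract.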


Author(s):  
Wang Zheng-fang ◽  
Z.F. Wang

The main purpose of this study is to evaluate the chloride SCC resistance of the duplex stainless steel 00Cr18Ni5Mo3Si2 (18-5Mo) and of its welded coarse-grained zone (CGZ). 18-5Mo is a dual-phase (A+F) stainless steel with a yield strength of 512 N/mm². The secondary phase (A phase) accounts for 30-35% of the total, with fine-grained and homogeneously distributed A and F phases (Fig. 1). After the material is subjected to a specific welding thermal cycle (Tmax = 1350 °C and t8/5 = 20 s), the microstructure may change from a fine-grained to a coarse-grained morphology and from a homogeneous distribution of the A phase to a concentration of the A phase (Fig. 2). Meanwhile, the proportion of the A phase is reduced from 35% to 5-10%. For this reason the region is known as the welded coarse-grained zone (CGZ). Because the microstructures of the base metal and the welded CGZ differ, their chloride SCC resistance also differs. Test procedure: constant load tensile tests (CLTT) were performed to record the E_SCE-t curve, from which corrosion crack growth can be described; t_f, the time to fracture, was also recorded and is taken as an electrochemical and mechanical indicator for evaluating SCC resistance. Test environment: a boiling 42% MgCl2 solution at 143 °C was used. In addition, microanalyses were conducted with light microscopy (LM), SEM, TEM, and Auger electron spectroscopy (AES) to reveal the correlation between the CLTT results and the microanalysis data.


2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses to images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, Proklova et al. (2016) dissociated the visual shape and category (“animacy”) dimensions in a set of stimuli using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (the extra-visual animacy cluster, xVAC) which encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation in the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models of the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016), in contrast to the behavioural measures used in that study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.
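The style of analysis described, asking whether CNN-layer features can categorise shape-matched stimuli by animacy, can be sketched as a linear decoding test on intermediate activations. The model choice, layer, stimulus tensors, and labels below are illustrative assumptions, not the study's actual setup.

```python
# Sketch: extract intermediate CNN activations and test whether a linear
# classifier separates animate from inanimate shape-matched stimuli.
# Placeholder images/labels; pretrained weights would be used in practice.
import torch
from torchvision.models import alexnet
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

model = alexnet(weights=None).eval()   # weights=None keeps the sketch offline
images = torch.randn(48, 3, 224, 224)  # stand-in for shape-matched stimuli
animacy = [0, 1] * 24                  # animate vs. inanimate labels

with torch.no_grad():
    feats = model.features(images).flatten(1).numpy()

clf = LogisticRegression(max_iter=1000)
print("CNN-feature animacy decoding:",
      cross_val_score(clf, feats, animacy, cv=4).mean())
```

Above-chance decoding from such features would indicate that the stimuli are not fully matched on CNN-visible visual features, which is the contrast the abstract draws with the behavioural matching measure.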


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Xin Mao ◽  
Jun Kang Chow ◽  
Pin Siang Tan ◽  
Kuan-fu Liu ◽  
Jimmy Wu ◽  
...  

Automatic bird detection in ornithological analyses is limited by the accuracy of existing models, due to the lack of training data and the difficulty of extracting the fine-grained features required to distinguish bird species. Here we apply the domain randomization strategy to enhance the accuracy of deep learning models in bird detection. Trained with virtual birds of sufficient variation in different environments, the model tends to focus on the fine-grained features of birds and achieves higher accuracy. Based on 100 terabytes of continuous two-month monitoring data of egrets, our results reproduce findings obtained with conventional manual observation, e.g., the vertical stratification of egrets according to body size, and also open up opportunities for long-term bird surveys that require monitoring too intensive to be practical with conventional methods, e.g., the influence of weather on egrets and the relationship between the migration schedules of great egrets and little egrets.
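One common way to realise domain randomization for detection is to composite rendered "virtual birds" onto varied backgrounds with random pose, scale, and placement, so the detector cannot rely on background context. The sketch below illustrates that idea with Pillow; the file paths, parameter ranges, and compositing scheme are hypothetical and not taken from the paper.

```python
# Sketch of domain randomization: paste a virtual bird onto a background with
# random scale, rotation, and position; the paste box doubles as ground truth.
# Paths and ranges are hypothetical.
import random
from PIL import Image

def randomized_sample(bird_path, background_path, out_size=(640, 480)):
    background = Image.open(background_path).convert("RGB").resize(out_size)
    bird = Image.open(bird_path).convert("RGBA")

    # Random scale and rotation of the virtual bird
    scale = random.uniform(0.1, 0.5)
    bird = bird.resize((int(bird.width * scale), int(bird.height * scale)))
    bird = bird.rotate(random.uniform(-30, 30), expand=True)

    # Random placement (clamped so the bird stays inside the frame)
    x = random.randint(0, max(0, out_size[0] - bird.width))
    y = random.randint(0, max(0, out_size[1] - bird.height))
    background.paste(bird, (x, y), bird)
    bbox = (x, y, x + bird.width, y + bird.height)
    return background, bbox

# image, box = randomized_sample("virtual_bird.png", "marsh_background.jpg")
```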


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4486
Author(s):  
Niall O’Mahony ◽  
Sean Campbell ◽  
Lenka Krpalkova ◽  
Anderson Carvalho ◽  
Joseph Walsh ◽  
...  

Fine-grained change detection in sensor data is very challenging for artificial intelligence, though it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review focuses on the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. Our research focuses on methods of harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence. We review methods for transforming and projecting the embedding space so that significant changes can be communicated more effectively and a more comprehensive interpretation of the underlying relationships in sensor data is facilitated. We conduct this research as part of our work towards developing a method for aligning the axes of the latent embedding space with meaningful real-world metrics, so that the reasoning behind the detection of a change relative to past observations can be revealed and adjusted. This is an important topic in many fields concerned with producing more meaningful and explainable outputs from deep learning, and also for providing means for knowledge injection and model calibration in order to maintain user confidence.
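One simple way to align embedding axes with real-world metrics, offered here only as an illustration of the general idea rather than the authors' method, is a cross-decomposition such as partial least squares between the learned embeddings and known measurements, so that the projected components co-vary with interpretable quantities. The arrays below are placeholders.

```python
# Sketch: align latent embedding axes with a measured quantity via PLS, so a
# detected change can be reported along interpretable axes. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
embeddings = rng.normal(size=(500, 128))               # representation-learning outputs
metric = embeddings[:, :2] @ rng.normal(size=(2, 1))   # hypothetical ground-truth measurement

pls = PLSRegression(n_components=2).fit(embeddings, metric)
aligned = pls.transform(embeddings)  # components ordered by covariance with the metric

# A change between past and present observations can then be described along
# axes tied to the measured quantity rather than arbitrary latent directions.
print("alignment R^2:", pls.score(embeddings, metric))
```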

