Semantic Structural Alignment of Neural Representational Spaces Enables Translation between English and Chinese Words

2016 ◽  
Vol 28 (11) ◽  
pp. 1749-1759 ◽  
Author(s):  
Benjamin D. Zinszer ◽  
Andrew J. Anderson ◽  
Olivia Kang ◽  
Thalia Wheatley ◽  
Rajeev D. S. Raizada

Two sets of items can share the same underlying conceptual structure, while appearing unrelated at a surface level. Humans excel at recognizing and using alignments between such underlying structures in many domains of cognition, most notably in analogical reasoning. Here we show that structural alignment reveals how different people's neural representations of word meaning are preserved across different languages, such that patterns of brain activation can be used to translate words from one language to another. Groups of Chinese and English speakers underwent fMRI scanning while reading words in their respective native languages. Simply by aligning structures representing the two groups' neural semantic spaces, we successfully infer all seven Chinese–English word translations. Beyond language translation, conceptual structural alignment underlies many aspects of high-level cognition, and this work opens the door to deriving many such alignments directly from neural representational content.
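The structural-alignment idea can be sketched computationally: if two groups' word-by-word neural similarity structures are preserved across languages, matching the structures alone recovers which word corresponds to which. The toy sketch below is an illustrative assumption, not the authors' pipeline: it fabricates activation patterns, scrambles one group's word order, and brute-forces the permutation whose similarity structure best matches (feasible for seven items).

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Hypothetical stand-ins for group-averaged fMRI patterns:
# rows = 7 words, columns = voxels. Real data would be noisy; here the
# second group is an exact reordering so the recovery is unambiguous.
english_patterns = rng.normal(size=(7, 50))
chinese_patterns = english_patterns[[2, 0, 1, 4, 3, 6, 5]]  # unknown word order

def similarity_structure(patterns):
    # Word-by-word correlation matrix: the "semantic structure"
    return np.corrcoef(patterns)

def align(struct_a, struct_b):
    # Brute-force the permutation of B's words whose similarity
    # structure best matches A's (7 items -> 5040 candidates).
    n = struct_a.shape[0]
    iu = np.triu_indices(n, k=1)
    best_perm, best_score = None, -np.inf
    for perm in permutations(range(n)):
        p = list(perm)
        permuted = struct_b[np.ix_(p, p)]
        score = np.corrcoef(struct_a[iu], permuted[iu])[0, 1]
        if score > best_score:
            best_perm, best_score = p, score
    return best_perm, best_score

perm, score = align(similarity_structure(english_patterns),
                    similarity_structure(chinese_patterns))
print(perm)  # the inverse of the scrambling above: [1, 2, 0, 4, 3, 6, 5]
```

Only the relational structure (the correlation matrices) is compared, never the raw patterns, which is what lets the method cross between speakers and languages.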

2019 ◽  
Author(s):  
Marc Brysbaert ◽  
Emmanuel Keuleers ◽  
Paweł Mandera

We present a new dataset of English word recognition times for a total of 62 thousand words, called the English Crowdsourcing Project. The data were collected via an internet vocabulary test, in which more than one million people participated. The present dataset is limited to native English speakers. Participants were asked to indicate which words they knew. Their response times were registered, although at no point were the participants asked to respond as fast as possible. Still, the response times correlate around .75 with the response times of the English Lexicon Project for the shared words. The results of virtual experiments also indicate that the new response times are a valid addition to the English Lexicon Project. This not only means that we have useful response times for some 35 thousand extra words, but also that we now have data on differences in response latencies as a function of education and age.


2021 ◽  
Vol 12 ◽  
Author(s):  
Yong Zhang ◽  
Yuwen Wen ◽  
Min Hou

Previous studies on the Structural Alignment Model suggest that people compare the alignable attributes and nonalignable attributes during the decision-making and preference-formation processes. Alignable attributes are easier to process and more effective for cue extraction. Thus, it is believed that people rely more on alignable than nonalignable attributes when comparing alternatives. This article proposes that consumers' product experience and personal characteristics also play a significant role in regulating consumers' reliance on attribute alignability. The authors conducted three experiments to examine the moderating role of consumers' product familiarity and self-construal in the impact of attribute alignability on consumer product purchase. The results show the following: (1) When making a purchase decision, consumers with a high level of product familiarity rely more on nonalignable attributes, while those with a low level of product familiarity rely more on alignable attributes. (2) The difference in consumers' dependency on attribute alignability is driven by their perceived diagnosticity of the attributes. (3) The dependency of consumers with different levels of familiarity on attribute alignability is further influenced by consumers' self-construal. Individuals with interdependent self-construal rely more on alignable attributes when unfamiliar with the product, while relying more on nonalignable attributes when familiar with the product. Individuals with independent self-construal, however, rely more on nonalignable attributes regardless of the degree of product familiarity. The conclusions of this paper can serve as a reference for enterprises when establishing product positioning and communication strategies.


2019 ◽  
Author(s):  
Rosemary Cowell ◽  
Morgan Barense ◽  
Patrick Sadil

Thanks to patients Phineas Gage and Henry Molaison, we have long known that behavioral control depends on the frontal lobes, whereas declarative memory depends on the medial temporal lobes. For decades, cognitive functions – behavioral control, declarative memory – have served as labels for characterizing the division of labor in cortex. This approach has made enormous contributions to understanding how the brain enables the mind, providing a systems-level explanation of brain function that constrains lower-level investigations of neural mechanism. Today, the approach has evolved such that functional labels are often applied to brain networks rather than focal brain regions. Furthermore, the labels have diversified to include both broadly-defined cognitive functions (declarative memory, visual perception) and more circumscribed mental processes (recollection, familiarity, priming). We ask whether a process – a high-level mental phenomenon corresponding to an introspectively-identifiable cognitive event – is the most productive label for dissecting memory. For example, the process of recollection conflates a neurocomputational operation (pattern completion-based retrieval) with a class of representational content (associative, high-dimensional, episodic-like memories). Because a full theory of memory must identify operations and representations separately, and specify how they interact, we argue that processes like recollection constitute inadequate labels for characterizing neural mechanisms. Instead, we advocate considering the component operations and representations of mnemonic processes in isolation, when examining their neural underpinnings. For the neuroanatomical organization of memory, the evidence suggests that pattern completion is recapitulated widely across cortex, but the division of labor between cortical sites can be explained by representational content.


1991 ◽  
Vol 12 (1) ◽  
pp. 47-73 ◽  
Author(s):  
Yoshinori Sasaki

In an experiment based on the competition model, 12 native Japanese speakers (J1 group) and 12 native English speakers studying Japanese (JFL group) were requested to report sentence subjects after listening to Japanese word strings which consisted of one verb and two nouns each. Similarly, 12 native English speakers (E1 group) and 12 native Japanese speakers studying English (EFL group) reported the sentence subjects of English word strings. In each word string, syntactic (word order) cues and lexical-semantic (animacy/inanimacy) cues converged or diverged as to the assignment of the sentence subjects. The results show that JFL-Ss (experimental subjects) closely approximated the response patterns of J1-Ss, while EFL-Ss showed evidence of transfer from their first language, Japanese. The results are consistent with the developmental precedence of a meaning-based comprehension strategy over a grammar-based one.


2010 ◽  
Vol 13 (1) ◽  
pp. 99-117 ◽  
Author(s):  
Nadya Dich

The study attempts to investigate factors underlying the development of spellers’ sensitivity to phonological context in English. Native English speakers and Russian speakers of English as a second language (ESL) were tested on their ability to use information about the coda to predict the spelling of vowels in English monosyllabic nonwords. In addition, the study assessed the participants’ spelling proficiency as their ability to correctly spell commonly misspelled words (Russian participants were assessed in both Russian and English). Both native and non-native English speakers were found to rely on the information about the coda when spelling vowels in nonwords. In both native and non-native speakers, context sensitivity was predicted by English word spelling; in Russian ESL speakers this relationship was mediated by English proficiency. L1 spelling proficiency did not facilitate L2 context sensitivity in Russian speakers. The results speak against a common factor underlying different aspects of spelling proficiency in L1 and L2 and in favor of the idea that spelling competence comprises different skills in different languages.


1973 ◽  
Vol 8 (11) ◽  
pp. 11
Author(s):  
J. W. Anderberg ◽  
C. L. Smith

2008 ◽  
Vol 55 (1) ◽  
pp. 20-28 ◽  
Author(s):  
Katijah Khoza ◽  
Lebogang Ramma ◽  
Munyane Mophosho ◽  
Duduetsang Moroka

The purpose of this study was to establish whether digit stimuli offer a more accurate measure for Speech Reception Threshold (SRT) testing when assessing first-language Tswana (Setswana), second-language English speakers, as compared to an English word list (CID W-1) and a Tswana word list. Forty Tswana first-language speakers (17 males and 23 females) aged between 18 and 25 years participated in this study. All participants were undergraduate students at a tertiary institution in Johannesburg, Gauteng. This study utilized a quantitative single-group correlation design which allowed for a comparison between three SRT scores (CID-SRT, T-SRT, and D-SRT). Participants underwent basic audiological assessment procedures comprising otoscopy, tympanometry, conventional pure-tone audiometry, and SRT testing. SRT measures were established using monitored live-voice testing. Basic audiometric data were descriptively analyzed to ensure that hearing function was within normal limits, and PTA-SRT averages and means were calculated. Furthermore, analysis of the SRT-PTA correlation data was conducted using the non-parametric Spearman's correlation coefficient and linear regression. Results from this study were statistically significant (p < .05) and indicated that digit-pairs were not the most effective stimuli for establishing SRT, compared to the CID W-1 and Tswana word lists. On the contrary, findings of the current study revealed that the PTA-SRT correlation was strongest in Tswana (r = 0.62), followed very closely by CID W-1 (r = 0.61), and lastly digit-pairs (r = 0.60). The results nevertheless confirm the efficacy of using digit pairs as alternative stimuli when more appropriate speech stimuli for the establishment of SRT are unavailable, as the correlation between SRT for digit pairs and PTA was also a strong one (r = 0.60).
Linear regression analyses indicated that all three lists were acceptable speech stimuli for the population under investigation, with the standard error of estimate being significantly smaller than the 5 dB step used to collect the data (1.62 for Tswana, 3.56 for CID W-1, and 3.80 for digit-pairs).
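The two statistics this abstract leans on can be sketched in a few lines. The sketch below uses invented PTA/SRT pairs (the study collected real measurements from 40 participants): Spearman's coefficient is the Pearson correlation of rank-transformed data, and the standard error of estimate (SEE) is the typical residual, in dB, around the fitted regression line.

```python
import numpy as np

# Hypothetical paired measurements in dB HL, for illustration only.
pta = np.array([5., 8., 10., 7., 12., 6., 9., 11., 4., 10.])
srt = np.array([6., 10., 12., 8., 14., 5., 10., 12., 6., 11.])

def spearman_r(x, y):
    # Spearman's rho = Pearson correlation of the ranks.
    # (Ties are broken by position here; statistical packages
    # use midranks, which matters only when ties are frequent.)
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def standard_error_of_estimate(x, y):
    # Fit SRT = a*PTA + b by least squares; the SEE is the typical
    # prediction error in the same dB units, with n-2 df.
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    return np.sqrt(np.sum(residuals**2) / (len(x) - 2))

rho = spearman_r(pta, srt)
see = standard_error_of_estimate(pta, srt)
print(round(rho, 2), round(see, 2))
```

An SEE well under the 5 dB testing step, as reported for all three lists, is what justifies calling a list an acceptable SRT stimulus for this population.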


2021 ◽  
Author(s):  
Rohan Saha ◽  
Jennifer Campbell ◽  
Janet F. Werker ◽  
Alona Fyshe

Infants start developing rudimentary language skills and can begin understanding simple words well before their first birthday. Evidence for this development has come primarily from Event-Related Potential (ERP) studies of word comprehension in the infant brain. While these works validate the presence of semantic representations of words (word meaning) in infants, they do not tell us about the mental processes involved in the manifestation of these semantic representations, or about the content of the representations. To this end, we use a decoding approach in which we apply machine learning techniques to Electroencephalography (EEG) data to predict the semantic representations of words from the brain activity of infants. We perform multiple analyses to explore word semantic representations in two groups of infants (9-month-olds and 12-month-olds). Our analyses show significantly above-chance decodability of overall word semantics, word animacy, and word phonetics. Participants in both age groups show signs of word comprehension immediately after word onset, marked by our model's significantly above-chance word prediction accuracy. We also observed strong neural representations of word phonetics in the brain data for both age groups, some likely correlated with word decoding accuracy and others not. Lastly, we find that the neural representations of word semantics are similar in both infant age groups. Our results on the decodability of word semantics, phonetics, and animacy give insights into the evolution of the neural representation of word meaning in infants.
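A common shape for this kind of decoding pipeline (a general sketch, not necessarily the authors' exact method) is to map EEG features to word embeddings with ridge regression and score held-out predictions with a 2-vs-2 test: for each pair of held-out words, the correct pairing of predictions to targets should be closer than the swapped pairing, giving 50% chance. All data below are synthetic with a planted linear signal, just to exercise the pipeline.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical shapes: 60 word presentations, 40 EEG features
# (e.g. channels x time windows, flattened), 50-dim word embeddings.
# Real infant EEG is far noisier than this toy signal.
n_trials, n_feat, n_dim = 60, 40, 50
true_map = rng.normal(size=(n_feat, n_dim))
eeg = rng.normal(size=(n_trials, n_feat))
embeddings = eeg @ true_map + rng.normal(size=(n_trials, n_dim))

def ridge_fit(X, Y, lam=1.0):
    # Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def two_vs_two_accuracy(pred, truth):
    # Correct pairing should beat the swapped pairing, pairwise.
    hits, total = 0, 0
    for i, j in combinations(range(len(pred)), 2):
        correct = np.linalg.norm(pred[i] - truth[i]) + np.linalg.norm(pred[j] - truth[j])
        swapped = np.linalg.norm(pred[i] - truth[j]) + np.linalg.norm(pred[j] - truth[i])
        hits += correct < swapped
        total += 1
    return hits / total

train, test = slice(0, 50), slice(50, 60)
W = ridge_fit(eeg[train], embeddings[train])
pred = eeg[test] @ W
acc = two_vs_two_accuracy(pred, embeddings[test])
print(acc)  # well above the 0.5 chance level for this planted signal
```

Decoding animacy or phonetics instead of full semantics amounts to swapping the embedding targets for the corresponding label or feature vectors, with the same train/test logic.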


Loquens ◽  
2019 ◽  
Vol 6 (1) ◽  
pp. 061
Author(s):  
Rana Almbark ◽  
Nadia Bouchhioua ◽  
Sam Hellmuth

This paper asks whether there is an ‘interlanguage intelligibility benefit’ in perception of word-stress, as has been reported for global sentence recognition. L1 English listeners, and L2 English listeners who are L1 speakers of Arabic dialects from Jordan and Egypt, performed a binary forced-choice identification task on English near-minimal pairs (such as [ˈɒbdʒɛkt] ~ [əbˈdʒɛkt]) produced by an L1 English speaker and two L2 English speakers from Jordan and Egypt respectively. The results show an overall advantage for L1 English listeners, which replicates the findings of an earlier study for general sentence recognition, and which is also consistent with earlier findings that L1 listeners rely more on structural knowledge than on acoustic cues in stress perception. Non-target-like L2 productions of words with final stress (which are primarily cued in L1 production by vowel reduction in the initial unstressed syllable) were less accurately recognized by L1 English listeners than by L2 listeners, but there was no evidence of a generalized advantage for L2 listeners in response to other L2 stimuli.

