Neural oscillations reflect meaning identification for novel words in context

2021 ◽  
pp. 1-40
Author(s):  
Jacob Pohaku Momsen ◽  
Alyson D. Abel

Abstract During language processing, people make rapid use of contextual information to promote comprehension of upcoming words. When new words are learned implicitly, information contained in the surrounding context can provide constraints on their possible meaning. In the current study, EEG was recorded as participants listened to a series of three sentences, each containing an identical target pseudoword, with the aim of using contextual information in the surrounding language to identify a meaning representation for the novel word. In half of the trials, sentences were semantically coherent so that participants could develop a single representation for the novel word that fit all contexts. Other trials contained unrelated sentence contexts so that meaning associations were not possible. We observed greater theta band enhancement over left-hemisphere central and posterior electrodes in response to pseudowords processed across semantically related compared to unrelated contexts. Additionally, relative alpha and beta band suppression was increased prior to pseudoword onset in trials where contextual information more readily promoted pseudoword-meaning associations. Under the hypothesis that theta enhancement indexes processing demands during lexical access, the current study provides evidence for selective online memory retrieval for novel words learned implicitly in a spoken context.
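
As a rough illustration of the kind of band-power contrast described above, the sketch below (Python with NumPy/SciPy) estimates per-channel theta power via a bandpass filter and Hilbert envelope and contrasts the two context conditions. The sampling rate, band edges, and array layout are assumptions for illustration; the authors' actual time-frequency pipeline is not specified here.

```python
# Minimal illustrative sketch, not the authors' pipeline: theta-band (4-8 Hz) power
# per trial via bandpass filtering and the Hilbert envelope.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 500  # assumed sampling rate in Hz

def theta_power(epoch, fs=FS, band=(4.0, 8.0)):
    """Mean theta power for one epoch of shape (n_channels, n_samples)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epoch, axis=-1)       # theta-band signal per channel
    envelope = np.abs(hilbert(filtered, axis=-1))   # instantaneous amplitude
    return (envelope ** 2).mean(axis=-1)            # mean power per channel

# related, unrelated: hypothetical arrays of shape (n_trials, n_channels, n_samples)
# theta_diff = (np.mean([theta_power(e) for e in related], axis=0)
#               - np.mean([theta_power(e) for e in unrelated], axis=0))
# positive values at a channel would index the reported theta enhancement
```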

2014 ◽  
Vol 23 (2) ◽  
pp. 120-133 ◽  
Author(s):  
Kathryn W. Brady ◽  
Judith C. Goodman

Purpose The authors of this study examined whether the type and number of word-learning cues affect how children infer and retain word-meaning mappings and whether the use of these cues changes with age. Method Forty-eight 18- to 36-month-old children with typical language participated in a fast-mapping task in which 6 novel words were presented with 3 types of cues to the words' referents, either singly or in pairs. One day later, children were tested for retention of the novel words. Results By 24 months of age, children correctly inferred the referents of the novel words at a significant level. Children retained the meanings of words at a significant rate by 30 months of age. Children retained the first 3 of the 6 word-meaning mappings by 24 months of age. For both fast mapping and retention, the efficacy of different cue types changed with development, but children were equally successful whether the novel words were presented with 1 or 2 cues. Conclusion The type of information available to children at fast mapping affects their ability to both form and retain word-meaning associations. Providing children with more information in the form of paired cues had no effect on either fast mapping or retention.


Author(s):  
Timo Schick ◽  
Hinrich Schütze

Word embeddings are a key component of high-performing natural language processing (NLP) systems, but it remains a challenge to learn good representations for novel words on the fly, i.e., for words that did not occur in the training data. The general problem setting is that word embeddings are induced on an unlabeled training corpus and then a model is trained that embeds novel words into this induced embedding space. Currently, two approaches for learning embeddings of novel words exist: (i) learning an embedding from the novel word’s surface-form (e.g., subword n-grams) and (ii) learning an embedding from the context in which it occurs. In this paper, we propose an architecture that leverages both sources of information – surface-form and context – and show that it results in large increases in embedding quality. Our architecture obtains state-of-the-art results on the Definitional Nonce and Contextual Rare Words datasets. As input, we only require an embedding set and an unlabeled corpus for training our architecture to produce embeddings appropriate for the induced embedding space. Thus, our model can easily be integrated into any existing NLP system and enhance its capability to handle novel words.
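
The sketch below is a schematic of the general form-plus-context idea, not the authors' exact architecture: a surface-form estimate built from subword n-gram embeddings and a context estimate built from averaged pre-trained embeddings are mixed by a learned gate. The class name, gating scheme, and dimensions are illustrative assumptions (PyTorch).

```python
# Schematic sketch of combining surface-form and context signals for a novel word.
import torch
import torch.nn as nn

class FormContextEmbedder(nn.Module):
    def __init__(self, ngram_vocab_size, dim):
        super().__init__()
        self.ngram_emb = nn.Embedding(ngram_vocab_size, dim)  # surface-form: subword n-grams
        self.gate = nn.Linear(2 * dim, 1)                      # learned mixing weight

    def forward(self, ngram_ids, context_vectors):
        # ngram_ids: (n_ngrams,) indices of the novel word's character n-grams
        # context_vectors: (n_contexts, dim) pre-trained embeddings of surrounding words
        form = self.ngram_emb(ngram_ids).mean(dim=0)           # form-based estimate
        ctx = context_vectors.mean(dim=0)                      # context-based estimate
        alpha = torch.sigmoid(self.gate(torch.cat([form, ctx])))
        return alpha * form + (1 - alpha) * ctx                # vector in the induced space

# embedder = FormContextEmbedder(ngram_vocab_size=10000, dim=300)
# vec = embedder(torch.tensor([3, 17, 42]), torch.randn(5, 300))
```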


2021 ◽  
Author(s):  
Siying Xie ◽  
Stefanie Hoehl ◽  
Merle Moeskops ◽  
Ezgi Kayhan ◽  
Christian Kliesch ◽  
...  

Visual categorization is a core human cognitive capacity that depends on the development of visual category representations in the infant brain. The nature of infant visual category representations and their relationship to the corresponding adult form, however, remain unknown. Our results clarify the nature of visual category representations in 6- to 8-month-old infants and their developmental trajectory towards adult maturity in three key characteristics: temporal dynamics, representational format, and spectral properties. Temporal dynamics change from slowly emerging, developing representations in infants to quickly emerging, complex representations in adults. Despite these differences, infants and adults already partly share visual category representations. The format of infants' representations comprises visual features of low to intermediate complexity, whereas adults' representations also encode high-complexity features. Theta band neural oscillations form the basis of visual category representations in infants, whereas these representations shift to the alpha/beta band in adults.
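
One common way to quantify how quickly such category representations emerge is time-resolved decoding of category labels from the neural signal. The sketch below (scikit-learn, with hypothetical epoch and label arrays) is an illustrative example of that general approach, not the authors' analysis.

```python
# Hypothetical sketch of time-resolved category decoding from EEG epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_over_time(epochs, labels, cv=5):
    """epochs: (n_trials, n_channels, n_times); labels: (n_trials,) category codes.
    Returns cross-validated decoding accuracy at each time point."""
    n_times = epochs.shape[2]
    accs = np.empty(n_times)
    for t in range(n_times):
        X = epochs[:, :, t]                        # channel pattern at one time point
        clf = LogisticRegression(max_iter=1000)
        accs[t] = cross_val_score(clf, X, labels, cv=cv).mean()
    return accs  # accuracy rising above chance marks when category information emerges
```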


2020 ◽  
Author(s):  
Barbara Pomiechowska ◽  
Gergely Csibra

Whether young infants can exploit socio-pragmatic information to interpret new words is a matter of debate. Based on findings and theories from the action interpretation literature, we hypothesized that 12-month-olds should distinguish communicative object-directed actions expressing reference from instrumental object-directed actions indicative of one’s goals, and selectively use the former to identify referents of novel linguistic expressions. This hypothesis was tested across four eye-tracking experiments. Infants watched pairs of unfamiliar objects, one of which was first targeted by either a communicative action (e.g., pointing) or an instrumental action (e.g., grasping) and then labeled with a novel word. As predicted, infants fast-mapped the novel words onto the targeted objects after pointing (Experiments 1 and 4) but not after grasping (Experiment 2) unless the grasping action was preceded by an ostensive signal (Experiment 3). Moreover, whenever infants mapped a novel word onto the object indicated by a communicative action, they tended to map a different novel word onto the distractor object, displaying a mutual exclusivity effect. This reliance on nonverbal action interpretation in the disambiguation of novel words indicates that socio-pragmatic inferences about reference likely supplement associative and statistical learning mechanisms from the outset of word learning.


2019 ◽  
Author(s):  
K. Weber ◽  
A. Meyer ◽  
P. Hagoort

Abstract Language processing often involves learning new words and how they relate to each other. These relations are realized through syntactic information connected to a word; e.g., a word can be a verb or a noun, or both, like the word 'run'. In a behavioral and an fMRI task, we showed that words and their syntactic properties, i.e., lexical items that were either syntactically ambiguous or unambiguous, can be learned through the probabilities of co-occurrence in an exposure session and subsequently used in a production task. Novel words were processed within regions of the language network (left inferior frontal and posterior middle temporal gyrus), and more syntactic options led to higher activation within these regions, even when the words were shown in isolation, suggesting a combined lexical-syntactic representation. When words were shown in untrained grammatical contexts, activation in the left inferior frontal cortex increased, which might reflect competition between the newly learned representation and the presented information. The results elucidate the lexical nature of the neural representations of lexical-syntactic information within the language network and the specific role of the left inferior frontal cortex in unifying novel words with the surrounding context.


2019 ◽  
Vol 39 (5) ◽  
pp. 508-526 ◽  
Author(s):  
Kirsten Read ◽  
Erin Furay ◽  
Dana Zylstra

Preschoolers can learn vocabulary through shared book reading, especially when given the opportunity to predict and/or reflect on the novel words encountered in the story. Readers often pause and encourage children to guess or repeat novel words during shared reading, and prior research has suggested a positive correlation between how much readers dramatically pause and how well words are later retained. This experimental study of 60 3- to 5-year-olds compared the effects of placing pauses before target words to encourage predictions, placing pauses after target words to encourage reflection, or not pausing at all on children’s retention of novel monster names in a rhymed storybook. Children who heard dramatic pauses that invited prediction before the monsters were named identified more at test than children who heard either post-target pauses or the story read verbatim. In addition, there was an interaction between pre- vs. post-target pausing and whether the pauses were silent or replaced with an eliciting question, such that silent pauses were more effective before the target words, while eliciting questions were more effective after. Overall, dramatic silent pauses before new words in a story were found to best help children attend to and remember those new words.


2021 ◽  
pp. 1-17
Author(s):  
J. Shobana ◽  
M. Murali

Text sentiment analysis is the process of predicting whether a segment of text is opinionated or objective and of analyzing the polarity of its sentiment. Understanding the needs and behavior of target customers plays a vital role in the success of a business, so sentiment analysis helps marketers improve product quality and helps shoppers choose the right product. Because of its capacity for automatic learning, deep learning is a current research focus in natural language processing. The proposed model uses a skip-gram architecture to better extract the semantic relationships and contextual information of words. The main contribution of this work, however, is an Adaptive Particle Swarm Optimization (APSO)-based LSTM for sentiment analysis. The LSTM captures complex patterns in textual data, and its weight parameters are tuned by the adaptive PSO algorithm. Combining the opposition-based learning (OBL) method with PSO yields the APSO optimizer, which assists the LSTM in selecting optimal weights in fewer iterations. The ability of APSO-LSTM to adjust attributes such as optimal weights and learning rates, combined with good hyperparameter choices, improves accuracy and reduces losses. Extensive experiments conducted on four datasets showed that the proposed APSO-LSTM model achieves higher accuracy than classical methods such as traditional LSTM, ANN, and SVM. According to the simulation results, the proposed model outperforms other existing models.
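
As a hedged illustration of the weight-search idea, the sketch below implements a plain particle swarm optimizer over a generic weight vector. It omits the opposition-based learning step that makes the authors' variant adaptive, and `loss_fn` stands in for whatever validation loss an LSTM would return for a candidate weight setting; all parameter names are illustrative.

```python
# Simplified PSO sketch (not the authors' APSO): minimize loss_fn over a weight vector.
import numpy as np

def pso_minimize(loss_fn, dim, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.normal(size=(n_particles, dim))        # candidate weight vectors
    vel = np.zeros_like(pos)
    pbest = pos.copy()                               # each particle's best position so far
    pbest_loss = np.array([loss_fn(p) for p in pos])
    gbest = pbest[pbest_loss.argmin()].copy()        # global best position
    gbest_loss = pbest_loss.min()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        losses = np.array([loss_fn(p) for p in pos])
        improved = losses < pbest_loss
        pbest[improved] = pos[improved]
        pbest_loss[improved] = losses[improved]
        if pbest_loss.min() < gbest_loss:
            gbest_loss = pbest_loss.min()
            gbest = pbest[pbest_loss.argmin()].copy()
    return gbest

# Toy usage: a quadratic standing in for an LSTM validation loss
# best_weights = pso_minimize(lambda wv: np.sum((wv - 1.0) ** 2), dim=10)
```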


Author(s):  
Sharry Shakory ◽  
Xi Chen ◽  
S. Hélène Deacon

Purpose The value of shared reading as an opportunity for learning word meanings, or semantics, is well established; it is less clear whether children learn about the orthography, or word spellings, in this context. We tested whether children can learn the spellings and meanings of new words at the same time during a tightly controlled shared reading session. We also examined whether individual differences in either or both of orthographic and semantic learning during shared reading in English were related to word reading in English and French concurrently and 6 months longitudinally in emergent English–French bilinguals. Method Sixty-two Grade 1 children (35 girls; M age = 75.89 months) listened to 12 short stories, each containing four instances of a novel word, while the examiner pointed to the text. Choice measures of the spellings and meanings of the novel words were completed immediately after reading each set of three stories and again 1 week later. Standardized measures of word reading as well as controls for nonverbal reasoning, vocabulary, and phonological awareness were also administered. Results Children scored above chance on both immediate and delayed measures of orthographic and semantic learning. Orthographic learning was related to both English and French word reading at the same time point and 6 months later. In contrast, the relations between semantic learning and word reading were nonsignificant for both languages after including controls. Conclusion Shared reading is a valuable context for learning both word meanings and spellings, and the learning of orthographic representations in particular is related to word reading abilities. Supplemental Material https://doi.org/10.23641/asha.13877999


Author(s):  
Anne Cutler

Abstract Listeners learn from their past experience of listening to spoken words, and use this learning to maximise the efficiency of future word recognition. This paper summarises evidence that the facilitatory effects of drawing on past experience are mediated by abstraction, enabling learning to be generalised across new words and new listening situations. Phoneme category retuning, which allows adaptation to speaker-specific articulatory characteristics, is generalised on the basis of relatively brief experience to words previously unheard from that speaker. Abstract knowledge of prosodic regularities is applied to recognition even of novel words for which these regularities were violated. Prosodic word-boundary regularities drive segmentation of speech into words independently of the membership of the lexical candidate set resulting from the segmentation operation. Each of these different cases illustrates how abstraction from past listening experience has contributed to the efficiency of lexical recognition.


Author(s):  
Ehsan T. Esfahani ◽  
Shrey Pareek ◽  
Pramod Chembrammel ◽  
Mostafa Ghobadi ◽  
Thenkurussi Kesavadas

Recognition of a user's mental engagement is imperative to the success of robotic rehabilitation. This paper explores a novel paradigm in robotic rehabilitation: using passive BCIs as opposed to conventional active ones. We designed experiments to determine a user's level of mental engagement. In our experimental study, we recorded the brain activity of 3 healthy subjects during multiple sessions in which they navigated a maze using a haptic system with variable resistance/assistance. Using the data obtained through these experiments, we highlight the drawbacks of using conventional workload metrics as indicators of human engagement, arguing that motor and cognitive workloads should be differentiated. Additionally, we propose a new set of features, the differential PSD of Cz-POz in the alpha, beta, and sigma bands (mental engagement) and the relative C3-C4 power in the beta band (motor workload), to distinguish normal cases from instances in which haptic resistance/assistance was applied, with an accuracy of 92.93%. Mental engagement is calculated using the power spectral density of the theta band (4-7 Hz) at the parietal midline (Pz) with respect to the central midline (Cz). This information can be used to adjust robotic rehabilitation parameters in accordance with the user's needs; the adjustment may be to the force levels, the difficulty of the task, or the speed of the task.
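
The sketch below illustrates the kind of feature-plus-classifier pipeline described above: band-power difference features from the named channel pairs fed to a cross-validated classifier. Band edges, channel indexing, and the SVM choice are assumptions for illustration, not the authors' exact configuration.

```python
# Illustrative sketch: differential band-power features and a cross-validated classifier.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def band_power(x, fs, lo, hi):
    """Mean PSD of a 1-D signal within [lo, hi) Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def trial_features(eeg, fs, ch):
    """eeg: (n_channels, n_samples) for one trial; ch maps channel names to row indices."""
    feats = []
    for lo, hi in [(8, 12), (13, 30), (12, 16)]:                # alpha, beta, sigma (assumed edges)
        feats.append(band_power(eeg[ch["Cz"]], fs, lo, hi)
                     - band_power(eeg[ch["POz"]], fs, lo, hi))  # engagement-related features
    feats.append(band_power(eeg[ch["C3"]], fs, 13, 30)
                 - band_power(eeg[ch["C4"]], fs, 13, 30))       # motor-related feature
    return feats

# trials, labels, fs, ch are hypothetical inputs:
# X = np.array([trial_features(t, fs, ch) for t in trials])
# accuracy = cross_val_score(SVC(), X, labels, cv=5).mean()
```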

