word prediction
Recently Published Documents


TOTAL DOCUMENTS

148
(FIVE YEARS 48)

H-INDEX

15
(FIVE YEARS 2)

2022
Author(s):
Babu Chinta, Moorthi M

Abstract Brain-Computer Interface (BCI) is one of the fastest-growing technological trends and finds applications in the healthcare sector. In this work, 16 Electroencephalography (EEG) electrodes placed according to the 10-20 electrode system are used to acquire the EEG signals. A BCI for EEG-based imagined word prediction using a Convolutional Neural Network (CNN) is modeled and trained to recognize words imagined through the EEG brain signal, where the CNN models AlexNet and GoogLeNet recognize words imagined in response to visual stimuli, namely up, down, right, and left, extending to ten words in total. The performance metrics are improved by applying the Morlet continuous wavelet transform at the pre-processing stage and extracting seven features: mean, standard deviation, skewness, kurtosis, band power, root mean square, and Shannon entropy. In testing, the AlexNet transfer learning model outperformed the GoogLeNet transfer learning model, achieving an accuracy of 90.3% and recall, precision, and F1 score of 91.4%, 90%, and 90.7%, respectively, with seven extracted features. When the number of extracted features was reduced from seven to four, however, these metrics dropped to 83.8%, 84.4%, 82.9%, and 83.6%, respectively. This high accuracy paves the way for future work on cross-participant analysis with a larger number of participants and on enhancing the deep learning networks so that the system is suitable for EEG-based mobile applications, helping to identify the words that speech-disabled persons imagine uttering.
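The seven statistical features named in the abstract can be computed per EEG channel epoch in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the function name and the histogram-based Shannon-entropy estimate are assumptions, and band power is approximated here by mean signal power rather than power in a specific frequency band.

```python
import numpy as np

def eeg_features(x):
    """Seven summary features for one EEG channel epoch x (1-D array)."""
    mean = np.mean(x)
    std = np.std(x)
    z = (x - mean) / std
    skewness = np.mean(z ** 3)
    kurtosis = np.mean(z ** 4) - 3.0          # excess kurtosis
    band_power = np.mean(x ** 2)              # mean power, a simple band-power proxy
    rms = np.sqrt(band_power)
    # Shannon entropy of the amplitude distribution (histogram estimate)
    counts, _ = np.histogram(x, bins=32)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))
    return np.array([mean, std, skewness, kurtosis, band_power, rms, entropy])
```

Stacking this vector across 16 electrodes would give the per-trial feature matrix that is then rendered as a scalogram-style input for the CNN.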


2021
Author(s):
Babu Chinta, Moorthi M

Abstract Background: Brain-Computer Interface (BCI) is one of the fastest-growing technological trends and finds applications in the healthcare sector. In this work, 16 Electroencephalography (EEG) electrodes placed according to the 10-20 electrode system are used to acquire the EEG signals. A BCI for EEG-based imagined word prediction using a Convolutional Neural Network (CNN) is modeled and trained to recognize words imagined through the EEG brain signal, where the CNN models AlexNet and GoogLeNet recognize words imagined in response to visual stimuli, namely up, down, right, and left, extending to ten words in total. The performance metrics are improved by applying the Morlet continuous wavelet transform at the pre-processing stage and extracting seven features: mean, standard deviation, skewness, kurtosis, band power, root mean square, and Shannon entropy. Results: In testing, the AlexNet transfer learning model outperformed the GoogLeNet transfer learning model, achieving an accuracy of 90.3% and recall, precision, and F1 score of 91.4%, 90%, and 90.7%, respectively, with seven extracted features. When the number of extracted features was reduced from seven to four, however, these metrics dropped to 83.8%, 84.4%, 82.9%, and 83.6%, respectively. Conclusions: The AlexNet transfer learning model is selected as the best model compared to GoogLeNet, as it achieved an accuracy of 90.3% with the final training configuration: 80 epochs, a batch size of 64, the scalogram pre-processing method, an 80:20 training/validation split, and an initial learning rate of 0.0001.


2021
pp. 279-300
Author(s):
Akshay Kulkarni, Adarsha Shivananda, Anoosh Kulkarni

2021
Vol 2021
pp. 1-9
Author(s):
Khrystyna Shakhovska, Iryna Dumyn, Natalia Kryvinska, Mohan Krishna Kagita

Text generation, in particular next-word prediction, is convenient for users because it helps them type faster and with fewer errors. A personalized text prediction system is therefore a vital research topic for all languages, and especially for Ukrainian, given the limited tool support for the Ukrainian language. LSTM, Markov chains, and their hybrid were chosen for next-word prediction. Their sequential nature (the current output depends on previous ones) makes them well suited to the next-word prediction task. The Markov chains produced the fastest and adequate results. The hybrid model also produces adequate results, but it works slowly. Unlike T9, the model lets the user generate not only one word but several words, a sentence, or even several sentences.
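A first-order Markov chain for next-word prediction is straightforward to sketch. This is an illustrative toy, not the authors' system (which targets Ukrainian and hybridizes Markov chains with an LSTM); the class and method names are assumptions, and generation here greedily picks the most frequent successor rather than sampling.

```python
from collections import defaultdict, Counter

class MarkovPredictor:
    """First-order Markov chain over words: estimates P(next | current) from bigram counts."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, text):
        words = text.split()
        for cur, nxt in zip(words, words[1:]):
            self.counts[cur][nxt] += 1

    def predict(self, word):
        """Most likely next word, or None if `word` was never seen in training."""
        if word not in self.counts:
            return None
        return self.counts[word].most_common(1)[0][0]

    def generate(self, seed, n=10):
        """Greedily extend `seed` by up to n words."""
        out = [seed]
        for _ in range(n):
            nxt = self.predict(out[-1])
            if nxt is None:
                break
            out.append(nxt)
        return " ".join(out)
```

Higher-order chains (conditioning on the previous two or three words) trade coverage for precision; the hybrid described above would fall back to the LSTM where bigram counts are sparse.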


Author(s):
Petra van Alphen, Susanne Brouwer, Nina Davids, Emma Dijkstra, Paula Fikkert

Purpose: This study compares online word recognition and prediction in preschoolers with (a suspicion of) a developmental language disorder (DLD) and typically developing (TD) controls. Furthermore, it investigates correlations between these measures and the link between online and off-line language scores in the DLD group. Method: Using the visual world paradigm, Dutch children aged 3;6 (years;months) with (a suspicion of) DLD (n = 51) and TD peers (n = 31) listened to utterances such as "Kijk, een hoed!" (Look, a hat!) in a word recognition task, and sentences such as "Hé, hij leest gewoon een boek" (literally, Hey, he reads just a book) in a word prediction task, while watching a target and a distractor picture. Results: Both groups demonstrated a significant word recognition effect that looked similar directly after target onset. However, the DLD group looked longer at the target than the TD group and shifted more slowly from the distractor to the target pictures. Within the DLD group, word recognition was linked to off-line expressive language scores. For word prediction, the DLD group showed a smaller effect and slower shifts from verb onset compared to the TD group. Interestingly, prediction behavior within the DLD group varied considerably and was linked to receptive and expressive language scores. Finally, slower shifts in word recognition were related to smaller prediction effects. Conclusions: While the groups' word recognition abilities looked similar, differing only in processing speed and dwell time, the DLD group showed atypical verb-based prediction behavior. This may be due to limitations in their processing capacity and/or their linguistic knowledge, in particular of verb argument structure.


2021
Author(s):
Yuta Takahashi, Yohei Oseki, Hiromu Sakai, Michiru Makuuchi, Rieko Osu

Recently, a neuroscientific approach has revealed that humans understand language while subconsciously predicting the next word from the preceding context. Most studies of human word prediction have investigated correlations between brain activity, measured with functional magnetic resonance imaging (fMRI) while participants read or listen to sentences, and the predictive difficulty of each word as calculated by an N-gram language model. However, because of its low temporal resolution, fMRI is not optimal for identifying the changes in brain activity that accompany language comprehension. In addition, the N-gram language model is a simple computational structure that does not reflect the structure of the human brain. Furthermore, humans must retain information from further back than the preceding N-1 words in order to form a contextual understanding of a presented story. Therefore, in the present study, we measured brain activity using magnetoencephalography (MEG), which has a higher temporal resolution than fMRI, and calculated the predictive difficulty of words using a long short-term memory language model (LSTMLM), which is based on a neural network inspired by the structure of the human brain and retains information over longer spans than the N-gram language model. We then identified the brain regions involved in language prediction during Japanese speech listening using encoding and decoding analyses. In addition to the surprisal-related regions revealed in previous studies, such as the superior temporal gyrus, fusiform gyrus, and temporal pole, we also found relationships between surprisal and brain activity in other regions, including the insula, superior temporal sulcus, and middle temporal gyrus, which are believed to be involved in longer-term, sentence-level cognitive processing.
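Surprisal, the per-word predictive difficulty that such studies correlate with brain activity, is the negative log probability of a word given its context. The sketch below illustrates the quantity with a toy add-one-smoothed bigram model standing in for the study's LSTM language model; the function names and corpus are assumptions for illustration only.

```python
import math
from collections import defaultdict, Counter

def bigram_probs(corpus):
    """Add-one-smoothed bigram model P(w | prev) estimated from a token list."""
    vocab = set(corpus)
    counts = defaultdict(Counter)
    for prev, w in zip(corpus, corpus[1:]):
        counts[prev][w] += 1

    def prob(w, prev):
        total = sum(counts[prev].values())
        return (counts[prev][w] + 1) / (total + len(vocab))

    return prob

def surprisal(prob, sentence):
    """Surprisal -log2 P(w | prev) for each word after the first, in bits."""
    return [-math.log2(prob(w, prev)) for prev, w in zip(sentence, sentence[1:])]
```

An LSTM (or any autoregressive) language model plugs into the same formula: only the conditional probability estimate changes, conditioning on the full history rather than one preceding word.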


2021
Vol 9
pp. 160-175
Author(s):
Yanai Elazar, Shauli Ravfogel, Alon Jacovi, Yoav Goldberg

Abstract A growing body of work uses probing to investigate the workings of neural models, which are often considered black boxes. Recently, an ongoing debate has emerged over the limitations of the probing paradigm. In this work, we point out the inability to infer behavioral conclusions from probing results and offer an alternative method that focuses on how information is being used, rather than on what information is encoded. Our method, Amnesic Probing, follows the intuition that the utility of a property for a given task can be assessed by measuring the influence of a causal intervention that removes it from the representation. Equipped with this new analysis tool, we can ask questions that were not possible before, for example: is part-of-speech information important for word prediction? We perform a series of analyses on BERT to answer these types of questions. Our findings demonstrate that conventional probing performance is not correlated with task importance, and we call for increased scrutiny of claims that draw behavioral or causal conclusions from probing results.
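The causal intervention behind Amnesic Probing removes linearly encoded information by projecting representations onto the nullspace of directions a linear probe uses to predict the property. A single-step NumPy sketch of that projection is shown below; the actual method iterates this with freshly trained probes, and the function name is an assumption.

```python
import numpy as np

def remove_direction(X, w):
    """Project each row of X onto the hyperplane orthogonal to direction w,
    erasing the component of the representation that w linearly predicts."""
    w = np.asarray(w, dtype=float)
    w = w / np.linalg.norm(w)
    P = np.eye(w.size) - np.outer(w, w)   # orthogonal projection of rank d-1
    return X @ P
```

After the projection, a linear probe along `w` can no longer recover the property; comparing the model's word-prediction performance before and after this amnesic intervention is what separates "encoded" from "used".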

