False memory in aging: Effects of emotional valence on word recognition accuracy.

2008 ◽  
Vol 23 (2) ◽  
pp. 307-314 ◽  
Author(s):  
Olivier Piguet ◽  
Emily Connally ◽  
Anne C. Krendl ◽  
Jessica R. Huot ◽  
Suzanne Corkin

2020 ◽  
Vol 5 (2) ◽  
pp. 504
Author(s):  
Matthias Omotayo Oladele ◽  
Temilola Morufat Adepoju ◽  
Olaide Abiodun Olatoke ◽  
Oluwaseun Adewale Ojo

Yorùbá is one of the three main languages spoken in Nigeria. It is a tonal language that carries accents on its vowels. The Yorùbá alphabet has twenty-five (25) letters, one of which is a digraph (GB). Because typing handwritten Yorùbá documents is difficult, there is a need for a handwriting recognition system that can convert handwritten text to digital format. This study discusses an offline Yorùbá handwritten word recognition system (OYHWR) that recognizes Yorùbá uppercase letters. Handwritten characters and words were obtained from different writers using the Paint application and M708 graphics tablets. The characters were used for training and the words for testing. The images were pre-processed, and geometric features were extracted using zoning and gradient-based feature extraction. Geometric features are the different line types that form a particular character, such as vertical, horizontal, and diagonal lines. The geometric features used were the number of horizontal, vertical, right-diagonal, and left-diagonal lines; the total lengths of all horizontal, vertical, right-slanting, and left-slanting lines; and the area of the skeleton. Each character was divided into 9 zones, and gradient feature extraction was used to extract the horizontal and vertical components and the geometric features in each zone. The words were fed into a support vector machine classifier, and performance was evaluated on recognition accuracy. Because the support vector machine is a two-class classifier, a multiclass variant, the least squares support vector machine (LSSVM), was used for word recognition. With the one-vs-one strategy and an RBF kernel, the recognition accuracies obtained for the tested words were 66.7%, 83.3%, 85.7%, 87.5%, and 100%. The low recognition rate for some words may result from similarity among their extracted features.
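As a rough illustration of the zoning step described in the abstract, the sketch below splits a binary character image into a 3×3 grid and sums horizontal and vertical gradient strength per zone. This is a minimal sketch under assumptions: the function name, the 3×3 grid, and the toy image are invented here, and the study's full geometric feature set (line counts, line lengths, skeleton area) is not reproduced.

```python
import numpy as np

def zone_features(img, zones=3):
    """Split a binary character image into zones x zones cells and
    extract per-zone horizontal/vertical gradient strength, as a
    rough analogue of zoning plus gradient-based feature extraction."""
    h, w = img.shape
    gy, gx = np.gradient(img.astype(float))  # vertical, horizontal components
    feats = []
    for i in range(zones):
        for j in range(zones):
            cell = (slice(i * h // zones, (i + 1) * h // zones),
                    slice(j * w // zones, (j + 1) * w // zones))
            feats.append(np.abs(gx[cell]).sum())  # horizontal gradient strength
            feats.append(np.abs(gy[cell]).sum())  # vertical gradient strength
    return np.array(feats)

img = np.zeros((9, 9))
img[:, 4] = 1  # a single vertical stroke
print(zone_features(img).shape)  # 9 zones x 2 components = 18 features
```

The resulting per-zone feature vector is the kind of fixed-length representation that could then be fed to an SVM-style classifier.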


2021 ◽  
pp. 174702182110564
Author(s):  
Jacob Namias ◽  
Mark Huff ◽  
Allison Smith ◽  
Nicholas Maxwell

We examined the effects of drawing on correct and false recognition within the Deese/Roediger-McDermott (DRM) false memory paradigm. In Experiment 1, we compared drawing of a word’s referent using either a standard black pencil or colored pencils relative to a read-only control group. Relative to reading, drawing in either black or colored pencil similarly boosted correct recognition and reduced false recognition. Signal-detection analyses indicated that drawing reduced the amount of encoded memory information for critical lures and increased monitoring, indicating that both processes contributed to the false recognition reduction. Experiment 2 compared drawing of individual images of DRM list items relative to drawing integrated images using sets of DRM list items. False recognition was lower for drawing of individual images relative to integrated images—a pattern that reflected a decrease in encoded memory information but not monitoring. Therefore, drawing individual images improves memory accuracy in the DRM paradigm relative to a standard read-control task and an integrated drawing task, which we argue is due to the recruitment of item-specific processing.
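The signal-detection quantities reported in this abstract can be computed from hit and false-alarm rates with the standard Gaussian inverse CDF; sensitivity indexes encoded memory information and the criterion indexes monitoring. A minimal sketch (the rates below are invented, not the study's data):

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response criterion c; higher values indicate more conservative,
    monitoring-like responding."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

# Invented example: 85% hits to studied items, 30% false alarms to lures
print(round(dprime(0.85, 0.30), 2))
print(round(criterion(0.85, 0.30), 2))
```

A reduction in false recognition driven by less encoded lure information would show up as lower lure sensitivity, whereas increased monitoring would show up as a stricter criterion.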


2005 ◽  
Vol 57 (3) ◽  
pp. 165-173 ◽  
Author(s):  
Midori Inaba ◽  
Michio Nomura ◽  
Hideki Ohira

2016 ◽  
Vol 27 (07) ◽  
pp. 567-587 ◽  
Author(s):  
Sin Tung Lau ◽  
M. Kathleen Pichora-Fuller ◽  
Karen Z. H. Li ◽  
Gurjit Singh ◽  
Jennifer L. Campos

Background: Most activities of daily living require the dynamic integration of sights, sounds, and movements as people navigate complex environments. Nevertheless, little is known about the effects of hearing loss (HL) or hearing aid (HA) use on listening during multitasking challenges. Purpose: The objective of the current study was to investigate the effect of age-related hearing loss (ARHL) on word recognition accuracy in a dual-task experiment. Virtual reality (VR) technologies in a specialized laboratory (Challenging Environment Assessment Laboratory) were used to produce a controlled and safe simulated environment for listening while walking. Research Design: In a simulation of a downtown street intersection, participants completed two single-task conditions, listening-only (standing stationary) and walking-only (walking on a treadmill to cross the simulated intersection with no speech presented), and a dual-task condition (listening while walking). For the listening task, they were required to recognize words spoken by a target talker when there was a competing talker. For some blocks of trials, the target talker was always located at 0° azimuth (100% probability condition); for other blocks, the target talker was more likely (60% of trials) to be located at the center (0° azimuth) and less likely (40% of trials) to be located at the left (270° azimuth). Study Sample: The participants were eight older adults with bilateral HL (mean age = 73.3 yr, standard deviation [SD] = 8.4; three males) who wore their own HAs during testing and eight controls with normal hearing (NH) thresholds (mean age = 69.9 yr, SD = 5.4; two males). No participant had clinically significant visual, cognitive, or mobility impairments. Data Collection and Analysis: Word recognition accuracy and kinematic parameters (head and trunk angles, step width and length, stride time, cadence) were analyzed using mixed factorial analyses of variance with group as a between-subjects factor. 
Task condition (single versus dual) and probability (100% versus 60%) were within-subjects factors. In analyses of the 60% listening condition, spatial expectation (likely versus unlikely) was a within-subjects factor. Differences between groups in age and baseline measures of hearing, mobility, and cognition were tested using t tests. Results: The NH group had significantly better word recognition accuracy than the HL group. Both groups performed better when the probability was higher and the target location more likely. For word recognition, dual-task costs for the HL group did not depend on condition, whereas the NH group demonstrated a surprising dual-task benefit in conditions with lower probability or spatial expectation. For the kinematic parameters, both groups demonstrated a more upright and less variable head position and more variable trunk position during dual-task conditions compared to the walking-only condition, suggesting that safe walking was prioritized. The HL group demonstrated more overall stride time variability than the NH group. Conclusions: This study provides new knowledge about the effects of ARHL, HA use, and aging on word recognition when individuals also perform a mobility-related task that is typically experienced in everyday life. This research may help inform the development of more effective function-based approaches to assessment and intervention for people who are hard of hearing.
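Dual-task costs of the kind analyzed here are commonly expressed as the proportional change from the single-task baseline, with negative values indicating a dual-task benefit like the one the NH group showed. A minimal sketch (the accuracy values below are invented, not the study's data):

```python
def dual_task_cost(single_acc, dual_acc):
    """Proportional dual-task cost: positive means performance dropped
    under dual-task load; negative indicates a dual-task benefit."""
    return (single_acc - dual_acc) / single_acc

# Invented accuracies for illustration
print(dual_task_cost(0.80, 0.72))  # positive: a cost
print(dual_task_cost(0.80, 0.86))  # negative: a benefit
```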


2018 ◽  
Author(s):  
Wilhelmiina Toivo ◽  
Christoph Scheepers

Late bilinguals often report less emotional involvement in their second language, a phenomenon called reduced emotional resonance in L2. The present study measured pupil dilation in response to high- versus low-arousing words (e.g., riot vs. swamp) in German-English and Finnish-English late bilinguals, both in their first and in their second language. A third sample of English monolingual speakers (tested only in English) served as a control group. To improve on previous research, we controlled for lexical confounds such as length, frequency, emotional valence, and abstractness – both within and across languages. Results showed no appreciable differences in post-trial word recognition judgements (98% recognition on average), but reliably stronger pupillary effects of the arousal manipulation when stimuli were presented in participants' first rather than second language. This supports the notion of reduced emotional resonance in L2. Our findings are unlikely to be due to differences in stimulus-specific control variables or to potential word-recognition difficulties in participants' second language. Linguistic relatedness between first and second language (German-English vs. Finnish-English) was also not found to have a modulating influence.
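The arousal manipulation described above is typically quantified per participant as the mean pupil dilation difference between high- and low-arousing words; reduced emotional resonance in L2 predicts a smaller difference in the second language. A minimal sketch with invented dilation values (the numbers are not from the study):

```python
from statistics import mean

def arousal_effect(high_arousal_dilation, low_arousal_dilation):
    """Per-participant pupillary arousal effect: mean dilation to
    high-arousing words minus mean dilation to low-arousing words."""
    return mean(high_arousal_dilation) - mean(low_arousal_dilation)

# Invented dilation values (arbitrary units) for one participant
l1 = arousal_effect([0.21, 0.24, 0.19], [0.12, 0.10, 0.14])  # first language
l2 = arousal_effect([0.15, 0.17, 0.14], [0.12, 0.11, 0.13])  # second language
print(l1 > l2)  # reduced emotional resonance predicts a smaller L2 effect
```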


2021 ◽  
Vol 3 (1) ◽  
pp. 68-83
Author(s):  
Wiqas Ghai ◽  
Navdeep Singh

Punjabi is a tonal language belonging to the Indo-Aryan language family and is spoken by a large number of speakers around the world. Punjabi has gained acceptance in media and communication and therefore deserves a place in the growing field of automatic speech recognition, which has already been explored successfully for a number of other Indian and foreign languages. Some work has been done on isolated word speech recognition for Punjabi, but only using whole-word acoustic models. A phone-based approach has yet to be applied to Punjabi speech recognition. This paper describes an automatic speech recognizer that recognizes isolated word speech and connected word speech using a triphone-based acoustic model on the HTK 3.4.1 speech engine, and compares its performance with a whole-word acoustic model based ASR system. Word recognition accuracy for isolated word speech was 92.05% with the whole-word acoustic model and 97.14% with the triphone acoustic model, whereas word recognition accuracy for connected word speech was 87.75% with the whole-word acoustic model and 91.62% with the triphone acoustic model.
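The triphone units such a recognizer models can be derived mechanically from a phone sequence: each phone is labeled with its left and right context, conventionally written `left-phone+right` in HTK. A minimal sketch of that expansion (the phone labels below are made up for illustration, not a Punjabi phone inventory):

```python
def to_triphones(phones):
    """Expand a phone sequence into HTK-style context-dependent
    triphone labels ('left-phone+right'); edge phones keep only
    the context that exists."""
    tris = []
    for i, p in enumerate(phones):
        left = phones[i - 1] if i > 0 else None
        right = phones[i + 1] if i < len(phones) - 1 else None
        label = p
        if left:
            label = f"{left}-{label}"
        if right:
            label = f"{label}+{right}"
        tris.append(label)
    return tris

print(to_triphones(["p", "a", "n", "j"]))  # ['p+a', 'p-a+n', 'a-n+j', 'n-j']
```

Modeling these context-dependent units rather than whole words is what lets a triphone system generalize to words unseen in training.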


2019 ◽  
Vol 28 (3S) ◽  
pp. 742-755 ◽  
Author(s):  
Annalise Fletcher ◽  
Megan McAuliffe ◽  
Sarah Kerr ◽  
Donal Sinex

Purpose This study aims to examine the combined influence of vocabulary knowledge and statistical properties of language on speech recognition in adverse listening conditions. Furthermore, it aims to determine whether any effects identified are more salient at particular levels of signal degradation. Method One hundred three young healthy listeners transcribed phrases presented at four different signal-to-noise ratios; transcriptions were coded for recognition accuracy. Participants also completed tests of hearing acuity, vocabulary knowledge, nonverbal intelligence, processing speed, and working memory. Results Vocabulary knowledge and working memory demonstrated independent effects on word recognition accuracy when controlling for hearing acuity, nonverbal intelligence, and processing speed. These effects were strongest at the same moderate level of signal degradation. Although listener variables were statistically significant, their effects were subtle in comparison to the influence of word frequency and phonological content. These language-based factors had large effects on word recognition at all signal-to-noise ratios. Discussion Language experience and working memory may have complementary effects on accurate word recognition. However, adequate glimpses of acoustic information appear necessary for speakers to leverage vocabulary knowledge when processing speech in adverse conditions.
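The dependent measure in this design, proportion of words correctly transcribed at each signal-to-noise ratio, can be tabulated from scored trials. A minimal sketch (the trial data below are invented, not the study's):

```python
from collections import defaultdict

def accuracy_by_snr(trials):
    """Group scored transcription trials (snr_db, correct) and return
    the proportion correct per signal-to-noise ratio."""
    totals, hits = defaultdict(int), defaultdict(int)
    for snr, correct in trials:
        totals[snr] += 1
        hits[snr] += int(correct)
    return {snr: hits[snr] / totals[snr] for snr in sorted(totals)}

# Invented scored trials: (SNR in dB, transcribed correctly?)
trials = [(-5, False), (-5, False), (0, True), (0, False), (5, True), (5, True)]
print(accuracy_by_snr(trials))  # {-5: 0.0, 0: 0.5, 5: 1.0}
```

Per-SNR accuracies like these are what listener variables (vocabulary, working memory) and item variables (word frequency, phonological content) would then be regressed against.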

