The Effect of Learning Modality and Auditory Feedback on Word Memory: Cochlear-Implanted versus Normal-Hearing Adults

2017, Vol. 28(3), pp. 222-231
Author(s):
Riki Taitelbaum-Swead
Michal Icht
Yaniv Mama

In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 years, and in their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice, once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users who were implanted between the ages of 1.7 and 4.5 years and who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Paired-sample t tests were then used to evaluate the PE size (the difference between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each learning condition. With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed memory performance (and a PE) comparable to that of their NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (and hence a larger PE) relative to their NH peers. The results support the view that young adults with CIs benefit more from learning via the visual modality (reading) than via the auditory modality (listening). Importantly, vocal production can substantially improve auditory word memory, especially for the CI group.
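The analysis described above (proportion of words recalled per condition, a mixed ANOVA with one between-subjects and one within-subject factor, followed by paired t tests for the PE size) can be sketched in Python. The snippet below is an illustrative outline only, not the authors' analysis code; the file name and the columns subject, group, condition, and recall are hypothetical, and it assumes the pingouin and scipy libraries.

```python
# Illustrative sketch only -- not the authors' analysis code.
# Assumes a long-format table with hypothetical columns:
#   subject   participant ID
#   group     "NH" or "CI" (between-subjects)
#   condition "aloud" or "silent" (within-subject)
#   recall    proportion of study words recalled (0-1)
import pandas as pd
import pingouin as pg
from scipy import stats

df = pd.read_csv("recall_long_format.csv")  # hypothetical file name

# Mixed-measures ANOVA: group (between) x learning condition (within)
aov = pg.mixed_anova(data=df, dv="recall", within="condition",
                     subject="subject", between="group")
print(aov)

# Production-effect size per group: paired t test, aloud vs. silent
for grp, sub in df.groupby("group"):
    wide = sub.pivot(index="subject", columns="condition", values="recall")
    t, p = stats.ttest_rel(wide["aloud"], wide["silent"])
    pe = (wide["aloud"] - wide["silent"]).mean()
    print(f"{grp}: mean PE = {pe:.3f}, t = {t:.2f}, p = {p:.4f}")
```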

2020, Vol. 63(11), pp. 3865-3876
Author(s):
Michal Icht
Yaniv Mama
Riki Taitelbaum-Swead

Purpose The aim of this study was to test whether older postlingually deafened cochlear implant users (OCIs) use verbal memory strategies similar to those used by older normal-hearing adults (ONHs). Verbal memory functioning was assessed in the visual and auditory modalities separately, enabling us to eliminate possible modality-based biases. Method Participants performed two separate visual and auditory verbal memory tasks. In each task, the visually or aurally presented study words were learned by vocal production (saying aloud) or by no production (reading silently or listening), followed by a free recall test. Twenty-seven older adults (> 60 years) participated (OCI = 13, ONH = 14), all of whom demonstrated intact cognitive abilities. All OCIs showed good open-set speech perception results in quiet. Results Both ONHs and OCIs showed production benefits (higher recall rates for vocalized than for nonvocalized words) in the visual and auditory tasks. The ONHs showed similar production benefits in the visual and auditory tasks. The OCIs demonstrated a smaller production effect in the auditory task. Conclusions These results may indicate that different modality-specific memory strategies were used by the ONHs and the OCIs. The group differences in memory performance suggest that, even when deafness occurs after the completion of language acquisition, the reduced and distorted external auditory stimulation leads to a deterioration in the phonological representation of sounds. Possibly, this deterioration leads to a less efficient auditory long-term verbal memory.


Gerontology, 2021, pp. 1-9
Author(s):
Michal Icht
Riki Taitelbaum-Swead
Yaniv Mama

Introduction: The production effect refers to memory benefits for materials that were produced (e.g., read aloud) relative to those not produced (e.g., read silently) at study. Previous work has found a production benefit for younger and older adults studying written words and for young adults studying written text. The present study aimed to extend these findings by examining the effect of production on text memory in younger and older adults, in the visual and in the auditory modality. Methods: A group of young adults (n = 30) and a group of older adults (n = 30) learned informational texts, presented either visually or aurally. In each text, half of the sentences were learned by production (reading aloud or writing) and half by no production (reading silently or listening), followed by fill-in-the-blank tests. Results: Overall memory performance was similar for both groups, with an advantage for the auditory modality. For both groups, more test items were filled in correctly when the relevant information appeared in produced rather than in nonproduced sentences, showing the learners' ability to use distinctiveness information. The production effects were larger for older than for younger adults, in both modalities. Discussion: Since older adults increasingly engage in learning, it is important to develop high-quality structured learning programs for this population. The current results demonstrate the preserved ability of older adults to successfully memorize texts and may guide the planning of such programs. Specifically, since learning via the auditory modality yields superior performance for learners across age groups, it may be recommended for text learning. Because older adults showed larger benefits from active production of the study material, production may be used to better remember educationally relevant material.


2018, Vol. 29(10), pp. 875-884
Author(s):
Riki Taitelbaum Swead
Yaniv Mama
Michal Icht

The production effect (PE) is a memory phenomenon referring to better memory for produced (vocalized) than for nonproduced (silently read) items. Reading aloud has been found to improve verbal memory for normal-hearing individuals as well as for cochlear implant users, studying both visually and aurally presented material. The present study tested the effect of presentation mode (written or signed) and production type (vocalization or signing) on word memory in a group of hearing-impaired young adults who use sign language. A PE paradigm was used, in which participants learned lexical items in two presentation modes, written or signed. We evaluated the efficacy of two types of production, vocalization and signing, using a free recall test. Twenty hearing-impaired young adults, Israeli Sign Language (ISL) users, participated in the study: ten individuals who mainly use manual communication (MC) (ISL as a first language) and ten who mainly use total communication (TC). For each condition, we calculated the proportion of study words recalled. A mixed-design analysis of variance was conducted, with learning condition (written-vocalize, written-signed, and manual-signed) and production type (production and no production) as within-subject variables, and group (MC and TC) as a between-subject variable. A production benefit was documented across all learning conditions, with better memory for produced than for nonproduced words. Recall rates were higher for written words than for signed words. Production by signing yielded better memory than production by vocalizing. The results are explained in light of the encoding-distinctiveness account: the larger the number of unique encoding processes involved at study, the greater the memory benefit.


2019
Author(s):
Olivier White
Marie Barbiero
Quentin Maréchal
Jean-Jacques Orban de Xivry

Successful completion of natural motor actions relies on feedback information delivered through different modalities, including vision and audition. The nervous system weights these sensory inflows according to the context, and they contribute to the calibration and maintenance of internal models. Surprisingly, the influence of auditory feedback on the control of fine motor actions has only scarcely been investigated, either alone or together with visual feedback. Here, we tested how 46 participants learned a reaching task when they were provided with visual, auditory, or both types of feedback about terminal error. In the VA condition, participants received visual (V) feedback during learning and auditory (A) feedback during relearning. The AV group received the opposite treatment. A third group received visual and auditory feedback in both learning periods. This experimental design allowed us to assess how learning with one modality transferred to relearning in another modality. We found that adaptation was strong in the visual modality, both during learning and relearning. It was absent when the learning period was performed under the auditory modality, but present in the relearning period when the learning period had been performed with visual feedback. An additional experiment suggests that transfer of adaptation between the visual and auditory modalities occurs through a memory of the learned reaching direction that acts as an attractor for the reaching direction, and not via error-based mechanisms or an explicit strategy. This memory of the learned reaching direction allowed participants to learn a task that they could not otherwise learn, independently of any memory of errors or explicit strategy.


1970, Vol. 11(4), pp. 348-365
Author(s):
Jin Sook Kim
Eun Bith Cho
Sun Mi Ma
Yeon Kyoung Vark
Ji Eun Yoon

2021, Vol. 11(1)
Author(s):
Candice Frances
Eugenia Navarra-Barindelli
Clara D. Martin

Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality, as well as the interplay between type of similarity and modality, remains largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. The results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that the representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into current processing models.
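Signal-detection performance in a yes/no lexical decision task of this kind is typically summarized with d′, computed from hit and false-alarm rates. The snippet below is a generic, illustrative computation only, not taken from the study; the function name and the example counts are hypothetical.

```python
# Generic d-prime computation for a yes/no lexical decision task.
# Illustrative only; not the study's analysis code.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates never reach exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one participant in one condition:
print(d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42))
```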


Author(s):  
Aaron Crowson
Zachary H. Pugh
Michael Wilkinson
Christopher B. Mayhorn

The development of head-mounted display virtual reality systems (e.g., Oculus Rift, HTC Vive) has resulted in an increasing need to represent the physical world while immersed in the virtual one. Current research has focused on representing static objects in the physical room, but there has been little research into notifying VR users of changes in the environment. This study investigates how different sensory modalities affect the noticeability and comprehension of notifications designed to alert head-mounted display users when a person enters his/her area of use. In addition, this study investigates how an orientation-type notification aids perception of alerts that manifest outside a virtual reality user's visual field. Results of a survey indicated that participants perceived the auditory modality as more effective regardless of notification type. An experiment corroborated these findings for the person notifications; however, the visual modality was in practice more effective for orientation notifications.


2014, Vol. 18(3), pp. 490-501
Author(s):
Roberto Filippi
John Morris
Fiona M. Richardson
Peter Bright
Michael S.C. Thomas
...

Studies measuring inhibitory control in the visual modality have shown a bilingual advantage in both children and adults. However, there is a lack of developmental research on inhibitory control in the auditory modality. This study compared the comprehension of active and passive English sentences in 7- to 10-year-old bilingual and monolingual children. The task was to identify the agent of a sentence in the presence of verbal interference. The target sentence was cued by the gender of the speaker. Children were instructed to focus on the sentence in the target voice and ignore the distractor sentence. Results indicate that bilinguals are more accurate than monolinguals in comprehending syntactically complex sentences in the presence of linguistic noise. This supports previous findings with adult participants (Filippi, Leech, Thomas, Green & Dick, 2012). We therefore conclude that the bilingual advantage in interference control begins early in life and is maintained throughout development.


1981, Vol. 24(3), pp. 351-357
Author(s):
Paula Tallal
Rachel Stark
Clayton Kallman
David Mellits

A battery of nonverbal perceptual and memory tests was given to 35 language-impaired (LI) and 38 control subjects. Tests were given in three modalities: auditory, visual, and cross-modal (auditory and visual). The purpose was to reexamine some nonverbal perceptual and memory abilities of LI children as a function of age and modality of stimulation. Results failed to replicate previous findings of a temporal processing deficit specific to the auditory modality in LI children. The LI group made significantly more errors than controls, regardless of modality of stimulation, when two-item sequences were presented rapidly or when more than two stimuli were presented in series. However, further analyses resolved this apparent conflict between the present and earlier studies by demonstrating that age is an important variable underlying the modality specificity of perceptual performance in LI children. Whereas younger LI children were equally impaired when responding to stimuli presented rapidly in the auditory and visual modalities, older LI subjects made nearly twice as many errors when responding to rapidly presented auditory stimuli than to visual stimuli. This developmental difference did not occur for the control group.

