phonological information
Recently Published Documents

TOTAL DOCUMENTS: 150 (five years: 32)
H-INDEX: 29 (five years: 2)

Author(s): Gareth J. Williams, Rebecca F. Larkin, Naomi V. Rose, Emily Whitaker, Jens Roeser, ...

Purpose: This study investigated orthographic knowledge in children with developmental language disorder (DLD) and how orthographic and phonological information could support them in making more accurate spelling attempts. Method: Children with DLD (N = 37) were matched with chronological age-matched (CAM) children and with language age-matched children. All children completed specific and general orthographic knowledge tasks, as well as spelling task conditions with either no clue word (pretest), a phonological clue word, or an orthographic clue word. Results: Children with DLD were significantly less accurate in their specific orthographic knowledge than CAM children but had general orthographic knowledge scores similar to those of CAM children. Children with DLD and both control groups had significantly higher spelling scores in the orthographic clue word condition than in the pretest pseudoword spelling task. Conclusions: Children with DLD acquire general knowledge of a written language's orthography but, possibly because of less print exposure, have less well-represented word-specific orthographic knowledge. Moreover, children with DLD are able to extract the orthographic features of a clue word and use them to produce more accurate spellings. These findings support a spelling intervention approach based on orthography.


Symmetry, 2021, Vol. 13(9), p. 1655
Author(s): Miguel Ángel Rivas-Fernández, Benxamín Varela-López, Susana Cid-Fernández, Santiago Galdo-Álvarez

Language is a paradigm of structural and functional asymmetry in cognitive processing, and the left inferior frontal gyrus has been consistently related to speech production. In fact, it has been considered a key node in the cortical networks responsible for different components of naming. However, isolating these components (e.g., lexical, syntactic, and phonological retrieval) in neuroimaging studies is difficult owing to the use of different baselines and tasks. In the present study, functional activation and connectivity of the left inferior frontal gyrus were explored using functional magnetic resonance imaging. Participants performed a covert naming task (pressing a button based on a phonological characteristic) with two conditions: drawings of objects and single letters (baseline condition). Differences in activation and functional connectivity between objects and letters were obtained in different areas of the left inferior frontal gyrus. The pars triangularis was involved in the retrieval of lexical-phonological information, showing a pattern of connectivity with temporal areas in the search for the names of objects and with perisylvian areas for letters. Selection of phonological information seems to involve the pars opercularis for both letters and objects, but recruits supramarginal and superior temporal areas for letters, probably reflecting orthographic-phonological conversion. The results support the notion of the left inferior frontal gyrus as a buffer forwarding neural information across the cortical networks responsible for different components of speech production.


Author(s):  
Jie Zhang ◽  
Hong Li ◽  
Yang Liu

Abstract: The present study investigated the effects of exposure to Chinese orthography on learning the phonological forms of new words in learners of Chinese as a second language. A total of 30 adult learners of Chinese studied spoken label–picture associations presented either with phonologically accurate characters, with characters carrying only partial phonological information, or with no orthography. Half of the phonologically accurate and partially accurate characters were semantically transparent, and half were semantically opaque. Spoken labels were recalled without orthography present. Results showed that exposure to phonologically accurate, semantically transparent characters during learning did not enhance recall of the spoken labels compared with no orthography, whereas exposure to characters with partial phonological information and to semantically opaque characters significantly hindered vocabulary learning. The implications for vocabulary acquisition and instruction in Chinese as a second language are discussed.


Author(s):  
Pavel Rudnev ◽  
Anna Kuznetsova

Abstract: This squib documents exceptions to the main strategy of expressing sentential negation in Russian Sign Language (RSL). The postverbal sentential negation particle in RSL inverts the basic SVO order characteristic of the language, turning it into SOV (Pasalskaya 2018a). We show that this reversal requirement under negation is not absolute and does not apply to prosodically heavy object NPs. The resulting picture accords well with the view of RSL word order laid out by Kimmelman (2012) and supports a model of grammar in which syntactic computation has access to phonological information (Kremers 2014; Bruening 2019).


2021
Author(s): Nina Suess, Anne Hauswald, Patrick Reisinger, Sebastian Rösch, Anne Keitel, ...

Abstract: The integration of visual and auditory cues is crucial for successful processing of speech, especially under adverse conditions. Recent reports have shown that when participants watch muted videos of speakers, phonological information about the acoustic speech envelope is tracked by the visual cortex. However, the speech signal also carries much richer acoustic detail, e.g. about the fundamental frequency and the resonant frequencies, whose visuo-phonological transformation could aid speech processing. Here, we investigated the neural basis of the visuo-phonological transformation of these finer-grained acoustic details and assessed how it changes with ageing. We recorded whole-head magnetoencephalography (MEG) data while participants watched silent intelligible and unintelligible videos of a speaker. We found that the visual cortex is able to track the unheard intelligible modulations of the resonant frequencies and of the pitch linked to lip movements. Importantly, only the processing of intelligible unheard formants decreases significantly with age, in the visual and also in the cingulate cortex. This is not the case for the processing of the unheard speech envelope, the fundamental frequency, or the purely visual information carried by lip movements. These results show that unheard spectral fine details (along with the unheard acoustic envelope) are transformed from a merely visual to a phonological representation, and that ageing especially affects the ability to derive spectral dynamics at formant frequencies. Since listening in noisy environments capitalizes on the ability to track spectral fine details, our results provide a novel focus on compensatory processes in such challenging situations.
Significance statement: The multisensory integration of speech cues from the visual and auditory modalities is crucial for optimal speech perception in noisy environments and for elderly individuals with progressive hearing loss. It has already been shown that the visual cortex is able to extract global acoustic information, such as amplitude modulations, from silent visual speech, but whether this extends to fine-detailed spectral acoustic information has remained unclear. Here, we demonstrate that the visual cortex is indeed able to extract fine-detailed phonological cues just from watching silent lip movements, and that this tracking of acoustic fine details deteriorates with age. These results suggest that the human brain is able to transform visual information into useful phonological information, and that this process may be crucially affected in ageing individuals.


2021, Vol. 12
Author(s): Jiaqiang Zhu, Xiaoxiang Chen, Yuxiao Yang

That music impacts speech processing is vividly evidenced in reports involving professional musicians, but whether the facilitative effects of music are limited to experts or extend to amateurs remains to be resolved. Previous research has suggested that, analogous to language experience, musicianship also modulates lexical tone perception, but the influence of amateur musical experience acquired in adulthood is poorly understood. Furthermore, little is known about how the acoustic and phonological information of lexical tones is processed by amateur musicians. This study aimed to provide neural evidence of cortical plasticity by examining categorical perception of lexical tones in Chinese adults with amateur musical experience relative to non-musician counterparts. Fifteen adult Chinese amateur musicians and an equal number of non-musicians participated in an event-related potential (ERP) experiment. Their mismatch negativities (MMNs) to lexical tones from a Mandarin Tone 2–Tone 4 continuum and to non-speech tone analogs were measured. It was hypothesized that amateur musicians would exhibit MMNs different from those of their non-musician counterparts when processing these two aspects of information in lexical tones. Results showed that the MMN mean amplitude evoked by within-category deviants was significantly larger for amateur musicians than for non-musicians in both the speech and non-speech conditions. This implies strengthened processing of acoustic information by adult amateur musicians without the need for focused attention, as the detection of subtle acoustic nuances of pitch was measurably improved. In addition, the MMN peak latency elicited by across-category deviants was significantly shorter than that elicited by within-category deviants for both groups, indicative of earlier processing of phonological than acoustic information in lexical tones at the pre-attentive stage.
These results suggest that cortical plasticity can still be induced in adulthood and, hence, that non-musicians should be defined more strictly than before. The current study also enlarges the population shown to benefit from musical experience in perceptual and cognitive functions: the enhancement of speech processing by music is not confined to a small group of experts but extends to a large population of amateurs.


Author(s):  
Devi Ratna Safitri ◽  
Ulfatul Ma’rifah

A positive relationship exists between writing literacy and phonological skills in hearing-impaired students, for whom the use of phonological information in writing is important for producing a good composition; this is a recognized problem for hearing-impaired students. Previous studies indicate that interactive writing instruction can be used for teaching writing, especially to hearing-impaired students, and suggest that it is more effective when the researcher provides visual aids, such as flash cards, as access to phonological information in visual form. This study was designed to investigate the effect of interactive writing instruction on hearing-impaired students' writing skill. The study used a pre-experimental, one-group pretest-posttest design, because only one group was treated; there was no control group. The participants were first-grade students at SMALB-B Kemala Bhayangkari 2 Gresik. Data were collected through a pretest and a posttest on an announcement text and analyzed with SPSS 16.0 using the Wilcoxon signed-rank test. The sig. (2-tailed) value was .046, which is lower than 0.05 (0.046 < 0.05), so H0 was rejected at the 5% level. There was thus enough evidence to conclude that interactive writing instruction significantly changed the writing skill of the 5 hearing-impaired students. The researcher therefore suggests that English teachers implement interactive writing instruction as an alternative strategy in English teaching, and that further research apply interactive writing instruction to other skills and levels.
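The decision rule behind the result above (reject H0 when the two-tailed p-value falls below α = 0.05) can be sketched with an exact Wilcoxon signed-rank test in pure Python. Note that SPSS reports a normal-approximation p-value, which is presumably what produced the .046; the exact enumeration below can differ for very small samples. The pre/post scores here are hypothetical, not the study's data.

```python
from itertools import product

def wilcoxon_signed_rank(pre, post, alpha=0.05):
    """Exact two-sided Wilcoxon signed-rank test on paired scores.

    Returns (p_value, reject_h0). Enumerates all 2^n sign assignments,
    so it is only practical for small samples (n <= ~15).
    """
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero differences
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    total_rank = sum(ranks)
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_obs = min(w_plus, total_rank - w_plus)
    # Exact null distribution: each difference is + or - with probability 1/2.
    hits = sum(1 for signs in product((0, 1), repeat=n)
               if min(w := sum(r for s, r in zip(signs, ranks) if s),
                      total_rank - w) <= w_obs)
    p = hits / 2 ** n
    return p, p < alpha

# Hypothetical pre/post writing scores for 8 students (not the study's data).
pre = [40, 55, 38, 62, 50, 45, 58, 47]
post = [52, 63, 48, 69, 61, 58, 67, 61]
p, reject = wilcoxon_signed_rank(pre, post)
print(f"p = {p:.4f}, reject H0 at alpha = .05: {reject}")
```

Because every hypothetical student improved, the minimum rank sum is 0 and the exact two-tailed p is 2/2^8 ≈ 0.0078, so H0 is rejected.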


2021, pp. 1-14
Author(s): Chen Feng, Markus F. Damian, Qingqing Qu

Spoken language production involves lexical-semantic access and phonological encoding. A theoretically important question concerns the relative time course of these two cognitive processes. The predominant view has been that semantic and phonological codes are accessed in successive stages. However, recent evidence seems difficult to reconcile with a sequential view but rather suggests that both types of codes are accessed in parallel. Here, we used ERPs combined with the “blocked cyclic naming paradigm” in which items overlapped either semantically or phonologically. Behaviorally, both semantic and phonological overlap caused interference relative to unrelated baseline conditions. Crucially, ERP data demonstrated that the semantic and phonological effects emerged at a similar latency (∼180 msec after picture onset) and within a similar time window (180–380 msec). These findings suggest that access to phonological information takes place at a relatively early stage during spoken planning, largely in parallel with semantic processing.


2020, pp. 026565902096076
Author(s): Alycia Cummings, Kristen Giesbrecht, Janet Hallgrimson

This study examined how intervention dose frequency affects phonological acquisition and generalization in preschool children with speech sound disorders (SSD). Using a multiple-baseline, single-participant experimental design, eight English-speaking children with SSD (ages 4;0 to 5;6) were split into two dose frequency conditions (4 children/condition) targeting the word-initial complex singleton phonemes /ɹ l ʧ/. All children received twenty 50-minute sessions, provided either twice a week (2×/week) for ten weeks or four times a week (4×/week) for five weeks. Tau-U effect sizes for two generalization measures, treated phoneme accuracy and percent consonants correct (PCC), were calculated for each participant, and group d-scores were calculated to measure generalization of the treated phoneme to untreated words in each condition. All eight children demonstrated gains on the phonological measures. Two children in the 2×/week condition demonstrated significant changes in generalization of treated phonemes to untreated words, and one child in each condition demonstrated significant changes in PCC scores. Group d-scores were similar, suggesting that children in both conditions generalized their treated phoneme to untreated words to a similar degree. Regardless of whether speech intervention occurred 2×/week or 4×/week, children demonstrated similar phonological gains, which suggests that both dose frequencies are viable intervention schedules for preschoolers with SSD. Children in the 4×/week condition, however, made their phonological gains in approximately half the time of children in the 2×/week condition; thus, more frequent weekly intervention sessions could be more efficient in teaching phonological information than less frequent ones.
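Percent consonants correct (PCC), one of the generalization measures above, is conventionally the number of correctly produced consonants divided by the total consonants attempted, times 100. A minimal sketch, assuming pre-aligned orthographic transcriptions as a stand-in (real PCC scoring uses phonetic transcription and handles insertions and deletions); the sample productions are invented for illustration:

```python
CONSONANTS = set("bcdfghjklmnpqrstvwxz")  # toy stand-in for a phonetic consonant inventory

def pcc(pairs):
    """Percent consonants correct over (target, produced) transcription pairs.

    Assumes each pair is already position-aligned; a consonant slot in the
    target is scored correct when the produced segment matches it exactly.
    """
    correct = total = 0
    for target, produced in pairs:
        for t, p in zip(target, produced):
            if t in CONSONANTS:
                total += 1
                correct += (p == t)
    return 100.0 * correct / total

# Hypothetical productions from a child who produces /r/ and /l/ as [w].
samples = [("red", "wed"), ("lip", "wip"), ("sun", "sun")]
print(f"PCC = {pcc(samples):.1f}%")
```

Here 4 of the 6 target consonants are produced correctly, giving a PCC of about 66.7%.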


2020, Vol. 18(2), pp. 350-371
Author(s): Sara Feijoo

Abstract: One of the most important tasks for language-learning children is identifying the grammatical category to which words belong, which is essential for forming grammatically correct utterances. The present study investigates how phonological information might help English-learning infants categorize nouns. We analyzed four corpora of English child-directed speech to explore how reliably nouns in mothers' speech are marked by several phonological criteria. The results confirm the prediction that most of the nouns to which English-learning children are exposed share several phonological characteristics, which would allow their early classification into the same grammatical category.
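The corpus analysis described above amounts to counting how many nouns in tagged child-directed speech satisfy candidate phonological criteria. A toy sketch of that kind of tally; the mini corpus, the letter-based syllable counter, and both criteria are invented for illustration and are not the study's actual criteria or data:

```python
def count_syllables(word):
    """Crude syllable estimate: runs of vowel letters (a rough orthographic
    proxy for phonology, good enough for a toy tally)."""
    vowels = "aeiou"
    count, prev = 0, False
    for ch in word.lower():
        is_vowel = ch in vowels
        if is_vowel and not prev:
            count += 1
        prev = is_vowel
    return max(count, 1)

# Hypothetical child-directed-speech sample, tagged noun (N) / other (O).
corpus = [("doggy", "N"), ("ball", "N"), ("look", "O"), ("the", "O"),
          ("bunny", "N"), ("go", "O"), ("bottle", "N"), ("milk", "N")]

nouns = [w for w, tag in corpus if tag == "N"]
short = sum(1 for w in nouns if count_syllables(w) <= 2)
final_cons = sum(1 for w in nouns if w[-1].lower() not in "aeiou")
print(f"{short}/{len(nouns)} nouns are 1-2 syllables; "
      f"{final_cons}/{len(nouns)} end in a consonant letter")
```

With enough such criteria tallied over real corpora, one can ask whether nouns cluster reliably enough in phonological space to support early categorization.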

