perceptual identification
Recently Published Documents


TOTAL DOCUMENTS: 157 (last five years: 20)
H-INDEX: 26 (last five years: 1)

2021 ◽  
pp. 1-22
Author(s):  
Shaylyn Kress ◽  
Josh Neudorf ◽  
Chelsea Ekstrand ◽  
Ron Borowsky


In the two-alternative forced-choice (2AFC) task, the target stimulus is presented very briefly, and participants must choose which of two options was the presented target. Some past research (Grossi et al., 2009; Haro et al., 2019) has assumed that the 2AFC word identification task isolates orthographic effects, despite orthographic, semantic, and phonological differences between the alternative options. If so, performance should not differ between word target/nonword foil pairs and British/American word pairs, the latter of which differ only orthographically. In Experiment 1, accuracy and sensitivity were higher for word/nonword trials than for British/American trials when participants stated their response was not a guess, demonstrating that phonological/semantic processing contributes to 2AFC performance. In Experiment 2, target visibility was manipulated by increasing the contrast between target and mask for half the trials. Target visibility did not interact with pair type on reaction time, which suggests that phonological/semantic processing did not feed back to orthographic encoding in this task. This study demonstrates the influence of phonological/semantic processing on word perceptual identification and shows that 2AFC word identification does not isolate orthographic effects when word/nonword pairs are used, whereas British/American word pairs provide a method for doing so. Implications for models and future research are discussed.
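For readers unfamiliar with the sensitivity measure mentioned in this abstract, a minimal Python sketch follows. It uses the standard signal-detection relation for 2AFC designs (d′ = √2 · z(Pc)) with invented trial counts; it is an illustration of the measure, not the authors' analysis code.

```python
# Illustrative sketch (not the authors' analysis code): estimating 2AFC
# sensitivity from proportion correct via the standard signal-detection
# relation d' = sqrt(2) * z(Pc).
import numpy as np
from scipy.stats import norm

def two_afc_dprime(n_correct: int, n_trials: int) -> float:
    """Return d' for a 2AFC block, with a small correction to avoid z(0)/z(1)."""
    pc = (n_correct + 0.5) / (n_trials + 1.0)  # keep Pc away from 0 and 1
    return float(np.sqrt(2) * norm.ppf(pc))

# Hypothetical counts for the two pair types (word/nonword vs. British/American).
print(two_afc_dprime(n_correct=42, n_trials=48))  # word/nonword trials
print(two_afc_dprime(n_correct=35, n_trials=48))  # British/American trials
```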


2021 ◽  
Vol 15 ◽  
Author(s):  
Hung-Shao Cheng ◽  
Caroline A. Niziolek ◽  
Adam Buchwald ◽  
Tara McAllister

Several studies have demonstrated that individuals' ability to perceive a speech sound contrast is related to how they produce that contrast in their native language. The theoretical account for this relationship is that speech perception and production share a multimodal representation in the relevant sensory spaces (e.g., the auditory and somatosensory domains). This gives rise to the prediction that individuals with more narrowly defined targets will produce greater separation between contrasting sounds, as well as lower variability in the production of each sound. However, empirical studies that have tested this hypothesis, particularly with regard to variability, have reported mixed outcomes. The current study investigates the relationship between perceptual ability and production ability, focusing on the auditory domain. We examined whether individuals' categorical labeling consistency for the American English /ε/–/æ/ contrast, measured with a perceptual identification task, is related to the distance between the centroids of the vowel categories in acoustic space (i.e., vowel contrast distance) and to two measures of production variability: the overall distribution of repeated tokens for each vowel (i.e., area of the ellipse) and the proportional within-trial decrease in variability, defined as the magnitude of self-correction relative to the initial acoustic variation of each token (i.e., centering ratio). No significant associations were found between categorical labeling consistency and vowel contrast distance, area of the ellipse, or centering ratio. These null results suggest that the perception–production relation may not be as robust as suggested by a widely adopted theoretical framing in terms of the size of auditory target regions. However, the present results may also be attributable to implementation choices (e.g., the use of model talkers instead of continua derived from the participants' own productions) that should be subject to further investigation.
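Two of the measures named above, vowel contrast distance and area of the ellipse, can be illustrated with a short sketch. The operationalizations and the formant values below are assumptions for illustration, not the authors' pipeline; the centering ratio is omitted because its exact computation is specific to that study.

```python
# Illustrative sketch (assumed operationalizations): vowel contrast distance as
# the Euclidean distance between category centroids in F1 x F2 space, and
# production variability as the area of a 95% confidence ellipse fitted to a
# vowel's repeated tokens.
import numpy as np
from scipy.stats import chi2

def contrast_distance(tokens_a: np.ndarray, tokens_b: np.ndarray) -> float:
    """Distance between the centroids of two (n_tokens, 2) formant arrays."""
    return float(np.linalg.norm(tokens_a.mean(axis=0) - tokens_b.mean(axis=0)))

def ellipse_area(tokens: np.ndarray, coverage: float = 0.95) -> float:
    """Area of the confidence ellipse enclosing `coverage` of a 2-D Gaussian fit."""
    eigvals = np.linalg.eigvalsh(np.cov(tokens, rowvar=False))
    scale = chi2.ppf(coverage, df=2)          # squared Mahalanobis radius
    return float(np.pi * scale * np.sqrt(eigvals.prod()))

# Hypothetical (F1, F2) tokens in Hz for /ε/ and /æ/ produced by one speaker.
eh = np.random.default_rng(0).normal([600, 1800], [40, 90], size=(30, 2))
ae = np.random.default_rng(1).normal([750, 1700], [40, 90], size=(30, 2))
print(contrast_distance(eh, ae), ellipse_area(eh), ellipse_area(ae))
```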


2021 ◽  
Author(s):  
Alice Hodapp ◽  
Milena Rabovsky

The functional significance of the N400 ERP component is still actively debated. Based on neural network modeling, it was recently proposed that the N400 can be interpreted as the change in a probabilistic representation of meaning, corresponding to an internal temporal-difference prediction error that drives adaptation in language processing. These computational modeling results imply that larger N400 amplitudes should correspond to greater adaptation. To investigate this model-derived hypothesis, the current study manipulated expectancy in a sentence reading task, which influenced N400 amplitudes and, critically, also later implicit memory for the manipulated word: reaction times in a perceptual identification task were significantly faster for previously unexpected words. Additionally, this adaptation appeared to depend specifically on the process underlying N400 amplitudes, as participants with larger N400 differences also exhibited a larger implicit memory benefit. These findings support the interpretation of the N400 as an implicit learning signal in language processing.
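A minimal sketch of the kind of brain-behaviour association reported above: a per-participant N400 expectancy effect correlated with the later identification-RT benefit. The data are simulated and the variable names are invented; the authors' actual analysis may well differ (e.g., mixed-effects models rather than a simple correlation).

```python
# Simulated illustration of relating per-participant N400 differences to the
# implicit memory (RT) benefit; hypothetical data, not the study's results.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_participants = 40

# N400 expectancy effect: amplitude(unexpected) - amplitude(expected), in µV
# (more negative = larger N400 difference).
n400_diff = rng.normal(-2.0, 1.0, n_participants)

# Implicit memory benefit in the later perceptual identification task:
# RT(previously expected) - RT(previously unexpected), in ms.
memory_benefit = -10 * n400_diff + rng.normal(0, 15, n_participants)

r, p = pearsonr(n400_diff, memory_benefit)
print(f"r = {r:.2f}, p = {p:.3f}")  # larger N400 differences ~ larger RT benefit
```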


Author(s):  
Cristina Lozano-Argüelles ◽  
Laura Fernández Arroyo ◽  
Nicole Rodríguez ◽  
Ezequiel M. Durand López ◽  
Juan J. Garrido Pozú ◽  
...  

Previous studies attest that early bilinguals can modify their perceptual identification according to the fine-grained phonetic detail of the language they believe they are hearing. Following Gonzales et al. (2019), we replicate the double phonemic boundary effect in late learners (LBs) using conceptually based cueing. We administered a forced-choice identification task to 169 native English adult learners of Spanish in two sessions. In both sessions, participants identified the same /b/-/p/ voicing continuum, but the language context was cued conceptually using the instructions. The data were analyzed using Bayesian multilevel regression. Learners categorized the continuum in a similar manner when they believed they were hearing English. However, when they believed they were hearing Spanish, "voiceless" responses increased as a function of L2 proficiency. This research demonstrates that the double phonemic boundary effect can be conceptually cued in LBs and supports accounts positing selective activation of independent perception grammars in L2 learning.
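As a rough illustration of the Bayesian multilevel (logistic) regression mentioned above, the sketch below fits a comparable model to simulated data with the bambi/PyMC stack. The original analysis was presumably run in a different framework; all variable names, effect sizes, and the simulated data here are assumptions.

```python
# Illustrative Bayesian multilevel logistic regression on simulated
# identification data (bambi/PyMC); not the authors' model or data.
import arviz as az
import bambi as bmb
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n_subj, n_steps = 30, 7
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_steps * 2),
    "step": np.tile(np.repeat(np.arange(n_steps), 2), n_subj),        # VOT continuum step
    "language": np.tile(["English", "Spanish"], n_subj * n_steps),    # conceptually cued context
    "proficiency": np.repeat(rng.uniform(0, 1, n_subj), n_steps * 2), # L2 proficiency (scaled)
})
# Simulate more "voiceless" responses at higher steps, boosted in the Spanish
# context for more proficient learners (toy effect sizes).
logit = -3 + 1.0 * df["step"] + 1.5 * (df["language"] == "Spanish") * df["proficiency"]
df["voiceless"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = bmb.Model(
    "voiceless ~ step * language * proficiency + (step | participant)",
    data=df,
    family="bernoulli",
)
idata = model.fit(draws=1000, chains=2)
print(az.summary(idata))
```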




Cognition ◽  
2020 ◽  
Vol 197 ◽  
pp. 104168
Author(s):  
Audrey Mazancieux ◽  
Tifany Pandiani ◽  
Chris J.A. Moulin

2020 ◽  
pp. 002383091989888
Author(s):  
Luma Miranda ◽  
Marc Swerts ◽  
João Moraes ◽  
Albert Rilliard

This paper presents the results of three perceptual experiments investigating the roles of the auditory and visual channels in the identification of statements and echo questions in Brazilian Portuguese. Ten Brazilian speakers (five male) were video-recorded (frontal view of the face) while producing a sentence ("Como você sabe"), either as a statement (meaning "As you know.") or as an echo question (meaning "As you know?"). Experiments were set up using the two intonation contours, with stimuli presented in conditions with clear and degraded audio as well as congruent and incongruent information from the two channels. Results show that Brazilian listeners were able to distinguish statements from questions both prosodically and visually, with auditory cues dominant over visual ones. In noisy conditions, the visual channel robustly improved the interpretation of prosodic cues, whereas it degraded interpretation when the visual information was incongruent with the auditory information. This study shows that auditory and visual information are integrated during speech perception, including when applied to prosodic patterns.
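A small, hypothetical sketch of how identification accuracy could be tabulated across the audio quality × congruence design described above; the data and the numbers below are invented purely to show the layout of such an analysis.

```python
# Toy tabulation of accuracy across clear/degraded audio and congruent/
# incongruent audiovisual cues; simulated data, not the study's results.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
rows = []
for audio in ("clear", "degraded"):
    for congruence in ("congruent", "incongruent"):
        # Invented accuracies: incongruent visual information hurts performance.
        p_correct = {"clear": 0.95, "degraded": 0.80}[audio]
        if congruence == "incongruent":
            p_correct -= 0.25
        rows.append(pd.DataFrame({
            "audio": audio,
            "congruence": congruence,
            "correct": rng.binomial(1, p_correct, size=100),
        }))

trials = pd.concat(rows, ignore_index=True)
print(trials.groupby(["audio", "congruence"])["correct"].mean().unstack("congruence"))
```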


2019 ◽  
Vol 63 (4) ◽  
pp. 877-897
Author(s):  
Chenhao Chiu ◽  
Po-Chun Wei ◽  
Masaki Noguchi ◽  
Noriko Yamane

In Taiwan Mandarin, retroflex [ʂ] is allegedly merging with dental [s], reducing the traditional three-way contrast between sibilant fricatives (i.e., dental [s]–retroflex [ʂ]–alveopalatal [ɕ]) to a two-way contrast. Most of the literature on this merger focuses on the acoustic properties and perceptual identification of the sibilants, whereas much less attention has been paid to articulatory evidence for the merger. The current study employed ultrasound imaging to examine the tongue postures for the three sibilant fricatives [s, ʂ, ɕ] in Taiwan Mandarin occurring before the vowels [a], [ɨ], and [o]. Results revealed varying classes of the [s–ʂ] merger: complete merging (overlap), no merging (non-overlap), and context-dependent merging (context-dependent overlap, which occurred only before [a]). The observed [s–ʂ] merger was also confirmed by perceptual identification by trained phoneticians. Center of gravity (CoG), a reliable spectral moment for identifying different sibilant fricatives, was also measured to assess the articulatory–acoustic correspondence. Results showed that the [s–ʂ] merger varies across speakers and may be conditioned by vowel context, and that articulatory mergers may not be entirely reflected in CoG values, suggesting that auxiliary articulatory gestures may be employed to maintain the acoustic contrast.
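Center of gravity is the first spectral moment, i.e. the power-weighted mean frequency of the frication noise. The sketch below gives a minimal numpy implementation of that textbook definition (comparable to Praat's centre of gravity); it is not the authors' measurement pipeline.

```python
# Minimal sketch of spectral center of gravity (first spectral moment) for one
# windowed audio frame; standard definition, not the study's exact pipeline.
import numpy as np

def center_of_gravity(frame: np.ndarray, sample_rate: float) -> float:
    """CoG in Hz for one audio frame (1-D array of samples)."""
    windowed = frame * np.hanning(len(frame))
    power = np.abs(np.fft.rfft(windowed)) ** 2              # power spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return float(np.sum(freqs * power) / np.sum(power))

# Toy example: broadband noise yields a high CoG, as expected for dental [s]
# relative to retroflex [ʂ].
rng = np.random.default_rng(0)
print(center_of_gravity(rng.standard_normal(2048), sample_rate=44100))
```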

