Aging, narrative organization, presentation mode, and referent choice strategies

1992 ◽  
Vol 18 (2) ◽  
pp. 75-84 ◽  
Author(s):  
Daniel Morrow ◽  
Patsy Altieri ◽  
Von Leirer


2020 ◽  
Vol 63 (4) ◽  
pp. 931-947
Author(s):  
Teresa L. D. Hardy ◽  
Carol A. Boliek ◽  
Daniel Aalto ◽  
Justin Lewicke ◽  
Kristopher Wells ◽  
...  

Purpose: The purpose of this study was twofold: (a) to identify a set of communication-based predictors (including both acoustic and gestural variables) of masculinity–femininity ratings and (b) to explore differences in ratings between audio and audiovisual presentation modes for transgender and cisgender communicators.

Method: The voices and gestures of a group of cisgender men and women (n = 10 of each) and transgender women (n = 20) were recorded while the communicators recounted the story of a cartoon, using acoustic and motion-capture recording systems. A total of 17 acoustic and gestural variables were measured from these recordings. A group of observers (n = 20) rated each communicator's masculinity–femininity based on 30- to 45-s samples of the cartoon description presented in three modes: audio, visual, and audiovisual. Visual and audiovisual stimuli contained point-light displays standardized for size. Ratings were made using a direct magnitude estimation scale without modulus. Communication-based predictors of masculinity–femininity ratings were identified using multiple regression, and analysis of variance was used to determine the effect of presentation mode on perceptual ratings.

Results: Fundamental frequency, average vowel formant, and sound pressure level were identified as significant predictors of masculinity–femininity ratings for these communicators. Communicators were rated significantly more feminine in the audio mode than in the audiovisual mode, and unreliably in the visual-only mode.

Conclusions: Both study purposes were met. Results support continued emphasis on fundamental frequency and vocal tract resonance in voice and communication modification training with transgender individuals and provide evidence for the potential benefit of modifying sound pressure level, especially when a masculine presentation is desired.
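The regression step described in the Method can be sketched numerically. The sketch below is purely illustrative: the data are simulated, and the coefficients and units are invented stand-ins for the study's three significant predictors (fundamental frequency, average vowel formant, and sound pressure level), not its actual values.

```python
# Hypothetical sketch: predicting perceived masculinity-femininity
# ratings from acoustic measures via ordinary least squares.
# All data and effect sizes are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 40  # number of communicator samples (illustrative)

# Simulated acoustic predictors (units: Hz, Hz, dB SPL)
f0 = rng.uniform(90, 250, n)       # fundamental frequency
avf = rng.uniform(1200, 1700, n)   # average vowel formant
spl = rng.uniform(55, 75, n)       # sound pressure level

# Simulated femininity ratings driven by the three predictors plus noise
rating = 0.02 * f0 + 0.004 * avf - 0.05 * spl + rng.normal(0, 0.3, n)

# Fit the multiple regression with an intercept term
X = np.column_stack([np.ones(n), f0, avf, spl])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((rating - pred) ** 2) / np.sum((rating - rating.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

In a real analysis each predictor's contribution would be tested for significance (e.g. with t statistics on the coefficients) rather than judged from the overall fit.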


1998 ◽  
Vol 41 (6) ◽  
pp. 1282-1293 ◽  
Author(s):  
Jane Mertz Garcia ◽  
Paul A. Dagenais

This study examined changes in the sentence intelligibility scores of speakers with dysarthria in association with different signal-independent factors (contextual influences). The investigation focused on the presence or absence of iconic gestures while speaking sentences with low or high semantic predictiveness. The speakers were 4 individuals with dysarthria who varied from one another in their level of speech intelligibility impairment, gestural abilities, and overall level of motor functioning. Ninety-six inexperienced listeners (24 assigned to each speaker) orthographically transcribed 16 test sentences presented in an audio + video or audio-only format. The sentences had either low or high semantic predictiveness and were spoken by each speaker with and without the corresponding gestures. The effects of the signal-independent factors (presence or absence of iconic gestures, low or high semantic predictiveness, and audio + video or audio-only presentation format) were analyzed for individual speakers. Not all signal-independent information benefited the speakers similarly. Results indicated that the use of gestures and high semantic predictiveness improved sentence intelligibility for 2 speakers; the other 2 speakers benefited only from highly predictive messages. The audio + video presentation mode enhanced listener understanding for all speakers, although there were interactions related to specific speaking situations. Overall, the contributions of relevant signal-independent information were greater for the speakers with more severely impaired intelligibility. The results are discussed in terms of understanding the contribution of signal-independent factors to the communicative process.
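Intelligibility scores from orthographic transcription tasks like this one are commonly computed as the proportion of target words a listener correctly writes down. A minimal sketch of that scoring step, with invented sentences (the study's actual stimuli and scoring rules are not reproduced here):

```python
# Hypothetical sketch of transcription-based intelligibility scoring:
# proportion of target words recovered, ignoring word order.
# Example sentences are invented, not the study's stimuli.
def words_correct(target: str, transcription: str) -> float:
    """Return the proportion of target words present in the transcription."""
    target_words = target.lower().split()
    heard = transcription.lower().split()
    hits = 0
    for word in target_words:
        if word in heard:
            heard.remove(word)  # each transcribed word credits one target word
            hits += 1
    return hits / len(target_words)

score = words_correct("the boy threw the ball", "a boy threw the ball")
print(f"{score:.0%}")  # 4 of 5 target words recovered
```

Per-condition intelligibility would then be the mean of such scores across sentences and listeners, letting the gesture, predictiveness, and presentation-format effects be compared speaker by speaker.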


2007 ◽  
Vol 19 (8) ◽  
pp. 1259-1274 ◽  
Author(s):  
Dietmar Roehm ◽  
Ina Bornkessel-Schlesewsky ◽  
Frank Rösler ◽  
Matthias Schlesewsky

We report a series of event-related potential experiments designed to dissociate the functionally distinct processes involved in the comprehension of highly restricted lexical-semantic relations (antonyms). We sought to differentiate between influences of semantic relatedness (which are independent of the experimental setting) and processes related to predictability (which differ as a function of the experimental environment). To this end, we conducted three ERP studies contrasting the processing of antonym relations (black-white) with that of related (black-yellow) and unrelated (black-nice) word pairs. Whereas the lexical-semantic manipulation was kept constant across experiments, the experimental environment and the task demands varied: Experiment 1 presented the word pairs in a sentence context of the form The opposite of X is Y and used a sensicality judgment. Experiment 2 used a word pair presentation mode and a lexical decision task. Experiment 3 also examined word pairs, but with an antonymy judgment task. All three experiments revealed a graded N400 response (unrelated > related > antonyms), thus supporting the assumption that semantic associations are processed automatically. In addition, the experiments revealed that, in highly constrained task environments, the N400 gradation occurs simultaneously with a P300 effect for the antonym condition, thus leading to the superficial impression of an extremely “reduced” N400 for antonym pairs. Comparisons across experiments and participant groups revealed that the P300 effect is not only a function of stimulus constraints (i.e., sentence context) and experimental task, but that it is also crucially influenced by individual processing strategies used to achieve successful task performance.
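The graded N400 reported across all three experiments is typically quantified as the mean amplitude in a fixed post-stimulus window per condition. The sketch below uses synthetic waveforms and an assumed 300–500 ms window to illustrate that measurement; real ERP analyses average over trials, electrodes, and participants.

```python
# Illustrative sketch of quantifying a graded N400: mean amplitude in
# a 300-500 ms window per condition. Waveforms are synthetic Gaussians,
# not recorded EEG; amplitudes are invented.
import numpy as np

srate = 250                              # Hz (assumed sampling rate)
t = np.arange(-0.2, 0.8, 1 / srate)      # epoch from -200 to 800 ms
window = (t >= 0.3) & (t <= 0.5)         # assumed N400 measurement window

def erp(n400_amp: float) -> np.ndarray:
    """Synthetic ERP: a negative deflection peaking near 400 ms."""
    return -n400_amp * np.exp(-((t - 0.4) ** 2) / (2 * 0.05**2))

conditions = {"antonym": erp(1.0), "related": erp(3.0), "unrelated": erp(5.0)}
means = {name: wave[window].mean() for name, wave in conditions.items()}
print(means)  # more negative = larger N400: unrelated > related > antonym
```

A co-occurring positive P300 in the antonym condition would add to these window means, which is how the superficially "reduced" antonym N400 described above can arise from component overlap rather than from a smaller N400 itself.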


1997 ◽  
Vol 40 (2) ◽  
pp. 432-443 ◽  
Author(s):  
Karen S. Helfer

Research has shown that speaking in a deliberately clear manner can improve the accuracy of auditory speech recognition. Allowing listeners access to visual speech cues also enhances speech understanding. Whether the information provided by speaking clearly and by using visual speech cues is redundant has not been determined. This study examined how speaking mode (clear vs. conversational) and presentation mode (auditory vs. auditory-visual) influenced the perception of words within nonsense sentences. In Experiment 1, 30 young listeners with normal hearing responded to videotaped stimuli presented audiovisually in the presence of background noise at one of three signal-to-noise ratios. In Experiment 2, 9 participants returned for an additional assessment using auditory-only presentation. Results of these experiments showed significant effects of speaking mode (clear speech was easier to understand than conversational speech) and presentation mode (auditory-visual presentation led to better performance than auditory-only presentation). The benefit of clear speech was greater for words occurring in the middle of sentences than for words at either the beginning or end of sentences for both auditory-only and auditory-visual presentation, whereas the greatest benefit from supplying visual cues was for words at the end of sentences spoken both clearly and conversationally. The total benefit from speaking clearly and supplying visual cues was equal to the sum of each of these effects. Overall, the results suggest that speaking clearly and providing visual speech information provide complementary (rather than redundant) information.
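The additivity finding (total benefit equal to the sum of the individual benefits) can be illustrated with a toy 2 × 2 table of condition means. All percent-correct values below are invented, chosen only so that the two benefits sum exactly to the combined one:

```python
# Toy illustration of additive benefits in a 2x2 design
# (speaking mode x presentation mode). All values are hypothetical.
scores = {
    ("conversational", "auditory"): 0.50,         # baseline
    ("clear", "auditory"): 0.65,                  # + clear-speech benefit
    ("conversational", "auditory-visual"): 0.70,  # + visual-cue benefit
    ("clear", "auditory-visual"): 0.85,           # combined condition
}
baseline = scores[("conversational", "auditory")]
clear_benefit = scores[("clear", "auditory")] - baseline
visual_benefit = scores[("conversational", "auditory-visual")] - baseline
combined_benefit = scores[("clear", "auditory-visual")] - baseline
print(f"clear +{clear_benefit:.2f}, visual +{visual_benefit:.2f}, "
      f"combined +{combined_benefit:.2f}")
```

If the two cues carried redundant information, the combined benefit would fall short of the sum; equality is what supports the complementary-information interpretation.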

