Contributions of Semantic and Facial Information to Perception of Nonsibilant Fricatives

2003 ◽  
Vol 46 (6) ◽  
pp. 1367-1377 ◽  
Author(s):  
Allard Jongman ◽  
Yue Wang ◽  
Brian H. Kim

Most studies have been unable to identify reliable acoustic cues for the recognition of the English nonsibilant fricatives /f, v, θ, ð/. The present study was designed to test the extent to which the perception of these fricatives by normal-hearing adults is based on other sources of information, namely, linguistic context and visual information. In Experiment 1, target words beginning with /f/, /θ/, /s/, or /ʃ/ were preceded by either a semantically congruous or incongruous precursor sentence. Results showed an effect of linguistic context on the perception of the distinction between /f/ and /θ/ and on the acoustically more robust distinction between /s/ and /ʃ/. In Experiment 2, participants identified syllables consisting of the fricatives /f, v, θ, ð/ paired with the vowels /i, a, u/. Three conditions were contrasted: Stimuli were presented with (a) both auditory and visual information, (b) auditory information alone, or (c) visual information alone. When errors in terms of voicing were ignored in all 3 conditions, results indicated that perception of these fricatives is as good with visual information alone as with both auditory and visual information combined, and better than for auditory information alone. These findings suggest that accurate perception of nonsibilant fricatives derives from a combination of acoustic, linguistic, and visual information.

2018 ◽  
Vol 72 (5) ◽  
pp. 1141-1154 ◽  
Author(s):  
Daniele Nardi ◽  
Brian J Anzures ◽  
Josie M Clark ◽  
Brittany V Griffith

Among the environmental stimuli that can guide navigation in space, most attention has been dedicated to visual information. The process of determining where you are and which direction you are facing (called reorientation) has been extensively examined by providing the navigator with two sources of information—typically the shape of the environment and its features—with an interest in the extent to which they are used. Studies addressing similar questions with non-visual cues are lacking. Here, blindfolded sighted participants had to learn the location of a target in a real-world, circular search space. In Experiment 1, two ecologically relevant non-visual cues were provided: the slope of the floor and an array of two identical auditory landmarks. Slope successfully guided behaviour, suggesting that proprioceptive/kinesthetic access is sufficient to navigate in a slanted environment. However, despite the fact that participants could localise the auditory sources, this information was not encoded. In Experiment 2, the auditory cue was made more useful for the task because it had greater predictive value and there were no competing spatial cues. Nonetheless, again, the auditory landmark was not encoded. Finally, in Experiment 3, after being prompted, participants were able to reorient by using the auditory landmark. Overall, participants failed to spontaneously rely on the auditory cue, regardless of how informative it was.


1976 ◽  
Vol 19 (4) ◽  
pp. 628-638 ◽  
Author(s):  
Ronald R. Kelly ◽  
C. Tomlinson-Keasey

Eleven hearing-impaired children and 11 normal-hearing children (mean age = 4 years 11 months) were visually presented familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired group and normal-hearing subjects performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing-impaired group performed equally well with both modes (P/P and W/W), while the normal hearing did significantly better on the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual code their visual information better than hearing-impaired children.


2021 ◽  
Author(s):  
Tobias Gerstenberg ◽  
Max H Siegel ◽  
Joshua Tenenbaum

We introduce a novel experimental paradigm for studying multi-modal integration in causal inference. Our experiments feature a physically realistic Plinko machine in which a ball is dropped through one of three holes and comes to rest at the bottom after colliding with a number of obstacles. We develop a hypothetical simulation model which postulates that people figure out what happened by integrating visual and auditory evidence through mental simulation. We test the model in a series of three experiments. In Experiment 1, participants only receive visual information and either predict where the ball will land, or infer in what hole it was dropped based on where it landed. In Experiment 2, participants receive both visual and auditory information: they hear what sounds the dropped ball makes. We find that participants are capable of integrating both sources of information, and that the sounds help them figure out what happened. In Experiment 3, we show strong cue integration: even when vision and sound are individually completely non-diagnostic, participants succeed by combining both sources of evidence.
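The cue-integration idea described in this abstract can be illustrated with a toy Bayesian sketch (our own illustration, not the authors' simulation model): the posterior over the three drop holes is proportional to the product of per-hole likelihoods from vision and audition, so two individually ambiguous cues can jointly single out the answer.

```python
# Toy illustration of multi-modal cue integration over three drop holes.
# All numbers below are made up for illustration only.
def integrate(prior, visual_lik, auditory_lik):
    """Posterior over holes: normalize the product prior * vision * audition."""
    unnorm = [p * v * a for p, v, a in zip(prior, visual_lik, auditory_lik)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

prior = [1 / 3, 1 / 3, 1 / 3]   # each hole equally likely a priori
visual = [0.5, 0.3, 0.2]        # vision alone only weakly favors hole 1
auditory = [0.6, 0.2, 0.2]      # the sounds alone are also ambiguous
posterior = integrate(prior, visual, auditory)  # combined evidence sharpens
```

Here the combined posterior for hole 1 (0.75) exceeds what either cue supports on its own, which is the qualitative signature of cue integration the experiments test for.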


i-Perception ◽  
2019 ◽  
Vol 10 (1) ◽  
pp. 204166951881901 ◽  
Author(s):  
Jason M. Haberman ◽  
Lauren Ulrich

Humans can recognize faces in the presence of environmental noise. Here, we explore whether ensemble perception of faces is similarly robust. Is summary statistical information available from crowds of faces that are visually incomplete? Observers viewed sets of faces varying in identity or expression and adjusted a test face to match the perceived average. In one condition, faces amodally completed behind horizontal bars. In another condition, identical facial information was presented, but in the foreground (i.e., face parts appeared on fragmented strips in front of a background). Baseline performance was determined by performance on sets of fully visible faces. The results revealed that the ensemble representation of amodally completing sets was significantly better than the fragmented sets and marginally worse than in the fully visible condition. These results suggest that some ensemble information is available given limited visual input and support a growing body of work suggesting that ensembles may be represented in the absence of complete visual information.


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Muge Ozker ◽  
Daniel Yoshor ◽  
Michael S Beauchamp

Human faces contain multiple sources of information. During speech perception, visual information from the talker’s mouth is integrated with auditory information from the talker's voice. By directly recording neural responses from small populations of neurons in patients implanted with subdural electrodes, we found enhanced visual cortex responses to speech when auditory speech was absent (rendering visual speech especially relevant). Receptive field mapping demonstrated that this enhancement was specific to regions of the visual cortex with retinotopic representations of the mouth of the talker. Connectivity between frontal cortex and other brain regions was measured with trial-by-trial power correlations. Strong connectivity was observed between frontal cortex and mouth regions of visual cortex; connectivity was weaker between frontal cortex and non-mouth regions of visual cortex or auditory cortex. These results suggest that top-down selection of visual information from the talker’s mouth by frontal cortex plays an important role in audiovisual speech perception.


2019 ◽  
Vol 28 (4) ◽  
pp. 986-992 ◽  
Author(s):  
Lisa R. Park ◽  
Erika B. Gagnon ◽  
Erin Thompson ◽  
Kevin D. Brown

Purpose The aims of this study were to (a) determine a metric for describing full-time use (FTU), (b) establish whether age at FTU in children with cochlear implants (CIs) predicts language at 3 years of age better than age at surgery, and (c) describe the extent of FTU and length of time it took to establish FTU in this population. Method This retrospective analysis examined receptive and expressive language outcomes at 3 years of age for 40 children with CIs. Multiple linear regression analyses were run with age at surgery and age at FTU as predictor variables. FTU definitions included 8 hr of device use and 80% of average waking hours for a typically developing child. Descriptive statistics were used to describe the establishment and degree of FTU. Results Although 8 hr of daily wear is typically considered FTU in the literature, the 80% hearing hours percentage metric accounts for more variability in outcomes. For both receptive and expressive language, age at FTU was found to be a better predictor of outcomes than age at surgery. It took an average of 17 months for children in this cohort to establish FTU, and only 52.5% reached this milestone by the time they were 3 years old. Conclusions Children with normal hearing can access spoken language whenever they are awake, and the amount of time young children are awake increases with age. A metric that incorporates the percentage of time that children with CIs have access to sound as compared to their same-aged peers with normal hearing accounts for more variability in outcomes than using an arbitrary number of hours. Although early FTU is not possible without surgery occurring at a young age, device placement does not guarantee use and does not predict language outcomes as well as age at FTU.
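The hearing hours percentage metric described above is a simple ratio: device wear time as a percentage of a typically developing child's expected waking hours. A minimal sketch (the 12-hour waking figure below is a hypothetical example, not a value from the study):

```python
# Sketch of the hearing hours percentage (HHP) metric: CI wear time relative
# to the waking hours of a typically hearing peer of the same age.
def hearing_hours_percentage(device_hours_per_day, waking_hours_per_day):
    """Percentage of waking hours during which the child has access to sound."""
    return 100.0 * device_hours_per_day / waking_hours_per_day

# Hypothetical example: a toddler awake ~12 h/day wearing the device 10 h/day.
hhp = hearing_hours_percentage(10, 12)
full_time_use = hhp >= 80.0  # the 80% criterion the study found most predictive
```

Note how the same 10 hours of daily wear can meet the 80% criterion for a toddler awake 12 hours a day yet fall short of it for an older child awake 14 hours, which is why the percentage accounts for more outcome variability than a fixed 8-hour cutoff.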


2000 ◽  
Vol 1719 (1) ◽  
pp. 165-174 ◽  
Author(s):  
Peter R. Stopher ◽  
David A. Hensher

Transportation planners increasingly include a stated choice (SC) experiment as part of the armory of empirical sources of information on how individuals respond to current and potential travel contexts. The accumulated experience with SC data has been heavily conditioned on analyst prejudices about the acceptable complexity of the data collection instrument, especially the number of profiles (or treatments) given to each sampled individual (and the number of attributes and alternatives to be processed). It is not uncommon for transport demand modelers to impose stringent limitations on the complexity of an SC experiment. A review of the marketing and transport literature suggests that little is known about the basis for rejecting complex designs or accepting simple designs. Although more complex designs provide the analyst with increasing degrees of freedom in the estimation of models, facilitating nonlinearity in main effects and independent two-way interactions, it is not clear what the overall behavioral gains are in increasing the number of treatments. A complex design is developed as the basis for a stated choice study, producing a fractional factorial of 32 rows. The fraction is then truncated by administering 4, 8, 16, 24, and 32 profiles to a sample of 166 individuals (producing 1,016 treatments) in Australia and New Zealand faced with the decision to fly (or not to fly) between Australia and New Zealand by either Qantas or Ansett under alternative fare regimes. Statistical comparisons of elasticities (an appropriate behavioral basis for comparisons) suggest that the empirical gains within the context of a linear specification of the utility expression associated with each alternative in a discrete choice model may be quite marginal.


1995 ◽  
Vol 80 (3_suppl) ◽  
pp. 1075-1082 ◽  
Author(s):  
Salvatore De Marco ◽  
Roxanne M. Harrell

A comparative study was undertaken to assess the relative magnitude of the effects of linguistic context on the perception of word-juncture boundaries in 30 young school-aged children, 30 older school-aged children, and 30 adults. Minimally contrasted two-word phrases differing in word-juncture boundaries were embedded in a meaningful sentence context, nonmeaningful sentence context, and in neutral phrase context. Groups performed similarly in the neutral phrase context, and the two older groups performed better than the young group in the meaningful context. The poorest performances occurred in the nonmeaningful context, with a significant difference among age groups. Heavier reliance upon top-down processing and less developed linguistic and metalinguistic competence may account for the observed differences among groups.


1976 ◽  
Vol 19 (2) ◽  
pp. 279-289 ◽  
Author(s):  
Randall B. Monsen

Although it is well known that the speech produced by the deaf is generally of low intelligibility, the sources of this low speech intelligibility have generally been ascribed either to aberrant articulation of phonemes or inappropriate prosody. This study was designed to determine to what extent a nonsegmental aspect of speech, formant transitions, may differ in the speech of the deaf and of the normal hearing. The initial second formant transitions of the vowels /i/ and /u/ after labial and alveolar consonants (/b, d, f/) were compared in the speech of six normal-hearing and six hearing-impaired adolescents. In the speech of the hearing-impaired subjects, the second formant transitions may be reduced both in time and in frequency. At its onset, the second formant may be nearer to its eventual target frequency than in the speech of the normal subjects. Since formant transitions are important acoustic cues for the adjacent consonants, reduced F2 transitions may be an important factor in the low intelligibility of the speech of the deaf.


2019 ◽  
Vol 5 (1) ◽  
Author(s):  
Syahrul Syarifudin

Some companies operating in Indonesia are foreign-owned. People tend to assume that foreign companies perform better than domestic companies, on the presumption that foreign companies have relatively larger capital, better technology, and greater expertise, and that before, during, and after the crisis the performance of foreign-owned companies was better than that of domestic companies. Company performance, good or bad, can be assessed using equity-ratio analysis, which yields the rate of return on equity, the earnings-per-share ratio, the profit-price ratio, the capitalization rate, and dividend income. Such analysis can support investors and potential investors as a source of information when investing in a company. Data analysis using a t-test (difference test) found no significant difference between domestic and foreign companies in the return-on-equity ratio, earnings-per-share ratio, profit-price ratio, capitalization rate, or dividend income. Thus the performance of domestic companies is statistically similar to that of foreign companies.

Keywords: earnings per share, profit ratio, capitalization ratio
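The difference test described in this abstract is an independent-samples t-test comparing financial ratios across the two ownership groups. A minimal sketch, computed by hand on synthetic return-on-equity figures (not the study's data):

```python
# Welch's independent-samples t-test on two small synthetic samples of
# return on equity (ROE), one for domestic firms and one for foreign firms.
from math import sqrt
from statistics import mean, variance

roe_domestic = [0.12, 0.08, 0.15, 0.10, 0.09, 0.11]
roe_foreign = [0.13, 0.09, 0.14, 0.11, 0.10, 0.12]

def welch_t(a, b):
    """Welch's t statistic: mean difference over its standard error."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t_stat = welch_t(roe_domestic, roe_foreign)
# A small |t| (well below the ~2 critical value at the 5% level) means the
# group means are not significantly different, mirroring the study's finding.
```

The same comparison would be repeated for each ratio (earnings per share, profit-price ratio, capitalization rate, dividend income).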

