A cross-modal account for synchronic and diachronic patterns of /f/ and /θ/ in English

Author(s):  
Grant McGuire ◽  
Molly Babel

Abstract: While the role of auditory saliency is well accepted as providing insight into the shaping of phonological systems, the influence of visual saliency on such systems has been neglected. This paper provides evidence for the importance of visual information in historical phonological change and synchronic variation through a series of audio-visual experiments with the /f/∼/θ/ contrast. /θ/ is typologically rare, an atypical target in sound change, acquired comparatively late, and synchronically variable in language inventories. Previous explanations for these patterns have focused on either the articulatory difficulty of an interdental tongue gesture or the perceptual similarity /θ/ shares with labiodental fricatives. We hypothesize that the bias is due to an asymmetry in audio-visual phonetic cues and cue variability within and across talkers. Support for this hypothesis comes from a speech perception study that explored the weighting of audio and visual cues for /f/ and /θ/ identification in CV, VC, and VCV syllabic environments in /i/, /a/, or /u/ vowel contexts in Audio, Visual, and Audio-Visual experimental conditions using stimuli from ten different talkers. The results indicate that /θ/ is more variable than /f/ in both the Audio and Visual conditions. We propose that it is this variability that contributes to the unstable nature of /θ/ across time and offers an improved explanation for the observed synchronic and diachronic asymmetries in its patterning.
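
As a concrete illustration of the kind of variability comparison the abstract describes, the sketch below computes per-talker identification accuracy for /f/ and /θ/ in each presentation condition and then the across-talker spread. This is a hypothetical analysis in Python; the file and column names are invented for illustration and are not the authors' materials.

import pandas as pd

# Hypothetical per-trial data: one row per response, with columns
# talker, target ('f' or 'th'), condition ('A', 'V', 'AV'), correct (0/1).
trials = pd.read_csv("trials.csv")

# Mean identification accuracy per talker, target, and condition.
by_talker = (trials
             .groupby(["condition", "target", "talker"])["correct"]
             .mean()
             .reset_index(name="accuracy"))

# Across-talker spread of accuracy; the hypothesis predicts a larger
# spread for /th/ than for /f/ in both the Audio and Visual conditions.
spread = (by_talker
          .groupby(["condition", "target"])["accuracy"]
          .std()
          .unstack("target"))
print(spread)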

1993 ◽  
Vol 3 (3) ◽  
pp. 307-314 ◽  
Author(s):  
H. Mittelstaedt ◽  
S. Glasauer

This contribution examines the consequences of two remarkable experiences of subjects in weightlessness: 1) the absence of sensations of trunk tilt, and of the respective concomitant reflexes, when the head is tilted with respect to the trunk; and 2) the persistence of a perception of “up” and “down,” that is, of the polarity of the subjective vertical (SV), in the absence of, as well as in contradiction to, visual cues. The first disproves the hypothesis that the necessary head-to-trunk coordinate transformation is achieved by adding representations of the respective angles gained from the utricles and the neck receptors, and instead corroborates an extant model of cross-multiplication of utricular, saccular, and neck receptor components. The second indicates the existence of force-independent components in the determination of the SV. Although the number of subjects is still small and the experimental conditions are not as homogeneous as desired, measurements and/or reports on the ground, in parabolic flight, and in space flight point to the decisive role of the saccular z-bias, that is, of a difference between the mean resting discharges of saccular units polarized in the rostrad and the caudad (±z) directions.
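
As a schematic reminder of why a head-to-trunk coordinate transformation requires cross-multiplication of sensor components rather than addition of angle estimates (a textbook rotation offered for illustration, not the authors' full model), consider a gravity vector sensed in head coordinates and a neck angle φ signaled by the neck receptors:

\[
\begin{pmatrix} g_x^{\mathrm{trunk}} \\ g_z^{\mathrm{trunk}} \end{pmatrix}
=
\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}
\begin{pmatrix} g_x^{\mathrm{head}} \\ g_z^{\mathrm{head}} \end{pmatrix}
\]

Each trunk-frame component is a sum of products of otolith components (utricular \(g_x\), saccular \(g_z\)) with trigonometric functions of the neck angle, so the computation is inherently multiplicative.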


2018 ◽  
Vol 40 (1) ◽  
pp. 93-109
Author(s):  
YI ZHENG ◽  
ARTHUR G. SAMUEL

Abstract: It has been documented that lipreading facilitates the understanding of difficult speech, such as noisy speech and time-compressed speech. However, relatively little work has addressed the role of visual information in perceiving accented speech, another type of difficult speech. In this study, we focus specifically on accented word recognition. One hundred forty-two native English speakers made lexical decision judgments on English words or nonwords produced by speakers with Mandarin Chinese accents. The stimuli were presented either as videos of a relatively distant speaker or as videos in which we zoomed in on the speaker’s head. Consistent with studies of degraded speech, listeners were more accurate at recognizing accented words when they saw lip movements from the closer apparent distance. The effect of apparent distance tended to be larger under nonoptimal conditions: when the stimuli were nonwords rather than words, and when the stimuli were produced by a speaker with a relatively strong accent. However, we did not find any influence of listeners’ prior experience with Chinese-accented speech, suggesting that cross-talker generalization is limited. The current study provides practical suggestions for effective communication between native and nonnative speakers: visual information is useful, and it is more useful in some circumstances than in others.


Neurology ◽  
2018 ◽  
Vol 90 (11) ◽  
pp. e977-e984 ◽  
Author(s):  
Motoyasu Honma ◽  
Yuri Masaoka ◽  
Takeshi Kuroda ◽  
Akinori Futamura ◽  
Azusa Shiromaru ◽  
...  

Objective: To determine whether Parkinson disease (PD) affects the cross-modal function of vision and olfaction, given that PD is known to impair various cognitive functions, including olfaction. Methods: We conducted behavioral experiments to identify the influence of PD on cross-modal function by contrasting patient performance with that of age-matched normal controls (NCs). We examined visual effects on the strength of, and preference for, odors by manipulating semantic connections between picture/odorant pairs. In addition, we used brain imaging to identify the role of striatal presynaptic dopamine transporter (DaT) deficits. Results: We found that odor evaluation in participants with PD was unaffected by visual information, whereas NCs overestimated smell when sniffing an odorless liquid while viewing pleasant/unpleasant visual cues. Furthermore, the DaT deficit in the striatum, in the posterior putamen in particular, correlated with the reduced visual effects in participants with PD. Conclusions: These findings suggest that PD impairs the cross-modal function of vision and olfaction as a result of a posterior putamen deficit. This cross-modal dysfunction may serve as the basis of a novel precursor assessment of PD.


2017 ◽  
Vol 30 (7-8) ◽  
pp. 653-679 ◽  
Author(s):  
Nida Latif ◽  
Agnès Alsius ◽  
K. G. Munhall

During conversations, we engage in turn-taking behaviour that proceeds back and forth effortlessly as we communicate. On any given day, we participate in numerous face-to-face interactions that contain social cues from our partner, and we interpret these cues to rapidly identify whether it is appropriate to speak. Although the benefit provided by visual cues has been well established in several areas of communication, the use of visual information to make turn-taking decisions during conversation is unclear. Here we conducted two experiments to investigate the role of visual information in identifying conversational turn exchanges. We presented clips containing single utterances spoken by individuals engaged in a natural conversation with another person. These utterances occurred either right before a turn exchange (i.e., when the current talker would finish and the other would begin) or at points where the same talker would continue speaking. In Experiment 1, participants were presented with audiovisual, auditory-only, and visual-only versions of our stimuli and identified whether or not a turn exchange would occur. We demonstrated that although participants could identify turn exchanges with unimodal information alone, they performed best in the audiovisual modality. In Experiment 2, we presented participants with audiovisual turn exchanges in which the talker, the listener, or both were visible. We showed that participants suffered a cost in identifying turn exchanges when visual cues from the listener were not available. Overall, we demonstrate that although auditory information is sufficient for successful conversation, visual information plays an important role in the overall efficiency of communication.


Blood ◽  
1997 ◽  
Vol 89 (1) ◽  
pp. 135-145 ◽  
Author(s):  
Rossella Manfredini ◽  
Raffaella Balestri ◽  
Enrico Tagliafico ◽  
Francesca Trevisan ◽  
Michela Pizzanelli ◽  
...  

To gain some insight into the role of c-fes in macrophage differentiation, we have analyzed the ability of HL60 leukemic promyelocytic cells and FDC-P1/MAC-11 murine myeloid precursor cells to differentiate in response to phorbol esters after inhibition of c-fes function. Fes inactivation was obtained by using oligodeoxynucleotides (ODN) complementary to the 5′ region of c-fes mRNA and to the 5′ splice junctions of the c-fes primary transcript. After 5 days (d) in culture, in several separate experiments performed with different ODN preparations, a complete inhibition of c-fes expression was observed in HL60 and FDC-P1/MAC-11 cells. No perturbation of cell growth was evident under our experimental conditions in either cell line after c-fes inhibition. Furthermore, in HL60 cells lacking the c-fes product, an almost complete downregulation of the α4β1 fibronectin receptor occurred. However, in both cell lines, the induction of macrophage differentiation by phorbol esters resulted in an almost complete maturation arrest, as evaluated by morphological, cytochemical, and immunological criteria and by cytofluorimetric cell cycle analysis. A loss of adhesion capacity in both myeloid cell lines, when compared to terminally differentiated macrophages, was also observed. These results suggest that HL60 and FDC-P1/MAC-11 cells, when treated with phorbol 12-myristate 13-acetate, require c-fes protein expression to activate the genetic program underlying macrophage differentiation.


2022 ◽  
Author(s):  
Nicole E Wynne ◽  
Karthikeyan Chandrasegaran ◽  
Lauren Fryzlewicz ◽  
Clément Vinauger

The diurnal mosquito Aedes aegypti is a vector of several arboviruses, including the dengue, yellow fever, and Zika viruses. To find a host to feed on, these mosquitoes rely on the sophisticated integration of olfactory, visual, thermal, and gustatory cues emitted by the hosts. If a mosquito is detected by its target, the host may display defensive behaviors that the mosquito needs to be able to detect and escape. In humans, a typical response is a swat of the hand, which generates both mechanical and visual perturbations aimed at the mosquito. While the neuro-sensory mechanisms underlying the approach to the host have been the focus of numerous studies, the cues mosquitoes use to detect and identify a potential threat remain largely understudied. In particular, the role of vision in mediating mosquitoes' ability to escape defensive hosts has yet to be analyzed. Here, we used programmable visual displays to generate expanding objects sharing characteristics with the visual component of an approaching hand and quantified the behavioral responses of female mosquitoes. The results show that Ae. aegypti is capable of using visual information to decide whether to feed on an artificial host mimic. Stimulations delivered in an LED flight arena further reveal that landed female Ae. aegypti display a stereotypical escape strategy, taking off at an angle that is a function of the distance and direction of stimulus introduction. Altogether, this study demonstrates that mosquitoes can use isolated visual cues to detect and avoid a potential threat.


1990 ◽  
Vol 33 (1) ◽  
pp. 163-173 ◽  
Author(s):  
Brian E. Walden ◽  
Allen A. Montgomery ◽  
Robert A. Prosek ◽  
David B. Hawkins

Intersensory biasing occurs when cues in one sensory modality influence the perception of discrepant cues in another modality. Visual biasing of auditory stop consonant perception was examined in two related experiments in an attempt to clarify the role of hearing impairment in susceptibility to visual biasing of auditory speech perception. Fourteen computer-generated acoustic approximations of consonant-vowel syllables forming a /ba-da-ga/ continuum were presented for labeling as one of the three exemplars, via audition alone and in synchrony with natural visual articulations of /ba/ and of /ga/. Labeling functions were generated for each test condition showing the percentage of /ba/, /da/, and /ga/ responses to each of the 14 synthetic syllables. The subjects of the first experiment were 15 normal-hearing and 15 hearing-impaired observers. The hearing-impaired subjects demonstrated a greater susceptibility to biasing from visual cues than did the normal-hearing subjects. In the second experiment, the auditory stimuli were presented in low-level background noise to 15 normal-hearing observers. A comparison of their labeling responses with those from the first experiment suggested that hearing-impaired persons may develop a propensity to rely on visual cues as a result of long-term hearing impairment. The results are discussed in terms of theories of intersensory bias.
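
To make the notion of a "labeling function" and a biasing index concrete, here is a hypothetical Python sketch. The response counts are randomly generated stand-ins, and the bias measure (the shift in /ba/ labeling when the audio continuum is paired with a visual /ba/ articulation) is one plausible operationalization, not the authors' exact analysis.

import numpy as np

n_steps = 14  # synthetic /ba-da-ga/ continuum steps

# responses[cond] holds label counts (ba, da, ga) at each continuum step
# for three conditions: audio alone, audio + visual /ba/, audio + visual /ga/.
rng = np.random.default_rng(0)
responses = {c: rng.multinomial(20, [1/3, 1/3, 1/3], size=n_steps)
             for c in ["A", "A+Vba", "A+Vga"]}

def labeling_function(counts):
    # Percentage of /ba/, /da/, and /ga/ responses at each continuum step.
    return 100 * counts / counts.sum(axis=1, keepdims=True)

# Visual biasing index: mean shift in /ba/ labeling when the continuum
# is paired with a visual /ba/ articulation, relative to audio alone.
bias_ba = (labeling_function(responses["A+Vba"])[:, 0]
           - labeling_function(responses["A"])[:, 0]).mean()
print(f"visual /ba/ bias: {bias_ba:+.1f} percentage points")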


2019 ◽  
Author(s):  
Meike Scheller ◽  
Francine Matorres ◽  
Lucy Tompkins ◽  
Anthony C. Little ◽  
Alexandra A. de Sousa

Cross-cultural research has repeatedly demonstrated sex differences in the importance of different partner characteristics when choosing a mate. Men typically report stronger preferences for younger, more physically attractive women, while women prefer men who are wealthier and of higher status. As the assessment of such partner characteristics often relies on visual cues, this raises the question of whether visual experience is necessary for sex-specific mate preferences to develop. To shed more light on the emergence of sex differences in mate choice, the current study assessed how preferences for attractiveness, resources, and personality factors differ between sighted and blind individuals using an online questionnaire. We further investigated the role of social factors and sensory cue selection in these sex differences. Our sample consisted of 94 sighted and blind participants with different ages of blindness onset: 19 blind and 28 sighted males, and 19 blind and 28 sighted females. The results replicated well-documented findings in the sighted, with men placing more importance on physical attractiveness and women placing more importance on status and resources. However, while physical attractiveness was less important to blind men, blind women considered physical attractiveness as important as sighted women did. The importance of a high status and a likeable personality was not influenced by sightedness. Blind individuals considered auditory cues more important than visual cues, while sighted males showed the opposite pattern. Further, relationship status and indirect social influences were related to preferences. Overall, our findings shed light on the role of visual information in the emergence of sex differences in mate preference.


Perception ◽  
10.1068/p7153 ◽  
2012 ◽  
Vol 41 (2) ◽  
pp. 175-192 ◽  
Author(s):  
Esteban R Calcagno ◽  
Ezequiel L Abregú ◽  
Manuel C Eguía ◽  
Ramiro Vergara

In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different natures and reducing the variability of responses. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but few studies have examined the influence of vision on auditory distance perception. In general, the data obtained from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans were performed, both including and excluding visual cues. The results show that the apparent distance of the source is affected by the presence of visual information and that subjects can store in memory a representation of the environment that later improves their perception of distance.
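
One standard way to formalize how combining modalities reduces response variability (the textbook maximum-likelihood cue-integration account, offered here as background rather than as the model tested in this paper): if the auditory and visual distance estimates are unbiased with variances \(\sigma_A^2\) and \(\sigma_V^2\), the optimal weighted combination has variance

\[
\sigma_{AV}^2 = \frac{\sigma_A^2 \, \sigma_V^2}{\sigma_A^2 + \sigma_V^2} \le \min(\sigma_A^2, \sigma_V^2),
\]

so the bimodal estimate is never noisier than the better unimodal one.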


2021 ◽  
pp. 1-23
Author(s):  
Hye-Jung CHO ◽  
Jieun KIAER ◽  
Naya CHOI ◽  
Jieun SONG

Abstract: In the Korean language, questions containing ambiguous wh-words may be interpreted as either wh-questions or yes-no questions. This study investigated 43 Korean three-year-olds’ ability to disambiguate eight indeterminate questions using prosodic and visual cues. The intonation of each question provided a cue as to whether it should be interpreted as a wh-question or a yes-no question. The questions were presented alongside picture stimuli, which acted as either a matched contextual cue (presentation of corresponding auditory-visual stimuli) or a mismatched contextual cue (presentation of conflicting auditory-visual stimuli). Like adults, the children preferred to interpret questions involving ambiguous wh-words as wh-questions rather than as yes-no questions. In addition, the children were as effective as adults in disambiguating indeterminate questions using prosodic cues, regardless of the visual cue. However, when confronted with conflicting auditory-visual stimuli (mismatched), children's responses were less accurate than adults' responses.

