Visual Feedback and Lip-Positioning Skills of Children with and without Impaired Hearing

1986 ◽  
Vol 29 (2) ◽  
pp. 231-239 ◽  
Author(s):  
Samuel G. Fletcher

The interplay between visual feedback and lip-positioning skill was studied in ten 5- to 14-year-old children with normal hearing and ten with severe-to-profound hearing impairment. With visual feedback, the subjects in both groups had similar response times and accuracy in matching six visually specified lip-separation "targets." Special skill in processing visual information by the hearing-impaired subjects was suggested by higher velocities of lip movement toward the targets and shorter latencies in reaching the goal positions. Among the hearing children, lip-closing movements were executed more accurately than opening movements, both with and without visual feedback. In general, the findings showed that, given visually displayed lip-position targets and feedback from positioning actions, children can achieve the targets with high accuracy regardless of hearing status or prior speaking experience.

2000 ◽  
Vol 84 (4) ◽  
pp. 1708-1718 ◽  
Author(s):  
Andrew B. Slifkin ◽  
David E. Vaillancourt ◽  
Karl M. Newell

The purpose of the current investigation was to examine the influence of intermittency in visual information processing on intermittency in the control of continuous force production. Adult human participants were required to maintain force at, and minimize variability around, a force target over an extended duration (15 s), while the intermittency of on-line visual feedback presentation was varied across conditions. This was accomplished by varying the frequency of successive force-feedback deliveries presented on a video display. As a function of a 128-fold increase in feedback frequency (0.2 to 25.6 Hz), performance quality improved according to hyperbolic functions (e.g., force variability decayed), reaching asymptotic values near the 6.4-Hz feedback frequency level. Thus, the briefest interval over which visual information could be integrated and used to correct errors in motor output was approximately 150 ms. The observed reductions in force variability were correlated with parallel declines in spectral power at about 1 Hz in the frequency profile of force output. In contrast, power at higher frequencies in the force output spectrum was uncorrelated with increases in feedback frequency. Thus, there was a considerable lag between the generation of motor output corrections (1 Hz) and the processing of visual feedback information (6.4 Hz). To reconcile these differences in visual and motor processing times, we proposed a model in which error information is accumulated by visual information processes at a maximum frequency of 6.4 samples per second, and the motor system generates a correction on the basis of the accumulated information at the end of each 1-s interval.
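
To make the proposed two-rate model concrete, the following is a minimal simulation sketch, not the authors' implementation: error information is sampled at 6.4 Hz, accumulated, and applied as a single correction at the end of each 1-s interval. All parameter values (noise level, correction gain, target force) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation) of the proposed
# two-rate model: visual error samples are accumulated at 6.4 Hz and a
# single motor correction is issued at the end of each 1-s interval.
rng = np.random.default_rng(0)

dt = 1 / 64.0                 # simulation step (s)
duration = 15.0               # trial length, as in the task (s)
target = 10.0                 # arbitrary force target (N); an assumption
sample_period = 1 / 6.4       # visual sampling interval (~156 ms)
correct_period = 1.0          # motor correction interval (s)

force = 8.0                   # arbitrary initial force output
accumulated_error = 0.0
n_samples = 0
next_sample = sample_period
next_correction = correct_period

trace = []
for step in range(int(duration / dt)):
    t = step * dt
    force += rng.normal(0.0, 0.02)         # motor noise drifts the output
    if t >= next_sample:                   # visual system samples the error
        accumulated_error += target - force
        n_samples += 1
        next_sample += sample_period
    if t >= next_correction and n_samples: # one correction per second
        force += 0.8 * accumulated_error / n_samples
        accumulated_error, n_samples = 0.0, 0
        next_correction += correct_period
    trace.append(force)

print(f"final force: {trace[-1]:.2f} N, SD over trial: {np.std(trace):.3f}")
```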


1983 ◽  
Vol 27 (5) ◽  
pp. 354-354
Author(s):  
Bruce W. Hamill ◽  
Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among stimuli in the array being searched and conditions in which conjunctions of two (or more) features differ. Because the visual process of conjoining separable features is additive, this distinction is reflected in search time as a function of array size: feature conditions yield flat curves associated with parallel search (no increase in search time across array sizes), whereas conjunction conditions yield linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure. Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each. Our results with configural stimuli indicate serial search in all of the conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions. However, for each of the two sets of stimuli we studied, there was one configuration that stood apart from the others in its set: it yielded significantly faster response times, and conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these major effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail. We have found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We have also been able to characterize the effects found with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of the distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.
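
The parallel/serial distinction described above is conventionally diagnosed from the slope of the response-time-by-array-size function. The sketch below illustrates that standard analysis; the response times are invented for illustration and are not data from the study.

```python
import numpy as np

# Classify search as parallel vs. serial from the slope of the
# response-time-by-array-size function. RT values are invented.
array_sizes = np.array([4, 8, 16, 32])
rt_feature = np.array([520, 523, 519, 525])      # ms: flat -> parallel search
rt_conjunction = np.array([540, 640, 840, 1240]) # ms: linear -> serial search

def search_slope(sizes, rts):
    """Least-squares slope of RT on array size, in ms per item."""
    slope, _intercept = np.polyfit(sizes, rts, 1)
    return slope

for label, rts in [("feature", rt_feature), ("conjunction", rt_conjunction)]:
    s = search_slope(array_sizes, rts)
    kind = "parallel (flat)" if s < 5 else "serial (increasing)"
    print(f"{label}: {s:.1f} ms/item -> {kind}")
```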


2002 ◽  
Vol 45 (5) ◽  
pp. 1027-1038 ◽  
Author(s):  
Rosalie M. Uchanski ◽  
Ann E. Geers ◽  
Athanassios Protopapas

Exposure to modified speech has been shown to benefit children with language-learning impairments with respect to their language skills (M. M. Merzenich et al., 1998; P. Tallal et al., 1996). In the study by Tallal and colleagues, the speech modification consisted of both slowing down and amplifying fast, transitional elements of speech. In this study, we examined whether the benefits of modified speech could be extended to provide intelligibility improvements for children with severe-to-profound hearing impairment who wear sensory aids. In addition, the separate effects on intelligibility of slowing down and of amplifying speech were evaluated. Two groups of listeners were employed: 8 children with severe-to-profound hearing impairment and 5 children with normal hearing. Four speech-processing conditions were tested: (1) natural, unprocessed speech; (2) envelope-amplified speech; (3) slowed speech; and (4) both slowed and envelope-amplified speech. For each condition, three types of speech materials were used: words in sentences, isolated words, and syllable contrasts. To degrade the performance of the normal-hearing children, all testing was completed with a noise background. Results from the hearing-impaired children showed that all varieties of modified speech yielded intelligibility equivalent to or poorer than that of unprocessed speech. For words in sentences and isolated words, slowing the speech had no effect on intelligibility scores, whereas envelope amplification, both alone and combined with slowing, yielded significantly lower scores. Intelligibility results from normal-hearing children listening in noise were broadly similar to those from the hearing-impaired children. For isolated words, slowing the speech had no effect on intelligibility, whereas envelope amplification degraded intelligibility. For both subject groups, speech processing had no statistically significant effect on syllable discrimination. In summary, without extensive exposure to the speech-processing conditions, children with impaired hearing, and children with normal hearing listening in noise, received no intelligibility advantage from either slowed speech or envelope-amplified speech.
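
As a rough illustration of what envelope amplification involves, the sketch below boosts the fast amplitude-envelope modulations of a signal. The band limits, gain, and filter choices are assumptions made for illustration; this is not the processing algorithm used in the study, which followed Tallal and colleagues' method.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

# Hedged sketch of envelope amplification: extract the amplitude envelope,
# isolate its fast modulations, and boost them. Parameters are assumptions.
def envelope_amplify(x, fs, band=(3.0, 30.0), gain=2.0):
    """Boost fast amplitude-envelope modulations of signal x (sample rate fs)."""
    env = np.abs(hilbert(x))                       # amplitude envelope
    b, a = butter(2, [f / (fs / 2) for f in band], btype="band")
    fast = filtfilt(b, a, env)                     # fast envelope modulations
    boosted = np.maximum(env + (gain - 1.0) * fast, 1e-6)
    return x * (boosted / np.maximum(env, 1e-6))   # re-impose modified envelope

fs = 16000
t = np.arange(fs) / fs                             # 1 s of a toy "speech" signal
x = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 8 * t))
y = envelope_amplify(x, fs)
print(f"rms before: {np.sqrt(np.mean(x**2)):.3f}, after: {np.sqrt(np.mean(y**2)):.3f}")
```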


2006 ◽  
Vol 95 (2) ◽  
pp. 922-931 ◽  
Author(s):  
David E. Vaillancourt ◽  
Mary A. Mayka ◽  
Daniel M. Corcos

The cerebellum, parietal cortex, and premotor cortex are integral to visuomotor processing. The parameters of visual information that modulate their role in visuomotor control are less clear. From motor psychophysics, the relation between the frequency of visual feedback and force variability has been identified as nonlinear. We therefore hypothesized that visual feedback frequency would differentially modulate the neural activation in the cerebellum, parietal cortex, and premotor cortex related to visuomotor processing. We used functional magnetic resonance imaging at 3 Tesla to examine visually guided grip force control under frequent and infrequent visual feedback conditions. Control conditions with intermittent visual feedback alone and a control force condition without visual feedback were also examined. As expected, force variability was reduced in the frequent compared with the infrequent condition. Three novel findings were identified. First, infrequent (0.4 Hz) visual feedback did not result in visuomotor activation in the lateral cerebellum (lobule VI/Crus I), whereas frequent (25 Hz) intermittent visual feedback did. This is in contrast to the anterior intermediate cerebellum (lobule V/VI), which was consistently active across all force conditions compared with rest. Second, confirming previous observations, the parietal and premotor cortices were active during grip force with frequent visual feedback. The novel finding was that the parietal and premotor cortices were also active during grip force with infrequent visual feedback. Third, the right inferior parietal lobule, dorsal premotor cortex, and ventral premotor cortex had greater activation in the frequent compared with the infrequent grip force condition. These findings demonstrate that increasing the frequency of visual feedback reduces motor error and differentially modulates the neural activation related to visuomotor processing in the cerebellum, parietal cortex, and premotor cortex.


2015 ◽  
Vol 28 (2) ◽  
pp. 241-249
Author(s):  
Fabiane Maria Klitzke dos Santos ◽  
Franciely Voltolini Mendes ◽  
Simone Suzuki Woellner ◽  
Noé Gomes Borges Júnior ◽  
Antonio Vinicius Soares

Introduction: Balance impairment affects the daily activities of patients with hemiparetic stroke. Techniques that use visual information to train balance appear to be effective. Objective: To analyze the effects of unstable balance board training and to compare two forms of visual feedback: biomechanical instrumentation and a mirror. Materials and methods: Eight patients with chronic hemiparetic stroke participated in the research and were randomized into two groups. The first group (G1) trained with biomechanical instrumentation, and the second group (G2) trained in front of a mirror. Sixteen training sessions were performed, with feet together and feet apart. The evaluation instruments, applied before and after the training period, were the Timed Up and Go Test (TUGT), the Berg Balance Scale (BBS), and the Instrumented Balance Board (IBB), which quantified functional mobility, balance, and postural control, respectively. Results: The TUGT showed significant results (p < 0.05) favoring G1. Although the BBS results were significant for G2, the intergroup comparison did not reach statistical significance. Both groups showed decreased IBB oscillation, which may indicate greater stability, but the change was not statistically significant (p > 0.05). A strong correlation was observed among all the applied tests. Conclusion: Although the two groups benefited in different ways, training produced benefits in both, with transfer to functional mobility.


Author(s):  
Liesbeth Vanormelingen ◽  
Sven De Maeyer ◽  
Steven Gillis

The present study examines the amount of input and output in congenitally hearing-impaired children with a cochlear implant (CI) and normally hearing (NH) children, and their normally hearing mothers. The aim of the study was threefold: (a) to investigate the input provided by the two groups of mothers, (b) to investigate the output of the two groups of children, and (c) to investigate the influence of the mothers’ input on child output and expressive vocabulary size. Mothers were less influenced by their children’s hearing status than the children themselves: children with CIs were more talkative and spoke more slowly. Mothers influenced their children on most parameters, but strikingly, it was not maternal talkativeness as such, but the number of maternal turns, that best predicted a child’s expressive vocabulary size.
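
The predictor comparison reported here (maternal turns vs. overall talkativeness) can be illustrated with a toy regression. The data below are synthetic and the analysis is deliberately simplified; it is not the authors' statistical model.

```python
import numpy as np

# Synthetic illustration: compare how well maternal turns vs. maternal
# word count predict child expressive vocabulary. All numbers are invented.
rng = np.random.default_rng(2)
n = 40
maternal_turns = rng.normal(200, 40, n)
maternal_words = 6 * maternal_turns + rng.normal(0, 150, n)  # correlated proxy
vocab = 1.5 * maternal_turns + rng.normal(0, 30, n)          # turns drive vocab

def r_squared(x, y):
    """Proportion of variance in y explained by a linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - np.var(resid) / np.var(y)

print(f"R^2, turns -> vocabulary: {r_squared(maternal_turns, vocab):.2f}")
print(f"R^2, words -> vocabulary: {r_squared(maternal_words, vocab):.2f}")
```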


2011 ◽  
Vol 105 (2) ◽  
pp. 846-859 ◽  
Author(s):  
Lore Thaler ◽  
Melvyn A. Goodale

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms, such as pointing or reaching, that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed but are based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, and copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed than in the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained by differences in uncertainty about the movement goal. We conclude that the role played by visual feedback in movement control is fundamentally different for target-directed and allocentric movements. The results therefore cast doubt on the idea that computational and neural models of sensorimotor control developed exclusively from data obtained in target-directed paradigms are also valid in the context of allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of human behavior, and they suggest that such models must be modified to accommodate performance in these tasks.
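
The statistical-optimality framework referred to above is commonly formalized as minimum-variance (inverse-variance-weighted) cue combination. The sketch below shows that standard computation; the numeric estimates and variances are illustrative assumptions.

```python
import numpy as np

# Minimum-variance cue combination: each independent sensory estimate of
# the same quantity is weighted inversely to its variance.
def combine(estimates, variances):
    """Return the optimal combined estimate and its variance."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = np.sum(weights * estimates)
    combined_var = 1.0 / np.sum(1.0 / variances)  # always <= min(variances)
    return combined, combined_var

# e.g., visual vs. proprioceptive estimates of hand position (cm); made-up values
est, var = combine([10.2, 9.5], [0.25, 1.0])
print(f"combined estimate: {est:.2f} cm, variance: {var:.2f} cm^2")
```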


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Yuki Harada ◽  
Junji Ohyama

The spatiotemporal characteristics of basic attention are important for understanding attending behaviours in real-life situations, and they are useful for evaluating the accessibility of visual information. However, although people are encircled by their 360-degree surroundings in real life, no study has addressed the general characteristics of attention to 360-degree surroundings. Here, we conducted an experiment using virtual reality technology to examine the spatiotemporal characteristics of attention in a highly controlled basic visual context consisting of a 360-degree surrounding. We measured response times and gaze patterns during a 360-degree search task and examined the spatial distribution of attention and its temporal variations in a 360-degree environment relative to the participants’ physical position. Data were collected from both younger and older adults to assess age-related differences. The results revealed the fundamental spatiotemporal characteristics of 360-degree attention, which can be used as basic criteria for analysing the structure of exogenous effects on attention in complex 360-degree surroundings in real-life situations. For practical purposes, we created spherical criteria maps of 360-degree attention, which are useful for estimating attending behaviours toward 360-degree environmental information or for evaluating visual information design in living environments, workspaces, and other real-life contexts.
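
A spherical attention map of the kind described can be approximated by binning gaze-direction vectors, expressed relative to the participant's body, into azimuth-elevation cells. The sketch below uses synthetic gaze vectors; the bin sizes are arbitrary choices, not the study's parameters.

```python
import numpy as np

# Build a coarse spherical attention map from gaze-direction unit vectors.
# The gaze data here are synthetic placeholders.
rng = np.random.default_rng(1)
gaze = rng.normal(size=(1000, 3))
gaze /= np.linalg.norm(gaze, axis=1, keepdims=True)  # normalize to unit vectors

azimuth = np.degrees(np.arctan2(gaze[:, 1], gaze[:, 0]))  # -180..180: full 360
elevation = np.degrees(np.arcsin(gaze[:, 2]))             # -90..90

attention_map, az_edges, el_edges = np.histogram2d(
    azimuth, elevation,
    bins=[np.arange(-180, 181, 30), np.arange(-90, 91, 30)],
)
print(attention_map.shape)  # (12, 6) azimuth-by-elevation cells
```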


2009 ◽  
Vol 48 (03) ◽  
pp. 254-262 ◽  
Author(s):  
Y. Shahar ◽  
M. Taieb-Maimon ◽  
D. Klimov

Objectives: To design, implement, and evaluate the functionality and usability of a methodology and a tool for interactive exploration of time and value associations among multiple-patient longitudinal data and among meaningful concepts derivable from these data. Methods: We developed a new, user-driven, interactive knowledge-based visualization technique, called Temporal Association Charts (TACs). TACs support the investigation of temporal and statistical associations within multiple patient records, among both concepts and the temporal abstractions derived from them. The TAC methodology was implemented as part of an interactive system, called VISITORS, which supports intelligent visualization and exploration of longitudinal patient data. The TAC module was evaluated for functionality and usability by a group of ten users: five clinicians and five medical informaticians. Users were asked to answer ten questions using the VISITORS system, five of which required the use of TACs. Results: Both types of users were able to answer the questions in reasonably short periods of time (a mean of 2.5 ± 0.27 minutes) and with high accuracy (95.3 ± 4.5 on a 0–100 scale), without a significant difference between the two groups. All five questions requiring the use of TACs were answered with similar response times and accuracy levels. Similar accuracy scores were achieved for questions requiring the use of TACs and for questions requiring only general exploration operators. However, response times when using TACs were slightly longer. Conclusions: TACs are functional and usable. Their use results in a uniform performance level, regardless of the type of clinical question or user group involved.


2013 ◽  
Vol 56 (2) ◽  
pp. 416-426 ◽  
Author(s):  
Fiona E. Kyle ◽  
Ruth Campbell ◽  
Tara Mohammed ◽  
Mike Coleman ◽  
Mairéad MacSweeney

Purpose: In this article, the authors describe the development of a new instrument, the Test of Child Speechreading (ToCS), which was specifically designed for use with deaf and hearing children. Speechreading is a skill that deaf children require in order to access the language of the hearing community. The ToCS is a deaf-friendly, computer-based test that measures child speechreading (silent lipreading) at 3 psycholinguistic levels: (a) Words, (b) Sentences, and (c) Short Stories. The aims of the study were to standardize the ToCS with deaf and hearing children and to investigate the effects of hearing status, age, and linguistic complexity on speechreading ability. Method: Eighty-six severely and profoundly deaf children and 91 hearing children participated. All children were between the ages of 5 and 14 years. The deaf children were from a range of language and communication backgrounds, and their preferred mode of communication varied. Results: Speechreading skills improved significantly with age for both groups of children. There was no effect of hearing status on speechreading ability, and children from both groups showed similar performance across all subtests of the ToCS. Conclusion: The ToCS is a valid and reliable assessment of speechreading ability in school-age children that can be used to measure individual differences in speechreading performance.

