Increased visual information gain improves bimanual force coordination

2015 ◽  
Vol 608 ◽  
pp. 23-27 ◽  
Author(s):  
Amitoj Bhullar ◽  
Nyeonju Kang ◽  
Jerelyne Idica ◽  
Evangelos A. Christou ◽  
James H. Cauraugh


2011 ◽  
Vol 111 (6) ◽  
pp. 1671-1680 ◽  
Author(s):  
Xiaogang Hu ◽  
Karl M. Newell

This study investigated the coordination and control strategies that the elderly adopt during a redundant finger force coordination task and how the amount of visual information regulates the coordination patterns. Three age groups (20–24, 65–69, and 75–79 yr) performed a bimanual asymmetric force task. Task asymmetry was manipulated by imposing different coefficients on the finger forces such that the weighted sum of the two index-finger forces equaled the total force. The amount of visual information was manipulated by changing the visual information gain of the total force output. Two hypotheses were tested: the reduced-adaptability hypothesis predicts that the elderly show a lesser degree of force asymmetry between hands than young adults in the asymmetric-coefficient conditions, whereas the compensatory hypothesis predicts that the elderly exhibit more asymmetric force coordination patterns with asymmetric coefficients. Under the compensatory hypothesis, two contrasting force-sharing strategies (i.e., a more efficient coordination strategy and a minimum-variance strategy) are expected. Task performance deteriorated (higher performance error and force variability) in the two elderly groups, but enhanced visual information improved task performance in all age groups. With low visual information gain, the elderly showed reduced adaptability (i.e., less asymmetric forces between hands) to the unequal weighting coefficients, supporting the reduced-adaptability hypothesis; under high visual gain, however, the elderly showed the same degree of adaptation as the young group. The findings are consistent with the notion that the age-related reorganization of force coordination and control patterns is mediated by visual information and, more generally, by the interactive influence of multiple categories of constraints.
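The task constraint described above is a simple weighted sum, which can be made concrete in a short sketch (the coefficient and gain values below are illustrative assumptions; the abstract does not report the actual values used):

```python
# Minimal sketch of the bimanual weighted-sum force task (illustrative
# coefficients and gains, not the study's actual values).
def total_force(f_left, f_right, w_left=0.3, w_right=0.7):
    """Weighted sum of the two index-finger forces; task asymmetry comes
    from unequal weighting coefficients (w_left != w_right)."""
    return w_left * f_left + w_right * f_right

def displayed_error(f_total, target, visual_gain=1.0):
    """Higher visual gain magnifies the on-screen deviation from the
    target, giving participants more information about small errors."""
    return visual_gain * (f_total - target)

# Example: matching a 20 N target with asymmetric sharing between hands.
err = displayed_error(total_force(10.0, 20.0), target=20.0, visual_gain=4.0)
print(f"on-screen error: {err:.1f}")
```

Raising `visual_gain` magnifies the displayed deviation from the target, which is the sense in which more visual information about the same force error is made available.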


Author(s):  
Osama Alfarraj ◽  
Amr Tolba

The computer vision (CV) paradigm is introduced to improve computational and processing efficiency through visual inputs. These visual inputs are processed using sophisticated techniques to improve the reliability of human–machine interactions (HMIs). Processing visual inputs requires multi-level data computation to achieve application-specific reliability. Therefore, in this paper, a two-level visual information processing (2LVIP) method is introduced to meet the reliability requirements of HMI applications. The 2LVIP method handles both structured and unstructured data through classification learning to extract the maximum gain from the inputs. The first level identifies gain-related features and optimizes them to improve information gain. In the second level, error is reduced through a regression process to stabilize precision to meet HMI application demands. The two levels are interoperable and fully connected to achieve better gain and precision through the reduction of information processing errors. The analysis results show that the proposed method achieves 9.42% higher information gain and a 6.51% smaller error under different classification instances compared with conventional methods.
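As a rough illustration of the two-level idea (not the paper's implementation), the following sketch ranks features by an information-gain criterion with a classification target at level 1, then fits a regressor on the retained features at level 2; the estimators, feature counts, and simulated data are all stand-in assumptions:

```python
# Hedged sketch of a two-level visual-information pipeline in the spirit
# of 2LVIP: level 1 ranks features by information gain via a
# classification criterion; level 2 fits a regressor on the retained
# features to reduce residual error.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))                 # simulated visual features
labels = (X[:, 0] + X[:, 3] > 0).astype(int)   # level-1 classification target
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.1, size=500)  # level-2 target

# Level 1: keep the features with the highest information gain.
gain = mutual_info_classif(X, labels, random_state=0)
keep = gain.argsort()[-8:]

# Level 2: regression on the selected features to stabilize precision.
model = GradientBoostingRegressor().fit(X[:, keep], y)
print("level-1 selected features:", np.sort(keep))
print("level-2 training R^2:", round(model.score(X[:, keep], y), 3))
```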


2014 ◽  
Vol 111 (7) ◽  
pp. 1519-1528 ◽  
Author(s):  
Qiushi Fu ◽  
Marco Santello

Humans adjust digit forces to compensate for trial-to-trial variability in digit placement during object manipulation, but the underlying control mechanisms remain to be determined. We hypothesized that such digit position/force coordination is achieved by both visually guided feedforward planning and haptic-based feedback control. A key open question concerns the time course of the interaction between these two mechanisms. We tested this in a virtual reality environment with haptic devices, using a task in which subjects generated torque (±70 N·mm) on a virtual object to steer a cursor to target positions and catch a falling ball. The width of the virtual object was varied between large (L) and small (S). These object widths result in significantly different horizontal relative digit positions and require different digit forces to exert the same task torque. After training, subjects were tested with random sequences of L and S widths, with or without visual information about object width. We found that visual cues allowed subjects to plan manipulation forces before contact. In contrast, when visual cues were not available to predict digit positions, subjects implemented a "default" digit force plan that was corrected after digit contact to eventually accomplish the task. The time course of digit forces revealed that force development was delayed in the absence of visual cues. Specifically, the appropriate digit force adjustments were made 250–300 ms after initial object contact. This result supports our hypothesis and further reveals that haptic feedback alone is sufficient to implement digit force-position coordination.
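The width manipulation can be understood with basic statics: for the same target torque, a narrower object requires a larger load-force difference between the digits. A minimal sketch, assuming forces act at ±width/2 from the object's center and using hypothetical widths (the study's actual geometry, with vertically varying digit positions, is richer than this):

```python
# Back-of-the-envelope statics for the torque task (simplified two-digit
# model; ignores the vertical digit-position variability in the study).
def load_force_difference(target_torque_nmm, grip_width_mm):
    """Load-force difference between the two digits needed to produce the
    target torque, assuming the forces act at +/- width/2 from the
    object's center: T = dF * (width / 2)."""
    return target_torque_nmm / (grip_width_mm / 2.0)

for label, width in [("large", 80.0), ("small", 40.0)]:  # hypothetical widths
    df = load_force_difference(70.0, width)
    print(f"{label} object: {df:.2f} N difference for +70 N*mm")
```

Halving the width doubles the required force difference, which is why the two widths demand different digit forces for the same task torque.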


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment, we first explored the processing of unimodally presented facial expressions. Auditory (prosodic and/or lexical-semantic) information was then presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. The former component has repeatedly been linked to the processing of physical stimulus properties, whereas the latter has been linked to more evaluative, "meaning-related" processing. P200 and N300 amplitudes decreased systematically as the number of information channels increased: the multimodal condition elicited the smallest amplitude in both components, followed by the bimodal condition, with the largest amplitude observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception, as reflected in the P200 and N300 components, may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.
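Component amplitudes in studies like this are commonly quantified as the mean voltage within a post-stimulus time window, averaged over trials per condition. A minimal sketch of that computation, with assumed window boundaries and sampling rate (the abstract does not give the paper's exact analysis parameters):

```python
# Sketch of how P200/N300 amplitudes are typically quantified: mean
# voltage in a post-stimulus window, averaged over trials. Window
# boundaries and sampling rate are assumptions, not the paper's values.
import numpy as np

fs = 500                             # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)     # epoch: -200 to 800 ms around onset

def mean_amplitude(epochs, t, lo, hi):
    """Mean voltage across trials within [lo, hi) seconds post-stimulus."""
    win = (t >= lo) & (t < hi)
    return epochs[:, win].mean()

epochs = np.random.default_rng(1).normal(size=(120, t.size))  # fake trials
p200 = mean_amplitude(epochs, t, 0.150, 0.250)   # assumed P200 window
n300 = mean_amplitude(epochs, t, 0.250, 0.350)   # assumed N300 window
print(f"P200: {p200:.3f} uV, N300: {n300:.3f} uV")
```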


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. nonmultitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance, but also decreased TV recognition. An inverted-U relationship between degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than their ability to recognize visual information.
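The 2 by 3 design and the inverted-U claim map naturally onto a factorial regression with a quadratic term for content level, where a curvature-by-multitasking interaction captures an inverted U that appears only while multitasking. A sketch on simulated data (cell sizes, means, and the specific modeling choice are assumptions, not the study's reported analysis):

```python
# Hedged sketch of the 2 (multitasking) x 3 (sexual-content level)
# analysis as a factorial OLS model. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 60  # participants per cell (assumed)
rows = []
for multitask in (0, 1):
    for level in (1, 2, 3):  # low, medium, high sexual content
        # Simulated inverted U under multitasking: recognition peaks
        # at the medium content level only when multitask == 1.
        mu = 70 - 10 * multitask - (5 * multitask) * (level - 2) ** 2
        for score in rng.normal(mu, 8, size=n):
            rows.append({"multitask": multitask, "level": level,
                         "recog": score})
df = pd.DataFrame(rows)

# The quadratic term tests the inverted-U shape; its interaction with
# multitasking tests whether the curvature appears only while multitasking.
model = smf.ols("recog ~ multitask * (level + I(level**2))", data=df).fit()
print(model.params.round(2))
```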

