The effects of different visual information and perception-action experiences on judgment of landing directions in table tennis

2011 ◽  
Author(s):  
Ming-Young Tang ◽  
Chih-Hui Chang

2012 ◽
Vol 21 (3) ◽  
pp. 281-294 ◽  
Author(s):  
Stephan Streuber ◽  
Betty J. Mohler ◽  
Heinrich H. Bülthoff ◽  
Stephan de la Rosa

Theories of social interaction (e.g., common coding theory) suggest that visual information about the interaction partner is critical for successful interpersonal action coordination. Seeing the interaction partner allows an observer to understand and predict that partner's behavior. However, it is unknown which of the many sources of visual information about an interaction partner (e.g., body, end effectors, and/or interaction objects) are used for action understanding and thus for the control of movements in response to observed actions. We used a novel immersive virtual environment to investigate this question. Specifically, we asked participants to perform table tennis strokes in response to table tennis balls stroked by a virtual table tennis player, and we tested the effect of the visibility of the ball, the paddle, and the body of the virtual player on task performance and movement kinematics. Task performance was measured as the minimum distance between the center of the paddle and the center of the ball (radial error). Movement kinematics was measured as the variability in paddle speed across repeatedly executed table tennis strokes (stroke speed variability). We found that radial error was reduced when the ball was visible rather than invisible. Seeing the body and/or the paddle of the virtual player, however, reduced radial error only when the ball was invisible. Seeing the ball had no influence on stroke speed variability, but stroke speed variability was reduced when either the body or the paddle of the virtual player was visible. Importantly, the differences in stroke speed variability were largest at the moment the virtual player hit the ball, suggesting that seeing the virtual player's body or paddle was important for preparing the stroke response. These results demonstrate for the first time that the online control of arm movements is coupled to visual information about an opponent's body.
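A minimal sketch of how the two outcome measures could be computed from recorded trial data (the array layout, sampling, and function names are our assumptions, not the authors' analysis code):

```python
import numpy as np

def radial_error(paddle_xyz: np.ndarray, ball_xyz: np.ndarray) -> float:
    """Minimum distance between paddle center and ball center over a trial.

    paddle_xyz, ball_xyz: (n_samples, 3) position traces on a common time base.
    """
    return float(np.min(np.linalg.norm(paddle_xyz - ball_xyz, axis=1)))

def stroke_speed_variability(paddle_speed: np.ndarray) -> np.ndarray:
    """Across-trial standard deviation of paddle speed at each time point
    of time-normalized, repeatedly executed strokes.

    paddle_speed: (n_trials, n_timepoints) speed profiles.
    """
    return paddle_speed.std(axis=0, ddof=1)
```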


2016 ◽  
Vol 41 (3) ◽  
pp. 244-248 ◽  
Author(s):  
Seoung Hoon Park ◽  
Seonjin Kim ◽  
MinHyuk Kwon ◽  
Evangelos A. Christou

Visual and auditory information are critical for perception and enhance an individual's ability to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of a stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of a stimulus based on visual and auditory information. We recruited 9 expert table-tennis players and used the table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information; their goal was to anticipate the direction of the service (left or right) and its rotational motion (topspin, sidespin, or cut). We recorded their responses and quantified two outcomes: (i) directional accuracy and (ii) rotational-motion accuracy, each computed as the number of accurate predictions relative to the total number of trials. The ability of participants to accurately predict the direction of the service increased with additional visual information but not with auditory information. In contrast, their ability to accurately predict the rotational motion of the service increased when auditory information was added to visual information, but not with additional visual information alone. In conclusion, these findings demonstrate that visual information enhances an individual's ability to accurately predict the direction of a stimulus, whereas additional auditory information enhances the ability to accurately predict its rotational motion.
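Response accuracy as defined here is simply the number of correct predictions divided by the total number of trials; a minimal sketch (labels and data are hypothetical):

```python
def response_accuracy(predicted: list[str], actual: list[str]) -> float:
    """Accurate predictions relative to the total number of trials."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Computed separately for each outcome, e.g.:
directional = response_accuracy(["left", "right", "left"],
                                ["left", "left", "left"])      # 2/3
rotational = response_accuracy(["topspin", "cut", "sidespin"],
                               ["topspin", "cut", "cut"])      # 2/3
```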


2020 ◽  
Vol 45 ◽  
pp. 62-76 ◽
Author(s):  
Katharina Petri ◽  
Timon Schmidt ◽  
Kerstin Witte

It is well known that visual information is essential for anticipation in table tennis, but it is not yet clear whether auditory cues are also used. We therefore performed two in-situ studies in which novices (study A) and advanced players (study B) responded to strokes of a real opponent or a ball machine by returning the ball with forehand counters (study A) or forehand topspins (study B) to a given target area on the table. We assessed the parameters "hit quality" and "subjective effort". In study A, we provided four conditions: normal; noise-cancelling headphones and earplugs to dampen auditory information; other noise-cancelling headphones and earplugs to remove almost all environmental sounds; and the same headphones with additional bright noise to remove all sounds. In study B, we performed three tests (irregular play and regular play with an opponent, and response to regular balls from a ball machine) under two conditions: normal, and noise-cancelling headphones with the additional bright noise. In both studies, no significant differences between conditions were found for "hit quality" or "subjective effort" (all p > 0.05). We conclude that auditory information, and its volume, has no influence on hit quality in table tennis for novices or advanced players.
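One way such a between-condition comparison could be run is a nonparametric repeated-measures test over players; a sketch with simulated data (the studies' actual statistical procedure may differ):

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# Hypothetical hit-quality scores: one row per player, one column per
# auditory condition (normal, dampened, near-silent, masked), as in study A.
scores = rng.normal(loc=3.0, scale=0.5, size=(12, 4))

stat, p = friedmanchisquare(*scores.T)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")  # p > 0.05: no effect
```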


2010 ◽  
Vol 23 (2) ◽  
pp. 89-151 ◽  
Author(s):  
Andrei Gorea ◽  
Pedro Cardoso-Leite

With its roots in Ungerleider and Mishkin's (1982) uncovering of two distinct anatomical pathways (ventral and dorsal) for the processing of visual information, and boosted by Goodale and Milner's (1992; Milner and Goodale, 1995) behavioral study of patients with lesions of either of these pathways, the perception-action dissociation became a standard reference in the sensorimotor literature. Here we briefly present the anatomical, neuropsychological and, more extensively, the psychophysical evidence favoring such a dissociation, and pit it against counteracting evidence as well as against potential methodological and conceptual pitfalls. We also discuss classes of models accounting for a number of 'dissociation' results and conclude that the most general and parsimonious one posits a single processing stream that accumulates information up to a decision criterion modulated by the stimulation conditions, the response mode (motor vs. verbal/perceptual), the task constraints (speeded vs. free-time responses), and the nature of the task (detection, discrimination, temporal order judgment, etc.). The reviewed evidence is not meant to refute or validate the hypothesis of a perceptual-motor dissociation. Rather, its main objective is to show that, beyond its self-evidence, such a dissociation is difficult if not impossible to test.
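The single-stream account amounts to one evidence accumulator whose decision criterion shifts with response mode and task constraints; a toy illustration (all parameter values are invented for the sketch):

```python
import numpy as np

def time_to_criterion(drift: float, criterion: float, noise: float = 1.0,
                      dt: float = 0.001, seed: int = 0) -> float:
    """Time at which noisy accumulated evidence first reaches the criterion."""
    rng = np.random.default_rng(seed)
    evidence, t = 0.0, 0.0
    while evidence < criterion:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# Same sensory stream, different criteria: a speeded motor response may read
# out the accumulator earlier than a free-time verbal/perceptual report.
rt_motor = time_to_criterion(drift=2.0, criterion=0.5)
rt_perceptual = time_to_criterion(drift=2.0, criterion=1.5)
```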


2005 ◽  
Vol 22 (1) ◽  
pp. 39-56 ◽  
Author(s):  
Paula F. Polastri ◽  
José A. Barela

This study examined the effects of experience and practice on the coupling between visual information and trunk sway in infants with Down syndrome (DS). Five experienced and five novice sitters were exposed to a moving room, which was oscillated at 0.2 and 0.5 Hz. Infants remained in a sitting position and data were collected on the first, fourth, and seventh days. On the first day, experienced sitters were more influenced by room oscillation than were novices. On the following days, however, the influence of room oscillation decreased for experienced but increased for novice sitters. These results suggest that the relationship between sensory information and motor action in infants with DS can be changed with experience and practice.
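Coupling between room motion and trunk sway in this paradigm is commonly summarized by the gain and phase of sway at the driving frequency; a sketch under assumed sampling (not the authors' analysis code):

```python
import numpy as np

def coupling_at(freq_hz: float, room: np.ndarray, sway: np.ndarray,
                fs: float) -> tuple[float, float]:
    """Gain and phase (degrees) of trunk sway relative to room motion
    at the room's oscillation frequency (e.g., 0.2 or 0.5 Hz).

    room, sway: equal-length displacement traces sampled at fs (Hz).
    """
    freqs = np.fft.rfftfreq(len(room), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - freq_hz)))  # bin nearest the driving freq
    r, s = np.fft.rfft(room)[k], np.fft.rfft(sway)[k]
    return abs(s) / abs(r), float(np.degrees(np.angle(s) - np.angle(r)))
```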


2022 ◽  
Vol 3 ◽  
Author(s):  
Chisa Aoyama ◽  
Ryoma Goya ◽  
Naofumi Suematsu ◽  
Koji Kadota ◽  
Yuji Yamamoto ◽  
...  

In a table tennis rally, players continuously perform interceptive actions on a moving ball within a short time, so the process of acquiring visual information is an important determinant of action performance. However, because it is technically difficult to measure gaze movement in a real game, little is known about how gaze behavior is conducted during continuous visuomotor actions and how it contributes to performance. To examine these points, we constructed a novel psychophysical experimental model enabling a continuous visuomotor task without spatial movement of any body part, including the arm and head, and simultaneously recorded the movements of the gaze and the effector at high spatiotemporal resolution. In the task, Gabor patches (targets) moved one after another at a constant speed from right to left at random vertical positions on an LC display. Participants hit each target with a cursor that moved vertically on the left side of the display, controlled by their prehensile force on a force sensor. Participants hit the target with the cursor using a rapid approaching movement (rapid cursor approach, RCA). Their gaze also showed a rapid saccadic approaching movement (saccadic eye approach, SEA), reaching the predicted arrival point of the target earlier than the cursor did. The RCA ended in or near the hit zone in successful (Hit) trials but away from it in unsuccessful (Miss) trials, suggesting that the spatial accuracy of the RCA determines the task's success. The SEA ended nearer the target in Hit trials than in Miss trials. The spatial accuracy of the RCA diminished when the target disappeared 100 ms after the end of the SEA, suggesting that visual information acquired after the saccade acted as feedback to correct the cursor movement online so that the cursor could reach the target. There was, however, a target speed condition in which target disappearance did not compromise the RCA's spatial accuracy, implying possible RCA correction based on post-saccadic gaze-location information. These experiments clarify that gaze behavior conducted during fast continuous visuomotor actions enables online correction of the ongoing interceptive movement of an effector, improving visuomotor performance.
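A minimal sketch of how trials could be scored from the recorded traces (the hit-zone size, array layout, and timing convention are our assumptions):

```python
import numpy as np

HIT_ZONE_HALF_WIDTH = 0.5  # hypothetical half-width, in display units

def score_trial(cursor_y: np.ndarray, target_y: float,
                arrival_idx: int) -> tuple[bool, float]:
    """Classify one trial as Hit/Miss and return the RCA endpoint error.

    cursor_y: vertical cursor trace; arrival_idx: sample at which the target
    reaches the cursor's horizontal position on the left of the display.
    """
    endpoint_error = abs(float(cursor_y[arrival_idx]) - target_y)
    return endpoint_error <= HIT_ZONE_HALF_WIDTH, endpoint_error
```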


2009 ◽  
Vol 06 (03) ◽  
pp. 387-434 ◽  
Author(s):  
JEANNETTE BOHG ◽  
CARL BARCK-HOLST ◽  
KAI HUEBNER ◽  
MARIA RALPH ◽  
BABAK RASOLZADEH ◽  
...  

A distinct property of robot vision systems is that they are embodied: visual information is extracted for the purpose of moving in and interacting with the environment. Thus, different types of perception-action cycles need to be implemented and evaluated. In this paper, we study the problem of designing a vision system for the purpose of object grasping in everyday environments. This vision system is targeted, first, at interaction with the world through the recognition and grasping of objects and, second, at serving as an interface between the reasoning and planning module and the real world. The latter provides the vision system with a specific task that drives it and defines a context, i.e., searching for or identifying a certain object and analyzing it for potential later manipulation. We deal with three cases: (i) known objects, (ii) objects similar to already known objects, and (iii) unknown objects. The perception-action cycle is connected to the reasoning system through the idea of affordances. All three cases are also related to the state of the art and the terminology of the neuroscientific area.
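The three cases can be read as a dispatch inside the perception-action cycle; a schematic sketch (class names, fields, and the threshold are invented, not the paper's implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: Optional[str]   # recognized object identity, if any
    similarity: float      # similarity to the closest known object, in [0, 1]

def plan_grasp(d: Detection, similarity_threshold: float = 0.8) -> str:
    """Choose a grasping strategy for (i) known, (ii) similar, and
    (iii) unknown objects."""
    if d.label is not None:                       # case (i): known object
        return f"use stored grasp hypothesis for '{d.label}'"
    if d.similarity >= similarity_threshold:      # case (ii): similar object
        return "transfer the grasp of the most similar known object"
    return "generate grasp candidates from the object's shape"  # case (iii)
```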


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication is well established, yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment, we first explored the processing of unimodally presented facial expressions. Auditory (prosodic and/or lexical-semantic) information was then presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, "meaning-related" processing. A direct relationship was found between P200 and N300 amplitude and the number of information channels present: the multimodal condition elicited the smallest P200 and N300 amplitudes, followed by larger amplitudes in the bimodal condition, with the largest amplitudes observed in the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception, as reflected in the P200 and N300 components, may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.
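Mean component amplitude per condition could be extracted along these lines (the time windows and array layout are assumptions, not the study's exact parameters):

```python
import numpy as np

WINDOWS_MS = {"P200": (150, 250), "N300": (250, 350)}  # assumed windows

def mean_amplitude(erp: np.ndarray, times_ms: np.ndarray, component: str) -> float:
    """Mean amplitude of a trial-averaged ERP within a component window.

    erp: (n_timepoints,) averaged waveform at the electrode of interest;
    times_ms: matching time axis in milliseconds relative to stimulus onset.
    """
    lo, hi = WINDOWS_MS[component]
    mask = (times_ms >= lo) & (times_ms <= hi)
    return float(erp[mask].mean())

# Compared across the unimodal, bimodal, and multimodal conditions.
```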

