Multimodal perception in table tennis: the effect of auditory and visual information on anticipation and planning of action

Author(s):  
Timo Klein-Soetebier ◽  
Benjamin Noël ◽  
Stefanie Klatt

2021 ◽  
pp. 002383092199872
Author(s):  
Solène Inceoglu

The present study investigated native (L1) and non-native (L2) speakers’ perception of the French vowels /ɔ̃, ɑ̃, ɛ̃, o/. Thirty-four American-English learners of French and 33 native speakers of Parisian French were asked to identify 60 monosyllabic words produced by a native speaker in three modalities of presentation: auditory-only (A-only), audiovisual (AV), and visual-only (V-only). The L2 participants also completed a vocabulary knowledge test of the words presented in the perception experiment, which aimed to explore whether subjective word familiarity affected speech perception. Results showed that overall performance was better in the AV and A-only conditions for both groups, with the pattern of confusions differing across modalities. The lack of an audiovisual benefit was not due to the vowel contrasts lacking visual salience, as shown by the native group’s performance in the V-only modality, but to the L2 group’s weaker sensitivity to visual information. Additionally, a significant relationship was found between subjective word familiarity and AV and A-only (but not V-only) perception of non-native contrasts.
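Computationally, the modality comparison in this study reduces to per-modality identification accuracy plus a confusion matrix over the four vowels. A minimal sketch of that tabulation, assuming one response per stimulus word (the trial data below are hypothetical, not the study’s):

```python
import numpy as np

VOWELS = ["ɔ̃", "ɑ̃", "ɛ̃", "o"]

def confusion_matrix(stimuli, responses):
    """Rows: presented vowel; columns: vowel the listener identified."""
    idx = {v: i for i, v in enumerate(VOWELS)}
    cm = np.zeros((len(VOWELS), len(VOWELS)), dtype=int)
    for s, r in zip(stimuli, responses):
        cm[idx[s], idx[r]] += 1
    return cm

# Hypothetical responses from one listener in one modality (e.g., V-only).
stimuli = ["ɔ̃", "ɑ̃", "ɛ̃", "o", "ɔ̃", "ɑ̃"]
responses = ["ɔ̃", "ɔ̃", "ɛ̃", "o", "o", "ɑ̃"]
cm = confusion_matrix(stimuli, responses)
print(cm)
print("accuracy:", np.trace(cm) / cm.sum())  # diagonal = correct identifications
```

Running the same tabulation separately for the A-only, AV, and V-only blocks is what makes the patterns of confusion comparable across modalities.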



2012 ◽  
Vol 21 (3) ◽  
pp. 281-294 ◽  
Author(s):  
Stephan Streuber ◽  
Betty J. Mohler ◽  
Heinrich H. Bülthoff ◽  
Stephan de la Rosa

Theories of social interaction (i.e., common coding theory) suggest that visual information about the interaction partner is critical for successful interpersonal action coordination. Seeing the interaction partner allows an observer to understand and predict the interaction partner’s behavior. However, it is unknown which of the many sources of visual information about an interaction partner (e.g., body, end effectors, and/or interaction objects) are used for action understanding and thus for the control of movements in response to observed actions. We used a novel immersive virtual environment to investigate this question. Specifically, we asked participants to perform table tennis strokes in response to table tennis balls stroked by a virtual table tennis player. We tested the effect of the visibility of the ball, the paddle, and the body of the virtual player on task performance and movement kinematics. Task performance was measured as the minimum distance between the center of the paddle and the center of the ball (radial error). Movement kinematics was measured as the variability in paddle speed across repeatedly executed table tennis strokes (stroke speed variability). We found that radial error was reduced when the ball was visible compared to invisible. However, seeing the body and/or the paddle of the virtual player reduced radial error only when the ball was invisible. Seeing the ball had no influence on stroke speed variability. However, we found that stroke speed variability was reduced when either the body or the paddle of the virtual player was visible. Importantly, the differences in stroke speed variability were largest at the moment the virtual player hit the ball. This suggests that seeing the virtual player’s body or paddle was important for preparing the stroke response. These results demonstrate for the first time that the online control of arm movements is coupled with visual body information about an opponent.
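The two dependent measures are defined precisely enough in the abstract to express directly. A minimal sketch, assuming time-aligned 3D position traces for paddle and ball and time-normalized speed profiles for the repeated strokes (function names and array shapes are illustrative, not from the paper):

```python
import numpy as np

def radial_error(paddle_xyz, ball_xyz):
    """Minimum Euclidean distance between paddle center and ball center
    over one trial; both inputs are (n_samples, 3) position traces."""
    return float(np.min(np.linalg.norm(paddle_xyz - ball_xyz, axis=1)))

def stroke_speed_variability(stroke_speeds):
    """Across-trial standard deviation of paddle speed at each time
    sample; input is (n_strokes, n_samples) of time-normalized speeds."""
    return stroke_speeds.std(axis=0, ddof=1)

# Toy demonstration with synthetic traces.
rng = np.random.default_rng(0)
print(radial_error(rng.normal(size=(200, 3)), rng.normal(size=(200, 3))))
speeds = rng.normal(loc=3.0, scale=0.2, size=(30, 100))  # 30 repeated strokes
print(stroke_speed_variability(speeds).max())  # e.g., peak variability near contact
```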



2016 ◽  
Vol 41 (3) ◽  
pp. 244-248 ◽  
Author(s):  
Seoung Hoon Park ◽  
Seonjin Kim ◽  
MinHyuk Kwon ◽  
Evangelos A. Christou

Visual and auditory information are critical for perception and enhance an individual’s ability to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of a stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of a stimulus based on visual and auditory information. We recruited 9 expert table-tennis players and used the table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and its rotational motion (topspin, sidespin, or cut). We recorded their responses and quantified two outcomes: (i) directional accuracy and (ii) rotational motion accuracy. Response accuracy was computed as the number of accurate predictions relative to the total number of trials. The ability of the participants to accurately predict the direction of the service increased with additional visual information but not with auditory information. In contrast, the ability of the participants to accurately predict the rotational motion of the service increased with the addition of auditory information to visual information, but not with additional visual information alone. In conclusion, these findings demonstrate that visual information enhances an individual’s ability to accurately predict the direction of a stimulus, whereas additional auditory information enhances the ability to accurately predict its rotational motion.
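The accuracy measure described here is a simple proportion, and a worked example makes the two outcomes concrete (the trial data are hypothetical):

```python
def response_accuracy(predicted, actual):
    """Accurate predictions relative to the total number of trials."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical trials: direction (left/right) and spin (topspin/sidespin/cut).
dir_pred  = ["left", "right", "right", "left"]
dir_true  = ["left", "right", "left",  "left"]
spin_pred = ["topspin", "cut", "sidespin", "topspin"]
spin_true = ["topspin", "cut", "cut",      "topspin"]

print(response_accuracy(dir_pred, dir_true))    # directional accuracy: 0.75
print(response_accuracy(spin_pred, spin_true))  # rotational motion accuracy: 0.75
```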



2020 ◽  
Vol 45 ◽  
pp. 62-76
Author(s):  
Katharina Petri ◽  
Timon Schmidt ◽  
Kerstin Witte

It is well known that visual information is essential for anticipation in table tennis, but it is not clear whether auditory cues are also used. Therefore, we performed two in-situ studies in which novices (study A) and advanced players (study B) responded to strokes from a real opponent or a ball machine by returning forehand counters (study A) or forehand topspins (study B) to a given target area on the table. We assessed the parameters “hit quality” and “subjective effort”. In study A, we used four conditions: normal; noise-cancelling headphones and earplugs to dampen auditory information; other noise-cancelling headphones and earplugs to remove almost all environmental sounds; and the same headphones with additional bright noise to mask all remaining sounds. In study B, we performed three tests (irregular play and regular play against an opponent, and responses to regular balls from a ball machine) under two conditions: normal, and noise-cancelling headphones with the additional bright noise. In both studies, no significant differences between conditions were found for “hit quality” or “subjective effort” (all p > 0.05). We conclude that auditory information, as well as its volume, has no influence on hit quality in table tennis for novices or advanced players.
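The abstract reports only that no condition differed significantly; it does not name the statistical test. As one plausible way to run such a within-subject comparison of hit quality across study A’s four auditory conditions, here is a sketch using a Friedman test (my assumption, not necessarily the authors’ analysis; the scores are synthetic):

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical hit-quality scores: one row per player, one column per
# condition (normal, dampened, near-silent, masked with added noise).
rng = np.random.default_rng(1)
scores = rng.normal(loc=5.0, scale=1.0, size=(12, 4))

stat, p = friedmanchisquare(*scores.T)  # one 1-D sample per condition
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
# p > 0.05 across conditions would mirror the reported null result.
```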



2012 ◽  
Vol 25 (0) ◽  
pp. 112 ◽  
Author(s):  
Lukasz Piwek ◽  
Karin Petrini ◽  
Frank E. Pollick

Multimodal perception of emotions has typically been examined using displays of a solitary character (e.g., the face–voice and/or body–sound of one actor). We extend this investigation to more complex, dyadic point-light displays combined with speech. A motion and voice capture system was used to record twenty actors interacting in couples with happy, angry, and neutral emotional expressions. The obtained stimuli were validated in a pilot study and used in the present study to investigate multimodal perception of emotional social interactions. Participants were required to categorize happy and angry expressions displayed visually, auditorily, or through emotionally congruent and incongruent bimodal displays. In a series of cross-validation experiments, we found that sound dominated the visual signal in the perception of emotional social interactions. Although participants’ judgments were faster in the bimodal condition, the accuracy of judgments was similar for the bimodal and auditory-only conditions. When participants watched emotionally mismatched bimodal displays, they predominantly oriented their judgments towards the auditory rather than the visual signal. This auditory dominance persisted even when the reliability of the auditory signal was decreased with noise, although visual information had some effect on judgments of emotions when it was combined with a noisy auditory signal. Our results suggest that when judging emotions from an observed social interaction, we rely primarily on vocal cues from the conversation rather than on visual cues from the actors’ body movements.
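The reported auditory dominance on mismatched displays can be quantified as the proportion of incongruent bimodal trials resolved in favor of the voice. A minimal sketch of such an index (illustrative, not the authors’ actual analysis; trial data hypothetical):

```python
def auditory_dominance(trials):
    """Proportion of incongruent bimodal trials on which the judged emotion
    matched the auditory signal rather than the visual one.
    Each trial is a (visual_emotion, auditory_emotion, judgment) tuple."""
    incongruent = [t for t in trials if t[0] != t[1]]
    matches_audio = sum(t[2] == t[1] for t in incongruent)
    return matches_audio / len(incongruent)

# Hypothetical mismatched displays: e.g., happy body motion with an angry voice.
trials = [
    ("happy", "angry", "angry"),
    ("angry", "happy", "happy"),
    ("happy", "angry", "happy"),
    ("angry", "happy", "happy"),
]
print(auditory_dominance(trials))  # 0.75 -> judgments mostly follow the voice
```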



2022 ◽  
Vol 3 ◽  
Author(s):  
Chisa Aoyama ◽  
Ryoma Goya ◽  
Naofumi Suematsu ◽  
Koji Kadota ◽  
Yuji Yamamoto ◽  
...  

In a table tennis rally, players perform interceptive actions on a continuously moving ball within a short time, so the acquisition of visual information is an important determinant of action performance. However, because it is technically difficult to measure gaze movement in a real game, little is known about how gaze behavior is organized during continuous visuomotor actions and how it contributes to performance. To examine these points, we constructed a novel psychophysical experimental model enabling a continuous visuomotor task without spatial movement of any body part, including the arm and head, and recorded the movements of the gaze and the effector simultaneously at high spatiotemporal resolution. In the task, Gabor patches (targets) moved one after another at a constant speed from right to left at random vertical positions on an LCD monitor. Participants hit each target with a cursor moving vertically on the left side of the display by controlling their prehensile force on a force sensor. Participants hit the target with the cursor using a rapid approaching movement (rapid cursor approach, RCA). Their gaze also showed a rapid saccadic approaching movement (saccadic eye approach, SEA), reaching the predicted arrival point of the target earlier than the cursor. The RCA ended in or near the hit zone in successful (Hit) trials but away from it in unsuccessful (Miss) trials, suggesting that the spatial accuracy of the RCA determines the task’s success. The SEA ended nearer the target in Hit trials than in Miss trials. The spatial accuracy of the RCA diminished when the target disappeared 100 ms after the end of the SEA, suggesting that visual information acquired after the saccade acted as feedback to correct the cursor movement online so that the cursor could reach the target. Under one target-speed condition, target disappearance did not compromise the RCA’s spatial accuracy, implying that the RCA may also be corrected on the basis of post-saccadic gaze-location information. These experiments clarified that gaze behavior during fast continuous visuomotor actions enables online correction of the ongoing interceptive movement of an effector, thereby improving visuomotor performance.
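The abstract does not give the transfer function from grip force to cursor position, so the mapping below is purely an assumed linear one, included only to make the task’s control scheme and hit criterion concrete (all parameter values are hypothetical):

```python
import numpy as np

def force_to_cursor_y(force_n, f_min=0.5, f_max=10.0, screen_h=1.0):
    """Map prehensile force (N) linearly onto vertical cursor position.
    The linear form and the force range are assumptions, not the paper's."""
    frac = np.clip((force_n - f_min) / (f_max - f_min), 0.0, 1.0)
    return frac * screen_h

def is_hit(cursor_y, target_y, hit_half_width=0.05):
    """A trial counts as a Hit if the cursor lies within the hit zone around
    the target's vertical position when the target crosses the cursor column."""
    return abs(cursor_y - target_y) <= hit_half_width

print(force_to_cursor_y(5.25))   # mid-range force -> mid-screen cursor
print(is_hit(0.50, 0.52))        # True: within the assumed hit zone
```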



2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment, we first explored the processing of unimodally presented facial expressions. Furthermore, auditory (prosodic and/or lexical-semantic) information was presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A direct relationship between P200 and N300 amplitudes and the number of information channels present was found. The multimodal condition elicited the smallest P200 and N300 amplitudes, followed by larger amplitudes in each component for the bimodal condition; the largest amplitudes were observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception, as reflected in the P200 and N300 components, may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.
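Component amplitudes of the kind compared here are conventionally computed as the mean voltage within a latency window, separately per condition. A minimal sketch assuming a single-channel ERP trace; the P200/N300 windows below are typical textbook values, not taken from the paper:

```python
import numpy as np

def mean_amplitude(erp_uv, times_ms, window_ms):
    """Mean amplitude (µV) of an ERP trace within a latency window."""
    lo, hi = window_ms
    mask = (times_ms >= lo) & (times_ms <= hi)
    return float(erp_uv[mask].mean())

# Synthetic single-channel ERP sampled every 2 ms (illustrative only).
times = np.arange(-100, 800, 2)
erp = np.random.default_rng(2).normal(size=times.size)

p200 = mean_amplitude(erp, times, (150, 250))  # assumed P200 window
n300 = mean_amplitude(erp, times, (250, 350))  # assumed N300 window
print(p200, n300)
```

Computing these per condition (unimodal, bimodal, multimodal) is what allows the amplitude-by-channel-count relationship to be tested.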



Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. nonmultitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance, but also decreased TV recognition. An inverted-U relationship between degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than their ability to recognize visual information.
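An inverted-U relationship implies a negative quadratic coefficient when recognition is regressed on content level. A toy shape check, not the authors’ inferential test (the cell means are made up):

```python
import numpy as np

# Hypothetical mean recognition scores at low/medium/high sexual content,
# multitasking group only (values illustrative).
content = np.array([1.0, 2.0, 3.0])
recognition = np.array([0.62, 0.74, 0.58])

# Fit recognition ~ content + content^2; polyfit returns the highest-degree
# coefficient first. A negative quadratic term indicates an inverted U.
quad, lin, intercept = np.polyfit(content, recognition, deg=2)
print(f"quadratic coefficient = {quad:.3f}")  # negative -> inverted U
```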


