Increased Right Posterior STS Recruitment Without Enhanced Directional-Tuning During Tactile Motion Processing in Early Deaf Individuals

2020 · Vol. 14
Author(s): Alexandra N. Scurry, Elizabeth Huber, Courtney Matera, Fang Jiang
2020 · Vol. 38 (5) · pp. 395-405
Author(s): Luca Battaglini, Federica Mena, Clara Casco

Background: To study motion perception, a stimulus consisting of a field of small moving dots is often used. Typically, some of the dots move coherently in the same direction (signal) while the rest move randomly (noise). A percept of global coherent motion (CM) arises when many different local motion signals are combined. CM computation is a complex process that requires the integrity of the middle temporal area (MT/V5), and there is evidence that increasing the number of dots presented in the stimulus makes this computation more efficient. Objective: In this study, we explored whether anodal transcranial direct current stimulation (tDCS) over MT/V5 would improve individual performance in a CM task at a low signal-to-noise ratio (SNR, i.e. a low percentage of coherent dots) for targets with a large number of moving dots (high dot numerosity, e.g. >250 dots) relative to low dot numerosity (<60 dots), which would indicate that tDCS favours the integration of local motion signals into a single global percept (global motion). Method: Participants performed a CM detection task (two-interval forced choice, 2IFC) while receiving anodal, cathodal, or sham stimulation on three different days. Results: Cathodal tDCS had no effect relative to the sham condition. Anodal tDCS, in contrast, improved performance, but mostly when dot numerosity was high (>400 dots), promoting efficient global motion processing. Conclusions: The present study suggests that tDCS may be used under appropriate stimulus conditions (low SNR and high dot numerosity) to boost the efficiency of global motion processing, and may be useful for strengthening clinical protocols that treat visual deficits.
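The signal/noise dot stimulus described above (a random-dot kinematogram) can be sketched as follows. This is a minimal illustration, not the authors' stimulus code; the function name, parameters, and the choice of rightward (0 rad) as the signal direction are assumptions for the example.

```python
import numpy as np

def make_rdk_frame(n_dots, coherence, rng=None):
    """One frame of motion directions for a random-dot kinematogram:
    a `coherence` fraction of dots share the signal direction (here
    0 rad, i.e. rightward); the remaining dots move in random
    directions (noise)."""
    rng = np.random.default_rng(rng)
    n_signal = int(round(coherence * n_dots))
    directions = rng.uniform(0.0, 2.0 * np.pi, size=n_dots)  # noise dots
    directions[:n_signal] = 0.0                              # signal dots
    return directions

# Low SNR, high dot numerosity -- the key condition in the study:
dirs = make_rdk_frame(n_dots=400, coherence=0.10, rng=0)
n_coherent = int(np.sum(dirs == 0.0))  # count of signal dots
```

Varying `coherence` manipulates the SNR, while `n_dots` manipulates dot numerosity, so the two factors of the experiment can be crossed independently.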


2021 · Vol. 2 (3)
Author(s): Gustaf Halvardsson, Johanna Peterson, César Soto-Valero, Benoit Baudry

Abstract
The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and high-level motion processing systems to provide accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model is built on a pre-trained InceptionV3 network and trained with the mini-batch gradient descent optimization algorithm. We rely on transfer learning during the pre-training of the model and its data. The final accuracy of the model, based on 8 study subjects and 9,400 images, is 85%. Our results indicate that CNNs are a promising approach to interpreting sign languages, and that transfer learning can be used to achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details of our model as a user-friendly web application for interpreting signs.
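The mini-batch gradient descent optimizer named in the abstract can be sketched on a toy least-squares problem. This is a hedged illustration of the optimization scheme only, not the authors' CNN training code; the function name, learning rate, and batch size are assumptions for the example.

```python
import numpy as np

def minibatch_gd(X, y, lr=0.1, batch_size=32, epochs=200, rng=None):
    """Mini-batch gradient descent on a least-squares objective.
    Each epoch shuffles the data, splits it into batches, and takes
    one gradient step per batch -- the same update scheme used (with
    a CNN loss instead) when training networks like InceptionV3."""
    rng = np.random.default_rng(rng)
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        order = rng.permutation(n)            # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            # Gradient of 0.5 * ||X w - y||^2 averaged over the batch:
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w

# Recover known weights from noise-free synthetic data:
data_rng = np.random.default_rng(0)
X = data_rng.normal(size=(256, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = minibatch_gd(X, y, rng=1)
```

Batching trades gradient accuracy for update frequency: smaller batches give noisier but more frequent steps, which is why mini-batch variants dominate large-scale network training.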


2021 · Vol. 11 (1)
Author(s): Thomas Treal, Philip L. Jackson, Jean Jeuvrey, Nicolas Vignais, Aurore Meugnot

Abstract
Virtual reality platforms producing interactive and highly realistic characters are increasingly used as a research tool in social and affective neuroscience, to better capture both the dynamics of emotion communication and the unintentional, automatic nature of emotional processes. While idle motion (i.e., non-communicative movements) is commonly used to create behavioural realism, its use to enhance the perception of emotion expressed by a virtual character is critically understudied. This study examined the influence of naturalistic idle motion (i.e., based on human motion capture) on two aspects of an empathic response towards pain expressed by a virtual character: the perception of the other's pain and the affective reaction to it. In two experiments, 32 and 34 healthy young adults, respectively, were presented with video clips of a virtual character displaying a facial expression of pain while its body was either static (still condition) or animated with natural postural oscillations (idle condition). Participants in Experiment 1 rated the facial pain expression of the virtual human as more intense, and those in Experiment 2 reported being more touched by its pain expression, in the idle condition compared to the still condition, indicating a greater empathic response towards the virtual human's pain in the presence of natural postural oscillations. These findings are discussed in relation to models of empathy and biological motion processing. Future investigations will help determine to what extent such naturalistic idle motion could be a key ingredient in enhancing the anthropomorphism of a virtual human and making its emotions appear more genuine.


2017 · Vol. 57 · pp. 162-169
Author(s): Stefanie C. Biehl, Melanie Andersen, Gordon D. Waiter, Karin S. Pilz
