Speed tuning to real-world and retinal motion in cortical motion regions

2018
Author(s):
Didem Korkmaz Hacialihafiz
Andreas Bartels

Abstract
Motion signals can arise in the retina for two reasons: from self-motion or from real motion in the environment. Prior studies on speed tuning have always measured joint responses to real and retinal motion, and for some of the more recently identified human motion processing regions, speed tuning has not been examined at all. We localized motion regions V3A, V6, V5/MT, MST and the cingulate sulcus visual area (CSv) in 20 human participants, and then measured their responses to motion velocities from 1 to 24 degrees per second. Importantly, we used a pursuit paradigm that allowed us to quantify responses to objective and retinal motion separately. To provide optimal stimulation, we used stimuli with natural image statistics derived from Fourier scrambles of natural images. The results show that all regions increased responses with higher speeds for both retinal and objective motion. V3A stood out in that it was the only region whose speed-response slope for objective motion was steeper than that for retinal motion. V6, V5/MT, MST and CSv did not differ in objective and retinal speed slopes, even though V5/MT and MST tended to respond more to objective motion at all speeds. These results reveal highly similar speed tuning functions for early and high-level motion regions, and support the view that human V3A primarily encodes objective rather than retinal motion signals.


2006
Vol 18 (2)
pp. 158-168
Author(s):
Jeannette A. M. Lorteije
J. Leon Kenemans
Tjeerd Jellema
Rob H. J. van der Lubbe
Frederiek de Heer
...  

Viewing static photographs of objects in motion evokes higher fMRI activation in the human medial temporal complex (MT+) than looking at similar photographs without this implied motion. As MT+ is traditionally thought to be involved in motion perception (and not in form perception), this finding suggests feedback from object-recognition areas onto MT+. To investigate this hypothesis, we recorded extracranial potentials evoked by the sight of photographs of biological agents with and without implied motion. The difference in potential between responses to pictures with and without implied motion was maximal between 260 and 400 msec after stimulus onset. Source analysis of this difference revealed one bilateral, symmetrical dipole pair in the occipital lobe. This area also showed a response to real motion, but approximately 100 msec earlier than the implied motion response. The longer latency of the implied motion response in comparison to the real motion response is consistent with a feedback projection onto MT+ following object recognition in higher-level temporal areas.



2018
Author(s):
Didem Korkmaz Hacialihafiz
Andreas Bartels

Abstract
Creating a stable perception of the world during pursuit eye movements is one of the everyday roles of the visual system. Some motion regions have been shown to differentiate motion in the external world from motion generated by eye movements. In most circumstances, perceptual stability is consistently related to content: the surrounding scene is typically stable. However, no prior study has examined to what extent motion responsive regions are modulated by scene content, or whether there is an interaction between content and motion response. In the present study we used a factorial design that has previously been shown to reveal regional involvement in integrating efference copies of eye movements with retinal motion to mediate perceptual stability and encode real-world motion. We then added scene content as a third factor, which allowed us to examine to what extent real-motion, retinal motion, and static responses were modulated by meaningful scenes versus their Fourier scrambled counterparts. We found that motion responses in the human motion responsive regions V3A, V6, V5+/MT+ and cingulate sulcus visual area (CSv) were all modulated by scene content. Depending on the region, these motion-content interactions differed according to whether motion was self-induced or not. V3A was the only motion responsive region that also responded to still scenes. Our results suggest that, contrary to the two-pathway hypothesis, scene responses are not confined to ventral regions but can also be found in dorsal areas.



2021
Vol 2 (3)
Author(s):
Gustaf Halvardsson
Johanna Peterson
César Soto-Valero
Benoit Baudry

Abstract
The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and high-level motion processing systems to provide accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model consists of a pre-trained InceptionV3 network together with the mini-batch gradient descent optimization algorithm. We rely on transfer learning during the pre-training of the model and its data. The final accuracy of the model, based on 8 study subjects and 9400 images, is 85%. Our results indicate that CNNs are a promising approach to interpreting sign languages, and that transfer learning can achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details of our model to interpret signs via a user-friendly web application.
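The mini-batch gradient descent optimization mentioned in this abstract can be illustrated in a few lines. The sketch below is not the authors' SSL/InceptionV3 pipeline; it applies the same update rule (gradient steps on shuffled mini-batches) to a toy linear least-squares problem, with all data and hyperparameters chosen here for demonstration only.

```python
import numpy as np

def minibatch_gd(X, y, lr=0.1, batch_size=4, epochs=200, seed=0):
    """Fit weights w of a linear model X @ w ~ y by mini-batch gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)          # reshuffle samples each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            # Gradient of the mean squared error over this mini-batch only
            grad = (2.0 / len(idx)) * Xb.T @ (Xb @ w - yb)
            w -= lr * grad
    return w

# Toy data: y = 2*x + 1 with no noise, so the optimum is exactly recoverable.
x = np.linspace(0.0, 1.0, 32)
X = np.column_stack([x, np.ones_like(x)])   # [slope feature, intercept feature]
y = 2.0 * x + 1.0
w = minibatch_gd(X, y)
```

In a deep-learning setting the same loop runs over image batches and the gradient is obtained by backpropagation through the network rather than in closed form.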



2013
Author(s):
Martin Lukac
Michitaka Kameyama
Kosuke Hiura


2021
Vol 11 (1)
Author(s):
Thomas Treal
Philip L. Jackson
Jean Jeuvrey
Nicolas Vignais
Aurore Meugnot

Abstract
Virtual reality platforms producing interactive and highly realistic characters are increasingly used as a research tool in social and affective neuroscience to better capture both the dynamics of emotion communication and the unintentional, automatic nature of emotional processes. While idle motion (i.e., non-communicative movements) is commonly used to create behavioural realism, its use to enhance the perception of emotion expressed by a virtual character is critically lacking. This study examined the influence of naturalistic (i.e., based on human motion capture) idle motion on two aspects of an empathic response towards pain expressed by a virtual character: the perception of the other's pain and the affective reaction to it. In two experiments, 32 and 34 healthy young adults were presented with video clips of a virtual character displaying a facial expression of pain while its body was either static (still condition) or animated with natural postural oscillations (idle condition). Participants in Experiment 1 rated the facial pain expression of the virtual human as more intense, and those in Experiment 2 reported being more touched by its pain expression, in the idle condition compared to the still condition, indicating a greater empathic response towards the virtual human's pain in the presence of natural postural oscillations. These findings are discussed in relation to models of empathy and biological motion processing. Future investigations will help determine to what extent such naturalistic idle motion could be a key ingredient in enhancing the anthropomorphism of a virtual human and making its emotions appear more genuine.



Author(s):
Kristin Krahl
Mark W. Scerbo

The present study examined team performance on an adaptive pursuit tracking task with human-human and human-computer teams. The participants were randomly assigned to one of three team conditions where their partner was either a computer novice, computer expert, or human. Participants began the experiment with control over either the horizontal or vertical axis, but had the option of taking control of their teammate's axis if they achieved superior performance on the previous trial. A control condition was also run where a single participant controlled both axes. Performance was assessed by RMSE scores over 100 trials. The results showed that performance along the horizontal axis improved over the session regardless of the experimental condition, but the degree of improvement was dependent upon group assignment. Individuals working alone or paired with an expert computer maintained a high level of performance throughout the experiment. Those paired with a computer-novice or another human performed poorly initially, but eventually reached the level of those in the other conditions. The results showed that team training can be as effective as individual training, but that the quality of training is moderated by the skill level of one's teammate. Moreover, these findings suggest that task partitioning of high performance skills between a human and a computer is not only possible but may be considered a viable option in the design of adaptive systems.



2010
Vol 50 (21)
pp. 2137-2141
Author(s):
Catherine Lynn
William Curran


2020
Vol 2020
pp. 1-13
Author(s):
Samina Rafique
M. Najam-ul-Islam
M. Shafique
A. Mahmood

Sit-to-stand (STS) motion is an indicator of an individual's physical independence and well-being. Determining the variables that contribute to the execution and control of STS motion is an active area of research. In this study, we evaluate the clinical hypothesis that, besides numerous other factors, the central nervous system (CNS) controls STS motion by tracking a prelearned head position trajectory. Motivated by the evidence for task-oriented encoding of motion by the CNS, we adopt a robotic approach to the synthesis of STS motion and propose this scheme as a test of the hypothesis. We propose an analytical biomechanical human CNS modeling framework in which the head position trajectory defines the high-level task control variable. Motion control is divided into low-level task generation and motor execution phases. We model the CNS as an STS controller whose Estimator subsystem plans joint trajectories to perform the low-level task. Motor execution is carried out by the Cartesian controller subsystem, which generates torque commands for the joints. We perform extensive motion and force capture experiments on human subjects to validate our analytical modeling scheme. We first scale our biomechanical model to match the anthropometry of the subjects. We then reconstruct the dynamic motion by controlling simulated custom human CNS models to follow the captured head position trajectories in real time, and we perform kinematic and kinetic analyses comparing experimental and simulated motions. For head position trajectories, root mean square (RMS) errors are 0.0118 m in the horizontal and 0.0315 m in the vertical direction. Errors in angle estimates are 0.55 rad, 0.93 rad, 0.59 rad, and 0.0442 rad for the ankle, knee, hip, and head orientation, respectively. The RMS error of the ground reaction force (GRF) is 50.26 N, and the correlation between ground reaction torque and the support moment is 0.72.
The low errors in our results validate (1) the reliability of the motion/force capture methods and the anthropometric technique for customization of human models, and (2) the high-level task control framework and human CNS modeling as a solution to the hypothesis. Accurate modeling and detailed understanding of human motion have significant scope in rehabilitation, humanoid robotics, and the motion planning of virtual characters based on high-level task control schemes.
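The validation metrics reported above (RMS trajectory error and the correlation between ground reaction torque and support moment) are standard formulas that can be stated compactly. The NumPy sketch below shows the generic computations, not the authors' analysis code; the signal values in the usage comments are made-up examples.

```python
import numpy as np

def rms_error(measured, simulated):
    """Root-mean-square error between two equally sampled signals,
    e.g. a captured vs. a simulated head position trajectory."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((measured - simulated) ** 2)))

def correlation(a, b):
    """Pearson correlation between two signals,
    e.g. ground reaction torque vs. the computed support moment."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Made-up example signals for illustration only:
captured  = [0.10, 0.12, 0.15, 0.18]
simulated = [0.11, 0.12, 0.14, 0.19]
err = rms_error(captured, simulated)   # small positive value
```

Applied per axis (horizontal/vertical head position) and per joint angle, `rms_error` yields figures directly comparable to those reported in the abstract.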




