When two worlds collide: the influence of an obstacle in peripersonal space on multisensory encoding

Author(s):  
Rudmer Menger ◽  
Alyanne M. De Haan ◽  
Stefan Van der Stigchel ◽  
H. Chris Dijkerman

Multisensory coding of the space surrounding our body, the peripersonal space, is crucial for motor control. Recently, it has been proposed that an important function of multisensory coding is that it allows anticipation of the tactile consequences of contact with a nearby object. Indeed, performing goal-directed actions (i.e. pointing and grasping) induces a continuous visuotactile remapping as a function of on-line sensorimotor requirements. Here, we investigated whether visuotactile remapping can be induced by obstacles, e.g. objects that are not the target of the grasping movement. In the current experiment, we used a cross-modal obstacle avoidance paradigm, in which participants reached past an obstacle to grasp a second object. Participants indicated the location of tactile targets delivered to the hand during the grasping movement, while a visual cue was sometimes presented simultaneously on the to-be-avoided object. The tactile and visual stimulation was triggered when the reaching hand passed a position that was drawn randomly from a continuous set of predetermined locations (between 0 and 200 mm depth at 5 mm intervals). We observed differences in visuotactile interaction during obstacle avoidance dependent on the location of the stimulation trigger: visual interference was enhanced for tactile stimulation that occurred when the hand was near the to-be-avoided object. We show that to-be-avoided obstacles, which are relevant for action but are not to-be-interacted with (as the terminus of an action), automatically evoke the tactile consequences of interaction. This shows that visuotactile remapping extends to obstacle avoidance and that this process is flexible.

Author(s):  
Yiqi Gao ◽  
Theresa Lin ◽  
Francesco Borrelli ◽  
Eric Tseng ◽  
Davor Hrovat

Two frameworks based on Model Predictive Control (MPC) for obstacle avoidance with autonomous vehicles are presented. A given trajectory represents the driver's intent. The MPC must safely avoid obstacles on the road while trying to track the desired trajectory by controlling the front steering angle and differential braking. We present two approaches to this problem. The first solves a single nonlinear MPC problem. The second uses a hierarchical scheme: at the high level, a trajectory is computed on-line, in a receding-horizon fashion, from a simplified point-mass vehicle model in order to avoid an obstacle; at the low level, an MPC controller computes the vehicle inputs that best follow the high-level trajectory based on a nonlinear vehicle model. This article presents the design and comparison of both approaches, the method for implementing them, and successful experimental results on icy roads.
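The high-level point-mass planner described in this abstract can be illustrated with a small receding-horizon optimization. The sketch below is not the authors' implementation: the horizon length, cost weights, soft obstacle constraint, and geometry are all illustrative assumptions. It plans a lateral avoidance maneuver for a point mass traveling at constant speed past a single obstacle while tracking the reference lane y = 0.

```python
import numpy as np
from scipy.optimize import minimize

def plan_avoidance(n=20, dt=0.1, v=10.0, obs_s=10.0, obs_half_width=1.0):
    """Receding-horizon point-mass planner (illustrative sketch).

    State: lateral position y and lateral velocity vy; input: lateral
    acceleration a. The vehicle travels at constant speed v; the obstacle
    blocks |y| < obs_half_width near longitudinal position obs_s (meters).
    Reference trajectory is y = 0.
    """
    s = v * dt * np.arange(1, n + 1)  # longitudinal positions over the horizon

    def rollout(a):
        # Integrate the double-integrator lateral dynamics.
        y, vy, ys = 0.0, 0.0, []
        for ak in a:
            vy += ak * dt
            y += vy * dt
            ys.append(y)
        return np.array(ys)

    def cost(a):
        ys = rollout(a)
        track = np.sum(ys**2) + 0.1 * np.sum(a**2)  # tracking + control effort
        near = np.abs(s - obs_s) < 2.0              # samples near the obstacle
        # Soft constraint: demand clearance (half width + 0.5 m margin) there.
        viol = np.maximum(0.0, (obs_half_width + 0.5) - ys[near])
        return track + 1e3 * np.sum(viol**2)

    res = minimize(cost, np.zeros(n), method="L-BFGS-B",
                   bounds=[(-5.0, 5.0)] * n)
    return s, rollout(res.x)
```

In the paper's hierarchical scheme this optimization would be re-solved at each sampling instant with the current state, and a separate low-level MPC based on a nonlinear vehicle model would realize the resulting trajectory with steering and differential braking.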


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Thomas C Harmon ◽  
Uri Magaram ◽  
David L McLean ◽  
Indira M Raman

To study cerebellar activity during learning, we made whole-cell recordings from larval zebrafish Purkinje cells while monitoring fictive swimming during associative conditioning. Fish learned to swim in response to visual stimulation preceding tactile stimulation of the tail. Learning was abolished by cerebellar ablation. All Purkinje cells showed task-related activity. Based on how many complex spikes emerged during learned swimming, they were classified as multiple, single, or zero complex spike (MCS, SCS, ZCS) cells. With learning, MCS and ZCS cells developed increased climbing fiber (MCS) or parallel fiber (ZCS) input during visual stimulation; SCS cells fired complex spikes associated with learned swimming episodes. The categories correlated with location. Optogenetically suppressing simple spikes only during visual stimulation demonstrated that simple spikes are required for acquisition and early stages of expression of learned responses, but not their maintenance, consistent with a transient, instructive role for simple spikes during cerebellar learning in larval zebrafish.


2012 ◽  
Vol 21 (1) ◽  
Author(s):  
Rogelio de J. Portillo-Vélez ◽  
Carlos A. Cruz-Villar ◽  
Alejandro Rodríguez-Ángeles

Author(s):  
Alessandro Scano ◽  
Marco Caimmi ◽  
Andrea Chiavenna ◽  
Matteo Malosio ◽  
Lorenzo Molinari Tosatti

Stroke is one of the main causes of disability in Western countries. Damaged brain areas are no longer able to provide the fine-tuned muscular control typical of the human upper limb, resulting in many symptoms that consistently affect patients' daily-life activities. Neurological rehabilitation is a multifactorial process that aims at partially restoring the functional properties of the impaired limbs by taking advantage of neuroplasticity, i.e. the capability of re-aggregating neural networks in order to repair and substitute the damaged neural circuits. Recently, many virtual reality-based, robotic and exoskeleton approaches have been developed to exploit neuroplasticity and complement conventional therapies in the clinic. The effectiveness of such methods is only partly demonstrated. Patients' performances and clinical courses are assessed via a variety of complex and expensive sensors and time-consuming techniques: motion capture systems, EMG, EEG, MRI, interaction forces with the devices, and clinical scales. Evidence shows that benefits are proportional to treatment duration and intensity, yet clinics can provide intensive assistance for only a limited amount of time. Thus, in order to preserve the benefits and extend them over time, the rehabilitative process should be continued at home. Simplicity, ease of use, affordability, reliability and the capability of storing logs of the rehabilitative sessions are the most important requirements for devices that enable and facilitate domestic rehabilitation. Tracking systems are the primary sources of information for assessing patients' motor performances. While expensive and sophisticated techniques can investigate neuroplasticity, neural activation (fMRI) and muscle stimulation patterns (EMG), kinematic assessment is fundamental for basic but essential quantitative evaluations such as range of motion, motor control quality and measurements of motion abilities.
Microsoft Kinect and Kinect One are programmable and affordable tracking sensors that measure the positions of human articular centers. They are widely used in rehabilitation, mainly for interacting with virtual environments and videogames, or for training motor primitives and single joints. In this paper, the authors propose a novel use of the Kinect and Kinect One sensors in a medical protocol specifically developed to assess the motor control quality of neurologically impaired people. It is based on the evaluation of clinically meaningful synthetic performance indexes, derived from previous experience in upper-limb robotic treatments. The protocol provides evaluations taking into account kinematics (articular clinical angles, velocities, accelerations), dynamics (shoulder torque and shoulder effort index), and motor and postural control quantities (normalized jerk of the wrist, coefficient of periodicity, center-of-mass displacement). The performance of the Kinect-based platform was compared off-line with the measurements obtained from a marker-based motion tracking system during the execution of reaching tasks against gravity. Preliminary results based on the Kinect sensor suggest its efficacy in clustering healthy subjects and patients according to their motor performances, despite its lower sensitivity with respect to the marker-based system used for comparison. A software library to evaluate motor performances has been developed by the authors, implemented in several programming languages, and is available for on-line use during training/evaluation sessions (Figure 1). The Kinect sensor coupled with the developed computational library is proposed as an assessment technology for domestic rehabilitation therapies with on-line feedback, enabled by an application featuring tracking, graphical representation and data logging. An experimental campaign on post-stroke patients with the Kinect One sensor is under development.
Preliminary results on patients with different residual functioning and levels of impairment indicate the capability of the whole system to discriminate motor performances.
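One of the motor-control quantities named in this abstract, the normalized jerk of the wrist, can be sketched numerically from a tracked wrist trajectory. The normalization below (integrated squared jerk scaled by duration to the fifth power over squared path length) is one common dimensionless variant from the smoothness-metric literature, not necessarily the exact index used by the authors:

```python
import numpy as np

def normalized_jerk(pos, dt):
    """Dimensionless smoothness index for a wrist trajectory (sketch).

    pos: (T, 3) array of wrist positions sampled at interval dt (seconds).
    Lower values indicate smoother movement. Uses a common normalization:
    0.5 * integral(|jerk|^2) * duration^5 / path_length^2.
    """
    vel = np.gradient(pos, dt, axis=0)    # finite-difference velocity
    acc = np.gradient(vel, dt, axis=0)    # acceleration
    jerk = np.gradient(acc, dt, axis=0)   # jerk (third derivative)
    jerk_sq = np.sum(jerk**2, axis=1)     # squared jerk magnitude per sample
    duration = dt * (len(pos) - 1)
    path = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
    integral = np.sum(jerk_sq) * dt       # simple Riemann approximation
    return np.sqrt(0.5 * integral * duration**5 / path**2)
```

A smooth minimum-jerk reach produces a low index, while the same reach with superimposed high-frequency tremor produces a much higher one, which is what makes the measure useful for separating patients from healthy subjects.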


1973 ◽  
Vol 59 (3) ◽  
pp. 675-696
Author(s):  
R. J. COOTER

1. Visual and multimodal units were recorded from the thoracic nerve cord of the cockroach, Periplaneta americana, using glass microelectrodes. 2. Compound-eye units could be classified as ON-, OFF- or ON-OFF-units according to their response to visual stimulation. Some were multimodal, firing to both visual and tactile stimulation of the antennae. 3. Although some units were found to be either fired by ipsilateral or by contralateral stimulation only, others were fired by both types of stimulation, often in different ways. 4. Ocellar units were invariably OFF-units, mainly phasic, but one type showed tonic dark-firing in addition to the phasic OFF-burst. 5. The general properties of cockroach visual units are discussed and compared with those reported by other workers for different insects.


2012 ◽  
Vol 25 (0) ◽  
pp. 88
Author(s):  
Annick De Paepe ◽  
Valéry Legrain ◽  
Geert Crombez

Localizing pain not only requires a simple somatotopic representation of the body, but also knowledge about limb position (i.e., proprioception) and a visual localization of the pain source in external space. Therefore, nociceptive events are remapped into a multimodal representation of the body and the space nearby (i.e., a peripersonal schema of the body). We investigated the influence of visual cues presented either in peripersonal or in extrapersonal space on the localization of nociceptive stimuli in a temporal order judgement (TOJ) task. Twenty-four psychology students made TOJs concerning which of two nociceptive stimuli (one applied to each hand) had been presented first (or last). A spatially non-predictive visual cue (i.e., the lighting of a LED) preceded the nociceptive stimuli by 80 ms. This cue was presented randomly either on the hand of the participant (in peripersonal space) or 70 cm in front of the hand (in extrapersonal space), and either on the left or on the right side of space. Biases in spatial attention are reflected by the point of subjective simultaneity (PSS). The results revealed that TOJs were more biased towards the visual cue in peripersonal space than towards the visual cue in extrapersonal space. This study provides evidence for the crossmodal integration of visual and nociceptive stimuli in a peripersonal schema of the body. Future research with this paradigm will explore crossmodal attention deficits in chronic pain populations.
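The PSS reported in this abstract is conventionally estimated by fitting a psychometric function to the TOJ responses across stimulus onset asynchronies (SOAs). The sketch below assumes a cumulative-Gaussian model, which is one standard choice; the sign convention and parameter names are illustrative, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_pss(soa, p_right_first):
    """Estimate the point of subjective simultaneity from TOJ data (sketch).

    soa: stimulus onset asynchronies in ms (here, right-hand onset minus
    left-hand onset); p_right_first: proportion of 'right hand first'
    responses at each SOA. Fits a cumulative Gaussian; the PSS is the SOA
    at which p = 0.5, and sigma (the slope parameter) relates to the just
    noticeable difference.
    """
    def psychometric(x, pss, sigma):
        return norm.cdf(x, loc=pss, scale=sigma)

    (pss, sigma), _ = curve_fit(psychometric, soa, p_right_first,
                                p0=[0.0, 50.0])
    return pss, sigma
```

A nonzero fitted PSS indicates a spatial-attention bias: for example, a positive PSS under this convention means the right-hand stimulus must lead for the two to be perceived as simultaneous, i.e., attention was drawn toward the left cue.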

