Does visually induced self-motion affect grip force when holding an object?

2012 ◽  
Vol 108 (6) ◽  
pp. 1685-1694 ◽  
Author(s):  
Lionel Bringoux ◽  
Jean-Claude Lepecq ◽  
Frédéric Danion

Accurate control of grip force during object manipulation is necessary to prevent the object from slipping, especially to compensate for the action of gravitational and inertial forces resulting from hand/object motion. The goal of the current study was to assess whether the control of grip force was influenced by visually induced self-motion (i.e., vection), which would normally be accompanied by changes in object load. The main task involved holding a 400-g object between the thumb and the index finger while being seated within a virtual immersive environment that simulated the vertical motion of an elevator across floors. Different visual motions were tested, including oscillatory (0.21 Hz) and constant-speed displacements of the virtual scene. Different arm-loading conditions were also tested: with or without the hand-held object and with or without oscillatory arm motion (0.9 Hz). At the perceptual level, ratings from participants showed that both oscillatory and constant-speed motion of the elevator rapidly induced a long-lasting sensation of self-motion. At the sensorimotor level, vection compellingness altered arm movement control. Spectral analyses revealed that arm motion was entrained by the oscillatory motion of the elevator. However, we found no evidence that grip force used to hold the object was visually affected. Specifically, spectral analyses revealed no component in grip force that would mirror the virtual change in object load associated with the oscillatory motion of the elevator, thereby allowing the grip-to-load force coupling to remain unaffected. Altogether, our findings show that the neural mechanisms underlying vection interfere with arm movement control but do not interfere with the delicate modulation of grip force. More generally, these results provide evidence that the strength of the coupling between perceptual and sensorimotor processes can be modulated depending on the effector.
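
The entrainment analysis described above can be illustrated with a minimal spectral-analysis sketch in Python: it estimates the power of a signal near a target frequency (e.g., the 0.21-Hz elevator oscillation), so that a visually driven component in arm position would show up while its absence in grip force would not. The signals, sampling rate, and function names below are hypothetical, not the study's data or code.

```python
import numpy as np

def power_spectrum(signal, fs):
    """Return one-sided frequency axis and power spectrum of a 1-D signal."""
    signal = signal - np.mean(signal)          # remove DC offset
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectrum

def power_at(freqs, spectrum, target_hz, bandwidth=0.05):
    """Sum spectral power within a narrow band around a target frequency."""
    band = (freqs > target_hz - bandwidth) & (freqs < target_hz + bandwidth)
    return spectrum[band].sum()

# Illustrative synthetic signals (hypothetical, not the study's recordings):
fs = 100.0                                     # sampling rate in Hz
t = np.arange(0, 60, 1.0 / fs)
arm_position = 0.5 * np.sin(2 * np.pi * 0.21 * t) + 0.05 * np.random.randn(t.size)
grip_force = 4.0 + 0.05 * np.random.randn(t.size)   # no 0.21-Hz component

for name, sig in [("arm position", arm_position), ("grip force", grip_force)]:
    freqs, spec = power_spectrum(sig, fs)
    print(name, "power near 0.21 Hz:", power_at(freqs, spec, 0.21))
```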

2021 ◽  
Author(s):  
Yara Almubarak ◽  
Michelle Schmutz ◽  
Miguel Perez ◽  
Shrey Shah ◽  
Yonas Tadesse

Underwater exploration or inspection requires suitable robotic systems capable of maneuvering, manipulating objects, and operating untethered in complex environmental conditions. Traditional robots have been used to perform many tasks underwater; however, they have limited degrees of freedom, limited manipulation capabilities, poor portability, and disruptive interactions with aquatic life. Research in soft robotics seeks to incorporate the natural flexibility and agility of aquatic species into man-made technologies, using biomimetics to improve the current capabilities of robots. In this paper, we present the novel design, fabrication, and testing of an underwater robot known as Kraken, whose tentacles mimic the arm movement of an octopus. To control the arm motion, Kraken utilizes a hybrid actuation technology consisting of stepper motors and twisted and coiled polymer fishing-line muscles (TCPFL). TCPs are becoming one of the most promising actuation technologies due to their high actuation stroke, high force, light weight, and low cost. We have studied different arm stiffness configurations of the tentacles, tailored to operate in different modalities (curling, twisting, and bending), to control the shape of the tentacles and grasp irregular objects delicately. Kraken uses an onboard battery, a wireless programmable joystick, and a buoyancy system for depth control, all housed in a three-layer 3D-printed dome-like structure. Here, we present Kraken fully functioning underwater in an Olympic-size swimming pool using its servo-actuated tentacles, together with other test results on the TCPFL-actuated tentacles in a laboratory setting. This is the first time that a TCPFL actuator embedded within an elastomer has been proposed for the tentacles of an octopus-like robot, along with a characterization of the performance of these structures. Further, as a case study, we showed the functionality of the robot in grasping objects underwater for field robotics applications.
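
As an illustration of the depth-control function mentioned above, the sketch below shows a generic PID loop that maps a depth error to a ballast command. The controller gains, the sensor reading, and the actuator interface are assumptions for illustration only and are not taken from the Kraken design.

```python
class DepthPID:
    """Simple PID controller for holding a target depth (illustrative only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_depth, measured_depth):
        error = target_depth - measured_depth
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # The output would be mapped to a ballast pump or piston command.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical usage with made-up gains and a fake depth reading:
controller = DepthPID(kp=2.0, ki=0.1, kd=0.5, dt=0.05)
command = controller.update(target_depth=1.5, measured_depth=1.2)
print("ballast command:", command)
```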


2019 ◽  
Vol 19 (10) ◽  
pp. 294a
Author(s):  
Scott T Steinmetz ◽  
Oliver W Layton ◽  
N. Andrew Browning ◽  
Nathaniel V Powell ◽  
Brett R Fajen

2003 ◽  
Vol 90 (2) ◽  
pp. 723-730 ◽  
Author(s):  
Kai V. Thilo ◽  
Andreas Kleinschmidt ◽  
Michael A. Gresty

In a previous functional neuroimaging study, we found that early visual areas deactivated when a rotating optical flow stimulus elicited the illusion of self-motion (vection) compared with when it was perceived as a moving object. Here, we investigated whether electrical cortical responses to an independent central visual probe stimulus change as a function of whether optical flow stimulation in the periphery induces the illusion of self-motion or not. Visual evoked potentials (VEPs) were obtained in response to pattern reversals in the central visual field in the presence of a constant peripheral large-field optokinetic stimulus that rotated around the naso-occipital axis and induced intermittent sensations of vection. As a control, VEPs were also recorded during a stationary peripheral stimulus and showed no difference from those obtained during optokinetic stimulation. The VEPs recorded during constant peripheral stimulation were then divided into two groups according to the time spans during which the subjects reported object-motion or self-motion, respectively. The N70 VEP component showed a significant amplitude reduction when subjects experienced self-motion due to the peripheral stimulus compared with when the peripheral stimulus was perceived as object-motion. This finding supplements and corroborates our recent evidence from functional neuroimaging that early visual cortex deactivates when a visual flow stimulus elicits the illusion of self-motion compared with when the same sensory input is interpreted as object-motion. This dampened responsiveness might reflect a redistribution of sensory and attentional resources when the monitoring of self-motion relies on sustained and veridical processing of optic flow and may be compromised by other sources of visual input.
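
The condition-wise VEP comparison described above can be sketched as follows: epochs time-locked to the pattern reversals are sorted by the concurrently reported percept (object-motion vs. self-motion), averaged, and the mean amplitude in an N70 latency window is compared. The sampling rate, latency window, and synthetic epoch arrays below are illustrative assumptions, not the study's recordings.

```python
import numpy as np

def n70_amplitude(epochs, fs, window=(0.060, 0.080)):
    """Mean amplitude of the average evoked response in an N70 latency window.

    epochs : array of shape (n_trials, n_samples), time-locked to pattern reversal
    fs     : sampling rate in Hz
    """
    evoked = epochs.mean(axis=0)                       # average across trials
    start, stop = (int(round(w * fs)) for w in window)
    return evoked[start:stop].mean()

# Hypothetical epochs sorted by the subject's concurrent percept:
fs = 500.0
rng = np.random.default_rng(0)
epochs_object_motion = rng.normal(0.0, 1.0, size=(120, 250))
epochs_self_motion = rng.normal(0.0, 1.0, size=(120, 250))

print("N70 during object motion:", n70_amplitude(epochs_object_motion, fs))
print("N70 during self-motion:  ", n70_amplitude(epochs_self_motion, fs))
```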


2020 ◽  
Vol 10 (6) ◽  
pp. 2139
Author(s):  
Betsy D. M. Chaparro-Rico ◽  
Daniele Cafolla ◽  
Marco Ceccarelli ◽  
Eduardo Castillo-Castaneda

Patients with neurological or orthopedic lesions require assistance during therapies involving repetitive movements. NURSE (cassiNo-qUeretaro uppeR-limb aSsistive dEvice) is an arm movement aid for both the right and left upper limb. The device has a large workspace that allows physical therapy or training for individuals of any age and size, including children and the elderly. This paper describes the mechanism design of NURSE and presents a numerical procedure for testing the feasibility of the mechanism that includes kinematic, dynamic, and FEM (Finite Element Method) analyses. The kinematic analysis demonstrated that a large workspace is available in the device for reproducing therapeutic movements. The dynamic analysis shows that commercial low-power motors can achieve the needed displacement, acceleration, speed, and torque. The FEM analysis showed that the mechanism can support the weight of the upper limb using light bars, allowing a compact design. This work has led to the construction of a NURSE prototype with a light structure of 2.6 kg that fits into a box of 35 × 45 × 30 cm, which facilitates portability as well as rehabilitation at home with proper follow-up. The prototype showed a repeatability of ±1.3 cm, which is considered satisfactory for a device whose components are manufactured with 3D rapid-prototyping technology.
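
The abstract does not specify the NURSE mechanism's kinematics, so the sketch below uses a generic planar two-link arm as a stand-in to show the kind of workspace sweep a kinematic feasibility check might perform. The link lengths and joint limits are hypothetical.

```python
import numpy as np

def two_link_workspace(l1, l2, q1_range, q2_range, steps=100):
    """Sample end-effector positions of a generic planar two-link arm.

    l1, l2            : link lengths in metres (hypothetical values below)
    q1_range, q2_range: (min, max) joint angles in radians
    """
    q1 = np.linspace(*q1_range, steps)
    q2 = np.linspace(*q2_range, steps)
    Q1, Q2 = np.meshgrid(q1, q2)
    x = l1 * np.cos(Q1) + l2 * np.cos(Q1 + Q2)   # forward kinematics, x
    y = l1 * np.sin(Q1) + l2 * np.sin(Q1 + Q2)   # forward kinematics, y
    return x.ravel(), y.ravel()

# Hypothetical link lengths and joint limits, not the NURSE parameters:
x, y = two_link_workspace(0.30, 0.25, (0.0, np.pi), (-np.pi / 2, np.pi / 2))
print("reachable span in x: %.2f m to %.2f m" % (x.min(), x.max()))
print("reachable span in y: %.2f m to %.2f m" % (y.min(), y.max()))
```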


2020 ◽  
Vol 34 (2) ◽  
pp. 134-147
Author(s):  
Preeti Raghavan ◽  
Seda Bilaloglu ◽  
Syed Zain Ali ◽  
Xin Jin ◽  
Viswanath Aluru ◽  
...  

Background. High-intensity repetitive training is challenging to provide poststroke. Robotic approaches can facilitate such training by unweighting the limb and/or by improving trajectory control, but the extent to which these types of assistance are necessary is not known. Objective. The purpose of this study was to examine the extent to which robotic path assistance and/or weight support facilitate repetitive 3D movements in high-functioning and low-functioning subjects with poststroke arm motor impairment relative to healthy controls. Methods. Seven healthy controls and 18 subjects with chronic poststroke right-sided hemiparesis performed 300 repetitions of a 3D circle-drawing task using a 3D Cable-driven Arm Exoskeleton (CAREX) robot. Subjects performed 100 repetitions each with path assistance alone, weight support alone, and path assistance plus weight support in a random order over a single session. Kinematic data from the task were used to compute the normalized error and speed as well as the speed-error relationship. Results. Low-functioning stroke subjects (Fugl-Meyer Scale score = 16.6 ± 6.5) showed the lowest error with path assistance plus weight support, whereas high-functioning stroke subjects (Fugl-Meyer Scale score = 59.6 ± 6.8) moved faster with path assistance alone. When both speed and error were considered together, low-functioning subjects significantly reduced their error and increased their speed but showed no difference across the robotic conditions. Conclusions. Robotic assistance can facilitate repetitive task performance in individuals with severe arm motor impairment, but path assistance provides little advantage over weight support alone. Future poststroke studies focusing on antigravity arm movement control are warranted.
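
A rough sketch of how outcome measures like normalized error and mean speed could be computed from recorded hand paths is given below. The array layout, the circle-based normalization, and the sample data are assumptions for illustration, not the study's analysis code.

```python
import numpy as np

def trajectory_metrics(positions, target_points, dt, radius):
    """Rough error and speed measures for a circle-drawing trial (illustrative).

    positions     : (n, 3) recorded hand path
    target_points : (n, 3) corresponding points on the prescribed circle
    dt            : sample interval in seconds
    radius        : circle radius used to normalize the error
    """
    error = np.linalg.norm(positions - target_points, axis=1)
    normalized_error = error.mean() / radius
    step = np.diff(positions, axis=0)
    speed = np.linalg.norm(step, axis=1) / dt
    return normalized_error, speed.mean()

# Hypothetical trial data: a 10-cm circle traced with small random deviations.
n = 500
theta = np.linspace(0, 2 * np.pi, n)
target = np.c_[0.1 * np.cos(theta), 0.1 * np.sin(theta), np.zeros(n)]
recorded = target + 0.005 * np.random.randn(n, 3)
print(trajectory_metrics(recorded, target, dt=0.01, radius=0.1))
```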


1998 ◽  
Vol 79 (3) ◽  
pp. 1409-1424 ◽  
Author(s):  
Paul L. Gribble ◽  
David J. Ostry ◽  
Vittorio Sanguineti ◽  
Rafael Laboissière

Gribble, Paul L., David J. Ostry, Vittorio Sanguineti, and Rafael Laboissière. Are complex control signals required for human arm movement? J. Neurophysiol. 79: 1409–1424, 1998. It has been proposed that the control signals underlying voluntary human arm movement have a “complex” nonmonotonic time-varying form, and a number of empirical findings have been offered in support of this idea. In this paper, we address three such findings using a model of two-joint arm motion based on the λ version of the equilibrium-point hypothesis. The model includes six one- and two-joint muscles, reflexes, modeled control signals, muscle properties, and limb dynamics. First, we address the claim that “complex” equilibrium trajectories are required to account for nonmonotonic joint impedance patterns observed during multijoint movement. Using constant-rate shifts in the neurally specified equilibrium of the limb and constant cocontraction commands, we obtain patterns of predicted joint stiffness during simulated multijoint movements that match the nonmonotonic patterns reported empirically. We then use the algorithm proposed by Gomi and Kawato to compute a hypothetical equilibrium trajectory from simulated stiffness, viscosity, and limb kinematics. Like that reported by Gomi and Kawato, the resulting trajectory was nonmonotonic, first leading then lagging the position of the limb. Second, we address the claim that high levels of stiffness are required to generate rapid single-joint movements when simple equilibrium shifts are used. We compare empirical measurements of stiffness during rapid single-joint movements with the predicted stiffness of movements generated using constant-rate equilibrium shifts and constant cocontraction commands. Single-joint movements are simulated at a number of speeds, and the procedure used by Bennett to estimate stiffness is followed. We show that when the magnitude of the cocontraction command is scaled in proportion to movement speed, simulated joint stiffness varies with movement speed in a manner comparable with that reported by Bennett. Third, we address the related claim that nonmonotonic equilibrium shifts are required to generate rapid single-joint movements. Using constant-rate equilibrium shifts and constant cocontraction commands, rapid single-joint movements are simulated in the presence of external torques. We use the procedure reported by Latash and Gottlieb to compute hypothetical equilibrium trajectories from simulated torque and angle measurements during movement. As in Latash and Gottlieb, a nonmonotonic function is obtained even though the control signals used in the simulations are constant-rate changes in the equilibrium position of the limb. Differences between the “simple” equilibrium trajectory proposed in the present paper and those that are derived from the procedures used by Gomi and Kawato and Latash and Gottlieb arise from their use of simplified models of force generation.
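
The full model described above includes six muscles, reflexes, and two-joint limb dynamics; the sketch below strips this down to a single joint driven toward a constant-rate (monotonic) equilibrium shift with fixed stiffness and damping, purely to illustrate the idea that a "simple" ramp-shaped control signal can produce a smooth movement. All parameter values are hypothetical and not taken from the paper.

```python
import numpy as np

def simulate_single_joint(theta_start, theta_end, shift_duration,
                          stiffness, damping, inertia, dt=0.001, t_total=1.0):
    """Single-joint movement driven by a constant-rate equilibrium shift.

    A crude stand-in for a lambda-style control signal: the equilibrium angle
    ramps linearly from theta_start to theta_end over shift_duration, and
    joint torque is proportional to (equilibrium - actual angle) minus a
    damping term. All parameter values used below are hypothetical.
    """
    steps = int(t_total / dt)
    theta, omega = theta_start, 0.0
    trajectory = []
    for i in range(steps):
        t = i * dt
        ramp = min(t / shift_duration, 1.0)          # constant-rate shift
        equilibrium = theta_start + ramp * (theta_end - theta_start)
        torque = stiffness * (equilibrium - theta) - damping * omega
        omega += (torque / inertia) * dt              # Euler integration
        theta += omega * dt
        trajectory.append(theta)
    return np.array(trajectory)

angles = simulate_single_joint(theta_start=0.0, theta_end=0.8, shift_duration=0.4,
                               stiffness=15.0, damping=1.5, inertia=0.1)
print("final angle (rad): %.3f" % angles[-1])
```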


2019 ◽  
Vol 116 (18) ◽  
pp. 9060-9065 ◽  
Author(s):  
Kalpana Dokka ◽  
Hyeshin Park ◽  
Michael Jansen ◽  
Gregory C. DeAngelis ◽  
Dora E. Angelaki

The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
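
A toy version of the causal inference computation referenced above compares the likelihood of an object's residual image motion under the hypothesis that the object is stationary in the world (its image motion is fully explained by self-motion) against the hypothesis that it moves independently, and returns the posterior probability of stationarity. The noise and prior parameters below are illustrative assumptions, not fits to the study's data.

```python
import numpy as np
from scipy.stats import norm

def p_object_stationary(observed_object_velocity, predicted_self_motion_component,
                        sensory_noise_sd, object_motion_prior_sd,
                        prior_stationary=0.5):
    """Posterior probability that an object is stationary in the world.

    Under the 'stationary' hypothesis the observed image velocity should equal
    the component predicted from self-motion (plus sensory noise); under the
    'moving' hypothesis an extra world-motion term with prior SD
    object_motion_prior_sd is added. All parameter values are illustrative.
    """
    residual = observed_object_velocity - predicted_self_motion_component
    like_stationary = norm.pdf(residual, scale=sensory_noise_sd)
    like_moving = norm.pdf(residual, scale=np.hypot(sensory_noise_sd,
                                                    object_motion_prior_sd))
    post = like_stationary * prior_stationary
    post /= post + like_moving * (1.0 - prior_stationary)
    return post

# Slow residual motion -> likely judged stationary; fast -> likely moving:
for v in (0.2, 2.0):
    print(v, "deg/s residual ->", round(p_object_stationary(v, 0.0, 0.5, 3.0), 3))
```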

