Control Loop Sensor Calibration Using Neural Networks for Robotic Control

2011 ◽  
Vol 2011 ◽  
pp. 1-8
Author(s):  
Kathleen A. Kramer ◽  
Stephen C. Stubberud

Whether a sensor model's inaccuracies result from poor initial modeling or from sensor damage or drift, the effects can be equally detrimental. Sensor modeling errors result in poor state estimation. This, in turn, can cause a control system that relies on the sensor's measurements to become unstable, such as in robotics, where the control system is applied to allow autonomous navigation. A technique referred to as a neural extended Kalman filter (NEKF) is developed both to provide state estimation in a control loop and to learn the difference between the true sensor dynamics and the sensor model. The technique requires multiple sensors on the control system so that the properly operating and modeled sensors can be used as truth. The NEKF trains a neural network on-line using the same residuals as the state estimation. The resulting sensor model can then be reincorporated fully into the system to provide the added estimation capability and redundancy.
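The core idea above — augmenting the filter state with network weights so the same Kalman residual that corrects the state also trains the network — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a 1-D state, a random-walk process model, and a single tanh neuron learning the mismatch between a nominal identity sensor model and the true sensor; all dimensions, gains, and noise values are placeholders.

```python
import numpy as np

def nn(x, w):
    # Tiny illustrative network: one tanh neuron, w = [w_in, bias, w_out].
    return w[2] * np.tanh(w[0] * x + w[1])

def nn_grad(x, w):
    # Partial derivatives of the network output w.r.t. x and the weights.
    s = np.tanh(w[0] * x + w[1])
    ds = 1.0 - s * s
    d_dx = w[2] * w[0] * ds
    d_dw = np.array([w[2] * x * ds, w[2] * ds, s])
    return d_dx, d_dw

def nekf_step(z, xw, P, q=1e-3, r=1e-2):
    """One predict/update cycle on the augmented state xw = [x, w0, w1, w2].

    The measurement model is the nominal sensor (identity) plus the learned
    correction; the single residual updates state and weights together.
    """
    # Predict: random-walk process for both the state and the weights.
    P = P + q * np.eye(4)
    x, w = xw[0], xw[1:]
    z_hat = x + nn(x, w)
    d_dx, d_dw = nn_grad(x, w)
    H = np.concatenate(([1.0 + d_dx], d_dw))   # 1x4 measurement Jacobian
    S = H @ P @ H + r                          # innovation variance (scalar)
    K = (P @ H) / S                            # Kalman gain, shape (4,)
    resid = z - z_hat                          # same residual trains the NN
    xw = xw + K * resid
    P = P - np.outer(K, H @ P)
    return xw, P
```

Fed a stream of real measurements, the weight entries of the augmented state absorb the sensor-model mismatch, so the corrected model `x + nn(x, w)` tracks the true sensor — the "reincorporated" model the abstract refers to.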

Author(s):  
K. Shibazaki ◽  
H. Nozaki

In this study, in order to improve steering stability during turning, we devised an inner and outer wheel driving force control system based on the steering angle and the steering angular velocity, and verified its effectiveness via running tests. In the control based on steering angle, the inner wheel driving force is weakened in proportion to the steering angle during a turn, and a driving force difference is applied between the inner and outer wheels by strengthening the outer wheel driving force. In the control based on steering angular velocity, the value obtained by multiplying a driving force constant by the steering angular velocity, which is the differentiated driver steering input during turning, is output as the driving force difference between the inner and outer wheels. Controlling the driving force of the inner and outer wheels reduced the maximum steering angle by 40 deg in the J-turn, making it possible to improve the cornering margin performance and the steering stability. In the pylon slalom it reduced the maximum steering angle by 45 deg, making it possible to improve the responsiveness of the vehicle. Control by steering angle is effective during steady turning, while control by steering angular velocity is effective during sharp turning. The inner and outer wheel driving force control is expected to further improve steering stability.
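The two control laws described above can be sketched as simple proportional rules. This is a hedged illustration of the idea only: the base driving force, the gains `k_angle` and `k_rate`, and the symmetric split between inner and outer wheels are assumptions, not values from the study.

```python
def wheel_forces_angle(f_base, steer_angle, k_angle):
    """Steering-angle law: weaken the inner wheel and strengthen the outer
    wheel in proportion to the steering angle during a turn."""
    delta = k_angle * abs(steer_angle)
    return f_base - delta, f_base + delta   # (inner, outer)

def wheel_forces_rate(f_base, steer_rate, k_rate):
    """Steering-angular-velocity law: a driving force constant times the
    steering angular velocity (the differentiated driver input) sets the
    inner/outer driving force difference."""
    delta = k_rate * abs(steer_rate)
    return f_base - delta, f_base + delta   # (inner, outer)
```

In steady turning the angle-based law dominates (large angle, near-zero rate), while in sharp transient steering the rate-based law reacts first — matching the abstract's observation about where each control is effective.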


2013 ◽  
Vol 133 (4) ◽  
pp. 313-323 ◽  
Author(s):  
Kuniaki Anzai ◽  
Kimihiko Shimomura ◽  
Soshi Yoshiyama ◽  
Hiroyuki Taguchi ◽  
Masaru Takeishi ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 230
Author(s):  
Xiangwei Dang ◽  
Zheng Rong ◽  
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot's state estimation. However, most mature SLAM methods work under the assumption that the environment is static, and in dynamic environments they yield degraded performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods under different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results from a quantitative comparison between LOAM and LeGO-LOAM. Secondly, based on this investigation, a novel approach named EMO is proposed that eliminates moving objects for SLAM by fusing LiDAR and mmW-radar, toward improving the accuracy and robustness of state estimation. The method fully exploits the complementary characteristics of the two sensors to fuse sensor information at two different resolutions. Moving objects are efficiently detected by the radar based on the Doppler effect, accurately segmented and localized by the LiDAR, and then filtered out of the point clouds through data association, accurately synchronized in time and space. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets.
The results demonstrate the effectiveness of the method at improving SLAM accuracy (at least a 30% decrease in absolute position error) and robustness in dynamic environments.
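The filtering stage described above — radar flags movers via Doppler, LiDAR points near those detections are dropped before SLAM — can be sketched with a simple nearest-neighbor gating step. This is a heavily simplified stand-in for the paper's data association and segmentation pipeline: the velocity threshold, gating radius, and the assumption that both sensors are already synchronized and in a common frame are all illustrative.

```python
import numpy as np

def filter_moving_points(lidar_xyz, radar_xy, radar_doppler,
                         v_thresh=0.5, radius=1.5):
    """Return only the static portion of a LiDAR cloud.

    lidar_xyz     : (N, 3) LiDAR points in a common world frame
    radar_xy      : (M, 2) radar detection positions in the same frame
    radar_doppler : (M,)   radial velocities from the Doppler measurement
    """
    # Radar detections whose Doppler speed exceeds the threshold are movers.
    moving = radar_xy[np.abs(radar_doppler) > v_thresh]
    if moving.size == 0:
        return lidar_xyz
    # Planar distance from every LiDAR point to every moving detection.
    d = np.linalg.norm(lidar_xyz[:, None, :2] - moving[None, :, :], axis=2)
    # Keep points outside the gating radius of all moving detections.
    keep = d.min(axis=1) > radius
    return lidar_xyz[keep]
```

The filtered cloud would then be handed to the SLAM front end in place of the raw scan, which is the substitution the abstract credits for the accuracy gain.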


2021 ◽  
Vol 11 (13) ◽  
pp. 5914
Author(s):  
Daniel Reyes-Uquillas ◽  
Tesheng Hsiao

In this article, we aim to achieve manual guidance of a robot manipulator to perform tasks that require strict path following and that benefit from collaboration with a human to guide the motion. The robot can be used as a tool to increase the accuracy of a human operator while remaining compliant with the human's instructions. We propose a dual-loop control structure in which the outer admittance control loop allows the robot to be compliant along a path by projecting the external force onto the tangential-normal-binormal (TNB) frame associated with the path. The inner motion control loop is designed based on a modified sliding mode control (SMC) law. We evaluate the system's response to forces applied from different directions to the end-effector of a 6-DOF industrial robot in a linear motion test. Next, a second test using a 3D path as a tracking task is conducted, in which we specify three interaction types: free motion (FM), force-applied motion (FAM), and combined motion with virtual forces (CVF). Results show that the difference in root mean square error (RMSE) among the cases is less than 0.1 mm, which demonstrates the feasibility of applying this method to various path-tracking applications in compliant human–robot collaboration.
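The outer-loop idea — admit only the tangential component of the human's force so the tool advances along the path but cannot be pushed off it — can be sketched as a first-order admittance law. This is a minimal illustration under stated assumptions: a mass-damper admittance `m*dv/dt + d*v = f_t`, an arbitrary frame construction that assumes the tangent is not parallel to the world z-axis, and placeholder gains; the paper's modified SMC inner loop is not modeled here.

```python
import numpy as np

def tnb_frame(tangent):
    """Build an orthonormal tangent/normal/binormal-style frame from a
    path tangent (assumes the tangent is not parallel to world z)."""
    t = tangent / np.linalg.norm(tangent)
    b = np.cross(t, np.array([0.0, 0.0, 1.0]))
    b /= np.linalg.norm(b)
    n = np.cross(b, t)
    return t, n, b

def admittance_step(f_ext, v_path, tangent, m=2.0, d=10.0, dt=0.002):
    """One outer-loop admittance update: only the tangential force
    component drives velocity along the path (m*dv/dt + d*v = f_t);
    the normal/binormal components are rejected so the reference
    trajectory, tracked by the inner motion loop, stays on the path."""
    t, _, _ = tnb_frame(tangent)
    f_t = float(np.dot(f_ext, t))          # project force onto the tangent
    v_path += dt * (f_t - d * v_path) / m  # explicit-Euler admittance update
    return v_path
```

Integrating `v_path` along the path's arc length yields the reference pose fed to the inner loop, so a sideways push produces no reference motion at all — the behavior the linear-motion test probes.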


2010 ◽  
Vol 103 (1) ◽  
pp. 278-289 ◽  
Author(s):  
W. S. Yu ◽  
H. van Duinen ◽  
S. C. Gandevia

In humans, hand performance has evolved from a crude multidigit grasp to skilled individuated finger movements. However, control of the fingers is not completely independent. Although musculotendinous factors can limit independent movements, constraints in supraspinal control are more important. Most previous studies examined either flexion or extension of the digits. We studied differences in voluntary force production by the five digits, in both flexion and extension tasks. Eleven healthy subjects were instructed either to maximally flex or extend their digits, in all single- and multidigit combinations. They received visual feedback of total force produced by “instructed” digits and had to ignore “noninstructed” digits. Despite attempts to maximally flex or extend instructed digits, subjects rarely generated their “maximal” force, resulting in a “force deficit,” and produced forces with noninstructed digits (“enslavement”). Subjects performed differently in flexion and extension tasks. Enslavement was greater in extension than in flexion tasks ( P = 0.019), whereas the force deficit in multidigit tasks was smaller in extension ( P = 0.035). The difference between flexion and extension in the relationships between the enslavement and force deficit suggests a difference in balance of spillover of neural drive to agonists acting on neighboring digits and focal neural drive to antagonist muscles. An increase in drive to antagonists would lead to more individualized movements. The pattern of force production matches the daily use of the digits. These results reveal a neural control system that preferentially lifts fingers together by extension but allows an individual digit to flex so that the finger pads can explore and grasp.

