Mix Frame Visual Servo Control Framework for Autonomous Assistive Robotic Arms

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 642
Author(s):  
Zubair Arif ◽  
Yili Fu

Assistive robotic arms (ARAs) that provide care to the elderly and people with disabilities are a significant part of Human-Robot Interaction (HRI). Presently available ARAs offer non-intuitive interfaces, such as joysticks, for control and thus lack the autonomy to perform daily activities. This study proposes that, to induce autonomous behavior in ARAs, the integration of visual sensors is vital, and that visual servoing in the direct Cartesian control mode is the preferred method. Generally, ARAs are designed in a configuration where the end-effector’s position is defined in the fixed base frame while its orientation is expressed in the end-effector frame. We denote this configuration as ‘mixed-frame robotic arms’. Consequently, conventional visual servo controllers, which operate in a single frame of reference, are incompatible with mixed-frame ARAs. Therefore, we propose a mixed-frame visual servo control framework for ARAs. Moreover, we elucidate the task-space kinematics of mixed-frame ARAs, which leads to the development of a novel “mixed-frame Jacobian matrix”. The proposed framework was validated on a mixed-frame JACO-2 7-DoF ARA using an adaptive proportional-derivative controller for image-based visual servoing (IBVS), showing a significant 31% increase in the convergence rate and outperforming conventional IBVS joint controllers, especially in outstretched arm positions and near the base frame. Our results demonstrate the need for the mixed-frame controller when deploying visual servo control on modern ARAs, as it can inherently cater to the robotic arm’s joint limits, singularities, and self-collision problems.
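The abstract's core idea, a Jacobian whose linear part lives in the base frame while its angular part lives in the end-effector frame, can be sketched as follows. This is an illustrative reconstruction from the abstract's description, not the paper's actual derivation; `J_base` and `R_be` are assumed names for a conventional base-frame Jacobian and the base-to-end-effector rotation.

```python
import numpy as np

def mixed_frame_jacobian(J_base, R_be):
    """Sketch of a 'mixed-frame' Jacobian (assumed form, inferred from the
    abstract): keep the linear-velocity rows in the fixed base frame, but
    re-express the angular-velocity rows in the end-effector frame.

    J_base : (6, n) geometric Jacobian, all rows in the base frame
    R_be   : (3, 3) rotation of the end-effector frame w.r.t. the base
    """
    J = J_base.copy()
    # Rotate only the angular (bottom three) rows into the end-effector frame.
    J[3:6, :] = R_be.T @ J_base[3:6, :]
    return J
```

With the identity rotation the mixed-frame Jacobian reduces to the ordinary base-frame Jacobian, which is a quick sanity check on the construction.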

2021 ◽  
Author(s):  
SHOGO ARAI ◽  
Yoshihiro Miyamoto ◽  
Akinari Kobayashi ◽  
Kazuhiro Kosuge

Visual servo control uses images obtained by a camera for robotic control. This study focuses on the problem of positioning a target object using a robotic manipulator with image-based visual servo (IBVS) control. To perform the positioning task, image-based visual servoing requires visual features that can be extracted from the appearance of the target object. The positioning error therefore tends to increase for textureless objects, such as industrial parts, since it is difficult to extract differences in the visual features between the current and goal images. To solve this problem, this paper presents a novel visual servoing method named “Active Visual Servoing”. Active Visual Servoing (AVS) projects patterned light onto the target object using a projector. The design of the projection pattern affects the positioning error; AVS uses a theoretically derived optimal pattern that maximizes the differences between the current and goal images. The experimental results show that the proposed active visual servoing method reduces the positioning error by more than 97% compared to conventional image-based visual servoing.
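For context, the conventional IBVS baseline that AVS is compared against drives the feature error to zero with the classic law v = -λ L⁺ (s - s*), where L is the interaction matrix of the image features. A minimal sketch of that baseline (textbook form, not code from this paper):

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image
    point (x, y) at depth Z -- the standard 2x6 IBVS form."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Classic IBVS control law: camera twist v = -lambda * L^+ * (s - s*)."""
    e = s - s_star
    return -lam * np.linalg.pinv(L) @ e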


2017 ◽  
Vol 2017 ◽  
pp. 1-6 ◽  
Author(s):  
Liying Zou ◽  
Huiguang Li ◽  
Wei Zhao ◽  
Lei Zhu

This paper presents a novel control strategy to force a vertical take-off and landing (VTOL) aircraft to accomplish a pinpoint landing task. The control development is based on the image-based visual servoing method and the back-stepping technique; its design differs from existing methods because the controller maps the image errors onto the actuator space via a visual model that does not contain the depth information of the feature point. The novelty of the proposed method is to extend the image-based visual servoing technique to VTOL aircraft control. In addition, Lyapunov theory is used to prove the asymptotic stability of the VTOL aircraft visual servoing system, showing that the image error converges to zero. Furthermore, simulations have also been conducted to demonstrate the performance of the proposed method.


Author(s):  
Haoxiang Lang ◽  
Muhammad Tahir Khan ◽  
Kok-Kiong Tan ◽  
Clarence W. W. De Silva

Mobile robots that integrate visual servo control for facilitating autonomous grasping and manipulation are the focus of this paper. In view of their mobility, they have wider application than traditional fixed-base robots with visual servoing. Visual servoing is widely used in mobile robot navigation. However, there are few reports on applying it to mobile manipulation. In this paper, the challenges and limitations of applying visual servoing in mobile manipulation are discussed. Next, two classical approaches, image-based visual servoing (IBVS) and position-based visual servoing (PBVS), are introduced along with their advantages and disadvantages. Simulations in Matlab are carried out using the two methods, and their advantages and drawbacks are illustrated and discussed. On this basis, a system for mobile manipulation is proposed that includes IBVS with an eye-in-hand camera configuration. Simulations and experiments are carried out with this robot configuration in a search and rescue scenario, which show good performance.




2018 ◽  
Vol 41 (1) ◽  
pp. 3-13 ◽  
Author(s):  
Tolga Yüksel

While quadrotors are becoming more popular, the control of these unmanned air vehicles should be improved. In this study, a new intelligent image-based visual servo control system is proposed for the flight guidance control of quadrotors. Features are essential for visual servoing, and the proposed system utilizes the features of a shape that provides a clear sight of the landing site instead of point features. Furthermore, the system focuses on three problems of visual servo control: finding an appropriate gain value under velocity limits, keeping the shape features in the field of view, and tracking a moving target. As a solution to the first problem, a fuzzy logic unit that uses the feature error and error derivative norms as inputs is deployed to assign the gain adaptively. The second problem is solved by defining safe and risky regions in the image plane to take precautions before features leave the field of view. Another fuzzy logic unit is activated when the shape passes through a risky region, providing a counter-velocity in the x or y direction to drag the shape back into the safe region. As the last stage, Kalman filtering with Potter’s square root update is added to the proposed system to increase the feature tracking performance. This update also promises divergence avoidance. To show the performance of the proposed system, simulation results for fixed and moving targets under feature disturbance are presented for a quadrotor. The results verify that the proposed system is capable of handling visual servoing problems.
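The first problem above, a gain that is aggressive near the goal yet respects velocity limits far from it, is often handled with an error-dependent gain schedule. As a simplified, non-fuzzy stand-in for the paper's fuzzy logic unit (the exponential form and all parameter values below are illustrative assumptions, not the authors' design):

```python
import numpy as np

def adaptive_gain(err_norm, lam0=4.0, lam_inf=0.4, slope=0.03):
    """Error-dependent servo gain: approaches lam0 as the feature error
    norm goes to zero (fast convergence near the goal) and decays toward
    lam_inf for large errors (keeps commanded velocities bounded)."""
    return lam_inf + (lam0 - lam_inf) * np.exp(-slope * err_norm)
```

A fuzzy unit generalizes this by also weighting the error-derivative norm, but the monotone shape, high gain for small errors, low gain for large ones, is the same design intent.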


2020 ◽  
Author(s):  
Haitao Liu ◽  
Dewei Zhang ◽  
Xianye Wang ◽  
Juliang Xiao

Abstract Due to the intermittent motion of a legged mobile robot, an additional periodic movement is introduced that directly affects the image processing accuracy and destabilizes the visual servo control of the robot. To address this problem, this paper investigates a control scheme for the visual servoing of a legged mobile robot equipped with a fixed monocular camera. The kinematics of the legged mobile robot and homography-based visual servoing are employed to allow the robot to achieve the desired pose. By investigating the homographic relationship between the current and desired poses, the approach has no need for prior knowledge of the three-dimensional geometry of the target image. The feature points are directly extracted from the images to evaluate the homography matrix. To reduce the effects caused by the intermittent motions of the legged robot, an improved adaptive median filter is proposed. Furthermore, a sliding mode controller is designed, and a Lyapunov-based approach is used to analyze the stability of the control system. With the aid of CoppeliaSim software, a simulation is implemented to verify the effectiveness of the proposed method.
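The adaptive median filter mentioned above targets the impulsive spikes that gait-induced camera shake injects into feature coordinates. The sketch below is one plausible form of such a filter, an outlier-triggered median with a growing window; it is an assumed reconstruction for illustration, not the improved filter from this paper.

```python
import numpy as np

def adaptive_median_filter(signal, base_window=3, max_window=9, thresh=2.0):
    """Sketch of an adaptive median filter (assumed form): a sample is
    flagged as impulsive when it deviates from the local median by more
    than `thresh` times the median absolute deviation (MAD); the window
    grows up to max_window before the sample is replaced by the median."""
    out = np.asarray(signal, dtype=float).copy()
    n = len(out)
    for i in range(n):
        w = base_window
        while w <= max_window:
            lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
            window = out[lo:hi]
            med = np.median(window)
            mad = np.median(np.abs(window - med)) + 1e-9  # avoid div-by-zero
            if abs(out[i] - med) > thresh * mad:
                if w < max_window:
                    w += 2        # outlier: widen the window and re-test
                    continue
                out[i] = med      # still an outlier at max width: replace
            break                 # inlier (or replaced): keep and move on
    return out
```

Applied to each image coordinate of a tracked feature over time, this suppresses isolated spikes while leaving smooth motion untouched, which is the property the homography estimation step needs.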


2004 ◽  
Author(s):  
J. Chen ◽  
D. M. Dawson ◽  
W. E. Dixon ◽  
V. K. Chitrakaran
