Visual Servo Control
Recently Published Documents


TOTAL DOCUMENTS

328
(FIVE YEARS 52)

H-INDEX

25
(FIVE YEARS 3)

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 642
Author(s):  
Zubair Arif ◽  
Yili Fu

Assistive robotic arms (ARAs) that provide care to the elderly and people with disabilities are a significant part of Human-Robot Interaction (HRI). Presently available ARAs provide non-intuitive interfaces, such as joysticks, for control and thus lack the autonomy to perform daily activities. This study proposes that, to induce autonomous behavior in ARAs, the integration of visual sensors is vital, and that visual servoing in the direct Cartesian control mode is the preferred method. Generally, ARAs are designed in a configuration where the end-effector's position is defined in the fixed base frame while its orientation is expressed in the end-effector frame; we denote this configuration as 'mixed-frame robotic arms'. Consequently, conventional visual servo controllers, which operate in a single frame of reference, are incompatible with mixed-frame ARAs. We therefore propose a mixed-frame visual servo control framework for ARAs. Moreover, we elucidate the task-space kinematics of mixed-frame ARAs, which leads to the development of a novel "mixed-frame Jacobian matrix". The proposed framework was validated on a mixed-frame JACO-2 7-DoF ARA using an adaptive proportional-derivative controller for image-based visual servoing (IBVS), and showed a significant 31% increase in convergence rate, outperforming conventional IBVS joint controllers, especially in outstretched arm positions and near the base frame. Our results demonstrate the need for the mixed-frame controller when deploying visual servo control on modern ARAs, as it can inherently cater to the robotic arm's joint limits, singularities, and self-collision problems.
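The abstract builds on the classical IBVS control law before mapping it through the paper's mixed-frame Jacobian, which is not reproduced here. For reference, a minimal sketch of the standard point-feature IBVS law (stacked interaction matrix plus proportional gain) that such a framework starts from, assuming normalized image coordinates and known depth estimates; all names and gains below are illustrative, not the paper's implementation:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical IBVS interaction matrix for one normalized image point (x, y)
    at estimated depth Z, relating feature velocity to the camera twist."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, goal_features, depths, gain=0.5):
    """Proportional IBVS law: camera twist v = -gain * pinv(L) * (s - s*).
    `features` and `goal_features` are Nx2 arrays of normalized image points;
    `depths` holds the estimated depth of each point."""
    error = (features - goal_features).reshape(-1)           # stacked 2N feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])  # stacked 2Nx6 matrix
    return -gain * np.linalg.pinv(L) @ error                 # 6-vector camera twist
```

In the mixed-frame setting described above, the resulting camera twist would additionally be mapped through the arm's mixed-frame Jacobian (position expressed in the base frame, orientation in the end-effector frame) before being sent to the joint controller.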


2021 ◽  
pp. 104043
Author(s):  
Yang Tian ◽  
Guoteng Zhang ◽  
Kenji Morimoto ◽  
Shugen Ma

2021 ◽  
Vol 112 ◽  
pp. 104827
Author(s):  
Zhiqi Tang ◽  
Rita Cunha ◽  
David Cabecinhas ◽  
Tarek Hamel ◽  
Carlos Silvestre

2021 ◽  
Author(s):  
SHOGO ARAI ◽  
Yoshihiro Miyamoto ◽  
Akinari Kobayashi ◽  
Kazuhiro Kosuge

Visual servo control uses images obtained by a camera for robotic control. This study focuses on the problem of positioning a target object using a robotic manipulator with image-based visual servo (IBVS) control. To perform the positioning task, IBVS requires visual features that can be extracted from the appearance of the target object. The positioning error therefore tends to increase for textureless objects, such as industrial parts, because it is difficult to extract differences in the visual features between the current and goal images. To solve this problem, this paper presents a novel visual servoing method named "Active Visual Servoing" (AVS). AVS projects patterned light onto the target object using a projector. Because the design of the projection pattern affects the positioning error, AVS uses a theoretically derived optimal pattern that maximizes the differences between the current and goal images. The experimental results show that the proposed active visual servoing method reduces the positioning error by more than 97% compared to conventional image-based visual servoing.
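The abstract does not give the control law itself, so the following is only a hypothetical minimal sketch of an intensity-difference servo step in the spirit of AVS: the error is the pixel-wise difference between the current and goal images (which the projected pattern is designed to maximize), and `jacobian` is a placeholder for whatever mapping from camera motion to intensity change the method would use; none of these names come from the paper:

```python
import numpy as np

def photometric_error(current_img, goal_img):
    """Stacked intensity difference e = I - I*; a well-chosen projected pattern
    increases the contrast between the two images and thus the error signal."""
    return (current_img.astype(np.float64) - goal_img.astype(np.float64)).ravel()

def servo_step(current_img, goal_img, jacobian, gain=0.1):
    """One proportional step: v = -gain * pinv(J) * e, where `jacobian` is a
    (num_pixels x 6) placeholder mapping camera motion to intensity changes."""
    e = photometric_error(current_img, goal_img)
    return -gain * np.linalg.pinv(jacobian) @ e  # 6-vector camera velocity
```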


2021 ◽  
Vol 182 ◽  
pp. 295-309
Author(s):  
Hirohisa Kojima ◽  
Taku Okawara ◽  
Pavel M. Trivailo
