Combined force and visual control of an industrial robot

Robotica ◽  
2004 ◽  
Vol 22 (2) ◽  
pp. 163-171 ◽  
Author(s):  
Ricardo Carelli ◽  
Eduardo Oliva ◽  
Carlos Soria ◽  
Oscar Nasisi

This work proposes control structures that efficiently combine force control with visual servo control of robot manipulators. Impedance controllers are considered that are based both on visual servoing and on physical or fictitious force feedback, with the force and visual information combined in the image space. Force and visual servo controllers included in extended hybrid control structures are also considered. Combining force-based and vision-based control allows the task range of the robot to be extended to partially structured environments. The proposed controllers, implemented on an industrial SCARA-type robot, are tested in tasks involving physical and virtual contact with the environment.
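The image-space combination of force and visual information described above can be sketched as a fictitious-force correction of the visual error: the measured contact force shifts the visual reference so the robot yields along constrained directions. A minimal sketch; the gains and the force-to-image mapping are illustrative assumptions, not values from the paper:

```python
import numpy as np

def image_space_impedance_error(e_img, f_img, k_f=0.05):
    """Blend a visual feature error with a (physical or fictitious)
    contact force, both expressed in image space.
    e_img : feature error in pixels
    f_img : contact force mapped into image space
    k_f   : illustrative compliance gain (an assumption)"""
    return e_img - k_f * f_img

def servo_command(e_img, f_img, lam=0.5, k_f=0.05):
    # Proportional visual servo acting on the impedance-corrected error.
    return -lam * image_space_impedance_error(e_img, f_img, k_f)
```

With zero contact force this reduces to a plain proportional visual servo; a force along an image direction relaxes the tracking error along that direction.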

2021 ◽  
Author(s):  
SHOGO ARAI ◽  
Yoshihiro Miyamoto ◽  
Akinari Kobayashi ◽  
Kazuhiro Kosuge

Visual servo control uses images obtained by a camera for robotic control. This study focuses on the problem of positioning a target object using a robotic manipulator with image-based visual servo (IBVS) control. To perform the positioning task, image-based visual servoing requires visual features that can be extracted from the appearance of the target object. The positioning error therefore tends to increase for textureless objects, such as industrial parts, since it is difficult to extract differences in the visual features between the current and goal images. To solve this problem, this paper presents a novel visual servoing method named "Active Visual Servoing" (AVS), which projects patterned light onto the target object using a projector. The design of the projection pattern affects the positioning error; AVS uses a theoretically derived optimal pattern that maximizes the differences between the current and goal images. The experimental results show that the proposed method reduces the positioning error by more than 97% compared to conventional image-based visual servoing.
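The conventional IBVS baseline the paper compares against is the classic proportional law on the feature error, using the standard interaction matrix of an image point. A minimal sketch of those textbook formulas (not code from the paper):

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z,
    relating the camera twist [vx, vy, vz, wx, wy, wz] to the point's
    image-plane velocity (standard IBVS formula)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(L, e, lam=0.5):
    """Classic IBVS control law: camera velocity v = -lambda * pinv(L) @ e,
    where e = s - s* is the stacked feature error."""
    return -lam * np.linalg.pinv(L) @ e
```

In practice several points are stacked so that L becomes full rank; AVS's contribution is to make the feature differences in e large and well conditioned for textureless objects.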


2017 ◽  
Vol 2017 ◽  
pp. 1-6 ◽  
Author(s):  
Liying Zou ◽  
Huiguang Li ◽  
Wei Zhao ◽  
Lei Zhu

This paper presents a novel control strategy that enables a vertical take-off and landing (VTOL) aircraft to accomplish a pinpoint landing task. The control development is based on the image-based visual servoing method and the back-stepping technique; its design differs from existing methods because the controller maps the image errors onto the actuator space via a visual model that does not contain the depth information of the feature point. The novelty of the proposed method is to extend the image-based visual servoing technique to VTOL aircraft control. In addition, Lyapunov theory is used to prove the asymptotic stability of the VTOL aircraft visual servoing system, with the image error converging to zero. Furthermore, simulations have also been conducted to demonstrate the performance of the proposed method.


Author(s):  
Haoxiang Lang ◽  
Muhammad Tahir Khan ◽  
Kok-Kiong Tan ◽  
Clarence W. W. De Silva

Mobile robots that integrate visual servo control for facilitating autonomous grasping and manipulation are the focus of this paper. In view of mobility, they have wider application than traditional fixed-base robots with visual servoing. Visual servoing is widely used in mobile robot navigation; however, there are few reports on applying it to mobile manipulation. In this paper, challenges and limitations of applying visual servoing in mobile manipulation are discussed. Next, two classical approaches, image-based visual servoing (IBVS) and position-based visual servoing (PBVS), are introduced along with their advantages and disadvantages. Simulations in Matlab are carried out using the two methods, and their advantages and drawbacks are illustrated and discussed. On this basis, a system for mobile manipulation is proposed, comprising IBVS with an eye-in-hand camera configuration. Simulations and experiments are carried out with this robot configuration in a search and rescue scenario, which show good performance.


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7121
Author(s):  
Yongchao Luo ◽  
Shipeng Li ◽  
Di Li

Robot control based on visual information perception is a hot topic in the industrial robot domain, as it makes robots capable of doing more in complex environments. However, a complex visual background in an industrial environment brings great difficulty in recognizing the target image, especially when a target is small or far from the sensor. Therefore, target recognition is the first problem that should be addressed in a visual servo system. This paper considers common complex constraints in industrial environments and proposes a You Only Look Once Version 2 Region of Interest (YOLO-v2-ROI) neural network image-processing algorithm based on machine learning. The proposed algorithm combines the advantages of rapid YOLO detection with the effective identification of the ROI pooling structure, which can quickly locate and identify different objects in different fields of view. This method can also lead the robot vision system to recognize and classify a target object automatically, improve the vision system's efficiency, avoid blind movement, and reduce the computational load. The proposed algorithm is verified by experiments. The experimental results show that the learning algorithm constructed in this paper achieves real-time image-detection speed and demonstrates strong adaptability and recognition ability when processing images with complex backgrounds, such as varying backgrounds, lighting, or perspectives. In addition, the algorithm can also effectively identify and locate visual targets, which improves the environmental adaptability of a visual servo system.
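The ROI-gating idea — discarding detections outside the working region so the servo loop reacts only to relevant targets — can be illustrated with a plain-Python sketch. The detection data layout is an assumption for illustration; the paper's YOLO-v2-ROI network itself is not reproduced here:

```python
def filter_detections_to_roi(detections, roi):
    """Keep only detections whose box centre lies inside the region
    of interest.
    detections : list of (label, confidence, (x1, y1, x2, y2)) boxes
    roi        : (x1, y1, x2, y2) region in the same pixel coordinates"""
    rx1, ry1, rx2, ry2 = roi
    kept = []
    for label, conf, (x1, y1, x2, y2) in detections:
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # box centre
        if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:
            kept.append((label, conf, (x1, y1, x2, y2)))
    return kept
```

The surviving box can then be handed to the servo controller as the target feature, which is what lets the robot avoid blind movement in a cluttered field of view.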


Author(s):  
T Eun ◽  
H S Cho

Hybrid position/force control is an effective tool for carrying out a task whose geometry constrains the position of a manipulator. This paper presents a task-oriented architecture for hybrid control, taking into consideration the complexity of the manipulator dynamics and uncertainties in the external constraints. For this purpose an adaptive pole assignment self-tuning algorithm was adopted based upon six independent decoupled ARMA models which represent position and force dynamics of manipulators in the task-oriented coordinate frame. To complete the control architecture, a control input transformation algorithm and an output synthesizing algorithm were developed in the task-oriented frame. These algorithms were designed to be easily transformable between coordinate frames in order to be applicable to a variety of tasks. To demonstrate the validity of the architecture, three example tasks were simulated using a manipulator whose kinematic and dynamic characteristics are analogous to those of a Puma 560 industrial robot.
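The task-frame decomposition underlying hybrid position/force control is conventionally written with a diagonal selection matrix S that assigns each axis to position control (S_ii = 1) or force control (S_ii = 0). A minimal sketch of that blending step; the paper's adaptive ARMA self-tuning loops are not reproduced:

```python
import numpy as np

def hybrid_control(selection, u_pos, u_force):
    """Blend position- and force-control commands in the task frame.
    selection : per-axis flags, 1 = position-controlled, 0 = force-controlled
    u_pos     : command from the position controller
    u_force   : command from the force controller"""
    S = np.diag(selection)
    I = np.eye(S.shape[0])
    return S @ u_pos + (I - S) @ u_force
```

For example, in a surface-following task the two tangential axes are position-controlled while the axis normal to the surface is force-controlled.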


2002 ◽  
Vol 35 (1) ◽  
pp. 485-490
Author(s):  
N. García ◽  
G. Mamani ◽  
O. Reinoso ◽  
O. Nasisi ◽  
R. Aracil ◽  
...  

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 642
Author(s):  
Zubair Arif ◽  
Yili Fu

Assistive robotic arms (ARAs) that provide care to the elderly and people with disabilities are a significant part of Human-Robot Interaction (HRI). Presently available ARAs provide non-intuitive interfaces, such as joysticks, for control and thus lack the autonomy to perform daily activities. This study proposes that, for inducing autonomous behavior in ARAs, the integration of visual sensors is vital, and visual servoing in the direct Cartesian control mode is the preferred method. Generally, ARAs are designed in a configuration where the end-effector’s position is defined in the fixed base frame while its orientation is expressed in the end-effector frame. We denote this configuration as ‘mixed frame robotic arms’. Consequently, conventional visual servo controllers, which operate in a single frame of reference, are incompatible with mixed frame ARAs. Therefore, we propose a mixed-frame visual servo control framework for ARAs. Moreover, we elucidate the task-space kinematics of mixed frame ARAs, which leads to the development of a novel “mixed frame Jacobian matrix”. The proposed framework was validated on a mixed frame JACO-2 7-DoF ARA using an adaptive proportional-derivative controller for image-based visual servoing (IBVS), which showed a significant 31% increase in the convergence rate, outperforming conventional IBVS joint controllers, especially in outstretched arm positions and near the base frame. Our results demonstrate the need for the mixed frame controller when deploying visual servo control on modern ARAs, as it can inherently cater to the robotic arm’s joint limits, singularities, and self-collision problems.




2018 ◽  
Vol 41 (1) ◽  
pp. 3-13 ◽  
Author(s):  
Tolga Yüksel

While quadrotors are becoming more popular, control of these unmanned air vehicles should be improved. In this study, a new intelligent image-based visual servo control system is proposed for the flight guidance control of quadrotors. Features are essential for visual servoing, and the proposed system utilizes the features of a shape that provides a clear sight of the landing site instead of point features. Furthermore, the system focuses on three problems of visual servo control: finding an appropriate gain value under velocity limits, keeping the shape features in the field of view, and tracking a moving target. As a solution to the first problem, a fuzzy logic unit that uses the feature error and error-derivative norms as inputs is deployed to assign the gain adaptively. The second problem is solved by defining safe and risky regions in the image plane so that precautions can be taken before the features leave the field of view. Another fuzzy logic unit is activated when the shape passes through a risky region to provide a counter velocity in the x or y direction and to drag the shape back into the safe region. As the last stage, Kalman filtering with Potter’s square root update is added to the proposed system to increase the feature tracking performance; this update also avoids divergence. To show the performance of the proposed system, simulation results for fixed and moving targets under feature disturbance are presented for a quadrotor. The results verify that the proposed system is capable of handling these visual servoing problems.
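The first problem above — a gain that respects velocity limits when the error is large yet converges quickly near the goal — is often handled with an error-dependent gain schedule. A smooth stand-in for the paper's fuzzy-logic unit; the functional shape and constants are assumptions:

```python
import numpy as np

def adaptive_gain(err_norm, lam_min=0.1, lam_max=1.0, scale=50.0):
    """Error-dependent servo gain: large feature errors get a small
    gain (keeping commanded velocities bounded), small errors get a
    large gain (for fast final convergence).
    err_norm : norm of the feature error (e.g. pixels)
    lam_min, lam_max, scale : illustrative tuning constants"""
    return lam_min + (lam_max - lam_min) * np.exp(-err_norm / scale)
```

The fuzzy unit in the paper plays the same role but additionally uses the error-derivative norm as a second input; the exponential here captures only the monotone error-to-gain mapping.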


2013 ◽  
Vol 01 (01) ◽  
pp. 143-162 ◽  
Author(s):  
Haoxiang Lang ◽  
Muhammad Tahir Khan ◽  
Kok-Kiong Tan ◽  
Clarence W. de Silva

A new trend in mobile robotics is to integrate visual information into feedback control to facilitate autonomous grasping and manipulation. The result is a visual servo system, which is quite beneficial in autonomous mobile manipulation. In view of mobility, it has wider application than traditional visual servoing in manipulators with a fixed base. In this paper, the state of the art of vision-guided robotic applications is presented along with the associated hardware. Next, two classical approaches of visual servoing, image-based visual servoing (IBVS) and position-based visual servoing (PBVS), are reviewed, and their advantages and drawbacks when applied to a mobile manipulation system are discussed. A general concept of modeling a visual servo system is demonstrated, and some challenges in developing visual servo systems are discussed. Finally, a practical mobile manipulation system developed for search and rescue and homecare robotics applications is introduced.

