Development of a Versatile Modular Platform for Aerial Manipulators

2020 ◽  
Author(s):  
Nikolaos Evangeliou ◽  
Athanasios Tsoukalas ◽  
Nikolaos Giakoumidis ◽  
Steffen Holter ◽  
Anthony Tzes

The scope of this chapter is the development of an aerial manipulator platform using an octarotor drone with an attached manipulator. An on-board spherical camera provides visual information about the drone’s surroundings, while a Pan-Tilt-Zoom camera system is used to track targets. A powerful computer with a GPU offers significant on-board computational power for the visual servoing of the aerial manipulator system. This vision system, along with the Inertial Measurement Unit based controller, provides exemplary guidance in confined and outdoor spaces. Coupled with the manipulator’s force-sensing capabilities, the system can interact with the environment. This aerial manipulation system is modular in that various payloads can be attached depending on the application (e.g., environmental sensing, facade cleaning, aerial netting for evader-drone geofencing). Experimental studies using a motion capture system are offered to validate the system’s efficiency.

Author(s):  
Shojiro Ishibashi ◽  
Hiroshi Yoshida ◽  
Tadahiro Hyakudome

Visual information is very important for the operation of underwater vehicles such as manned vehicles and remotely operated vehicles (ROVs), and it will also be essential for functions applied to next-generation autonomous underwater vehicles (AUVs). This information is generally acquired by optical sensors, and most underwater vehicles are equipped with several types of them. Above all, camera systems are installed on underwater vehicles as multiple units, which can together form a stereo camera system. In this paper, new functions that provide visual information derived from the stereo vision system are described, and methods to apply this visual information to underwater vehicles, together with their utility, are confirmed.
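The central relation a rectified stereo pair exploits is that depth is inversely proportional to disparity, Z = f·B/d. A minimal sketch of this computation (the function name and parameter values are illustrative, not from the paper):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    disparity_px: horizontal pixel offset of the point between left/right images
    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, a 50-pixel disparity with an 800-pixel focal length and a 12 cm baseline places the point at 1.92 m; halving the disparity doubles the estimated depth.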


2013 ◽  
Vol 01 (01) ◽  
pp. 143-162 ◽  
Author(s):  
Haoxiang Lang ◽  
Muhammad Tahir Khan ◽  
Kok-Kiong Tan ◽  
Clarence W. de Silva

A new trend in mobile robotics is to integrate visual information into feedback control to facilitate autonomous grasping and manipulation. The result is a visual servo system, which is quite beneficial in autonomous mobile manipulation. In view of its mobility, it has wider applicability than traditional visual servoing with fixed-base manipulators. In this paper, the state of the art of vision-guided robotic applications is presented along with the associated hardware. Next, two classical approaches to visual servoing, image-based visual servoing (IBVS) and position-based visual servoing (PBVS), are reviewed, and their advantages and drawbacks when applied to a mobile manipulation system are discussed. A general concept of modeling a visual servo system is demonstrated, and some challenges in developing visual servo systems are discussed. Finally, a practical mobile manipulation system developed for search-and-rescue and homecare robotics applications is introduced.
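The image-based branch (IBVS) reviewed here classically commands a camera twist v = −λL⁺e from the image-plane feature error e, where L stacks per-point interaction matrices. A minimal sketch using the standard interaction matrix for normalized point features (the gain and function interface are illustrative, not from the paper):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image point
    (x, y) at depth Z, for a perspective camera. Rows map the 6-DOF camera
    twist [vx, vy, vz, wx, wy, wz] to the point's image-plane velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(points, targets, depths, gain=0.5):
    """Camera twist command v = -gain * pinv(L) @ e for stacked point features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

When the features coincide with their targets, the error and hence the commanded twist are zero; otherwise the pseudoinverse distributes the correction across the six camera velocities.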


Actuators ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 105
Author(s):  
Thinh Huynh ◽  
Minh-Thien Tran ◽  
Dong-Hun Lee ◽  
Soumayya Chakir ◽  
Young-Bok Kim

This paper proposes a new method to control the pose of a camera mounted on a two-axis gimbal system for visual servoing applications. In these applications, the camera should remain stable while its line of sight points at a target located within the camera’s field of view. One of the most challenging aspects of these systems is the coupling in the gimbal kinematics as well as in the imaging geometry. Such factors must be considered in the control system design process to achieve better control performance. The novelty of this study is that the couplings in both the mechanism’s kinematics and the imaging geometry are decoupled simultaneously by a new technique, so popular control methods can be easily implemented and good tracking performance obtained. The proposed control configuration includes a calculation of the gimbal’s desired motion that takes the coupling influence into account, and a control law derived by the backstepping procedure. Simulation and experimental studies were conducted, and their results validate the efficiency of the proposed control system. Moreover, comparison studies were conducted between the proposed control scheme, image-based pointing control, and decoupled control. These comparisons demonstrate the superiority of the proposed approach, which requires fewer measurements and yields smoother transient responses.
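One common form of the kinematic coupling in a two-axis gimbal is that a pan rotation moves the line of sight less as the tilt angle grows, which a secant term can compensate. The sketch below is a generic illustration of that idea, not the paper’s control law; the gain and interface are assumptions:

```python
import numpy as np

def gimbal_rates(ex_px, ey_px, f_px, tilt, gain=1.0):
    """Map a pixel error to pan/tilt rate commands with a secant
    compensation of the pan/tilt coupling (hypothetical sketch)."""
    # Small-angle line-of-sight errors from pixel error and focal length (px).
    eps_pan = ex_px / f_px
    eps_tilt = ey_px / f_px
    tilt_rate = -gain * eps_tilt
    # The pan axis must rotate faster at large tilt angles to produce the
    # same image-plane motion, hence the 1/cos(tilt) factor.
    pan_rate = -gain * eps_pan / np.cos(tilt)
    return pan_rate, tilt_rate
```

At zero tilt the two channels are symmetric; at 45 degrees of tilt the pan command for the same pixel error grows by a factor of √2.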


Motor Control ◽  
1999 ◽  
Vol 3 (3) ◽  
pp. 237-271 ◽  
Author(s):  
Jeroen B.J. Smeets ◽  
Eli Brenner

Reaching out for an object is often described as consisting of two components that are based on different visual information. Information about the object's position and orientation guides the hand to the object, while information about the object's shape and size determines how the fingers move relative to the thumb to grasp it. We propose an alternative description, which consists of determining suitable positions on the object—on the basis of its shape, surface roughness, and so on—and then moving one's thumb and fingers more or less independently to these positions. We modeled this description using a minimum-jerk approach, whereby the finger and thumb approach their respective target positions approximately orthogonally to the surface. Our model predicts how experimental variables such as object size, movement speed, fragility, and required accuracy will influence the timing and size of the maximum aperture of the hand. An extensive review of experimental studies on grasping showed that the predicted influences correspond to human behavior.


Author(s):  
M. Alizadeh ◽  
C. Ratanasawanya ◽  
M. Mehrandezh ◽  
R. Paranjape

A vision-based servoing technique is proposed for a 2 degrees-of-freedom (DOF) model helicopter equipped with a monocular vision system. In general, these techniques can be categorized as image-based and position-based, where the task error is defined in the image plane in the former and in the physical space in the latter. The 2-DOF model helicopter requires a configuration-dependent feed-forward control to compensate for gravitational forces when servoing on a ground target. Therefore, position-based visual servoing is deemed more appropriate for precision control. Image information collected from a ground object, whose geometry is known a priori, is used to calculate the desired pose of the camera and, correspondingly, the desired joint angles of the model helicopter. To ensure smooth servoing, the task error is parameterized, using the information obtained from the linearized image Jacobian, and time-scaled to form a moving reference trajectory. At the higher level, a Linear Quadratic Regulator (LQR), augmented with a feed-forward term and an integrator, is used to track this trajectory. The discretization of the reference trajectory is achieved by an error-clamping strategy for optimal performance. The proposed technique was tested on a 2-DOF model helicopter capable of pitch and yaw maneuvers carrying a lightweight off-the-shelf video camera. The test results show that the optimized controller can servo the model helicopter to a hovering pose at an image acquisition rate as low as 2 frames per second.
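The LQR gain used by such a tracker can be computed offline by a backward Riccati recursion. A hedged sketch on a discrete double-integrator stand-in for one axis (the plant matrices and weights are illustrative, not the helicopter’s identified model):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K via backward Riccati recursion:
    P <- Q + A'P(A - BK), K = (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative single-axis plant: position/velocity double integrator, dt = 0.1 s.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))
```

The resulting closed-loop matrix A − BK has all eigenvalues inside the unit circle, so the regulated axis is stable; the paper’s feed-forward and integral terms would be added around this core.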


2020 ◽  
Vol 10 (18) ◽  
pp. 6480
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Sergio Cebollada ◽  
Óscar Reinoso

In this work, an incremental clustering approach to obtain compact hierarchical models of an environment is developed and evaluated. This process is performed using an omnidirectional vision sensor as the only source of information. The method is structured in two loop closure levels. First, the Node Level Loop Closure process selects the candidate nodes with which the new image can close the loop. Second, the Image Level Loop Closure process detects the most similar image and the node with which the current image closes the loop. The algorithm is based on an incremental clustering framework and leads to a topological model where the images of each zone tend to be clustered in different nodes. In addition, the method evaluates when two nodes are similar enough to be merged into a single node, and when a group of connected images is sufficiently different from the others that it should constitute a new node. To perform the process, omnidirectional images are described with global-appearance techniques in order to obtain robust descriptors. The use of such techniques in mapping and localization algorithms is less widespread than that of local feature descriptors, so this work also evaluates their efficiency in clustering and mapping. The proposed framework is tested with three different public datasets, captured by an omnidirectional vision system mounted on a robot while it traversed three different buildings. This framework is able to build the model incrementally while the robot explores an unknown environment. Some relevant parameters of the algorithm adapt their values as the robot captures new visual information, to fully exploit the feature space, and the model is updated and/or modified as a consequence. The experimental section shows the robustness and efficiency of the method, comparing it with a batch spectral clustering algorithm.
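The two-level structure can be sketched as: compare the new descriptor against node centroids first, then against individual images only inside the candidate nodes. The dictionary layout, thresholds, and cosine similarity below are illustrative assumptions, not the paper’s actual descriptor distance:

```python
import numpy as np

def two_level_loop_closure(desc, nodes, node_thresh=0.8, image_thresh=0.9):
    """Two-level loop closure: node-level candidate selection, then
    image-level matching inside the surviving candidates.
    `nodes` is a list of dicts with a 'centroid' vector and an 'images' list.
    Returns (similarity, node, image) for the best match, or None."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Node level: keep only nodes whose centroid is similar enough.
    candidates = [n for n in nodes if cos(desc, n["centroid"]) >= node_thresh]
    best = None
    # Image level: exhaustive comparison, but only within candidate nodes.
    for n in candidates:
        for img in n["images"]:
            s = cos(desc, img)
            if s >= image_thresh and (best is None or s > best[0]):
                best = (s, n, img)
    return best  # None -> no loop closure; start or extend a node instead
```

Filtering at the node level keeps the per-image comparisons bounded as the map grows, which is what makes the incremental construction tractable.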


2020 ◽  
Vol 17 (6) ◽  
pp. 172988142096907
Author(s):  
Changxin Li

Strawberries are easily damaged during picking. To reduce the fruit damage rate and improve the accuracy and efficiency of the picking robot, this work proposes a training method for a night-picking robot based on a motion capture system that combines international-standard badminton edge-feature detection with an automated capture algorithm. The badminton motion capture system can analyze game video in real time and, through motion capture, obtain the stroke accuracy of elite badminton players and the technical characteristics of their movements. The purpose of this article is to apply this high-precision motion capture vision control system to the design of the vision control system of a robot picking at night, so as to improve the robot’s observation and recognition accuracy and, in turn, the degree of automation of the operation. The reliability of the picking robot’s vision system is tested. Taking night picking as an example, image processing was performed on the edge features of the fruits picked by the robot. The results show that smoothing and enhancement preprocessing can successfully extract the edge features of fruit images. The target recognition rate and the positioning ability of the vision system were then evaluated with an edge feature test; both the recognition accuracy and the motion-edge positioning ability were well above 91%, satisfying the high-precision automation requirements of the picking robot.
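As a hedged illustration of the kind of edge-feature extraction described above, a plain Sobel gradient-magnitude detector can be sketched in NumPy (this is a generic edge detector, not the paper’s algorithm; the threshold fraction is arbitrary):

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Binary edge map from a grayscale float image via Sobel gradients.
    Pixels whose gradient magnitude exceeds `thresh` * max magnitude
    are marked as edges. Output is (H-2, W-2): borders are dropped."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H - 2, W - 2))
    gy = np.zeros((H - 2, W - 2))
    # Correlate with the 3x3 kernels by summing shifted copies of the image.
    for i in range(3):
        for j in range(3):
            patch = img[i:i + H - 2, j:j + W - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros_like(mag, dtype=bool)
    return mag > thresh * mag.max()
```

On a synthetic image with a vertical brightness step, the detector marks only the columns around the step, which is the behavior an edge-feature test like the one described would check for.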

