Visual Servoing Corresponding to Various Obstacle Placements and Target Object Shapes Based on Learning in Virtual Environments

Author(s):  
Takuya IWASAKI ◽  
Kimitoshi YAMAZAKI
Author(s):  
Aaron Hao Tan ◽  
Abdulrahman Al-Shanoon ◽  
Haoxiang Lang ◽  
Moustafa El-Gindy

The development of image processing algorithms has grown rapidly over the past few decades with improvements in vision sensors and computational power. In this paper, a visual servo controller is designed and developed using the image-based method for a differential drive robot. The objective is to reach a desired pose relative to a target object, placed in the world frame, that carries four feature points. A full system model that includes the mobile base and camera is presented along with the design of a proportional controller. The system is implemented on the Husky A200 robot by Clearpath Robotics. MATLAB simulation and experimental results are analyzed and discussed, followed by conclusions and recommendations for future work.
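The abstract does not include equations or code; as a point of reference only, here is a minimal sketch of the standard image-based visual servoing proportional law it alludes to: the error between current and desired feature coordinates is mapped to a velocity command through the pseudo-inverse of an interaction matrix. The function names and the textbook point-feature interaction matrix below are illustrative assumptions, not the authors' implementation (which also models the differential-drive base).

```python
import numpy as np

def interaction_matrix(points, depths, f=1.0):
    """Stack the classic 2x6 point-feature interaction matrices.

    points : (N, 2) normalized image coordinates (x, y)
    depths : (N,) estimated depths Z of each feature point
    Textbook form for illustration; the base kinematics are omitted.
    """
    rows = []
    for (x, y), Z in zip(points, depths):
        rows.append([-f / Z, 0.0, x / Z, x * y, -(f + x ** 2), y])
        rows.append([0.0, -f / Z, y / Z, f + y ** 2, -x * y, -x])
    return np.array(rows)

def ibvs_proportional_step(s, s_star, depths, lam=0.5):
    """One step of the proportional IBVS law v = -lambda * L^+ (s - s*)."""
    e = (s - s_star).reshape(-1)           # stacked feature error
    L = interaction_matrix(s, depths)      # interaction (image Jacobian) matrix
    v = -lam * np.linalg.pinv(L) @ e       # 6-DoF camera velocity command
    return v
```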


2019 ◽  
Author(s):  
Bei Xiao ◽  
Shuang Zhao ◽  
Ioannis Gkioulekas ◽  
Wenyan Bi ◽  
Kavita Bala

When judging the optical properties of a translucent object, humans often look at sharp geometric features such as edges and thin parts. Analysis of the physics of light transport shows that these sharp geometries are necessary for scientific imaging systems to be able to accurately measure the underlying material optical properties. In this paper, we examine whether human perception of translucency is likewise affected by the presence of sharp geometry, which could confound our perceptual inferences about an object's optical properties. We employ physically accurate simulations to create visual stimuli of translucent materials with varying shapes and optical properties under different illuminations. We then use these stimuli in psychophysical experiments, where human observers are asked to match an image of a target object by adjusting the material parameters of a match object with different geometric sharpness, lighting geometry, and 3D geometry. We find that the level of geometric sharpness significantly affects the translucency perceived by observers. These findings generalize across a few illuminations and object shapes. Our results suggest that the perceived translucency of an object depends on both the underlying material optical parameters and the 3D shape. We also conduct analyses using computational metrics including (luminance-normalized) L2, the structural similarity index (SSIM), and Michelson contrast. We find that these image metrics cannot predict the perceptual results, suggesting that low-level image cues are not sufficient to explain our results.
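For readers unfamiliar with the image metrics mentioned, the following is a minimal sketch of how luminance-normalized L2, SSIM, and Michelson contrast are typically computed on a pair of grayscale images. The normalization choice and the use of scikit-image's structural_similarity are assumptions for illustration, not the authors' exact analysis pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity

def luminance_normalized_l2(img_a, img_b):
    """RMS difference after scaling each image to unit mean luminance (assumed normalization)."""
    a = img_a / img_a.mean()
    b = img_b / img_b.mean()
    return np.sqrt(np.mean((a - b) ** 2))

def michelson_contrast(img):
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
    lmax, lmin = float(img.max()), float(img.min())
    return (lmax - lmin) / (lmax + lmin)

def compare_images(img_a, img_b):
    """Collect the three metrics for a target/match image pair."""
    return {
        "normalized_l2": luminance_normalized_l2(img_a, img_b),
        "ssim": structural_similarity(img_a, img_b,
                                      data_range=float(img_a.max() - img_a.min())),
        "michelson_target": michelson_contrast(img_a),
        "michelson_match": michelson_contrast(img_b),
    }
```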


2021 ◽  
Author(s):  
SHOGO ARAI ◽  
Yoshihiro Miyamoto ◽  
Akinari Kobayashi ◽  
Kazuhiro Kosuge

Visual servo control uses images obtained by a camera for robotic control. This study focuses on the problem of positioning a target object using a robotic manipulator with image-based visual servo (IBVS) control. To perform the positioning task, image-based visual servoing requires visual features that can be extracted from the appearance of the target object. The positioning error therefore tends to increase, especially for textureless objects such as industrial parts, because it is difficult to extract differences in the visual features between the current and goal images. To solve this problem, this paper presents a novel visual servoing method named "Active Visual Servoing" (AVS). AVS projects patterned light onto the target object using a projector. Because the design of the projection pattern affects the positioning error, AVS uses a theoretically derived optimal pattern that maximizes the differences between the current and goal images. The experimental results show that the proposed active visual servoing method reduces the positioning error by more than 97% compared to conventional image-based visual servoing.
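As a rough illustration of the idea that the projected pattern should maximize the difference between current and goal images, the sketch below selects, among a set of candidate patterns, the one giving the largest image difference between rendered current and goal views. The rendering callables and candidate set are placeholders; the paper's theoretical derivation of the optimal pattern is not reproduced here.

```python
import numpy as np

def image_difference(img_a, img_b):
    """L2 norm of the pixelwise difference between two grayscale images."""
    return np.linalg.norm(img_a.astype(float) - img_b.astype(float))

def select_projection_pattern(patterns, render_current, render_goal):
    """Pick the candidate pattern maximizing the current-vs-goal image difference.

    patterns       : iterable of candidate projection patterns (placeholder)
    render_current : callable pattern -> image of the lit object at the current pose (placeholder)
    render_goal    : callable pattern -> image of the lit object at the goal pose (placeholder)
    """
    scored = [(image_difference(render_current(p), render_goal(p)), p) for p in patterns]
    best_score, best_pattern = max(scored, key=lambda t: t[0])
    return best_pattern, best_score
```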


2021 ◽  
Vol 2 ◽  
Author(s):  
Lauren Buck ◽  
Richard Paris ◽  
Bobby Bodenheimer

Spatial perception in immersive virtual environments, particularly distance perception, is a well-studied topic in the virtual reality literature. Distance compression, or the underestimation of distances, has historically been prevalent in all virtual reality systems. The problem of distance compression remains open, but recent work has shown that as systems have developed, the level of distance compression has decreased. Here, we add evidence to this trend by beginning the assessment of distance compression in the HTC Vive Pro. To our knowledge, there are no archival results reporting findings about distance compression in this system. Using a familiar paradigm for studying distance compression in virtual reality hardware, we asked users to blind walk to a target object placed in a virtual environment and assessed their distance judgments from the distances they walked. We find that distance compression in the HTC Vive Pro mirrors that of the HTC Vive. Our results are not particularly surprising, given the nature of the differences between the two systems, but they lend credence to the finding that resolution does not affect distance compression. More extensive study should be performed to reinforce these results.
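As a reading aid only, here is a minimal sketch of the usual way blind-walking data are reduced to a compression measure: the ratio of walked distance to true target distance, averaged per condition. The numbers in the example are hypothetical and do not come from the study.

```python
import numpy as np

def compression_ratio(walked, actual):
    """Per-trial ratio of walked distance to true target distance.

    Values below 1.0 indicate distance compression (underestimation).
    """
    walked = np.asarray(walked, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return walked / actual

# Hypothetical example: targets placed at 3, 5, and 7 meters.
ratios = compression_ratio(walked=[2.6, 4.2, 6.1], actual=[3.0, 5.0, 7.0])
print(ratios.mean())   # a mean ratio below 1 suggests compression
```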


2015 ◽  
Vol 772 ◽  
pp. 512-517 ◽  
Author(s):  
Yu Cui ◽  
Kenta Nishimura ◽  
Yusuke Sunami ◽  
Mamoru Minami ◽  
Takayuki Matsuno ◽  
...  

Visual servoing to a moving target with fixed hand-eye cameras mounted at the hand of a robot is inevitably affected by dynamical oscillations of the hand, so it is hard to keep the target at the center of the camera's image, since the nonlinear dynamical effects of the whole manipulator degrade the tracking ability. In order to solve this problem, an eye-vergence system is proposed in which the visual servoing controllers of the hand and of the eye-vergence are controlled independently, so that the cameras can observe the target object at the center of the camera images through eye-vergence functions. Because the eyes have small mass, the cameras' sight direction can be rotated quickly, so the tracking ability of the eye-vergence motion is superior to that of the fixed hand-eye configuration. In this report, the merits of eye-vergence visual servoing for pose tracking have been confirmed through frequency-response experiments.


Author(s):  
NAONORI UEDA ◽  
KENJI MASE

This paper proposes a robust method for tracking an object contour in a sequence of images. In this method, the object extraction and tracking problems are solved simultaneously. Furthermore, it is applicable to the tracking of arbitrary shapes since it does not need a priori knowledge about the object shapes. For the contour tracking, energy-minimizing elastic contour models, newly presented in this paper, are utilized. The contour tracking is formulated as an optimization problem: finding the position that minimizes both the elastic energy of the model and the potential energy derived from the edge potential image that contains the target object contour. We also present an algorithm which efficiently solves the energy minimization problem within a dynamic programming framework. The algorithm enables us to obtain an optimal solution even when the variables to be optimized are not ordered. We show the validity and usefulness of the proposed method with experimental results.
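Below is a minimal sketch of the standard dynamic-programming approach to contour energy minimization over an ordered chain of contour points (a Viterbi-style search over a small window of candidate positions per point). The paper's extension to unordered variables and its specific elastic model are not reproduced; the energy terms and names here are illustrative assumptions.

```python
import numpy as np

def dp_contour_step(candidates, edge_potential, alpha=1.0):
    """One DP pass over an ordered open chain of contour points.

    candidates     : (N, K, 2) integer pixel positions, K candidates per contour point
    edge_potential : 2D array, low values on strong edges (external energy)
    alpha          : weight of the elastic (spacing) term
    Minimizes sum_i P(c_i) + alpha * ||c_i - c_{i-1}||^2 (illustrative energy).
    """
    N, K, _ = candidates.shape
    cost = np.array([edge_potential[r, c] for r, c in candidates[0]])
    back = np.zeros((N, K), dtype=int)

    for i in range(1, N):
        new_cost = np.empty(K)
        for k in range(K):
            ext = edge_potential[tuple(candidates[i, k])]
            elastic = alpha * np.sum((candidates[i, k] - candidates[i - 1]) ** 2, axis=1)
            total = cost + elastic + ext
            back[i, k] = int(np.argmin(total))
            new_cost[k] = total[back[i, k]]
        cost = new_cost

    # Backtrack the minimizing path.
    best = [int(np.argmin(cost))]
    for i in range(N - 1, 0, -1):
        best.append(back[i, best[-1]])
    best.reverse()
    return np.array([candidates[i, best[i]] for i in range(N)])
```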


2021 ◽  
Author(s):  
SHOGO ARAI ◽  
Kazuya Konada ◽  
Naoya Yoshinaga ◽  
Akinari Kobayashi ◽  
Kazuhiro Kosuge

This study proposes a method for regrasping an object with a dual-arm robot equipped with general-purpose hands that is robust against grasping errors. In this paper, one arm hands over the object to the other arm, named the receiver arm. Because the hand-over arm first picks up the object with a general-purpose hand, the grasping error must be considered in order to increase the success rate of regrasping. In the online phase, the proposed method positions the object at an optimal pose at the time of regrasping using an image-based visual servoing (IBVS) approach to reduce the effect of the grasping error. In the planning phase, the proposed method computes the optimal pose for regrasping by maximizing the minimum singular value of the image Jacobian of IBVS, using a 3D model of the target object, to achieve high positioning accuracy. To regrasp objects of various shapes robustly against image noise and changes in lighting conditions, the image Jacobian of IBVS is computed by numerical differentiation using an actual data set. A large number of data sets, one per candidate grasp, would usually be required to compute the image Jacobian; to reduce this number, we propose a conversion method for the image Jacobian that requires only one data set corresponding to one representative grasp. The experimental results show that the proposed method regrasps target objects with general-purpose hands at high success rates and performs target object positioning with less than 0.7 mm positioning error.
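A minimal sketch of the planning-phase criterion described above, choosing among candidate regrasp poses the one whose image Jacobian has the largest minimum singular value, is given below. The Jacobian callable and the pose representation are placeholders, not the authors' data-driven Jacobian or conversion method.

```python
import numpy as np

def min_singular_value(J):
    """Smallest singular value of an image Jacobian; larger values indicate
    better-conditioned feature motion with respect to camera motion."""
    return np.linalg.svd(J, compute_uv=False).min()

def select_regrasp_pose(candidate_poses, jacobian_of_pose):
    """Pick the candidate pose maximizing the minimum singular value of its image Jacobian.

    candidate_poses  : iterable of pose parameters (placeholder representation)
    jacobian_of_pose : callable pose -> (2N, 6) image Jacobian (placeholder)
    """
    scored = [(min_singular_value(jacobian_of_pose(p)), p) for p in candidate_poses]
    best_score, best_pose = max(scored, key=lambda t: t[0])
    return best_pose, best_score
```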


2016 ◽  
Vol 28 (4) ◽  
pp. 543-558 ◽  
Author(s):  
Myo Myint ◽  
◽  
Kenta Yonemori ◽  
Akira Yanou ◽  
Khin Nwe Lwin ◽  
...  

[Figure: ROV with dual-eyes cameras and 3D marker]
Recently, a number of studies related to underwater vehicles have been conducted worldwide, driven by the large demand in different applications. In this paper, we propose visual servoing for an underwater vehicle using dual-eyes cameras. A new pose estimation scheme based on 3D model-based recognition is proposed for real-time pose tracking, to be applied in an Autonomous Underwater Vehicle (AUV). In this method, we use a 3D marker as a passive target that is simple but sufficiently rich in information. A 1-step Genetic Algorithm (GA) is utilized in the pose search process, formulated as an optimization, because of its effectiveness, simplicity, and promising recursive-evaluation performance for real-time pose tracking. The proposed system is implemented in software, and a Remotely Operated Vehicle (ROV) is used as a test-bed. In the simulated experiment, the ROV recognizes the target, estimates the relative pose of the vehicle with respect to the target, and controls the vehicle to be regulated at the desired pose. The PID control concept is adopted for the regulation function. Finally, the robustness of the proposed system is verified in the presence of physical disturbance and when the target object is partially occluded. Experiments are conducted in an indoor pool. Experimental results show the recognition accuracy and regulation performance, with errors kept at the centimeter level.
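As a rough illustration of the search strategy described above, evolving a population of candidate poses and scoring each by how well the projected 3D marker matches the camera images, with one generation advanced per video frame, here is a minimal hedged sketch. The fitness model, pose representation, and GA operators are all placeholders rather than the authors' 1-step GA implementation.

```python
import numpy as np

def evolve_one_generation(population, fitness, mutation_sigma=0.02, rng=None):
    """Advance a pose-search GA by a single generation (one video frame).

    population : (P, 6) candidate poses [x, y, z, roll, pitch, yaw], P assumed even
    fitness    : callable pose -> scalar score, e.g. how well the projected
                 3D marker model matches the current dual-eye images (placeholder)
    Keeps the better half and refills with mutated copies (placeholder operators).
    """
    rng = rng or np.random.default_rng()
    scores = np.array([fitness(p) for p in population])
    order = np.argsort(scores)[::-1]                      # best first
    elite = population[order[: len(population) // 2]]
    children = elite + rng.normal(0.0, mutation_sigma, size=elite.shape)
    new_population = np.vstack([elite, children])
    best_pose = population[order[0]]
    return new_population, best_pose
```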


Author(s):  
Alessandro R. L. Zachi ◽  
Hsu Liu ◽  
Fernando Lizarralde ◽  
Antonio C. Leite

This paper presents a control strategy for robot manipulators to perform 3D Cartesian tracking using visual servoing. Considering a fixed camera, the 3D Cartesian motion is decomposed into a 2D motion on a plane orthogonal to the optical axis and a 1D motion parallel to this axis. An image-based visual servoing approach is used to deal with the nonlinear control problem generated by the depth variation without requiring direct depth estimation. Due to the lack of camera calibration, an adaptive control method is used to ensure both depth and planar tracking in the image frame. The depth feedback loop is closed by measuring the image area of a target object attached to the robot end-effector. Simulation and experimental results obtained with a real robot manipulator illustrate the viability of the proposed scheme.
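The depth feedback described above, closing the loop along the optical axis with the measured image area of a target on the end-effector, can be illustrated with a minimal sketch. The area-to-depth relation used here (projected area proportional to 1/Z² for a small planar target facing the camera) and the simple proportional law are assumptions for illustration, not the paper's adaptive controller.

```python
import numpy as np

def depth_command_from_area(area_current, area_desired, gain=0.5):
    """Proportional command along the optical axis from image-area feedback.

    For a small planar target facing the camera, the projected area scales as
    1/Z^2, so sqrt(area_desired / area_current) approximates Z_current / Z_desired.
    A ratio above 1 means the target appears too small (too far): move forward.
    """
    ratio = np.sqrt(area_desired / area_current)
    return gain * (ratio - 1.0)   # signed velocity command along the optical axis

# Hypothetical example: target currently occupies 900 px^2, desired 1600 px^2.
v_z = depth_command_from_area(area_current=900.0, area_desired=1600.0)
```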


2013 ◽  
Vol 25 (1) ◽  
pp. 117-126 ◽  
Author(s):  
Su Keun Jeong ◽  
Yaoda Xu

In many everyday activities, we need to attend to and encode multiple target objects among distractor objects. For example, when driving a car on a busy street, we need to simultaneously attend to objects such as traffic signs, pedestrians, and other cars, while ignoring colorful and flashing objects in display windows. To explain how multiple visual objects are selected and encoded in visual STM, and in perception in general, the neural object file theory argues that, whereas object selection and individuation are supported by the inferior intraparietal sulcus (IPS), the encoding of detailed object features that enables object identification is mediated by the superior IPS and higher visual areas such as the lateral occipital complex (LOC). Nevertheless, because task-irrelevant distractor objects were never present in previous studies, it is unclear how distractor objects would impact neural responses related to target object individuation and identification. To address this question, in two fMRI experiments, we asked participants to encode target object shapes among distractor object shapes, with targets and distractors shown in different spatial locations and in different colors. We found that distractor-related neural processing occurred only at low, but not at high, target encoding load and impacted both target individuation in the inferior IPS and target identification in the superior IPS and LOC. However, such distractor-related neural processing was short-lived, as it was present only during the visual STM encoding period but not the delay period. Moreover, with spatial cuing of target locations in advance, distractor processing was attenuated during target encoding in the superior IPS. These results are consistent with the load theory of visual information processing. They also show that, whereas the inferior IPS and LOC were automatically engaged in distractor processing under low task load, with the help of precuing, the superior IPS was able to encode only the task-relevant visual information.

