Development and Analysis of a New Specialized Gripper Mechanism for Garment Handling

Author(s):  
Loan Le ◽  
Matteo Zoppi ◽  
Michal Jilich ◽  
Raffaello Camoriano ◽  
Dimiter Zlatanov ◽  
...  

This paper reports ongoing work on the design of a new gripper for garment handling. The development of this device is part of the CloPeMa European Project, which is creating a robot system for the automated manipulation of clothing and other textile items. First, we analyze the specifics of the application, determining the requirements for the design and functioning of the grasping system. Textiles do not have a stable shape and cannot be manipulated on the basis of a priori geometric knowledge. The necessary exploration of the material and the environment is performed with the help of tactile sensors embedded in the fingertips of the gripper, complementing the vision system of the robotic work cell. The chosen design solution is a simple mechanism able to perform the grasping task adequately and to permit exploratory finger motions. The kinematics and statics of the mechanism are outlined briefly and, in accord with initial experiments, used to validate the design.

Author(s):  
Muthukkumar S. Kadavasal ◽  
Abhishek Seth ◽  
James H. Oliver

A multi-modal teleoperation interface is introduced, featuring an integrated virtual-reality-based simulation augmented by sensors and image processing capabilities on board the remotely operated vehicle. The proposed virtual reality interface fuses an existing VR model with a live video feed and prediction states, thereby creating a multi-modal control interface. Virtual reality addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an on-board computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle's teleoperated state. As the vehicle and the operator share autonomy in stages, the operation is referred to as mixed autonomous. Finally, the system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle, effectively balancing autonomy between the human operator and the on-board vehicle intelligence. The stereo-vision-based obstacle avoidance system is initially implemented on a video-based teleoperation architecture, and experimental results are presented. The VR-based multi-modal teleoperation interface is expected to be more adaptable and intuitive than other interfaces.


Robotica ◽  
2007 ◽  
Vol 25 (5) ◽  
pp. 615-626 ◽  
Author(s):  
Wen-Chung Chang

SUMMARY: Robotic manipulators interacting with uncalibrated environments typically have limited positioning and tracking capabilities if control tasks cannot be appropriately encoded using available features in the environments. Specifically, to perform 3-D trajectory following operations employing binocular vision, it seems necessary to have a priori knowledge of pointwise correspondence information between the two image planes. However, such an assumption cannot be made for arbitrary smooth 3-D trajectories. This paper describes how one might enhance autonomous robotic manipulation for 3-D trajectory following tasks using eye-to-hand binocular visual servoing. Based on a novel encoded error, an image-based feedback control law is proposed without assuming pointwise binocular correspondence information. The proposed control approach can guarantee task precision by employing only an approximately calibrated binocular vision system. The goal of the autonomous task is to drive a tool mounted on the end-effector of the robotic manipulator to follow a visually determined smooth 3-D target trajectory at a desired speed with precision. The proposed control architecture is suitable for applications that require precise 3-D positioning and tracking in unknown environments. Our approach is successfully validated in a real task environment by experiments with an industrial robotic manipulator.


2020 ◽  
Author(s):  
Johannes Lutzmann ◽  
Ralf Sussmann ◽  
Huilin Chen ◽  
Frank Hase ◽  
Rigel Kivi ◽  
...  

<p>Ground-based column measurements of trace gases by FTIR spectrometers within the Total Carbon Column Observing Network (TCCON) provide an accurate ground reference for the validation of the nadir-viewing hyperspectral Tropospheric Monitoring Instrument (TROPOMI) on board the ESA satellite Sentinel-5 Precursor (S-5P). In such intercomparisons of two independent remote soundings, errors can occur because the a priori profiles used in the respective retrievals (i) differ from each other and (ii) both differ from the true atmospheric state at the moment of observation. Under certain conditions of atmospheric dynamics, e.g. polar vortex subsidence or stratospheric intrusions, which strongly alter the shape of vertical concentration profiles, these intercomparison errors can become considerable (Ostler et al., 2014).</p><p>In our work, funded by the German Space Agency DLR and performed as part of the ESA AO project TCCON4S5P, we search for potential sources of realistic common a priori profiles for S-5P and TCCON CH<sub>4</sub> and CO measurements that reduce these large errors. We examine the performance of a number of chemical transport models and data assimilation systems in reproducing dynamical effects and in minimizing intercomparison errors. In-situ profiles measured by AirCores are used for validation where available. We present the status and results of our ongoing work.</p><p>Reference:</p><p>Ostler, A., Sussmann, R., Rettinger, M., Deutscher, N. M., Dohe, S., Hase, F., Jones, N., Palm, M., and Sinnhuber, B.-M.: Multistation intercomparison of column-averaged methane from NDACC and TCCON: impact of dynamical variability, Atmos. Meas. Tech., 7, 4081–4101, doi:10.5194/amt-7-4081-2014, 2014.</p>


2015 ◽  
Vol 27 (2) ◽  
pp. 182-190 ◽  
Author(s):  
Gou Koutaki ◽  
Keiichi Uchimura

<div class="abs_img"> <img src="[disp_template_path]/JRM/abst-image/00270002/08.jpg" width="150" />Developed shogi robot system</div> The authors developed a low-cost, safe shogi robot system. A Web camera installed on the lower frame recognizes the pieces and their positions on the board, after which the game program selects a move. A robot arm then moves the selected piece to its destination to play against a human opponent. A fast, robust image processing algorithm is needed because a low-cost wide-angle Web camera and robot are used. The authors describe the image processing and robot systems, then discuss experiments conducted to verify the feasibility of the proposal, showing that even a low-cost system can be highly reliable.


2011 ◽  
Vol 143-144 ◽  
pp. 737-741 ◽  
Author(s):  
Hai Bo Liu ◽  
Wei Wei Li ◽  
Yu Jie Dong

The vision system is an important part of the whole robot soccer system. To win the game, the robot system must be faster and more accurate. A color image segmentation method using an improved seed-fill algorithm in the YUV color space is introduced in this paper. The new method dramatically reduces the amount of calculation and speeds up image processing. The paper compares it with the old method based on the RGB color space. The second step of the vision subsystem is identifying the color blocks separated in the first step, for which an improved seed-fill algorithm is used. The implementation on the MiroSot Soccer Robot System shows that the new method is fast and accurate.
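As an illustration of the general technique (not the authors' improved algorithm, whose details the abstract does not give), a minimal seed-fill segmentation in YUV space can be sketched as follows. The RGB-to-YUV conversion is the standard BT.601 form; the tolerance, the tiny test image, and the seed location are made-up examples.

```python
# Sketch: seed-fill (flood-fill) region growing in YUV space.
# Comparing only the chrominance channels U and V makes the fill
# robust to brightness variation, which is the usual motivation
# for segmenting robot-soccer colors in YUV rather than RGB.
from collections import deque

def rgb_to_yuv(r, g, b):
    """BT.601 RGB -> YUV conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return y, u, v

def seed_fill(image, seed, tol=20.0):
    """Grow a region from `seed`, accepting 4-neighbours whose
    U and V values stay within `tol` of the seed's chromaticity."""
    h, w = len(image), len(image[0])
    _, u0, v0 = rgb_to_yuv(*image[seed[0]][seed[1]])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                _, u, v = rgb_to_yuv(*image[nr][nc])
                if abs(u - u0) <= tol and abs(v - v0) <= tol:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# Tiny 3x3 example: an orange patch (a team marker, say) in the
# top-left corner surrounded by blue pixels.
img = [[(255, 128, 0), (255, 120, 10), (0, 0, 255)],
       [(250, 130, 5), (255, 125, 0),  (0, 10, 250)],
       [(0, 0, 255),   (5, 0, 250),    (0, 0, 255)]]
patch = seed_fill(img, (0, 0))  # the four orange pixels
```

A real implementation would additionally run this per color class and extract blob centroids for robot and ball localization.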


2020 ◽  
Author(s):  
Gyeongbin Mun ◽  
Young Gyun Kim ◽  
Myungjoon Kim ◽  
Byoungjun Jeon ◽  
Seong-Ho Kong ◽  
...  

Abstract

Background: Robot surgery has become prevalent because of its various advantages as a progressive method built on empirical research into conventional open surgery and minimally invasive surgery. However, the da Vinci surgical robot system, the most widely used and researched surgical robot, still requires ergonomic improvement because of the uncomfortable posture in which it has to be operated. The stereo viewer, the current vision system of the da Vinci surgical robot, requires the user to maintain a downward-looking posture, which causes discomfort and results in musculoskeletal disorders. To overcome this limitation, previous researchers have proposed a virtual reality (VR) head-mounted display (HMD) as an appropriate replacement for the stereo viewer, as it enables surgeons to move freely during surgery instead of having to look down into the stereo viewer. At present, there is no direct comparison between the stereo viewer and a VR HMD by surgeons. Comparative evaluations were performed using peg transfer tasks, a questionnaire, and the NASA Task Load Index (TLX). These were planned and performed by surgeons and novices to determine whether the stereo viewer can be replaced by the VR HMD and to investigate whether the VR HMD has ergonomic advantages.

Results: Based on the results of the peg transfer tasks, completion times when using the VR HMD were shorter than those when using the stereo viewer. In these tasks, the participants completed more executions using the VR HMD than the stereo viewer. Based on the questionnaire, the participants favored the VR HMD over the stereo viewer with respect to its visual and ergonomic performance. The modified NASA-TLX showed positive perceptions of the VR HMD.

Conclusions: This comparative evaluation confirmed that the VR HMD can be employed as a potential alternative to the stereo viewer in a surgical robot system to achieve ergonomic improvements. The VR HMD improved the task performance of the surgical robot system, and it provided an ergonomic operating environment.


Author(s):  
R. Kamguem ◽  
A. S. Tahan ◽  
V. Songmene

Surface roughness is important information required for product quality in mechanical engineering and manufacturing, especially in aeronautics, so its measurement must be conducted with care. In this work, a method of measuring surface roughness based on machine vision was studied. The authors use algorithms to extract new discriminatory features, beyond the usual statistical characteristics, from the coefficients of the wavelet transform, and these features are used to estimate the roughness parameters. This vision system allows several roughness parameters to be measured simultaneously, in order to match the desired surface function. The results were validated on three different families of materials: aluminum, cast iron, and brass. The impact of the material on the quality of the results was analyzed, leading to a multi-material development. The study showed that several roughness parameters can be estimated using only features extracted from the image and a neural network, without a priori knowledge of the machining parameters.
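To make the feature-extraction idea concrete, here is a hypothetical sketch (assumptions, not the paper's code) of a one-level Haar wavelet decomposition of a surface profile line, with simple statistics of the detail coefficients serving as roughness-related features. The trained neural network that would map such features to roughness parameters such as Ra is omitted; the two example profiles are made up.

```python
# Sketch: wavelet-based roughness features from a 1-D profile.
# The Haar detail coefficients capture high-frequency texture,
# so rougher profiles yield larger detail statistics.
import math

def haar_step(signal):
    """One level of the Haar transform: pairwise scaled averages
    (approximation) and differences (detail)."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def detail_features(signal):
    """Mean absolute value and energy of the detail coefficients."""
    _, d = haar_step(signal)
    mav = sum(abs(c) for c in d) / len(d)
    energy = sum(c * c for c in d)
    return mav, energy

# Two synthetic profile lines: small vs. large high-frequency variation.
smooth = [0.0, 0.1, 0.0, 0.1, 0.0, 0.1, 0.0, 0.1]
rough  = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
```

In a full pipeline, features like these (computed over several decomposition levels of the image) would form the input vector of the neural network that estimates the roughness parameters.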


2015 ◽  
Vol 27 (5) ◽  
pp. 543-551 ◽  
Author(s):  
Akio Namiki ◽  
Fumiyasu Takahashi

<div class="abs_img"> <img src="[disp_template_path]/JRM/abst-image/00270005/11.jpg" width="300" /> Defensive motion against attack</div> In this paper, we discuss how to generate defensive motions for a sword-fighting robot based on quick detection of the opposing player's initial motions. Our sword-fighting robot system, which has a stereo high-speed vision system, recognizes both the position of the human player and that of the sword grasped by the robot's hand. Further, it detects the moment when the human player initiates a move using ChangeFinder, a change-point detection method. Next, using the least squares method, it predicts the possible trajectories of the human player's sword from the moment the attack starts. Finally, it judges the type of attack and generates an appropriate defensive motion. The effectiveness of the proposed algorithm is verified by experimental results.
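The least-squares prediction step described above can be sketched as follows. This is an illustrative assumption, not the authors' implementation: each coordinate of the sword tip is fit as a polynomial (here, a line) in time and extrapolated; ChangeFinder and the high-speed vision pipeline are out of scope, and the observation data are made up.

```python
# Sketch: predict a future sword-tip position by fitting each
# coordinate independently with ordinary least squares.
def fit_line(ts, xs):
    """Closed-form least squares fit of x = a*t + b."""
    n = len(ts)
    st, sx = sum(ts), sum(xs)
    stt = sum(t * t for t in ts)
    stx = sum(t * x for t, x in zip(ts, xs))
    a = (n * stx - st * sx) / (n * stt - st * st)
    b = (sx - a * st) / n
    return a, b

def predict(ts, points, t_future):
    """Extrapolate a 2-D trajectory to time `t_future`."""
    preds = []
    for axis in range(2):
        a, b = fit_line(ts, [p[axis] for p in points])
        preds.append(a * t_future + b)
    return tuple(preds)

# Observations after the detected change point (seconds, metres):
# the tip moving right and slightly down at 100 fps.
ts = [0.00, 0.01, 0.02, 0.03]
pts = [(0.10, 0.50), (0.14, 0.49), (0.18, 0.48), (0.22, 0.47)]
x, y = predict(ts, pts, 0.05)
```

A higher polynomial order, or a separate fit per attack class, would be the natural refinement once the attack type has been judged.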


Author(s):  
Chengtao Cai ◽  
Bing Fan ◽  
Xiangyu Weng ◽  
Qidan Zhu ◽  
Li Su

Purpose: Because of their large field of view, omnistereo vision systems have been widely used as primary vision sensors in autonomous mobile robot tasks. The purpose of this article is to achieve real-time, accurate tracking with an omnidirectional vision robot system.

Design/methodology/approach: The authors present the key techniques required to obtain an accurate omnistereo target tracking and location robot system, including stereo rectification and target tracking in complex environments. A simple rectification model is proposed, and a local image processing method is used to reduce the computation time of the localization process. A target tracking method is improved to make it suitable for omnidirectional vision systems. Using the proposed methods together with some existing methods, an omnistereo target tracking and location system is established.

Findings: The experiments cover all the stages involved in obtaining a high-performance omnistereo vision system. The proposed rectification algorithm can process the image in real time, and the improved tracking algorithm outperforms the original algorithm. Statistical analysis of the experimental results demonstrates the effectiveness of the system.

Originality/value: A simple rectification model is proposed, and a local image processing method is used to reduce the computation time of the localization process. A target tracking method is improved to make it suitable for omnidirectional vision systems. Using the proposed methods together with some existing methods, an omnistereo target tracking and location system is established.

