Pose Estimation and Image Matching for Tidy-up Task using a Robot Arm

2021 ◽  
Vol 16 (4) ◽  
pp. 299-305
Author(s):  
Jinglan Piao ◽  
HyunJun Jo ◽  
Jae-Bok Song
2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Xin Wu ◽  
Canjun Yang ◽  
Yuanchao Zhu ◽  
Weitao Wu ◽  
Qianxiao Wei

Purpose: This paper presents a natural human–robot teleoperation system that capitalizes on the latest advances in monocular human pose estimation to simplify the scenario requirements for teleoperating heterogeneous robot arms.

Design/methodology/approach: Several optimizations are carried out in the joint extraction process to better balance the performance of the pose estimation network. To bridge the gap between the human joint pose in Cartesian space and the heterogeneous robot joint angles in radian space, a routinized mapping procedure is proposed.

Findings: The effectiveness of the developed joint extraction methods is verified through qualitative and quantitative experiments. Teleoperation experiments on different robots validate the feasibility of the control system.

Originality/value: The proposed system provides an intuitive and efficient human–robot teleoperation method using low-cost devices. It also enhances the controllability and flexibility of robot arms by releasing the human operator from motion constraints, paving a new way for effective robot teleoperation.
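The abstract does not detail the mapping from human joint positions in Cartesian space to robot joint angles in radian space. A minimal, dependency-free sketch of one common building block, recovering an interior joint angle (e.g. the elbow) from three 3D keypoints, might look as follows; the keypoint names and coordinates are illustrative, not taken from the paper:

```python
import math

def joint_angle(a, b, c):
    """Angle (in radians) at keypoint b, formed by segments b->a and b->c.

    a, b, c are (x, y, z) keypoints, e.g. shoulder, elbow, wrist,
    as produced by a monocular pose estimation network."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_t = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.acos(cos_t)

# Illustrative keypoints: an elbow bent at a right angle.
shoulder, elbow, wrist = (0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (0.3, 0.3, 0.0)
angle = joint_angle(shoulder, elbow, wrist)  # pi/2
```

A full mapping procedure would compute such angles for each articulated joint and retarget them onto the specific robot's joint limits, which is where the heterogeneity the paper addresses comes in.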


2021 ◽  
Vol 11 (18) ◽  
pp. 8750
Author(s):  
Styliani Verykokou ◽  
Argyro-Maria Boutsi ◽  
Charalabos Ioannidis

Mobile Augmented Reality (MAR) is designed to keep pace with high-end mobile devices and their powerful sensors. This evolution excludes users with low-end devices and network constraints. This article presents ModAR, a hybrid Android prototype that expands the MAR experience to the aforementioned target group. It combines feature-based image matching and pose estimation with fast rendering of 3D textured models. Planar objects in the real environment are used as pattern images for overlaying users’ meshes or the app’s default ones. Since ModAR is based on the OpenCV C++ library via the Android NDK and the OpenGL ES 2.0 graphics API, there are no dependencies on additional software, operating system version, or model-specific hardware. The developed 3D graphics engine implements optimized vertex-data rendering with a combination of data grouping, synchronization, sub-texture compression, and instancing for limited CPU/GPU resources and a single-threaded approach. It achieves up to a 3× speed-up compared to standard index rendering, and AR overlay of a 50 K-vertex 3D model in less than 30 s. Several deployment scenarios on pose estimation demonstrate that the oriented FAST detector with an upper bound on features per frame, combined with the ORB descriptor, yields the best results in terms of robustness and efficiency. It achieves a 90% reduction in image matching time compared to the AGAST detector with the BRISK descriptor, with pattern recognition accuracy above 90% across a wide range of scale changes, regardless of in-plane rotations and partial occlusions of the pattern.
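Both descriptor families compared in the abstract (ORB and BRISK) are binary descriptors, matched by Hamming distance. As a minimal, dependency-free sketch of the brute-force matching step, not ModAR's actual OpenCV-based implementation, matching could look like this (the distance threshold is an illustrative assumption):

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors (bytes)."""
    return sum(bin(b1 ^ b2).count("1") for b1, b2 in zip(d1, d2))

def match(query, train, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors.

    Returns (query_index, train_index) pairs whose best match falls
    under max_dist (an illustrative threshold, not from the paper)."""
    matches = []
    for qi, qd in enumerate(query):
        best = min(range(len(train)), key=lambda ti: hamming(qd, train[ti]))
        if hamming(qd, train[best]) <= max_dist:
            matches.append((qi, best))
    return matches

# Toy 2-byte descriptors; real ORB descriptors are 32 bytes (256 bits).
query = [b"\x00\x00", b"\xff\xff"]
train = [b"\xfe\xff", b"\x00\x01"]
pairs = match(query, train)  # each query maps to its closest train descriptor
```

Capping the number of features per frame, as the abstract describes for the oriented FAST detector, bounds the cost of exactly this quadratic matching loop, which is one reason it pays off on low-end devices.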


2017 ◽  
Author(s):  
Indika B. Wijayasinghe ◽  
Joseph D. Sanford ◽  
Shamsudeen Abubakar ◽  
Mohammad Nasser Saadatzi ◽  
Sumit K. Das ◽  
...  
2021 ◽  
Author(s):  
Kiruthikan Sithamparanathan ◽  
Sarangan Rajendran ◽  
Pirakash Thavapirakasam ◽  
A.M. Harsha ◽  
S. Abeykoon

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Kevin Yu ◽  
Thomas Wegele ◽  
Daniel Ostler ◽  
Dirk Wilhelm ◽  
Hubertus Feußner

Telemedicine has become a valuable asset in emergency response, assisting paramedics in decision making and first-contact treatment. Paramedics in unfamiliar environments or time-critical situations often encounter complications for which they require external advice. Modern ambulance vehicles are equipped with microphones, cameras, and vital-sign sensors, which allow experts to remotely join the local team. However, the visual channels are rarely used, since the statically installed cameras only allow broad views of the patient. They allow neither a close-up view nor a dynamic viewpoint controlled by the remote expert. In this paper, we present EyeRobot, a concept that enables dynamic viewpoints for telepresence through the intuitive control of the user’s head motion. In particular, EyeRobot utilizes the 6-degrees-of-freedom (6-DoF) pose estimation capabilities of modern head-mounted displays and applies them in real time to the pose of a robot arm. A stereo camera, installed on the end-effector of the robot arm, serves as the eyes of the remote expert at the local site. We put forward an implementation of EyeRobot and present the results of our pilot study, which indicate that its control is intuitive.


Author(s):  
Indika B. Wijayasinghe ◽  
Mohammad Nasser Saadatzi ◽  
Shamsudeen Abubakar ◽  
Dan O. Popa
