Modeling ArUco Markers Images for Accuracy Analysis of Their 3D Pose Estimation

2020
pp. short14-1–short14-7
Author(s):
Anton Poroykov
Pavel Kalugin
Sergey Shitov
Irina Lapitskaya

Fiducial markers are used in vision systems to determine the position of objects in space, reconstruct movement, and create augmented reality. Despite the abundance of work on analyzing the accuracy with which the spatial position of fiducial markers can be estimated, the question remains open. In this paper, we propose computer modeling of images containing ArUco markers for this purpose. The paper presents a modeling algorithm, implemented as software based on the OpenCV library. The algorithm projects the three-dimensional points of the marker corners into two-dimensional image points using the camera parameters and then renders the marker image at the new two-dimensional coordinates on the modeled image, using the perspective transformation obtained from these points. A number of dependencies were obtained that make it possible to evaluate the position estimation error as a function of marker size, including the probability of detecting a marker as a function of its area in the image.
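The projection-and-warp procedure described above can be sketched with standard OpenCV calls. In the sketch below the camera matrix, marker pose, dictionary, and image sizes are illustrative assumptions rather than values from the paper, and the corner comparison assumes OpenCV's standard marker-corner ordering (the aruco API names require OpenCV 4.7 or newer):

```python
import cv2
import numpy as np

marker_size = 0.05                                   # marker side length in metres (assumed)
img_w, img_h = 1280, 720                             # modeled image resolution (assumed)
camera_matrix = np.array([[1000.0, 0.0, img_w / 2],
                          [0.0, 1000.0, img_h / 2],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)                            # no lens distortion in this sketch

# Render the canonical marker bitmap (OpenCV >= 4.7; older versions use drawMarker).
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
marker_px = 200
marker_img = cv2.aruco.generateImageMarker(aruco_dict, 0, marker_px)

# 3D marker corners in the marker frame (z = 0 plane), standard ordering.
half = marker_size / 2
corners_3d = np.array([[-half,  half, 0.0],
                       [ half,  half, 0.0],
                       [ half, -half, 0.0],
                       [-half, -half, 0.0]])

# Assumed pose of the marker relative to the camera.
rvec = np.array([0.1, -0.2, 0.05])
tvec = np.array([0.0, 0.0, 0.5])

# Step 1: project the 3D corners into the image plane using the camera parameters.
corners_2d, _ = cv2.projectPoints(corners_3d, rvec, tvec, camera_matrix, dist_coeffs)
corners_2d = corners_2d.reshape(-1, 2).astype(np.float32)

# Step 2: warp the marker bitmap onto a white modeled image with the perspective
# transform obtained from the projected corners.
src = np.array([[0, 0], [marker_px - 1, 0],
                [marker_px - 1, marker_px - 1], [0, marker_px - 1]], dtype=np.float32)
H = cv2.getPerspectiveTransform(src, corners_2d)
modeled = np.full((img_h, img_w), 255, dtype=np.uint8)
cv2.warpPerspective(marker_img, H, (img_w, img_h), dst=modeled,
                    borderMode=cv2.BORDER_TRANSPARENT)

# Step 3: detect the marker in the modeled image and compare detected vs. true corners.
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())
found_corners, ids, _ = detector.detectMarkers(modeled)
if ids is not None:
    err = np.abs(found_corners[0].reshape(-1, 2) - corners_2d)
    print("mean corner error [px]:", float(err.mean()))
else:
    print("marker not detected")
```

Repeating this over many poses and marker sizes yields the kind of size-dependent error and detection-probability curves the abstract mentions.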

Author(s):
Hanieh Deilamsalehy
Timothy C. Havens
Joshua Manela

Precise, robust, and consistent localization is an important subject in many areas of science such as vision-based control, path planning, and simultaneous localization and mapping (SLAM). To estimate the pose of a platform, sensors such as inertial measurement units (IMUs), global positioning system (GPS) receivers, and cameras are commonly employed. Each of these sensors has its strengths and weaknesses. Sensor fusion is a known approach that combines the data measured by different sensors to achieve a more accurate or complete pose estimate and to cope with sensor outages. In this paper, a three-dimensional (3D) pose estimation algorithm is presented for an unmanned aerial vehicle (UAV) in an unknown, GPS-denied environment. A UAV can be fully localized by three position coordinates and three orientation angles. The proposed algorithm fuses the data from an IMU, a camera, and a two-dimensional (2D) light detection and ranging (LiDAR) sensor using an extended Kalman filter (EKF) to achieve accurate localization. Among the employed sensors, LiDAR has not received proper attention in the past, mostly because a 2D LiDAR can only provide pose estimation in its scanning plane and thus cannot obtain a full pose estimate in a 3D environment. A novel method is introduced in this paper that employs a 2D LiDAR to improve the accuracy of the full 3D pose estimate acquired from an IMU and a camera, and it is shown that this method can significantly improve the precision of the localization algorithm. The proposed approach is evaluated and justified by simulation and real-world experiments.
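The fusion architecture described above follows the standard EKF predict/update cycle. The sketch below is a generic skeleton under assumed state, motion, and measurement models (f, h, and their Jacobians are placeholders), not the paper's filter:

```python
# Generic EKF skeleton (illustrative, not the paper's filter).
# Assumes a state vector x, covariance P, an IMU-driven motion model f with Jacobian
# F_jac, and a camera or 2D-LiDAR measurement model h with Jacobian H_jac.
import numpy as np

def ekf_predict(x, P, f, F_jac, Q, u, dt):
    """Propagate the state with the motion model driven by IMU input u."""
    x_pred = f(x, u, dt)
    F = F_jac(x, u, dt)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, h, H_jac, R):
    """Correct the predicted state with a measurement z (e.g., camera or 2D LiDAR)."""
    H = H_jac(x)
    y = z - h(x)                            # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In a fusion setting like the one described, the predict step runs at the IMU rate, while separate update calls with the camera and 2D-LiDAR measurement models correct the state whenever those measurements arrive.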


Author(s):  
Yahya Rasheed Alameer

The purpose of the research is to determine the effect of the mode of presentation of augmented reality models on the cognitive achievement of secondary students in the Jazan region in computer science. The researcher used a quasi-experimental approach, teaching two-dimensional augmented reality image models to the first experimental group and three-dimensional augmented reality image models to the second experimental group, in order to test the research hypotheses and reveal the relationship between the independent and dependent variables. The sample consisted of 60 students: 30 students in the first experimental group, who studied using the two-dimensional augmented reality models, and 30 students in the second experimental group, who studied using the three-dimensional augmented reality models. The results showed statistically significant differences at (α ≤ 0.05) between the mean scores of the two groups on the post-test of cognitive achievement, in favor of the second experimental group, which studied using the three-dimensional augmented reality models. In light of the results, recommendations and suggestions were made for developing the cognitive achievement of secondary students in computer science and other subjects.


Repositor
2020
Vol 2 (5)
pp. 553
Author(s):
Tirto Adhi Triambodo
Ali Sofyan Kholimi
Lailatul Husniah

Abstract: Sengkaling Recreation Park covers a total area of 9 hectares, 6 hectares of which consist of gardens and green trees. Given the size of the park, there is no map inside it and the attraction information center is located at the entrance, which leaves visitors confused once they are inside the park and want information about an attraction, and it takes them a long time to reach the attraction they are looking for. Based on these problems, an application is needed that provides information and navigation so that visitors can easily obtain attraction information and navigate to attraction locations. Augmented Reality is a technology that combines two-dimensional and/or three-dimensional virtual objects with a real three-dimensional environment. This Augmented Reality technology was used to build an information and navigation application for Sengkaling Recreation Park. The system was tested with a five-question questionnaire given to 30 respondents who used the AR Sengkaling Recreation Park application, and 91% of the responses were in agreement. The results therefore indicate that the Augmented Reality application was well received by visitors to Sengkaling Recreation Park.


2015
Vol 6 (1)
Author(s):
Kimitoshi Yamazaki
Kiyohiro Sogen
Takashi Yamamoto
Masayuki Inaba

Abstract This paper describes a method for the detection of textureless objects. Our target objects include furniture and home appliances, which have neither rich textural features nor characteristic shapes. Focusing on ease of application, we define a model that represents objects in terms of three-dimensional edgels and surfaces. Object detection is performed by superimposing input data on the model. A two-stage algorithm is applied to recover object poses: surfaces are used to extract candidates from the input data, and edgels are then used to identify the pose of a target object using two-dimensional template matching. Experiments using four real pieces of furniture and home appliances were performed to show the feasibility of the proposed method. We also suggest its possible applicability under occlusion and clutter conditions.
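The second stage, pose identification by 2D template matching, could look roughly like the sketch below; the Canny thresholds, the normalized cross-correlation score, and the template set (assumed to be pre-rendered from the edgel model for a set of candidate poses) are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch of the second stage only: identify a candidate's pose by
# matching pre-rendered edge templates against the candidate's edge image.
import cv2
import numpy as np

def match_pose(candidate_gray, templates):
    """candidate_gray: grayscale crop of a detected surface candidate.
    templates: list of (pose_label, edge_template) pairs, each a uint8 edge image."""
    edges = cv2.Canny(candidate_gray, 50, 150)
    best_label, best_score = None, -1.0
    for label, tpl in templates:
        if tpl.shape[0] > edges.shape[0] or tpl.shape[1] > edges.shape[1]:
            continue                               # template larger than the candidate
        res = cv2.matchTemplate(edges, tpl, cv2.TM_CCOEFF_NORMED)
        score = float(res.max())
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```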


2016
Vol 10 (4)
pp. 299-307
Author(s):
Luis Unzueta
Nerea Aranjuelo
Jon Goenetxea
Mikel Rodriguez
Maria Teresa Linaza

SISFOTENIKA
2020
Vol 10 (2)
pp. 152
Author(s):
Joe Yuan Mambu
Andria Kusuma Wahyudi
Brily Latusuay
Devi Elwanda Supit

In learning projectile motion and its velocity, students tend to rely on plain two-dimensional images in a science book. Some educational props exist, but they are usually very traditional and cannot be used for real calculation. The use of Augmented Reality (AR) as an educational method may raise curiosity and offer a unique way of learning projectile motion, since the motion can be seen in three dimensions. Augmented Reality itself is a combination of the real world and virtual objects. This application uses the Vuforia SDK, which is able to blend the real world and virtual objects. Through this application, we were able to simulate projectile motion and its velocity in a more realistic way, allow some interaction with reality, and take input from users so they can learn and see the results of the parameters they entered. Thus, with the advantages of AR, the application gives a more realistic feel than existing publicly available ones, as it can receive arbitrary input and show the output in AR.
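The kinematics such an application visualizes can be computed directly from the user-entered launch parameters. The following Python sketch is purely illustrative and is not the app's Unity/Vuforia code:

```python
# Illustrative sketch of ideal (drag-free) projectile kinematics from user input.
import math

def trajectory(v0, angle_deg, g=9.81, steps=50):
    """Return (x, y) points of a projectile launched from the origin at speed
    v0 [m/s] and elevation angle angle_deg [degrees]."""
    angle = math.radians(angle_deg)
    t_flight = 2.0 * v0 * math.sin(angle) / g      # time until it returns to y = 0
    points = []
    for i in range(steps + 1):
        t = t_flight * i / steps
        x = v0 * math.cos(angle) * t
        y = v0 * math.sin(angle) * t - 0.5 * g * t * t
        points.append((x, y))
    return points

# Example: a 10 m/s launch at 45 degrees covers about 10.2 m.
path = trajectory(10.0, 45.0)
print("range [m]:", round(path[-1][0], 2))
```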


Author(s):
Anang Pramono
Martin Dwiky Setiawan

The concept of education for children is important, and the aspects that must be considered are the methods and the learning media. In this research, innovative and alternative learning media are created to help children recognize fruits with Augmented Reality (AR). Augmented Reality (AR) is, in principle, a technology that can combine two-dimensional or three-dimensional virtual objects with a real environment and then project them. This learning media combines picture cards and virtual content. Markers on the picture cards are captured by the mobile device camera and processed, and animated 3D fruit then appears on the mobile screen in real time. By combining the real world (the real images on the cards) with the virtual, the application can stimulate children's imagination, curiosity, and motivation to keep learning. The 3D fruit models were created using Blender, and the Augmented Reality processing was built using Unity and the Vuforia SDK library. The fruit recognition application was tried with several child respondents and has been tested on several types and brands of Android-based mobile phones. Based on the research trials, 86% of 30 respondents stated that the developed application is very effective as a medium for introducing fruits.


Author(s):
Jan Stenum
Cristina Rossi
Ryan T. Roemmich

ABSTRACT Walking is the primary mode of human locomotion. Accordingly, people have been interested in studying human gait since at least the fourth century BC. Human gait analysis is now common in many fields of clinical and basic research, but gold standard approaches – e.g., three-dimensional motion capture, instrumented mats or footwear, and wearables – are often expensive, immobile, data-limited, and/or require specialized equipment or expertise for operation. Recent advances in video-based pose estimation have suggested exciting potential for analyzing human gait using only two-dimensional video inputs collected from readily accessible devices (e.g., smartphones, tablets). However, we currently lack: 1) data about the accuracy of video-based pose estimation approaches for human gait analysis relative to gold standard measurement techniques and 2) an available workflow for performing human gait analysis via video-based pose estimation. In this study, we compared a large set of spatiotemporal and sagittal kinematic gait parameters as measured by OpenPose (a freely available algorithm for video-based human pose estimation) and three-dimensional motion capture from trials where healthy adults walked overground. We found that OpenPose performed well in estimating many gait parameters (e.g., step time, step length, sagittal hip and knee angles) while some (e.g., double support time, sagittal ankle angles) were less accurate. We observed that mean values for individual participants – as are often of primary interest in clinical settings – were more accurate than individual step-by-step measurements. We also provide a workflow for users to perform their own gait analyses and offer suggestions and considerations for future approaches.
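One possible way to derive temporal gait parameters from two-dimensional keypoints is sketched below. The heel-strike heuristic (peaks of the fore-aft ankle position relative to the mid-hip) and the peak-spacing threshold are assumptions for illustration, not the authors' released workflow:

```python
# Heuristic extraction of step times from 2D keypoint trajectories (illustrative).
import numpy as np
from scipy.signal import find_peaks

def step_times(left_ankle_x, right_ankle_x, fps):
    """left_ankle_x, right_ankle_x: per-frame fore-aft ankle positions relative to
    the mid-hip keypoint, signed along the walking direction; fps: video frame rate."""
    # Assume successive heel strikes of the same foot are at least 0.4 s apart.
    min_gap = max(1, int(0.4 * fps))
    left_strikes, _ = find_peaks(left_ankle_x, distance=min_gap)
    right_strikes, _ = find_peaks(right_ankle_x, distance=min_gap)
    # Step time = interval between consecutive heel strikes of alternating feet.
    strikes = np.sort(np.concatenate([left_strikes, right_strikes]))
    return np.diff(strikes) / fps                # step times in seconds
```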


2021
Vol 17 (4)
pp. e1008935
Author(s):
Jan Stenum
Cristina Rossi
Ryan T. Roemmich

Human gait analysis is often conducted in clinical and basic research, but many common approaches (e.g., three-dimensional motion capture, wearables) are expensive, immobile, data-limited, and require expertise. Recent advances in video-based pose estimation suggest potential for gait analysis using two-dimensional video collected from readily accessible devices (e.g., smartphones). To date, several studies have extracted features of human gait using markerless pose estimation. However, we currently lack both an evaluation of video-based approaches against a human gait dataset covering a wide range of gait parameters on a stride-by-stride basis and a workflow for performing gait analysis from video. Here, we compared spatiotemporal and sagittal kinematic gait parameters measured with OpenPose (open-source video-based human pose estimation) against simultaneously recorded three-dimensional motion capture from overground walking of healthy adults. When assessing all individual steps in the walking bouts, we observed mean absolute errors between motion capture and OpenPose of 0.02 s for temporal gait parameters (i.e., step time, stance time, swing time and double support time) and 0.049 m for step lengths. Accuracy improved when spatiotemporal gait parameters were calculated as individual participant mean values: mean absolute error was 0.01 s for temporal gait parameters and 0.018 m for step lengths. The greatest difference in gait speed between motion capture and OpenPose was less than 0.10 m/s. Mean absolute errors of sagittal-plane hip, knee, and ankle angles between motion capture and OpenPose were 4.0°, 5.6°, and 7.4°, respectively. Our analysis workflow is freely available, involves minimal user input, and does not require prior gait analysis expertise. Finally, we offer suggestions and considerations for future applications of pose estimation for human gait analysis.
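The two error summaries reported above (over all individual steps versus over per-participant means) could be computed as in the following illustrative sketch, which is not the published analysis code:

```python
# MAE over all individual steps vs. MAE over per-participant means (illustrative).
import numpy as np

def mae_per_step(mocap, openpose):
    """mocap, openpose: dicts mapping participant id -> array of step-wise values
    of one gait parameter (e.g., step time), aligned step by step."""
    diffs = [np.abs(mocap[p] - openpose[p]) for p in mocap]
    return float(np.mean(np.concatenate(diffs)))

def mae_per_participant(mocap, openpose):
    """Same comparison, but on each participant's mean value of the parameter."""
    diffs = [abs(mocap[p].mean() - openpose[p].mean()) for p in mocap]
    return float(np.mean(diffs))
```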

