Comparative Study of Relative-Pose Estimations from a Monocular Image Sequence in Computer Vision and Photogrammetry

Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1905 ◽  
Author(s):  
Tserennadmid Tumurbaatar ◽  
Taejung Kim

Techniques for measuring the position and orientation of an object from corresponding images are based on the principles of epipolar geometry in both the computer vision and photogrammetric fields. Many different approaches have been developed in computer vision, increasing the automation of purely photogrammetric processes. The aim of this paper is to evaluate the main differences between photogrammetric and computer vision approaches to estimating the pose of an object from image sequences, and how these differences should inform the choice of processing technique when using a single camera. The use of a single camera in consumer electronics has increased enormously, even though most 3D user interfaces require additional devices to sense 3D motion as input. In this regard, using a monocular camera to determine 3D motion is distinctive. However, we argue that relative-pose estimation from monocular image sequences has not been studied thoroughly by comparing photogrammetric and computer vision methods side by side. To estimate motion parameters characterized by 3D rotations and 3D translations, estimation methods developed in both fields are implemented. This paper describes a mathematical motion model for the compared approaches, differentiating their geometric properties and their estimation of the motion parameters. A precision analysis is conducted to investigate the main characteristics of the methods in both fields. The results of the comparison indicate the differences between the estimations in both fields in terms of accuracy across the test datasets. We show that homography-based approaches are more accurate than essential-matrix or relative orientation–based approaches under noisy conditions.

1993 ◽  
Vol 03 (04) ◽  
pp. 797-831 ◽  
Author(s):  
V. CAPPELLINI ◽  
A. MECOCCI ◽  
A. DEL BIMBO

Motion analysis is of high interest in many different fields for a number of crucial applications. Short-term motion analysis addresses the computation of motion parameters or the qualitative estimation of the motion field. Long-term motion analysis aims at the understanding of motion and includes reasoning about motion properties. Image sequences are in general processed to perform the above motion analysis. These subjects are considered in this review with reference to the most significant results in the literature, at both the theoretical and application levels.


Author(s):  
Ran Li ◽  
Nayun Xu ◽  
Xutong Lu ◽  
Yucheng Xing ◽  
Haohua Zhao ◽  
...  

Electronics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 159
Author(s):  
Paulo J. S. Gonçalves ◽  
Bernardo Lourenço ◽  
Samuel Santos ◽  
Rodolphe Barlogis ◽  
Alexandre Misson

The purpose of this work is to develop computational intelligence models based on neural networks (NN), fuzzy models (FM), support vector machines (SVM) and long short-term memory networks (LSTM) to predict human pose and activity from image sequences, using computer vision approaches to gather the required features. To obtain the human pose semantics (output classes) from a set of 3D points that describe the human body model (the input variables of the predictive model), prediction models were learned from the acquired data, for example, video images. Likewise, to predict the semantics of the atomic activities that compose an activity, based again on the human body model extracted at each video frame, prediction models were learned using LSTM networks. In both cases, the best-performing learned models were implemented in an application to test the system. The SVM model achieved 95.97% correct classification of the six human poses considered in this work, in test situations different from the training phase. The learned LSTM model achieved an overall accuracy of 88%, also in test situations different from the training phase. These results demonstrate the validity of both approaches for predicting human pose and activity from image sequences. Moreover, the system is capable of identifying the atomic activities and quantifying the time interval in which each activity takes place.
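The SVM stage described above (pose class from a vector of 3D body-model points) can be sketched with scikit-learn. The joint count, class names, and synthetic data below are illustrative assumptions, not the authors' dataset or feature layout.

```python
# Minimal sketch of an SVM pose classifier: each sample is a
# flattened vector of 3D body-model points, the label is a pose
# class. Data here is synthetic (one noisy cluster per pose).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
N_JOINTS = 15                                   # assumed body-model size
POSES = ["stand", "sit", "lie", "bend", "reach", "walk"]  # six classes

# One cluster of joint configurations per pose, with small noise.
centers = rng.normal(size=(len(POSES), N_JOINTS * 3))
X = np.vstack([c + 0.1 * rng.normal(size=(50, N_JOINTS * 3))
               for c in centers])
y = np.repeat(np.arange(len(POSES)), 50)

# Hold out data the model never saw, mirroring the paper's
# "test situations different from the training phase".
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On such well-separated synthetic clusters the classifier separates the classes almost perfectly; the paper's 95.97% figure reflects real video-derived features, which are far noisier.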


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1074 ◽  
Author(s):  
Weiya Chen ◽  
Chenchen Yu ◽  
Chenyu Tu ◽  
Zehua Lyu ◽  
Jing Tang ◽  
...  

Real-time sensing and modeling of the human body, especially the hands, is an important research endeavor for various applications such as natural human-computer interaction. Hand pose estimation is a major academic and technical challenge due to the complex structure and dexterous movement of human hands. Boosted by advancements in both hardware and artificial intelligence, various prototypes of data gloves and computer-vision-based methods have been proposed in recent years for accurate and rapid hand pose estimation. However, existing reviews have focused either on data gloves or on vision-based methods, or even on a particular type of camera, such as the depth camera. The purpose of this survey is to conduct a comprehensive and timely review of recent research advances in sensor-based hand pose estimation, covering both wearable and vision-based solutions. Hand kinematic models are discussed first. An in-depth review is then conducted of data gloves and vision-based sensor systems with their corresponding modeling methods. In particular, this review also discusses deep-learning-based methods, which are very promising for hand pose estimation. Moreover, the advantages and drawbacks of current hand pose estimation methods, their scope of application, and related challenges are also discussed.


1999 ◽  
Vol 74 (3) ◽  
pp. 174-192 ◽  
Author(s):  
S. Wachter ◽  
H.-H. Nagel
