Design of Three-Dimensional Gesture Recognition and Motion Tracking Human-Computer Intelligent Interaction System based on PAJ7620

2021 · Vol. 2005 (1) · pp. 012085
Author(s): WANG Jian-liang, WEI Ye, LI Yang, MENG Yuan, LI Ming-yu, ...

2017 · Vol. 14 (5) · pp. 172988141773275
Author(s): Francisco J. Perez-Grau, Fernando Caballero, Antidio Viguria, Anibal Ollero

This article presents an enhanced version of the Monte Carlo localization algorithm, commonly used for robot navigation in indoor environments, adapted to aerial robots moving in a three-dimensional environment. The method fuses measurements from an RGB-D (red-green-blue-depth) sensor, distances to several radio tags placed in the environment, and an inertial measurement unit. The approach is demonstrated with an unmanned aerial vehicle flying indoors for 10 min and validated against a very precise motion tracking system. It has been implemented in the Robot Operating System (ROS) framework and runs smoothly on a regular i7 computer, leaving plenty of computational capacity for other navigation tasks such as motion planning or control.
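To make the fusion idea concrete, the following is a minimal particle-filter sketch in the spirit of Monte Carlo localization with range measurements to known radio tags. The beacon positions, noise levels and measurement model are illustrative assumptions, not the authors' implementation, and the RGB-D and IMU inputs are reduced to a simple odometry increment.

```python
# Minimal particle-filter sketch of the fused 3D localisation idea described above.
# Beacon positions, noise levels and the measurement model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 500                                    # number of particles
beacons = np.array([[0.0, 0.0, 2.0],       # assumed radio-tag positions (m)
                    [5.0, 0.0, 2.5],
                    [0.0, 5.0, 2.0]])

particles = rng.uniform([0, 0, 0], [5, 5, 3], size=(N, 3))  # candidate (x, y, z) poses
weights = np.full(N, 1.0 / N)

def predict(particles, delta_xyz, motion_noise=0.05):
    """Propagate particles with an IMU/odometry increment plus Gaussian noise."""
    return particles + delta_xyz + rng.normal(0.0, motion_noise, particles.shape)

def update(particles, weights, ranges, range_noise=0.3):
    """Re-weight particles by the likelihood of the measured beacon distances."""
    w = weights.copy()
    for b, z in zip(beacons, ranges):
        predicted = np.linalg.norm(particles - b, axis=1)
        w *= np.exp(-0.5 * ((z - predicted) / range_noise) ** 2)
    w += 1e-300                            # avoid an all-zero weight vector
    return w / w.sum()

def resample(particles, weights):
    """Resample to concentrate particles in the most likely regions."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One filter step: move 10 cm along x, then fuse three radio-tag range measurements.
particles = predict(particles, np.array([0.1, 0.0, 0.0]))
weights = update(particles, weights, ranges=[2.9, 4.1, 5.2])
particles, weights = resample(particles, weights)
print("estimated pose:", particles.mean(axis=0))
```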


2019 · Vol. 1 · pp. 1-1
Author(s): Bernhard Jenny, Kadek Ananta Satriadi, Yalong Yang, Christopher R. Austin, Simond Lee, ...

Abstract. Augmented reality (AR) and virtual reality (VR) technology are increasingly used for the analysis and visualisation of geospatial data. It has become simple to create an immersive three-dimensional AR or VR map with a combination of game engines (e.g., Unity), software development kits for streaming and rendering geospatial data (e.g., Mapbox), and affordable hardware (e.g., HTC Vive). However, it is not clear how to best interact with geospatial visualisations in AR and VR. For example, there are no established standards to efficiently zoom and pan, select map features, or place markers on AR and VR maps. In this paper, we explore interaction with AR and VR maps using gestures and handheld controllers.

As for gesture-controlled interaction, we present the results of recent research projects exploring how body gestures can control basic AR and VR map operations. We use motion-tracking controllers (e.g., Leap Motion) to capture and interpret gestures. We conducted a set of user studies to identify, explore and compare various gestures for controlling map-related operations. This includes, for example, mid-air hand gestures for zooming and panning (Satriadi et al. 2019), selecting points of interest, adjusting the orientation of maps, or placing markers on maps. Additionally, we present novel VR interfaces and interaction methods for controlling the content of maps with gestures.

As for handheld controllers, we discuss interaction with exocentric globes, egocentric globes (where the user stands inside a large virtual globe), flat maps, and curved maps in VR. We demonstrate controller-based interaction for adjusting the centre of world maps displayed on these four types of projection surfaces (Yang et al. 2018), and illustrate the utility of interactively movable VR maps by the example of three-dimensional origin-destination flow maps (Yang et al. 2019).
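As a sketch of how mid-air gestures might drive zoom and pan on such a map, the snippet below maps the separation of two tracked hands to a scale factor and their midpoint displacement to a pan offset. The hand positions and the MapView class are hypothetical; the abstract does not specify the authors' gesture mappings.

```python
# Illustrative two-hand gesture mapping for map zoom and pan (assumed, not the authors' design).
import numpy as np

class MapView:
    """Toy map state: a centre in world coordinates and a zoom level."""
    def __init__(self):
        self.centre = np.array([0.0, 0.0])
        self.zoom = 1.0

def apply_two_hand_gesture(view, left_start, right_start, left_now, right_now):
    """Update zoom from the change in hand separation, pan from the midpoint shift."""
    d0 = np.linalg.norm(right_start - left_start)
    d1 = np.linalg.norm(right_now - left_now)
    view.zoom *= d1 / max(d0, 1e-6)            # spreading the hands apart zooms in
    mid0 = (left_start + right_start) / 2
    mid1 = (left_now + right_now) / 2
    view.centre -= (mid1 - mid0) / view.zoom   # moving both hands together pans the map
    return view

view = MapView()
apply_two_hand_gesture(view,
                       left_start=np.array([-0.10, 0.0]), right_start=np.array([0.10, 0.0]),
                       left_now=np.array([-0.15, 0.0]),   right_now=np.array([0.15, 0.0]))
print(view.zoom, view.centre)   # zoom ~1.5, centre unchanged
```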


Leonardo · 2016 · Vol. 49 (3) · pp. 203-210
Author(s): Shaltiel Eloul, Gil Zissu, Yehiel H. Amo, Nori Jacoby

The authors have mapped the three-dimensional motion of a fish onto various electronic music performance gestures, including loops, melodies, arpeggios and DJ-like interventions. They add an element of visualization, using an LED screen installed on the back of an aquarium, to create a link between the fish’s motion and the sonified music. This visual addition provides extra information about the fish’s role in the music, enabling the perception of versatile and developing auditory structures during the performance that extend beyond the sonification of the momentary motion of objects.
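One possible way to turn a tracked 3D position into sonification parameters is sketched below; the specific mapping (position to pitch, volume and pan) is an illustrative assumption, not the authors' mapping of loops, melodies and arpeggios.

```python
# Hypothetical mapping from a tracked 3D fish position to simple music parameters.
def position_to_music(x, y, z, tank=(1.0, 0.5, 0.5)):
    """Normalise the position inside the tank and derive toy sonification parameters."""
    nx, ny, nz = x / tank[0], y / tank[1], z / tank[2]
    pitch = 48 + int(nx * 24)        # horizontal position -> MIDI note (roughly C3..C5)
    volume = 0.2 + 0.8 * ny          # height in the water -> loop volume
    pan = 2.0 * nz - 1.0             # depth in the tank -> stereo pan (-1 left, +1 right)
    return {"pitch": pitch, "volume": volume, "pan": pan}

print(position_to_music(0.5, 0.25, 0.4))
```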


Author(s): Ankit Chaudhary, Jagdish L. Raheja, Karen Das, Shekhar Raheja

In the last few years, gesture recognition and gesture-based human-computer interaction have gained a significant amount of popularity amongst researchers all over the world, with applications ranging from security to entertainment. Gesture recognition is a form of biometric identification that relies on data acquired from the gesture depicted by an individual. This data, which can be either two-dimensional or three-dimensional, is compared against a database of individuals or against respective thresholds, depending on how the problem is posed. In this paper, a novel method for calculating the angles of the bent fingers of both hands is discussed, and its application to the control of a robotic hand is presented. For the first time, such a study has been conducted in the area of natural computing for calculating angles without using any wired equipment, colors, markers or other devices. The system deploys a simple camera and captures images. Pre-processing and segmentation of the region of interest are performed in the HSV color space and in binary format, respectively. The technique presented in this paper requires no training for the user to perform the task.
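A generic camera-only pipeline in this spirit is sketched below: segment the hand in HSV color space, binarise it, and estimate angles from the contour geometry. The HSV thresholds and the use of convexity defects for the angle step are assumptions for illustration, not the paper's exact method.

```python
# Sketch of a marker-free hand pipeline: HSV segmentation, binary mask, contour-based angles.
# Thresholds and the convexity-defect angle step are illustrative assumptions.
import cv2
import numpy as np

def finger_angles(frame_bgr, hsv_lo=(0, 30, 60), hsv_hi=(25, 180, 255)):
    """Return angles (degrees) at the valleys between extended fingers."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))   # binary hand mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea)        # largest blob taken as the hand ROI
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    angles = []
    if defects is None:
        return angles
    for s, e, f, _ in defects[:, 0]:
        start, end, far = hand[s][0], hand[e][0], hand[f][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        # law of cosines at the valley point between two fingertips
        angle = np.degrees(np.arccos((b**2 + c**2 - a**2) / (2 * b * c + 1e-9)))
        angles.append(angle)
    return angles

# Example usage with a simple camera, as the abstract describes:
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok:
#     print(finger_angles(frame))
```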

