Pointing Gestures: Recently Published Documents

Total documents: 195 (five years: 51)
H-index: 28 (five years: 5)

2022 ◽  
pp. 47-89
Author(s):  
Aliyah Morgenstern
Keyword(s):  

2021 ◽  
Vol 12 (1) ◽  
pp. 258
Author(s):  
Marek Čorňák ◽  
Michal Tölgyessy ◽  
Peter Hubinský

The concept of “Industry 4.0” relies heavily on collaborative robotic applications. As a result, the need for an effective, natural, and ergonomic interface arises, as more workers will be required to work with robots. Designing and implementing natural forms of human–robot interaction (HRI) is key to ensuring efficient and productive collaboration between humans and robots. This paper presents a gestural framework for controlling a collaborative robotic manipulator using pointing gestures. The core principle lies in the user’s ability to send the robot’s end effector to the location towards which they point with their hand. The main idea is derived from the concept of so-called “linear HRI”. The framework utilizes a UR5e collaborative robotic arm and the state-of-the-art Leap Motion hand-tracking sensor. The user is not required to wear any equipment. The paper gives an overview of the framework’s core method and provides the necessary mathematical background. An experimental evaluation of the method is provided, and the main influencing factors are identified. A unique collaborative robotic workspace called the Complex Collaborative HRI Workplace (COCOHRIP) was designed around the gestural framework to evaluate the method and provide a basis for the future development of HRI applications.
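The “linear HRI” idea above, sending the end effector to the point where the user’s pointing ray meets the work surface, can be sketched as a ray–plane intersection. This is an illustrative reconstruction under assumed geometry, not the paper’s actual implementation:

```python
import numpy as np

def pointing_target_on_plane(origin, direction, plane_point, plane_normal):
    """Intersect a pointing ray with a planar work surface.

    origin, direction: 3D ray from the user's hand (direction need not be unit).
    plane_point, plane_normal: any point on the work plane and its normal.
    Returns the 3D intersection, or None if the ray is parallel to the
    plane or points away from it.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)

    denom = direction @ plane_normal
    if abs(denom) < 1e-9:          # ray parallel to the plane
        return None
    t = ((plane_point - origin) @ plane_normal) / denom
    if t < 0:                      # plane lies behind the user
        return None
    return origin + t * direction

# Hypothetical setup: hand at 1.2 m height pointing down-forward,
# table plane at z = 0.
target = pointing_target_on_plane(
    origin=[0.0, 0.0, 1.2],
    direction=[1.0, 0.0, -1.0],
    plane_point=[0.0, 0.0, 0.0],
    plane_normal=[0.0, 0.0, 1.0],
)
print(target)  # [1.2, 0.0, 0.0] — candidate goal pose for the end effector
```

In practice, the hand origin and pointing direction would come from the tracking sensor, and the resulting point would be clamped to the robot’s reachable workspace before being sent as a goal.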


2021 ◽  
Author(s):  
Ebru Ger ◽  
Stephanie Wermelinger ◽  
Maxine de Ven ◽  
Moritz M. Daum

Adults and infants as young as 4 months old follow pointing gestures. Although adults have been shown to orient faster to index-finger pointing than to other hand shapes, it is not known whether hand shape influences infants' following of pointing. In this study, we used a spatial cueing paradigm on an eye tracker to investigate whether and to what extent adults and 12-month-old infants orient their attention in the direction of pointing gestures with different hand shapes: index finger, whole hand, and pinky finger. Results revealed that adults showed a cueing effect, that is, shorter saccadic reaction times (SRTs) to congruent compared to incongruent targets, for all hand shapes. However, they did not show a larger cueing effect for the index finger. This contradicts previous findings and is discussed with respect to differences in methodology. Infants showed a cueing effect only for the whole hand, but not for the index finger or the pinky finger. Infants predominantly point with the whole hand prior to 12 months. The current results thus suggest that infants' perception of pointing gestures may be linked to their own production of pointing gestures. Infants may show a cueing effect for the conventional index-finger shape only after their first year, possibly when they start to point predominantly with their index finger.
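The cueing effect described above is simply the difference between mean saccadic reaction times on incongruent and congruent trials. A minimal sketch, with made-up SRT values for illustration:

```python
import statistics

def cueing_effect(srt_congruent, srt_incongruent):
    """Cueing effect = mean SRT on incongruent trials minus mean SRT on
    congruent trials (ms). A positive value means the pointing cue
    shifted attention toward the pointed-at side."""
    return statistics.mean(srt_incongruent) - statistics.mean(srt_congruent)

# Illustrative (fabricated) saccadic reaction times for one hand shape.
congruent = [210, 225, 198, 240]
incongruent = [255, 262, 248, 251]
print(cueing_effect(congruent, incongruent))  # 35.75 ms
```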


2021 ◽  
Author(s):  
Agnieszka Wykowska

Attentional orienting towards others’ gaze direction or pointing has been well investigated in laboratory conditions. However, less is known about the operation of attentional mechanisms in online naturalistic social interaction scenarios. It is equally plausible that following social directional cues (gaze, pointing) occurs reflexively, and/or that it is influenced by top-down cognitive factors. In a mobile eye-tracking experiment, we show that under natural interaction conditions, overt attentional orienting is not necessarily reflexively triggered by pointing gestures or a combination of gaze shifts and pointing gestures. We found that participants conversing with an experimenter, who, during the interaction, would play out pointing gestures as well as directional gaze movements, continued to mostly focus their gaze on the face of the experimenter, demonstrating the significance of attending to the face of the interaction partner, in line with effective top-down control over reflexive orienting of attention in the direction of social cues.


2021 ◽  
pp. 47-90
Author(s):  
Aliyah Morgenstern
Keyword(s):  

Author(s):  
Hyunggoog Seo ◽  
Jaedong Kim ◽  
Kwanggyoon Seo ◽  
Bumki Kim ◽  
Junyong Noh

An absolute mid-air pointing technique requires a preprocess called registration, which makes the system remember the 3D positions and types of objects in advance. Previous studies have simply assumed that this information is already available, because obtaining it requires a cumbersome process performed by an expert in a carefully calibrated environment. We introduce Overthere, which allows the user to intuitively register the objects in a smart environment by pointing at each target object a few times. To ensure accurate and consistent pointing gestures regardless of individual differences between users, we performed a user study and identified a desirable gesture motion for this purpose. In addition, we provide the user with various forms of feedback to help them understand the current registration progress and adhere to the required conditions, which leads to accurate registration results. The user studies show that Overthere is sufficiently intuitive to be used by ordinary people.
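Registering an object from a few pointing gestures is, geometrically, a matter of finding the point closest to several rays at once. A common least-squares formulation is sketched below; this is an illustrative reconstruction of the idea, not the system’s actual algorithm:

```python
import numpy as np

def register_point(origins, directions):
    """Estimate a target's 3D position from several pointing rays by least
    squares: find the point minimizing the summed squared distance to all
    rays (origins o_i, directions d_i). Solves A x = b with
    A = sum(I - d_i d_i^T) and b = sum((I - d_i d_i^T) o_i)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two pointing rays that both pass exactly through (1, 1, 0).
origins = [[0, 0, 0], [2, 0, 0]]
directions = [[1, 1, 0], [-1, 1, 0]]
print(register_point(origins, directions))  # ≈ [1. 1. 0.]
```

With noisy real pointing gestures the rays will not intersect exactly, and the least-squares solution returns the point of closest mutual approach; asking the user to point from a few distinct standing positions keeps the system well conditioned.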


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Monamie Ringhofer ◽  
Miléna Trösch ◽  
Léa Lansade ◽  
Shinya Yamamoto

When interacting with humans, domesticated species may respond to communicative gestures, such as pointing. However, it is currently unknown, except in dogs, whether such species comprehend the communicative nature of these cues. Here, we investigated whether horses could follow the pointing of a human informant by evaluating the credibility of the information about a food-hiding place provided by the pointing of two informants. Using an object-choice task, we manipulated the attentional state of the two informants during food-hiding events and differentiated their knowledge about the location of the hidden food. Furthermore, we investigated the horses’ visual attention levels towards human behaviour to evaluate the relationship between their motivation and their performance on the task. The results showed that horses that sustained high attention levels could evaluate the credibility of the information and followed the pointing of the informant who knew where the food was hidden (Z = −2.281, P = 0.002, n = 36). This suggests that horses are highly sensitive to the attentional state and pointing gestures of humans, and that they perceive pointing as a communicative cue. This study also indicates that motivation for the task should be taken into account when assessing the socio-cognitive abilities of animals.


2021 ◽  
Vol 2 ◽  
Author(s):  
Yuan Li ◽  
Donghan Hu ◽  
Boyuan Wang ◽  
Doug A. Bowman ◽  
Sang Won Lee

In many collaborative tasks, the need for joint attention arises when one of the users wants to guide others to a specific location or target in space. If the collaborators are co-located and the target position is in close range, it is almost instinctual for users to refer to the target location by pointing with their bare hands. While such pointing gestures can be efficient and effective in real life, performance will be impacted if the target is in augmented reality (AR), where depth cues like occlusion may be missing if the pointer’s hand is not tracked and modeled in 3D. In this paper, we present a study utilizing head-worn AR displays to examine the effects of incorrect occlusion cues on spatial target identification in a collaborative bare-handed referencing task. We found that participants’ performance in AR was reduced compared to a real-world condition, but also that they developed new strategies to cope with the limitations of AR. Our work also identified mixed results regarding the effect of the spatial relationship between users.


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4382
Author(s):  
Yung-Han Chen ◽  
Chi-Hsuan Huang ◽  
Sin-Wun Syu ◽  
Tien-Ying Kuo ◽  
Po-Chyi Su

This research investigated real-time fingertip detection in frames captured from the increasingly popular wearable device, smart glasses. Egocentric-view fingertip detection and character recognition can be used to create a novel way of inputting text. We first employed Unity3D to build a synthetic dataset of pointing gestures from the first-person perspective. The obvious benefits of using synthetic data are that it eliminates the need for time-consuming and error-prone manual labeling and provides a large, high-quality dataset for a wide range of purposes. Following that, a modified Mask Regional Convolutional Neural Network (Mask R-CNN) is proposed, consisting of a region-based CNN for finger detection and a three-layer CNN for fingertip localization. The process completes in 25 ms per frame for 640×480 RGB images, with an average error of 8.3 pixels. This speed is high enough to enable real-time “air-writing”, where users write characters in the air to input text or commands while wearing smart glasses. The characters are then recognized from the fingertip trajectories by a ResNet-based CNN. Experimental results demonstrate the feasibility of this novel methodology.
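Before a trajectory of per-frame fingertip detections is handed to a character recognizer, it is common to smooth out detection jitter. A minimal sketch using exponential smoothing; the parameter choice and data are illustrative assumptions, not taken from the paper:

```python
def smooth_trajectory(points, alpha=0.4):
    """Exponentially smooth a per-frame fingertip trajectory before
    character recognition. alpha trades responsiveness (high alpha)
    against noise rejection (low alpha).

    points: list of (x, y) fingertip detections in pixel coordinates.
    """
    if not points:
        return []
    smoothed = [points[0]]
    for x, y in points[1:]:
        px, py = smoothed[-1]
        smoothed.append((alpha * x + (1 - alpha) * px,
                         alpha * y + (1 - alpha) * py))
    return smoothed

# A short trajectory with one noisy outlier detection.
raw = [(100, 100), (110, 98), (180, 150), (120, 104)]
print(smooth_trajectory(raw))
```

At 25 ms per frame (40 fps), such a filter adds negligible latency, so the smoothed trajectory can still be rendered and recognized in real time.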

