A Robust Method for Head Orientation Estimation Using Histogram of Oriented Gradients

Author(s):  
Dinh Tuan Tran ◽  
Joo-Ho Lee
Author(s):  
Qiang Yang ◽  
Yuanqing Zheng

Voice interaction is friendly and convenient for users. Smart devices such as the Amazon Echo let users interact through voice commands and have become increasingly popular in daily life. In recent years, research has focused on using the microphone arrays built into smart devices to localize the user's position, which adds context information to voice commands. In contrast, few works explore the user's head orientation, which also carries useful context. For example, when a user says "turn on the light", the head orientation can indicate which light the user is referring to. Existing model-based works require a large number of microphone arrays forming an array network, while machine learning-based approaches need laborious data collection and training. The high deployment and usage cost of these methods is unfriendly to users. In this paper, we propose HOE, a model-based system that enables Head Orientation Estimation for smart devices with only two microphone arrays and a lower training overhead than previous approaches. HOE first estimates head-orientation candidates by measuring the voice energy radiation pattern, then leverages the voice frequency radiation pattern to select the final result. Real-world experiments show that HOE achieves a median estimation error of 23 degrees. To the best of our knowledge, HOE is the first model-based attempt to estimate head orientation with only two microphone arrays and without an arduous training data overhead.
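The two-stage idea described in the abstract can be illustrated with a toy model. The sketch below is not the authors' implementation: it assumes a simplified cardioid-like speech radiation pattern, a 2-D geometry, free-field 1/d² path loss, and illustrative function names (`radiation_gain`, `candidate_orientations`, `disambiguate`). Stage 1 grid-searches orientations whose predicted inter-array energy ratio matches the measurement (typically two mirror-symmetric candidates); stage 2 breaks the tie using the fact that high frequencies attenuate faster off-axis.

```python
import numpy as np

def radiation_gain(phi):
    # Toy cardioid-like speech radiation pattern (an assumption, not the
    # paper's measured model): maximum gain straight ahead (phi = 0),
    # minimum behind the speaker (phi = pi). Epsilon avoids divide-by-zero.
    return 0.5 * (1.0 + np.cos(phi)) + 1e-6

def off_axis_angle(theta, user_pos, array_pos):
    # Angle between the facing direction theta and the direction from
    # the user to a microphone array (2-D geometry for simplicity).
    facing = np.array([np.cos(theta), np.sin(theta)])
    to_array = array_pos - user_pos
    to_array = to_array / np.linalg.norm(to_array)
    return np.arccos(np.clip(facing @ to_array, -1.0, 1.0))

def candidate_orientations(user_pos, array_positions, energies, grid=360):
    # Stage 1: grid-search orientations whose predicted energy ratio
    # between the two arrays matches the measured ratio, folding in
    # free-field 1/d^2 path loss.
    d = [np.linalg.norm(p - user_pos) for p in array_positions]
    measured = energies[0] / energies[1]
    thetas = np.linspace(-np.pi, np.pi, grid, endpoint=False)
    predicted = np.array([
        (radiation_gain(off_axis_angle(t, user_pos, array_positions[0])) / d[0]**2)
        / (radiation_gain(off_axis_angle(t, user_pos, array_positions[1])) / d[1]**2)
        for t in thetas
    ])
    mismatch = np.abs(predicted - measured)
    # The ratio constraint is usually met by two mirror-symmetric
    # orientations; keep the two best local minima on the circular grid.
    left, right = np.roll(mismatch, 1), np.roll(mismatch, -1)
    minima = np.where((mismatch < left) & (mismatch < right))[0]
    return thetas[minima[np.argsort(mismatch[minima])][:2]]

def disambiguate(candidates, user_pos, array_positions, hf_lf_ratios):
    # Stage 2: high frequencies attenuate faster off-axis, so the array
    # with the larger high-to-low band energy ratio is the one the user
    # faces more directly; pick the candidate consistent with that.
    front = int(np.argmax(hf_lf_ratios))
    return min(candidates,
               key=lambda t: off_axis_angle(t, user_pos, array_positions[front]))

# Hypothetical usage with made-up positions and measurements:
user = np.array([0.0, 0.0])
arrays = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]
cands = candidate_orientations(user, arrays, energies=(1.0, 0.4))
theta = disambiguate(cands, user, arrays, hf_lf_ratios=(0.8, 0.3))
```

In this toy model the energy-ratio constraint alone leaves the orientation ambiguous about the line joining the two arrays, which is exactly why a second, frequency-based cue is needed to obtain a unique answer.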


Author(s):  
Richardson Santiago Teles de Menezes ◽  
Lucas de Azevedo Lima ◽  
Orivaldo Santana ◽  
Aron Miranda Henriques-Alves ◽  
Rossana Moreno Santa Cruz ◽  
...  

2014 ◽  
Vol 6 (0) ◽  
pp. 63-67 ◽  
Author(s):  
Mitsuru Nakazawa ◽  
Ikuhisa Mitsugami ◽  
Hirotake Yamazoe ◽  
Yasushi Yagi

Author(s):  
Stephanie Tan ◽  
David M. J. Tax ◽  
Hayley Hung

Human head orientation estimation has been of interest because head orientation serves as a cue to directed social attention. Most existing approaches rely on visual and high-fidelity sensor inputs and on deep learning strategies that do not consider the social context of unstructured and crowded mingling scenarios. We show that alternative inputs, such as speaking status, body location, orientation, and acceleration, contribute to head orientation estimation. These are especially useful in crowded, in-the-wild settings where visual features are either uninformative due to occlusions or prohibitive to acquire due to physical space limitations and concerns of ecological validity. We argue that head orientation estimation in such social settings needs to account for the physically evolving interaction space formed by all the individuals in the group. To this end, we propose an LSTM-based head orientation estimation method that combines the hidden representations of the group members. Our framework jointly predicts the head orientations of all group members and is applicable to groups of different sizes. We analyze the contribution of the different modalities to model performance. The proposed model outperforms baseline methods that do not explicitly consider the group context, and it generalizes to an unseen dataset from a different social event.
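One plausible reading of the described approach is sketched below; it is not the authors' exact architecture. The sketch assumes a shared LSTM encoding each member's multimodal feature sequence, a mean-pooled group context concatenated to each member's state, and a linear head predicting each orientation as a unit (cos, sin) vector. The feature dimensionality, pooling choice, and output parameterization are assumptions.

```python
import torch
import torch.nn as nn

class GroupHeadOrientationLSTM(nn.Module):
    # Illustrative sketch: a shared LSTM encodes each member's sequence,
    # a mean-pooled group context is appended to every member's state,
    # and a linear head outputs a unit (cos, sin) orientation vector.
    def __init__(self, feat_dim=8, hidden_dim=64):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(2 * hidden_dim, 2)  # member state + context

    def forward(self, x):
        # x: (group_size, seq_len, feat_dim) per-frame features such as
        # speaking status, body location, body orientation, acceleration.
        h, _ = self.encoder(x)                    # (G, T, H)
        last = h[:, -1, :]                        # each member's final state
        context = last.mean(dim=0, keepdim=True).expand_as(last)
        out = self.head(torch.cat([last, context], dim=-1))
        return out / out.norm(dim=-1, keepdim=True)  # one unit vector per member

model = GroupHeadOrientationLSTM()
group = torch.randn(4, 50, 8)   # 4 members, 50 time steps, 8 features
orientations = model(group)     # (4, 2) unit vectors; angle via atan2
```

Mean pooling is permutation-invariant and independent of group size, which is consistent with the abstract's claim that the framework jointly predicts orientations for groups of different sizes.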


Author(s):  
Cristian Canton-Ferrer ◽  
Carlos Segura ◽  
Josep R. Casas ◽  
Montse Pardàs ◽  
Javier Hernando
