Detecting socially occupied spaces with depth cameras: evaluating location and body orientation as relevant social features

Author(s):  
Violeta Ana Luz Sosa Leon ◽  
Angela Schwering
Author(s):  
Dusan Koniar ◽  
Jozef Volak ◽  
Libor Hargas ◽  
Silvia Janisova ◽  
Jakub Bajzik

2021 ◽  
pp. 1-12
Author(s):  
Morgane Allanic ◽  
Misato Hayashi ◽  
Takeshi Furuichi ◽  
Tetsuro Matsuzawa

Grooming site preferences have been relatively well studied in monkey species in order to investigate the function of social grooming. They are influenced not only by the amount of ectoparasites but also by social variables such as the dominance rank between individuals and their levels of affiliation. However, studies on this topic come mainly from monkey species, with almost no reports on great apes. This study explored whether body site and body orientation preferences during social grooming show species-specific differences (bonobos vs. chimpanzees) and environment-specific differences (captivity vs. wild). Results showed that bonobos groomed the head and the front, and faced each other, more often than chimpanzees, whereas chimpanzees groomed the back and anogenitals more, and more frequently in face-to-back positions. Moreover, captive individuals groomed facing one another more often than wild ones, whereas wild individuals more often groomed the back and in face-to-back positions. While future studies should expand their scope to include more populations per condition, our preliminary 2-by-2 comparison highlights the influence of (i) species-specific social differences, such as social tolerance, social attention, and facial communication, and (ii) socioenvironmental constraints, such as risk of predation, spatial crowding, and levels of hygiene, which may be the two key factors determining grooming patterns in the two Pan species.


Author(s):  
HyeonJung Park ◽  
Youngki Lee ◽  
JeongGil Ko

In this work we present SUGO, a depth-video-based system for translating sign language to text using a smartphone's front camera. While exploiting depth-only video offers benefits such as being less privacy-invasive than RGB video, it introduces new challenges, including low video resolution and the sensor's sensitivity to user motion. We overcome these challenges by diversifying our sign language video dataset via data augmentation so that it is robust to various usage scenarios, and by designing a set of schemes that emphasize human gestures in the input images for effective sign detection. The inference engine of SUGO is based on a 3-dimensional convolutional neural network (3DCNN) that classifies a sequence of video frames as a pre-trained word. Furthermore, the overall operations are designed to be lightweight so that sign language translation takes place in real time using only the resources available on a smartphone, with no help from cloud servers or external sensing components. To train and test SUGO, we collect sign language data from 20 individuals for 50 Korean Sign Language words, summing to a dataset of ~5,000 sign gestures, and collect additional in-the-wild data to evaluate the performance of SUGO in real-world usage scenarios with different lighting conditions and daily activities. Overall, our extensive evaluations show that SUGO can properly classify sign words with an accuracy of up to 91% and suggest that the system is suitable (in terms of resource usage, latency, and environmental robustness) as a fully mobile solution for sign language translation.
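
For readers who want a concrete picture of the inference step, the following is a minimal sketch of a 3D-CNN video classifier in PyTorch. The layer sizes, clip shape, and 50-word output only loosely mirror the setup described above; none of this is SUGO's actual architecture.

```python
# Minimal 3D-CNN sketch for classifying depth-video clips as sign words
# (PyTorch). Layer sizes and clip shape are illustrative assumptions,
# not SUGO's published architecture.
import torch
import torch.nn as nn

class Sign3DCNN(nn.Module):
    def __init__(self, num_classes: int = 50):  # 50 KSL words, per the abstract
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # single depth channel in
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),  # global pooling keeps the model lightweight
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One 16-frame, 112x112 depth clip in; logits over 50 sign words out.
model = Sign3DCNN()
clip = torch.randn(1, 1, 16, 112, 112)  # (batch, channel, frames, height, width)
print(model(clip).shape)  # torch.Size([1, 50])
```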


2009 ◽  
Vol 6 (1) ◽  
pp. 119-131 ◽  
Author(s):  
Ann Forsyth ◽  
J. Michael Oakes ◽  
Kathryn H. Schmitz

Background: The Twin Cities Walking Study measured the associations of built-environment versus socioeconomic and psychosocial variables with total physical activity and walking for 716 adults. Methods: This article reports on the test–retest reliability of the survey portion of the study. To test the reliability of the study measures, 158 respondents completed the measures twice within 1 to 4 weeks. Agreement between participants' responses was measured using Pearson r, Spearman rho, and kappa statistics. Results: Demographic questions are highly reliable (r > .8). Questions about environmental and social features are typically less reliable (rho range = 0.42–0.91). Reliability of the International Physical Activity Questionnaire (last-7-days version) was low (rho = 0.15 for total activity). Conclusions: Much of the survey has acceptable-to-good reliability. The low test–retest reliability points to potential limitations of using a single administration of the IPAQ to characterize habitual physical activity. Implications for sound inference are accordingly complicated.
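
As a concrete illustration of the agreement statistics named above, here is a short Python sketch using SciPy and scikit-learn; the paired responses are synthetic stand-ins, not data from the study.

```python
# Test-retest agreement sketch (SciPy / scikit-learn). The data below
# are synthetic placeholders, not responses from the Twin Cities study.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
wave1 = rng.normal(size=158)                     # continuous item, first administration
wave2 = wave1 + rng.normal(scale=0.5, size=158)  # repeat administration, correlated

r, _ = pearsonr(wave1, wave2)     # linear agreement for continuous items
rho, _ = spearmanr(wave1, wave2)  # rank agreement for ordinal items
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")

# Kappa suits categorical items (e.g., yes/no questions), correcting raw
# percent agreement for the agreement expected by chance.
cat1 = rng.integers(0, 2, size=158)
cat2 = np.where(rng.random(158) < 0.8, cat1, 1 - cat1)  # ~80% repeated answers
print(f"Cohen's kappa = {cohen_kappa_score(cat1, cat2):.2f}")
```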


Author(s):  
Li Li ◽  
Ken Chen ◽  
Karen Chen ◽  
Xu Xu

Occupational injuries have high incidence rates across various industries. Safety education is a key component of effectively reducing work-related injuries. Posture training for work safety is widely adopted to increase awareness of unsafe movements at work and to evaluate workers in order to minimize work-related musculoskeletal stresses. However, existing one-size-fits-all, pamphlet-based posture training has limited effectiveness. In recent years, substantial technological development in virtual reality (VR) and augmented reality (AR) has made immersive and personalized education possible. For VR/AR-assisted posture training, full-body reconstruction from multiple point clouds is the key step. In this study, we propose a fast, coarse method to reconstruct the full-body pose of safety instructors using multiple low-cost depth cameras. The body images reconstructed from the depth cameras are registered with the iterative closest point (ICP) algorithm, and the resulting full-body pose can then be rendered in VR/AR environments for next-generation safety education.
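
Since the abstract names iterative closest point registration as the core step, here is a hedged sketch of pairwise ICP using the Open3D library; the input file names are hypothetical placeholders for two depth-camera captures of the same pose, and the 2 cm correspondence threshold is an assumed tuning value, not a parameter from the study.

```python
# Pairwise point-cloud registration via ICP (Open3D). File names and the
# correspondence threshold are illustrative assumptions.
import open3d as o3d

source = o3d.io.read_point_cloud("camera_a.pcd")  # hypothetical capture, camera A
target = o3d.io.read_point_cloud("camera_b.pcd")  # hypothetical capture, camera B

# Point-to-point ICP: repeatedly match nearest neighbors within 2 cm and
# solve for the rigid transform that aligns source onto target. Real
# multi-camera rigs usually also need a coarse initial alignment; ICP
# defaults to the identity transform here.
result = o3d.pipelines.registration.registration_icp(
    source,
    target,
    max_correspondence_distance=0.02,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

print(result.fitness, result.inlier_rmse)  # quality of the alignment
source.transform(result.transformation)    # apply the estimated 4x4 transform
merged = source + target                   # coarse merged full-body cloud
```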

