Safety Services in Smart Environments Using Depth Cameras

Author(s):  
Matthias Ruben Mettel ◽  
Michael Alekseew ◽  
Carsten Stocklöw ◽  
Andreas Braun
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 228804-228817
Author(s):  
Aurora Polo-Rodriguez ◽  
Federico Cruciani ◽  
Chris D. Nugent ◽  
Javier Medina

Author(s):  
Dusan Koniar ◽  
Jozef Volak ◽  
Libor Hargas ◽  
Silvia Janisova ◽  
Jakub Bajzik

2020 ◽  
Vol 6 (3) ◽  
pp. 380-383
Author(s):  
Jochen Bauer ◽  
Michael Hechtel ◽  
Martin Holzwarth ◽  
Julian Sessner ◽  
Jörg Franke ◽  
...  

All aspects of daily life increasingly include digitization. So-called "smart home" technologies, as well as "wearables", are gaining attention from more and more residents. Sensor-based, individualized, AI-based services for improved post-intervention monitoring and therapy accompaniment therefore become feasible if these systems offer the related context awareness. This paper presents an approach to sensing and interpreting specific contexts with the help of wearables, smartwatches, smart home sensors, and emotion detection software.
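As a rough illustration of how such heterogeneous readings might be fused into a context label, the following Python sketch combines hypothetical smartwatch, smart-home, and emotion-detection inputs with simple rules; the field names, thresholds, and labels are illustrative assumptions and are not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical sketch: fusing smartwatch, smart-home, and emotion-detection
# readings into a coarse context label. Fields and thresholds are assumptions.

@dataclass
class SensorSnapshot:
    heart_rate_bpm: float      # from a wearable / smartwatch
    motion_detected: bool      # from a smart-home presence sensor
    detected_emotion: str      # output of an emotion-detection component

def infer_context(s: SensorSnapshot) -> str:
    """Map one snapshot of sensor readings to a coarse context label."""
    if s.heart_rate_bpm > 120 and not s.motion_detected:
        return "possible_distress"      # elevated pulse while stationary
    if s.detected_emotion in ("sad", "anxious"):
        return "low_mood"               # candidate for therapy follow-up
    if s.motion_detected:
        return "active"
    return "resting"

print(infer_context(SensorSnapshot(130.0, False, "neutral")))  # possible_distress
```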


Author(s):  
Paulo Pérez ◽  
Philippe Roose ◽  
Yudith Cardinale ◽  
Mark Dalmau ◽  
Dominique Masson ◽  
...  

Author(s):  
HyeonJung Park ◽  
Youngki Lee ◽  
JeongGil Ko

In this work we present SUGO, a depth video-based system for translating sign language to text using a smartphone's front camera. While exploiting depth-only video offers benefits such as being less privacy-invasive than RGB video, it introduces new challenges, including low video resolution and the sensor's sensitivity to user motion. We overcome these challenges by diversifying our sign language video dataset via data augmentation so that it is robust to various usage scenarios, and by designing a set of schemes that emphasize human gestures in the input images for effective sign detection. The inference engine of SUGO uses a 3-dimensional convolutional neural network (3D CNN) to classify a sequence of video frames as one of the pre-trained words. Furthermore, the overall operations are designed to be lightweight so that sign language translation takes place in real time using only the resources available on the smartphone, without help from cloud servers or external sensing components. To train and test SUGO, we collect sign language data from 20 individuals for 50 Korean Sign Language words, amounting to a dataset of ~5,000 sign gestures, and gather additional in-the-wild data to evaluate SUGO's performance in real-world usage scenarios with different lighting conditions and daily activities. Our extensive evaluations show that SUGO classifies sign words with an accuracy of up to 91% and suggest that the system is suitable, in terms of resource usage, latency, and environmental robustness, as a fully mobile solution for sign language translation.
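As a hedged, minimal sketch of the kind of inference engine described above, the following PyTorch snippet defines a small 3D CNN that maps a short single-channel depth clip to one of 50 word classes; the layer sizes, clip length, and resolution are illustrative assumptions and do not reproduce SUGO's actual architecture.

```python
import torch
import torch.nn as nn

# Minimal 3D-CNN sketch for classifying a short depth-video clip as one of
# 50 sign-language words. Architecture details are illustrative assumptions.

class DepthSignClassifier(nn.Module):
    def __init__(self, num_words: int = 50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # depth clips have 1 channel
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # pool over time and space
        )
        self.classifier = nn.Linear(32, num_words)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, frames, height, width), e.g. a 16-frame 112x112 depth clip
        return self.classifier(self.features(x).flatten(1))

model = DepthSignClassifier()
clip = torch.randn(1, 1, 16, 112, 112)   # one synthetic depth clip
print(model(clip).shape)                 # torch.Size([1, 50])
```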


Author(s):  
Li Li ◽  
Ken Chen ◽  
Karen Chen ◽  
Xu Xu*

Occupational injuries have high incidence rates across various industries. Safety education is a key component in effectively reducing work-related injuries. Posture training for work safety is widely adopted to increase awareness of unsafe movements at work and to help workers minimize work-related musculoskeletal stresses. However, existing one-size-fits-all, pamphlet-based posture training faces challenges in its effectiveness. In recent years, substantial technological development in virtual reality (VR) and augmented reality (AR) has made immersive and personalized education possible. For VR/AR-assisted posture training, full-body reconstruction from multiple point clouds is the key step. In this study, we propose a fast, coarse method to reconstruct the full-body pose of safety instructors using multiple low-cost depth cameras. The body images reconstructed from the depth cameras are registered using the iterative closest point (ICP) algorithm. The reconstructed full-body pose can then be rendered in VR/AR environments for next-generation safety education.
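As a hedged sketch of the registration step, the snippet below aligns a point cloud from one depth camera onto a reference camera's cloud with point-to-point ICP using Open3D; the synthetic data, distance threshold, and initial transform are illustrative assumptions rather than the authors' actual pipeline.

```python
import numpy as np
import open3d as o3d

# Sketch: register the cloud from camera B onto the reference cloud from
# camera A with point-to-point ICP. Data and parameters are assumptions.

def make_cloud(points: np.ndarray) -> o3d.geometry.PointCloud:
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(points)
    return cloud

points_a = np.random.rand(500, 3)                  # synthetic cloud from camera A
points_b = points_a + np.array([0.05, 0.0, 0.0])   # camera B: same scene, shifted

target = make_cloud(points_a)
source = make_cloud(points_b)

result = o3d.pipelines.registration.registration_icp(
    source, target,
    0.1,            # max correspondence distance (metres)
    np.eye(4),      # rough initial alignment
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print(result.transformation)                       # 4x4 transform mapping B onto A
aligned = source.transform(result.transformation)  # aligned cloud for VR/AR rendering
```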


Computer ◽  
2013 ◽  
Vol 46 (2) ◽  
pp. 69-75 ◽  
Author(s):  
J. W. S. Liu ◽  
Chi-Sheng Shih ◽  
Edward T.-H Chu
2012 ◽  
Vol 10 ◽  
pp. 205-214 ◽  
Author(s):  
Eric Torunski ◽  
Rana Othman ◽  
Mauricio Orozco ◽  
Abdulmotaleb El Saddik
