Real-time Safety Monitoring Vision System for Linemen in Buckets Using Spatio-temporal Inference

Author(s):  
Zahid Ali ◽  
Unsang Park

Author(s):  
Jun-hua Chen ◽  
Da-hu Wang ◽  
Cun-yuan Sun

Objective: This study focused on the application of wearable technology to safety monitoring and early warning for subway construction workers. Methods: Real-time video surveillance and RFID positioning, already applied on construction sites, have realized real-time monitoring and early warning of on-site construction to a certain extent, but problems remain. Video surveillance relies on monitoring equipment installed at fixed locations, so it is difficult to achieve full coverage of a construction site. Wearable technologies can solve this problem: they perform well in collecting workers' information, especially physiological-state data and positioning data. Moreover, wearable technology does not impede work and is not subject to interference from the dynamic environment. Results and conclusion: The system's first application to subway construction was a great success. During construction of the station, 43 safety warnings were issued and zero safety accidents occurred, showing that the safety monitoring and early warning system played a significant role and worked as intended.


Author(s):  
Giuseppe Placidi ◽  
Danilo Avola ◽  
Luigi Cinque ◽  
Matteo Polsinelli ◽  
Eleni Theodoridou ◽  
...  

Abstract: Virtual Glove (VG) is a low-cost computer vision system that uses two orthogonal LEAP Motion sensors to provide detailed 4D hand tracking in real time. VG has many potential applications in the field of human-system interaction, such as remote control of machines or tele-rehabilitation. An innovative and efficient data-integration strategy for VG, based on velocity calculation, is proposed for selecting data from one of the LEAPs at each time instant. When a joint of the hand model is occluded from a LEAP, its position is estimated and tends to flicker. Since VG uses two LEAP sensors, two spatial representations are available at each moment for each joint: the method selects the one with the lower velocity at each time instant. Choosing the smoother trajectory stabilizes VG and optimizes precision, reduces the effect of occlusions (parts of the hand or handled objects obscuring other hand parts) and, when both sensors see the same joint, reduces the number of outliers produced by hardware instabilities. The strategy is evaluated experimentally, in terms of the reduction of outliers with respect to the data-selection strategy previously used in VG, and the results are reported and discussed. In the future, an objective test set has to be designed and realized, also with the help of external high-precision positioning equipment, to allow a quantitative and objective evaluation of the gain in precision and, possibly, of the intrinsic limitations of the proposed strategy. Moreover, advanced Artificial Intelligence-based (AI-based) real-time data-integration strategies, specific to VG, will be designed and tested on the resulting dataset.
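The per-joint selection rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the function name `pick_stable_joint`, the tuple representation of joint positions, and the frame interval are assumptions for the example.

```python
import math

def pick_stable_joint(prev, cand_a, cand_b, dt):
    """Choose, per joint, the candidate position whose implied velocity
    (distance travelled since the previous frame, divided by dt) is lower.
    prev, cand_a, cand_b: (x, y, z) positions; dt: frame interval in seconds."""
    def speed(p, q):
        return math.dist(p, q) / dt
    return cand_a if speed(prev, cand_a) <= speed(prev, cand_b) else cand_b

# Example: an occluded joint's guessed position flickers far from the last
# frame, so the smoother candidate from the other LEAP sensor is selected.
prev = (0.0, 0.0, 0.0)
leap1 = (0.002, 0.001, 0.0)    # small motion -> low implied velocity
leap2 = (0.05, -0.04, 0.03)    # flicker/outlier -> high implied velocity
chosen = pick_stable_joint(prev, leap1, leap2, dt=1 / 60)
```

Comparing velocities rather than raw positions means neither sensor is globally preferred; the smoother stream wins independently for each joint at each instant.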


2005 ◽  
Vol 56 (8-9) ◽  
pp. 831-842 ◽  
Author(s):  
Monica Carfagni ◽  
Rocco Furferi ◽  
Lapo Governi

Author(s):  
Gaurav Chaurasia ◽  
Arthur Nieuwoudt ◽  
Alexandru-Eugen Ichim ◽  
Richard Szeliski ◽  
Alexander Sorkine-Hornung

We present an end-to-end system for real-time environment capture, 3D reconstruction, and stereoscopic view synthesis on a mobile VR headset. Our solution allows the user to use the cameras on their VR headset as their eyes to see and interact with the real world while still wearing their headset, a feature often referred to as Passthrough. The central challenge when building such a system is the choice and implementation of algorithms under the strict compute, power, and performance constraints imposed by the target user experience and mobile platform. A key contribution of this paper is a complete description of a corresponding system that performs temporally stable passthrough rendering at 72 Hz with only 200 mW power consumption on a mobile Snapdragon 835 platform. Our algorithmic contributions for enabling this performance include the computation of a coarse 3D scene proxy on the embedded video encoding hardware, followed by a depth densification and filtering step, and finally stereoscopic texturing and spatio-temporal up-sampling. We provide a detailed discussion and evaluation of the challenges we encountered, as well as algorithm and performance trade-offs in terms of compute and resulting passthrough quality.

The described system is available to users as the Passthrough+ feature on Oculus Quest. We believe that by publishing the underlying system and methods, we provide valuable insights to the community on how to design and implement real-time environment sensing and rendering on heavily resource-constrained hardware.
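The depth densification step mentioned above can be illustrated with a minimal sketch: starting from a coarse, sparse depth map, holes are filled by repeatedly averaging valid neighbours. This is a generic hole-filling scheme assumed for illustration, not the filtering pipeline actually shipped on Quest; the function name `densify_depth` and the `None`-for-missing encoding are inventions of the example.

```python
def densify_depth(grid):
    """Fill missing (None) depth samples by averaging their valid
    4-neighbours, iterating until the depth map is dense.
    grid: list of rows, each a list of floats or None. Assumes at
    least one valid sample exists somewhere in the grid."""
    h, w = len(grid), len(grid[0])
    while any(v is None for row in grid for v in row):
        new = [row[:] for row in grid]
        for y in range(h):
            for x in range(w):
                if grid[y][x] is None:
                    nb = [grid[y + dy][x + dx]
                          for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= y + dy < h and 0 <= x + dx < w
                          and grid[y + dy][x + dx] is not None]
                    if nb:  # fill once a valid neighbour is available
                        new[y][x] = sum(nb) / len(nb)
        grid = new
    return grid

# Example: a 3x3 depth patch with one missing sample at the centre.
patch = [[1.0, 1.0, 1.0],
         [1.0, None, 2.0],
         [2.0, 2.0, 2.0]]
dense = densify_depth(patch)  # centre becomes the mean of its 4 neighbours
```

A real implementation would also weight samples by confidence and run edge-preserving filtering, but the fill-from-neighbours structure is the core idea of densifying a coarse proxy into a per-pixel depth map.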

