Multi-functional coal mine safety system: visualisation of events (mining processes) from the miner's workplace

Author(s):  
A.V. Novikov ◽  
K.V. Panevnikov ◽  
I.V. Pisarev

The paper reviews the use of mobile video monitoring equipment in coal mines. The most common option is stationary video cameras streaming real-time video to the mine dispatcher's control monitor via cables. Despite the value of the information obtained, this method has limitations due to the specific features of the mine atmosphere, i.e. high humidity and dust levels, as well as the impossibility of covering the entire length of the mine workings with video monitoring. Mobile video monitoring equipment, both portable and vehicle-based, is therefore an efficient supplement to stationary video cameras. Portable devices include smartphones and battery-powered headlamps with an integrated video camera, which have recently become very popular. In both cases, an important consideration beyond the actual video capturing is transmitting the video data to the top level, i.e. to the mine dispatcher's control panel. The following options are possible: connection to mine wireless network hotspots via a radio channel, reading out the recordings in the lamp room when leaving the mine, and real-time broadcasting from the mine to the top level. The paper argues that the fastest (and most efficient) of these, real-time broadcasting without delay between capturing video data and delivering it to the surface, requires a communications infrastructure based on wireless and cable networks deployed in the mine workings. Such infrastructure already exists in a number of miner-location systems that form part of a multifunctional safety system; it provides continuous radio communication between individual devices and infrastructure nodes and therefore supports real-time video data transmission.

2014 ◽  
Vol 543-547 ◽  
pp. 891-894
Author(s):  
Lian Jun Zhang ◽  
Shi Jie Liu

The bus video monitoring system is composed of a WCDMA transmission system, a video server system, a system monitoring center, and an outreach system. The WCDMA wireless transmission module returns video data in real time over a VPDN network. Using a DVS video server together with the WCDMA transmission system, the monitoring video is transmitted to the monitoring center rapidly and in real time. The monitoring center can remotely monitor, manage, and dispatch the buses. The results demonstrate that the system has good real-time transmission capability.


2013 ◽  
Vol 380-384 ◽  
pp. 790-793
Author(s):  
Min Feng ◽  
Jie Sun ◽  
Yin Yang Zhang

To address the bandwidth bottleneck of wireless video transmission, this paper puts forward a design for a real-time traffic video monitoring system based on a 3G network. The hardware design and the software realization of the system are introduced. The hardware platform is built around the TMS320DM8168, which integrates an H.264 video encoder. Video data is transmitted in real time to the remote monitoring center through the 3G network, improving video transmission quality. The system meets the requirements of in-vehicle video transmission.


2011 ◽  
Vol 189-193 ◽  
pp. 3605-3611
Author(s):  
Ying Zhan Yan ◽  
Ling Peng ◽  
Song Fa Huang

To meet the needs of video monitoring applications, the paper researches and designs a Hi3510-based embedded wireless video monitoring system. The system uses H.264 for video data encoding and leverages the Hi3510 encoding API to adjust the H.264 rate control dynamically. Building on the real-time transport protocols RTP/RTCP, and in order to achieve adaptability and guarantee the QoS of real-time transmission, the paper presents a variable-constant-growth / constant-reduction method that adjusts the transmission speed dynamically. In addition, the paper handles packet loss during transmission, aiming to ensure reliable delivery of I-frames. A driver for the DWL-G122 wireless network card is ported to realize wireless communication.
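The rate-adjustment idea described above, growing the send rate by a constant when the network is underused and cutting it when loss is reported, can be sketched as follows. This is an illustrative scheme only; the thresholds, step sizes, and function name are assumptions, not details taken from the paper.

```python
# Illustrative sketch of a constant-growth / constant-reduction sender
# rate controller driven by RTCP receiver-report loss fractions.
# All thresholds and constants here are hypothetical.

def adjust_rate(rate_kbps, loss_fraction,
                min_rate=64, max_rate=2048,
                growth_step=32, reduction_factor=0.75):
    """Return a new send rate given the loss fraction from an RTCP report."""
    if loss_fraction < 0.02:        # network underused: grow by a constant
        rate_kbps += growth_step
    elif loss_fraction > 0.05:      # congestion signal: cut by a constant factor
        rate_kbps *= reduction_factor
    # between the two thresholds the rate is held steady
    return max(min_rate, min(max_rate, rate_kbps))
```

Called once per RTCP report interval, this keeps the encoder's target bitrate oscillating just below the available bandwidth, which is the adaptability the abstract describes.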


2013 ◽  
Vol 756-759 ◽  
pp. 682-685
Author(s):  
Hao Zeng ◽  
Xu Chen ◽  
Yan Hui Fu

This paper introduces a mobile video monitoring system based on Android. The system consists of a video acquisition device, a DVR, a server, a wireless network, and a mobile monitoring client. The RTP/RTCP layering in the system is discussed in detail, as well as the decoding of the real-time data streams in the client.
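Before a client can decode a real-time stream carried over RTP, it must parse the 12-byte fixed RTP header defined in RFC 3550. A minimal sketch of that first step (the field layout follows the RFC; the dictionary shape is our own choice):

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("packet shorter than fixed RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,           # RTP version, always 2
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),    # often flags the last packet of a frame
        "payload_type": b1 & 0x7F,    # e.g. a dynamic type such as 96 for H.264
        "sequence": seq,              # used to detect loss and reorder packets
        "timestamp": ts,              # media clock, 90 kHz for video
        "ssrc": ssrc,                 # stream source identifier
    }
```

The sequence number and timestamp recovered here are what the client's decoder uses to reassemble frames in order before handing them to the video codec.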


2008 ◽  
Author(s):  
A. Lapeyronnie ◽  
C. Parisot ◽  
J. Meessen ◽  
X. Desurmont ◽  
J.-F. Delaigle

Author(s):  
Qingtao Wu ◽  
Zaihui Cao

Cloud monitoring technology is an important maintenance and management tool for cloud platforms. A cloud monitoring system is a kind of network monitoring service, monitoring technology, and monitoring platform based on the Internet. At present, monitoring systems are shifting from local monitoring to cloud monitoring, which improves flexibility and convenience but also exposes more security issues: cloud video may be intercepted or altered in transit, and most existing encryption algorithms have shortcomings in real-time performance or security. Addressing the current security problems of cloud video surveillance, this paper proposes a new video encryption algorithm based on the H.264 standard. Using the FMO (flexible macroblock ordering) mechanism, related macroblocks can be assigned to different slices, so the proposed algorithm can protect the whole video content by encrypting only the FMO sub-images. The method has high real-time performance, and the encryption can execute in parallel with the coding process. The algorithm can also be combined with a traditional scrambling algorithm to further improve the encryption effect. Because only a selected part of the video data is encrypted, the amount of data to be encrypted, and thus the computational complexity of the encryption system, is reduced; the resulting faster encryption improves both real-time performance and security, making the scheme suitable for transmission over mobile and wireless multimedia networks.
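The core idea, encrypting only selected slices rather than the whole bitstream, can be sketched in isolation from the H.264 internals. The sketch below is purely illustrative: it treats slices as opaque byte strings, selects some by index, and XORs them with a hash-derived keystream. A real implementation would operate on the FMO slice groups inside the encoder and use a vetted cipher such as AES-CTR.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key||nonce||counter.
    Illustrative only; production code should use a standard cipher."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_selected_slices(slices, selected, key):
    """XOR-encrypt only the slices whose indices are in `selected`,
    mirroring the idea of encrypting one slice group instead of the
    whole bitstream. XOR is symmetric, so decryption is the same call."""
    result = []
    for i, payload in enumerate(slices):
        if i in selected:
            ks = keystream(key, i.to_bytes(4, "big"), len(payload))
            payload = bytes(a ^ b for a, b in zip(payload, ks))
        result.append(payload)
    return result
```

Because each selected slice is processed independently, the encryption of one slice can run in parallel with the encoding of the next, which is the parallelism the abstract claims for the FMO-based scheme.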


2021 ◽  
Vol 11 (11) ◽  
pp. 4940
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

Research on video data faces the difficulty of extracting not only spatial but also temporal features, and human action recognition (HAR) is a representative field that applies convolutional neural networks (CNNs) to video data. Action recognition performance has improved, but owing to model complexity, some limitations to real-time operation persist. Therefore, a lightweight CNN-based single-stream HAR model that can operate in real time is proposed. The proposed model extracts spatial feature maps by applying a CNN to the images that compose the video and uses the frame change rate of sequential images as temporal information. Spatial feature maps are weighted-averaged by frame change, transformed into spatiotemporal features, and fed into a multilayer perceptron, which has a relatively lower complexity than other HAR models; thus, the method has high utility in a single embedded system connected to CCTV. Evaluation of action recognition accuracy and data processing speed on the challenging UCF-101 action recognition benchmark showed higher accuracy than a HAR model using long short-term memory when only a small number of video frames is available, and the fast data processing speed confirmed the possibility of real-time operation. In addition, the proposed weighted-mean-based HAR model was tested on a Jetson Nano to verify its suitability for low-cost GPU-based embedded systems.
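The weighted-averaging step can be sketched with plain Python lists standing in for CNN feature maps. This is a schematic reading of the abstract, not the authors' code: we assume each feature vector is paired with the change rate of the frame it came from, and use mean absolute pixel difference as the change measure.

```python
def frame_change_rates(frames):
    """Mean absolute pixel difference between consecutive frames,
    a simple proxy for motion (the temporal information)."""
    rates = []
    for prev, cur in zip(frames, frames[1:]):
        rates.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return rates

def weighted_spatiotemporal_feature(feature_maps, change_rates):
    """Weighted average of per-frame spatial feature vectors, with
    weights proportional to each frame's change rate, yielding one
    spatiotemporal feature vector for the MLP classifier.
    Assumes len(feature_maps) == len(change_rates)."""
    total = sum(change_rates)
    if total == 0:                      # static clip: fall back to uniform weights
        weights = [1.0 / len(feature_maps)] * len(feature_maps)
    else:
        weights = [r / total for r in change_rates]
    dim = len(feature_maps[0])
    return [sum(w * fm[d] for w, fm in zip(weights, feature_maps))
            for d in range(dim)]
```

The single fixed-size vector this produces is what keeps the downstream classifier a low-complexity MLP instead of a recurrent network over the whole frame sequence.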


2021 ◽  
pp. 104687812110082
Author(s):  
Omamah Almousa ◽  
Ruby Zhang ◽  
Meghan Dimma ◽  
Jieming Yao ◽  
Arden Allen ◽  
...  

Objective. Although simulation-based medical education is fundamental for the acquisition and maintenance of knowledge and skills, simulators are often located in urban centers and are not easily accessible due to cost, time, and geographic constraints. Our objective is to develop a proof-of-concept prototype using virtual reality (VR) technology for clinical telesimulation training, to facilitate access and global academic collaborations. Methodology. Our project is a VR-based system using the Oculus Quest as a standalone, portable, and wireless head-mounted device, along with a digital platform to deliver immersive clinical simulation sessions. The instructor's control panel (ICP) application is designed to create VR clinical scenarios remotely, live-stream sessions, communicate with learners, and control VR clinical training in real time. Results. The Virtual Clinical Simulation (VCS) system offers realistic clinical training in a virtual space that mimics hospital environments. The VR clinical scenarios are customizable to suit the need, with high-fidelity lifelike characters designed to deliver an interactive and immersive learning experience. The real-time connection and live stream between the ICP and the VR training system enable interactive academic learning and facilitate access to telesimulation training. Conclusions. The VCS system provides innovative solutions to major challenges associated with conventional simulation training, such as access, cost, personnel, and curriculum. VCS facilitates the delivery of academic and interactive clinical training similar to real-life settings. Tele-clinical simulation systems like VCS facilitate necessary academic-community partnerships, as well as a global education network between resource-rich and low-income countries.


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4045
Author(s):  
Alessandro Sassu ◽  
Jose Francisco Saenz-Cogollo ◽  
Maurizio Agelli

Edge computing is the best approach for meeting the exponential demand and the real-time requirements of many video analytics applications. Since most recent advances in extracting information from images and video rely on computation-heavy deep learning algorithms, there is a growing need for solutions that allow the deployment and use of new models on scalable and flexible edge architectures. In this work, we present Deep-Framework, a novel open-source framework for developing edge-oriented real-time video analytics applications based on deep learning. Deep-Framework has a scalable multi-stream architecture based on Docker and abstracts away from the user the complexity of cluster configuration, orchestration of services, and GPU resource allocation. It provides Python interfaces for integrating deep learning models developed with the most popular frameworks, as well as high-level APIs based on standard HTTP and WebRTC interfaces for consuming the extracted video data on clients running in browsers or any other web-based platform.
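A client consuming extracted video data over the HTTP API might look like the sketch below. The abstract only states that standard HTTP interfaces exist; the route `/streams/{id}/latest` and the JSON schema with a `detections` list are hypothetical, invented for illustration, and would need to be replaced with Deep-Framework's actual endpoints.

```python
import json
from urllib.request import urlopen

def parse_detections(payload: dict):
    """Extract (label, confidence) pairs from a hypothetical analytics
    result payload of the form {"detections": [{"label": ..., "confidence": ...}]}."""
    return [(d["label"], d["confidence"]) for d in payload.get("detections", [])]

def fetch_detections(base_url: str, stream_id: str):
    """Poll the latest analytics result for one stream (hypothetical route)."""
    with urlopen(f"{base_url}/streams/{stream_id}/latest") as resp:
        return parse_detections(json.load(resp))
```

Keeping the parsing separate from the transport makes the client testable without a running cluster, which matters when the analytics service only exists at the edge.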

