An Early Fire and Smoke Detection Model for Surveillance Systems Based on Dilated CNNs

2020 ◽  
Vol 11 (4) ◽  
pp. 51-66
Author(s):  
Yakhyokhuja Valikhujaev ◽  
Makhmudov Fazliddin ◽  
Muksimova Shakhnoza ◽  
YoungIm Cho


Symmetry ◽  
2020 ◽  
Vol 12 (7) ◽  
pp. 1075
Author(s):  
Md Rashedul Islam ◽  
Md Amiruzzaman ◽  
Shahriar Nasim ◽  
Jungpil Shin

This article concerns smoke detection in the early stages of a fire. With a computer-aided system, efficient early detection of smoke can prevent a massive fire incident. Smoke detection models that do not account for multiple moving objects in the background or analyze smoke particles (i.e., pattern recognition) show suboptimal performance. To address this, this paper proposes a hybrid smoke segmentation method and an efficient symmetrical simulation model of dynamic smoke that extracts a smoke growth feature from the temporal frames of a video. In the preprocessing stage, smoke is segmented from multiple moving objects on a complex background using a Gaussian Mixture Model (GMM) and HSV (hue-saturation-value) color segmentation to identify candidate smoke and non-smoke regions. The preprocessed temporal frames containing moving smoke are then analyzed by a dynamic smoke growth analysis and a spatial-temporal frame energy feature extraction model. In the dynamic smoke growth analysis, the temporal frames are segmented into blocks and smoke growth representations are formulated from the corresponding blocks. Finally, a binary Support Vector Machine (SVM) with a non-linear Radial Basis Function (RBF) Gaussian kernel is trained on the extracted features to classify and detect smoke. The proposed smoke detection model is validated on multi-conditional video clips. The experimental results suggest that the proposed model outperforms state-of-the-art algorithms.
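
As a rough illustration of the preprocessing stage described above, the sketch below combines OpenCV's Gaussian-mixture background subtractor with an HSV color mask to isolate candidate smoke regions; the HSV bounds and GMM parameters are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: GMM background subtraction combined with HSV color masking
# to isolate candidate smoke regions. Thresholds are illustrative assumptions.
import cv2
import numpy as np

# MOG2 is OpenCV's Gaussian-mixture background model
bg_model = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16,
                                               detectShadows=False)

def candidate_smoke_mask(frame_bgr):
    # Foreground mask from the GMM: moving pixels (smoke plus other moving objects)
    fg_mask = bg_model.apply(frame_bgr)

    # HSV mask: smoke tends to be grayish (low saturation, mid-to-high value).
    # These bounds are assumed for illustration only.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, (0, 0, 80), (180, 60, 255))

    # Keep only moving pixels whose color is smoke-like, then clean up noise
    candidate = cv2.bitwise_and(fg_mask, color_mask)
    candidate = cv2.morphologyEx(candidate, cv2.MORPH_OPEN,
                                 np.ones((3, 3), np.uint8))
    return candidate
```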


Proceedings ◽  
2020 ◽  
Vol 59 (1) ◽  
pp. 9
Author(s):  
Antoine Chevrot ◽  
Alexandre Vernotte ◽  
Pierre Bernabe ◽  
Aymeric Cretin ◽  
Fabien Peureux ◽  
...  

Major transportation surveillance protocols were not specified with cyber security in mind and therefore provide neither encryption nor identification. These issues expose air and sea transport to false data injection attacks (FDIAs), in which an attacker modifies, blocks, or emits fake surveillance messages to dupe controllers and surveillance systems. There has been growing interest in research on machine learning-based anomaly detection systems that address these new threats. However, significant amounts of data are needed to achieve meaningful results with this type of model. Raw, genuine data can be obtained from existing databases but must be preprocessed before being fed to a model. Acquiring anomalous data is another challenge: such data is far too scarce for both the Automatic Dependent Surveillance–Broadcast (ADS-B) and the Automatic Identification System (AIS). Crafting anomalous data by hand, which has been the sole method applied to date, is hardly suitable for broad detection-model testing. This paper proposes an approach built upon existing libraries and ideas that offers ML researchers the tools needed to facilitate access to and processing of genuine data, as well as to automatically generate synthetic anomalous surveillance data, in order to constitute broad, elaborate test datasets. We demonstrate the usability of the approach by discussing work in progress that includes the reproduction of related work, the creation of relevant datasets, and the design of advanced anomaly detection models for both domains of application.
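
As a minimal sketch of what synthetic anomalous surveillance data can look like, the example below injects a gradual position-drift FDIA into a stream of simplified ADS-B-like records; the field names, drift schedule, and labeling scheme are assumptions for illustration and do not reflect the authors' libraries or data formats.

```python
# Hedged sketch of one kind of synthetic FDIA: take genuine surveillance
# records (simplified ADS-B-like dicts) and inject a gradual position-offset
# attack on one track, keeping ground-truth labels for detector evaluation.
import copy

def inject_position_drift(messages, target_icao, meters_per_msg=50.0):
    """Return a copy of the message stream where the target aircraft's
    reported latitude drifts away from its true trajectory."""
    attacked = []
    drift_deg = 0.0
    for msg in messages:
        msg = copy.deepcopy(msg)
        if msg["icao"] == target_icao:
            drift_deg += meters_per_msg / 111_320.0  # rough metres-to-degrees conversion
            msg["lat"] += drift_deg
            msg["label"] = "anomalous"   # ground truth for training/testing
        else:
            msg["label"] = "genuine"
        attacked.append(msg)
    return attacked
```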


Atmosphere ◽  
2020 ◽  
Vol 11 (11) ◽  
pp. 1241
Author(s):  
Yakhyokhuja Valikhujaev ◽  
Akmalbek Abdusalomov ◽  
Young Im Cho

The technologies underlying fire and smoke detection systems play a crucial role in ensuring and delivering optimal performance in modern surveillance environments. Fire can cause significant damage to lives and property. The fact that most cities have already installed camera-monitoring systems encouraged us to take advantage of their availability to develop cost-effective vision-based detection methods. However, this is a complex vision detection task owing to deformations, unusual camera angles and viewpoints, and seasonal changes. To overcome these limitations, we propose a new method based on a deep learning approach, which uses a convolutional neural network that employs dilated convolutions. We evaluated our method by training and testing it on our custom-built dataset, which consists of images of fire and smoke collected from the internet and labeled manually. The performance of our method was compared with that of methods based on well-known state-of-the-art architectures. Our experimental results indicate that the classification performance and complexity of our method are superior. In addition, our method generalizes well to unseen data, which reduces the number of false alarms.
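
A minimal sketch of a classifier built around dilated convolutions is shown below; the layer widths, dilation rates, and assumed fire/smoke/neutral classes are illustrative and not the authors' exact architecture.

```python
# Hedged sketch of a small image classifier using dilated convolutions, which
# enlarge the receptive field without additional pooling. All sizes are assumed.
import torch
import torch.nn as nn

class DilatedFireSmokeNet(nn.Module):
    def __init__(self, num_classes=3):  # e.g., fire / smoke / neutral (assumed classes)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            # Dilated convolutions: same 3x3 kernels, wider spatial context
            nn.Conv2d(32, 64, kernel_size=3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

logits = DilatedFireSmokeNet()(torch.randn(1, 3, 224, 224))  # -> shape [1, 3]
```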


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Xi Cheng

Most existing smoke detection methods rely on manual operation, which makes it difficult to meet the needs of fire monitoring. To further improve the accuracy of smoke detection, an automatic feature extraction and classification method based on the Fast Region-based Convolutional Neural Network (Fast R-CNN) was introduced in this study. The method uses a selective search algorithm to obtain candidate regions from the sample images. The preselected region coordinates and the sample images of the visual task are then used for network learning. During training, a feature transfer method is used to compensate for the scarcity of smoke data and the limited data sources. Finally, an object detection model with well-trained weight parameters is obtained that is strongly tied to the specified visual task. Experimental results show that this method not only improves detection accuracy but also effectively reduces the false alarm rate, meeting the real-time and accuracy requirements of fire detection. Compared with similar fire detection algorithms, the improved algorithm proposed in this paper is more robust and performs better in terms of both accuracy and speed.
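
To illustrate the feature-transfer idea, the sketch below fine-tunes a detector pretrained on a large dataset by replacing its classification head for the smoke class; it uses torchvision's Faster R-CNN as a stand-in, so it does not reproduce the paper's exact Fast R-CNN and selective-search pipeline.

```python
# Hedged sketch of transfer learning for detection: reuse pretrained backbone
# features and retrain only the task-specific head on smoke images.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_smoke_detector(num_classes=2):  # background + smoke (assumed label set)
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the pretrained box predictor so only this head starts from scratch
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model
```

Only the replaced head then needs to be trained from scratch on the smoke dataset, while the pretrained backbone supplies the transferred features.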


Energies ◽  
2020 ◽  
Vol 13 (8) ◽  
pp. 2098
Author(s):  
Alessio Gagliardi ◽  
Sergio Saponara

This paper proposes a video-based smoke detection technique for early warning in antifire surveillance systems. The algorithm is developed to detect smoke behavior in a restricted video surveillance environment, either indoor (e.g., railway carriage, bus wagon, industrial plant, or home/office) or outdoor (e.g., storage area or parking area). The proposed technique exploits a Kalman estimator, color analysis, image segmentation, blob labeling, geometrical feature analysis, and an M of N decisor in order to extract an alarm signal within a strict real-time deadline. The new technique requires just a few seconds to detect fire smoke and is 15 times faster than required by fire-alarm standards for industrial or transport systems, e.g., the EN50155 standard for onboard train fire-alarm systems; indeed, EN50155 allows a response time of up to 60 s for onboard systems. The proposed technique has been tested and compared with state-of-the-art systems using the open-access Firesense dataset, developed as an output of a European FP7 project and including several indoor and outdoor fire/smoke scenes. All detection metrics (recall, accuracy, F1 score, precision, etc.) improve when comparing Advanced Video SmokE Detection (AdViSED) with other video-based antifire works recently proposed in the literature. The proposed technique is flexible in terms of input camera type, frame size, and frame rate, and has been implemented on a low-cost embedded platform to develop a distributed antifire system accessible via a web browser.
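
A minimal sketch of an "M of N" decisor is given below: an alarm fires only when at least M of the last N per-frame smoke decisions are positive, which suppresses sporadic false positives; the values of M and N are illustrative assumptions.

```python
# Hedged sketch of an M-of-N decision stage over per-frame smoke flags.
from collections import deque

class MofNDecisor:
    def __init__(self, m=8, n=12):  # assumed window settings, not the paper's values
        self.m = m
        self.history = deque(maxlen=n)  # sliding window of the last N decisions

    def update(self, frame_has_smoke: bool) -> bool:
        """Feed one per-frame decision; return True when the alarm should fire."""
        self.history.append(bool(frame_has_smoke))
        return sum(self.history) >= self.m
```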


Author(s):  
J. J. Majin ◽  
Y. M. Valencia ◽  
M. E. Stivanello ◽  
M. R. Stemmer ◽  
J. D. Salazar

Abstract. In intelligent transportation systems (ITS), it is essential to obtain reliable statistics of vehicular flow in order to create urban traffic management strategies. These systems have benefited from the increase in computational resources and the improvement of image processing methods, especially in object detection based on deep learning. This paper proposes a method for vehicle counting composed of three stages: object detection, tracking, and trajectory processing. In order to select the detection model with the best trade-off between accuracy and speed, the following one-stage detection models were compared: SSD512, CenterNet, EfficientDet-D0, and the YOLO family models (v2, v3, and v4). Experimental results on the benchmark dataset show that the best rates among the detection models were obtained with YOLOv4, with mAP = 87% and a processing speed of 18 FPS. The accuracy obtained with the proposed counting method was 94%, with a real-time processing rate lower than 1.9.
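
As a sketch of the trajectory-processing stage, the example below counts a tracked vehicle when its trajectory crosses a virtual counting line; the track format and line position are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: count vehicles whose tracked trajectory crosses a horizontal
# counting line. Each track is a list of (x, y) centroids over time.
def count_line_crossings(tracks, line_y):
    """tracks: dict mapping track_id -> list of (x, y) centroids."""
    counted = set()
    for track_id, points in tracks.items():
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            # A crossing occurs when consecutive centroids straddle the line
            if (y0 - line_y) * (y1 - line_y) < 0:
                counted.add(track_id)
                break  # count each vehicle at most once
    return len(counted)
```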


2007 ◽  
Author(s):  
John E. Brooker ◽  
David L. Urban ◽  
Gary A. Ruff

Author(s):  
Muthukumaran Ramasubramanian ◽  
Aaron Kaulfus ◽  
Manil Maskey ◽  
Rahul Ramachandran ◽  
Iksha Gurung ◽  
...  

Author(s):  
Rahul Rawat

Abstract: Localization, visibility, proximity, detection, and recognition have always been challenges for surveillance systems. These challenges are felt in the fields where surveillance systems are used, such as the armed forces, technical agriculture, and others. Most of the smart systems available monitor only human intrusion, but a system that also covers animals is needed, because the growth of the human population and its close contact with wild animals result in loss of life and damage to agriculture. In this paper, we design a real-time human- and animal-based surveillance system to overcome the above-mentioned challenges. The system is set up on a Raspberry Pi integrated with deep-learning models that classify the objects in each frame; the classified objects are then passed to a face detection model for further processing. The detected face is relayed to the back end for feature matching against saved log files containing the features of familiar face IDs. Four models were tested for face detection, of which the DNN model performed best, giving an accuracy of 94.88%. The system can also send alerts to the admin via a communication module if any threat is detected. Keywords: Deep learning, Raspberry Pi, OpenCV, Image Processing, YOLO, Face Recognition
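
One plausible realization of the DNN face-detection stage is sketched below using OpenCV's DNN module with its sample ResNet-10 SSD face detector; the model files named here are an assumption, since the paper does not specify which DNN was used.

```python
# Hedged sketch of DNN-based face detection with OpenCV's cv2.dnn module.
# The ResNet-10 SSD files are OpenCV's sample face detector (an assumption).
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

def detect_faces(frame_bgr, conf_threshold=0.5):
    h, w = frame_bgr.shape[:2]
    # Preprocess to the network's expected 300x300 input with mean subtraction
    blob = cv2.dnn.blobFromImage(cv2.resize(frame_bgr, (300, 300)),
                                 1.0, (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()  # shape: (1, 1, N, 7)
    boxes = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence >= conf_threshold:
            # Box coordinates are normalized; rescale to the frame size
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            boxes.append(box.astype(int).tolist())
    return boxes
```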

