Near-Real Time Post-Disaster Damage Assessment with Airborne Oblique Video Data

Author(s):  
Norman Kerle ◽  
Rob Stekelenburg ◽  
Frank van den Heuvel ◽  
Ben Gorte


2018 ◽  
Vol 18 (6) ◽  
pp. 1583-1598 ◽  
Author(s):  
Johnny Cusicanqui ◽  
Norman Kerle ◽  
Francesco Nex

Abstract. Remote sensing has evolved into the most efficient approach to assessing post-disaster structural damage in extensively affected areas through the use of spaceborne data. For smaller and, in particular, complex urban disaster scenes, multi-perspective aerial imagery obtained with unmanned aerial vehicles and derived dense color 3-D models are increasingly being used. These types of data allow the direct and automated recognition of damage-related features, supporting an effective post-disaster structural damage assessment. However, the rapid collection and sharing of multi-perspective aerial imagery is still limited by tight or lacking regulations and legal frameworks. A potential alternative is aerial video footage, which is typically acquired and shared by civil protection institutions or news media and which tends to be the first type of airborne data available. Nevertheless, inherent artifacts and the lack of suitable processing means have long limited its potential use in structural damage assessment and other post-disaster activities. In this research the usability of modern aerial video data was evaluated based on a comparative quality and application analysis of video data and multi-perspective imagery (photos), and of their derivative 3-D point clouds created using current photogrammetric techniques. Additionally, the effects of external factors, such as topography and the presence of smoke and moving objects, were determined by analyzing two different earthquake-affected sites: Tainan (Taiwan) and Pescara del Tronto (Italy). Results demonstrated similar usability for video and photos, as shown by the small 2 cm difference between the accuracies of video- and photo-based 3-D point clouds. The low video resolution was compensated for by a small ground sampling distance. Where quality and applicability were low, this resulted not from video characteristics but from non-data-related factors, such as changes in the scene, lack of texture, or moving objects. We conclude that not only are current video data more rapidly available than photos, but they also have a comparable ability to assist in image-based structural damage assessment and other post-disaster activities.
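The video-based workflow compared above hinges on turning footage into still frames that a structure-from-motion (SfM) pipeline can ingest like ordinary photos. Below is a minimal frame-sampling sketch, assuming OpenCV; the fixed sampling interval and the variance-of-Laplacian blur filter are illustrative choices, not the study's actual extraction parameters.

```python
import cv2  # OpenCV for video decoding
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, every_n: int = 15,
                   blur_threshold: float = 100.0) -> int:
    """Sample frames from aerial video for photogrammetric (SfM) processing.

    every_n: keep one frame out of every n (controls image overlap).
    blur_threshold: frames whose variance-of-Laplacian falls below this
    value are treated as motion-blurred (a typical video artifact) and
    skipped. Both values are illustrative assumptions.
    """
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    kept, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            if sharpness >= blur_threshold:
                cv2.imwrite(f"{out_dir}/frame_{idx:06d}.jpg", frame)
                kept += 1
        idx += 1
    cap.release()
    return kept
```

The retained frames can then be passed to any photogrammetric SfM and dense-matching tool exactly as a set of multi-perspective photos would be.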


Author(s):  
Qingtao Wu ◽  
Zaihui Cao

Cloud monitoring technology is an important maintenance and management tool for cloud platforms. A cloud monitoring system is an Internet-based network monitoring service, monitoring technology, and monitoring platform. Monitoring is currently shifting from local systems to the cloud, which improves flexibility and convenience but also exposes more security issues: cloud video may be intercepted or altered in transmission, and most existing encryption algorithms have shortcomings in real-time performance and security. Addressing these security problems of cloud video surveillance, this paper proposes a new video encryption algorithm based on the H.264 standard. Using the flexible macroblock ordering (FMO) mechanism, related macroblocks are driven into different slices, so the proposed algorithm can encrypt the whole video content by encrypting only the FMO sub-images. The method has high real-time performance, and the encryption process can be executed in parallel with the coding process. The algorithm can also be combined with a traditional scrambling algorithm to further improve the encryption effect. Because only a selected part of the video data is encrypted, the amount of data to be encrypted, and thus the computational complexity of the encryption system, is reduced, yielding faster encryption, better real-time performance and security, and suitability for transmission over mobile multimedia and wireless multimedia networks.
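As a rough illustration of the selective-encryption idea (a sketch under stated assumptions, not the paper's exact algorithm), the following assigns macroblocks to slice groups with a dispersed FMO-style mapping and ciphers only the payloads of selected groups; the group count and the AES-CTR choice are illustrative.

```python
# Illustrative FMO-style selective encryption, assuming pycryptodome.
from Crypto.Cipher import AES

NUM_GROUPS = 4  # hypothetical number of FMO slice groups

def slice_group(mb_x: int, mb_y: int) -> int:
    """Dispersed (checkerboard-like) macroblock-to-slice-group mapping."""
    return (mb_x + mb_y * 2) % NUM_GROUPS

def encrypt_selected_groups(groups: dict[int, bytes], key: bytes,
                            encrypted_ids: set[int]) -> dict[int, bytes]:
    """Encrypt only the payloads of the chosen slice groups with AES-CTR.

    Because a dispersed mapping scatters each group's macroblocks over the
    whole frame, ciphering a subset still scrambles every region of the
    picture while reducing the volume of data passed through the cipher.
    NOTE: in real use the nonce must also vary per frame to avoid
    keystream reuse; deriving it from the group id alone is a toy choice.
    """
    out = {}
    for gid, payload in groups.items():
        if gid in encrypted_ids:
            cipher = AES.new(key, AES.MODE_CTR, nonce=gid.to_bytes(8, "big"))
            out[gid] = cipher.encrypt(payload)
        else:
            out[gid] = payload
    return out
```

Since each slice group can be ciphered independently of the others, this step can also run in parallel with encoding, consistent with the real-time claim above.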


2021 ◽  
Vol 11 (11) ◽  
pp. 4940
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

Research on video data faces the difficulty of extracting not only spatial but also temporal features, and human action recognition (HAR) is a representative field that applies convolutional neural networks (CNNs) to video data. Action recognition performance has improved, but owing to model complexity, some limitations to real-time operation persist. Therefore, a lightweight CNN-based single-stream HAR model that can operate in real time is proposed. The proposed model extracts spatial feature maps by applying a CNN to the images that compose the video and uses the frame change rate of sequential images as time information. Spatial feature maps are weighted-averaged by frame change rate, transformed into spatiotemporal features, and input into a multilayer perceptron, which has relatively lower complexity than other HAR models; thus, our method is well suited to a single embedded system connected to CCTV. Evaluation of action recognition accuracy and data processing speed on the challenging UCF-101 action recognition benchmark showed higher accuracy than an HAR model using long short-term memory with a small number of video frames, and the fast data processing speed confirmed the possibility of real-time operation. In addition, the performance of the proposed weighted-mean-based HAR model was verified by testing it on a Jetson Nano to confirm its suitability for low-cost GPU-based embedded systems.
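A minimal PyTorch sketch of the weighted-mean idea follows, assuming a generic per-frame CNN backbone that outputs flat feature vectors; the layer sizes and the exact definition of the frame change rate are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class WeightedMeanHAR(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone              # per-frame CNN encoder
        self.mlp = nn.Sequential(             # lightweight classifier head
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.backbone(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)

        # Frame change rate as time information: mean absolute pixel
        # difference between consecutive frames, normalised with softmax.
        diff = (clip[:, 1:] - clip[:, :-1]).abs().mean(dim=(2, 3, 4))
        change = torch.cat([diff[:, :1], diff], dim=1)  # pad first frame
        weights = torch.softmax(change, dim=1)          # (b, t)

        # Weighted average collapses time into one spatiotemporal feature.
        fused = (feats * weights.unsqueeze(-1)).sum(dim=1)
        return self.mlp(fused)
```

Frames with larger scene change thus contribute more to the fused feature, which is how temporal information enters without a recurrent or 3-D convolutional stage.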


2021 ◽  
Vol 13 (5) ◽  
pp. 905
Author(s):  
Chuyi Wu ◽  
Feng Zhang ◽  
Junshi Xia ◽  
Yichen Xu ◽  
Guoqing Li ◽  
...  

Building damage status is vital for planning rescue and reconstruction after a disaster, yet it is hard to detect and to grade. Most existing studies focus on binary classification, and the attention of the model is easily distracted. In this study, we propose a Siamese neural network that can localize and classify damaged buildings in one pass. The main parts of this network are a variety of attention U-Nets using different backbones. The attention mechanism enables the network to pay more attention to effective features and channels, reducing the impact of useless features. We trained the variants on the xBD dataset, a large-scale dataset for the advancement of building damage assessment, and compared their balanced F (F1) scores. The scores demonstrate that SEresNeXt with an attention mechanism gives the best performance, with an F1 score of 0.787. Fusing the results further improved the overall F1 score to 0.792. To verify the transferability and robustness of the model, we investigated its performance on data from the Maxar Open Data Program for two recent disasters. Visual comparison shows that our model is robust and transferable.
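Schematically, the Siamese arrangement can be sketched as below, with one shared encoder applied to both acquisition epochs; the encoder, decoder, and head channel widths are placeholders standing in for the paper's attention U-Nets with SEresNeXt-style backbones, and skip connections are omitted for brevity.

```python
import torch
import torch.nn as nn

class SiameseDamageNet(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module,
                 num_damage_levels: int = 4):
        super().__init__()
        self.encoder = encoder   # one shared encoder = the Siamese branch
        self.decoder = decoder   # upsampling path (attention gates in paper)
        self.loc_head = nn.Conv2d(64, 1, kernel_size=1)  # building mask
        self.dmg_head = nn.Conv2d(64, num_damage_levels, kernel_size=1)

    def forward(self, pre: torch.Tensor, post: torch.Tensor):
        # The same encoder weights process both epochs, so differences in
        # feature space correspond to changes caused by the disaster.
        f_pre, f_post = self.encoder(pre), self.encoder(post)
        fused = self.decoder(torch.cat([f_pre, f_post], dim=1))
        # Two heads: localize buildings and grade their damage at one time.
        return self.loc_head(fused), self.dmg_head(fused)
```

Sharing weights between the two branches is what lets a single network both localize buildings (from either epoch) and grade damage (from their difference) in one forward pass.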


2021 ◽  
pp. 147592172199621
Author(s):  
Enrico Tubaldi ◽  
Ekin Ozer ◽  
John Douglas ◽  
Pierre Gehl

This study proposes a probabilistic framework for near real-time seismic damage assessment that exploits heterogeneous sources of information about the seismic input and the structural response to the earthquake. A Bayesian network is built to describe the relationship between the various random variables that play a role in the seismic damage assessment, ranging from those describing the seismic source (magnitude and location) to those describing the structural performance (drifts and accelerations) as well as relevant damage and loss measures. The a priori estimate of the damage, based on information about the seismic source, is updated by performing Bayesian inference using the information from multiple data sources such as free-field seismic stations, global positioning system receivers and structure-mounted accelerometers. A bridge model is considered to illustrate the application of the framework, and the uncertainty reduction stemming from sensor data is demonstrated by comparing prior and posterior statistical distributions. Two measures are used to quantify the added value of information from the observations, based on the concepts of pre-posterior variance and relative entropy reduction. The results shed light on the effectiveness of the various sources of information for the evaluation of the response, damage and losses of the considered bridge and on the benefit of data fusion from all considered sources.
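A toy, single-variable illustration of the Bayesian updating and value-of-information measures described above, using a conjugate Gaussian model and made-up numbers in place of the paper's full Bayesian network:

```python
import math

def gaussian_update(mu0, s0, obs, s_obs):
    """Conjugate update of a Gaussian prior N(mu0, s0^2) given a noisy
    observation obs ~ N(theta, s_obs^2); returns posterior mean and std."""
    prec = 1.0 / s0**2 + 1.0 / s_obs**2          # posterior precision
    mu1 = (mu0 / s0**2 + obs / s_obs**2) / prec  # precision-weighted mean
    return mu1, math.sqrt(1.0 / prec)

def kl_gaussians(mu1, s1, mu0, s0):
    """Relative entropy KL(posterior || prior) between two Gaussians."""
    return math.log(s0 / s1) + (s1**2 + (mu1 - mu0)**2) / (2 * s0**2) - 0.5

# Prior drift estimate from source/ground-motion information alone
# (illustrative numbers), updated with one structural sensor reading.
mu0, s0 = 0.8, 0.40                  # percent drift
mu1, s1 = gaussian_update(mu0, s0, obs=1.1, s_obs=0.15)
print(f"posterior: {mu1:.3f} +/- {s1:.3f}")
print(f"variance reduction: {1 - s1**2 / s0**2:.1%}")
print(f"relative entropy gained: {kl_gaussians(mu1, s1, mu0, s0):.3f} nats")
```

The same two quantities (variance reduction and relative entropy) generalize to the multivariate network setting used in the study, where inference runs over all nodes jointly rather than one variable at a time.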


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4045
Author(s):  
Alessandro Sassu ◽  
Jose Francisco Saenz-Cogollo ◽  
Maurizio Agelli

Edge computing is the best approach for meeting the exponential demand and the real-time requirements of many video analytics applications. Since most recent advances in extracting information from images and video rely on computation-heavy deep learning algorithms, there is a growing need for solutions that allow the deployment and use of new models on scalable and flexible edge architectures. In this work, we present Deep-Framework, a novel open-source framework for developing edge-oriented real-time video analytics applications based on deep learning. Deep-Framework has a scalable multi-stream architecture based on Docker and abstracts away from the user the complexity of cluster configuration, service orchestration, and GPU resource allocation. It provides Python interfaces for integrating deep learning models developed with the most popular frameworks, as well as high-level APIs based on standard HTTP and WebRTC interfaces for consuming the extracted video data in clients running in browsers or on any other web-based platform.
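To give a sense of how a client might consume such extracted data over HTTP, here is a hypothetical polling sketch; the endpoint URL and JSON fields are invented for illustration and do not reflect Deep-Framework's actual API.

```python
import time
import requests

# Hypothetical endpoint exposing per-frame analytics results (made up).
STREAM_URL = "http://edge-node.local:8000/streams/cam01/results"

def poll_results(interval_s: float = 0.5):
    """Fetch the latest extracted video metadata at a fixed interval."""
    while True:
        resp = requests.get(STREAM_URL, timeout=2)
        resp.raise_for_status()
        # "detections", "label", and "bbox" are assumed field names.
        for detection in resp.json().get("detections", []):
            print(detection["label"], detection["bbox"])
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_results()
```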

