Probabilistic assessment of time to cracking of concrete cover due to corrosion using semantic segmentation of imaging probe sensor data

2021, Vol 132, pp. 103963
Author(s): Vasantha Ramani, Limao Zhang, Kevin Sze Chiang Kuang
Energies, 2021, Vol 14 (2), pp. 353
Author(s): Yu Hou, Rebekka Volk, Lucio Soibelman

Multi-sensor imagery data have been used by researchers for the semantic segmentation of buildings and outdoor scenes. Because segmentation models are data-hungry, researchers have implemented many simulation approaches to create synthetic datasets, including synthesized thermal images, since thermal information can potentially improve segmentation accuracy. However, current approaches are mostly physics-based and are constrained by the level of detail (LOD) of the underlying geometric models, which describes the overall planning or modeling state. Another issue with current physics-based approaches is that the rendered thermal images cannot be aligned to RGB images, because the configuration of the virtual camera used for rendering is difficult to synchronize with that of the real camera used for capturing the RGB images; such alignment is important for segmentation. In this study, we propose an image translation approach that converts RGB images directly into simulated thermal images in order to expand segmentation datasets, and we investigate its benefits for generating synthetic aerial thermal images in comparison with physics-based approaches. Our datasets are drawn from a city center and a university campus in Karlsruhe, Germany. We found that a translation model trained on the city-center data generated better thermal images for the campus dataset than a model trained on the campus data did for the city center. We also found that a model trained on one building style generated good thermal images for datasets with the same building style. We therefore suggest that, for an image translation approach, the training dataset should contain richer and more diverse architectural information, more complex envelope structures, and building styles similar to those of the test data.
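The abstract does not name the translation model. As an illustration of the general technique, a paired image-to-image translation network in the pix2pix style is a common choice for RGB-to-thermal mapping; the sketch below (PyTorch) is a minimal example under that assumption, with `Generator` and `Discriminator` standing in for any encoder-decoder / PatchGAN pair.

```python
# Minimal sketch of paired RGB-to-thermal image translation (pix2pix-style).
# Hypothetical: the paper does not specify its architecture; gen and disc
# stand in for any encoder-decoder generator / PatchGAN discriminator pair.
import torch
import torch.nn as nn

def train_step(gen, disc, opt_g, opt_d, rgb, thermal, l1_weight=100.0):
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator: real (RGB, thermal) pairs vs. generated pairs.
    fake = gen(rgb)
    d_real = disc(torch.cat([rgb, thermal], dim=1))
    d_fake = disc(torch.cat([rgb, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator and stay close to the real thermal image.
    d_fake = disc(torch.cat([rgb, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, thermal)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_g.item(), loss_d.item()
```

The L1 term keeps the simulated thermal image pixel-wise close to the reference, while the adversarial term sharpens it.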


2017, Vol 44 (8), pp. 598-610
Author(s): M. Hadiuzzaman, Nazmul Haque, Sarder Rafee Musabbir, Md. Atiqul Islam, Sanjana Hossain, ...

This study deals with the reconstruction of vehicle trajectories using a data fusion framework that combines video and probe sensor data under heterogeneous traffic conditions. The framework applies the variational formulation (VF) of kinematic wave theory to multi-lane conditions. The VF requires cumulative counts and a reference trajectory as boundary conditions, as well as a lopsided network generated from fundamental diagram (FD) parameters. The cumulative counts and FD parameters are obtained from the video sensor, while the reference vehicle trajectory is obtained from the probe sensor. The analysis shows that the framework can estimate a trajectory with 83% accuracy from the nearest reference trajectory; the accuracy decreases as the reference trajectory gets farther from the estimated one. Additionally, an extension of the VF that accommodates roadway side friction is presented. The FD and the lopsided network are re-formed when the roadway capacity varies due to side friction, and the estimated vehicle trajectory bends accordingly to accommodate the capacity fluctuation.
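For context, the variational formulation evaluates cumulative counts via a shortest-path problem over the boundary data; in Daganzo's notation (stated here for reference, not taken from this abstract):

$$N(t,x) = \min_{\mathcal{P}} \Big\{ N(t_0,x_0) + \int_{\mathcal{P}} R\big(\dot{x}(s)\big)\,ds \Big\}, \qquad R(v) = \sup_{k}\big[\,Q(k) - k\,v\,\big],$$

where \(Q(k)\) is the fundamental diagram and each valid path \(\mathcal{P}\) runs from a boundary point \((t_0,x_0)\) with known cumulative count to \((t,x)\), with slopes bounded by the FD wave speeds. The lopsided network is a discretization of this path space.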


2021, Vol 15 (03), pp. 293-312
Author(s): Fabian Duerr, Hendrik Weigel, Jürgen Beyerer

One of the key tasks for autonomous vehicles or robots is a robust perception of their 3D environment, which is why they are equipped with a wide range of different sensors. Building upon a robust sensor setup, understanding and interpreting the 3D environment is the next important step. Semantic segmentation of 3D sensor data, e.g. point clouds, provides valuable information for this task and is often seen as a key enabler for 3D scene understanding. This work presents an iterative deep fusion architecture for semantic segmentation of 3D point clouds, which builds upon a range image representation of the point clouds and additionally exploits camera features to increase accuracy and robustness. In contrast to other approaches, which fuse lidar and camera features only once, the proposed fusion strategy iteratively combines and refines lidar and camera features at different scales inside the network architecture. The approach can also deal with camera failure and can jointly predict lidar and camera segmentation. We demonstrate the benefits of the presented iterative deep fusion approach on two challenging datasets, outperforming all range-image-based lidar and fusion approaches. An in-depth evaluation underlines the effectiveness of the proposed fusion strategy and the potential of camera features for 3D semantic segmentation.
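The paper's exact architecture is not reproduced here; the sketch below (PyTorch, hypothetical layer names) illustrates the general idea of iteratively refining range-image lidar features with projected camera features, including a zero-feature fallback for camera failure.

```python
# Sketch of an iterative lidar-camera feature fusion block (PyTorch).
# Hypothetical design; the paper's exact fusion module may differ.
import torch
import torch.nn as nn

class IterativeFusion(nn.Module):
    """Repeatedly refines lidar (range-image) features with camera features."""
    def __init__(self, lidar_ch, cam_ch, steps=3):
        super().__init__()
        self.steps = steps
        self.proj = nn.Conv2d(cam_ch, lidar_ch, kernel_size=1)  # align channels
        self.refine = nn.Sequential(
            nn.Conv2d(2 * lidar_ch, lidar_ch, 3, padding=1),
            nn.BatchNorm2d(lidar_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, lidar_feat, cam_feat):
        # cam_feat is assumed already warped into the range-image view;
        # if the camera fails, a zero tensor keeps the lidar path functional.
        cam = self.proj(cam_feat) if cam_feat is not None \
              else torch.zeros_like(lidar_feat)
        for _ in range(self.steps):
            lidar_feat = lidar_feat + self.refine(
                torch.cat([lidar_feat, cam], dim=1))  # residual refinement
        return lidar_feat
```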


Author(s): F. W. Cathey, D. J. Dailey

New algorithms are presented that use transit vehicles as probes for determining traffic speeds and travel times along freeways and other primary arterials. A mass transit tracking system based on automatic vehicle location data and a Kalman filter used to estimate vehicle position and speed are described. A system of virtual probe sensors that measure transit vehicle speeds by using the track data is also described. Examples showing the correlation between probe data and inductance loop speed-trap data are presented. Also presented is a method that uses probe sensor data to define vehicle speed along an arbitrary roadway as a function of space and time, a speed function. This speed function is used to estimate travel time given an arbitrary starting time. Finally, a graphical application is introduced for viewing real-time speed measurements from a set of virtual sensors that can be located throughout King County, Washington, on arterials and freeways.
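The abstract names a Kalman filter for estimating probe position and speed. A minimal constant-velocity filter over one-dimensional route positions is sketched below (NumPy; parameter values are illustrative, not from the paper).

```python
# Minimal constant-velocity Kalman filter for a transit probe (NumPy).
# Illustrative only: state is [position, speed] along the route; the paper's
# tracker operates on automatic vehicle location (AVL) reports.
import numpy as np

def kalman_track(z, dt=30.0, q=0.5, r=25.0):
    """z: 1D array of positions (m) from AVL; returns filtered [pos, speed]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process noise
    R = np.array([[r]])                            # measurement noise
    x = np.array([z[0], 0.0]); P = np.eye(2) * 100.0
    out = []
    for zk in z:
        x = F @ x; P = F @ P @ F.T + Q             # predict
        y = zk - H @ x                             # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P                # update
        out.append(x.copy())
    return np.array(out)
```

The filtered speeds from such a tracker are what the virtual probe sensors sample along the roadway.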


Sensors, 2018, Vol 18 (11), pp. 3774
Author(s): Xuran Pan, Lianru Gao, Bing Zhang, Fan Yang, Wenzhi Liao

Semantic segmentation of high-resolution aerial images is of great importance in many fields, but increasing spatial resolution brings large intra-class variance and small inter-class differences that can lead to classification ambiguities. Deep convolutional neural networks (DCNNs), which exploit high-level contextual features, are an effective approach to semantic segmentation of high-resolution aerial imagery. In this work, a novel dense pyramid network (DPN) is proposed for semantic segmentation. The network starts with group convolutions that process the multi-sensor data channel-wise, extracting the feature maps of each channel separately so that more information from each channel is preserved; a channel shuffle operation then enhances the representation ability of the network. Next, four densely connected convolutional blocks extract and fully exploit the features. A pyramid pooling module combined with two convolutional layers fuses multi-resolution and multi-sensor features through an effective global scene prior, producing a probability map for each class. Moreover, a median-frequency-balanced focal loss is proposed to replace the standard cross-entropy loss in the training phase to deal with the class imbalance problem. We evaluate the dense pyramid network on the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen and Potsdam 2D semantic labeling datasets, and the results demonstrate that the proposed framework outperforms state-of-the-art baselines.
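The median-frequency-balanced focal loss is described only by name; a plausible reading combines median-frequency class weights with the standard focal modulation, as in this sketch (PyTorch; the authors' exact formulation may differ in detail).

```python
# Sketch of a median-frequency-balanced focal loss (PyTorch).
# Assumption: per-class weights are median(freq) / freq, applied inside the
# focal loss; this is one plausible reading of the loss named in the abstract.
import torch
import torch.nn.functional as F

def mfb_focal_loss(logits, target, class_freq, gamma=2.0, ignore_index=255):
    """logits: (N, C, H, W); target: (N, H, W); class_freq: (C,) pixel freqs."""
    weights = (class_freq.median() / class_freq).to(logits.device)
    valid = target != ignore_index
    t = target.clone()
    t[~valid] = 0                                  # dummy class for ignored pixels
    logp = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(logp, t, reduction="none")     # unweighted cross entropy
    pt = torch.exp(-ce)                            # probability of the true class
    loss = weights[t] * (1.0 - pt) ** gamma * ce   # balance + focal modulation
    return loss[valid].mean()
```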


Author(s): L. S. Obrock, E. Gülch

The automated generation of a BIM model from sensor data is a huge challenge for the modeling of existing buildings. Currently, the measurements and analyses are time consuming, allow little automation, and require expensive equipment; automated acquisition of semantic information about objects in a building is lacking. We present first results of our approach, based on imagery and derived products, aiming at a more automated modeling of building interiors for a BIM model. We examine the building parts and objects visible in the collected images using deep learning methods based on convolutional neural networks. For localization and classification of building parts we apply the FCN8s model for pixel-wise semantic segmentation, so far reaching a pixel accuracy of 77.2% and a mean intersection over union of 44.2%. We then use the network for further reasoning on the images of the interior rooms: we combine the segmented images with the original images and use photogrammetric methods to produce a three-dimensional point cloud, coding the extracted object types as colours of the 3D points. We are thus able to uniquely classify the points in three-dimensional space. We also make a preliminary investigation of a simple method for extracting the colour and material of building parts. It is shown that the combined images are well suited to extracting further semantic information for the BIM model. With the presented methods we see a sound basis for further automation of the acquisition and modeling of semantic and geometric information of interior rooms for a BIM model.
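As an illustration of the colour-coding step, the following sketch (NumPy, with a hypothetical class-to-colour mapping) attaches per-pixel class labels from the segmented images to the points of the photogrammetric cloud:

```python
# Sketch: attach class labels from segmented images to a photogrammetric
# point cloud by encoding each class as an RGB colour (NumPy).
# The class-to-colour table is hypothetical; the paper codes object types
# as colours of the 3D points.
import numpy as np

CLASS_COLORS = {0: (128, 128, 128),   # wall
                1: (200, 120, 40),    # door
                2: (60, 160, 220)}    # window

def colorize_points(points_uv, label_map):
    """points_uv: (N, 2) integer pixel coords of each 3D point's source pixel;
    label_map: (H, W) per-pixel class ids from the FCN8s segmentation."""
    labels = label_map[points_uv[:, 1], points_uv[:, 0]]
    colors = np.array([CLASS_COLORS[int(c)] for c in labels], dtype=np.uint8)
    return labels, colors  # write colours into the point cloud file (e.g. PLY)
```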

