Street-Level Imagery and Deep Learning for Characterization of Exposed Buildings

Author(s):  
Patrick Aravena Pelizari ◽  
Christian Geiß ◽  
Elisabeth Schoepfer ◽  
Torsten Riedlinger ◽  
Paula Aguirre ◽  
...  

Knowledge of the key structural characteristics of exposed buildings is crucial for accurate risk modeling with regard to natural hazards. In risk assessment this information is used to link exposed buildings with specific representative vulnerability models and is thus a prerequisite for implementing sound risk models. Acquiring such data through conventional building surveys is usually highly expensive in terms of labor, time, and money. Institutional databases such as census or tax assessor data provide alternative sources of information; such data, however, are often inappropriate, out of date, or unavailable. Today, the large-area availability of systematically collected street-level data through global initiatives such as Google Street View, among others, offers new possibilities for the collection of in-situ data. At the same time, developments in machine learning and computer vision, in deep learning in particular, show high accuracy in solving perceptual tasks in the image domain. On this basis, we explore the potential of an automated and thus efficient collection of vulnerability-related building characteristics. To this end, we elaborated a workflow in which the inference of building characteristics (e.g., the seismic building structural type, the material of the lateral load-resisting system, or the building height) from geotagged street-level imagery is tasked to a custom-trained deep convolutional neural network. The approach is applied and evaluated for the earthquake-prone Chilean capital Santiago de Chile. Experimental results show high accuracy in the derivation of the addressed target variables, emphasizing the potential of the proposed methodology to contribute to large-area collection of in-situ information on exposed buildings.
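A minimal sketch of the final classification step such a workflow might use, assuming the custom-trained CNN emits one raw logit per target class; the class names below are hypothetical placeholders, not the authors' label scheme:

```python
import math

# Hypothetical classes for the seismic building structural type; the actual
# label scheme used by the authors is not given in the abstract.
STRUCTURAL_TYPES = ["reinforced_concrete", "confined_masonry", "timber", "steel"]

def softmax(logits):
    """Convert raw CNN output logits into class probabilities."""
    m = max(logits)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, classes=STRUCTURAL_TYPES):
    """Return the most probable class and its probability."""
    probs = softmax(logits)
    best = max(range(len(classes)), key=lambda i: probs[i])
    return classes[best], probs[best]

label, prob = classify([2.1, 0.3, -1.0, 0.8])
```

In practice one such head would be trained per target variable (structural type, wall material, height class), all sharing the same image backbone.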

2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used for recognizing some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were marked in the images. The neural network was then trained to detect each part. The results show that YOLOv5s is able to detect the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals learning to deal with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
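Training YOLOv5 on a custom dataset relies on plain-text annotations: one line per marked object, with box coordinates normalized to the image size. A small sketch of decoding such a line into a pixel bounding box (the part names are hypothetical placeholders, not the paper's label set):

```python
# Hypothetical mapping from YOLO class ids to car-part names.
PART_NAMES = {0: "alternator", 1: "oil_filter", 2: "radiator_cap"}

def parse_yolo_line(line, img_w, img_h):
    """Convert 'cls x_c y_c w h' (all coords normalized to [0, 1])
    into (part_name, (left, top, width, height)) in pixels."""
    cls, x_c, y_c, w, h = line.split()
    cls = int(cls)
    x_c, y_c, w, h = (float(v) for v in (x_c, y_c, w, h))
    left = (x_c - w / 2) * img_w        # box center -> top-left corner
    top = (y_c - h / 2) * img_h
    return PART_NAMES.get(cls, "unknown"), (round(left), round(top),
                                            round(w * img_w), round(h * img_h))

name, box = parse_yolo_line("1 0.5 0.5 0.2 0.1", img_w=640, img_h=480)
```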


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1031
Author(s):  
Joseba Gorospe ◽  
Rubén Mulero ◽  
Olatz Arbelaitz ◽  
Javier Muguerza ◽  
Miguel Ángel Antón

Deep learning techniques are increasingly used in the scientific community as a consequence of the high computational capacity of current systems and the growing amount of data available, a result of the digitalisation of society in general and the industrial world in particular. In addition, the emergence of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems with the capacity to make correct decisions and act on them immediately and in situ. Despite this, the low capacity of embedded systems greatly hinders this integration, so the ability to integrate them into a wide range of micro-controllers can be a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing the introduction of deep learning architectures. The experiments herein prove that the proposed system is competitive when compared with other commercial systems.
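One reason TensorFlow Lite models fit on micro-controllers is post-training quantization, which stores weights and activations as 8-bit integers under an affine mapping with a scale and a zero point. A minimal sketch of that mapping (the scale and zero-point values here are illustrative):

```python
# Affine int8 quantization as used by TensorFlow Lite:
#   q = round(x / scale) + zero_point, clamped to the int8 range,
# and the approximate inverse x ≈ (q - zero_point) * scale.
INT8_MIN, INT8_MAX = -128, 127

def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(INT8_MIN, min(INT8_MAX, q))   # clamp to int8

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zero_point = 0.05, 3                  # illustrative parameters
q = quantize(1.0, scale, zero_point)         # round(1.0 / 0.05) + 3 = 23
x = dequantize(q, scale, zero_point)         # (23 - 3) * 0.05 = 1.0
```

The quantization error is bounded by half the scale, which is why an 8-bit model can stay close to its float baseline while shrinking memory use by roughly 4x.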


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Gregory Palmer ◽  
Mark Green ◽  
Emma Boyland ◽  
Yales Stefano Rios Vasconcelos ◽  
Rahul Savani ◽  
...  

Abstract While outdoor advertisements are common features within towns and cities, they may reinforce social inequalities in health. Vulnerable populations in deprived areas may have greater exposure to fast food, gambling, and alcohol advertisements, which may encourage their consumption. Understanding who is exposed and evaluating potential policy restrictions requires a substantial manual data collection effort. To address this problem we develop a deep learning workflow to automatically extract and classify unhealthy advertisements from street-level images. We introduce the Liverpool 360° Street View (LIV360SV) dataset for evaluating our workflow. The dataset contains 25,349 360° street-level images collected by cycling with a GoPro Fusion camera, recorded January 14th–18th, 2020. 10,106 advertisements were identified and classified as food (1335), alcohol (217), gambling (149), and other (8405). We find evidence of social inequalities, with a larger proportion of food advertisements located within deprived areas and areas frequented by students. Our project presents a novel implementation of incidental classification of street view images for identifying unhealthy advertisements, providing a means to identify areas that could benefit from tougher advertisement restriction policies for tackling social inequalities.
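The reported category counts translate directly into advertisement shares; a quick sketch using the numbers from the abstract:

```python
# Category counts reported for the LIV360SV dataset.
counts = {"food": 1335, "alcohol": 217, "gambling": 149, "other": 8405}

total = sum(counts.values())                      # 10,106 advertisements in total
shares = {k: v / total for k, v in counts.items()}
food_share = shares["food"]                       # food ads as a fraction of all ads
```

Comparing such shares between deprived and non-deprived areas is the kind of aggregate the study uses as evidence of unequal exposure.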


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Aydin Demircioğlu ◽  
Magdalena Charis Stein ◽  
Moon-Sung Kim ◽  
Henrike Geske ◽  
Anton S. Quinsten ◽  
...  

Abstract For CT pulmonary angiograms, a scout view obtained in anterior–posterior projection is usually used for planning. For bolus tracking, the radiographer manually locates a position in the CT scout view where the pulmonary trunk will be visible in an axial CT pre-scan. We automate the task of localizing the pulmonary trunk in CT scout views using deep learning methods. In 620 eligible CT scout views of 563 patients, acquired between March 2003 and February 2020, the region of the pulmonary trunk as well as an optimal slice (“reference standard”) for bolus tracking, in which the pulmonary trunk was clearly visible, was annotated and used to train a U-Net predicting the region of the pulmonary trunk in the CT scout view. The network’s performance was subsequently evaluated on 239 CT scout views from 213 patients and compared with the annotations of three radiographers. The network localized the region of the pulmonary trunk with high accuracy, selecting a slice within the region of the pulmonary trunk in 97.5% of cases on the validation cohort. On average, the selected position had a distance of 5.3 mm from the reference standard. Compared to radiographers, using a non-inferiority test (one-sided, paired Wilcoxon rank-sum test), the network performed as well as each radiographer (P < 0.001 in all cases). Automated localization of the region of the pulmonary trunk in CT scout views is possible with high accuracy and is non-inferior to three radiographers.
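A minimal sketch of the two evaluation quantities the study reports, assuming each prediction is a slice position in millimetres, each annotation provides the reference-standard position and the extent of the trunk region; all numbers below are made up for illustration, not the study's data:

```python
def evaluate(predictions, references, regions):
    """Fraction of predictions falling inside the annotated trunk region,
    plus mean absolute distance to the reference-standard slice (mm)."""
    hits = sum(lo <= p <= hi for p, (lo, hi) in zip(predictions, regions))
    mean_dist = sum(abs(p - r) for p, r in zip(predictions, references)) / len(predictions)
    return hits / len(predictions), mean_dist

preds = [102.0, 95.5, 110.0]                   # predicted slice positions (mm)
refs = [100.0, 98.0, 109.0]                    # reference-standard positions (mm)
regions = [(90, 110), (92, 104), (105, 118)]   # annotated trunk regions (mm)

accuracy, mean_distance = evaluate(preds, refs, regions)
```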


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Mohsen Moazzami Gudarzi ◽  
Maryana Asaad ◽  
Boyang Mao ◽  
Gergo Pinter ◽  
Jianqiang Guo ◽  
...  

Abstract The use of two-dimensional materials in bulk functional applications requires the ability to fabricate defect-free 2D sheets with large aspect ratios. Despite huge research efforts, current bulk exfoliation methods require a compromise between the quality of the final flakes and their lateral size, restricting the effectiveness of the product. In this work, we describe an intercalation-assisted exfoliation route, which allows the production of high-quality graphene, hexagonal boron nitride, and molybdenum disulfide 2D sheets with average aspect ratios 30 times larger than those obtained via conventional liquid-phase exfoliation. The combination of chlorosulfuric acid intercalation with in situ pyrene sulfonate functionalisation produces a suspension of thin large-area flakes, which are stable in various polar solvents. The described method is simple and requires no special laboratory conditions. We demonstrate that these suspensions can be used for the fabrication of laminates and coatings with electrical properties suitable for a number of real-life applications.


2021 ◽  
Vol 13 (12) ◽  
pp. 2417
Author(s):  
Savvas Karatsiolis ◽  
Andreas Kamilaris ◽  
Ian Cole

Estimating the height of buildings and vegetation in single aerial images is a challenging problem. A task-focused deep learning (DL) model that combines architectural features from successful DL models (U-Net and residual networks) and learns the mapping from a single aerial image to a normalized Digital Surface Model (nDSM) was proposed. The model was trained on aerial images whose corresponding Digital Surface Models (DSM) and Digital Terrain Models (DTM) were available and was then used to infer the nDSM of images with no elevation information. The model was evaluated on a dataset covering a large area of Manchester, UK, as well as the 2018 IEEE GRSS Data Fusion Contest LiDAR dataset. The results suggest that the proposed DL architecture is suitable for the task and surpasses other state-of-the-art DL approaches by a large margin.
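The training target follows directly from the available elevation data: the normalized DSM is the surface elevation minus the bare-earth terrain elevation, i.e. object height above ground. A tiny sketch of that derivation (the grids below are illustrative values, not study data):

```python
def compute_ndsm(dsm, dtm):
    """nDSM = DSM - DTM, per grid cell, clamping small negative
    residuals (sensor noise) to zero height above ground."""
    return [[max(s - t, 0.0) for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]

dsm = [[52.0, 60.5], [51.0, 49.8]]   # surface elevation (m), incl. buildings/trees
dtm = [[50.0, 50.5], [51.0, 50.0]]   # bare-earth terrain elevation (m)
ndsm = compute_ndsm(dsm, dtm)        # object heights above ground (m)
```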


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 443
Author(s):  
Chyan-long Jan

Because of financial information asymmetry, stakeholders usually do not know a company’s real financial condition until financial distress occurs. Financial distress not only influences a company’s operational sustainability and damages the rights and interests of its stakeholders, it may also harm the national economy and society; hence, it is very important to build high-accuracy financial distress prediction models. The purpose of this study is to build high-accuracy and effective financial distress prediction models using two representative deep learning algorithms: deep neural networks (DNN) and convolutional neural networks (CNN). In addition, important variables are selected by the chi-squared automatic interaction detector (CHAID). In this study, the data of Taiwan’s listed and OTC sample companies are taken from the Taiwan Economic Journal (TEJ) database for the period 2000 to 2019, comprising 86 companies in financial distress and 258 not in financial distress, for a total of 344 companies. According to the empirical results, with the important variables selected by CHAID and modeling by CNN, the CHAID-CNN model achieves the highest financial distress prediction accuracy rate of 94.23%, and the lowest type I and type II error rates, 0.96% and 4.81%, respectively.
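A minimal sketch of how such error rates are computed from a confusion matrix, under one common convention in this literature: a type I error is a distressed company predicted as healthy, and a type II error a healthy company predicted as distressed. The counts below are illustrative only, not the study's results:

```python
def error_rates(tp, fn, tn, fp):
    """tp/fn: distressed firms correctly flagged / missed;
    tn/fp: healthy firms correctly cleared / falsely flagged."""
    type_i = fn / (tp + fn)    # share of distressed firms missed
    type_ii = fp / (tn + fp)   # share of healthy firms falsely flagged
    return type_i, type_ii

# Illustrative confusion-matrix counts for 86 distressed / 258 healthy firms.
type_i, type_ii = error_rates(tp=82, fn=4, tn=250, fp=8)
```

Note that type I errors are usually the costlier kind here, since missing a genuinely distressed firm exposes stakeholders to unexpected losses.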


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2595
Author(s):  
Balakrishnan Ramalingam ◽  
Abdullah Aamir Hayat ◽  
Mohan Rajesh Elara ◽  
Braulio Félix Gómez ◽  
Lim Yi ◽  
...  

Pavement inspection, which mainly involves crack and garbage detection, is essential and carried out frequently. Inspection can be performed by humans or by a dedicated system integrated with pavement sweeping machines. This work proposes a deep learning-based pavement inspection framework for a self-reconfigurable robot named Panthera. The semantic segmentation framework SegNet was adopted to separate the pavement region from other objects. Deep convolutional neural network (DCNN) based object detection is used to detect and localize pavement defects and garbage. Furthermore, a Mobile Mapping System (MMS) was adopted for geotagging the defects. The proposed system was implemented and tested on the Panthera robot, which carries NVIDIA GPU cards. The experimental results on crack and garbage detection show that the proposed technique identifies pavement defects and garbage with high accuracy, and that it is suitable for real-time deployment for garbage detection and, eventually, sweeping or cleaning tasks.
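An illustrative sketch of the geotagging step such an MMS pipeline might use: each GPS fix is a timestamped (lat, lon) record, and a defect detected at time t is stamped with the nearest fix. The function and all values below are assumptions for illustration, not the paper's implementation:

```python
import bisect

def nearest_fix(track, t):
    """Return the GPS fix (timestamp, lat, lon) closest in time to t.
    The track must be sorted by timestamp."""
    times = [fix[0] for fix in track]
    i = bisect.bisect_left(times, t)
    candidates = track[max(i - 1, 0):i + 1]    # the two neighbouring fixes
    return min(candidates, key=lambda fix: abs(fix[0] - t))

# Illustrative MMS track: (seconds since start, latitude, longitude).
track = [(0.0, 1.3521, 103.8198), (1.0, 1.3522, 103.8199), (2.0, 1.3523, 103.8200)]
fix = nearest_fix(track, t=1.4)                # defect detected at t = 1.4 s
```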


Author(s):  
Falk Schwendicke ◽  
Akhilanand Chaurasia ◽  
Lubaina Arsiwala ◽  
Jae-Hong Lee ◽  
Karim Elhennawy ◽  
...  

Abstract Objectives Deep learning (DL) has been increasingly employed for automated landmark detection, e.g., for cephalometric purposes. We performed a systematic review and meta-analysis to assess the accuracy and underlying evidence for DL for cephalometric landmark detection on 2-D and 3-D radiographs. Methods Diagnostic accuracy studies published in 2015–2020 in Medline/Embase/IEEE/arXiv and employing DL for cephalometric landmark detection were identified and extracted by two independent reviewers. Random-effects meta-analysis, subgroup analysis, and meta-regression were performed, and study quality was assessed using QUADAS-2. The review was registered (PROSPERO no. 227498). Data From 321 identified records, 19 studies (published 2017–2020), all employing convolutional neural networks, mainly on 2-D lateral radiographs (n=15), using data from publicly available datasets (n=12) and testing the detection of a mean of 30 (SD: 25; range: 7–93) landmarks, were included. The reference test was established by two experts (n=11), one expert (n=4), three experts (n=3), and a set of annotators (n=1). Risk of bias was high, and applicability concerns were detected for most studies, mainly regarding the data selection and reference test conduct. Landmark prediction error centered around the 2-mm error threshold (mean: –0.581 mm; 95% CI: –1.264 to 0.102 mm). The proportion of landmarks detected within this 2-mm threshold was 0.799 (0.770 to 0.824). Conclusions DL shows relatively high accuracy for detecting landmarks on cephalometric imagery. The overall body of evidence is consistent but suffers from a high risk of bias. Demonstrating the robustness and generalizability of DL for landmark detection is needed. Clinical significance Existing DL models show consistent and largely high accuracy for automated detection of cephalometric landmarks. The majority of studies so far focused on 2-D imagery; data on 3-D imagery are sparse, but promising. Future studies should focus on demonstrating the generalizability, robustness, and clinical usefulness of DL for this objective.
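A minimal sketch of the pooled success metric, where a landmark counts as detected when its prediction lies within 2 mm of the reference annotation; the coordinates below are illustrative, not data from the review:

```python
import math

def detection_rate(pred, ref, threshold_mm=2.0):
    """Fraction of landmarks whose prediction falls within threshold_mm of
    the reference position, plus the mean radial error (mm)."""
    errors = [math.dist(p, r) for p, r in zip(pred, ref)]
    within = sum(e <= threshold_mm for e in errors)
    return within / len(errors), sum(errors) / len(errors)

pred = [(10.0, 20.0), (31.5, 40.0), (55.0, 61.0)]   # predicted positions (mm)
ref = [(10.5, 20.0), (30.0, 40.0), (50.0, 60.0)]    # reference positions (mm)

rate, mean_error = detection_rate(pred, ref)
```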

