PROCESSING OF CRAWLED URBAN IMAGERY FOR BUILDING USE CLASSIFICATION

Author(s):  
P. Tutzauer ◽  
N. Haala

Recent years have shown a shift from purely geometric 3D city models to data enriched with semantics. This shift is driven by new applications (e.g. Virtual/Augmented Reality) and is also a requirement for concepts like Smart Cities. However, essential urban semantic data such as building use categories is often not available. We present a first step towards bridging this gap by proposing a pipeline that uses crawled urban imagery linked with ground-truth cadastral data as input for automatic building use classification. We aim to extract this city-relevant semantic information automatically from Street View (SV) imagery. Convolutional Neural Networks (CNNs) have proved extremely successful for image interpretation; however, they require a huge amount of training data. The main contribution of the paper is the automatic provision of such training datasets by linking semantic information, as already available from databases provided by national mapping agencies or city administrations, to the corresponding façade images extracted from SV. Finally, we present first investigations with a CNN and an alternative classifier as a proof of concept.
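A minimal sketch of how such a training set could be assembled, assuming façade crops named by building ID and a cadastral CSV mapping IDs to use categories; the file layout, column names, and category list below are illustrative assumptions, not the authors' pipeline:

```python
# Minimal sketch (not the authors' code): pairing crawled facade crops with
# cadastral building-use labels to build a per-class CNN training set.
import csv
import shutil
from pathlib import Path

CATEGORIES = ["residential", "commercial", "industrial", "public"]  # assumed label set

def build_training_set(crops_dir: str, cadastre_csv: str, out_dir: str) -> int:
    """Copy each facade crop into a per-class folder based on its cadastral use code."""
    out = Path(out_dir)
    for cat in CATEGORIES:
        (out / cat).mkdir(parents=True, exist_ok=True)

    # cadastre_csv is assumed to map building_id -> use_category
    labels = {}
    with open(cadastre_csv, newline="") as f:
        for row in csv.DictReader(f):
            labels[row["building_id"]] = row["use_category"]

    n = 0
    for img in Path(crops_dir).glob("*.jpg"):
        building_id = img.stem.split("_")[0]   # assumed naming: <building_id>_<view>.jpg
        cat = labels.get(building_id)
        if cat in CATEGORIES:
            shutil.copy(img, out / cat / img.name)
            n += 1
    return n
```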

Author(s):  
K. Chaturvedi ◽  
T. H. Kolbe

Abstract. Semantic 3D City Models are used worldwide for application domains ranging from Smart Cities, simulations and planning to history and archaeology. Well-defined data models such as CityGML, IFC and the INSPIRE Data Themes allow the description of spatial, graphical and semantic information of physical objects. However, cities and their properties are not static; they change over time. Hence, it is important that such semantic data models handle the different types of changes that take place in cities and their attributes over time. This paper provides a systematic analysis of, and recommendations for, extensions of Semantic 3D City Models to support time-dependent properties. It reviews different application domains to identify key requirements for temporal and dynamic extensions and proposes ways to incorporate them. Over the last couple of years, different extensions have been proposed for these standards to deal with temporal attributes. The paper also analyses to which degree these extensions cover the requirements for dynamic city models.


2019 ◽  
Author(s):  
Zhengqiao Zhao ◽  
Alexandru Cristian ◽  
Gail Rosen

Abstract It is a computational challenge for current metagenomic classifiers to keep up with the pace of training data generated from genome sequencing projects, such as the exponentially growing NCBI RefSeq bacterial genome database. When new reference sequences are added to the training data, statically trained classifiers must be rerun on all data, which is highly inefficient. The rich literature on “incremental learning” addresses the need to update an existing classifier to accommodate new data without sacrificing much accuracy compared to retraining the classifier on all data. We demonstrate how classification improves over time by incrementally training a classifier on progressive RefSeq snapshots and testing it on (a) all known current genomes (as a ground-truth set) and (b) a real experimental metagenomic gut sample. We show that as a classifier model’s knowledge of genomes grows, classification accuracy increases. The proof-of-concept naïve Bayes implementation, when updated yearly, now runs in a quarter of the non-incremental time with no loss of accuracy. In conclusion, it is evident that classification improves by having the most current knowledge at its disposal. It is therefore of utmost importance to make classifiers computationally tractable enough to keep up with the data deluge.
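To illustrate the incremental-learning idea (this is a generic sketch, not the paper's implementation), a naïve Bayes classifier can absorb each new RefSeq snapshot via partial updates instead of retraining from scratch; the k-mer featurisation and taxon labels below are assumptions:

```python
# Sketch: yearly snapshots folded into a naive Bayes model with partial_fit,
# so earlier training data never needs to be revisited.
from itertools import product
import numpy as np
from sklearn.naive_bayes import MultinomialNB

K = 4
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

def kmer_counts(seq: str) -> np.ndarray:
    """Represent a genome sequence as a vector of k-mer counts."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        j = INDEX.get(seq[i:i + K])
        if j is not None:
            v[j] += 1
    return v

ALL_TAXA = ["taxonA", "taxonB", "taxonC"]   # assumed fixed label universe
clf = MultinomialNB()

def update_with_snapshot(sequences, labels):
    """Incorporate one snapshot of new reference genomes incrementally."""
    X = np.vstack([kmer_counts(s) for s in sequences])
    # the full label universe must be declared on the first call;
    # repeating the same list on later calls is accepted
    clf.partial_fit(X, labels, classes=ALL_TAXA)
```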


2020 ◽  
Author(s):  
Prashant Sadashiv Gidde ◽  
Shyam Sunder Prasad ◽  
Ajay Pratap Singh ◽  
Nitin Bhatheja ◽  
Satyartha Prakash ◽  
...  

Abstract The coronavirus disease 2019 (COVID-19) pandemic exposed a limitation of artificial intelligence (AI) based medical image interpretation systems: early in the pandemic, when the need was greatest, the absence of sufficient training data prevented effective deep learning (DL) solutions. Even now, there is a need for chest X-ray (CxR) screening tools in low- and middle-income countries (LMICs), when RT-PCR is delayed, to exclude COVID-19 pneumonia (Cov-Pneum) requiring transfer to higher care. In the absence of local LMIC data, and given the poor portability of CxR DL algorithms, a new approach is needed. Axiomatically, it is faster to repurpose existing data than to generate new datasets. Here, we describe CovBaseAI, an explainable tool that uses an ensemble of three DL models and an expert decision system (EDS) for Cov-Pneum diagnosis, trained entirely on datasets from the pre-COVID-19 period. Portability, performance, and explainability of CovBaseAI were validated on two independent datasets. The first comprised 1401 randomly selected CxR studies from an Indian quarantine center, used to assess effectiveness in excluding radiologic Cov-Pneum that may require higher care. The second was a curated dataset of 434 RT-PCR-positive cases of varying severity and 471 historical scans containing normal studies and non-COVID pathologies, used to assess performance in advanced medical settings. CovBaseAI had an accuracy of 87% with a negative predictive value of 98% for Cov-Pneum in the quarantine-center data. However, sensitivity varied from 0.66 to 0.90 depending on whether RT-PCR or radiologist opinion was set as ground truth. This tool, with its explainability feature, performs better than publicly available algorithms trained on COVID-19 data, but needs further improvement.
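One plausible way to combine an ensemble of model scores with a rule-based expert decision system is sketched below; the averaging scheme, rule, and threshold are illustrative assumptions, not the CovBaseAI design:

```python
# Illustrative sketch only: three per-model Cov-Pneum probabilities combined by
# averaging, gated by a simple expert rule before a screening label is returned.
from statistics import mean

def eds_decision(model_scores, lung_fields_visible: bool, threshold: float = 0.5) -> str:
    """Return a screening label from three model probabilities plus one expert rule."""
    if not lung_fields_visible:                 # expert rule: reject unusable studies
        return "indeterminate"
    score = mean(model_scores)                  # ensemble by simple averaging
    return "Cov-Pneum likely" if score >= threshold else "Cov-Pneum unlikely"

print(eds_decision([0.82, 0.74, 0.91], lung_fields_visible=True))
```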


2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S375-S376
Author(s):  
Ljubomir Buturovic ◽  
Purvesh Khatri ◽  
Benjamin Tang ◽  
Kevin Lai ◽  
Win Sen Kuan ◽  
...  

Abstract
Background: While major progress has been made in establishing diagnostic tools for SARS-CoV-2 infection, determining the severity of COVID-19 remains an unmet medical need. With limited hospital resources, gauging severity would allow some patients to safely recover in home quarantine while ensuring that sicker patients get the care they need. We discovered a 5-mRNA host-based classifier for the severity of influenza and other acute viral infections and validated the classifier in COVID-19 patients from Greece.
Methods: We used training data (N=705) from 21 retrospective clinical studies of influenza and other viral illnesses. Five host mRNAs from a preselected panel were used to train a logistic regression classifier for predicting 30-day mortality in influenza and other viral illnesses. We then applied this classifier, with fixed weights, to an independent cohort of subjects with confirmed COVID-19 from Athens, Greece (N=71) using the NanoString nCounter. Finally, we developed a proof-of-concept rapid, isothermal qRT-LAMP assay for the 5-mRNA host signature using the QuantStudio 6 qPCR platform.
Results: In 71 patients with COVID-19, the 5-mRNA classifier had an AUROC of 0.88 (95% CI 0.80-0.97) for identifying patients with severe respiratory failure and/or 30-day mortality (Figure 1). Applying a preset cutoff based on the training data, the 5-mRNA classifier had 100% sensitivity and 46% specificity for identifying mortality, and 88% sensitivity and 68% specificity for identifying severe respiratory failure. Finally, our proof-of-concept qRT-LAMP assay showed high correlation with the reference NanoString 5-mRNA classifier (r=0.95).
Figure 1. Validation of the 5-mRNA classifier in the COVID-19 cohort. (A) Expression of the 5 genes used in the logistic regression model in patients with (red) and without (blue) mortality. (B) The 5-mRNA classifier accurately distinguishes non-severe and severe patients with COVID-19 as well as those at risk of death.
Conclusion: Our 5-mRNA classifier demonstrated very high accuracy for the prediction of COVID-19 severity and could assist in the rapid, point-of-impact assessment of patients with confirmed COVID-19 to determine level of care, thereby improving patient management and reducing healthcare burden.
Disclosures: Ljubomir Buturovic, PhD, Inflammatix Inc. (Employee, Shareholder); Purvesh Khatri, PhD, Inflammatix Inc. (Shareholder); Oliver Liesenfeld, MD, Inflammatix Inc. (Employee, Shareholder); James Wacker, n/a, Inflammatix Inc. (Employee, Shareholder); Uros Midic, PhD, Inflammatix Inc. (Employee, Shareholder); Roland Luethy, PhD, Inflammatix Inc. (Employee, Shareholder); David C. Rawling, PhD, Inflammatix Inc. (Employee, Shareholder); Timothy Sweeney, MD, Inflammatix, Inc. (Employee)
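The classifier described above is a fixed-weight logistic regression over five host mRNA expression values with a preset decision cutoff. A minimal sketch of that scoring step follows; the gene names, weights, intercept, and cutoff are placeholders, not the published model:

```python
# Sketch of applying a fixed-weight logistic regression to five expression values.
import math

WEIGHTS = {"GENE1": 0.8, "GENE2": -0.5, "GENE3": 1.1, "GENE4": -0.3, "GENE5": 0.6}
INTERCEPT = -1.2
CUTOFF = 0.35   # cutoff is preset on training data in the study; this value is illustrative

def severity_probability(expression: dict) -> float:
    """Logistic model: p = sigmoid(intercept + sum_i w_i * x_i)."""
    z = INTERCEPT + sum(w * expression[g] for g, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def classify(expression: dict) -> str:
    return "severe-risk" if severity_probability(expression) >= CUTOFF else "lower-risk"
```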


Robotics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 2
Author(s):  
Camilla Follini ◽  
Valerio Magnago ◽  
Kilian Freitag ◽  
Michael Terzer ◽  
Carmen Marcher ◽  
...  

The application of robotics in construction is hindered by the site environment, which is unstructured and subject to change. At the same time, however, buildings and the corresponding sites can be accurately described by Building Information Modeling (BIM). Such a model contains geometric and semantic data about the construction and operation phases of the building and is already available at the design phase. We propose a method to leverage BIM for simple yet efficient deployment of robotic systems in the construction and operation of buildings. In our approach, BIM provides the robot with a priori geometric and semantic information on the environment and stores information on the operation progress. We present two applications that verify the effectiveness of the proposed method. This system represents a step towards easier application of robots in construction.
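As a purely illustrative sketch of this idea (not the paper's system), BIM elements could be exposed to the robot as a queryable semantic map, with task progress written back per element; the element fields below are assumptions:

```python
# Sketch: a-priori BIM geometry/semantics as a robot-queryable map plus a progress store.
from dataclasses import dataclass, field

@dataclass
class BimElement:
    guid: str                 # globally unique element id from the BIM model
    category: str             # e.g. "Wall", "Door", "Slab"
    bbox: tuple               # axis-aligned bounding box (xmin, ymin, zmin, xmax, ymax, zmax)
    status: str = "planned"   # operation progress stored back against the element

@dataclass
class SemanticMap:
    elements: dict = field(default_factory=dict)

    def add(self, e: BimElement):
        self.elements[e.guid] = e

    def obstacles(self):
        """Geometric prior for navigation: boxes of elements already built."""
        return [e.bbox for e in self.elements.values() if e.status != "planned"]

    def mark_done(self, guid: str):
        """Record operation progress for the given element."""
        self.elements[guid].status = "done"
```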


2020 ◽  
Vol 499 (4) ◽  
pp. 5641-5652
Author(s):  
Georgios Vernardos ◽  
Grigorios Tsagkatakis ◽  
Yannis Pantazis

ABSTRACT Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional smooth-modelling approaches or treated as well-localized massive perturbers. In this work, we propose a deep learning approach to quantify the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need for any lens modelling. Our training data consist of mock lensed images in which perturbing Gaussian Random Fields permeate the smooth overall lens potential and, for the first time, images of real galaxies are used as the lensed source. We employ a novel deep neural network that accepts arbitrary uncertainty intervals associated with the training-set labels as input, provides probability distributions as output, and adopts a composite loss function. The method not only accurately estimates the actual parameter values, but also reduces the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without having access to the actual ground-truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
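To make the "interval labels in, distributions out" idea concrete, here is one plausible form of such a composite loss, written as a plain NumPy sketch; it is an assumption for illustration, not the loss used in the paper:

```python
# Illustrative composite loss for a network predicting a Gaussian (mu, sigma) per
# parameter when each training label is given only as an interval [lo, hi]:
# Gaussian NLL against the interval midpoint plus a penalty for means outside it.
import numpy as np

def composite_loss(mu, sigma, lo, hi, alpha=1.0):
    mu, sigma, lo, hi = map(np.asarray, (mu, sigma, lo, hi))
    target = 0.5 * (lo + hi)
    nll = 0.5 * np.log(2 * np.pi * sigma**2) + (mu - target) ** 2 / (2 * sigma**2)
    outside = np.maximum(lo - mu, 0) + np.maximum(mu - hi, 0)   # distance outside the interval
    return float(np.mean(nll + alpha * outside))
```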


2021 ◽  
Vol 22 (Supplement_1) ◽  
Author(s):  
D Zhao ◽  
E Ferdian ◽  
GD Maso Talou ◽  
GM Quill ◽  
K Gilbert ◽  
...  

Abstract
Funding Acknowledgements: Type of funding sources: Public grant(s) – National budget only. Main funding source(s): National Heart Foundation (NHF) of New Zealand; Health Research Council (HRC) of New Zealand.
Artificial intelligence shows considerable promise for automated analysis and interpretation of medical images, particularly in the domain of cardiovascular imaging. While application to cardiac magnetic resonance (CMR) has demonstrated excellent results, automated analysis of 3D echocardiography (3D-echo) remains challenging due to the lower signal-to-noise ratio (SNR), signal dropout, and greater interobserver variability in manual annotations. As 3D-echo is becoming increasingly widespread, robust analysis methods will substantially benefit patient evaluation. We sought to leverage the high SNR of CMR to provide training data for a convolutional neural network (CNN) capable of analysing 3D-echo. We imaged 73 participants (53 healthy volunteers, 20 patients with non-ischaemic cardiac disease) under both CMR and 3D-echo (<1 hour between scans). 3D models of the left ventricle (LV) were independently constructed from CMR and 3D-echo and used to spatially align the image volumes via least-squares fitting to a cardiac template. The resulting transformation was used to map the CMR mesh to the 3D-echo image. Alignment of mesh and image was verified through volume slicing and visual inspection (Fig. 1) for 120 paired datasets (including 47 rescans), each at end-diastole and end-systole. 100 datasets (80 for training, 20 for validation) were used to train a shallow CNN for mesh extraction from 3D-echo, optimised with a composite loss function consisting of normalised Euclidean distance (over 290 mesh points) and volume. Data augmentation was applied in the form of rotations and tilts (<15 degrees) about the long axis. The network was tested on the remaining 20 datasets (different participants) of varying image quality (Tab. 1). For comparison, corresponding LV measurements from conventional manual analysis of 3D-echo and the associated interobserver variability (for two observers) were also estimated. Initial results indicate that the use of embedded CMR meshes as training data for 3D-echo analysis is a promising alternative to manual analysis, with improved accuracy and precision compared with conventional methods. Further optimisation and a larger dataset are expected to improve network performance.

Tab. 1. LV mass and volume differences (means ± standard deviations) for 20 test cases. Algorithm error is CNN – CMR (CMR as ground truth).
(n = 20)              LV EDV (ml)     LV ESV (ml)     LV EF (%)     LV mass (g)
Ground truth CMR      150.5 ± 29.5    57.9 ± 12.7     61.5 ± 3.4    128.1 ± 29.8
Algorithm error       -13.3 ± 15.7    -1.4 ± 7.6      -2.8 ± 5.5    0.1 ± 20.9
Manual error          -30.1 ± 21.0    -15.1 ± 12.4    3.0 ± 5.0     Not available
Interobserver error   19.1 ± 14.3     14.4 ± 7.6      -6.4 ± 4.8    Not available

Fig. 1. CMR mesh registered to 3D-echo.
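A minimal sketch of the composite loss named above (normalised Euclidean distance over the 290 LV mesh points plus a volume term) follows; the weighting between the two terms and the normalisation scale are assumptions, and this is not the authors' code:

```python
# Sketch: point-distance term plus volume term for a predicted vs. reference LV mesh.
import numpy as np

def mesh_volume(points: np.ndarray, faces: np.ndarray) -> float:
    """Volume of a closed triangulated surface via the divergence theorem."""
    a, b, c = points[faces[:, 0]], points[faces[:, 1]], points[faces[:, 2]]
    return float(np.abs(np.einsum("ij,ij->i", a, np.cross(b, c)).sum()) / 6.0)

def composite_loss(pred_pts, true_pts, faces, scale, lam=0.1):
    """pred_pts, true_pts: (290, 3) arrays; scale: normalisation length (e.g. LV long axis)."""
    point_term = np.mean(np.linalg.norm(pred_pts - true_pts, axis=1)) / scale
    vol_term = abs(mesh_volume(pred_pts, faces) - mesh_volume(true_pts, faces))
    return point_term + lam * vol_term
```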


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Bingyin Hu ◽  
Anqi Lin ◽  
L. Catherine Brinson

Abstract The inconsistency of polymer indexing caused by the lack of uniformity in the expression of polymer names is a major challenge for the widespread use of polymer-related data resources and limits the broad application of materials informatics for innovation across polymer science and polymer-based materials. The current solution of using a variety of different chemical identifiers has proven insufficient to address the challenge and is not intuitive for researchers. This work proposes a multi-algorithm mapping methodology entitled ChemProps that is optimized to solve the polymer indexing issue with an easy-to-update design in both depth and width. A RESTful API is provided for lightweight data exchange and easy integration across data systems. A weight factor is assigned to each algorithm to generate scores for candidate chemical names; the weights are optimized to maximize the minimum difference between the score of the ground-truth chemical name and the scores of the other candidates. Ten-fold validation is applied to the 160 training data points to prevent overfitting. The resulting set of weight factors achieves 100% accuracy on the 54 test data points. The weight factors will evolve as ChemProps grows. With ChemProps, other polymer databases can remove duplicate entries and enable a more accurate “search by SMILES” function by using ChemProps as a common name-to-SMILES translator through API calls. ChemProps is also an excellent tool for auto-populating polymer properties thanks to its easy-to-update design.
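A hedged sketch of the weighted-scoring idea described above (not the ChemProps source): each name-matching algorithm scores every candidate, a weighted sum ranks the candidates, and the weights are chosen to maximise the smallest margin between the ground-truth name and its best competitor. The random-search optimiser is a stand-in for whatever ChemProps actually uses:

```python
import numpy as np

def weighted_scores(algo_scores: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """algo_scores: (n_candidates, n_algorithms) -> one combined score per candidate."""
    return algo_scores @ weights

def min_margin(weights, training_cases):
    """training_cases: list of (algo_scores, truth_index) pairs."""
    margins = []
    for algo_scores, truth_idx in training_cases:
        s = weighted_scores(algo_scores, weights)
        others = np.delete(s, truth_idx)
        margins.append(s[truth_idx] - others.max())   # margin of truth over best competitor
    return min(margins)

def optimise_weights(training_cases, n_algorithms, n_trials=5000, seed=0):
    """Simple random search over normalised weight vectors, maximising the minimum margin."""
    rng = np.random.default_rng(seed)
    best_w, best_m = None, -np.inf
    for _ in range(n_trials):
        w = rng.random(n_algorithms)
        w /= w.sum()
        m = min_margin(w, training_cases)
        if m > best_m:
            best_w, best_m = w, m
    return best_w, best_m
```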


Author(s):  
G. Agugiaro

This paper presents and discusses the results of the initial steps (selection, analysis, preparation and eventual integration of a number of datasets) towards the creation of an integrated, semantic, three-dimensional, CityGML-based virtual model of the city of Vienna. CityGML is an international standard conceived specifically as an information and data model for semantic city models at urban and territorial scale. It is being adopted by more and more cities all over the world. The work described in this paper is embedded within the European Marie Curie ITN project “Ci-nergy, Smart cities with sustainable energy systems”, which aims, among other goals, at developing urban decision-making and operational optimisation software tools to minimise non-renewable energy use in cities. Given the scope and scale of the project, it is vital to set up a common, unique and spatio-semantically coherent urban model to be used as an information hub for all applications being developed. This paper reports on the experience gained so far: it describes the test area and the available data sources, illustrates the data integration issues and the strategies developed to solve them in order to obtain the integrated 3D city model. The first results, as well as some comments about their quality and limitations, are presented, together with a discussion of the next steps and some planned improvements.


Nanomaterials ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2975
Author(s):  
Long Liu ◽  
Xinge Guo ◽  
Weixin Liu ◽  
Chengkuo Lee

With the rapid development of energy harvesting technology, micro/nano-scale and scaled-up energy harvesters have been proposed to give sensors and internet of things (IoT) applications self-powered or self-sustained capabilities. Smart homes, industrial manipulators and monitoring systems in natural settings are all moving toward intelligent, adaptable and energy-saving operation by converting the distributed energy available in diverse situations. This review highlights recent developments in major applications powered by improved energy harvesters. First, we trace the evolution of energy harvesting technologies from fundamentals to the various materials involved. Second, self-powered sensors and self-sustained IoT applications are discussed with respect to current strategies for energy harvesting and sensing. Third, typical and emerging applications are examined by category: smart homes, gas sensing, human monitoring, robotics, transportation, blue energy, aircraft, and aerospace. Lastly, the prospects of smart cities in the 5G era are discussed and summarized, along with the research and application directions that have emerged.

