Geometry-Guided Street-View Panorama Synthesis from Satellite Imagery

Author(s):  
Yujiao Shi ◽  
Dylan John Campbell ◽  
Xin Yu ◽  
Hongdong Li
Urban Science ◽  
2020 ◽  
Vol 4 (2) ◽  
pp. 27
Author(s):  
Kerry A. Nice ◽  
Jason Thompson ◽  
Jasper S. Wijnands ◽  
Gideon D. P. A. Aschwanden ◽  
Mark Stevenson

Urban typologies allow areas to be categorised according to their form and their social, demographic, and political uses. Using these typologies to find similarities and dissimilarities between cities enables better-targeted interventions for improved health, transport, and environmental outcomes in urban areas. A better understanding of local contexts can also assist in applying lessons learned from other cities. Constructing urban typologies at a global scale through traditional methods, such as functional or network analysis, requires the collection of data across multiple political districts, which can be inconsistent and requires a degree of subjective classification. To overcome these limitations, we use neural networks to analyse millions of images of urban form (street view, satellite imagery, and street maps) to find shared characteristics between the largest 1692 cities in the world. Paris is used as an exemplar comparison city, and we perform a case study on two Australian cities, Melbourne and Sydney, to determine whether a “Paris-end” of town exists in these cities according to these three big-data imagery sets. The results show specific advantages and disadvantages of each type of imagery for constructing urban typologies. Neural networks trained on map imagery are highly influenced by the structural mix of roads, public transport, and green and blue space. Satellite imagery captures a combination of urban form and decorative and natural details. Street view imagery emphasises the features of a human-scaled visual geography of streetscapes. However, for both satellite and street view imagery to be highly effective, a reduction in scale and more aggressive pre-processing may be required to reduce detail and create greater abstraction in the imagery.
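
A minimal sketch of the embedding-and-comparison idea the abstract describes, not the authors' actual pipeline: imagery tiles for a city are embedded with a pretrained CNN and cities are compared by the distance between their aggregated embeddings. The ResNet-18 backbone, mean-pooling aggregation, and cosine distance below are illustrative assumptions.

```python
# Sketch: compare cities by CNN embeddings of their imagery tiles.
# ResNet-18 features, mean pooling per city, and cosine distance are
# illustrative choices, not the pipeline used in the paper.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with the classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def city_embedding(image_paths):
    """Mean CNN embedding over a city's street-view / satellite / map tiles."""
    feats = []
    for path in image_paths:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        feats.append(backbone(x))
    return torch.cat(feats).mean(dim=0)

def city_distance(emb_a, emb_b):
    """Cosine distance between two cities' embeddings (lower = more similar)."""
    return 1.0 - torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0).item()
```

In this framing, the “Paris-end” question reduces to asking which neighbourhood-level embeddings of Melbourne or Sydney fall closest to the Paris embedding for each imagery type.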


2020 ◽  
pp. 435-486
Author(s):  
Pablo Diego‐Rosell ◽  
Stafford Nichols ◽  
Rajesh Srinivasan ◽  
Ben Dilday

2020 ◽  
Vol 2020 (1) ◽  
pp. 78-81
Author(s):  
Simone Zini ◽  
Simone Bianco ◽  
Raimondo Schettini

Rain removal from pictures taken in bad weather conditions is a challenging task that aims to improve the overall quality and visibility of a scene. The enhanced images usually constitute the input for subsequent computer vision tasks such as detection and classification. In this paper, we present a convolutional neural network, based on the Pix2Pix model, for removing rain streaks from images, with specific interest in evaluating the results of the processing with respect to the Optical Character Recognition (OCR) task. In particular, we present a way to generate a rainy version of the Street View Text Dataset (R-SVTD) for evaluating text detection and recognition in bad weather conditions. Experimental results on this dataset show that our model outperforms the state of the art in terms of two commonly used image quality metrics, and that it improves the performance of an OCR model at detecting and recognising text in the wild.
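
The sketch below shows a generic Pix2Pix-style training step for rain streak removal under the usual conditional-GAN formulation (adversarial loss plus L1 reconstruction). The generator G and conditional discriminator D are supplied by the caller, and the lambda_l1 weight of 100 follows the common Pix2Pix convention; none of this is the specific architecture or setup reported in the paper.

```python
# Sketch of a Pix2Pix-style training step for deraining: the generator maps a
# rainy image to a clean one; the discriminator scores (rainy, clean) pairs as
# real and (rainy, derained) pairs as fake. Placeholder setup, not the paper's.
import torch
import torch.nn as nn

def pix2pix_step(G, D, opt_G, opt_D, rainy, clean, lambda_l1=100.0):
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # --- Discriminator update: real pairs vs. generated pairs ---
    fake = G(rainy)
    d_real = D(torch.cat([rainy, clean], dim=1))
    d_fake = D(torch.cat([rainy, fake.detach()], dim=1))
    loss_D = 0.5 * (bce(d_real, torch.ones_like(d_real)) +
                    bce(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- Generator update: fool D and stay close to the clean image ---
    d_fake = D(torch.cat([rainy, fake], dim=1))
    loss_G = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, clean)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

For the evaluation protocol described in the abstract, the derained output would then be passed to an OCR model to measure text detection and recognition performance on the rainy R-SVTD images.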


2020 ◽  
Vol 2020 (8) ◽  
pp. 114-1-114-7
Author(s):  
Bryan Blakeslee ◽  
Andreas Savakis

Change detection in image pairs has traditionally been a binary process, reporting either “Change” or “No Change.” In this paper, we present LambdaNet, a novel deep architecture for performing pixel-level directional change detection based on a four-class classification scheme. LambdaNet successfully incorporates the notion of “directional change” and identifies differences between two images as “Additive Change” when a new object appears, “Subtractive Change” when an object is removed, “Exchange” when different objects are present in the same location, and “No Change” otherwise. To obtain pixel-annotated change maps for training, we generated directional change class labels for the Change Detection 2014 dataset. Our tests illustrate that LambdaNet is suitable for situations where the type of change is unstructured, such as change detection scenarios in satellite imagery.
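
A minimal sketch of the four-class, pixel-level formulation the abstract describes: the image pair is concatenated along the channel axis and a fully convolutional network outputs per-pixel logits over {no change, additive, subtractive, exchange}, trained with per-pixel cross-entropy. The tiny encoder-decoder below is a placeholder, not the published LambdaNet architecture.

```python
# Sketch of pixel-level directional change detection with four classes.
# The network here is a toy encoder-decoder standing in for LambdaNet.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # no change, additive change, subtractive change, exchange

class DirectionalChangeNet(nn.Module):
    def __init__(self, in_channels=6):  # 3 channels per image, concatenated
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, NUM_CLASSES, 4, stride=2, padding=1),
        )

    def forward(self, img_t0, img_t1):
        x = torch.cat([img_t0, img_t1], dim=1)   # (B, 6, H, W)
        return self.decoder(self.encoder(x))     # (B, 4, H, W) per-pixel logits

# Training would use nn.CrossEntropyLoss against the directional change labels
# generated for Change Detection 2014; inference takes the per-pixel argmax.
model = DirectionalChangeNet()
logits = model(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
change_map = logits.argmax(dim=1)                # (B, H, W) class indices
```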


Author(s):  
SiMing Liang ◽  
FengYang Qi ◽  
YiFan Ding ◽  
Rui Cao ◽  
Qiang Yang ◽  
...  
