An enhanced 3D model and generative adversarial network for automated generation of horizontal building mask images and cloudless aerial photographs

2021 ◽  
Vol 50 ◽  
pp. 101380
Author(s):  
Kazunosuke Ikeno ◽  
Tomohiro Fukuda ◽  
Nobuyoshi Yabuki
IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 177585-177594
Author(s):  
Long Zhang ◽  
Li Liu ◽  
Huaxiang Zhang ◽  
Xiuxiu Chen ◽  
Tianshi Wang ◽  
...  

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 170355-170363
Author(s):  
Xinying Wang ◽  
Dikai Xu ◽  
Fangming Gu

2021 ◽  
Vol 11 (16) ◽  
pp. 7536
Author(s):  
Kyungho Yu ◽  
Juhyeon Noh ◽  
Hee-Deok Yang

Recently, three-dimensional (3D) content has attracted attention in various fields, owing to the development of virtual reality and augmented reality technologies. Producing 3D content requires modeling objects as vertices, but high-quality modeling is time-consuming and costly. Drawing-based modeling shortens the time required: a 3D model is created from a user's line drawing, a two-dimensional (2D) representation of 3D features. The extracted line drawing provides information about a 3D model in the 2D space. It is sometimes necessary to generate a line drawing from a 2D cartoon image to represent its 3D information. Extracting consistent line drawings from 2D cartoons is difficult because styles and techniques differ from designer to designer. It is therefore necessary to extract line drawings that capture the geometric characteristics of 2D cartoon shapes in various styles. This paper proposes a method for automatically extracting line drawings. A conditional generative adversarial network model is trained on pairs of 2D cartoon shading images and line drawings, and outputs the line drawings of the cartoon artwork. The experimental results show that the proposed method can obtain line drawings representing 3D geometric characteristics with 2D lines when a 2D cartoon painting is used as the input.
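The conditional GAN training described above optimizes a standard adversarial objective. A minimal numpy sketch (not the authors' implementation, which operates on image pairs; here the inputs are discriminator probabilities) of the discriminator loss and the non-saturating generator loss:

```python
import numpy as np

def cgan_losses(d_real, d_fake):
    """Conditional GAN objective on discriminator probabilities.

    d_real: D(x, y) for real (condition, target) pairs.
    d_fake: D(x, G(x)) for generated targets. Both in (0, 1].
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    # Discriminator: push real pairs toward 1, generated pairs toward 0.
    d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))
    # Generator (non-saturating form): push generated pairs toward 1.
    g_loss = -np.mean(np.log(d_fake))
    return d_loss, g_loss
```

At the equilibrium point where the discriminator outputs 0.5 on generated pairs, both losses reduce to log 2.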


2019 ◽  
Vol 11 (8) ◽  
pp. 930 ◽  
Author(s):  
Xiangrong Zhang ◽  
Xiao Han ◽  
Chen Li ◽  
Xu Tang ◽  
Huiyu Zhou ◽  
...  

Aerial photographs and satellite images are among the resources used for earth observation. In practice, automated detection of roads in aerial images is of significant value for applications such as car navigation, law enforcement, and fire services. In this paper, we present a novel road extraction method for aerial images based on an improved generative adversarial network, an end-to-end framework requiring only a few samples for training. Experimental results on the Massachusetts Roads Dataset show that the proposed method outperforms several state-of-the-art techniques in terms of detection accuracy, recall, precision, and F1-score.
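The evaluation metrics cited (precision, recall, F1-score) reduce to pixel counts over binary road masks. A minimal sketch of how such scores are computed from a predicted mask and ground truth:

```python
import numpy as np

def road_metrics(pred, gt):
    """Pixel-wise precision, recall and F1-score for binary road masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.sum(pred & gt)    # road pixels correctly detected
    fp = np.sum(pred & ~gt)   # background predicted as road
    fn = np.sum(~pred & gt)   # road pixels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```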


Technologies ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 82
Author(s):  
Hang Zhang

Machine learning, especially the GAN (generative adversarial network) model, has developed tremendously in recent years. Since the NVIDIA machine learning group presented StyleGAN in December 2018, it has offered designers a new way to have machines learn from different or similar types of architectural photos, drawings, and renderings, and then generate (a) similar fake images, (b) style-mixing images, and (c) truncation-trick images. The author both collected and created input image data, including architectural plan and section drawings made specifically with a clear design purpose, then applied StyleGAN to train networks on these datasets. Through the training process, we could examine the deep relationships between these input architectural plans or sections, then generate serialized transformation images (truncation-trick images) to form a 3D (three-dimensional) model at a decent resolution (up to 1024 × 1024 × 1024 voxels). Though the resulting 3D models are difficult to use directly in 3D spatial modeling, these unexpected 3D forms could still inspire new design methods and greater possibilities for architectural plan and section design.
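The truncation trick mentioned above has a simple closed form: a latent code is pulled toward the average latent by a factor ψ, and the "serialized transformation images" come from stepping such codes along a path. A sketch of both operations on latent vectors (variable names are illustrative, not StyleGAN's API):

```python
import numpy as np

def truncate(w, w_avg, psi):
    """StyleGAN truncation trick: pull latent w toward the average latent
    w_avg. psi=1 leaves w unchanged; psi=0 collapses it to w_avg."""
    return np.asarray(w_avg) + psi * (np.asarray(w) - np.asarray(w_avg))

def interpolation_series(w_a, w_b, steps):
    """Serialized transformation: evenly spaced linear interpolations
    between two latent codes, one per generated frame."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * np.asarray(w_a) + t * np.asarray(w_b) for t in ts]
```

Stacking the images generated from such a series (e.g. 1024 frames of 1024 × 1024 pixels) is what yields the volumetric form described above.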


Author(s):  
V. Gorbatsevich ◽  
B. Kulgildin ◽  
M. Melnichenko ◽  
O. Vygolov ◽  
Y. Vizilter

Abstract. The paper addresses the problem of city heightmap restoration using a satellite view image and a manually created area with 3D data. We propose an approach based on generative adversarial networks. Our algorithm comprises three steps: low-quality 3D restoration, building segmentation using the restored model, and high-quality 3D restoration. A CNN architecture based on original ResDilation blocks and ResNet is used for steps one and three. Training and test datasets were retrieved from the National Lidar Dataset (United States), on which the algorithm achieved an MSE of approximately 3.84 m. In addition, we tested our model on the completely different ISPRS Potsdam dataset and obtained an MSE of 5.1 m.
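The MSE figures reported above compare a restored heightmap against lidar ground truth elevation grids. A minimal sketch of that evaluation, assuming both inputs are same-shaped elevation rasters in metres:

```python
import numpy as np

def heightmap_mse(predicted, reference):
    """Mean squared error between a restored heightmap and a lidar
    ground-truth grid, both given as 2D elevation arrays in metres."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean((predicted - reference) ** 2))
```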


Author(s):  
Q. Poterek ◽  
P.-A. Herrault ◽  
G. Forestier ◽  
D. Schwartz

Abstract. Landscape reconstruction is crucial for measuring the effects of climate change or past land use on current biodiversity. In particular, retracing past phenological changes can serve as a basis for explaining current patterns of plant communities and predicting the future extinction of species. Old spatial data are currently used to reconstruct vegetation changes, both morphologically (with landscape metrics) and semantically (e.g. grasslands converted to crops). However, poor radiometric properties (a single panchromatic channel, illumination variation, etc.) make it impossible to compute environmental variables (e.g. NDVI and color indices), which strongly limits long-term phenological reconstruction. In this study, we propose a workflow for reconstructing phenological trajectories of grasslands from 1958 to 2011, in the central French Vosges, from old aerial black-and-white (B&W) photographs. Noise and vignetting corruptions were first corrected in the B&W photographs with non-local filtering algorithms. The panchromatic scans were then colorized with a generative adversarial network (GAN). Based on the predicted channels, we finally computed digital greenness metrics (Green Chromatic Coordinate, Excess Greenness) to measure vegetation activity in grasslands. Our results demonstrated the feasibility of reconstructing long-term phenological trajectories from legacy photographs, with insights at different levels: (1) the proposed correction methods provided radiometric improvements for old aerial missions; (2) the colorization process produced promising and plausible colorized historical products; (3) digital greenness metrics were useful for describing past vegetation activity.
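The two greenness metrics named above have standard closed forms computable from the GAN-predicted color channels. A minimal sketch, assuming channels scaled to [0, 1] with R + G + B > 0 at every pixel:

```python
import numpy as np

def greenness_metrics(r, g, b):
    """Digital greenness indices from colorized channels:
    GCC = G / (R + G + B)  (Green Chromatic Coordinate)
    ExG = 2G - R - B       (Excess Greenness)
    Channels are assumed scaled to [0, 1] with R + G + B > 0."""
    r, g, b = (np.asarray(c, dtype=float) for c in (r, g, b))
    gcc = g / (r + g + b)
    exg = 2.0 * g - r - b
    return gcc, exg
```

Applied per pixel over a grassland polygon and averaged per acquisition date, such indices give the phenological trajectory the workflow reconstructs.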

