A 3D Reconstruction Framework of Buildings Using Single Off-Nadir Satellite Image

2021 ◽  
Vol 13 (21) ◽  
pp. 4434
Author(s):  
Chunhui Zhao ◽  
Chi Zhang ◽  
Yiming Yan ◽  
Nan Su

A novel framework for 3D reconstruction of buildings from a single off-nadir satellite image is proposed in this paper. Compared with traditional remote sensing reconstruction methods that rely on multiple images, recovering 3D information from a single image reduces the input-data demands of the reconstruction task. It addresses the problem that, in regions where remote sensing resources are scarce, multiple images suitable for traditional reconstruction methods cannot be acquired. However, it is difficult to reconstruct a 3D model with a complete shape and accurate scale from a single image, because the geometric constraints are insufficient: view angle, building size, and spatial resolution differ among remote sensing images. To solve this problem, the proposed reconstruction framework consists of two convolutional neural networks: the Scale-Occupancy-Network (Scale-ONet) and the model scale optimization network (Optim-Net). From the single off-nadir satellite image, Scale-ONet generates watertight mesh models with the exact shape and a rough scale of the buildings, and Optim-Net then reduces the scale error of these mesh models. Finally, the complete reconstructed scene is recovered by model-image matching. Benefiting from the well-designed networks, our framework is robust to input images with different view angles, building sizes, and spatial resolutions. Experimental results show that good reconstruction accuracy is obtained for both the shape and the scale of the building models.
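
The abstract outlines a two-stage pipeline: Scale-ONet predicts an occupancy field from which a watertight mesh with a rough scale is extracted, and Optim-Net then regresses a corrective scale. The sketch below illustrates that flow only; the module definitions, the choice of inputs such as ground sample distance and off-nadir angle, and all layer sizes are placeholder assumptions, not the authors' architecture.

```python
# Hypothetical sketch of the two-stage inference pipeline described above.
# Scale-ONet and Optim-Net stand in as minimal placeholder modules; the real
# architectures, inputs, and outputs are not specified in the abstract.
import torch
import torch.nn as nn

class ScaleONet(nn.Module):
    """Placeholder: predicts occupancy for 3D query points from an image crop."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                     nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1))
        self.decoder = nn.Linear(16 + 3, 1)  # image feature + (x, y, z) query

    def forward(self, image, points):
        feat = self.encoder(image).flatten(1)                   # (B, 16)
        feat = feat.unsqueeze(1).expand(-1, points.shape[1], -1)
        occupancy = torch.sigmoid(self.decoder(torch.cat([feat, points], -1)))
        return occupancy.squeeze(-1)                            # (B, N) in [0, 1]

class OptimNet(nn.Module):
    """Placeholder: regresses a corrective factor for a rough model scale."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, rough_scale, image_meta):
        correction = torch.exp(self.mlp(torch.cat([rough_scale, image_meta], -1)))
        return rough_scale * correction

# Toy inference: one building crop, a grid of occupancy queries, then scale refinement.
crop = torch.rand(1, 3, 64, 64)
queries = torch.rand(1, 4096, 3) * 2 - 1                 # query points in a unit cube
occ = ScaleONet()(crop, queries)                         # occupancy field -> mesh extraction
refined = OptimNet()(torch.tensor([[1.0]]),              # rough scale from Scale-ONet
                     torch.tensor([[0.5, 30.0, 0.3]]))   # e.g. GSD, off-nadir angle, footprint
print(occ.shape, refined.shape)
```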

2020 ◽  
Vol 12 (5) ◽  
pp. 758 ◽  
Author(s):  
Mengjiao Qin ◽  
Sébastien Mavromatis ◽  
Linshu Hu ◽  
Feng Zhang ◽  
Renyi Liu ◽  
...  

Super-resolution (SR) can improve the spatial resolution of remote sensing images, which is critical for many practical applications such as fine-grained urban monitoring. In this paper, a new single-image SR method, the deep gradient-aware network with image-specific enhancement (DGANet-ISE), was proposed to improve the spatial resolution of remote sensing images. First, DGANet was proposed to model the complex relationship between low- and high-resolution images. A new gradient-aware loss was designed for the training phase to preserve more gradient details in the super-resolved remote sensing images. Then, the ISE approach was applied in the testing phase to further improve SR performance. By using the specific features of each test image, ISE can further boost the generalization capability and adaptability of our method on unseen datasets. Finally, three datasets were used to verify the effectiveness of our method. The results indicate that DGANet-ISE outperforms 14 other methods in remote sensing image SR, and the cross-database test results demonstrate that our method generalizes well to new data.
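
The gradient-aware loss is the most concrete algorithmic detail in the abstract. A minimal sketch of such a loss is shown below, combining a pixel-wise L1 term with an L1 penalty on finite-difference image gradients; the exact formulation and weighting used by DGANet-ISE are assumptions here.

```python
# Illustrative sketch of a gradient-aware loss of the kind described above
# (pixel reconstruction term plus a penalty on image-gradient differences).
# The exact formulation and weighting used by DGANet-ISE are assumptions.
import torch
import torch.nn.functional as F

def image_gradients(x):
    """Finite-difference gradients along height and width for (B, C, H, W) tensors."""
    dy = x[:, :, 1:, :] - x[:, :, :-1, :]
    dx = x[:, :, :, 1:] - x[:, :, :, :-1]
    return dy, dx

def gradient_aware_loss(sr, hr, weight=0.1):
    """L1 pixel loss plus weighted L1 loss between gradient maps."""
    pixel = F.l1_loss(sr, hr)
    sr_dy, sr_dx = image_gradients(sr)
    hr_dy, hr_dx = image_gradients(hr)
    grad = F.l1_loss(sr_dy, hr_dy) + F.l1_loss(sr_dx, hr_dx)
    return pixel + weight * grad

# Toy usage with random tensors standing in for super-resolved and reference patches.
sr = torch.rand(4, 3, 128, 128, requires_grad=True)
hr = torch.rand(4, 3, 128, 128)
loss = gradient_aware_loss(sr, hr)
loss.backward()
print(float(loss))
```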


Author(s):  
L. Liebel ◽  
M. Körner

In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are the most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Methods for single-image super-resolution are therefore desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations such as segmentation or feature extraction can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super-resolution of conventional photographs, based on deep learning techniques such as convolutional neural networks (CNNs), can successfully be applied to remote sensing data. With a large amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable with conventional handcrafted algorithms.

We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available Sentinel-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to those of competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
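
As an illustration of applying a photograph-oriented super-resolution CNN to 13-band Sentinel-2 patches, the sketch below uses an SRCNN-style three-layer network with bicubic pre-upsampling and a residual connection; the authors' actual architecture, band handling, and training setup are not specified in the abstract, so these choices are assumptions.

```python
# Minimal SRCNN-style network as an illustration of the kind of CNN the authors
# apply to Sentinel-2 data; architecture, band selection, and residual design
# are assumptions, not the method described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNNLike(nn.Module):
    def __init__(self, bands=13):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, 9, padding=4), nn.ReLU(),   # patch extraction
            nn.Conv2d(64, 32, 1),               nn.ReLU(),   # non-linear mapping
            nn.Conv2d(32, bands, 5, padding=2),               # reconstruction
        )

    def forward(self, lr):
        # Upscale with bicubic interpolation first, then refine with the CNN.
        up = F.interpolate(lr, scale_factor=2, mode="bicubic", align_corners=False)
        return up + self.net(up)                              # residual refinement

# Toy training pair: a "high-res" multispectral patch and its 2x-downsampled copy.
hr = torch.rand(1, 13, 64, 64)
lr = F.interpolate(hr, scale_factor=0.5, mode="bicubic", align_corners=False)
model = SRCNNLike()
sr = model(lr)
print(sr.shape)  # torch.Size([1, 13, 64, 64])
```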


2018 ◽  
Vol 14 (3) ◽  
pp. 25-34
Author(s):  
M A Kupriaynov ◽  
G A Kochergin ◽  
Y M Polishchuk

Based on simulation, a relationship was established between the spatial resolution of a satellite image and the relative error in determining the area of the measured object. A formula is proposed for calculating the relative error of measuring an object's area with remote sensing techniques, together with a method for constructing random flat geometric figures with a given shape factor.
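
A minimal Monte Carlo sketch of this kind of simulation is given below: a flat figure is rasterised at a given pixel size and the pixel-counted area is compared with the true area. The ellipse geometry, the random sub-pixel shifts, and the error definition are illustrative assumptions, not the authors' formula or shape-factor construction.

```python
# Monte Carlo sketch: how the relative error of a pixel-counted area grows with
# pixel size (ground sample distance). Figure shape and error definition are
# illustrative assumptions.
import numpy as np

def relative_area_error(pixel_size, a=50.0, b=20.0, trials=200, seed=None):
    """Mean relative error of the area of an ellipse (semi-axes a, b in metres)
    measured by counting pixels of the given ground sample distance."""
    rng = np.random.default_rng(seed)
    true_area = np.pi * a * b
    errors = []
    for _ in range(trials):
        # Random sub-pixel shift of the figure relative to the pixel grid.
        cx, cy = rng.uniform(0, pixel_size, size=2)
        half = a + 2 * pixel_size
        xs = np.arange(-half, half, pixel_size) + cx
        ys = np.arange(-half, half, pixel_size) + cy
        X, Y = np.meshgrid(xs, ys)
        inside = (X / a) ** 2 + (Y / b) ** 2 <= 1.0   # pixel centres inside the ellipse
        measured = inside.sum() * pixel_size ** 2
        errors.append(abs(measured - true_area) / true_area)
    return float(np.mean(errors))

# The error grows as the pixel size approaches the object size.
for gsd in (1, 5, 10, 20):
    print(gsd, round(relative_area_error(gsd, seed=0), 4))
```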


2021 ◽  
Vol 22 (80) ◽  
pp. 201-219
Author(s):  
Jaiza Santos Motta ◽  
César Claudio Cáceres Encina ◽  
Eliane Guaraldo ◽  
Ariadne Brabosa Gonçalves ◽  
Roberto Macedo Gamarra ◽  
...  

The objective of this study is to adapt the calculation of the Pasture Degradation Index (GDI) to the Brazilian savanna using a medium-spatial-resolution satellite image from the dry season. Vegetation cover is the main evaluation parameter used to calculate the GDI. The extreme ranges of the grazing class were determined from the NDVI histogram of a single date. Pasture cover was divided into five classes of green vegetation cover (GVC), derived from NDVI, and compared with five other classes derived from field photographs, named green coverage percentage (GCP). The similarity between GVC and GCP demonstrated that GVC can be used to classify pasture cover. The GDI was then obtained as a product of GVC. The GDI showed that pasture degradation in Paraíso das Águas is very serious: extremely severe and severe degradation occupy 9.28% and 25.22% of the study area, moderate and light degradation occupy 8.29% and 4.50%, respectively, and non-degraded pasture covers 1.43% of the area. The results suggest that the GDI, originally developed for natural grasslands and multitemporal remote sensing data, can be applied to evaluate the condition of planted pastures in the tropical savanna by means of a single image.
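
A schematic sketch of the NDVI-to-cover-class step is shown below: NDVI is computed from red and near-infrared reflectance, split into five cover classes, and the class fractions are combined into a single degradation score. The thresholds and weights are illustrative assumptions, not the coefficients used to compute the GDI in this study.

```python
# Schematic sketch of deriving cover classes from NDVI and combining them into a
# degradation score. Thresholds and weights are illustrative assumptions, not
# the GDI coefficients used by the authors.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    return (nir - red) / (nir + red + eps)

def cover_classes(ndvi_img, thresholds=(0.2, 0.35, 0.5, 0.65)):
    """Assign each pasture pixel to one of five cover classes (1 = lowest cover)."""
    return np.digitize(ndvi_img, thresholds) + 1

def degradation_index(classes, weights=(1.0, 0.75, 0.5, 0.25, 0.0)):
    """Area-weighted degradation score in [0, 1]; higher means more degraded."""
    fractions = [(classes == c).mean() for c in range(1, 6)]
    return float(sum(w * f for w, f in zip(weights, fractions)))

# Toy example on random reflectance bands standing in for a dry-season image.
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.3, (100, 100))
nir = rng.uniform(0.1, 0.6, (100, 100))
classes = cover_classes(ndvi(nir, red))
print(degradation_index(classes))
```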


2017 ◽  
Vol 31 (2) ◽  
pp. 195-202 ◽  
Author(s):  
Jitka Kumhálová ◽  
Štěpánka Matějková

Currently, remote sensing sensors are very popular for crop monitoring and yield prediction. This paper describes how satellite images with moderate (Landsat) and very high (QuickBird and WorldView-2) spatial resolution, together with the GreenSeeker handheld crop sensor, can be used to estimate yield and crop growth variability. Winter barley (2007 and 2015) and winter wheat (2009 and 2011) were chosen because cloud-free data for the experimental field were available in the same time period from both Landsat and QuickBird or WorldView-2 images. The very high spatial resolution images were resampled to the coarser spatial resolution. The normalised difference vegetation index (NDVI) was derived from each satellite image data set and, for 2015 only, was also measured with the GreenSeeker handheld crop sensor. The results showed that each satellite image data set can be used for estimating yield and plant variability. Nevertheless, better agreement with crop yield was obtained for images acquired in later phenological phases, e.g. BBCH 59 in 2007 (average correlation coefficient 0.856) and BBCH 59 in 2011 (0.784). The GreenSeeker handheld crop sensor was not suitable for yield estimation due to its different measuring method.
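
A minimal sketch of the NDVI-versus-yield comparison is given below, where co-registered NDVI and yield rasters are compared with a Pearson correlation coefficient; the sampling, resampling, and aggregation details of the study are not reproduced, and the synthetic data are purely illustrative.

```python
# Sketch of correlating NDVI with yield over co-registered rasters; the data
# and the aggregation scheme are illustrative assumptions.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    return (nir - red) / (nir + red + eps)

def yield_correlation(ndvi_img, yield_img):
    """Pearson correlation between NDVI and yield over valid (finite) pixels."""
    mask = np.isfinite(ndvi_img) & np.isfinite(yield_img)
    return float(np.corrcoef(ndvi_img[mask], yield_img[mask])[0, 1])

# Toy co-registered rasters: synthetic yield loosely follows NDVI plus noise.
rng = np.random.default_rng(1)
red = rng.uniform(0.05, 0.2, (50, 50))
nir = rng.uniform(0.3, 0.6, (50, 50))
vi = ndvi(nir, red)
yield_map = 4.0 + 6.0 * vi + rng.normal(0, 0.3, vi.shape)   # t/ha, synthetic
print(round(yield_correlation(vi, yield_map), 3))
```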


Author(s):  
Adriana Verschoor ◽  
Ronald Milligan ◽  
Suman Srivastava ◽  
Joachim Frank

We have studied the eukaryotic ribosome from two vertebrate species (rabbit reticulocyte and chick embryo ribosomes) in several different electron microscopic preparations (Fig. 1a-d), and we have applied image processing methods to two of the types of images. Reticulocyte ribosomes were examined as single-particle specimens both in negative stain (0.5% uranyl acetate, in a double-carbon preparation) and in frozen-hydrated preparation. In addition, chick embryo ribosomes in tetrameric and crystalline assemblies in frozen-hydrated preparation have been examined. 2D averaging, multivariate statistical analysis, and classification methods were applied to the negatively stained single-particle micrographs and the frozen-hydrated tetramer micrographs to obtain statistically well defined projection images of the ribosome (Fig. 2a,c). 3D reconstruction methods, namely the random conical reconstruction scheme and weighted back projection, were applied to the negative-stain data, and several closely related reconstructions were obtained. The principal 3D reconstruction (Fig. 2b), which has a resolution of 3.7 nm according to the differential phase residual criterion, can be compared with the images of individual ribosomes in a 2D tetramer average (Fig. 2c) at a similar resolution, and good agreement of the general morphology and of many of the characteristic features is seen. Both data sets show the ribosome in roughly the same 'view', or orientation with respect to the adsorptive surface in the electron microscopic preparation, as judged by the agreement in both the projected form and the distribution of characteristic density features. The negative-stain reconstruction reveals details of the ribosome morphology; the 2D frozen-hydrated average provides projection information on the native mass-density distribution within the structure. The 40S subunit appears to have an elongate core of higher density, while the 60S subunit shows a more complex pattern of dense features, comprising a rather globular core locally extending close to the particle surface.
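
For readers unfamiliar with back projection, the toy 2D sketch below shows the basic idea of smearing projections back across the reconstruction grid; the actual random conical reconstruction scheme works in 3D with a weighting filter and is not reproduced here.

```python
# Toy 2D back-projection demo, included only to illustrate the general idea
# behind the (3D, weighted) back-projection scheme mentioned above.
import numpy as np
from scipy.ndimage import rotate

def project(image, angle_deg):
    """1D projection of a 2D image along rows after rotating by angle_deg."""
    return rotate(image, angle_deg, reshape=False, order=1).sum(axis=0)

def back_project(projections, angles_deg, size):
    """Unweighted back projection: smear each projection and rotate it back."""
    recon = np.zeros((size, size))
    for proj, ang in zip(projections, angles_deg):
        smear = np.tile(proj, (size, 1))              # constant along the ray direction
        recon += rotate(smear, -ang, reshape=False, order=1)
    return recon / len(angles_deg)

# Toy phantom: a bright rectangle, projected over 60 angles and reconstructed.
size = 64
phantom = np.zeros((size, size))
phantom[24:40, 20:44] = 1.0
angles = np.linspace(0, 180, 60, endpoint=False)
sinogram = [project(phantom, a) for a in angles]
recon = back_project(sinogram, angles, size)
print(recon.shape, round(float(np.corrcoef(phantom.ravel(), recon.ravel())[0, 1]), 2))
```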

