Theoretical Evaluation of Anisotropic Reflectance Correction Approaches for Addressing Multi-Scale Topographic Effects on the Radiation-Transfer Cascade in Mountain Environments

2019 ◽  
Vol 11 (23) ◽  
pp. 2728 ◽  
Author(s):  
Michael P. Bishop ◽  
Brennan W. Young ◽  
Jeffrey D. Colby ◽  
Roberto Furfaro ◽  
Enrico Schiassi ◽  
...  

Research involving anisotropic-reflectance correction (ARC) of multispectral imagery to account for topographic effects has been ongoing for approximately 40 years. A large body of research has focused on evaluating empirical ARC methods, yielding inconsistent results. Consequently, our research objective was to evaluate commonly used ARC methods using first-order radiation-transfer modeling to simulate ASTER multispectral imagery over Nanga Parbat, Himalaya. Specifically, we accounted for orbital dynamics, atmospheric absorption and scattering, direct- and diffuse-skylight irradiance, land-cover structure, and surface biophysical variations to evaluate their effectiveness in reducing multi-scale topographic effects. Our results clearly reveal that the empirical methods we evaluated could not reasonably account for multi-scale topographic effects at Nanga Parbat. The magnitude of reflectance and the correlation structure of biophysical properties were not preserved in the topographically corrected multispectral imagery. The CCOR and SCS+C methods were able to remove topographic effects under the Lambertian assumption, although atmospheric correction was required, and we did not account for other primary and secondary topographic effects that are thought to significantly influence spectral variation in imagery acquired over mountains. Evaluation of structural-similarity index images revealed spatially variable results that were wavelength dependent. Collectively, our simulation and evaluation procedures strongly suggest that empirical ARC methods have significant limitations for addressing anisotropic reflectance caused by multi-scale topographic effects. Results indicate that atmospheric correction is essential, and that most methods failed to produce the appropriate magnitude and spatial variation of surface reflectance in corrected imagery. Results were also wavelength dependent, as topographic effects influence radiation-transfer components differently in different regions of the electromagnetic spectrum. Our results explain inconsistencies described in the literature and indicate that numerical modeling efforts are required to better account for multi-scale topographic effects in the various radiation-transfer components.
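For reference, the two empirical corrections that performed best under the Lambertian assumption, the C-correction (CCOR) and SCS+C, have simple closed forms. The sketch below, assuming per-pixel slope, aspect, and solar geometry in radians and an atmospherically corrected input band, is illustrative only; the array names and the regression-based estimate of C are not taken from the study.

# Minimal sketch of two empirical ARC methods referenced above (C-correction and SCS+C),
# assuming slope, aspect, and solar geometry are already available per pixel (radians).
import numpy as np

def illumination(slope, aspect, sun_zenith, sun_azimuth):
    """Cosine of the solar incidence angle on a tilted surface."""
    return (np.cos(sun_zenith) * np.cos(slope) +
            np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))

def c_parameter(band, cos_i):
    """Empirical C from the linear regression band = a + b * cos_i."""
    b, a = np.polyfit(cos_i.ravel(), band.ravel(), 1)
    return a / b

def c_correction(band, cos_i, sun_zenith):
    c = c_parameter(band, cos_i)
    return band * (np.cos(sun_zenith) + c) / (cos_i + c)

def scs_c_correction(band, cos_i, slope, sun_zenith):
    c = c_parameter(band, cos_i)
    return band * (np.cos(slope) * np.cos(sun_zenith) + c) / (cos_i + c)

As the abstract notes, atmospheric correction would precede either correction; both formulas assume a Lambertian surface.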

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1269
Author(s):  
Jiabin Luo ◽  
Wentai Lei ◽  
Feifei Hou ◽  
Chenghao Wang ◽  
Qiang Ren ◽  
...  

Ground-penetrating radar (GPR), as a non-invasive instrument, has been widely used in civil engineering. GPR B-scan images may contain random noise arising from the environment and the equipment hardware, which degrades the interpretability of the useful information. Many methods have been proposed to eliminate or suppress this random noise, but existing methods perform unsatisfactorily when the image is severely contaminated. This paper proposes a multi-scale convolutional autoencoder (MCAE) to denoise GPR data. To address the problem of insufficient training data, we also designed a data augmentation strategy based on a Wasserstein generative adversarial network (WGAN) to enlarge the training dataset of the MCAE. Experiments conducted on simulated, generated, and field datasets demonstrated that the proposed scheme has promising denoising performance. In terms of three indices, the peak signal-to-noise ratio (PSNR), the time cost, and the structural similarity index (SSIM), the proposed scheme achieves better random-noise suppression than state-of-the-art competing methods (e.g., CAE, BM3D, WNNM).
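The three reported indices are straightforward to reproduce. The sketch below, assuming scikit-image is available and treating the denoiser as an interchangeable callable, shows one way such a comparison could be scored; it is not the authors' evaluation code.

# Minimal sketch of the evaluation indices reported above (PSNR, SSIM, time cost).
import time
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean, noisy, denoise_fn):
    """Score one denoiser (any of the compared methods) against the clean reference."""
    start = time.time()
    restored = denoise_fn(noisy)
    elapsed = time.time() - start
    data_range = clean.max() - clean.min()
    return {
        "PSNR": peak_signal_noise_ratio(clean, restored, data_range=data_range),
        "SSIM": structural_similarity(clean, restored, data_range=data_range),
        "time_s": elapsed,
    }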


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 319
Author(s):  
Yi Wang ◽  
Xiao Song ◽  
Guanghong Gong ◽  
Ni Li

With the rapid development of deep learning and artificial intelligence techniques, denoising via neural networks has drawn great attention because of its flexibility and excellent performance. However, in most convolutional network denoising methods the convolution kernel is only one layer deep, and features at distinct scales are neglected. Moreover, in the convolution operation all channels are treated equally, and the relationships among channels are not considered. In this paper, we propose a multi-scale feature extraction-based normalized attention neural network (MFENANN) for image denoising. In MFENANN, we define a multi-scale feature extraction block to extract and combine features at distinct scales of the noisy image. In addition, we propose a normalized attention network (NAN) to learn the relationships between channels, which smooths the optimization landscape and speeds up convergence when training an attention model. We introduce the NAN into convolutional network denoising, so that each channel receives its own gain and channels can play different roles in the subsequent convolution. To verify the effectiveness of the proposed MFENANN, we conducted experiments on both grayscale and color image sets with noise levels ranging from 0 to 75. The experimental results show that, compared with several state-of-the-art denoising methods, the restored images of MFENANN achieve higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values and a better overall appearance.
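The abstract does not spell out the NAN block, so the following PyTorch sketch only illustrates the general idea of per-channel gains computed from a normalized channel descriptor, using a squeeze-and-excitation style gate as a stand-in; it should not be read as the MFENANN architecture.

# Illustrative channel-attention block: each channel receives its own gain in (0, 1).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze spatial dimensions
        self.norm = nn.BatchNorm1d(channels)     # normalize the channel descriptor
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                        # per-channel gain
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)
        w = self.fc(self.norm(w)).view(b, c, 1, 1)
        return x * w                             # channels are reweighted, not treated equally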


2017 ◽  
pp. 711-723
Author(s):  
Vikrant Bhateja ◽  
Abhinav Krishn ◽  
Himanshi Patel ◽  
Akanksha Sahu

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed widely for computer-aided diagnosis of life-threatening diseases. Fusion has been performed using various approaches, such as pyramidal, multi-resolution, and multi-scale methods. Each fusion approach captures only a particular feature (i.e., the information content or the structural properties of an image). Therefore, this paper presents a comparative analysis and evaluation of multi-modal medical image fusion methodologies employing the wavelet as a multi-resolution approach and the ridgelet as a multi-scale approach. The current work highlights the utility of these approaches according to the features required in the fused image. A Principal Component Analysis (PCA)-based fusion algorithm has been employed in both the ridgelet and wavelet domains to minimise redundancies. Simulations have been performed for different sets of MR and CT-scan images taken from 'The Whole Brain Atlas'. The performance evaluation has been carried out using different image-quality parameters: Entropy (E), Fusion Factor (FF), Structural Similarity Index (SSIM), and Edge Strength (QFAB). The outcome of this analysis highlights the trade-off between the retrieval of information content and of morphological details in the finally fused image in the wavelet and ridgelet domains.
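As an illustration of PCA-weighted fusion in the wavelet domain, the sketch below assumes two co-registered grayscale source images (e.g., MR and CT) of equal size and uses PyWavelets; the ridgelet-domain variant would apply the same weighting to ridgelet coefficients. It is a sketch of the general scheme, not the authors' implementation.

# PCA-weighted fusion of one level of 2-D wavelet coefficients.
import numpy as np
import pywt

def pca_weights(img1, img2):
    """Weights from the principal eigenvector of the 2x2 covariance of the sources."""
    cov = np.cov(np.stack([img1.ravel(), img2.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def wavelet_pca_fusion(img1, img2, wavelet="db2"):
    w1, w2 = pca_weights(img1, img2)
    cA1, details1 = pywt.dwt2(img1, wavelet)
    cA2, details2 = pywt.dwt2(img2, wavelet)
    fused = (w1 * cA1 + w2 * cA2,
             tuple(w1 * d1 + w2 * d2 for d1, d2 in zip(details1, details2)))
    return pywt.idwt2(fused, wavelet)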


Author(s):  
Shenghan Mei ◽  
Xiaochun Liu ◽  
Shuli Mei

Locust slice images exhibit features such as strong self-similarity, piecewise smoothness, and nonlinear texture structure. A multi-scale interpolation operator is an effective tool for describing such structures, but it cannot overcome the influence of noise on the images. Therefore, this research designed the Shannon–Cosine wavelet, which possesses the desirable properties of interpolation, smoothness, compact support, and normalization, and then constructed a multi-scale wavelet interpolation operator that can decompose and reconstruct images adaptively. By combining this operator with local filtering operators (mean and median), a multi-scale Shannon–Cosine wavelet denoising algorithm based on cell filtering is constructed. The algorithm overcomes the limitation of the multi-scale interpolation wavelet, which is suitable only for describing smooth signals, and realizes multi-scale noise reduction of locust slice images. The experimental results show that the proposed method preserves the various texture structures in the locust slice images. In the experiments, locust slice images contaminated by mixed Gaussian and salt-and-pepper noise are taken as examples to compare the performance of the proposed method with that of other typical denoising methods. The experimental results show that the Peak Signal-to-Noise Ratio (PSNR) of the denoised images obtained by the proposed method is 27.3%, 24.6%, 2.94%, and 22.9% greater than that of the Wiener filter, the wavelet transform method, median filtering, and average filtering, respectively, and that the Structural Similarity Index (SSIM) for measuring image quality is 31.1%, 31.3%, 15.5%, and 10.2% greater than those of the four comparison methods, respectively. As the variance of the Gaussian white noise increases from 0.02 to 0.1, the PSNR and SSIM values obtained by the proposed method decrease by only 11.94% and 13.33%, respectively, which is much less than for the other four methods. This shows that the proposed method possesses stronger adaptability.
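A minimal sketch of the mixed Gaussian plus salt-and-pepper noise model and of the classical median/mean baselines referenced in the comparison is given below; the noise parameters and filter sizes are illustrative, not those of the study.

# Mixed-noise generation and classical filtering baselines.
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def add_mixed_noise(image, gauss_var=0.02, sp_amount=0.05, rng=None):
    """Image expected in [0, 1]; Gaussian noise followed by salt-and-pepper impulses."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = image + rng.normal(0.0, np.sqrt(gauss_var), image.shape)
    mask = rng.random(image.shape)
    noisy[mask < sp_amount / 2] = 0.0          # pepper
    noisy[mask > 1 - sp_amount / 2] = 1.0      # salt
    return np.clip(noisy, 0.0, 1.0)

# Classical baselines used in the comparison (median and average filtering):
# denoised_median = median_filter(noisy, size=3)
# denoised_mean   = uniform_filter(noisy, size=3)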


2020 ◽  
Author(s):  
Vincent Vionnet ◽  
Christopher B. Marsh ◽  
Brian Menounos ◽  
Simon Gascoin ◽  
Nicholas E. Wayand ◽  
...  

Abstract. The interaction of mountain terrain with meteorological processes causes substantial temporal and spatial variability in snow accumulation and ablation. Processes impacted by complex terrain include large-scale orographic enhancement of snowfall, small-scale processes such as gravitational and wind-induced transport of snow, and variability in the radiative balance such as through terrain shadowing. In this study, a multi-scale modelling approach is proposed to simulate the temporal and spatial evolution of high-mountain snowpacks using the Canadian Hydrological Model (CHM), a multi-scale, spatially distributed modelling framework. CHM permits variable spatial resolution through efficient terrain representation using unstructured triangular meshes. The model simulates processes such as radiation shadowing and irradiance to slopes, blowing-snow redistribution and sublimation, avalanching, forest-canopy interception and sublimation, and snowpack melt. Short-term, km-scale atmospheric forecasts from Environment and Climate Change Canada's Global Environmental Multiscale Model through its High Resolution Deterministic Prediction System (HRDPS) drive CHM and were downscaled to the unstructured mesh using process-based procedures. In particular, a new wind-downscaling strategy combines meso-scale HRDPS outputs and micro-scale pre-computed wind fields to allow for blowing-snow calculations. HRDPS-CHM was applied to simulate snow conditions down to 50-m resolution during winter 2017/2018 in a domain around the Kananaskis Valley (~1000 km²) in the Canadian Rockies. Simulations were evaluated using high-resolution airborne Light Detection and Ranging (LiDAR) snow depth data and snow persistence indices derived from remotely sensed imagery. Results included model falsifications and showed that both blowing snow and gravitational snow redistribution need to be simulated to capture the snowpack variability and the evolution of snow depth and persistence with elevation across the region. Accumulation of wind-blown snow on leeward slopes and the associated snow-cover persistence were underestimated in a CHM simulation driven by wind fields that did not capture lee-side flow recirculation and the associated wind-speed decreases. A terrain-based metric helped to identify these lee-side areas and improved the wind field and the associated snow redistribution. An overestimation of snow redistribution from windward to leeward slopes and subsequent avalanching was still found. The results of this study highlight the need for further improvements of snowdrift-permitting models for large-scale applications, in particular the representation of subgrid topographic effects on snow transport.
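The terrain-based metric itself is not defined in the abstract; one widely used sheltering measure in this spirit is the maximum upwind slope, sketched below for a gridded DEM. Treat this as an illustration of the idea rather than the metric implemented in CHM, and the search distance and coordinate conventions as assumptions.

# Illustrative maximum-upwind-slope sheltering metric on a gridded DEM
# (positive values indicate terrain sheltered from the given wind direction).
import numpy as np

def max_upwind_slope(dem, cell_size, wind_dir_deg, dmax=300.0):
    """Maximum slope angle (radians) toward upwind cells within dmax metres."""
    ny, nx = dem.shape
    theta = np.deg2rad(wind_dir_deg)          # direction the wind comes from
    ux, uy = np.sin(theta), -np.cos(theta)    # upwind step in (column, row) coordinates
    n_steps = int(dmax / cell_size)
    sx = np.full(dem.shape, -np.inf)
    jj, ii = np.meshgrid(np.arange(nx), np.arange(ny))
    for k in range(1, n_steps + 1):
        j = np.clip(np.round(jj + ux * k).astype(int), 0, nx - 1)
        i = np.clip(np.round(ii + uy * k).astype(int), 0, ny - 1)
        slope = np.arctan((dem[i, j] - dem) / (k * cell_size))
        sx = np.maximum(sx, slope)
    return sx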


2017 ◽  
Vol 43 (3) ◽  
pp. 92-104 ◽  
Author(s):  
Ali DEHGHANI ◽  
Alireza CHEHREGHAN ◽  
Rahim ALI ABBASPOUR

One of the main steps in acquiring and handling data in a multi-scale database is the automatic generation of links between corresponding objects at different scales, which is achieved by matching them across the datasets. The basic concept of this process is to detect and measure the spatial similarity between objects, which differs from one application to another and depends largely on the intrinsic properties of the input data. In fact, the spatial similarity index, which is a function of criteria such as geometric, topological, and semantic measures, is to some extent uncertain. Therefore, the present study aims to provide a matching algorithm based on fuzzy reasoning that takes human spatial cognition into account. The proposed algorithm was run on two road datasets of Yazd city in Iran, at scales of 1:5000 and 1:25,000. The evaluation results show that the matching rate and correctness of the algorithm are 92.7% and 88%, respectively, which validates the effectiveness of the proposed algorithm.
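As a rough illustration of aggregating geometric, topological, and semantic similarity scores for candidate road pairs with fuzzy reasoning, the sketch below uses simple triangular membership functions and a weighted combination; the actual rule base, membership functions, and weights of the proposed algorithm are not reproduced here.

# Toy fuzzy aggregation of similarity criteria for one candidate object pair.
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular fuzzy membership on [a, c] with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def match_degree(geometric, topological, semantic):
    """Aggregate similarity in [0, 1]; pairs above a chosen cut-off are linked."""
    high_geo = tri_membership(geometric, 0.5, 1.0, 1.5)
    high_top = tri_membership(topological, 0.5, 1.0, 1.5)
    high_sem = tri_membership(semantic, 0.5, 1.0, 1.5)
    return 0.5 * high_geo + 0.3 * high_top + 0.2 * high_sem   # illustrative weights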


2020 ◽  
Vol 13 (1) ◽  
pp. 076
Author(s):  
Cristiane Nunes Francisco ◽  
Paulo Roberto da Silva Ruiz ◽  
Cláudia Maria de Almeida ◽  
Nina Cardoso Gruber ◽  
Camila Souza dos Anjos

Analysis of the atmospheric correction impact on the assessment of the Normalized Difference Vegetation Index for a Landsat 8 OLI image. Arithmetic operations between spectral bands of remote sensing images require atmospheric correction to remove atmospheric effects from the spectral response of targets, since raw digital numbers are not on an equivalent scale across all bands. Vegetation indices, calculated through such arithmetic operations, both characterize vegetation and minimize the scene-illumination effects caused by topography. To analyze the efficacy of atmospheric correction in the calculation of vegetation indices, this work compared the Normalized Difference Vegetation Index (NDVI) computed from corrected and uncorrected images of a subset of a Landsat 8/OLI scene covering the city of Rio de Janeiro, Brazil. The results showed that the NDVI calculated from reflectance, i.e., the corrected image, produced the best result, owing to better discrimination of the vegetation and water-body classes in the image and to the minimization of the topographic effect on the vegetation index values. Keywords: remote sensing, urban forest, atmospheric correction.
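For concreteness, the comparison reduces to the standard NDVI band ratio applied once to raw digital numbers and once to surface reflectance. The sketch below assumes Landsat 8 OLI red (band 4) and NIR (band 5) arrays under illustrative variable names.

# NDVI = (NIR - Red) / (NIR + Red), computed for both uncorrected and corrected inputs.
import numpy as np

def ndvi(red, nir):
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

# ndvi_dn  = ndvi(red_dn,  nir_dn)    # from raw digital numbers (no atmospheric correction)
# ndvi_ref = ndvi(red_ref, nir_ref)   # from surface reflectance after atmospheric correction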

