Using spatial information for evaluating the quality of prediction maps from hyperspectral images: A geostatistical approach

2019 ◽  
Vol 1077 ◽  
pp. 116-128 ◽  
Author(s):  
Ana Herrero-Langreo ◽  
Nathalie Gorretta ◽  
Bruno Tisseyre ◽  
Aoife Gowen ◽  
Jun-Li Xu ◽  
...  

2013 ◽  
Vol 11 (1) ◽  
pp. 8-13
Author(s):  
V. Behar ◽  
V. Bogdanova

Abstract In this paper, the use of a set of nonlinear edge-preserving filters is proposed as a pre-processing stage to improve the quality of hyperspectral images before object detection. The capability of each nonlinear filter to improve images corrupted by spatially and spectrally correlated Gaussian noise is evaluated in terms of the average improvement factor in the peak signal-to-noise ratio (IPSNR), estimated at the filter output. The simulation results demonstrate that this pre-processing procedure is effective only when the spatial and spectral correlation coefficients of the noise do not exceed 0.6.
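For readers who want to reproduce this kind of evaluation, the sketch below shows one common way to compute an average IPSNR over a hyperspectral cube, assuming IPSNR is simply the per-band PSNR after filtering minus the per-band PSNR of the noisy input (the paper's exact definition may differ); the function names and cube layout are illustrative.

```python
import numpy as np

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB for a single band."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    peak = float(reference.max())
    return 10.0 * np.log10(peak ** 2 / mse)

def average_ipsnr(clean_cube, noisy_cube, filtered_cube):
    """Average improvement in PSNR across all spectral bands.

    Cubes are (rows, cols, bands). The improvement for each band is the
    PSNR after filtering minus the PSNR of the noisy input.
    """
    gains = [psnr(clean_cube[..., b], filtered_cube[..., b])
             - psnr(clean_cube[..., b], noisy_cube[..., b])
             for b in range(clean_cube.shape[-1])]
    return float(np.mean(gains))
```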


2021 ◽  
Vol 13 (2) ◽  
pp. 268
Author(s):  
Xiaochen Lv ◽  
Wenhong Wang ◽  
Hongfu Liu

Hyperspectral unmixing is an important technique for analyzing remote sensing images; it aims to obtain a collection of endmembers and their corresponding abundances. In recent years, non-negative matrix factorization (NMF) has received extensive attention due to its good adaptability to data with different degrees of mixing. The majority of existing NMF-based unmixing methods are developed by incorporating additional constraints into the standard NMF based on the spectral and spatial information of hyperspectral images. However, they neglect the imbalanced nature of the pixels in the data: pixels mixed with imbalanced endmembers tend to be ignored, and these endmembers therefore cannot be estimated accurately due to the statistical properties of NMF. To exploit the information carried by imbalanced samples during the unmixing procedure, this paper proposes a cluster-wise weighted NMF (CW-NMF) method for the unmixing of hyperspectral images with imbalanced data. Specifically, based on a clustering of the hyperspectral image, we construct a weight matrix and introduce it into the standard NMF model. The weight matrix assigns an appropriate weight to the reconstruction error between each original pixel and its reconstruction during unmixing. In this way, the adverse effect of imbalanced samples on the statistical accuracy of NMF is expected to be reduced by giving larger weights to pixels involving imbalanced endmembers and smaller weights to pixels mixed by majority endmembers. In addition, we extend the proposed CW-NMF by introducing an abundance sparsity constraint and graph-based regularization, respectively. Experimental results on both synthetic and real hyperspectral data are reported, and the effectiveness of the proposed methods is demonstrated by comparison with several state-of-the-art methods.
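As an illustration of the weighting idea, the sketch below implements weighted NMF with multiplicative updates, where each pixel's reconstruction error is scaled by a weight derived from the size of its cluster; the weight construction and the sum-to-one normalization are illustrative assumptions, not the paper's exact CW-NMF formulation.

```python
import numpy as np

def cluster_weights(labels):
    """Hypothetical cluster-wise weights: pixels in small clusters get
    larger weights, so imbalanced endmembers are not drowned out."""
    counts = np.bincount(labels)
    w = 1.0 / counts[labels]
    return w / w.max()

def weighted_nmf(X, n_endmembers, weights, n_iter=200, eps=1e-9):
    """Weighted NMF X ~ E @ A with per-pixel weights on the reconstruction
    error (multiplicative updates, Frobenius loss).

    X: (bands, pixels), weights: (pixels,).
    Returns endmembers E (bands, k) and abundances A (k, pixels).
    """
    bands, pixels = X.shape
    rng = np.random.default_rng(0)
    E = rng.random((bands, n_endmembers))
    A = rng.random((n_endmembers, pixels))
    W = weights[np.newaxis, :]                   # broadcast weights over bands
    for _ in range(n_iter):
        WX = W * X                               # weighted data
        WEA = W * (E @ A)                        # weighted reconstruction
        E *= (WX @ A.T) / (WEA @ A.T + eps)
        A *= (E.T @ WX) / (E.T @ WEA + eps)
        # Approximate abundance sum-to-one projection (a simplification).
        A /= A.sum(axis=0, keepdims=True) + eps
    return E, A
```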


2021 ◽  
Vol 10 (1) ◽  
pp. 30
Author(s):  
Alfonso Quarati ◽  
Monica De Martino ◽  
Sergio Rosim

Open Government Data (OGD) portals, thanks to the thousands of geo-referenced datasets containing spatial information that they host, are of great interest for any analysis or process relating to the territory. For this potential to be realized, users must be able to access and reuse these datasets. A factor often considered to hinder the full dissemination of OGD is the quality of its metadata. Starting from an experimental investigation conducted on over 160,000 geospatial datasets belonging to six national and international OGD portals, the first objective of this work is to provide an overview of the usage of these portals, measured in terms of dataset views and downloads. Furthermore, to assess the possible influence of metadata quality on the use of geospatial datasets, the metadata of each dataset were assessed and the correlation between these two variables was measured. The results show a significant underutilization of geospatial datasets and a generally poor quality of their metadata. In addition, only a weak correlation was found between usage and metadata quality, not enough to assert with certainty that the latter is a determining factor of the former.
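A minimal sketch of the kind of quality-usage correlation analysis described here, assuming per-dataset metadata quality scores and view/download counts and using Spearman's rank correlation (the paper does not state which coefficient was used); the numbers below are purely illustrative.

```python
import numpy as np
from scipy import stats

# Hypothetical per-dataset records: a metadata quality score in [0, 1]
# and a usage count (views + downloads).
quality = np.array([0.35, 0.80, 0.55, 0.20, 0.90, 0.40])
usage = np.array([12, 150, 40, 5, 90, 30])

# Spearman's rank correlation is a reasonable choice because usage counts
# are heavily skewed toward a few popular datasets.
rho, p_value = stats.spearmanr(quality, usage)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")
```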


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1949
Author(s):  
Lukas Sevcik ◽  
Miroslav Voznak

Video quality evaluation needs a combined approach that includes subjective and objective metrics, testing, and monitoring of the network. This paper presents a novel approach to mapping quality of service (QoS) to quality of experience (QoE): QoE metrics are used to determine user satisfaction limits, and QoS tools are applied to provide the minimum QoE expected by users. Our aim was to connect objective estimations of video quality with subjective ones. A comprehensive tool for estimating the subjective evaluation is proposed. The idea is based on evaluating and marking video sequences using a sentinel flag derived from the spatial information (SI) and temporal information (TI) of individual video frames. The authors created a video database for quality evaluation and derived SI and TI from each video sequence to classify the scenes. Video scenes from the database were evaluated by objective and subjective assessment. Based on the results, a new model for predicting subjective quality is defined and presented in this paper. This quality is predicted by an artificial neural network from the objective evaluation and the type of video sequence, defined by qualitative parameters such as resolution, compression standard, and bitstream. Furthermore, the authors created an optimum mapping function that defines the threshold for the variable bitrate setting based on the flag in the video, which determines the type of scene in the proposed model. This function allows a bitrate to be allocated dynamically for a particular segment of the scene while maintaining the desired quality. The proposed model can help video service providers increase the comfort of end users. The variable bitstream ensures consistent video quality and customer satisfaction while network resources are used effectively. The proposed model can also predict the appropriate bitrate based on the required quality of video sequences, defined using either objective or subjective assessment.
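The sketch below shows how SI and TI are commonly computed from the luma plane of a video sequence (following the ITU-T P.910 definitions); the exact derivation of the sentinel flag used in the paper is not detailed in the abstract.

```python
import numpy as np
from scipy import ndimage

def spatial_temporal_information(frames):
    """Compute SI and TI roughly as defined in ITU-T P.910.

    frames: iterable of 2-D grayscale (luma) arrays of equal shape.
    SI is the maximum over time of the std of the Sobel-filtered frame;
    TI is the maximum over time of the std of the frame difference.
    """
    si_values, ti_values = [], []
    previous = None
    for frame in frames:
        frame = frame.astype(float)
        gx = ndimage.sobel(frame, axis=0)
        gy = ndimage.sobel(frame, axis=1)
        si_values.append(np.hypot(gx, gy).std())   # Sobel gradient magnitude
        if previous is not None:
            ti_values.append((frame - previous).std())
        previous = frame
    return max(si_values), (max(ti_values) if ti_values else 0.0)
```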


2018 ◽  
Vol 10 (11) ◽  
pp. 1827 ◽  
Author(s):  
Ahram Song ◽  
Jaewan Choi ◽  
Youkyung Han ◽  
Yongil Kim

Hyperspectral change detection (CD) can be performed effectively using deep-learning networks. Although these approaches require qualified training samples, it is difficult to obtain ground-truth data in the real world, and preserving spatial information during training is difficult due to structural limitations. To solve such problems, our study proposed a novel CD method for hyperspectral images (HSIs), comprising sample generation and a deep-learning network called the recurrent three-dimensional (3D) fully convolutional network (Re3FCN), which merges the advantages of a 3D fully convolutional network (FCN) and a convolutional long short-term memory (ConvLSTM). Principal component analysis (PCA) and the spectral correlation angle (SCA) were used to generate training samples with high probabilities of being changed or unchanged. This strategy made it possible to train the network with fewer, yet representative, samples. The Re3FCN mainly comprises spectral–spatial and temporal modules. In particular, the spectral–spatial module with a 3D convolutional layer extracts spectral–spatial features from the HSIs simultaneously, whilst the temporal module with ConvLSTM records and analyzes the multi-temporal HSI change information. The study first proposed a simple and effective method to generate samples for network training, which can be applied effectively to cases with no training samples. Re3FCN can perform end-to-end detection for binary and multiple changes, and it can receive multi-temporal HSIs directly as input without learning the characteristics of multiple changes separately. Finally, the network extracts joint spectral–spatial–temporal features and preserves the spatial structure during learning through its fully convolutional structure. This study was the first to use a 3D FCN and a ConvLSTM for remote-sensing CD. To demonstrate the effectiveness of the proposed CD method, we performed binary and multi-class CD experiments. The results revealed that Re3FCN outperformed conventional methods such as change vector analysis, iteratively reweighted multivariate alteration detection, PCA-SCA, FCN, and the combination of 2D convolutional layers with fully connected LSTM.
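As a rough illustration of the sample-generation step, the sketch below labels pixel pairs from two dates as likely changed or unchanged using the spectral correlation angle only, with SCA taken as arccos((r + 1)/2) for the Pearson correlation r between the two spectra; the thresholds and the omission of the PCA step are simplifications, not the paper's full procedure.

```python
import numpy as np

def spectral_correlation_angle(s1, s2):
    """Spectral correlation angle (SCA) between two pixel spectra.

    Uses the common definition SCA = arccos((r + 1) / 2), where r is the
    Pearson correlation between the spectra, so SCA lies in [0, pi/2].
    Small angles indicate spectrally similar (likely unchanged) pixels.
    """
    s1 = s1 - s1.mean()
    s2 = s2 - s2.mean()
    r = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2) + 1e-12)
    return float(np.arccos((r + 1.0) / 2.0))

def change_candidates(cube_t1, cube_t2, changed_thresh, unchanged_thresh):
    """Label pixels with a high probability of change / no change for training.

    cube_t1, cube_t2: (rows, cols, bands) images of the same scene at two dates.
    Returns an integer map: 1 = likely changed, 0 = likely unchanged, -1 = undecided.
    """
    rows, cols, _ = cube_t1.shape
    labels = np.full((rows, cols), -1, dtype=int)
    for i in range(rows):
        for j in range(cols):
            angle = spectral_correlation_angle(cube_t1[i, j], cube_t2[i, j])
            if angle >= changed_thresh:
                labels[i, j] = 1
            elif angle <= unchanged_thresh:
                labels[i, j] = 0
    return labels
```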


TecnoLógicas ◽  
2019 ◽  
Vol 22 (46) ◽  
pp. 1-14 ◽  
Author(s):  
Jorge Luis Bacca ◽  
Henry Arguello

Spectral image clustering is an unsupervised classification method that identifies distributions of pixels using spectral information, without requiring a previous training stage. Sparse subspace clustering (SSC)-based methods assume that hyperspectral images lie in the union of multiple low-dimensional subspaces. Based on this assumption, SSC groups spectral signatures into different subspaces, expressing each spectral signature as a sparse linear combination of all pixels and ensuring that the non-zero elements belong to the same class. Although these methods have shown good accuracy for unsupervised classification of hyperspectral images, their computational complexity becomes intractable as the number of pixels increases, i.e., when the spatial dimension of the image is large. For this reason, this paper proposes to reduce the number of pixels to be classified in the hyperspectral image and then to obtain the clustering results for the missing pixels by exploiting the spatial information. Specifically, this work proposes two methodologies for removing pixels: the first is based on a spatial blue-noise distribution, which reduces the probability of removing clusters of neighboring pixels, and the second is a sub-sampling procedure that eliminates every second pixel, preserving the spatial structure of the scene. The performance of the proposed spectral image clustering framework is evaluated on three datasets, showing that similar accuracy is obtained when up to 50% of the pixels are removed; in addition, it is up to 7.9 times faster than classifying the complete datasets.
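A minimal sketch of the second sub-sampling idea, assuming a checkerboard pattern (every second pixel removed) and a simple majority vote over the kept 4-neighbors to recover labels for the removed pixels; the blue-noise variant and the exact label-recovery rule used in the paper are not reproduced here.

```python
import numpy as np

def checkerboard_mask(rows, cols):
    """Keep every other pixel in a checkerboard pattern, halving the number
    of pixels passed to the (expensive) SSC step while preserving the
    spatial structure of the scene."""
    r, c = np.indices((rows, cols))
    return (r + c) % 2 == 0

def propagate_labels(labels, keep_mask):
    """Assign each removed pixel the majority label of its kept 4-neighbors.

    labels holds valid (non-negative) cluster ids at kept positions.
    """
    rows, cols = labels.shape
    full = labels.copy()
    for i in range(rows):
        for j in range(cols):
            if keep_mask[i, j]:
                continue
            neighbors = [labels[x, y]
                         for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < rows and 0 <= y < cols and keep_mask[x, y]]
            if neighbors:
                full[i, j] = np.bincount(neighbors).argmax()
    return full
```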


Author(s):  
S. A. Zotov ◽  
E. V. Dmitriev ◽  
S. Yu. Shibanov ◽  
V. V. Kozoderov ◽  
S. A. Donskoy

Within the framework of the program on Earth remote sensing from space, the hyperspectral camera NA-GS (scientific instrument "Hyperspectrometer"), produced by NPO Lepton (Zelenograd, Moscow), will be installed on the Russian segment of the International Space Station (ISS) for experimental testing of the ground-space system for monitoring and forecasting natural and man-made disasters. The practical use of this system involves solving certain problems of thematic processing of hyperspectral images, which must meet certain quality criteria. In this paper, we propose a technique for determining the operational capabilities of the NA-GS instrument based on statistical simulation modeling (SSM). The proposed SSM makes it possible to perform model experiments for a test polygon of complex shape, to simulate hyperspectral imaging of selected parts of the polygon with a specified accuracy, and to take into account cloud cover and the solar zenith angle. The influence of external observation conditions on the quality of hyperspectral images is considered. Numerical experiments were carried out for selected test areas, and analysis of the results confirms the reliability of the proposed technique.


Author(s):  
A. K. Singh ◽  
H. V. Kumar ◽  
G. R. Kadambi ◽  
J. K. Kishore ◽  
J. Shuttleworth ◽  
...  

In this paper, a quality metrics evaluation of hyperspectral images is presented using k-means clustering and segmentation. After classification, the similarity between the original image and the classified image is assessed by measuring image quality parameters. Experiments were carried out on four different types of hyperspectral images: aerial and spaceborne hyperspectral images with different spectral and geometric resolutions were considered for the quality metrics evaluation. Principal Component Analysis (PCA) was applied to reduce the dimensionality of the hyperspectral data, reducing the number of effective variables and hence the processing complexity. In the case of ordinary images, a human viewer plays an important role in quality evaluation. Hyperspectral data, however, are generally processed by automatic algorithms and cannot be viewed directly by human viewers; therefore, evaluating the quality of the classified image becomes even more significant. An elaborate comparison is made between k-means clustering and segmentation for all the images using the Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Maximum Squared Error, the ratio of squared norms (L2RAT), and Entropy. The first four parameters are calculated by comparing the quality of the original hyperspectral image with that of the classified image. Entropy, a measure of uncertainty or randomness, is calculated for the classified image. The proposed methodology can be used to assess the performance of any hyperspectral image classification technique.
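The sketch below computes the five metrics listed above for a single band, using the usual conventions (MSE and PSNR against the original, L2RAT as the ratio of squared norms, entropy from the normalized histogram of the classified image); the exact implementations used by the authors may differ.

```python
import numpy as np

def quality_metrics(original, classified):
    """Compare an original band/image with its classified counterpart.

    Returns MSE, PSNR (dB), maximum squared error, L2RAT (ratio of squared
    norms) and the entropy of the classified image.
    """
    x = original.astype(float)
    y = classified.astype(float)
    mse = np.mean((x - y) ** 2)
    psnr = 10.0 * np.log10(x.max() ** 2 / mse)
    max_sq_err = np.max((x - y) ** 2)
    l2rat = np.sum(y ** 2) / np.sum(x ** 2)
    # Entropy of the classified image from its normalized histogram.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    entropy = -np.sum(p * np.log2(p))
    return {"MSE": mse, "PSNR": psnr, "MaxSqErr": max_sq_err,
            "L2RAT": l2rat, "Entropy": entropy}
```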


Author(s):  
Brian N. Hilton ◽  
Richard J. Burkhard ◽  
Tarun Abhichandani

An approach to an ontology-based information system design theory for spatial information system development is presented. This approach addresses the dynamic nature of information system development at the beginning of the 21st century and the question of how to establish relationships between the various design components of a spatial information system. It should also help automate and guide the design process while improving the quality of the process and its outputs. An example of this approach is presented, along with examples of the various ontologies utilized in the design of this particular spatial information system. Finally, a method to mitigate the issues regarding the organization and management of a growing library of ontologies is discussed.


2018 ◽  
Vol 2018 ◽  
pp. 1-7
Author(s):  
Erdal Akyol ◽  
Mutlu Alkan ◽  
Ali Kaya ◽  
Suat Tasdelen ◽  
Ali Aydin

In recent years, the quality of life in urban areas has become a growing interest in civil engineering. Environmental quality is essential for assessing the state of sustainable development and for establishing the corresponding countermeasures for the protection of the environment. Urban environmental quality involves multidisciplinary parameters and is difficult to analyze: the problem is not only complex but also involves many uncertainties, and decision-making on these issues is challenging because it inherently contains many parameters and alternatives. Multicriteria decision analysis (MCDA) is a powerful technique for solving that sort of problem, and it gives users confidence by synthesizing the available information. Environmental concerns frequently contain spatial information, and spatial multicriteria decision analysis (SMCDA), which incorporates Geographic Information Systems (GIS), is efficient at tackling this type of problem. This study employed geographic and urbanization parameters with these methods to assess environmental urbanization quality. The study area was described in five categories: very favorable, favorable, moderate, unfavorable, and very unfavorable. The results are valuable for understanding the current situation, and they could help mitigate the related concerns. The study shows that the SMCDA descriptions match the perception of environmental quality in the city.
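As a sketch of the SMCDA workflow, the code below combines normalized criterion rasters by a weighted linear combination and splits the resulting suitability score into the five categories used in the study; the equal-interval class breaks and the weights are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def weighted_overlay(criteria, weights):
    """Weighted linear combination of normalized criterion rasters.

    criteria: list of 2-D arrays scaled to [0, 1] (higher = better);
    weights: criterion weights summing to 1 (e.g. from pairwise comparison).
    Returns a suitability score raster in [0, 1].
    """
    score = np.zeros_like(criteria[0], dtype=float)
    for layer, w in zip(criteria, weights):
        score += w * layer
    return score

def classify_five(score):
    """Split the suitability score into the five categories used in the study
    (equal-interval breaks, an assumption for illustration)."""
    bins = [0.2, 0.4, 0.6, 0.8]
    labels = np.array(["very unfavorable", "unfavorable", "moderate",
                       "favorable", "very favorable"])
    return labels[np.digitize(score, bins)]
```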

