decision level fusion
Recently Published Documents


TOTAL DOCUMENTS

201
(FIVE YEARS 49)

H-INDEX

18
(FIVE YEARS 3)

2021 ◽  
Vol 13 (22) ◽  
pp. 4668
Author(s):  
Stella Ofori-Ampofo ◽  
Charlotte Pelletier ◽  
Stefan Lang

Crop maps are key inputs for crop inventory production and yield estimation and can inform the implementation of effective farm management practices. Producing these maps at detailed scales requires exhaustive field surveys that can be laborious, time-consuming, and expensive to replicate. With a growing archive of remote sensing data, there are enormous opportunities to exploit dense satellite image time series (SITS), temporal sequences of images over the same area. Generally, crop type mapping relies on single-sensor inputs and is solved with traditional learning algorithms such as random forests or support vector machines. Nowadays, deep learning techniques have brought significant improvements by leveraging information in both the spatial and temporal dimensions, which are relevant in crop studies. The concurrent availability of Sentinel-1 (synthetic aperture radar) and Sentinel-2 (optical) data offers a great opportunity to utilize them jointly; however, optimizing their synergy has been understudied with deep learning techniques. In this work, we analyze and compare three fusion strategies (input, layer, and decision levels) to identify the strategy that best optimizes optical-radar classification performance. They are applied to a recent architecture, the pixel-set encoder–temporal attention encoder (PSE-TAE), developed specifically for object-based classification of SITS and based on self-attention mechanisms. Experiments are carried out in Brittany, in the northwest of France, with Sentinel-1 and Sentinel-2 time series. Input- and layer-level fusion competitively achieved the best overall F-score, surpassing decision-level fusion by 2%. On a per-class basis, decision-level fusion increased the accuracy of dominant classes, whereas layer-level fusion improved minority classes by up to 13%. Compared to the single-sensor baselines, the multi-sensor fusion strategies identified crop types more accurately: for example, input-level fusion outperformed Sentinel-2 and Sentinel-1 by 3% and 9% in F-score, respectively. We also conducted experiments that showed the importance of fusion for early time series classification and under high cloud cover conditions.
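The three fusion levels compared in this abstract can be sketched independently of the PSE-TAE architecture with a toy linear classifier. All shapes, weights, and feature vectors below are illustrative placeholders, not the paper's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-parcel feature vectors from two sensors (illustrative sizes only).
s1 = rng.normal(size=(4, 8))   # "Sentinel-1" features: 4 samples x 8 dims
s2 = rng.normal(size=(4, 10))  # "Sentinel-2" features: 4 samples x 10 dims

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

n_classes = 3
w_in = rng.normal(size=(18, n_classes))   # classifier on concatenated inputs
w_s1 = rng.normal(size=(8, n_classes))    # per-sensor classifiers
w_s2 = rng.normal(size=(10, n_classes))

# Input-level fusion: concatenate raw features, then classify once.
p_input = softmax(np.concatenate([s1, s2], axis=1) @ w_in)

# Layer-level fusion: encode each sensor separately, then concatenate the
# intermediate embeddings and classify.
h1 = np.maximum(s1 @ rng.normal(size=(8, 6)), 0)    # toy per-sensor encoder
h2 = np.maximum(s2 @ rng.normal(size=(10, 6)), 0)
w_layer = rng.normal(size=(12, n_classes))
p_layer = softmax(np.concatenate([h1, h2], axis=1) @ w_layer)

# Decision-level fusion: classify each sensor separately, then average
# the class scores.
p_decision = 0.5 * softmax(s1 @ w_s1) + 0.5 * softmax(s2 @ w_s2)
```

Each strategy places the merge point at a different depth: before the model, inside it, or after the per-sensor predictions; all three outputs remain valid probability distributions over the classes.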


2021 ◽  
pp. 100287
Author(s):  
Abdu Gumaei ◽  
Walaa N. Ismail ◽  
Md. Rafiul Hassan ◽  
Mohammad Mehedi Hassan ◽  
Ebtsam Mohamed ◽  
...  

2021 ◽  
Vol 13 (18) ◽  
pp. 3579
Author(s):  
Junge Shen ◽  
Chi Zhang ◽  
Yu Zheng ◽  
Ruxin Wang

Remote sensing image scene classification is an important task in remote sensing image applications, one that benefits from the strong performance of deep convolutional neural networks (CNNs). When applying deep models to this task, the challenges are, on the one hand, that targets with highly different scales may exist in the same image and small targets can be lost in the deep feature maps of CNNs; and, on the other hand, that remote sensing image data exhibit high inter-class similarity and high intra-class variance. Both factors can limit the performance of deep models, which motivates us to develop an adaptive decision-level information fusion framework that can be incorporated into any CNN backbone. Specifically, given a CNN backbone that predicts multiple classification scores based on the feature maps of different layers, we develop a pluggable importance factor generator that predicts a factor for each score. The factors measure how confident the scores in different layers are with respect to the final output. Formally, the final score is obtained by a class-wise, weighted summation of the scores and the corresponding factors. To reduce the co-adaptation effect among the scores of different layers, we propose a stochastic decision-level fusion training strategy that lets each classification score randomly participate in the decision-level fusion. Experiments on four popular datasets, including the UC Merced Land-Use dataset, the RSSCN7 dataset, the AID dataset, and the NWPU-RESISC45 dataset, demonstrate the superiority of the proposed method over other state-of-the-art methods.
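The weighted, class-wise fusion and the stochastic participation described in this abstract can be sketched as follows. The per-layer scores and importance factors below are random placeholders (in the paper the factors come from a learned generator), and the masking scheme is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Classification scores from three intermediate layers (5 samples, 10 classes).
scores = [softmax(rng.normal(size=(5, 10))) for _ in range(3)]

# One importance factor per layer per sample (random placeholders here).
raw_factors = rng.normal(size=(5, 3))

def fuse(scores, raw_factors, keep_mask=None):
    """Class-wise weighted summation of per-layer scores.

    keep_mask: boolean (n_layers,); layers marked False are dropped, as in
    the stochastic decision-level fusion training strategy."""
    n_layers = len(scores)
    if keep_mask is None:
        keep_mask = np.ones(n_layers, dtype=bool)
    f = raw_factors.copy()
    f[:, ~keep_mask] = -np.inf           # dropped layers receive zero weight
    w = softmax(f)                        # normalize factors per sample
    return sum(w[:, i:i + 1] * scores[i]
               for i in range(n_layers) if keep_mask[i])

# Inference: all layers participate in the fusion.
final = fuse(scores, raw_factors)

# Training: each layer participates at random, reducing co-adaptation.
mask = rng.random(3) < 0.5
mask[rng.integers(3)] = True              # keep at least one layer
final_train = fuse(scores, raw_factors, mask)
```

Because the factors are normalized per sample and each layer's score is itself a distribution, the fused output stays a valid distribution whether or not layers are masked out.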


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Peng Zhao ◽  
Zhen-Yu Li ◽  
Cheng-Kun Wang

A novel wood species spectral classification scheme is proposed based on a fuzzy rule classifier. The visible/near-infrared (VIS/NIR) spectral reflectance curve of a wood sample's cross section was captured using a USB 2000-VIS-NIR spectrometer and a FLAME-NIR spectrometer. First, the wood spectral curve, covering spectral bands of 376.64–779.84 nm and 950–1650 nm, was processed using the principal component analysis (PCA) dimension-reduction algorithm. The wood spectral data were divided into two datasets, namely, training and testing sets. The training set was used to generate the membership functions and the initial fuzzy rule set, with the fuzzy rules then adjusted to supplement and refine the classification rules into a complete fuzzy rule set. Second, a fuzzy classifier was applied to the VIS and NIR bands. An improved decision-level fusion scheme based on Dempster–Shafer (D-S) evidential theory was proposed to further improve the accuracy of wood species recognition. Test results on the testing set indicated that the overall recognition accuracy (ORA) of our scheme reached 94.76% for 50 wood species, which is superior to that of conventional classification algorithms and recent state-of-the-art wood species classification schemes. This method can rapidly achieve good recognition results, especially on small datasets, owing to its low computational time and space complexity.
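The D-S decision-level fusion step can be illustrated with the basic Dempster combination rule, here simplified to mass functions that assign belief only to singleton classes (the paper's improved scheme refines this basic rule; the toy posteriors below are made up):

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions restricted
    to singleton hypotheses (one mass value per class)."""
    joint = m1 * m2                     # agreement mass on each class
    conflict = 1.0 - joint.sum()        # mass assigned to conflicting pairs
    if np.isclose(conflict, 1.0):
        raise ValueError("total conflict: sources cannot be combined")
    return joint / (1.0 - conflict)     # renormalize the agreeing mass

# Toy class posteriors from the VIS-band and NIR-band fuzzy classifiers
# (three hypothetical wood species).
m_vis = np.array([0.6, 0.3, 0.1])
m_nir = np.array([0.5, 0.4, 0.1])

fused = dempster_combine(m_vis, m_nir)
```

When the two sources agree on the top class, the combination reinforces it: the fused belief in the agreed class exceeds what either classifier assigned alone, which is the behavior that makes D-S fusion attractive at the decision level.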

