Extraction of object features from high resolution SAR images based on SURF features

Author(s):  
Xiaonjuan Tian ◽  
Chao Wang ◽  
Hong Zhang
Sensors ◽  
2015 ◽  
Vol 15 (9) ◽  
pp. 23071-23094 ◽  
Author(s):  
Yihua Tan ◽  
Qingyun Li ◽  
Yansheng Li ◽  
Jinwen Tian

2021 ◽  
Vol 13 (2) ◽  
pp. 328
Author(s):  
Wenkai Liang ◽  
Yan Wu ◽  
Ming Li ◽  
Yice Cao ◽  
Xin Hu

The classification of high-resolution (HR) synthetic aperture radar (SAR) images is of great importance for SAR scene interpretation and application. However, the intricate spatial structural patterns and complex statistical nature of SAR images make classification a challenging task, especially when labeled SAR data are limited. This paper proposes a novel HR SAR image classification method using a multi-scale deep feature fusion network and covariance pooling manifold network (MFFN-CPMN). MFFN-CPMN combines the advantages of local spatial features and global statistical properties and considers the multi-feature information fusion of SAR images in representation learning. First, we propose a Gabor-filtering-based multi-scale feature fusion network (MFFN), a deep convolutional neural network (CNN), to capture spatial patterns and obtain discriminative features of SAR images. To make full use of the large amount of unlabeled data, the weights of each layer of the MFFN are optimized by an unsupervised denoising dual-sparse encoder. Moreover, the feature fusion strategy in the MFFN effectively exploits the complementary information between different levels and different scales. Second, we utilize a covariance pooling manifold network to further extract the global second-order statistics of SAR images over the fused feature maps; the resulting covariance descriptor is more discriminative across various land covers. Experimental results on four HR SAR images demonstrate the effectiveness of the proposed method, which achieves promising results compared with other related algorithms.
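The covariance pooling idea can be illustrated with a minimal NumPy sketch: take the sample covariance of the fused feature maps over all spatial positions, then apply a log-Euclidean mapping (matrix logarithm via eigendecomposition) so the symmetric positive-definite descriptor can be handled with ordinary Euclidean tools. The function name, channel counts, and `eps` regularizer below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def covariance_pooling(feature_maps: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Global second-order (covariance) pooling over a stack of feature maps.

    feature_maps: array of shape (C, H, W) -- C channels over an H x W grid.
    Returns a C x C log-Euclidean covariance descriptor (illustrative sketch).
    """
    c, h, w = feature_maps.shape
    x = feature_maps.reshape(c, h * w)          # each column is one spatial position
    x = x - x.mean(axis=1, keepdims=True)       # center per channel
    cov = (x @ x.T) / (h * w - 1)               # sample covariance, C x C
    cov += eps * np.eye(c)                      # regularize so the matrix is SPD
    # Log-Euclidean mapping: matrix logarithm via eigendecomposition,
    # which flattens the SPD manifold so Euclidean classifiers apply.
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

rng = np.random.default_rng(0)
maps = rng.standard_normal((8, 16, 16))         # e.g. 8 fused feature channels
desc = covariance_pooling(maps)
print(desc.shape)                               # (8, 8), symmetric descriptor
```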


2021 ◽  
Author(s):  
Aedan Yue Li ◽  
Keisuke Fukuda ◽  
Morgan Barense

Though much progress has been made toward understanding feature integration, debate remains regarding how objects are represented in the mind based on their constituent features. Here, we advance this debate by introducing a novel shape-color "conjunction task" to reconstruct memory resolution for multiple object features simultaneously. In a first experiment, we replicated and extended a classic change detection paradigm using our task. Replicating previous work, memory resolution for individual features was reduced when the number of objects increased, regardless of the number of to-be-remembered features. Extending previous work, we found that high-resolution memory (near-perfect in resemblance to the target) was selectively impacted by the number of to-be-remembered features. Applying a statistical model of stochastic dependence, we found evidence primarily for integration of low-resolution feature memories, but less evidence for integration of high-resolution feature memories. These results suggest a resolution trade-off: memory resolution for individual features can be higher when those features are represented independently than when they are integrated. In a second experiment, which manipulated the nature of distracting information, we examined whether object features were bound directly to each other or by virtue of shared spatial location. Feature integration was disrupted by distractors sharing the visual features of target objects but not by distractors sharing their spatial location, suggesting that feature integration was driven by direct binding between shape and color features. Our results constrain theoretical models of object representation, providing empirical support for hierarchical representations of both integrated and independent features.
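The abstract does not spell out the statistical model of stochastic dependence, but its general logic can be sketched: compare the observed probability of recalling both features of an object against the product of the marginal probabilities (the prediction under independent feature memories). The function name, simulation parameters, and the shared latent-memory assumption below are hypothetical, for illustration only.

```python
import numpy as np

def dependence_score(shape_ok: np.ndarray, color_ok: np.ndarray) -> float:
    """Contingency measure of stochastic dependence between two feature
    memories: P(both correct) - P(shape correct) * P(color correct).
    Scores near zero are what independent feature memories predict;
    positive scores mean the features succeed or fail together."""
    return (shape_ok & color_ok).mean() - shape_ok.mean() * color_ok.mean()

rng = np.random.default_rng(1)
n = 10_000
# Integrated object: a shared latent memory trace drives both features.
latent = rng.random(n) < 0.7
shape_ok = latent & (rng.random(n) < 0.9)
color_ok = latent & (rng.random(n) < 0.9)
integrated = dependence_score(shape_ok, color_ok)   # clearly positive
# Independent feature memories: no shared latent cause.
indep = dependence_score(rng.random(n) < 0.6, rng.random(n) < 0.6)  # near zero
```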


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3580 ◽  
Author(s):  
Jie Wang ◽  
Ke-Hong Zhu ◽  
Li-Na Wang ◽  
Xing-Dong Liang ◽  
Long-Yong Chen

In recent years, multi-input multi-output (MIMO) synthetic aperture radar (SAR) systems, which can improve the performance of 3D imaging, high-resolution wide-swath remote sensing, and multi-baseline interferometry, have received considerable attention. Several papers on MIMO-SAR have been published, but research on such systems remains seriously limited, mainly because the superposed echoes of the multiple transmitted orthogonal waveforms cannot be separated perfectly. The imperfect separation introduces ambiguous energy and degrades SAR images dramatically. In this paper, a novel orthogonal waveform separation scheme based on echo-compression is proposed for airborne MIMO-SAR systems. Specifically, apart from the simultaneous transmissions, each transmitter is required to radiate alone several times within a synthetic aperture to sense its private inner-aperture channel. Since the channel responses at neighboring azimuth positions are correlated, the energy of the solely radiated orthogonal waveforms in the superposed echoes is concentrated into peaks. The echoes of the multiple transmitted orthogonal waveforms can then be separated by cancelling these peaks. In addition, the cleaned echoes, along with the original superposed ones, can be used to reconstruct the unambiguous echoes. The proposed scheme is validated by simulations.
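The peak-cancellation idea behind such separation can be sketched in a toy NumPy example: correlating the superposed echo with a solo-transmission reference waveform concentrates that waveform's energy into a peak, whose lag and amplitude define a replica to subtract (a CLEAN-style step). The noise-like pulses standing in for the orthogonal waveforms, the single-point-target channel, and the function names are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def place(pulse: np.ndarray, delay: int, n: int) -> np.ndarray:
    """Embed a pulse into a length-n zero record at the given delay."""
    out = np.zeros(n)
    out[delay:delay + len(pulse)] = pulse
    return out

def cancel_waveform(superposed: np.ndarray, pulse: np.ndarray) -> np.ndarray:
    """Correlate the superposed echo with the solo-transmission reference,
    locate the energy peak, and subtract the scaled, shifted replica."""
    corr = np.correlate(superposed, pulse, mode="valid")
    delay = int(np.abs(corr).argmax())               # lag of the replica
    amp = corr[delay] / np.dot(pulse, pulse)         # least-squares amplitude
    return superposed - amp * place(pulse, delay, len(superposed))

rng = np.random.default_rng(42)
n = 512
p1 = rng.standard_normal(128)   # transmitter 1 waveform (noise-like, near-orthogonal)
p2 = rng.standard_normal(128)   # transmitter 2 waveform
echo = place(p1, 100, n) + 0.6 * place(p2, 260, n)   # superposed echo
cleaned = cancel_waveform(echo, p2)                  # remove transmitter 2's part
```

After cancellation, the residual is close to transmitter 1's contribution alone, so each waveform's echo can be recovered in turn.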

