spatial feature
Recently Published Documents

TOTAL DOCUMENTS: 327 (five years: 142)
H-INDEX: 26 (five years: 8)

2021, Vol. 14 (1), pp. 79
Author(s): Miaomiao Liang, Huai Wang, Xiangchun Yu, Zhe Meng, Jianbing Yi, ...

Hyperspectral images (HSIs), acquired as 3D data sets, contain spectral and spatial information that is important for ground–object recognition. A 3D convolutional neural network (3DCNN) is therefore better suited than a 2D one to extracting multiscale neighborhood information in the spectral and spatial domains simultaneously, provided it is not constrained by a massive parameter count and computation cost. In this paper, we propose a novel lightweight multilevel feature fusion network (LMFN) that achieves satisfactory HSI classification with fewer parameters and a lower computational burden. The LMFN decouples spectral–spatial feature extraction into two modules: point-wise 3D convolution, which learns correlations between adjacent bands with no spatial perception, and depth-wise convolution, which captures local texture features while leaving the spectral receptive field unchanged. A target-guided fusion mechanism (TFM) is then introduced to achieve multilevel spectral–spatial feature fusion between the two modules. More specifically, multiscale spectral features are endowed with spatial long-range dependency, quantified by a similarity measurement guided by the central target pixel. The results from shallow to deep layers are then added, in order, to the corresponding spatial modules. The TFM block enhances adjacent-band spectral correlation and focuses on pixels that actively boost the target classification accuracy, while performing multiscale feature fusion. Experimental results on three benchmark HSI data sets indicate that the proposed LMFN is competitive in both classification accuracy and lightweight network architecture design. More importantly, compared with state-of-the-art methods, the LMFN shows better robustness and generalization.
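The decoupling described above can be illustrated with a minimal numpy sketch: a point-wise convolution that mixes only adjacent spectral bands per pixel, and a depth-wise convolution that filters each band's spatial plane independently. This is an illustrative toy, not the LMFN implementation; the kernel sizes and the loop-based convolutions are assumptions for clarity.

```python
import numpy as np

def pointwise_spectral_conv(cube, kernel):
    """1-D convolution along the spectral axis only (no spatial mixing).

    cube:   (bands, H, W) hyperspectral patch
    kernel: (k,) spectral filter shared across all pixels
    """
    bands, H, W = cube.shape
    k = kernel.shape[0]
    out = np.zeros((bands - k + 1, H, W))
    for b in range(bands - k + 1):
        # each output band mixes only k adjacent input bands, per pixel
        out[b] = np.tensordot(kernel, cube[b:b + k], axes=(0, 0))
    return out

def depthwise_spatial_conv(cube, kernel):
    """2-D convolution applied independently to every band.

    cube:   (bands, H, W)
    kernel: (kh, kw) spatial filter; the spectral receptive field is unchanged
    """
    bands, H, W = cube.shape
    kh, kw = kernel.shape
    out = np.zeros((bands, H - kh + 1, W - kw + 1))
    for b in range(bands):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[b, i, j] = np.sum(cube[b, i:i + kh, j:j + kw] * kernel)
    return out

cube = np.random.rand(8, 5, 5)          # toy 8-band 5x5 patch
spec = pointwise_spectral_conv(cube, np.array([0.25, 0.5, 0.25]))
spat = depthwise_spatial_conv(cube, np.ones((3, 3)) / 9.0)
print(spec.shape)  # (6, 5, 5): spectral dim shrinks, spatial untouched
print(spat.shape)  # (8, 3, 3): spatial dims shrink, bands untouched
```

The output shapes make the decoupling visible: the point-wise branch only consumes spectral extent, while the depth-wise branch only consumes spatial extent, which is why the two can be fused level-by-level without redundant parameters.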


2021, Vol. 13 (24), pp. 5082
Author(s): Qianguang Tu, Yun Zhao, Jing Guo, Chunmei Cheng, Liangliang Shi, ...

Six years of hourly aerosol optical thickness (AOT) data retrieved from Himawari-8 were used to investigate the spatial and temporal variations, especially diurnal variations, of aerosols over the China Seas. First, the Himawari-8 AOT data were consistent with AERONET measurements over most of the China Seas, except for some coastal regions. Spatially, AOT over high-latitude seas was generally larger than over low-latitude seas; it was distributed in strips along the coastline and decreased gradually with increasing distance from the coast. AOT underwent a diurnal variation: it decreased from 9:00 a.m. local time, reached a minimum at noon, and then began to increase in the afternoon. The percentage daily departure of AOT over the East China Seas generally ranged within ±20%, increasing sharply in the afternoon; over the northern part of the South China Sea, however, the daily departure reached a maximum of more than 40% at 4:00 p.m. The monthly variation in AOT showed a pronounced annual cycle. Seasonally, the largest AOT was usually observed in spring, with the spatial pattern varying across the other seasons for different seas.
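The percentage daily departure statistic used above is the deviation of each hourly value from the daily mean, expressed as a percentage of that mean. A short numpy sketch with hypothetical hourly AOT values (the numbers are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical hourly AOT samples for one day, 9:00-17:00 local time.
hours = np.arange(9, 18)
aot = np.array([0.42, 0.38, 0.34, 0.31, 0.33, 0.36, 0.40, 0.45, 0.50])

daily_mean = aot.mean()
# Percentage daily departure: distance of each hourly value from the daily mean.
departure_pct = 100.0 * (aot - daily_mean) / daily_mean

for h, d in zip(hours, departure_pct):
    print(f"{h:02d}:00  {d:+.1f}%")
```

By construction the departures average to zero over the day; the midday minimum shows up as the most negative departure and the late-afternoon rise as the largest positive one, matching the diurnal shape described in the abstract.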


2021, Vol. 13 (22), pp. 4591
Author(s): Xiaoteng Zhou, Chun Liu, Akram Akbar, Yun Xue, Yuan Zhou

Urban river networks are characterized by medium and micro scales, complex and rapidly changing water quality, and spatiotemporal incoherence. To monitor water quality accurately, it is necessary to extract suitable features and establish a universal inversion model for key water quality parameters. In this paper, we describe a spectral- and spatial-feature-integrated ensemble learning method for urban river network water quality grading. We propose an in situ sampling method for urban river networks. Factor and correlation analyses were applied to extract the spectral features, and the maximum allowable bandwidth for the feature bands was analyzed. Using kernel canonical correlation analysis (KCCA), we demonstrate that spatial features can improve the accuracy of water quality grading. Based on the spectral and spatial features, ensemble learning models were established for total phosphorus (TP) and ammonia nitrogen (NH3-N), and both were evaluated by fivefold cross-validation. Furthermore, we propose an unmanned aerial vehicle (UAV)-borne multispectral remote sensing workflow for urban river network water quality monitoring, and we tested the model in practice within this workflow. The experiment confirmed that our model improves grading accuracy by 30% compared with machine learning models that use only spectral features. This research extends the application of water quality remote sensing to complex urban river networks.
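The core evaluation idea, fivefold cross-validation of a grader trained on spectral features alone versus spectral plus spatial features, can be sketched with numpy. Everything here is a stand-in: a toy nearest-centroid grader replaces the paper's ensemble model, and the synthetic data merely encodes the assumption that spatial context carries extra class signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_accuracy(X, y, folds=5):
    """Fivefold cross-validated accuracy of a toy nearest-centroid grader."""
    idx = rng.permutation(len(y))
    scores = []
    for f in range(folds):
        test = idx[f::folds]                      # every folds-th index
        train = np.setdiff1d(idx, test)
        classes = np.unique(y)
        centroids = np.stack([X[train][y[train] == c].mean(axis=0)
                              for c in classes])
        # assign each test sample to its nearest class centroid
        d = np.linalg.norm(X[test][:, None] - centroids[None], axis=2)
        pred = classes[d.argmin(axis=1)]
        scores.append((pred == y[test]).mean())
    return float(np.mean(scores))

# Toy data: 2 water-quality grades, 4 spectral bands + 2 spatial features.
n = 200
y = rng.integers(0, 2, n)
spectral = rng.normal(0, 1, (n, 4)) + y[:, None] * 0.5
spatial = rng.normal(0, 1, (n, 2)) + y[:, None] * 1.0  # spatial context informative

acc_spec = nearest_centroid_accuracy(spectral, y)
acc_both = nearest_centroid_accuracy(np.hstack([spectral, spatial]), y)
print(acc_spec, acc_both)
```

Concatenating the spatial columns before training is the simplest form of the spectral-spatial integration the abstract describes; the paper's KCCA-based analysis and ensemble model are more sophisticated than this sketch.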


2021, Vol. 13 (21), pp. 4472
Author(s): Tianyu Zhang, Cuiping Shi, Diling Liao, Liguo Wang

Convolutional neural networks (CNNs) have been widely used in hyperspectral image classification in recent years. Training CNNs relies on a large amount of labeled sample data, yet the number of labeled hyperspectral samples is relatively small. Moreover, for hyperspectral images, fully extracting spectral and spatial feature information is the key to achieving high classification performance. To address these issues, a deep spectral–spatial inverted residuals network (DSSIRNet) is proposed. In this network, a data-block random erasing strategy is introduced to alleviate the problem of limited labeled samples through data augmentation of small spatial blocks. In addition, a deep inverted residuals (DIR) module for spectral–spatial feature extraction is proposed, which locks in the effective features of each layer while avoiding network degradation. Furthermore, a global 3D attention module is proposed that finely extracts spectral and spatial global context information while keeping the number of input and output feature maps the same. Experiments were carried out on four commonly used hyperspectral datasets. Extensive experimental results show that, compared with several state-of-the-art classification methods, the proposed method provides higher classification accuracy for hyperspectral images.
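The random erasing augmentation, zeroing a small spatial block of a labeled sample so the network cannot rely on any single neighborhood, is simple to sketch in numpy. The block size and zero fill value are assumptions; the paper's exact erasing policy may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_block_erase(patch, block=3):
    """Zero out a random block x block spatial region across all bands.

    patch: (bands, H, W) labeled training sample; returns an augmented copy.
    """
    bands, H, W = patch.shape
    top = rng.integers(0, H - block + 1)
    left = rng.integers(0, W - block + 1)
    out = patch.copy()
    out[:, top:top + block, left:left + block] = 0.0
    return out

patch = np.ones((10, 9, 9))             # toy 10-band 9x9 sample
aug = random_block_erase(patch, block=3)
print(int((aug == 0).sum()))            # 10 * 3 * 3 = 90 erased values
```

Applying this repeatedly with different random offsets yields many distinct variants of each labeled sample, which is how the strategy stretches a small labeled set.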


2021, Vol. 18 (6), pp. 172988142110555
Author(s): Jie Wang, Shuxiao Li

Accurately detecting appropriate grasp configurations is the central task for a robot grasping an object. Existing grasp detection methods usually overlook the depth image or treat it merely as a two-dimensional distance image, which makes it difficult to capture the three-dimensional structural characteristics of the target object. In this article, we transform the depth image into a point cloud and propose a two-stage grasp detection method based on candidate grasp detection from the RGB image and spatial feature rescoring from the point cloud. Specifically, we first adapt R3Det, a recently proposed high-performance rotated object detector for aerial images, to the grasp detection task, obtaining candidate grasp boxes and their appearance scores. Point clouds within each candidate grasp box are then normalized and evaluated to obtain point cloud quality scores, which are fused with the established point cloud quantity scoring model to produce spatial scores. Finally, the appearance scores are combined with their corresponding spatial scores to output high-quality grasp detection results. The proposed method effectively fuses three types of grasp scoring modules and is therefore called Score Fusion Grasp Net. In addition, we propose and adopt a top-k grasp metric to effectively reflect the success rate of the algorithm in actual grasp execution. Score Fusion Grasp Net achieves 98.5% image-wise accuracy and 98.1% object-wise accuracy on the Cornell Grasp Dataset, exceeding the performance of state-of-the-art methods. We also used a robotic arm to conduct physical grasp experiments on 15 kinds of household objects and 11 kinds of adversarial objects; the results show that the proposed method maintains a high success rate when facing new objects.
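The two ideas at the end of the abstract, fusing three scores per candidate and judging success by whether any of the k best-scored candidates is feasible, can be sketched as follows. The weighted-sum fusion and all numbers are illustrative assumptions; the paper does not fix this formula here.

```python
import numpy as np

def fused_score(appearance, quality, quantity, w=(0.5, 0.25, 0.25)):
    """Illustrative weighted fusion of the three grasp scores (weights assumed)."""
    return w[0] * appearance + w[1] * quality + w[2] * quantity

def top_k_success(scores, is_graspable, k=3):
    """Top-k grasp metric: success if any of the k highest-scored
    candidates is actually a feasible grasp."""
    order = np.argsort(scores)[::-1][:k]    # indices of the k best candidates
    return bool(is_graspable[order].any())

# Five hypothetical candidate grasp boxes for one object.
appearance   = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
quality      = np.array([0.2, 0.9, 0.8, 0.4, 0.1])
quantity     = np.array([0.3, 0.8, 0.9, 0.5, 0.2])
is_graspable = np.array([False, True, True, False, False])

scores = fused_score(appearance, quality, quantity)
print(top_k_success(scores, is_graspable, k=1))
```

Note how the fusion reorders the candidates: the box with the highest appearance score has poor point-cloud scores and drops below a candidate whose spatial evidence is strong, which is exactly the rescoring effect the method relies on.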

