Automatic Extraction of Layover From InSAR Imagery Based on Multilayer Feature Fusion Attention Mechanism

Author(s):  
Xingmin Cai ◽  
Lifu Chen ◽  
Jin Xing ◽  
Xuemin Xing ◽  
Ru Luo ◽  
...  

2021 ◽  
Vol 13 (14) ◽  
pp. 2794
Author(s):  
Shuhao Ran ◽  
Xianjun Gao ◽  
Yuanwei Yang ◽  
Shaohua Li ◽  
Guangbin Zhang ◽  
...  

Deep learning approaches have been widely used in automatic building extraction tasks and have made great progress in recent years. However, missed detections and false detections caused by spectral confusion remain a great challenge. Existing fully convolutional networks (FCNs) cannot effectively distinguish whether feature differences come from within a single building or between a building and its adjacent non-building objects. To overcome these limitations, a building multi-feature fusion refined network (BMFR-Net) is presented in this paper to extract buildings accurately and completely. BMFR-Net is based on an encoder-decoder structure and mainly consists of two parts: the continuous atrous convolution pyramid (CACP) module and the multiscale output fusion constraint (MOFC) structure. The CACP module is positioned at the end of the contracting path; by using parallel continuous small-scale atrous convolution, it effectively minimizes the loss of useful information during multiscale feature extraction and fusion. To improve the ability to aggregate semantic information from the context, the MOFC structure produces a predictive output at each stage of the expanding path and integrates the results into the network. Furthermore, a multilevel joint weighted loss function effectively updates parameters far from the output layer, enhancing the network's capacity to learn low-level abstract features. The experimental results demonstrate that the proposed BMFR-Net outperforms five other state-of-the-art approaches in both visual interpretation and quantitative evaluation.
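
As a rough illustration of the CACP idea described above, the sketch below (PyTorch) runs several parallel small-kernel atrous convolutions with increasing dilation rates over the same feature map and fuses their outputs with a 1x1 convolution. The channel counts and dilation rates are illustrative assumptions, not the authors' settings.

```python
# A minimal sketch of a CACP-style block: parallel 3x3 atrous branches
# with increasing dilation rates, concatenated and fused by a 1x1 conv.
import torch
import torch.nn as nn

class CACPSketch(nn.Module):
    def __init__(self, in_ch=512, branch_ch=128, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # 1x1 convolution fuses the concatenated multiscale features
        self.fuse = nn.Conv2d(branch_ch * len(rates), in_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 512, 32, 32)
print(CACPSketch()(x).shape)  # torch.Size([1, 512, 32, 32])
```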


2021 ◽  
Vol 13 (10) ◽  
pp. 1950
Author(s):  
Cuiping Shi ◽  
Xin Zhao ◽  
Liguo Wang

In recent years, with the rapid development of computer vision, increasing attention has been paid to remote sensing image scene classification. To improve classification performance, many studies have increased the depth of convolutional neural networks (CNNs) and expanded the width of the network to extract deeper features, thereby increasing model complexity. To address this problem, we propose a lightweight convolutional neural network based on attention-oriented multi-branch feature fusion (AMB-CNN) for remote sensing image scene classification. First, we propose two convolution combination modules for feature extraction, through which deep image features can be fully extracted by the cooperation of multiple convolutions. Then, feature weights are calculated, and the extracted deep features are sent to the attention mechanism for further feature extraction. Next, all of the extracted features are fused across multiple branches. Finally, depthwise separable convolution and asymmetric convolution are used to greatly reduce the number of parameters. The experimental results show that, compared with some state-of-the-art methods, the proposed method retains a clear advantage in classification accuracy with very few parameters.
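
The two parameter-saving operations named at the end of the abstract can be sketched generically in PyTorch as follows; these are standard depthwise separable and asymmetric convolutions, not the actual AMB-CNN blocks.

```python
# Generic parameter-saving convolutions: depthwise separable (depthwise
# then 1x1 pointwise) and asymmetric (kxk factored into 1xk and kx1).
import torch
import torch.nn as nn

def depthwise_separable(in_ch, out_ch):
    # depthwise conv (one filter per channel) followed by a 1x1 pointwise conv
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def asymmetric(in_ch, out_ch, k=3):
    # a kxk convolution factored into 1xk and kx1 convolutions
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, k // 2), bias=False),
        nn.Conv2d(out_ch, out_ch, (k, 1), padding=(k // 2, 0), bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

x = torch.randn(1, 64, 56, 56)
print(depthwise_separable(64, 128)(x).shape)  # torch.Size([1, 128, 56, 56])
print(asymmetric(64, 128)(x).shape)           # torch.Size([1, 128, 56, 56])
```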


2021 ◽  
Author(s):  
Zhangfang Hu ◽  
Yanling Xia ◽  
Yuan Luo ◽  
Lan Wang

2020 ◽  
Vol 133 ◽  
pp. 327-333 ◽  
Author(s):  
Heng Zhou ◽  
Zhijun Fang ◽  
Yongbin Gao ◽  
Bo Huang ◽  
Cengsi Zhong ◽  
...  

2021 ◽  
Author(s):  
Tingting Feng ◽  
Liang Guo ◽  
Hongli Gao ◽  
Tao Chen ◽  
Yaoxiang Yu ◽  
...  

Abstract: To accurately monitor the tool wear process, it is usually necessary to collect a variety of sensor signals during cutting. Different sensor signals provide complementary information in the feature space. In addition, the monitoring signals are time series, which also contain a wealth of tool degradation information in the time dimension. However, how to fuse multi-sensor information across the time and space dimensions is a key unsolved issue. This paper proposes a new multi-feature fusion method driven by a time-space attention mechanism to realize tool wear monitoring. First, numerous features are constructed from the different sensor signals and screened preliminarily. Then, a new feature fusion model with a time-space attention mechanism is built to fuse features in the time and space dimensions. Finally, a tool degradation model is established from the predicted wear, and the tool's remaining useful life is predicted by a particle filter. The effectiveness of the method is verified by a tool life-cycle wear experiment. Comparison with other feature fusion models demonstrates that the proposed method monitors tool wear more accurately and has better stability.
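
A minimal sketch of such time-space attention fusion, assuming the input is a (batch, time, feature) tensor of multi-sensor features: separate attention weights are learned over the time steps and over the feature channels before the weighted features are pooled. The layer sizes and the exact weighting scheme are assumptions, since the abstract does not specify the architecture.

```python
# Hedged sketch: attention over the time axis and the feature (space) axis,
# applied jointly before pooling over time to produce a fused feature vector.
import torch
import torch.nn as nn

class TimeSpaceAttentionFusion(nn.Module):
    def __init__(self, n_features, n_steps):
        super().__init__()
        self.time_score = nn.Linear(n_features, 1)  # one score per time step
        self.feat_score = nn.Linear(n_steps, 1)     # one score per feature

    def forward(self, x):                           # x: (B, T, F)
        t_w = torch.softmax(self.time_score(x), dim=1)                  # (B, T, 1)
        f_w = torch.softmax(self.feat_score(x.transpose(1, 2)), dim=1)  # (B, F, 1)
        weighted = x * t_w * f_w.transpose(1, 2)    # apply both attentions
        return weighted.sum(dim=1)                  # fused vector (B, F)

fused = TimeSpaceAttentionFusion(n_features=16, n_steps=50)(torch.randn(4, 50, 16))
print(fused.shape)  # torch.Size([4, 16])
```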


2021 ◽  
Vol 2035 (1) ◽  
pp. 012023
Author(s):  
Yuhao You ◽  
Houjin Chen ◽  
Yanfeng Li ◽  
Minjun Wang ◽  
Jinlei Zhu

Author(s):  
Zhenjian Yang ◽  
Jiamei Shang ◽  
Zhongwei Zhang ◽  
Yan Zhang ◽  
Shudong Liu

Traditional image dehazing algorithms, whether based on prior knowledge or on deep learning, rely on the atmospheric scattering model and tend to cause color distortion and incomplete dehazing. To solve these problems, an end-to-end image dehazing algorithm based on a residual attention mechanism is proposed in this paper. The network includes four modules: encoder, multi-scale feature extraction, feature fusion, and decoder. The encoder module encodes the input hazy image into a feature map, which facilitates subsequent feature extraction and reduces memory consumption. The multi-scale feature extraction module includes a residual smoothed dilated convolution module, a residual block, and efficient channel attention, which together expand the receptive field and extract features at different scales through filtering and weighting. The feature fusion module with efficient channel attention adjusts channel weights dynamically, acquires rich context information, and suppresses redundant information, enhancing the network's ability to extract the haze density image. Finally, the decoder module maps the fused features nonlinearly to obtain the haze density image and then restores the haze-free image. Qualitative and quantitative tests on the SOTS test set and natural haze images show good objective and subjective evaluation results. The algorithm effectively alleviates color distortion and incomplete dehazing.
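
Efficient channel attention, which the abstract uses in both the feature extraction and fusion modules, can be sketched as below: global average pooling followed by a cheap 1-D convolution across channels yields per-channel weights. The kernel size is an illustrative choice, and this is the generic ECA idea rather than the paper's exact module.

```python
# Sketch of efficient channel attention (ECA): pool, 1-D conv across
# channels, sigmoid gate, then reweight the input channels.
import torch
import torch.nn as nn

class ECASketch(nn.Module):
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                          # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                     # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # 1-D conv across channels
        return x * torch.sigmoid(y)[:, :, None, None]  # reweight channels

x = torch.randn(2, 64, 32, 32)
print(ECASketch()(x).shape)  # torch.Size([2, 64, 32, 32])
```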


2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Hai Tan ◽  
Hao Xu ◽  
Jiguang Dai

Automatic extraction of road information from remote sensing images is widely used in many fields, such as urban planning and automatic navigation. However, due to interference from noise and occlusion, existing road extraction methods can easily produce discontinuous roads. To solve this problem, a road extraction network with bidirectional spatial information reasoning (BSIRNet) is proposed, in which neighbourhood feature fusion captures spatial context dependencies and expands the receptive field, and an information processing unit with a recurrent neural network structure captures channel dependencies. BSIRNet enhances the connectivity of road information through spatial information reasoning. Using the public Massachusetts road dataset and the Wuhan University road dataset, the superiority of the proposed method is verified by comparing its results with those of other models.
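
One way to picture recurrent reasoning over a feature map, offered as a generic illustration rather than the authors' BSIRNet unit: treat each column of pixels as a sequence and run a bidirectional GRU along it, so information can propagate across gaps such as occluded road segments.

```python
# Hedged sketch: rows of each column become a sequence for a bidirectional
# GRU, letting features flow vertically across the map.
import torch
import torch.nn as nn

class ColumnReasoningSketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gru = nn.GRU(channels, channels // 2, bidirectional=True,
                          batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        # each pixel of each column becomes one step in a vertical sequence
        seq = x.permute(0, 3, 2, 1).reshape(b * w, h, c)    # (B*W, H, C)
        out, _ = self.gru(seq)                              # (B*W, H, C)
        return out.reshape(b, w, h, c).permute(0, 3, 2, 1)  # back to (B, C, H, W)

x = torch.randn(1, 64, 16, 16)
print(ColumnReasoningSketch(64)(x).shape)  # torch.Size([1, 64, 16, 16])
```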


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Gaihua Wang ◽  
Qianyu Zhai

Abstract: Contextual information is a key factor affecting semantic segmentation. Recently, many methods have tried to use the self-attention mechanism to capture more contextual information. However, these self-attention methods require huge amounts of computation. To solve this problem, a novel self-attention network, called FFANet, is designed to capture contextual information efficiently, reducing the amount of calculation through strip pooling and linear layers. It proposes a feature fusion (FF) module to calculate the affinity matrix, which captures the relationships between pixels. Multiplying the affinity matrix with the feature map then selectively increases the weight of the regions of interest. Extensive experiments on public datasets (PASCAL VOC2012, Cityscapes) and a remote sensing dataset (DLRSD) achieved mean IoU scores of 74.5%, 70.3%, and 63.9%, respectively. Compared with current typical algorithms, the proposed method achieves excellent performance.
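
The computational saving can be illustrated as follows: instead of a full self-attention affinity over all H×W pixel pairs, strip pooling first compresses the map to H + W strip descriptors, and linear layers build the affinity against those strips, reducing the cost from O((HW)²) to O(HW·(H+W)). The projection layout below is an assumption for illustration, not the paper's exact FF module.

```python
# Sketch of strip-pooled affinity attention: queries come from all pixels,
# keys/values from the H + W pooled strips, giving a cheap affinity matrix.
import torch
import torch.nn as nn

class StripAffinitySketch(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.query = nn.Linear(c, c)
        self.key = nn.Linear(c, c)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        # strip pooling: average over rows and over columns
        strips = torch.cat([x.mean(dim=3), x.mean(dim=2)], dim=2)  # (B, C, H+W)
        q = self.query(x.flatten(2).transpose(1, 2))               # (B, H*W, C)
        k = self.key(strips.transpose(1, 2))                       # (B, H+W, C)
        affinity = torch.softmax(q @ k.transpose(1, 2), dim=-1)    # (B, H*W, H+W)
        context = affinity @ strips.transpose(1, 2)                # (B, H*W, C)
        return x + context.transpose(1, 2).reshape(b, c, h, w)     # residual fusion

x = torch.randn(1, 64, 32, 32)
print(StripAffinitySketch(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```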

