Retrogressive thaw slumps along the Qinghai-Tibet Engineering Corridor: a comprehensive inventory and their distribution characteristics

2022 ◽  
Author(s):  
Zhuoxuan Xia ◽  
Lingcao Huang ◽  
Chengyan Fan ◽  
Shichao Jia ◽  
Zhanjun Lin ◽  
...  

Abstract. The Qinghai-Tibet Engineering Corridor (QTEC) encompasses the sections of the Qinghai-Tibet Highway and Railway underlain by permafrost. Permafrost along the QTEC is sensitive to climate warming and human disturbance and is degrading at an accelerating rate. Retrogressive thaw slumps (RTSs) are slope failures caused by the thawing of ice-rich permafrost. They typically retreat and expand rapidly, damaging infrastructure and releasing carbon preserved in frozen ground. RTSs are widespread along this critical corridor but remain poorly investigated. To compile the first comprehensive inventory of RTSs, this study uses an iterative, semi-automatic method built on deep learning to delineate thaw slumps in 2019 PlanetScope CubeSat images over a ~54,000 km2 corridor area. The method assesses every image pixel using DeepLabv3+ trained with limited samples, and the deep-learning-identified thaw slumps are then manually inspected based on their geomorphic features and temporal changes. The inventory includes 875 RTSs, of which 474 are clustered in the Beiluhe region and 38 are near roads or railway lines. The dataset is available at https://doi.org/10.1594/PANGAEA.933957 (Xia et al., 2021), with a Chinese version at https://data.tpdc.ac.cn/zh-hans/disallow/50de2d4f-75e1-4bad-b316-6fb91d915a1a/. The RTSs tend to occur on north-facing slopes with gradients of 1.2°–18.1°, at medium elevations ranging from 4511 to 5212 m a.s.l. They preferentially develop on land receiving relatively low annual solar radiation (2900 to 3200 kWh m−2), covered by alpine meadow, and underlain by silt loam. The results provide a fundamental benchmark dataset for quantifying thaw slump changes in this vulnerable region undergoing strong climatic warming and extensive human activity.

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3153 ◽  
Author(s):  
Fei Deng ◽  
Shengliang Pu ◽  
Xuehong Chen ◽  
Yusheng Shi ◽  
Ting Yuan ◽  
...  

Deep learning techniques have boosted the performance of hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have shown performance superior to that of conventional machine learning algorithms. Recently, a novel type of neural network called the capsule network (CapsNet) was presented to improve on the most advanced CNNs. In this paper, we present a modified two-layer CapsNet for HSI classification with limited training samples, inspired by the comparability and simplicity of shallower deep learning models. The presented CapsNet is trained on two real HSI datasets, the PaviaU (PU) and SalinasA datasets, representing complex and simple datasets, respectively, which are used to investigate the robustness and representational ability of each model or classifier. In addition, a comparable paradigm of network architecture design is proposed for the comparison of the CNN and CapsNet. Experiments demonstrate that the CapsNet shows better accuracy and convergence behavior on the complex data than the state-of-the-art CNN. For the CapsNet on the PU dataset, the Kappa coefficient, overall accuracy, and average accuracy are 0.9456, 95.90%, and 96.27%, respectively, compared with corresponding values of 0.9345, 95.11%, and 95.63% for the CNN. Moreover, we observed that the CapsNet has much higher confidence in its predicted probabilities; this finding is analyzed and discussed with probability maps and uncertainty analysis. In terms of the existing literature, the CapsNet provides promising results and explicit merits in comparison with the CNN and two baseline classifiers, random forests (RFs) and support vector machines (SVMs).
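A capsule network reads class scores off the lengths of its output capsule vectors and trains them with a margin loss. A minimal sketch of that loss in plain Python, assuming the commonly used constants m+ = 0.9, m− = 0.1, λ = 0.5 (the abstract does not state the values this paper uses):

```python
def margin_loss(lengths, true_class, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Per-sample capsule-network margin loss.

    lengths: output capsule vector lengths in [0, 1], one per class.
    The true class is penalised for falling below m_pos; every other
    class is penalised (down-weighted by lam) for exceeding m_neg.
    """
    loss = 0.0
    for k, v in enumerate(lengths):
        if k == true_class:
            loss += max(0.0, m_pos - v) ** 2
        else:
            loss += lam * max(0.0, v - m_neg) ** 2
    return loss
```

With a confident, correct prediction (true-class length 0.9, others at 0.1) the loss is zero, which is what lets capsule lengths behave as calibrated class confidences.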


2020 ◽  
Vol 12 (3) ◽  
pp. 536
Author(s):  
Bingqing Niu ◽  
Jinhui Lan ◽  
Yang Shao ◽  
Hui Zhang

The convolutional neural network (CNN) has gradually been applied to hyperspectral image (HSI) classification, but the scarcity of training samples, caused by the difficulty of labelling HSI samples, together with the neglect of the correlation between spatial and spectral information, severely restricts classification accuracy. To address these problems, this paper proposes a dual-branch extraction and classification method for hyperspectral images under limited samples, based on deep learning (DBECM). First, a sample augmentation method based on local and global constraints is designed to augment the limited training samples and balance the number of samples per class. Then spatial and spectral features are extracted simultaneously by a dual-branch spatial-spectral feature extraction method, which improves the utilization of HSI data. Finally, spatial-spectral feature fusion and classification are integrated into a unified network. Experimental results on two typical datasets show that the proposed DBECM is competitive in classification accuracy with other published HSI classification methods, especially on the Indian Pines dataset, where its overall accuracy (OA), average accuracy (AA), and Kappa coefficient are at least 4.7%, 5.7%, and 5% higher than those of existing methods.
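The paper's augmentation is driven by local and global constraints; as a much simpler illustration of the class-balancing part alone, the hypothetical helper below (not the authors' method) oversamples minority classes until every class matches the largest one:

```python
import random

def balance_by_oversampling(samples_by_class, seed=0):
    """Duplicate randomly chosen minority-class samples so that every
    class ends up with as many samples as the largest class."""
    rng = random.Random(seed)
    target = max(len(v) for v in samples_by_class.values())
    balanced = {}
    for cls, samples in samples_by_class.items():
        extra = [rng.choice(samples) for _ in range(target - len(samples))]
        balanced[cls] = list(samples) + extra
    return balanced
```

Balancing class counts before training prevents the network from simply favouring the majority class when labelled HSI pixels are scarce.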


2019 ◽  
Vol 9 (22) ◽  
pp. 4749
Author(s):  
Lingyun Jiang ◽  
Kai Qiao ◽  
Linyuan Wang ◽  
Chi Zhang ◽  
Jian Chen ◽  
...  

Decoding human brain activities, especially reconstructing human visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data impose restrictions on satisfactory reconstruction, especially for deep learning reconstruction methods, which require huge amounts of labelled samples. Unlike such methods, humans can recognize a new image because the human visual system naturally extracts features from any object and compares them. Inspired by this visual mechanism, we introduce comparison into the deep learning method to achieve better visual reconstruction, making full use of each sample and of the relationship within each sample pair by learning to compare. On this basis, we propose a Siamese reconstruction network (SRN) method. Using the SRN, we obtained satisfying results on two fMRI recording datasets: 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from n samples to roughly 2n sample pairs, taking full advantage of the limited quantity of training samples. The SRN learns to draw together sample pairs of the same class and push apart sample pairs of different classes in feature space.
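The "n samples to roughly 2n sample pairs" expansion can be sketched by drawing, for each sample, one same-class and one different-class partner. The helper below is a hypothetical illustration of that pairing idea, not the authors' exact scheme:

```python
import random

def make_pairs(samples, labels, rng=None):
    """Build (sample_a, sample_b, same_class) pairs: for each of the n
    samples, one positive pair (same label) and one negative pair
    (different label), giving about 2n pairs in total."""
    rng = rng or random.Random(0)
    pairs = []
    idx = list(range(len(samples)))
    for i in idx:
        pos = [j for j in idx if j != i and labels[j] == labels[i]]
        if pos:  # positive pair: another sample with the same label
            pairs.append((samples[i], samples[rng.choice(pos)], 1))
        neg = [j for j in idx if labels[j] != labels[i]]
        if neg:  # negative pair: a sample with a different label
            pairs.append((samples[i], samples[rng.choice(neg)], 0))
    return pairs
```

A Siamese network trained on such pairs then pulls same-class pairs together and pushes different-class pairs apart in feature space, as the abstract describes.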


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1613
Author(s):  
Man Li ◽  
Feng Li ◽  
Jiahui Pan ◽  
Dengyong Zhang ◽  
Suna Zhao ◽  
...  

In addition to helping develop products that aid the disabled, brain–computer interface (BCI) technology can also become a modality of entertainment for all people. However, most BCI games cannot be widely promoted because of poor control performance or because they easily cause fatigue. In this paper, we propose a P300 brain–computer interface game (MindGomoku) to explore a feasible and natural way to play games using electroencephalogram (EEG) signals in a practical environment. The novelty of this research lies in integrating the characteristics of the game rules and the BCI system when designing the BCI game and its paradigm. Moreover, a simplified Bayesian convolutional neural network (SBCNN) algorithm is introduced to achieve high accuracy with limited training samples. To verify the reliability of the proposed algorithm and system control, 10 subjects participated in two online control experiments. All subjects successfully completed the game control, with an average accuracy of 90.7%, and played MindGomoku for an average of more than 11 min. These findings demonstrate the stability and effectiveness of the proposed system, which not only provides a form of entertainment for users, particularly the disabled, but also opens more possibilities for games.


Algorithms ◽  
2018 ◽  
Vol 11 (8) ◽  
pp. 112 ◽  
Author(s):  
Ruhua Wang ◽  
Ling Li ◽  
Jun Li

In this paper, damage detection/identification for a seven-storey steel structure is investigated using vibration signals and deep learning techniques. Vibration characteristics, such as natural frequencies and mode shapes, are captured and used as input to a deep learning network, while the output vector represents the structural damage and its locations. A deep auto-encoder with a sparsity constraint performs feature extraction for the different types of signals, and another deep auto-encoder learns the relationships between those signals for the final regression. The existing SAF model, proposed in a recent study of the same problem, processed all signals in one serial auto-encoder model. Such models have two difficulties: (1) the natural frequencies and mode shapes are on different magnitude scales, so it is not sensible to normalize them to a single scale when building models from training samples; and (2) some frequencies and mode shapes may be unrelated to each other, so reducing their dimensionality jointly is inappropriate. To tackle these problems for multi-scale datasets in structural health monitoring (SHM), a novel parallel auto-encoder framework (Para-AF) is proposed in this paper. It processes the frequency signals and mode shapes separately for feature selection via dimension reduction, and then combines these features in relationship learning for regression. Furthermore, a sparsity constraint is introduced in the model-reduction stage to improve performance. Two experiments were conducted for performance evaluation, and the results show significant advantages of the proposed model over existing approaches.
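Difficulty (1) above is the motivation for normalizing each signal type on its own scale before feeding it to its branch of the parallel framework. A small sketch of that idea (the variable names and values are illustrative, not taken from the paper):

```python
def minmax_normalise(values):
    """Scale a list of values to [0, 1] independently of any other
    signal type; constant signals map to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Natural frequencies (Hz) and mode-shape coefficients live on very
# different magnitude scales, so each branch is normalised separately
# before entering its own auto-encoder (values are made up).
freqs = [2.1, 6.5, 11.3, 15.8]
modes = [0.02, -0.15, 0.33, -0.48]
freq_in, mode_in = minmax_normalise(freqs), minmax_normalise(modes)
```

Normalising per branch keeps the small mode-shape coefficients from being swamped by the numerically larger frequencies, which is exactly the failure mode the serial SAF model suffers from.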


2018 ◽  
Vol 10 (11) ◽  
pp. 1827 ◽  
Author(s):  
Ahram Song ◽  
Jaewan Choi ◽  
Youkyung Han ◽  
Yongil Kim

Hyperspectral change detection (CD) can be performed effectively using deep-learning networks. Although these approaches require qualified training samples, ground-truth data are difficult to obtain in the real world, and preserving spatial information during training is difficult due to structural limitations. To solve these problems, our study proposes a novel CD method for hyperspectral images (HSIs), comprising sample generation and a deep-learning network called the recurrent three-dimensional (3D) fully convolutional network (Re3FCN), which merges the advantages of a 3D fully convolutional network (FCN) and a convolutional long short-term memory (ConvLSTM). Principal component analysis (PCA) and the spectral correlation angle (SCA) are used to generate training samples with high probabilities of being changed or unchanged; this strategy supports training with fewer but representative samples. The Re3FCN mainly comprises spectral–spatial and temporal modules: a spectral–spatial module with 3D convolutional layers extracts spectral and spatial features from the HSIs simultaneously, while a temporal module with ConvLSTM records and analyzes the multi-temporal HSI change information. The study first proposes a simple and effective method to generate samples for network training, which can be applied effectively to cases with no training samples. The Re3FCN performs end-to-end detection of binary and multiple changes, and can receive multi-temporal HSIs directly as input without separately learning the characteristics of multiple changes. Finally, the network extracts joint spectral–spatial–temporal features and preserves the spatial structure during learning through its fully convolutional structure. This study is the first to use a 3D FCN and a ConvLSTM for remote-sensing CD. To demonstrate the effectiveness of the proposed CD method, we performed binary and multi-class CD experiments. The results reveal that the Re3FCN outperforms conventional methods such as change vector analysis, iteratively reweighted multivariate alteration detection, PCA-SCA, FCN, and the combination of 2D convolutional layers and fully connected LSTM.
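The spectral correlation angle used for sample generation compares the spectra of a pixel at two dates. The sketch below uses one common formulation, arccos((r + 1) / 2) with r the Pearson correlation between the two spectra; the paper's exact variant may differ:

```python
import math

def spectral_correlation_angle(x, y):
    """Spectral correlation angle between two spectra: 0 for perfectly
    correlated spectra, pi/2 for perfectly anti-correlated ones
    (formulation: arccos((r + 1) / 2), r = Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    r = cov / (sx * sy)
    return math.acos(max(-1.0, min(1.0, (r + 1) / 2)))
```

Pixel pairs with a very small angle are likely unchanged and pairs with a large angle likely changed, which is how high-confidence training samples can be selected without ground truth.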


Author(s):  
P. Zhong ◽  
Z. Q. Gong ◽  
C. Schönlieb

In recent years, research in remote sensing has demonstrated that deep architectures with multiple layers can extract abstract, invariant features for better hyperspectral image classification. Since real-world hyperspectral image classification tasks usually cannot provide enough training samples for a supervised deep model such as a convolutional neural network (CNN), this work investigates deep belief networks (DBNs), which allow unsupervised training. A DBN trained on limited samples usually has many “dead” (never responding) or “over-tolerant” (always responding) latent factors (neurons), which reduce the DBN's descriptive ability and ultimately its hyperspectral classification performance. This work proposes a new diversified DBN by introducing a diversity-promoting prior over the latent factors during the DBN pre-training and fine-tuning procedures. The prior encourages the latent factors to be uncorrelated, so that each factor models unique information and together they capture a large proportion of the information, increasing the descriptive ability and classification performance of the diversified DBN. The proposed method was evaluated on well-known real-world hyperspectral image datasets. The experiments demonstrate that diversified DBNs obtain much better results than the original DBNs, and comparable or even better performance than other recent hyperspectral image classification methods.
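One simple way to quantify how decorrelated a set of latent factors is takes the mean squared pairwise cosine similarity of their weight vectors; the paper's actual diversity-promoting prior is more involved, so treat this as an illustrative stand-in:

```python
import math

def diversity_penalty(weight_vectors):
    """Mean squared pairwise cosine similarity between latent-factor
    weight vectors: 0 for mutually orthogonal (fully diverse) factors,
    1 for identical (fully redundant) ones."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    n = len(weight_vectors)
    sims = [cos(weight_vectors[i], weight_vectors[j]) ** 2
            for i in range(n) for j in range(i + 1, n)]
    return sum(sims) / len(sims)
```

Adding such a penalty to the training objective discourages "dead" and "over-tolerant" factors from collapsing onto the same directions, which is the intuition behind the diversified DBN.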


Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 128
Author(s):  
Zhenwei Guan ◽  
Feng Min ◽  
Wei He ◽  
Wenhua Fang ◽  
Tao Lu

Forest fire detection from videos or images is vital to forest firefighting. Most deep-learning-based approaches rely only on minimizing an image-level loss, which ignores the differing content of fire scenes; in fact, images with complex content have higher entropy. From this perspective, we propose a novel feature-entropy-guided neural network for forest fire detection, which balances the content complexity of different training samples: a larger weight is given to the features of a high-entropy sample when calculating the classification loss. In addition, we propose a color attention neural network, consisting mainly of repeated multiple-blocks of color-attention modules (MCM); each MCM module adequately extracts the color features of fire. Experimental results show that the proposed method outperforms state-of-the-art methods.
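The entropy-guided weighting can be sketched as computing the Shannon entropy of a sample's intensity histogram and scaling its loss weight accordingly; the bin count and the linear weighting rule below are assumptions for illustration, not the paper's exact scheme:

```python
import math
from collections import Counter

def shannon_entropy(pixels, bins=16):
    """Shannon entropy (bits) of a pixel-intensity histogram; pixels are
    assumed to lie in [0, 1]. Complex content spreads mass over more
    bins and so yields higher entropy."""
    counts = Counter(min(int(p * bins), bins - 1) for p in pixels)
    total = len(pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_weight(pixels, base=1.0, scale=0.1):
    """Per-sample loss weight that grows linearly with content entropy."""
    return base + scale * shannon_entropy(pixels)
```

A flat, featureless frame gets entropy 0 and the base weight, while a cluttered scene spanning all bins gets up to log2(bins) extra emphasis in the classification loss.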

