adaptive network
Recently Published Documents

TOTAL DOCUMENTS: 847 (five years: 218)
H-INDEX: 47 (five years: 7)

Author(s): Zonghao Yuan, Zengqiang Ma, Li Xin, Dayong Gao, Fu Zhipeng

Abstract Fault diagnosis of rolling bearings is key to maintaining and repairing modern rotating machinery. Rolling bearings usually work in non-stationary conditions with time-varying loads and speeds. Existing diagnosis methods based on vibration signals alone cannot adapt to rotational speed, and their accuracy drops markedly when the load changes. A method is put forward that fuses multi-modal sensor signals to incorporate speed information. Firstly, features are extracted from raw vibration signals and instantaneous rotational speed signals and fused by 1D-CNN-based networks. Secondly, to improve the robustness of the model when the load changes, a majority voting mechanism is proposed for the diagnosis stage. Lastly, multiple variable-speed samples of four bearings under three loads are obtained to evaluate the proposed method by analyzing the loss function, accuracy and F1 score across the different variable-speed samples. It is empirically found that the proposed method achieves higher diagnostic accuracy and better speed adaptability than algorithms based on vibration signals alone. Moreover, ablation studies are conducted to investigate the inner mechanism of the proposed speed-adaptive network.
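As a rough illustration of the fusion-and-voting idea described above, the following PyTorch sketch pairs a vibration branch with an instantaneous-speed branch, concatenates their features, and majority-votes over per-segment predictions. The layer sizes, segment length, class count and names are illustrative assumptions, not the authors' published architecture.

```python
# Rough sketch only: two 1D-CNN branches (vibration + instantaneous speed),
# feature-level fusion by concatenation, and majority voting over per-segment
# predictions. Layer sizes, segment length (1024) and class count (4) are
# assumptions, not the authors' published architecture.
import torch
import torch.nn as nn


class Branch1DCNN(nn.Module):
    """Maps one raw signal segment [B, 1, L] to a feature vector [B, feat_dim]."""
    def __init__(self, in_channels: int = 1, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class FusionDiagnosisNet(nn.Module):
    """Fuses vibration and speed features, then classifies the fault type."""
    def __init__(self, num_classes: int = 4, feat_dim: int = 64):
        super().__init__()
        self.vib_branch = Branch1DCNN(feat_dim=feat_dim)
        self.speed_branch = Branch1DCNN(feat_dim=feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, vib, speed):
        fused = torch.cat([self.vib_branch(vib), self.speed_branch(speed)], dim=1)
        return self.classifier(fused)


def majority_vote(segment_logits: torch.Tensor) -> int:
    """Majority vote over the per-segment predictions of one recording [S, C]."""
    preds = segment_logits.argmax(dim=1)
    return int(torch.mode(preds).values)


if __name__ == "__main__":
    model = FusionDiagnosisNet()
    vib = torch.randn(8, 1, 1024)    # 8 vibration segments from one recording
    speed = torch.randn(8, 1, 1024)  # matching instantaneous-speed segments
    print("voted class:", majority_vote(model(vib, speed)))
```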


2021, Vol. 30 (4), pp. 483-512
Author(s): Jan Treur

In this paper, a self-modeling mental network model is presented for cognitive analysis and support processes for a human. These analysis and support processes are modeled by internal mental models, which the network uses at the base level to perform them. To obtain adaptation of these internal mental models, a first-order self-model is included in the network model, and to obtain control of this adaptation, a second-order self-model is included as well, which makes the network a second-order self-modeling network model. The adaptive network model is illustrated for a number of realistic scenarios of a supported car driver.
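As a loose, toy-level illustration of the layering described above (base states, a first-order self-model representing an adaptive connection weight, and a second-order self-model controlling the adaptation speed), the following Python sketch simulates such a three-level structure. The equations, constants and variable names are invented for illustration and are not Treur's published model.

```python
# Toy illustration only: a base connection weight W is itself represented as a
# state (first-order self-model, Hebbian-like adaptation), and its adaptation
# speed H is represented as a further state (second-order self-model). All
# equations and constants are invented for this sketch.

def simulate(steps: int = 200, dt: float = 0.5):
    x, y = 0.0, 0.0   # base-level states (e.g., stimulus representation, response)
    W = 0.2           # first-order self-model state: weight of the connection x -> y
    H = 0.1           # second-order self-model state: adaptation speed of W
    for t in range(steps):
        stim = 1.0 if (t // 50) % 2 == 0 else 0.0     # alternating external stimulus
        x += dt * (stim - x)                          # x tracks the stimulus
        y += dt * (W * x - y)                         # y is driven via the adaptive weight W
        W += dt * H * (x * y * (1.0 - W) - 0.05 * W)  # Hebbian-like adaptation of W
        H += dt * 0.02 * (x * y - H)                  # control level: activity modulates adaptation speed
    return x, y, W, H

if __name__ == "__main__":
    print("final x, y, W, H:", simulate())
```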


2021
Author(s): Lei Zhang, Hongxia Wang, Peisong He, Sani M. Abdullahi, Bin Li

Abstract Steganalysis aims to detect covert communication established via steganography. In recent years, numerous deep learning-based image steganalysis methods with high performance have been proposed. However, these methods tend to suffer from distinct performance degradation when the cover images in the training and test sets are quite different, a problem known as cover-source mismatch. To address this limitation, a feature-guided deep subdomain adaptation network is proposed in this paper. Initially, the predictions of the pretrained model are used as pseudo labels to divide the unlabeled samples of the target domain into different subdomains, and the distributions of the relevant subdomains are aligned by subdomain adaptation. Afterwards, since the steganalysis model may assign incorrect predictions to samples in the target domain, guiding features are integrated to make the division of subdomains more precise. The experimental results show that the proposed network is significantly better than three other networks, the Steganalysis Residual Network (SRNet), the deep adaptive network J-Net, and the Deep Subdomain Adaptation Network (DSAN), when detecting three spatial steganographic algorithms across a wide variety of datasets and payloads. In particular, compared with SRNet, the average accuracy of our method is higher by 5.4% at 0.4 bpp and 8.5% at 0.2 bpp in the case of dataset mismatch.
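A rough sketch of the pseudo-label-driven subdomain alignment step described above is given below. Class-mean matching is used here as a simplified stand-in for the kernel-based alignment of DSAN-style methods; the confidence threshold, function signature and shapes are assumptions for illustration only.

```python
# Simplified sketch: pseudo-label the target samples with the current model and
# pull same-class source/target features together. Class-mean matching is used
# here instead of the kernel-based LMMD loss of DSAN-style methods; the
# confidence threshold and function signature are illustrative assumptions.
import torch

def subdomain_alignment_loss(src_feat, src_labels, tgt_feat, tgt_logits,
                             num_classes, conf_threshold=0.9):
    probs = tgt_logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)        # pseudo labels for the unlabeled target batch
    keep = conf > conf_threshold           # only trust confident pseudo labels
    loss = src_feat.new_zeros(())
    used = 0
    for c in range(num_classes):
        s_mask = src_labels == c
        t_mask = keep & (pseudo == c)
        if s_mask.any() and t_mask.any():
            # align the per-class (subdomain) feature means of source and target
            diff = src_feat[s_mask].mean(dim=0) - tgt_feat[t_mask].mean(dim=0)
            loss = loss + diff.pow(2).sum()
            used += 1
    return loss / max(used, 1)

# In training, this loss would be added (with a weight) to the supervised
# cross-entropy computed on the labeled source-domain images.
```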


2021
Author(s): Jie Hao, William Zhu

Abstract The differentiable architecture search (DARTS) approach has made great progress in reducing the computational cost of neural architecture search. DARTS tries to discover an optimal architecture module, called a cell, from a predefined super network. However, the obtained cell is then simply stacked repeatedly to build the target network, failing to extract the layered features hidden at different network depths, so the target network cannot meet the requirements of practical applications. To address this problem, we propose an effective approach called Layered Feature Representation for Differentiable Architecture Search (LFR-DARTS). Specifically, we iteratively search for multiple cells with different architectures from shallow to deep layers of the super network. In each iteration, we optimize the architecture of a cell by gradient descent and prune weak connections from this cell. After obtaining the optimal architecture of this cell, we deepen the super network by increasing the number of copies of this cell, so as to create an adaptive network context in which to search for a deeper-adaptive cell in the next iteration. Thus, LFR-DARTS can discover the architecture of each cell at a specific, adaptive network depth, which embeds the ability of layered feature representation into each cell to sufficiently extract layered features at different depths. Extensive experiments show that our algorithm achieves advanced performance on the CIFAR10, fashionMNIST and ImageNet datasets at low search cost.
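The iterative, layer-wise search loop can be summarized as in the skeleton below: search one cell in the current super network, prune its weak connections, then deepen the super network with copies of that cell before searching the next, deeper cell. The class and function names are placeholders and the search/prune internals are omitted; this is not the authors' implementation.

```python
# Skeleton of the iterative, layer-wise search loop: the internals of
# search_and_prune (DARTS-style gradient-based architecture optimisation
# followed by pruning the weakest connections) are omitted; names and
# structure are placeholders, not the authors' implementation.
from typing import List


class Cell:
    """A searched cell; `ops` would hold the operations retained after pruning."""
    def __init__(self, depth: int, ops=None):
        self.depth = depth
        self.ops = ops if ops is not None else []


def search_and_prune(context: List[Cell], depth: int) -> Cell:
    # placeholder for gradient-based architecture optimisation and pruning
    return Cell(depth, ops=[f"op_at_depth_{depth}"])


def lfr_darts(num_iterations: int = 3, copies_per_iteration: int = 2) -> List[Cell]:
    super_net: List[Cell] = []   # the network context, deepened after each iteration
    searched: List[Cell] = []
    for depth in range(num_iterations):
        cell = search_and_prune(super_net, depth)   # search a cell at the current depth
        searched.append(cell)
        # deepen the super network with copies of the found cell, creating the
        # adaptive context for the next, deeper cell search
        super_net.extend(Cell(depth, cell.ops) for _ in range(copies_per_iteration))
    return searched   # one cell architecture per depth, from shallow to deep


if __name__ == "__main__":
    print([cell.ops for cell in lfr_darts()])
```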


2021, pp. 179-196
Author(s): Morteza Saadatmorad, Ramazan-Ali Jafari-Talookolaei, Mohammad-Hadi Pashaei, Samir Khatir, Magd Abdel Wahab
