AdjMix: simplifying and attending graph convolutional networks

Author(s):  
Xun Liu ◽  
Fangyuan Lei ◽  
Guoqing Xia ◽  
Yikuan Zhang ◽  
Wenguo Wei

Abstract: Simple graph convolution (SGC) achieves classification accuracy competitive with graph convolutional networks (GCNs) on various tasks while being computationally more efficient and fitting fewer parameters. However, SGC with higher powers suffers from over-smoothing, which keeps the model narrow and limits its ability to learn graph representations. Here, we propose AdjMix, a simple and attentional graph convolutional model that scales to wider structures and captures more node feature information by simultaneously mixing the adjacency matrices of different powers. We point out that the key factor behind over-smoothing is the mismatched weighting of the adjacency matrices, and we design AdjMix to address the over-smoothing of SGC and GCNs by adjusting these weights to matching values. Experiments on citation networks including Pubmed, Citeseer, and Cora show that AdjMix improves over SGC by 2.4%, 2.2%, and 3.2%, respectively, while matching it in parameters and complexity, and that it outperforms other baselines in classification accuracy, parameter count, and complexity.
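The core idea, as the abstract describes it, is to replace SGC's single K-th power of the normalized adjacency matrix with a weighted mixture of several powers. The following is a minimal NumPy sketch of that mixing step under assumed notation: S is the symmetrically normalized adjacency matrix used by SGC/GCN, and the alpha weights (fixed here, learnable or attention-derived in the paper) are illustrative, not taken from the paper.

```python
# A minimal sketch of the adjacency-mixing idea behind AdjMix (assumptions noted above).
import numpy as np

def normalized_adjacency(A):
    """S = D^{-1/2} (A + I) D^{-1/2}, the propagation matrix used by SGC/GCN."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def adjmix_features(A, X, alphas):
    """Mix several powers of S instead of using only the K-th power as SGC does."""
    S = normalized_adjacency(A)
    out = np.zeros_like(X, dtype=float)
    S_k = np.eye(A.shape[0])          # S^0
    for alpha in alphas:              # alphas[k] weights the term S^k X
        out += alpha * (S_k @ X)
        S_k = S_k @ S
    return out                        # followed by a single linear classifier, as in SGC

# Toy usage: 4 nodes, 3 features, mixing powers 0..2 with uniform weights.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)
Z = adjmix_features(A, X, alphas=[1 / 3, 1 / 3, 1 / 3])
```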

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2929 ◽  
Author(s):  
Yuanyuan Wang ◽  
Chao Wang ◽  
Hong Zhang

With the capability to automatically learn discriminative features, deep learning has achieved great success on natural images but has rarely been explored for ship classification in high-resolution SAR images because small datasets create a training bottleneck. In this paper, convolutional neural networks (CNNs) are applied to ship classification using SAR images with small datasets. First, ship chips are constructed from high-resolution SAR images and split into training and validation datasets. Second, a ship classification model is built on very deep convolutional networks (VGG). The VGG network is pretrained on ImageNet and then fine-tuned to train our model. Six scenes of COSMO-SkyMed images are used to evaluate the proposed model in terms of classification accuracy. The experimental results reveal that (1) the proposed ship classification model trained by fine-tuning achieves more than 95% average classification accuracy, even under 5-fold cross-validation; and (2) compared with other models, the VGG16-based ship classification model achieves at least 2% higher classification accuracy. These results demonstrate the effectiveness of the proposed method.
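For readers unfamiliar with the transfer-learning setup described above, the sketch below shows one common way to fine-tune an ImageNet-pretrained VGG16 for a small number of ship classes in PyTorch. The class count, frozen layers, and optimizer settings are assumptions for illustration; the paper's exact configuration may differ.

```python
# Hedged sketch: fine-tuning an ImageNet-pretrained VGG16 on SAR ship chips.
import torch
import torch.nn as nn
from torchvision import models

num_ship_classes = 3                                   # assumed; not from the paper

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)   # ImageNet pretraining
for p in model.features.parameters():
    p.requires_grad = False                            # optionally freeze conv features
model.classifier[6] = nn.Linear(4096, num_ship_classes)            # new output layer

optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One fine-tuning step on a batch of ship chips of shape (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```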


2020 ◽  
Vol 12 (4) ◽  
pp. 655
Author(s):  
Chu He ◽  
Mingxia Tu ◽  
Dehui Xiong ◽  
Mingsheng Liao

Synthetic Aperture Radar (SAR) provides rich ground information for remote sensing surveys and can be used at any time and in all weather conditions. Polarimetric SAR (PolSAR) further reveals differences in surface scattering and improves radar's applicability. Most existing classification methods for PolSAR imagery are based on manual features; such fixed-pattern methods have poor data adaptability and low feature utilization when fed directly to a classifier. Combining the characteristics of PolSAR data with deep networks capable of automatic feature learning therefore forms a new breakthrough direction. In essence, the feature learning of a deep network approximates a function from data to labels through multi-layer accumulation, but a finite number of layers limits the network's mapping ability. According to the manifold hypothesis, high-dimensional data lies on an underlying low-dimensional manifold, and different types of data lie on different manifolds. Manifold learning can model the core variables of the target and separate the manifolds of different data as much as possible, so as to complete data classification better. Taking the manifold hypothesis as a starting point, this paper therefore proposes a PolSAR image classification method that integrates nonlinear manifold learning with fully convolutional networks. First, high-dimensional polarimetric features are extracted from the scattering matrix and coherence matrix of the original PolSAR data, and a compact representation of these features is mined by manifold learning. Meanwhile, drawing on transfer learning, a pretrained Fully Convolutional Network (FCN) model is used to learn deep spatial features of the PolSAR imagery. Considering their complementary advantages, a weighted strategy is adopted to embed the manifold representation into the deep spatial features, which are then input to a support vector machine (SVM) classifier for final classification. A series of experiments on three PolSAR datasets verifies the effectiveness and superiority of the proposed classification algorithm.
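To make the weighted fusion step concrete, here is an illustrative sketch of the pipeline shape: manifold learning compresses the high-dimensional polarimetric features, the result is weighted and concatenated with deep FCN features, and an SVM performs the final classification. The manifold method chosen here (LLE), the feature dimensions, and the weight w are assumptions, not the paper's settings.

```python
# Hedged sketch of manifold-plus-deep-feature fusion followed by SVM classification.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.svm import SVC

def fuse_and_classify(polsar_features, fcn_features, labels, w=0.5):
    # polsar_features: (n_pixels, d_pol) high-dimensional polarimetric features
    # fcn_features:    (n_pixels, d_deep) deep spatial features from the FCN
    manifold = LocallyLinearEmbedding(n_components=16)
    low_dim = manifold.fit_transform(polsar_features)        # compact manifold representation

    # Standardize both feature sets so the weight w is meaningful, then fuse.
    low_dim = (low_dim - low_dim.mean(0)) / (low_dim.std(0) + 1e-8)
    deep = (fcn_features - fcn_features.mean(0)) / (fcn_features.std(0) + 1e-8)
    fused = np.hstack([w * low_dim, (1.0 - w) * deep])

    return SVC(kernel="rbf").fit(fused, labels)               # final SVM classifier
```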


2020 ◽  
Vol 34 (02) ◽  
pp. 1342-1350 ◽  
Author(s):  
Uttaran Bhattacharya ◽  
Trisha Mittal ◽  
Rohan Chandra ◽  
Tanmay Randhavane ◽  
Aniket Bera ◽  
...  

We present STEP, a novel classifier network that classifies perceived human emotion from gaits, based on a Spatial Temporal Graph Convolutional Network (ST-GCN) architecture. Given an RGB video of an individual walking, our formulation implicitly exploits the gait features to classify the perceived emotion of the human into one of four emotions: happy, sad, angry, or neutral. We train STEP on annotated real-world gait videos, augmented with annotated synthetic gaits generated using a novel generative network called STEP-Gen, built on an ST-GCN-based Conditional Variational Autoencoder (CVAE). We incorporate a novel push-pull regularization loss in the CVAE formulation of STEP-Gen to generate realistic gaits and improve the classification accuracy of STEP. We also release a novel dataset (E-Gait), which consists of 4,227 human gaits annotated with perceived emotions, along with thousands of synthetic gaits. In practice, STEP learns the affective features and achieves a classification accuracy of 88% on E-Gait, which is 14–30% more accurate than prior methods.
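As context for the ST-GCN backbone the abstract refers to, the sketch below shows the generic building block such architectures use: a graph convolution over skeleton joints followed by a convolution along the time axis. The joint count, adjacency, and channel sizes are illustrative assumptions; this is not the paper's exact model, and the push-pull regularization term is not reproduced here.

```python
# Hedged sketch of a spatial-temporal graph convolution block in the ST-GCN style.
import torch
import torch.nn as nn

class STGraphConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, A, t_kernel=9):
        super().__init__()
        self.register_buffer("A", A)                        # (V, V) normalized joint adjacency
        self.gcn = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # per-joint feature transform
        self.tcn = nn.Conv2d(out_ch, out_ch,
                             kernel_size=(t_kernel, 1),
                             padding=(t_kernel // 2, 0))    # convolution along time
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        x = self.gcn(x)
        x = torch.einsum("nctv,vw->nctw", x, self.A)        # spatial aggregation over joints
        return self.relu(self.tcn(x))

# Toy usage: 16 joints, 3 input channels (e.g. x, y, z coordinates), 75 frames.
A = torch.eye(16)
block = STGraphConvBlock(3, 64, A)
out = block(torch.randn(8, 3, 75, 16))                       # -> (8, 64, 75, 16)
```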


Author(s):  
Ankita Singh ◽  
Pawan Singh

The classification of images is a paramount topic in artificial vision systems and has drawn a notable amount of interest over the past years. The field aims to classify an input image based on its visual content. Until recently, most approaches relied on hand-crafted features to describe an image in a particular way; learnable classifiers such as random forests and decision trees were then applied to the extracted features to reach a final decision. The problem arises when large numbers of photos are concerned: engineering features for them becomes too difficult. This is one of the reasons the deep neural network model was introduced. Owing to deep learning, it becomes feasible to represent the hierarchical nature of features using multiple layers and their corresponding weights. Existing image classification methods have gradually been applied to real-world problems, but various problems arise in their application, such as unsatisfactory results, very low classification accuracy, and weak adaptive ability. Models based on deep learning have strong learning ability; they combine feature extraction and classification into a single whole that completes the image classification task, which can improve classification accuracy effectively. Convolutional neural networks are a powerful deep neural network technique. These networks preserve the spatial structure of the problem and were built for object recognition tasks such as classifying an image into its respective class. Neural networks are widely known because they produce state-of-the-art results on complex computer vision and natural language processing tasks, and convolutional neural networks in particular have been used extensively.
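To ground the description above, here is a small generic convolutional classifier of the kind the passage refers to: convolutional layers preserve spatial structure and learn features, and a fully connected head maps them to class scores. The input size and class count are placeholders, not tied to any particular dataset.

```python
# Hedged sketch of a minimal CNN image classifier (placeholder sizes).
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),    # assumes 32x32 input images
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                             # x: (N, 3, 32, 32)
        return self.classifier(self.features(x))

logits = SimpleCNN()(torch.randn(4, 3, 32, 32))       # -> (4, 10) class scores
```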


Author(s):  
Péter Kovács ◽  
Gergő Bognár ◽  
Christian Huber ◽  
Mario Huemer

In this paper, we introduce VPNet, a novel model-driven neural network architecture based on variable projection (VP). Applying VP operators to neural networks results in learnable features, interpretable parameters, and compact network structures. This paper discusses the motivation and mathematical background of VPNet and presents experiments. The VPNet approach was evaluated in the context of signal processing, where we classified a synthetic dataset and real electrocardiogram (ECG) signals. Compared to fully connected and one-dimensional convolutional networks, VPNet offers fast learning ability and good accuracy at a low computational cost for both training and inference. Based on these advantages and the promising results obtained, we anticipate a profound impact on the broader field of signal processing, in particular on classification, regression, and clustering problems.
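As a conceptual illustration of variable projection, the NumPy sketch below projects a 1-D signal onto a small function system defined by a few nonlinear parameters and uses the projection coefficients (and residual) as features. The Gaussian basis, parameter choices, and function names are assumptions made purely for illustration; the paper's actual VP operators and function systems may differ.

```python
# Hedged sketch of a variable-projection feature extractor (assumed Gaussian basis).
import numpy as np

def vp_features(signal, centers, width):
    """Project a 1-D signal onto Gaussians with adjustable centers and width."""
    t = np.linspace(0.0, 1.0, signal.shape[0])
    Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
    coeffs, *_ = np.linalg.lstsq(Phi, signal, rcond=None)    # least-squares projection
    residual = signal - Phi @ coeffs                          # what the basis cannot explain
    return coeffs, residual

# Toy usage: a noisy bump projected onto 4 Gaussian atoms.
t = np.linspace(0, 1, 200)
x = np.exp(-((t - 0.4) ** 2) / 0.01) + 0.05 * np.random.randn(200)
c, r = vp_features(x, centers=np.array([0.2, 0.4, 0.6, 0.8]), width=0.08)
```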


2019 ◽  
Vol 35 (18) ◽  
pp. 3461-3467 ◽  
Author(s):  
Mohamed Amgad ◽  
Habiba Elfandy ◽  
Hagar Hussein ◽  
Lamees A Atteya ◽  
Mai A T Elsebaie ◽  
...  

Abstract
Motivation: While deep-learning algorithms have demonstrated outstanding performance in semantic image segmentation tasks, large annotation datasets are needed to create accurate models. Annotation of histology images is challenging due to the effort and experience required to carefully delineate tissue structures, and difficulties related to sharing and markup of whole-slide images.
Results: We recruited 25 participants, ranging in experience from senior pathologists to medical students, to delineate tissue regions in 151 breast cancer slides using the Digital Slide Archive. Inter-participant discordance was systematically evaluated, revealing low discordance for tumor and stroma, and higher discordance for more subjectively defined or rare tissue classes. Feedback provided by senior participants enabled the generation and curation of 20,000+ annotated tissue regions. Fully convolutional networks trained using these annotations were highly accurate (mean AUC = 0.945), and the scale of annotation data provided notable improvements in image classification accuracy.
Availability and Implementation: The dataset is freely available at https://goo.gl/cNM4EL.
Supplementary information: Supplementary data are available at Bioinformatics online.


2021 ◽  
pp. 147592172098866
Author(s):  
Shunlong Li ◽  
Jin Niu ◽  
Zhonglong Li

The novelty detection of bridges using monitoring data is an effective technique for diagnosing structural changes and possible damages, providing a critical basis for assessing the structural states of bridges. As cable forces describe the state of cable-stayed bridges, a novelty detection method was developed in this study using spatiotemporal graph convolutional networks by analysing spatiotemporal correlations among cable forces determined from different cable dynamometers. The spatial dependency of the sensor network was represented as a directed graph with cable dynamometers as vertices, and a graph convolutional network with learnable adjacency matrices was used to capture the spatial dependency of the locally connected vertices. A one-dimensional convolutional neural network was operated along the time axis to capture the temporal dependency. Sensor faults and structural variations could be distinguished based on the local or global anomalies of the spatiotemporal model parameters. Faulty sensors were detected and isolated using weighted adjacency matrices along with diagnostic indicators of the model residuals. After eliminating the effect of the sensor fault, the underlying variations in the state of the cable-stayed bridge could be determined based on the changing data patterns of the spatiotemporal model. The application of the proposed method to a long-span cable-stayed bridge demonstrates its effectiveness in sensor fault localization and structural variation detection.
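To illustrate the kind of model described above, the sketch below combines a graph convolution whose adjacency matrix is a learnable parameter (cable dynamometers as vertices) with a 1-D convolution along the time axis. The sensor count, window length, channel sizes, and softmax normalization of the adjacency are assumptions for illustration, not the study's exact formulation.

```python
# Hedged sketch: graph convolution with a learnable adjacency plus temporal 1-D convolution.
import torch
import torch.nn as nn

class SpatioTemporalLayer(nn.Module):
    def __init__(self, n_sensors, in_ch, out_ch, t_kernel=5):
        super().__init__()
        # Learnable (directed) adjacency among the cable-force sensors.
        self.adj = nn.Parameter(torch.eye(n_sensors))
        self.theta = nn.Linear(in_ch, out_ch)                 # per-vertex feature transform
        self.tconv = nn.Conv1d(out_ch, out_ch, t_kernel,
                               padding=t_kernel // 2)         # temporal dependency
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, time, sensors, channels)
        x = torch.einsum("vw,btwc->btvc", torch.softmax(self.adj, dim=1), x)
        x = self.theta(x)                                     # (batch, time, sensors, out_ch)
        b, t, v, c = x.shape
        x = self.tconv(x.permute(0, 2, 3, 1).reshape(b * v, c, t))
        return self.relu(x.reshape(b, v, c, t).permute(0, 3, 1, 2))

# Toy usage: 12 dynamometers, 1 cable-force channel, windows of 64 time steps.
layer = SpatioTemporalLayer(n_sensors=12, in_ch=1, out_ch=16)
out = layer(torch.randn(4, 64, 12, 1))                        # -> (4, 64, 12, 16)
```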


2014 ◽  
pp. 210-216
Author(s):  
Hirotaka Inoue ◽  
Kyoshiro Sugiyama

Recently, multiple classifier systems have been used in practical applications to improve classification accuracy. Self-generating neural networks are one of the most suitable base classifiers for multiple classifier systems because of their simple settings and fast learning ability. However, the computation cost of a multiple classifier system based on self-generating neural networks increases in proportion to the number of self-generating neural networks. In this paper, we propose a novel pruning method for efficient classification, and we call this model a self-organizing neural grove. Experiments have been conducted to compare the self-organizing neural grove with bagging, the self-organizing neural grove with boosting, and a support vector machine. The results show that the self-organizing neural grove can improve its classification accuracy while reducing the computation cost.
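The general pattern the abstract describes (an ensemble of fast base classifiers, pruned to cut computation cost) is sketched below. The base learner (decision trees standing in for self-generating neural networks), the pruning criterion, and all parameters are assumptions used only to show the idea, not the paper's method.

```python
# Hedged sketch: a bagged ensemble pruned to its strongest members on a validation set.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ensemble_predict(members, X):
    votes = np.stack([m.predict(X) for m in members])         # (n_members, n_samples), int labels
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)  # majority vote

def bagged_and_pruned(X_tr, y_tr, X_val, y_val, n_members=20, keep=10):
    rng = np.random.default_rng(0)
    members = []
    for _ in range(n_members):                                 # bagging: bootstrap resamples
        idx = rng.integers(0, len(X_tr), len(X_tr))
        members.append(DecisionTreeClassifier(max_depth=5).fit(X_tr[idx], y_tr[idx]))
    # Prune: keep the members with the best individual validation accuracy.
    scores = [m.score(X_val, y_val) for m in members]
    order = np.argsort(scores)[::-1][:keep]
    return [members[i] for i in order]
```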


ScienceRise ◽  
2020 ◽  
pp. 10-16
Author(s):  
Svitlana Shapovalova ◽  
Yurii Moskalenko

Object of research: basic architectures of deep learning neural networks.
Investigated problem: insufficient accuracy when solving classification problems with the basic architectures of deep learning neural networks. Increasing accuracy requires significantly complicating the architecture, which in turn increases the required computing resources, video memory consumption, and training/inference time. The problem is therefore to determine methods of modifying basic architectures that improve classification accuracy while requiring only insignificant additional computing resources.
Main scientific results: based on an analysis of existing methods for improving classification accuracy with convolutional networks of basic architectures, the most effective were determined to be scaling the ScanNet architecture, learning an ensemble of TreeNet models, and integrating several CBNet backbone networks. For the computational experiments, these modifications of the basic architectures were implemented, as well as their combinations: ScanNet + TreeNet and ScanNet + CBNet. The effectiveness of these methods in comparison with the basic architectures was demonstrated on the problem of recognizing malignant tumors in diagnostic images (SIIM-ISIC Melanoma Classification), whose train/test set is available on the Kaggle platform. The accuracy, measured as the area under the ROC curve, increased from 0.94489 (basic architecture network) to 0.96317 (network with the ScanNet + CBNet modifications). At the same time, compared to the basic architecture (EfficientNet-b5), inference time increased from 440 to 490 seconds and video memory consumption increased from 8 to 9.2 gigabytes, which is acceptable.
Innovative technological product: methods for achieving high recognition accuracy from a diagnostic signal based on deep learning neural networks of basic architectures.
Scope of application of the innovative technological product: automatic diagnostic systems in medicine, seismology, and astronomy (classification from images), as well as onboard control systems and systems for monitoring transport, vehicle flows, or visitors (recognition of scenes from camera frames).
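For readers who want to see how such modified variants are typically compared on the stated metric, the sketch below averages the probability outputs of several model variants (a simple stand-in for the ensemble-style combination) and computes the area under the ROC curve. Model names, weights, and data are placeholders, not the exact configurations or results from the study.

```python
# Hedged sketch: fusing model predictions and evaluating with ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_auc(prob_lists, y_true, weights=None):
    """prob_lists: list of per-model probability arrays on the same test set."""
    probs = np.stack(prob_lists)                      # (n_models, n_samples)
    w = np.ones(len(prob_lists)) if weights is None else np.asarray(weights)
    fused = (w[:, None] * probs).sum(0) / w.sum()     # weighted average of model outputs
    return roc_auc_score(y_true, fused)               # area under the ROC curve

# Toy usage with random predictions from two hypothetical model variants.
y = np.random.randint(0, 2, 1000)
p1, p2 = np.random.rand(1000), np.random.rand(1000)
print(ensemble_auc([p1, p2], y))
```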

