Large-scale cellular traffic prediction based on graph convolutional networks with transfer learning

Author(s):  
Xu Zhou ◽  
Yong Zhang ◽  
Zhao Li ◽  
Xing Wang ◽  
Juan Zhao ◽  
...  
2019 ◽  
Vol 37 (6) ◽  
pp. 1389-1401 ◽  
Author(s):  
Chuanting Zhang ◽  
Haixia Zhang ◽  
Jingping Qiao ◽  
Dongfeng Yuan ◽  
Minggao Zhang

2020 ◽  
Author(s):  
Yu Zhang ◽  
Pierre Bellec

Abstract
Transfer learning has been a very active research topic in natural image processing, but few studies have reported notable benefits of transfer learning in medical imaging. In this study, we investigated the transferability of deep artificial neural networks (DNNs) in brain decoding, i.e. inferring brain states from fMRI responses over a short time window. Instead of using models pretrained on ImageNet, we trained our base model on a large-scale neuroimaging dataset using graph convolutional networks (GCNs). The transferability of the learned graph representations was evaluated under different circumstances, including knowledge transfer across cognitive domains, between different groups of subjects, and among different sites using distinct scanning sequences. We observed a significant performance boost via transfer learning either from the same cognitive domain or from other task domains, but the transferability was strongly affected by the scanner site. Specifically, for datasets acquired at the same site with the same scanning sequences, transferred features greatly improved decoding performance. By contrast, transferability dropped sharply between different sites, with the performance boost falling from 20% to 7% for the motor task and from 15% to 5% for the working-memory task. Our results indicate that, in contrast to natural images, the scanning condition rather than the task domain has the larger impact on feature transfer in medical imaging. With additional tools such as layer-wise fine-tuning, decoding performance can be further improved by learning more site-specific high-level features while retaining the transferred low-level representations of brain dynamics.
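The transfer setting described above can be sketched with a single symmetrically normalized graph-convolution layer whose weights are treated as pretrained and frozen while only a new head would be trained. This is a minimal NumPy illustration of the standard GCN propagation rule, not the authors' actual architecture; all names and sizes are hypothetical:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    Illustrative sketch only; the paper's model details are not
    given in the abstract.
    """
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))     # symmetric normalization
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0],                       # tiny 3-node path graph
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
h = rng.normal(size=(3, 4))                      # node features
w_pretrained = rng.normal(size=(4, 2))           # "transferred" frozen weights
out = gcn_layer(adj, h, w_pretrained)
print(out.shape)  # (3, 2)
```

In a layer-wise fine-tuning scheme, layers like this one would keep their transferred weights while later, site-specific layers are re-trained.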


Author(s):  
Qingtian Zeng ◽  
Qiang Sun ◽  
Geng Chen ◽  
Hua Duan

Abstract
Wireless cellular traffic prediction is a critical issue for researchers and practitioners in the 5G/B5G field. However, it is very challenging, since wireless cellular traffic usually exhibits high nonlinearity and complex patterns. Most existing wireless cellular traffic prediction methods lack the ability to model the dynamic spatial–temporal correlations of wireless cellular traffic data and thus cannot yield satisfactory prediction results. To improve the accuracy of 5G/B5G cellular network traffic prediction, an attention-based multi-component spatiotemporal cross-domain neural network model (att-MCSTCNet) is proposed. It uses Conv-LSTM or Conv-GRU to model neighbor, daily-cycle, and weekly-cycle data, then assigns different weights to the three feature streams through an attention layer, improving their feature extraction and suppressing feature information that interferes with the prediction. Finally, the model incorporates timestamp feature embedding and fuses multiple cross-domain data sources to assist traffic prediction. Experimental results show that the proposed model outperforms existing models. The RMSE performance of the att-MCSTCNet (Conv-LSTM) model on the Sms, Call, and Internet datasets is improved by 13.70–54.96%, 10.50–28.15%, and 35.85–100.23%, respectively, compared with existing models. The RMSE performance of the att-MCSTCNet (Conv-GRU) model on the Sms, Call, and Internet datasets is about 14.56–55.82%, 12.24–29.89%, and 38.79–103.17% better than existing models, respectively.
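The attention step described above, weighting the neighbor, daily-cycle, and weekly-cycle feature streams before fusion, can be sketched as a softmax-weighted sum. This is a hypothetical minimal version; the paper's attention layer learns its scores, whereas here they are passed in directly:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(components, scores):
    """Weight the component feature maps by softmax attention scores
    and sum them. Names and the scoring mechanism are illustrative."""
    w = softmax(np.asarray(scores, dtype=float))
    return sum(wi * c for wi, c in zip(w, components))

# Toy 2x2 feature maps standing in for the three temporal components.
close = np.full((2, 2), 1.0)    # neighbor (closeness) features
daily = np.full((2, 2), 2.0)    # daily-cycle features
weekly = np.full((2, 2), 3.0)   # weekly-cycle features

fused = attention_fuse([close, daily, weekly], scores=[0.0, 0.0, 0.0])
print(fused)  # equal scores -> equal weights -> elementwise mean of 2.0
```

With learned, unequal scores, the softmax would emphasize whichever component best predicts the target time step.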


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning the features that assist in understanding complex patterns precisely. This study proposed a computerized process for classifying skin disease using deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, achieving good accuracy while remaining deployable on lightweight computational devices, and the proposed model is effective at maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used to assess the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Networks (CNN), Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture extended with minor changes. The HAM10000 dataset is used, and the proposed method outperformed the other methods with more than 85% accuracy. Its robustness in recognizing the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, results in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action; it helps the patient and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
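The grey-level co-occurrence matrix mentioned above counts how often pairs of grey levels co-occur at a fixed pixel offset, yielding a texture descriptor that can track lesion change. A simplified NumPy sketch for a single offset (right-hand neighbour) follows; it is not the authors' implementation, and the image and level count are illustrative:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Grey-level co-occurrence matrix for one pixel offset.

    image  -- 2-D array of integer grey levels in [0, levels)
    levels -- number of grey levels
    offset -- (dy, dx) displacement between paired pixels
    """
    m = np.zeros((levels, levels), dtype=int)
    dy, dx = offset
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            # count the (level at p, level at p + offset) pair
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 2]])
print(glcm(img, levels=3))
```

Texture statistics such as contrast or homogeneity are then derived from the (typically normalized) matrix; comparing them across visits gives a rough measure of diseased growth.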


IEEE Network ◽  
2018 ◽  
Vol 32 (6) ◽  
pp. 108-115 ◽  
Author(s):  
Jie Feng ◽  
Xinlei Chen ◽  
Rundong Gao ◽  
Ming Zeng ◽  
Yong Li

2021 ◽  
Vol 14 (3) ◽  
pp. 1088-1105
Author(s):  
Varun Kurri ◽  
Vishweshvaran Raja ◽  
P. Prakasam

Author(s):  
Weida Zhong ◽  
Qiuling Suo ◽  
Abhishek Gupta ◽  
Xiaowei Jia ◽  
Chunming Qiao ◽  
...  

With the popularity of smartphones, large-scale road sensing data is being collected to perform traffic prediction, which is an important task in modern society. Due to the nature of the roving sensors on smartphones, the collected traffic data, which takes the form of multivariate time series, is often temporally sparse and unevenly distributed across regions. Moreover, different regions can have different traffic patterns, which makes it challenging to adapt models learned from regions with sufficient training data to target regions. Given that many regions may have very sparse data, it is also impractical to build an individual model for each region. In this paper, we propose a meta-learning based framework named MetaTP to overcome these challenges. MetaTP has two key parts: a basic traffic prediction network (base model) and meta-knowledge transfer. In the base model, a two-layer interpolation network is employed to map the original time series onto uniformly spaced reference time points, so that temporal prediction can be performed effectively in the reference space. The meta-learning framework is employed to transfer knowledge from source regions with a large amount of data to target regions with only a few examples via fast adaptation, in order to improve model generalizability on target regions. Moreover, we use two memory networks to capture the global patterns of spatial and temporal information across regions. We evaluate the proposed framework on two real-world datasets, and experimental results show its effectiveness.
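The key idea of mapping an irregularly sampled series onto uniformly spaced reference time points can be sketched with a Gaussian kernel smoother: each reference point takes a distance-weighted average of the sparse observations. This is only an illustration of the interpolation idea, not MetaTP's learned two-layer network; the function name, bandwidth, and data are assumptions:

```python
import numpy as np

def interpolate_to_reference(t_obs, x_obs, t_ref, bandwidth=1.0):
    """Map an irregularly sampled series onto a uniform reference grid.

    Each reference time gets a normalized Gaussian-kernel-weighted
    average of the observed values (a convex combination).
    """
    # pairwise temporal distances: (n_ref, n_obs)
    w = np.exp(-((t_ref[:, None] - t_obs[None, :]) ** 2)
               / (2.0 * bandwidth ** 2))
    w /= w.sum(axis=1, keepdims=True)   # normalize weights per reference point
    return w @ x_obs

t_obs = np.array([0.0, 0.7, 2.9])       # sparse, unevenly spaced timestamps
x_obs = np.array([1.0, 2.0, 4.0])       # observed traffic values
t_ref = np.linspace(0.0, 3.0, 4)        # uniform reference grid
print(interpolate_to_reference(t_obs, x_obs, t_ref).shape)  # (4,)
```

Once every region's series lives on the same reference grid, a shared prediction network can be meta-trained across regions and quickly adapted to a data-poor target region.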

