Web Page Information Extraction Service Based on Graph Convolutional Neural Network and Multimodal Data Fusion

Author(s):  
Mingzhu Zhang ◽  
Zhongguo Yang ◽  
Sikandar Ali ◽  
Weilong Ding

2020 ◽ 
Vol 36 (8) ◽  
pp. 2561-2568 ◽  
Author(s):  
Xia-an Bi ◽  
Yingchao Liu ◽  
Yiming Xie ◽  
Xi Hu ◽  
Qinghua Jiang

Abstract
Motivation: Multimodal data fusion analysis has become an important field for brain disease detection, and an increasing number of studies concentrate on using neural network algorithms to solve a range of problems. However, most current neural network optimization strategies focus on internal nodes or the number of hidden layers, while ignoring the advantages of external optimization. Additionally, in the multimodal data fusion analysis of brain science, the problems of small sample size and high-dimensional data are often encountered owing to the difficulty of data collection and the specialized nature of brain science data, which may lower the generalization performance of neural networks.
Results: We propose a genetically evolved random neural network cluster (GERNNC) model. Specifically, fusion features are first constructed as the input, and the best-performing type of neural network is selected as the base classifier to form the initial random neural network cluster. Second, the cluster is adaptively evolved through genetic operations. Based on the GERNNC model, we further construct a multi-task framework for the classification of patients with brain disease and the extraction of significant features. In a study of genetic data and functional magnetic resonance imaging data from the Alzheimer’s Disease Neuroimaging Initiative, the framework exhibits excellent classification performance and a strong ability to detect morbigenous factors. This work demonstrates how to effectively detect pathogenic components of brain disease from high-dimensional medical data and small samples.
Availability and implementation: The Matlab code is available at https://github.com/lizi1234560/GERNNC.git.
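
The abstract describes the GERNNC procedure only at a high level, and the released implementation is in Matlab (see the link above). As a rough, non-authoritative sketch of the general scheme (a cluster of randomly configured neural networks refined by genetic-style selection and mutation, with majority voting for the final decision), the following Python fragment uses scikit-learn's MLPClassifier as the base learner. All configuration ranges, population sizes, and helper names are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of a genetically evolved random neural network cluster.
# Not the authors' Matlab implementation; all hyperparameters are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def random_config():
    """Draw a random base-classifier configuration."""
    return {
        "hidden_layer_sizes": (int(rng.integers(8, 64)),),
        "alpha": float(10 ** rng.uniform(-5, -2)),
        "learning_rate_init": float(10 ** rng.uniform(-4, -2)),
    }

def fitness(cfg, X_tr, y_tr, X_val, y_val):
    """Train one network and score it on held-out data."""
    clf = MLPClassifier(max_iter=500, **cfg).fit(X_tr, y_tr)
    return clf.score(X_val, y_val), clf

def evolve_cluster(X, y, pop_size=20, generations=5, keep=0.5):
    """Evolve a cluster of randomly configured networks by selection and mutation."""
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
    population = [random_config() for _ in range(pop_size)]
    survivors = []
    for _ in range(generations):
        scored = [fitness(c, X_tr, y_tr, X_val, y_val) + (c,) for c in population]
        scored.sort(key=lambda t: t[0], reverse=True)
        survivors = scored[: int(keep * pop_size)]            # selection
        population = [cfg for _, _, cfg in survivors]
        while len(population) < pop_size:                     # mutation refills the pool
            child = dict(population[int(rng.integers(len(population)))])
            child["alpha"] *= float(10 ** rng.uniform(-0.5, 0.5))
            population.append(child)
    return [clf for _, clf, _ in survivors]

def cluster_predict(cluster, X):
    """Majority vote over the evolved cluster (assumes integer class labels)."""
    votes = np.stack([clf.predict(X) for clf in cluster])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

In the paper, the evolution also adapts which fused genetic/fMRI features feed the cluster and which base-classifier type is used; this sketch only evolves hyperparameters of a fixed MLP type, so it should be read as an outline of the selection-mutation-vote loop rather than a reproduction of the method.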


Author(s):  
Wen Qi ◽  
Hang Su ◽  
Ke Fan ◽  
Ziyang Chen ◽  
Jiehao Li ◽  
...  

The widespread application of robot-assisted minimally invasive surgery (RAMIS) promotes human-machine interaction (HMI). Identifying the various behaviors of doctors can enhance the RAMIS procedure for the redundant robot, bridging intelligent robot control and activity recognition strategies in the operating room, including hand gestures and human activities. In this paper, to enhance identification in dynamic situations, we propose a multimodal data fusion framework that provides multiple sources of information for accuracy enhancement. First, a multi-sensor hardware structure is designed to capture varied data from several devices, including a depth camera and a smartphone. Furthermore, the robot control mechanism can switch automatically across different surgical tasks. The experimental results demonstrate the efficiency of the multimodal framework for RAMIS by comparing it with a single-sensor system. An implementation on the KUKA LWR4+ in a surgical robot environment indicates that surgical robot systems can work with medical staff in the future.
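
The paper describes the fusion framework architecturally rather than in code. As a hedged illustration of the core idea (feature-level fusion of a depth-camera stream and smartphone sensor data before activity/gesture classification), the sketch below concatenates per-window summary features from two synthetic streams and trains a single classifier; the window length, feature set, and classifier choice are assumptions made for this example, not the paper's pipeline.

```python
# Minimal feature-level multimodal fusion sketch (assumed pipeline, not the
# paper's implementation): depth-camera skeleton features and smartphone IMU
# features are concatenated per time window and classified jointly.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(stream, win=50):
    """Summarize a (T, channels) sensor stream into per-window mean/std features."""
    n = stream.shape[0] // win
    windows = stream[: n * win].reshape(n, win, -1)
    return np.concatenate([windows.mean(axis=1), windows.std(axis=1)], axis=1)

def fuse(depth_stream, imu_stream, win=50):
    """Early (feature-level) fusion: concatenate features from both modalities."""
    f_depth = window_features(depth_stream, win)
    f_imu = window_features(imu_stream, win)
    n = min(len(f_depth), len(f_imu))           # align window counts
    return np.hstack([f_depth[:n], f_imu[:n]])

# Synthetic stand-in data: 3D joint coordinates (depth camera) and 6-axis IMU.
rng = np.random.default_rng(0)
depth = rng.normal(size=(5000, 45))    # e.g. 15 joints x 3 coordinates
imu = rng.normal(size=(5000, 6))       # accelerometer + gyroscope
X = fuse(depth, imu)
y = rng.integers(0, 4, size=len(X))    # 4 hypothetical gesture/activity classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # fused-modality accuracy
```

Evaluating the same classifier on the depth features or the IMU features alone gives the single-sensor baseline that the fused model is compared against in the paper's experiments.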


2020 ◽  
Vol 64 ◽  
pp. 149-187 ◽  
Author(s):  
Yu-Dong Zhang ◽  
Zhengchao Dong ◽  
Shui-Hua Wang ◽  
Xiang Yu ◽  
Xujing Yao ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yang Wang ◽  
Moyang Li

The modern urban landscape is a simple ecosystem that is of great significance to the sustainable development of a city. This study proposes a landscape information extraction model based on a deep convolutional neural network: it investigates a multiscale landscape CNN classification method, constructs a landscape information extraction model based on the multiscale CNN, and finally analyzes the quantitative performance of the deep convolutional neural network. By computing the confusion matrix, producer's accuracy, and user's accuracy, the results show an overall kappa coefficient of 0.91 and a classification accuracy of 93%. The proposed method identifies more than 90% of water targets, with a user's accuracy of 99.78%, a producer's accuracy of 91.94%, and an overall accuracy of 93.33%. It clearly outperforms the other methods, achieving the best kappa coefficient and overall accuracy. This study provides a useful reference for the quantitative evaluation of the spatial scale of modern urban landscapes.
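
The reported figures (kappa coefficient, overall accuracy, producer's accuracy, and user's accuracy) are standard confusion-matrix statistics from remote-sensing accuracy assessment. A short sketch of how they are computed is given below; the matrix values are placeholders, not the study's data.

```python
# Standard accuracy metrics from a confusion matrix
# (rows = reference classes, columns = predicted classes).
# The matrix below is a placeholder, not the study's data.
import numpy as np

cm = np.array([[50,  2,  1],
               [ 3, 45,  2],
               [ 1,  2, 44]], dtype=float)

total = cm.sum()
overall_accuracy = np.trace(cm) / total

# Producer's accuracy: correct / reference total per class (recall).
producers = np.diag(cm) / cm.sum(axis=1)
# User's accuracy: correct / predicted total per class (precision).
users = np.diag(cm) / cm.sum(axis=0)

# Cohen's kappa: agreement beyond chance expectation.
expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
kappa = (overall_accuracy - expected) / (1 - expected)

print(overall_accuracy, producers, users, kappa)
```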


2016 ◽  
Vol 64 (18) ◽  
pp. 4830-4844 ◽  
Author(s):  
Rodrigo Cabral Farias ◽  
Jeremy Emile Cohen ◽  
Pierre Comon
