An image super-resolution deep learning network based on multi-level feature extraction module

Author(s): Xin Yang, Yifan Zhang, Yingqing Guo, Dake Zhou
IEEE Access, 2019, Vol 7, pp. 12319-12327
Author(s): Shengxiang Zhang, Gaobo Liang, Shuwan Pan, Lixin Zheng
2020, Vol 17 (6), pp. 1961-1970
Author(s): Viet Khanh Ha, Jinchang Ren, Xinying Xu, Wenzhi Liao, Sophia Zhao, ...
PLoS ONE, 2020, Vol 15 (10), pp. e0241313
Author(s): Zhengqiang Xiong, Manhui Lin, Zhen Lin, Tao Sun, Guangyi Yang, ...
Electronics, 2020, Vol 9 (4), pp. 649
Author(s): Long Hoang, Suk-Hwan Lee, Ki-Ryong Kwon

3D shape recognition has become necessary due to the growing availability of 3D data resources. This paper introduces a new method, a hybrid deep learning network combining a convolutional neural network with a support vector machine (CNN–SVM), for 3D recognition. The vertices of a 3D mesh are interpolated to convert the mesh into a point cloud; the point cloud is then rotated for 3D data augmentation. The 2D projections of this augmented 3D data are stored in a 32 × 32 × 12 matrix, which is the input to the CNN–SVM. An eight-layer CNN serves as the feature extractor, and an SVM is then applied to classify the extracted features. Two large 3D model datasets, ModelNet40 and ModelNet10, are used for validation. Based on our numerical experimental results, CNN–SVM is more accurate and efficient than other methods: the proposed method is 13.48% more accurate than PointNet on ModelNet10 and 8.5% more precise than 3D ShapeNets on ModelNet40. The proposed method works both with 3D models in augmented/virtual reality systems and with 3D point clouds, the output of LIDAR sensors in autonomous driving cars.
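
The pipeline described in the abstract, a CNN used purely as a feature extractor whose outputs feed a separate SVM classifier, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the layer widths of the eight-layer CNN, the feature dimension, and the RBF kernel are assumed, and the class names are hypothetical; only the 32 × 32 × 12 input format (twelve 2D projections of the augmented point cloud) comes from the abstract.

```python
# Minimal sketch of a CNN–SVM pipeline for 3D shape classification.
# Assumptions (not from the paper): layer widths, kernel sizes, feature
# dimension, and the SVM's RBF kernel; only the 32x32x12 projection input
# format is taken from the abstract.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC


class ProjectionCNN(nn.Module):
    """CNN feature extractor over 12 stacked 2D projections (32x32x12 input)."""

    def __init__(self, feature_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(12, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 32x32 -> 16x16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, feature_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 12, 32, 32) -> (batch, feature_dim)
        return self.features(x)


def extract_features(cnn: ProjectionCNN, views: np.ndarray) -> np.ndarray:
    """Run the (pre-trained) CNN on projection stacks and return feature vectors."""
    cnn.eval()
    with torch.no_grad():
        x = torch.from_numpy(views).float()           # (N, 12, 32, 32)
        return cnn(x).numpy()


if __name__ == "__main__":
    # Toy arrays standing in for projected, augmented point-cloud data.
    rng = np.random.default_rng(0)
    train_views = rng.random((100, 12, 32, 32), dtype=np.float32)
    train_labels = rng.integers(0, 10, size=100)      # e.g. 10 ModelNet10 classes
    test_views = rng.random((20, 12, 32, 32), dtype=np.float32)

    cnn = ProjectionCNN()                             # in practice: trained weights
    svm = SVC(kernel="rbf", C=1.0)                    # kernel choice is an assumption
    svm.fit(extract_features(cnn, train_views), train_labels)
    predictions = svm.predict(extract_features(cnn, test_views))
    print(predictions[:5])
```

In this arrangement the CNN is trained (or pre-trained) separately for representation learning, and the SVM replaces the usual softmax layer at classification time, which is the division of labor the abstract describes.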

