Single-Shot 3D Shape Reconstruction Using Structured Light and Deep Convolutional Neural Networks

Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3718 ◽  
Author(s):  
Hieu Nguyen ◽  
Yuzeng Wang ◽  
Zhaoyang Wang

Single-shot 3D imaging and shape reconstruction has seen a surge of interest due to the rapid evolution of sensing technologies. In this paper, a robust single-shot 3D shape reconstruction technique integrating the structured-light technique with deep convolutional neural networks (CNNs) is proposed. The input of the technique is a single fringe-pattern image, and the output is the corresponding depth map for 3D shape reconstruction. The essential training and validation datasets with high-quality 3D ground-truth labels are prepared by using a multi-frequency fringe projection profilometry technique. Unlike conventional 3D shape reconstruction methods, which involve complex algorithms and intensive computation to determine phase distributions or pixel disparities as well as the depth map, the proposed approach uses an end-to-end network architecture to directly transform a 2D image into its corresponding 3D depth map without extra processing. In the approach, three CNN-based models are adopted for comparison. Furthermore, the accurate structured-light-based 3D imaging dataset used in this paper is made publicly available. Experiments have been conducted to demonstrate the validity and robustness of the proposed technique, which is capable of satisfying various 3D shape reconstruction demands in scientific research and engineering applications.
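As a rough illustration of the end-to-end mapping described above, the sketch below regresses a depth map directly from a single-channel fringe image with a small encoder-decoder network. It is a minimal sketch only: the PyTorch framework, layer sizes, MSE loss, and placeholder tensors are assumptions for illustration and do not reproduce the paper's three CNN-based models or its training data.

```python
# Minimal sketch of an end-to-end fringe-to-depth network (illustrative only;
# layer choices do not reproduce the paper's exact CNN architectures).
import torch
import torch.nn as nn

class FringeToDepthNet(nn.Module):
    """Maps a single-channel fringe-pattern image to a same-size depth map."""
    def __init__(self):
        super().__init__()
        # Encoder: progressively downsample while increasing channel depth.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution and regress depth.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, fringe_image):
        return self.decoder(self.encoder(fringe_image))

# Training-step sketch: in practice the ground-truth depth maps would come from
# the multi-frequency fringe projection profilometry dataset described above.
model = FringeToDepthNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

fringe = torch.rand(4, 1, 256, 256)    # batch of single-shot fringe images (placeholder data)
depth_gt = torch.rand(4, 1, 256, 256)  # corresponding ground-truth depth maps (placeholder data)

optimizer.zero_grad()
loss = criterion(model(fringe), depth_gt)
loss.backward()
optimizer.step()
```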

2021 ◽  
pp. 100104
Author(s):  
Hieu Nguyen ◽  
Khanh L. Ly ◽  
Tan Tran ◽  
Yuzheng Wang ◽  
Zhaoyang Wang

Symmetry ◽  
2020 ◽  
Vol 12 (3) ◽  
pp. 360
Author(s):  
Aihua Chen ◽  
Benquan Yang ◽  
Yueli Cui ◽  
Yuefen Chen ◽  
Shiqing Zhang ◽  
...  

To save customers' shopping time and reduce the labor cost of supermarket operations, this paper proposes a supermarket service robot based on deep convolutional neural networks (DCNNs). Firstly, the hardware and software architecture of the supermarket service robot is designed according to the supermarket shopping environment and its requirements. The robot uses robot operating system (ROS) middleware on a Raspberry Pi as its control kernel to implement wireless communication with customers and staff. To move flexibly, omnidirectional wheels symmetrically installed under the robot chassis are adopted for tracking. The robot uses an infrared detection module to detect whether commodities are present in the warehouse or on the shelves, so that it can grasp and place commodities accurately. Secondly, the recently developed single shot multibox detector (SSD), a typical DCNN model, is employed to detect and identify objects. Finally, to verify the robot's performance, a supermarket environment is set up to simulate a real-world scenario for experiments. Experimental results show that the designed supermarket service robot can automatically complete the procurement and replenishment of commodities and delivers promising performance on commodity detection and recognition tasks.
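The SSD-based detection stage described above could be prototyped along the following lines. This is a minimal sketch assuming a pretrained torchvision SSD model, COCO weights standing in for commodity-trained weights, and an assumed 0.5 score threshold; none of these settings come from the paper.

```python
# Minimal sketch of SSD-based object detection with a pretrained torchvision model
# (a stand-in for the paper's commodity detector; thresholds are assumptions).
import torch
from torchvision.models.detection import ssd300_vgg16

# COCO-pretrained weights; the robot would use weights fine-tuned on commodity images.
model = ssd300_vgg16(weights="DEFAULT")
model.eval()

image = torch.rand(3, 300, 300)  # placeholder for a camera frame, RGB values in [0, 1]

with torch.no_grad():
    detections = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep confident detections only; the 0.5 threshold is an assumed setting.
keep = detections["scores"] > 0.5
for box, label in zip(detections["boxes"][keep], detections["labels"][keep]):
    print(f"label {label.item()} at {box.tolist()}")
```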


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2393 ◽  
Author(s):  
Daniel Octavian Melinte ◽  
Luige Vladareanu

The interaction between humans and an NAO robot using deep convolutional neural networks (CNNs) is presented in this paper, based on an innovative end-to-end pipeline method that serializes two optimized CNNs, one for face recognition (FR) and one for facial expression recognition (FER), in order to obtain real-time inference speed for the entire process. Two different models are considered for FR: one known to be very accurate but with low inference speed (faster region-based convolutional neural network) and one that is less accurate but has high inference speed (single shot detector convolutional neural network). For emotion recognition, transfer learning and fine-tuning of three CNN models (VGG, Inception V3, and ResNet) have been used. The overall results show that the single shot detector convolutional neural network (SSD CNN) and faster region-based convolutional neural network (Faster R-CNN) models for face detection achieve almost the same accuracy: 97.8% for Faster R-CNN on PASCAL visual object classes (PASCAL VOC) evaluation metrics and 97.42% for SSD Inception. In terms of FER, ResNet obtained the highest training accuracy (90.14%), while the visual geometry group (VGG) network reached 87% and Inception V3 reached 81%. The results show an improvement of over 10% when using two serialized CNNs instead of only the FER CNN, while the recent rectified adaptive moment optimization (RAdam) model led to better generalization and an accuracy improvement of 3-4% on each emotion recognition CNN.
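The serialized two-CNN pipeline can be sketched as follows: a detector localizes the face, the crop is passed to a fine-tuned classifier, and RAdam drives the FER fine-tuning. The torchvision models, the assumed 7-class emotion head, and the 224x224 input size below are illustrative stand-ins, not the paper's exact configuration.

```python
# Minimal sketch of the two-stage (serialized) pipeline: a detector crops the face,
# then a fine-tuned classifier predicts the emotion. Model choices and the 7-class
# emotion set are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import resize

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # stand-in for the FR stage

# FER stage: transfer learning on a ResNet backbone with a new 7-way emotion head,
# fine-tuned with the RAdam optimizer as mentioned in the abstract.
fer_model = resnet50(weights="DEFAULT")
fer_model.fc = nn.Linear(fer_model.fc.in_features, 7)
optimizer = torch.optim.RAdam(fer_model.parameters(), lr=1e-4)

frame = torch.rand(3, 480, 640)  # placeholder camera frame from the robot

with torch.no_grad():
    boxes = detector([frame])[0]["boxes"]

if len(boxes) > 0:
    fer_model.eval()
    x1, y1, x2, y2 = boxes[0].int().tolist()
    face = frame[:, y1:y2, x1:x2]                 # crop the top detection
    face = resize(face, [224, 224]).unsqueeze(0)  # classifier input size (assumed)
    with torch.no_grad():
        emotion_logits = fer_model(face)
    print("predicted emotion index:", emotion_logits.argmax(dim=1).item())
```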


2021 ◽  
Author(s):  
Hieu Nguyen ◽  
Khanh Ly ◽  
Thanh Nguyen ◽  
Yuzeng Wang ◽  
Zhaoyang Wang
