Micro-lens Image Stack Upsampling for Densely-Sampled Light Field Reconstruction

Author(s): Shuo Zhang, Song Chang, Zeqi Shen, Youfang Lin
2021, pp. 108121
Author(s): Wenhui Zhou, Jiangwei Shi, Yongjie Hong, Lili Lin, Ercan Engin Kuruoglu
IEEE Access, 2018, Vol 6, pp. 76331-76338
Author(s): Jun Qiu, Xinkai Kang, Zhong Su, Qing Li, Chang Liu

Author(s): Henry Wing Fung Yeung, Junhui Hou, Jie Chen, Yuk Ying Chung, Xiaoming Chen

Author(s): Xiuxiu Jing, Yike Ma, Qiang Zhao, Ke Lyu, Feng Dai

Sensors, 2020, Vol 20 (7), pp. 2129
Author(s): Hyun Myung Kim, Min Seok Kim, Gil Ju Lee, Hyuk Jae Jang, Young Min Song

Miniaturizing 3D depth camera systems to reduce cost and power consumption is essential for their use in electronic devices that are trending toward smaller sizes (such as smartphones and unmanned aerial systems) and in other applications that conventional approaches cannot realize. A wide range of depth-sensing techniques is currently available, including stereo vision, structured light, and time-of-flight. This paper reports a miniaturized 3D depth camera based on a light field camera (LFC) configured with a single aperture and a micro-lens array (MLA). Together, the single aperture and the individual micro-lenses of the MLA act as a multi-camera system for 3D surface imaging. To overcome the optical-alignment challenge in the miniaturized LFC system, the MLA was designed to be brought into focus by attaching it directly to the image sensor. The optical parameters were analyzed theoretically with optical simulations based on Monte Carlo ray tracing to identify valid parameter values for miniaturized 3D camera systems. Moreover, we demonstrate multi-viewpoint image acquisition with a miniaturized 3D camera module integrated into a smartphone.
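As a rough illustration of the kind of Monte Carlo ray-tracing analysis mentioned above, the following Python sketch traces rays from an on-axis point source through a single micro-lens, modeled as an ideal thin lens, and estimates the spot size on the sensor plane. This is a minimal sketch under assumed conditions: the thin-lens model and every parameter value (focal_length, lens_radius, gap, obj_distance) are illustrative and are not taken from the paper's simulation.

```python
import numpy as np

# Minimal Monte Carlo ray-tracing sketch (illustrative only): estimate the
# sensor-plane spot radius produced by one micro-lens under a thin-lens model.
# All parameter values below are hypothetical, not taken from the paper.

rng = np.random.default_rng(0)

focal_length = 40e-6   # micro-lens focal length [m] (assumed)
lens_radius  = 10e-6   # micro-lens aperture radius [m] (assumed)
gap          = 38e-6   # micro-lens-to-sensor distance [m] (assumed)
obj_distance = 0.5     # distance of a point source in front of the lens [m]

n_rays = 100_000

# Sample ray hit points uniformly over the micro-lens aperture (disk sampling).
r = lens_radius * np.sqrt(rng.random(n_rays))
theta = 2 * np.pi * rng.random(n_rays)
x, y = r * np.cos(theta), r * np.sin(theta)

# Incoming ray slopes for rays from an on-axis point source at obj_distance.
dx_in = x / obj_distance
dy_in = y / obj_distance

# Thin-lens refraction: outgoing slope = incoming slope - height / focal length.
dx_out = dx_in - x / focal_length
dy_out = dy_in - y / focal_length

# Propagate the refracted rays across the lens-to-sensor gap.
sx = x + gap * dx_out
sy = y + gap * dy_out

spot_rms = np.sqrt(np.mean(sx**2 + sy**2))
print(f"RMS spot radius on sensor: {spot_rms * 1e6:.2f} um")
```

Sweeping gap or focal_length in such a loop is one simple way to search for parameter combinations that keep the micro-image spots small, which is the spirit of the parameter-validation step the abstract describes.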


2019, Vol 15 (8), pp. 155014771987065
Author(s): Lei Cai, Peien Luo, Guangfu Zhou, Zhenxue Chen

Reconstructing the complete light field is difficult, and reconstructed light fields have so far been able to recognize only specific, fixed targets; both limitations have restricted practical applications of light fields. To address these problems, this article introduces multi-perspective distributed information fusion into light field reconstruction for monitoring and recognizing maneuvering targets. First, the light field is represented as sub-light fields at different perspectives (i.e., a multi-sensor distributed network), and sparse representation and reconstruction are then performed. Second, we establish multi-perspective distributed information fusion under regional full-coverage constraints. Finally, the light field data from multiple perspectives are fused and the states of the maneuvering targets are estimated. Experimental results show that the light field reconstruction time of the proposed method is less than 583 s and that its reconstruction accuracy exceeds 92.447%, outperforming existing methods such as the spatially variable bidirectional reflectance distribution function and micro-lens array approaches. For maneuvering target recognition, the proposed algorithm requires no more than 3.5 s and achieves an accuracy of up to 86.739%. Moreover, the more viewing angles used, the higher the accuracy.
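To make the pipeline concrete, the sketch below mimics the three steps the abstract outlines: per-perspective sparse reconstruction of sub-light fields followed by fusion of the multi-perspective estimates. It is a generic stand-in, not the authors' method: the random dictionary, the hand-rolled Orthogonal Matching Pursuit solver, and the equal-weight fusion are all assumptions made for illustration.

```python
import numpy as np

# Hedged sketch of per-view sparse reconstruction followed by fusion.
# Generic illustration of the described pipeline (sub-light fields ->
# sparse coding -> fusion); not the authors' implementation.

def omp(D, y, k):
    """Orthogonal Matching Pursuit: find a k-sparse x with D @ x ~= y."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the dictionary atom most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the current support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, n, k, n_views = 64, 128, 5, 4

D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms

# Synthesize a k-sparse ground-truth signal (stand-in for a sub-light field).
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Each perspective observes the same scene through a noisy measurement.
views = [D @ x_true + 0.01 * rng.standard_normal(m) for _ in range(n_views)]

# Reconstruct each sub-light field independently, then fuse by averaging
# (equal weights here; a coverage-constrained weighting is one alternative).
estimates = [omp(D, y, k) for y in views]
x_fused = np.mean(estimates, axis=0)

print("per-view error:", np.mean([np.linalg.norm(e - x_true) for e in estimates]))
print("fused error:   ", np.linalg.norm(x_fused - x_true))
```

Averaging independent per-view estimates already reduces the noise-induced error; a coverage-constrained or state-estimation-based weighting, as the article proposes, would replace the simple mean in the final fusion step.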

