plenoptic camera
Recently Published Documents

TOTAL DOCUMENTS: 178 (five years: 43)
H-INDEX: 18 (five years: 2)

2022
Author(s): Timothy Fahringer, Paul Danehy, William Hutchins, Brian Thurow

Author(s): Jenna K. Davis, Christopher J. Clifford, Dustin Kelly, B. Thurow

Abstract: The development of a tomographic background-oriented schlieren (BOS) system utilizing up to four plenoptic cameras is presented. A systematic set of experiments was performed using a pair of solid polydimethylsiloxane (PDMS) cylinders immersed in a nearly refractive-index-matched glycerol/water solution to represent discrete flow features with known sizes, shapes, separation distances, and orientations. A study was conducted to assess the influence of these features on the accuracy of 3D reconstructions of the refractive index field. It was determined that the limited angular information collected by a single plenoptic camera is insufficient for single-camera 3D reconstructions. In multi-camera configurations, the additional views collected by a plenoptic camera were shown to improve the overall reconstruction accuracy compared to an equivalent reconstruction with a single view per camera, potentially reducing the total number of cameras needed to achieve a desired accuracy. For the imaging of two cylinders, three or more cameras are generally needed to avoid significant ghosting artifacts in the reconstruction. Quantitative results show that: (1) two separate cylinders will be individually resolved as long as measurements from at least one camera observe the separation between the cylinders; (2) the error in the reconstructed 3D refractive index field increases as the size of the feature decreases; and (3) the use of volumetric masking within the reconstruction algorithm is critical to improving the accuracy of the solution.
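The abstract does not spell out the reconstruction algorithm, but volumetric masking can be illustrated with the multiplicative ART (MART) style of update common in tomographic plenoptic work. The following is a minimal sketch, not the paper's implementation; the function names, the precomputed ray-voxel weight matrix `W`, and the relaxation factor `mu` are all illustrative assumptions:

```python
import numpy as np

def mart_reconstruct(W, b, mask, n_iter=20, mu=1.0):
    """MART-style tomographic reconstruction with a volumetric mask.

    W    : (n_rays, n_voxels) weight matrix (ray/voxel intersection lengths)
    b    : (n_rays,) measured line-of-sight integrals
    mask : (n_voxels,) boolean; voxels outside the mask are held at zero,
           which is the "volumetric masking" idea from the abstract
    """
    # Initialize to ones inside the mask, zero outside
    f = np.ones(W.shape[1]) * mask
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            proj = W[i] @ f  # forward projection along ray i
            if proj > 0:
                # Multiplicative correction applied only where the ray
                # intersects voxels; masked-out voxels stay at zero
                f *= np.where(W[i] > 0, (b[i] / proj) ** (mu * W[i]), 1.0)
    return f
```

Because the update is multiplicative, any voxel zeroed by the mask stays zero, so the mask effectively shrinks the solution space, which is one way masking can improve accuracy.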


Sensors, 2021, Vol. 21 (18), pp. 6141
Author(s): Amin Amini, Jamil Kanfoud, Tat-Hean Gan

With the advancement of miniaturization in electronics and the ubiquity of micro-electro-mechanical systems (MEMS) in applications including computing, sensing, and medical apparatus, increasing production yields and ensuring product quality standards have become an important focus in manufacturing. Hence, the need for high-accuracy, automatic defect detection in the early phases of MEMS production has been recognized. This not only eliminates human interaction in the defect detection process, but also saves the raw material and labor required. This research developed an automated defect recognition (ADR) system using a unique plenoptic camera capable of detecting surface defects on MEMS wafers using a machine-learning approach. The developed algorithm can be applied at any stage of the production process, detecting defects at both the full-wafer and single-component scale. The developed system achieved an average F1 score of 0.81 for true positive defect detection, with a processing time of 18 s per image, evaluated on six validation images containing 371 labels.
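For reference, the reported F1 score is the harmonic mean of precision and recall over the labeled defects. A minimal sketch of the metric (the counts in the usage example are hypothetical, chosen only to reproduce a 0.81 score, and are not the paper's data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall.

    tp : true positives  (defects correctly detected)
    fp : false positives (spurious detections)
    fn : false negatives (missed defects)
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 81 correct detections, 19 spurious, 19 missed
print(f1_score(81, 19, 19))  # 0.81
```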


Author(s): Mahyar Moaven, Abbishek Gururaj, Zu Puayen Tan, Sarah Morris, Brian Thurow, ...

Rotating 3D velocimetry (R3DV) is a single-camera PIV technique designed to track the evolution of flow over a rotor in the rotating reference frame. A high-speed (stationary) plenoptic camera capable of 3D imaging captures the motion of particles within the volume of interest through a revolving mirror at the central hub of a hydrodynamic rotor facility, with an undesired image rotation as a by-product. R3DV employs a calibration method adapted for rotation such that, during MART reconstruction, voxels are mapped to pixel coordinates based on the mirror's instantaneous azimuthal position. The calibration polynomial coefficients are interpolated using a fitted Fourier series, bypassing the need to physically calibrate volumes at each fine azimuth angle. The reprojection error associated with calibration averages less than 0.6 pixels. The experimental uncertainty of cross-correlated 3D/3C vector fields is quantified by comparing vectors obtained from imaging quiescent flow via a rotating mirror to an idealized model based purely on rotational kinematics. The uncertainty shows no dependency on azimuth angle and amounts to less than approximately 0.21 voxels per timestep in the in-plane directions and 1.7 voxels in the radial direction, both comparable to previously established uncertainty estimates for single-camera plenoptic PIV.
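Since the mirror angle is periodic, each calibration coefficient is a periodic function of azimuth, which is why a Fourier series is a natural interpolant. The sketch below shows a generic least-squares Fourier fit of one coefficient sampled at a few azimuth angles; the function names, harmonic count, and test signal are illustrative assumptions, not the paper's actual calibration model:

```python
import numpy as np

def fourier_basis(theta, n_harmonics):
    """Design matrix [1, cos(k*theta), sin(k*theta)] for k = 1..n_harmonics."""
    return np.column_stack(
        [np.ones_like(theta)]
        + [f(k * theta) for k in range(1, n_harmonics + 1) for f in (np.cos, np.sin)]
    )

def fit_fourier(theta, c, n_harmonics=3):
    """Least-squares Fourier fit of one calibration coefficient c(theta)."""
    coef, *_ = np.linalg.lstsq(fourier_basis(theta, n_harmonics), c, rcond=None)
    return coef

def eval_fourier(coef, theta, n_harmonics=3):
    """Evaluate the fitted series at arbitrary azimuth angles (radians)."""
    return fourier_basis(theta, n_harmonics) @ coef
```

Fitting once per polynomial coefficient and evaluating at the mirror's instantaneous azimuth avoids a physical calibration at every fine angle, which is the role the Fourier interpolation plays in the abstract.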


Author(s): Dustin Kelly, Ralf Fischer, Ari Goldman, Sarah Morris, Bart Prorok, ...

In this work, a high-speed spectral plenoptic camera was used for simultaneous three-dimensional (3D) particle tracking and pyrometry measurements of hot spatter particles ejected during the metal additive manufacturing process. Additive manufacturing (AM) has an increasing role in the aerospace, energy, medical, and automotive industries (DebRoy et al., 2018). While this technology enables the production of highly advanced parts, research on the fundamental mechanisms governing the laser-matter interactions is an ongoing challenge because of the fine spatial and temporal scales inherent to the AM process. One challenge is the characterization of spatter particles ejected from the melt pool, as these particles can be incorporated into the final part, affecting its mechanical properties (Deng et al., 2020). One potential solution for simultaneously measuring the velocity and temperature of the spatter particles is the spectral plenoptic camera.
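The abstract does not detail the temperature retrieval. A common approach with spectrally resolved imaging of hot particles is two-color (ratio) pyrometry, where the intensity ratio at two wavelengths is inverted through Planck's law in the Wien approximation, assuming the emissivities at the two wavelengths cancel (gray body). The sketch below shows that generic technique only; it is an assumption, not necessarily the paper's method:

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def planck_wien(wavelength, T):
    """Spectral radiance (arbitrary scale) in the Wien approximation."""
    return wavelength ** -5 * np.exp(-C2 / (wavelength * T))

def two_color_temperature(I1, I2, lam1, lam2):
    """Invert the two-color intensity ratio for temperature.

    Assumes gray-body emission so emissivity cancels in the ratio:
    ln(I1/I2) = -5*ln(lam1/lam2) + (C2/T)*(1/lam2 - 1/lam1)
    """
    ratio = I1 / I2
    return C2 * (1 / lam2 - 1 / lam1) / (np.log(ratio) + 5 * np.log(lam1 / lam2))
```

Because the ratio is insensitive to the (unknown) particle size and absolute calibration, this kind of inversion pairs naturally with per-particle spectral measurements.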


Author(s): Bibek Sapkota, Dustin Kelly, Zu Puayen Tan, Brian S. Thurow

This paper investigates the effect of the smoothing operation in 3D reconstructions using a plenoptic camera. A plenoptic camera, also known as a light field camera, consists of a commercial off-the-shelf camera with a microlens array (MLA) added behind the imaging lens, directly in front of the sensor. The main lens focuses the light onto the MLA plane, where each microlens redirects the light onto a small region of pixels behind it, each pixel corresponding to a different angle of incidence (Fahringer (2015); Adelson and Wang (1992)). Thus, the MLA encodes the angular information of incident light rays into the recorded image, allowing the 4D light field (u, v, s, t), i.e., both the position and angle of the rays captured by the camera, to be recovered (Ng et al. (2005); Adelson and Wang (1992)).
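The (u, v, s, t) decoding described above can be sketched for the simplest idealized case: a square, axis-aligned MLA with an integer pixel pitch and no rotation or sub-pixel offset. These are illustrative assumptions (a real decoder must calibrate all of them), and the function names are not from the paper:

```python
import numpy as np

def decode_light_field(raw, pitch):
    """Rearrange a raw plenoptic image into the 4D light field L(s, t, u, v).

    raw   : 2D sensor image whose dimensions are multiples of `pitch`
    pitch : pixels per microlens (assumes a square, axis-aligned MLA)

    (s, t) index the microlens (spatial position); (u, v) index the
    pixel under each microlens (ray angle).
    """
    S, T = raw.shape[0] // pitch, raw.shape[1] // pitch
    # reshape splits each axis into (microlens index, sub-pixel index),
    # then transpose groups the two spatial axes before the two angular ones
    return raw.reshape(S, pitch, T, pitch).transpose(0, 2, 1, 3)

def perspective_view(lf, u, v):
    """A sub-aperture (perspective) image: one angular sample per microlens."""
    return lf[:, :, u, v]
```

Fixing (u, v) and varying (s, t) yields a perspective view; the set of such views over all (u, v) is the angular information that multi-view reconstruction algorithms exploit.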


Author(s): Viktor Eckstein, Tobias Schmid-Schirling, Daniel Carl, Ulrike Wallrabe

2021
Author(s): Martin F. Eberhart, Stefan Löhle, Felix Grigat
