Binary defocusing technique based on complementary decoding with unconstrained dual projectors

Author(s):  
Xuexing Li ◽  
Wenhui Zhang

Abstract The binary defocusing technique can effectively break the speed limitation of hardware and has been widely used in real-time three-dimensional (3D) reconstruction. In addition, fusion techniques can reduce the number of images captured per 3D scene, which helps to improve real-time performance. Unfortunately, it is difficult for the binary defocusing technique and the fusion technique to work simultaneously. To this end, we established a novel system framework consisting of dual projectors and a camera, in which the position and posture of the dual projectors are not strictly constrained and each projector can adjust its defocusing level independently. On this basis, this paper proposes a complementary decoding method with unconstrained dual projectors. The core idea is that low-resolution information is employed for high-resolution phase unwrapping. For this purpose, we developed a low-resolution depth extraction strategy based on periodic space-time coding patterns, together with a method for mapping the low-resolution fringe order to the high-resolution fringe order. Finally, experimental results demonstrate the performance of the proposed method: it requires only three images per 3D scene and offers strong robustness, extensibility, and ease of implementation.
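
The core idea of using low-resolution information to unwrap a high-resolution phase can be illustrated with standard two-frequency temporal phase unwrapping. The sketch below is a minimal toy example of that general principle, not the paper's complementary decoding pipeline (which recovers the low-resolution order from periodic space-time coding patterns); the frequency ratio and variable names are assumed for illustration.

```python
import numpy as np

def unwrap_with_low_res(phase_hi_wrapped, phase_lo, freq_ratio):
    """Unwrap a high-frequency wrapped phase using an (already absolute)
    low-frequency phase scaled by the fringe-frequency ratio."""
    # Fringe order: how many 2*pi periods separate the scaled low-frequency
    # phase from the wrapped high-frequency phase.
    k = np.round((phase_lo * freq_ratio - phase_hi_wrapped) / (2 * np.pi))
    return phase_hi_wrapped + 2 * np.pi * k

# Toy example: a linear absolute phase, wrapped and then recovered.
x = np.linspace(0, 1, 512)
freq_ratio = 16                                     # assumed ratio
phase_lo = 2 * np.pi * x                            # absolute low-frequency phase
phase_hi = phase_lo * freq_ratio                    # true high-frequency phase
phase_hi_wrapped = np.angle(np.exp(1j * phase_hi))  # wrap into (-pi, pi]

recovered = unwrap_with_low_res(phase_hi_wrapped, phase_lo, freq_ratio)
assert np.allclose(recovered, phase_hi, atol=1e-6)
```

The fringe-order rounding step is what makes the approach robust: the low-resolution phase only needs to be accurate to within half a high-frequency period for the order to be recovered exactly.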

The technique of freeze-etching is illustrated with reference to striated muscle. Besides features of immediate biological interest, the material demonstrates various ways in which the process may be used in general to yield new information. These fall broadly into two classes: (a) qualitative: visualizing structures not readily seen by other methods, for example, general three-dimensional structure (low resolution) and membrane particles (high resolution); (b) quantitative, for example, the distribution of membrane features over extensive uneven surfaces (low and high resolution).


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Julián Tachella ◽  
Yoann Altmann ◽  
Nicolas Mellado ◽  
Aongus McCarthy ◽  
Rachael Tobin ◽  
...  

Abstract Single-photon lidar has emerged as a prime candidate technology for depth imaging through challenging environments. Until now, a major limitation has been the significant amount of time required for the analysis of the recorded data. Here we present a new computational framework for real-time three-dimensional (3D) scene reconstruction from single-photon data. By combining statistical models with highly scalable computational tools from the computer graphics community, we demonstrate 3D reconstruction of complex outdoor scenes with processing times of the order of 20 ms, where the lidar data were acquired in broad daylight from distances of up to 320 metres. The proposed method can handle an unknown number of surfaces in each pixel, allowing for target detection and imaging through cluttered scenes. This enables robust, real-time target reconstruction of complex moving scenes, paving the way for single-photon lidar at video rates for practical 3D imaging applications.
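
As a rough illustration of the per-pixel starting point for this kind of pipeline (the authors' framework goes much further, coupling statistical models with scalable computer-graphics tools), the sketch below histograms photon arrival times and treats significant peaks as candidate surfaces, which naturally allows more than one surface per pixel. The bin width, thresholds, and toy scene are all assumed.

```python
import numpy as np
from scipy.signal import find_peaks

C = 299_792_458.0          # speed of light, m/s
BIN_WIDTH = 100e-12        # 100 ps timing bins (assumed)

def depths_from_photons(arrival_times_s, n_bins=2000, min_counts=10):
    """Return candidate surface depths (m) for one pixel's photon record."""
    bins = np.arange(n_bins + 1) * BIN_WIDTH
    hist, _ = np.histogram(arrival_times_s, bins=bins)
    # Estimate the flat ambient background from the median bin count, then
    # keep peaks clearly above it; each peak is one candidate surface.
    background = np.median(hist)
    peaks, _ = find_peaks(hist, height=background + min_counts, distance=5)
    tof = (peaks + 0.5) * BIN_WIDTH            # time of flight per peak
    return tof * C / 2.0                       # round trip -> one-way depth

# Toy pixel: ambient photons plus two surfaces at ~10 m and ~25 m.
rng = np.random.default_rng(0)
times = np.concatenate([
    rng.uniform(0, 2000 * BIN_WIDTH, 200),             # ambient background
    rng.normal(2 * 10 / C, 2 * BIN_WIDTH, 300),        # surface 1 return
    rng.normal(2 * 25 / C, 2 * BIN_WIDTH, 300),        # surface 2 return
])
print(depths_from_photons(times))    # expect values near 10 m and 25 m
```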


Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1312
Author(s):  
Debapriya Hazra ◽  
Yung-Cheol Byun

Video super-resolution has become an emerging topic in the field of machine learning. The generative adversarial network (GAN) is a framework widely used to develop solutions for low-resolution videos. Video surveillance using closed-circuit television (CCTV) is ubiquitous across many fields worldwide, and a common problem with CCTV videos is sudden video loss or poor quality. In this paper, we propose a generative adversarial network that implements spatio-temporal generators and discriminators to enhance real-time low-resolution CCTV videos to high resolution. The proposed model considers both the foreground and background motion of a CCTV video and effectively models the spatial and temporal consistency from low-resolution video frames to generate high-resolution videos. Quantitative and qualitative experiments on benchmark datasets, including Kinetics-700, UCF101, HMDB51 and IITH_Helmet2, showed that our model outperforms existing GAN models for video super-resolution.
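
To make the spatio-temporal idea concrete, the sketch below shows a generator built from 3D convolutions, so each super-resolved frame is conditioned on its neighbouring frames. It is a minimal stand-in, not the paper's architecture; the layer sizes, 4× scale factor, and class names are illustrative.

```python
import torch
import torch.nn as nn

class SpatioTemporalGenerator(nn.Module):
    def __init__(self, channels=3, features=64, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            # kernel (3,3,3): mixes information across time and space.
            nn.Conv3d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, channels * scale**2, kernel_size=3, padding=1),
        )
        self.scale = scale

    def forward(self, x):                      # x: (N, C, T, H, W)
        y = self.body(x)                       # (N, C*s*s, T, H, W)
        n, _, t, h, w = y.shape
        y = y.permute(0, 2, 1, 3, 4)           # fold time into the batch
        y = y.reshape(n * t, -1, h, w)
        y = nn.functional.pixel_shuffle(y, self.scale)   # spatial upsample
        return y.reshape(n, t, -1, h * self.scale, w * self.scale)

lr_clip = torch.randn(1, 3, 5, 32, 32)         # 5 low-res frames
hr_clip = SpatioTemporalGenerator()(lr_clip)   # -> (1, 5, 3, 128, 128)
print(hr_clip.shape)
```

A matching discriminator would apply 3D convolutions to whole clips, so temporal flicker in the generated video is penalised alongside spatial artifacts.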


The Ribosome ◽  
2014 ◽  
pp. 151-163 ◽  
Author(s):  
Richard Brimacombe ◽  
Barbara Greuer ◽  
Florian Mueller ◽  
Monika Osswald ◽  
Jutta Rinke-Appel ◽  
...  

Author(s):  
CHAN-SU LEE ◽  
DIMITRIS SAMARAS

Facial expressions convey personal characteristics and subtle emotional states. This paper presents a new framework for modeling the subtle facial motions of different people with different types of expressions from high-resolution facial expression tracking data, in order to synthesize new stylized subtle facial expressions. A conceptual facial motion manifold is used as a unified representation of facial motion dynamics for three-dimensional (3D) high-resolution facial motions as well as two-dimensional (2D) low-resolution facial motions. Variations in subtle facial motion across different people and expression types are modeled by nonlinear mappings from the embedded conceptual manifold to the input facial motions using empirical kernel maps. We represent facial expressions by a factorized nonlinear generative model, which decomposes expression style factors and expression type factors across different people with multiple expressions. We also provide a mechanism to control the high-resolution facial motion model from low-resolution facial video sequence tracking and analysis. Using the decomposable generative model with a common motion manifold embedding, we can estimate parameters that control 3D high-resolution facial expressions from 2D tracking results, which allows performance-driven control of high-resolution facial expressions.
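
A minimal sketch of a nonlinear mapping learned with an empirical kernel map may help: embed manifold points through RBF kernel responses against a set of centres, then solve a linear least-squares map from that feature space to the observed motion data. The 1-D circular manifold, kernel width, and dimensions below are toy assumptions, not the paper's setup.

```python
import numpy as np

def rbf_features(x, centres, gamma=10.0):
    # Empirical kernel map: one RBF response per centre.
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

t = np.linspace(0, 2 * np.pi, 200)
manifold = np.stack([np.cos(t), np.sin(t)], axis=1)   # embedded manifold points
# Stand-in "facial motion" signals driven by the manifold coordinate.
motion = np.stack([np.sin(3 * t), np.cos(2 * t), t / (2 * np.pi)], axis=1)

centres = manifold[::10]                     # 20 kernel centres on the manifold
Phi = rbf_features(manifold, centres)        # (200, 20) kernel features
# Linear map in kernel-feature space = nonlinear map on the manifold.
B, *_ = np.linalg.lstsq(Phi, motion, rcond=None)

recon = Phi @ B
print("mean reconstruction error:", np.abs(recon - motion).mean())
```

Factorizing the learned weights (here `B`) across people and expression types is what the paper's generative model adds on top of this basic mapping.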


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2164
Author(s):  
Md. Shahinur Alam ◽  
Ki-Chul Kwon ◽  
Munkh-Uchral Erdenebat ◽  
Mohammed Y. Abbass ◽  
Md. Ashraful Alam ◽  
...  

The integral imaging microscopy system provides three-dimensional visualization of a microscopic object. However, it suffers from low resolution, owing to the fundamental F-number (aperture stop) limitation of the micro lens array (MLA) and a poor illumination environment. In this paper, a generative adversarial network (GAN)-based super-resolution algorithm is proposed to enhance the resolution, with the directional view image fed directly as input. In a GAN, the generator regresses the high-resolution output from the low-resolution input image, whereas the discriminator distinguishes between original and generated images. In the generator, we use consecutive residual blocks with a content loss to retrieve the photo-realistic original image. The model can restore edges and enhance the resolution by factors of 2×, 4×, and even 8× without seriously hampering image quality. It was tested with a variety of low-resolution microscopic sample images and successfully generates high-resolution directional view images with better illumination. Quantitative analysis shows that the proposed model performs better for microscopic images than existing algorithms.
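
The two generator ingredients the abstract names, consecutive residual blocks and a content loss, can be sketched as follows. This is a minimal SRGAN-style stand-in under assumed layer sizes; real systems of this kind typically add perceptual and adversarial loss terms, which are omitted here.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, features=64):
        super().__init__()
        self.conv1 = nn.Conv2d(features, features, 3, padding=1)
        self.conv2 = nn.Conv2d(features, features, 3, padding=1)
        self.act = nn.PReLU()

    def forward(self, x):
        # Identity skip connection: the block learns a residual correction.
        return x + self.conv2(self.act(self.conv1(x)))

class Generator(nn.Module):
    def __init__(self, channels=3, features=64, n_blocks=4, scale=2):
        super().__init__()
        self.head = nn.Conv2d(channels, features, 3, padding=1)
        self.blocks = nn.Sequential(
            *[ResidualBlock(features) for _ in range(n_blocks)])
        self.tail = nn.Sequential(
            nn.Conv2d(features, channels * scale**2, 3, padding=1),
            nn.PixelShuffle(scale),            # 2x spatial upsampling
        )

    def forward(self, x):
        return self.tail(self.blocks(self.head(x)))

g = Generator()
low_res = torch.randn(1, 3, 64, 64)
high_res_ref = torch.randn(1, 3, 128, 128)
# Content loss: pixel-wise fidelity of the generated image to the reference.
content_loss = nn.functional.l1_loss(g(low_res), high_res_ref)
print(content_loss.item())
```

Higher upscaling factors (4×, 8×) are usually reached by stacking further pixel-shuffle stages rather than a single large one.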


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rachael Tobin ◽  
Abderrahim Halimi ◽  
Aongus McCarthy ◽  
Philip J. Soan ◽  
Gerald S. Buller

Abstract Recently, time-of-flight LiDAR using the single-photon detection approach has emerged as a potential solution for three-dimensional imaging in challenging measurement scenarios, such as over distances of many kilometres. The high sensitivity and picosecond timing resolution afforded by single-photon detection offer high-resolution depth profiling of remote, complex scenes while maintaining low-power optical illumination. These properties are ideal for imaging in highly scattering environments, such as through atmospheric obscurants, for example fog and smoke. In this paper we present the reconstruction of depth profiles of moving objects through high levels of obscurant, equivalent to five attenuation lengths between transceiver and target, at stand-off distances of up to 150 m. We used a robust, statistically based processing algorithm designed for the real-time reconstruction of single-photon data obtained in the presence of atmospheric obscurants, including uncertainty estimates for the depth reconstruction. This demonstration of real-time 3D reconstruction of moving scenes points a way forward for high-resolution imaging from mobile platforms in degraded visual environments.
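
As a rough illustration of one classical building block for this problem (not the authors' full statistical algorithm), the sketch below matched-filters a timing histogram with an assumed instrumental response so the true return stands out from back-scatter, and attaches a crude photon-count-based uncertainty to the recovered depth. The bin width, response shape, and uncertainty rule are all assumptions.

```python
import numpy as np

C = 299_792_458.0                       # speed of light, m/s
BIN_WIDTH = 50e-12                      # 50 ps timing bins (assumed)

def irf(sigma_bins=3, half_width=15):
    """Assumed Gaussian instrumental response function (IRF), unit area."""
    t = np.arange(-half_width, half_width + 1)
    g = np.exp(-0.5 * (t / sigma_bins) ** 2)
    return g / g.sum()

def depth_with_uncertainty(hist):
    score = np.correlate(hist, irf(), mode="same")   # matched filter
    peak = int(np.argmax(score))
    # Photons attributed to the return: counts near the peak above background.
    background = np.median(hist)
    window = hist[max(0, peak - 10):peak + 11]
    n_signal = max(float((window - background).sum()), 1.0)
    sigma_t = 3 * BIN_WIDTH                          # IRF width in seconds
    depth = peak * BIN_WIDTH * C / 2.0
    depth_sigma = (sigma_t / np.sqrt(n_signal)) * C / 2.0
    return depth, depth_sigma

rng = np.random.default_rng(2)
hist = rng.poisson(4.0, size=4000)                   # back-scatter + ambient
true_bin = 2000                                      # target at ~15 m
hist[true_bin - 9:true_bin + 10] += rng.poisson(
    40 * irf(half_width=9) / irf(half_width=9).max(), size=19)
d, s = depth_with_uncertainty(hist.astype(float))
print(f"depth = {d:.3f} m +/- {s:.3f} m")
```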

