A UNIFIED APPROACH TO REAL-TIME, MULTI-RESOLUTION, MULTI-BASELINE 2D VIEW SYNTHESIS AND 3D DEPTH ESTIMATION USING COMMODITY GRAPHICS HARDWARE

2004 · Vol 04 (04) · pp. 627-651
Author(s):  
RUIGANG YANG ◽  
MARC POLLEFEYS ◽  
HUA YANG ◽  
GREG WELCH

We present a new method for using commodity graphics hardware to achieve real-time, on-line, 2D view synthesis or 3D depth estimation from two or more calibrated cameras. Our method combines a 3D plane-sweeping approach with 2D multi-resolution color consistency tests. We project camera imagery onto each plane, compute measures of color consistency throughout the plane at multiple resolutions, and then choose the color or depth (corresponding plane) that is most consistent. The key to achieving real-time performance is our use of the advanced features included with recent commodity computer graphics hardware to implement the computations simultaneously (in parallel) across all reference image pixels on a plane. Our method is relatively simple to implement, and flexible in terms of the number and placement of cameras. With two cameras and an NVIDIA GeForce4 graphics card we can achieve 50–70 M disparity evaluations per second, including image download and read-back overhead. This performance matches the fastest available commercial software-only implementation of correlation-based stereo algorithms, while freeing up the CPU for other uses.
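The plane-sweep principle described above can be sketched in a few lines. For rectified two-camera stereo, projecting imagery onto a family of fronto-parallel planes reduces to shifting one image by a per-plane disparity and scoring per-pixel color consistency; the most consistent plane wins. The sketch below is a minimal CPU version of that idea in NumPy (the function name and the absolute-difference score are illustrative assumptions, not the paper's GPU implementation, which also uses multi-resolution tests).

```python
import numpy as np

def plane_sweep_disparity(left, right, max_disp):
    """Minimal plane-sweep sketch for rectified grayscale stereo.

    For fronto-parallel planes, projecting the right image onto each
    plane amounts to a horizontal shift by one disparity; per-pixel
    color consistency is scored with an absolute difference, and the
    most consistent plane (disparity) is chosen per pixel.
    """
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # shift the right image by d and compare where the overlap is valid
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, : w - d])
    return cost.argmin(axis=0)  # winner-take-all over planes
```

In the paper this per-pixel winner-take-all runs in parallel on the GPU across all pixels of a plane, which is what makes the approach real-time.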

Sensors · 2020 · Vol 21 (1) · pp. 15
Author(s):  
Filippo Aleotti ◽  
Giulio Zaccaroni ◽  
Luca Bartolomei ◽  
Matteo Poggi ◽  
Fabio Tosi ◽  
...  

Depth perception is paramount for tackling real-world problems, ranging from autonomous driving to consumer applications. For the latter, depth estimation from a single image would represent the most versatile solution since a standard camera is available on almost any handheld device. Nonetheless, two main issues limit the practical deployment of monocular depth estimation methods on such devices: (i) the low reliability when deployed in the wild and (ii) the resources needed to achieve real-time performance, often not compatible with low-power embedded systems. Therefore, in this paper, we deeply investigate all these issues, showing how they are both addressable by adopting appropriate network design and training strategies. Moreover, we also outline how to map the resulting networks on handheld devices to achieve real-time performance. Our thorough evaluation highlights the ability of such fast networks to generalize well to new environments, a crucial feature required to tackle the extremely varied contexts faced in real applications. Indeed, to further support this evidence, we report experimental results concerning real-time, depth-aware augmented reality and image blurring with smartphones in the wild.


2012 · Vol 155-156 · pp. 1074-1079
Author(s):  
Zi Hui Zhang ◽  
Yue Shan Xiong

To study the path planning problem of multiple mobile robots in dynamic environments, an on-line centralized path planning algorithm is proposed. Achieving real-time performance for multi-robot path planning in dynamic environments is difficult. The harmonic potential field for multiple mobile robots is built using the panel method known from fluid mechanics, which represents the outward normal velocity of each edge of a polygonal obstacle as a function of the length of its characteristic line. The simulation results indicate that the algorithm is simple, efficient, and effective for multiple mobile robots in dynamic environments where the geometries and trajectories of obstacles are known in advance, and that it achieves real-time performance.
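The general potential-field framework that the panel method refines can be sketched as a single gradient-descent step: an attractive pull toward the goal plus an inverse-distance repulsion near obstacles. The sketch below is an illustrative assumption, not the paper's harmonic field; the point of the harmonic (panel-method) construction is precisely to avoid the local minima that this naive quadratic/inverse potential can exhibit. All parameter names and gains are hypothetical.

```python
import numpy as np

def potential_step(pos, goal, obstacles, step=0.1, k_att=1.0, k_rep=0.5, r0=1.0):
    """One descent step on a naive attractive/repulsive potential field.

    pos, goal: 2D numpy arrays; obstacles: list of 2D point obstacles.
    Repulsion acts only within the influence radius r0.
    """
    force = k_att * (goal - pos)  # attractive pull toward the goal
    for obs in obstacles:
        diff = pos - obs
        dist = np.linalg.norm(diff)
        if 1e-9 < dist < r0:
            # classic inverse-distance repulsion, strongest near the obstacle
            force += k_rep * (1.0 / dist - 1.0 / r0) / dist**2 * (diff / dist)
    return pos + step * force
```

A centralized planner as in the paper would evaluate such a field for all robots jointly, with the panel method shaping the field around polygonal (not point) obstacles.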


Author(s):  
Nicholas Cook

The previous focus of musical creativity studies on the solitary composer has given way to a focus on collaborative, real-time performance. This chapter discusses a unified approach to both practices, according to which musical creativity is social, but the social is inherent in ostensively solitary work. At the core of this approach is ‘unpredictable emergence’ (Sawyer 2003) out of networks of human and nonhuman agents; this chapter extends Sawyer’s model of creative collaboration in jazz in such a way that it becomes equally applicable to classical and contemporary music making. The same approach is applied to compositional imagination, with particular emphasis on the materiality of bodies, instruments, and notations, and on how representations of music “talk back” to composers and prompt the emergence of unforeseen outcomes. The aim is to provide a more realistic account of musical creativity than the Romantic mythologies that have long skewed research in this area.


2021
Author(s):  
Yupeng Xie ◽  
Sarah Fachada ◽  
Daniele Bonatto ◽  
Mehrdad Teratani ◽  
Gauthier Lafruit

Depth-Image-Based Rendering (DIBR) can synthesize a virtual view image from a set of multiview images and corresponding depth maps. However, this requires an accurate depth map estimation that incurs a high computational cost of several minutes per frame in DERS (MPEG-I's Depth Estimation Reference Software), even on a high-end computer. LiDAR cameras can thus be an alternative to DERS in real-time DIBR applications. We compare the quality of a low-cost LiDAR camera, the Intel RealSense LiDAR L515, adequately calibrated and configured, with DERS using MPEG-I's Reference View Synthesizer (RVS). In IV-PSNR, the LiDAR camera reaches 32.2 dB view synthesis quality with a 15 cm camera baseline and 40.3 dB with a 2 cm baseline. Though DERS outperforms the LiDAR camera by 4.2 dB, the latter provides a better quality-performance trade-off. However, visual inspection demonstrates that LiDAR's virtual views have even slightly higher quality than DERS's in most tested low-texture scene areas, except at object borders. Overall, we highly recommend using LiDAR cameras over advanced depth estimation methods (like DERS) in real-time DIBR applications. Nevertheless, this requires delicate calibration with multiple tools, further detailed in the paper.
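The IV-PSNR scores quoted above are built on the ordinary PSNR between a reference view and a synthesized view; IV-PSNR extends the base metric with immersive-video-specific corrections (e.g. small-shift compensation), which are not reproduced here. The sketch below shows only the underlying computation, with a hypothetical function name.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Plain PSNR in dB between a reference view and a synthesized view.

    peak is the maximum possible pixel value (255 for 8-bit imagery).
    Returns +inf for identical images (zero mean squared error).
    """
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak**2 / mse)
```

As a rough scale for the numbers in the abstract, synthesis quality around 30 dB is usable and above 40 dB is close to transparent for 8-bit content.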


2017 · Vol 31 (19-21) · pp. 1740040
Author(s):  
Biao Yang ◽  
Jinmeng Cao ◽  
Ling Zou

Robust principal component analysis (RPCA) decomposition is widely applied in moving object detection due to its ability to suppress environmental noise while separating sparse foreground from low-rank background. However, it may suffer from fixed penalty parameters (resulting in confusion between foreground and background) and from holistic processing of all input frames (leading to poor real-time performance). Improvements to both issues are studied in this paper. A block-RPCA decomposition approach is proposed to handle the confusion between foreground and background. Each input frame is first partitioned into blocks using three-frame differencing. The penalty parameter of each block is then computed from its motion saliency, acquired via selective spatio-temporal interest points. To improve real-time performance, an on-line solution to the block-RPCA decomposition is used. Both qualitative and quantitative tests were conducted, and the results indicate the superiority of our method over several state-of-the-art approaches in detection accuracy, real-time performance, or both.
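The RPCA decomposition underlying the method splits a data matrix M into a low-rank part L (background) and a sparse part S (foreground). A standard batch solver for this, sketched below, is the inexact augmented Lagrange multiplier (IALM) scheme for principal component pursuit: singular-value thresholding for L, elementwise soft-thresholding for S. This is a generic sketch, not the paper's block-wise on-line variant; the paper's contribution is precisely to replace the single global penalty lambda (the standard 1/sqrt(max(m, n)) used here) with per-block penalties driven by motion saliency.

```python
import numpy as np

def rpca(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M ~ L (low rank) + S (sparse) via inexact ALM.

    lam is the sparsity penalty; the default 1/sqrt(max(m, n)) is the
    usual choice for principal component pursuit.
    """
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(M, 2)
    mu = 1.25 / norm_two          # penalty on the constraint M = L + S
    mu_bar = mu * 1e7             # cap on mu's geometric growth
    rho = 1.5
    Y = M / max(norm_two, np.abs(M).max() / lam)  # dual variable init
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank update: shrink singular values of (M - S + Y/mu) by 1/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: soft-threshold the residual by lam/mu
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = M - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(Z) / np.linalg.norm(M) < tol:
            break
    return L, S
```

In the moving-object setting, the columns of M are vectorized frames, L captures the static background, and the support of S marks foreground pixels.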


1994 · Vol 33 (01) · pp. 60-63
Author(s):  
E. J. Manders ◽  
D. P. Lindstrom ◽  
B. M. Dawant

On-line intelligent monitoring, diagnosis, and control of dynamic systems such as patients in intensive care units necessitates the context-dependent acquisition, processing, analysis, and interpretation of large amounts of possibly noisy and incomplete data. The dynamic nature of the process also requires a continuous evaluation and adaptation of the monitoring strategy to respond to changes both in the monitored patient and in the monitoring equipment. Moreover, real-time constraints may imply data losses, the importance of which has to be minimized. This paper presents a computer architecture designed to accomplish these tasks. Its main components are a model and a data abstraction module. The model provides the system with a monitoring context related to the patient status. The data abstraction module relies on that information to adapt the monitoring strategy and provide the model with the necessary information. This paper focuses on the data abstraction module and its interaction with the model.

