panoramic video
Recently Published Documents


TOTAL DOCUMENTS: 218 (five years: 68)
H-INDEX: 16 (five years: 3)

Author(s): Anqi Zhu, Lin Zhang, Juntao Chen, Yicong Zhou

The panorama stitching system is an indispensable module in surveillance and space exploration. Such a system enables the viewer to grasp the surroundings instantly by aligning the surrounding images on a plane and fusing them naturally. The bottleneck of existing systems lies mainly in the alignment of adjacent images and the naturalness of the transitions between them. When facing dynamic foregrounds, they may produce outputs with misaligned semantic objects, which is readily noticed by human observers. We address three key issues in the existing workflow that affect its efficiency and the quality of the resulting panoramic video, and present Pedestrian360, a panoramic video system based on a structured camera array (a spatial surround-view camera system). First, to obtain a geometrically aligned 360° view in the horizontal direction, we build a unified multi-camera coordinate system via a novel refinement approach that jointly optimizes camera poses. Second, to eliminate the brightness and color differences between images taken by different cameras, we design a photometric alignment approach that introduces a bias term into the baseline linear adjustment model and solves it with two-step least squares. Third, since the human visual system is more sensitive to high-level semantic objects, such as pedestrians and vehicles, we integrate the results of instance segmentation into the dynamic-programming framework of the seam-cutting step. To our knowledge, we are the first to introduce instance segmentation into the seam-cutting problem, which ensures the integrity of salient objects in a panorama. Specifically, in our surveillance-oriented system we choose the most significant target, pedestrians, as the seam-avoidance target, which accounts for the name Pedestrian360. To validate the effectiveness and efficiency of Pedestrian360, we establish a large-scale dataset of videos with pedestrians in five scenes. Test results on this dataset demonstrate the superiority of Pedestrian360 over its competitors. Experiments show that Pedestrian360 stitches video at 12 to 26 fps, depending on the number of objects in the scene and how frequently they move. To make our reported results reproducible, the relevant code and collected data are publicly available at https://cslinzhang.github.io/Pedestrian360-Homepage/.
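The abstract names a gain-plus-bias (linear) photometric model solved by two-step least squares. Below is a minimal numpy sketch of one plausible reading of that idea, fitting per-camera gains first and biases second from mean intensities of overlapping regions; the objective, the regularization, and the `overlap_means` input format are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def photometric_align(overlap_means, n_cams, lam=0.1):
    # overlap_means: list of (i, j, m_ij, m_ji), where m_ij is the mean
    # intensity of the i/j overlap as seen by camera i, and m_ji the same
    # region as seen by camera j (assumed input format).
    # Model: corrected intensity = g_k * I_k + b_k for camera k.

    # Step 1: fit gains alone (biases held at 0), with a small
    # regularizer pulling each gain toward 1 to exclude the trivial
    # all-zero solution.
    rows, rhs = [], []
    for i, j, m_ij, m_ji in overlap_means:
        r = np.zeros(n_cams)
        r[i], r[j] = m_ij, -m_ji         # g_i * m_ij - g_j * m_ji = 0
        rows.append(r); rhs.append(0.0)
    for k in range(n_cams):
        r = np.zeros(n_cams); r[k] = lam
        rows.append(r); rhs.append(lam)  # g_k ~ 1
    g, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)

    # Step 2: with gains fixed, fit biases (pulled toward 0).
    rows, rhs = [], []
    for i, j, m_ij, m_ji in overlap_means:
        r = np.zeros(n_cams)
        r[i], r[j] = 1.0, -1.0           # b_i - b_j = g_j*m_ji - g_i*m_ij
        rows.append(r); rhs.append(g[j] * m_ji - g[i] * m_ij)
    for k in range(n_cams):
        r = np.zeros(n_cams); r[k] = lam
        rows.append(r); rhs.append(0.0)  # b_k ~ 0
    b, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return g, b                          # apply as g[k] * frame_k + b[k]
```

Fitting the two parameter sets separately keeps each step a plain linear least-squares problem, which matches the "two-step" wording, but a joint solve is equally possible.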


2021 · Vol 2021 · pp. 1-11
Author(s): Yingqi Kong

Panoramic video technology is introduced to collect multi-angle data of the design objects, build a 3D spatial model from the collected data, solve a first-order differential equation over the 3D spatial model to obtain the spatial positioning extrema of the object scales, and align and fuse the panoramic video images according to the positioning extrema above and below the scale space. The panoramic video is then generated and displayed by computer processing, so that a tourist elsewhere, wearing a display device, can watch the scene with virtual information added to the panoramic video. This addresses the technical difficulties of panoramic video stitching systems, namely the high complexity of the algorithm and the stitching cracks and "ghost" (double-image) phenomenon in the stitched video, as well as the sensitivity of 3D registration to the environment and to time-consuming target-tracking and detection algorithms. Simulation results show that the panoramic video stitching method performs well in real time and effectively suppresses stitching cracks and ghosting, and that the augmented-reality 3D registration method performs well for local enhancement of the panoramic video.
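The abstract claims suppression of stitching cracks and ghosting but does not describe the fusion rule. For reference, here is a minimal sketch of the standard feathered (linear-ramp) blend over an overlap band, a common baseline for hiding seams between aligned frames; the function and its band parameters are illustrative, not the paper's method.

```python
import numpy as np

def feather_blend(left, right, x0, width):
    # left, right: H x W x 3 float frames already warped into the shared
    # panorama plane; they overlap on the column band [x0, x0 + width).
    out = left.copy()
    out[:, x0 + width:] = right[:, x0 + width:]          # right-only region
    alpha = np.linspace(0.0, 1.0, width)[None, :, None]  # 0 = left, 1 = right
    band = slice(x0, x0 + width)
    out[:, band] = (1.0 - alpha) * left[:, band] + alpha * right[:, band]
    return out
```

Ramping the weights across the band rather than switching abruptly is what removes the visible crack; ghosting from moving objects additionally requires good alignment or seam placement, which the blend alone cannot fix.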


2021
Author(s): Thomas Iorns

The application of the newly popular content medium of 360-degree panoramic video to the widely used offline lighting technique of image-based lighting is explored, and a system solution for real-time image-based lighting of virtual objects, using only the provided 360-degree video for lighting, is developed. The system solution is suitable for use on live streaming video input and is shown to run on consumer-grade graphics hardware at the high resolutions and frame rates necessary for comfortable viewing on head-mounted displays, rendering at over 60 frames per second for stereo output at 1182x1464 per eye on a mid-range graphics card. Its use in several real-world applications is also studied, and extension to real-time shadowing and reflection is explored.
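For orientation, the core of diffuse image-based lighting from an equirectangular 360 frame can be written as a brute-force cosine-weighted integral over the environment map, sketched below in numpy; the thesis's actual real-time GPU pipeline is not reproduced here, and the axis convention and normalization are assumptions.

```python
import numpy as np

def diffuse_irradiance(env, normal):
    # env: H x W x 3 equirectangular frame in linear RGB (downsampled for
    # speed); normal: unit surface normal, y axis pointing up (assumed
    # convention).
    H, W, _ = env.shape
    theta = (np.arange(H) + 0.5) / H * np.pi            # polar angle
    phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi        # azimuth
    st, ct = np.sin(theta)[:, None], np.cos(theta)[:, None]
    dirs = np.stack([st * np.cos(phi)[None, :],         # per-pixel direction
                     np.broadcast_to(ct, (H, W)),
                     st * np.sin(phi)[None, :]], axis=-1)
    cos_w = np.clip(dirs @ np.asarray(normal), 0.0, None)  # clamped n . omega
    d_omega = (np.pi / H) * (2.0 * np.pi / W) * st      # pixel solid angle
    # Outgoing radiance for a white Lambertian surface: E / pi.
    return (env * (cos_w * d_omega)[:, :, None]).sum(axis=(0, 1)) / np.pi
```

Evaluating this per normal is far too slow for 60 fps; real-time systems typically prefilter the environment once per frame (e.g. into a small irradiance map) and then look lighting up per pixel, which is consistent with the performance the thesis reports.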




2021
Author(s): Wang Zhe, Du Jia, Li Guopeng, Song Xiaofeng

2021 · Vol 45 (4) · pp. 589-599
Author(s): I.A. Kudinov, M.B. Nikiforov, I.S. Kholopov

We derive analytical expressions for the number of elementary computational operations required to generate several personal regions of interest in a panoramic computer-vision distributed-aperture system under two alternative strategies: strategy 1 acquires a complete panoramic frame and then selects the personal regions of interest from it, while strategy 2 forms the region of interest directly for each user. The parameters of the expressions are the number of cameras in the distributed system, the number of users, and the resolutions of the panorama and user frames. For given parameter values, the formulas determine which strategy is optimal under the criterion of the minimum number of elementary computational operations for generating multiple personal regions of interest. The region of interest is generated using only a priori information about the internal and external camera parameters, obtained through photogrammetric calibration with a universal test object, without using information about scene correspondences at the boundaries of overlapping fields of view.
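The derived expressions are not reprinted in the abstract, so the sketch below only illustrates the shape of the comparison: a toy Python cost model in which strategy 1 pays once for the full panorama plus cheap per-user crops, while strategy 2 pays per user for direct ROI formation. The per-pixel costs, resolutions, and crossover point are assumptions for illustration, not the paper's formulas.

```python
def ops_strategy1(n_cams, n_users, pano_px, roi_px, k=1.0):
    # Build the full panorama once (~k ops per output pixel per
    # contributing camera), then crop each user's ROI out of it.
    return k * n_cams * pano_px + n_users * roi_px

def ops_strategy2(n_cams, n_users, roi_px, k=1.0):
    # Warp each user's ROI directly from the cameras, once per user.
    return k * n_cams * n_users * roi_px

# Example: 8 cameras, an 8192x2048 panorama, 1280x720 user ROIs.
pano_px, roi_px = 8192 * 2048, 1280 * 720
for users in (1, 8, 32):
    s1 = ops_strategy1(8, users, pano_px, roi_px)
    s2 = ops_strategy2(8, users, roi_px)
    print(f"{users:2d} users: strategy {'1' if s1 < s2 else '2'} is cheaper")
```

Even this toy model shows the qualitative behavior one would expect from the paper's analysis: direct ROI formation wins for few users, while amortizing a single panorama wins as the number of users grows.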


2021
Author(s): Dhimiter Qendri

This project details the design and implementation of an image-processing pipeline targeting real-time video stitching for semi-panoramic video synthesis. The scope of the project includes the analysis of possible approaches, the selection of processing algorithms and procedures, the design of the experimental hardware setup (including the schematic-capture design of a custom catadioptric panoramic imaging system), and the firmware/software development of the vision-processing system components. The goal is to develop a frame-stitching IP module and an efficient video registration algorithm capable of synthesizing a semi-panoramic video stream at 30 frames per second (fps) with minimal FPGA resource utilization. The developed components have been validated in hardware. Finally, a number of hybrid architectures that exploit the synergy between the CPU and FPGA sections of the ZYNQ SoC have been investigated and prototyped as alternatives to a purely hardware solution.
Keywords: video stitching, panoramic vision, FPGA, SoC, vision system, registration

