REAL TIME VIDEO STITCHING IMPLEMENTATION ON A ZYNQ FPGA SOC

2021 ◽  
Author(s):  
Dhimiter Qendri

This project details the design and implementation of an image processing pipeline that targets real-time video stitching for semi-panoramic video synthesis. The scope of the project includes the analysis of possible approaches, the selection of processing algorithms and procedures, the design of the experimental hardware set-up (including the schematic capture design of a custom catadioptric panoramic imaging system), and the firmware/software development of the vision processing system components. The goal of the project is to develop a frame-stitching IP module as well as an efficient video registration algorithm capable of synthesizing a semi-panoramic video stream at 30 frames per second (fps) with minimal FPGA resource utilization. The developed components have been validated in hardware. Finally, a number of hybrid architectures that exploit the synergy between the CPU and FPGA sections of the Zynq SoC have been investigated and prototyped as alternatives to a complete hardware solution. Keywords: video stitching, panoramic vision, FPGA, SoC, vision system, registration
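As an illustration of the kind of per-line operation a frame-stitching module performs, the following C sketch blends two horizontally adjacent grayscale frames over a fixed overlap region. It is a minimal, assumption-laden example (the function name, fixed overlap width, and 8-bit format are illustrative), not the project's IP core, which operates on streaming video in hardware.

/* Minimal sketch (not the author's IP core): stitch two 8-bit grayscale
 * frames side by side, linearly blending a fixed overlap region whose
 * width would come from the registration step. All names and the fixed
 * overlap are illustrative assumptions. */
#include <stdint.h>
#include <stddef.h>

void stitch_rows(const uint8_t *left, const uint8_t *right,
                 uint8_t *out, size_t width, size_t height,
                 size_t overlap)
{
    size_t out_width = 2 * width - overlap;

    for (size_t y = 0; y < height; ++y) {
        const uint8_t *l = left  + y * width;
        const uint8_t *r = right + y * width;
        uint8_t *o = out + y * out_width;

        /* Left-only region. */
        for (size_t x = 0; x < width - overlap; ++x)
            o[x] = l[x];

        /* Overlap: linear ramp from the left frame to the right frame. */
        for (size_t x = 0; x < overlap; ++x) {
            unsigned wr = (overlap > 1) ? (unsigned)(x * 255 / (overlap - 1)) : 255;
            unsigned wl = 255 - wr;
            o[width - overlap + x] =
                (uint8_t)((wl * l[width - overlap + x] + wr * r[x]) / 255);
        }

        /* Right-only region. */
        for (size_t x = overlap; x < width; ++x)
            o[width + (x - overlap)] = r[x];
    }
}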



When a camera or other imaging system captures an image, the vision system for which it is intended often cannot use it directly. There are several possible reasons for this: the image may contain random intensity variation, illumination variation, or poor contrast. These drawbacks must be addressed at the earliest processing stages for optimal vision processing. This chapter discusses different filtering approaches for this purpose. It begins with the Gaussian filter, followed by a brief review of other frequently used approaches, and also presents the hardware architectures of these filters.
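For concreteness, here is a minimal C sketch of the 3x3 binomial approximation of the Gaussian filter (kernel [1 2 1; 2 4 2; 1 2 1] / 16), which is the form most often mapped onto a line-buffered hardware pipeline. The function and parameter names are illustrative and not taken from the chapter.

/* 3x3 binomial (approximate Gaussian) smoothing of an 8-bit image.
 * Borders are left unfiltered for brevity. */
#include <stdint.h>
#include <stddef.h>

void gaussian3x3(const uint8_t *in, uint8_t *out,
                 size_t width, size_t height)
{
    static const int k[3][3] = { {1, 2, 1}, {2, 4, 2}, {1, 2, 1} };

    for (size_t y = 1; y + 1 < height; ++y) {
        for (size_t x = 1; x + 1 < width; ++x) {
            int acc = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    acc += k[dy + 1][dx + 1] *
                           in[(y + dy) * width + (x + dx)];
            out[y * width + x] = (uint8_t)(acc >> 4);  /* divide by 16 */
        }
    }
}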


2021 ◽  
Author(s):  
Gvarami Labartkava

Human vision is a complex system that processes frames and retrieves information in real time while optimizing the use of memory, energy, and computational resources. It can be widely utilized in many real-world applications, from security systems to space missions. This research investigates fundamental principles of human vision and accordingly develops an FPGA-based video processing system with binocular vision, capable of high-performance, real-time tracking of moving objects in 3D space. The undertaken research and implementation consist of: 1. analysis of the concepts and methods of the human vision system; 2. development of a stereo and peripheral vision prototype on a system-on-programmable-chip (SoPC) for multi-object motion detection and tracking; 3. verification, test runs, and analysis of the experimental results obtained on the prototype with respect to the performance constraints. The implemented system provides a platform for real-time applications that are limited by current approaches.
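As a hint of the computation behind binocular depth estimation, the C sketch below searches for the disparity of a small block along one scanline of a rectified stereo pair using the sum of absolute differences (SAD). This is a generic illustration under assumed names and parameters, not the thesis implementation.

/* SAD block matching along one scanline of a rectified stereo pair.
 * The caller must ensure x + block stays within the row. */
#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

/* Returns the disparity (pixel shift) of a small block centered at column x
 * in the left scanline, searched over [0, max_disp) in the right scanline. */
int sad_disparity(const uint8_t *left, const uint8_t *right,
                  int x, int block, int max_disp)
{
    int best_d = 0, best_cost = INT_MAX;

    for (int d = 0; d < max_disp && x - d - block >= 0; ++d) {
        int cost = 0;
        for (int i = -block; i <= block; ++i)
            cost += abs((int)left[x + i] - (int)right[x - d + i]);
        if (cost < best_cost) {
            best_cost = cost;
            best_d = d;
        }
    }
    return best_d;  /* depth is proportional to baseline * focal / disparity */
}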


2014 ◽  
Vol 668-669 ◽  
pp. 1098-1101
Author(s):  
Jian Wang ◽  
Zhen Hai Zhang ◽  
Ke Jie Li ◽  
Hai Yan Shao ◽  
Tao Xu ◽  
...  

Catadioptric panoramic vision systems have been widely used in many fields and play a particularly important role in environment perception for unmanned platforms. However, the resolution of such systems is currently not very high, usually less than 5 megapixels, and even when the resolution is high, the unwrapping and rectification of the panoramic video is carried out off-line. Furthermore, these systems are typically applied while stationary or moving slowly. This paper proposes an unwrapping and rectification method for a high-resolution catadioptric panoramic vision system used during non-stationary motion. The method segments the dynamic circular mark region accurately and obtains the coordinates of the center of the circular image in real time, shortening the image processing time; because the center coordinates and radius of the circular mark region are obtained, the image distortion caused by inaccurate center coordinates is reduced. During image rectification, after obtaining the radial distortion parameters (K1, K2, K3), the decentering distortion parameters (P1, P2), and a correction factor with no physical meaning, these are used to fit the rectification polynomial so that the panoramic video can be rectified without distortion.
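For reference, the radial and decentering parameters named above correspond to the standard Brown-Conrady distortion model. The C sketch below shows the forward model in normalized image coordinates; the paper itself fits a rectification polynomial from these parameters, which is not reproduced here, and the function name is illustrative.

/* Standard radial (K1, K2, K3) + decentering/tangential (P1, P2) distortion
 * model applied to a normalized, centered image point (x, y). */
void distort_point(double x, double y,
                   double K1, double K2, double K3,
                   double P1, double P2,
                   double *xd, double *yd)
{
    double r2 = x * x + y * y;
    double r4 = r2 * r2;
    double r6 = r4 * r2;
    double radial = 1.0 + K1 * r2 + K2 * r4 + K3 * r6;

    /* radial term plus decentering (tangential) term */
    *xd = x * radial + 2.0 * P1 * x * y + P2 * (r2 + 2.0 * x * x);
    *yd = y * radial + P1 * (r2 + 2.0 * y * y) + 2.0 * P2 * x * y;
}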


2013 ◽  
Vol 300-301 ◽  
pp. 729-734
Author(s):  
Hong Rui Ma ◽  
Jian Xian Cai ◽  
Rui Hong Yu

Most existing machine vision processing systems are controlled by 8-bit or 16-bit processors, which severely constrains complex algorithms and multi-tasking in the vision system. The DaVinci DM355, which integrates an ARM926 RISC processor core and a specialized image processor, is a programmable DMSoC development platform offering digital multimedia codecs, high integration, and low power consumption. The development goal of the machine vision system based on the DaVinci DM355 is to build a low-power hardware development board around the DM355, port the Linux operating system to the board, and develop the corresponding drivers. This provides the basis for realizing complex algorithms and multi-tasking in machine vision systems.




Author(s):  
Chawki El Zant ◽  
Quentin Charrier ◽  
Khaled Benfriha ◽  
Patrick Le Men

The level of industrial performance is a vital issue for any company wishing to develop and acquire more market share. This article presents a novel approach to integrating intelligent visual inspection into MES control systems in order to gain performance. The idea is to adapt an intelligent image processing system via in-situ cameras to monitor the production system. The images are analyzed in real time via machine learning, interpreting the visualized scene and interacting with features of the MES system such as maintenance, quality control, security, and operations. This novel technological brick, combined with the flexibility of production, contributes to optimizing the system in terms of autonomy and responsiveness in detecting anomalies, whether already encountered or new. The smart visual inspection system is considered a Cyber-Physical System (CPS) brick integrated into the manufacturing system, which acts as an edge computing node in the final architecture of the platform. This smart CPS represents the first level of computation and analysis, performed in real time thanks to the embedded intelligence. Cloud computing is a perspective for future work and will represent the second level of computation, in deferred time, in order to analyze newly encountered anomalies and identify potential solutions to integrate into the MES. Ultimately, this approach strengthens the robustness of the control systems and increases the overall performance of industrial production.


2012 ◽  
Vol 187 ◽  
pp. 109-114
Author(s):  
Yu Bin Zhou ◽  
Yu Ning Yang

To realize a real-time omni-vision system for an autonomously wandering intelligent car, an image processing system with six vision channels based on an FPGA and DSPs is designed. In the system, two ZBT SRAM chips are used as input and output caches for high-speed data transfer. An FPGA chip is responsible for the core control logic and video synchronization. Digital video is sent to the processing module over a Camera Link bus. Data are exchanged between the FPGA and the DSPs via EMIF and McBSP. EDMA is used for data transfer between the SRAM in the FPGA and the ZBT SRAM, and QDMA is used to transfer 2D data as 1D blocks into the DSP cache. Tasks are assigned to the chips by μC/OS running on the master DSP. Together, these techniques realize real-time data sampling and processing for multi-channel vision.
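To illustrate what the 2D-to-1D QDMA transfer accomplishes, the following plain C sketch gathers a strided rectangular tile from external memory into a contiguous buffer. In the actual system this is done by programming the DSP's QDMA hardware rather than by the CPU; no TI-specific API is shown, and all names are illustrative.

/* Illustrative only: the effect of a 2D-to-1D QDMA transfer, written as a
 * plain C strided copy. It gathers a tile (line_count lines of line_bytes
 * each, separated by src_pitch bytes in external ZBT SRAM) into a
 * contiguous buffer in DSP internal memory. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

void gather_tile_2d_to_1d(const uint8_t *src, size_t src_pitch,
                          uint8_t *dst, size_t line_bytes,
                          size_t line_count)
{
    for (size_t line = 0; line < line_count; ++line) {
        memcpy(dst + line * line_bytes,   /* contiguous destination */
               src + line * src_pitch,    /* strided source */
               line_bytes);
    }
}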

