Review on Deinterlacing Algorithms

2021
Vol 23 (06)
pp. 1025-1032
Author(s):
Karthik Karthik
Vinay Varma B
Akshay Narayan Pai
...

Interlacing is a commonly used technique for doubling the perceived frame rate without adding bandwidth in television broadcasting and video recording. During playback, however, it exhibits disturbing visual artifacts such as flickering and combing. As a result, modern display devices use video deinterlacing, which converts interlaced video to progressive-scan format to overcome these limitations. The conversion is achieved by interpolating the interlaced video. Current deinterlacing approaches either neglect temporal information, achieving real-time performance at the cost of visual quality, or estimate motion for better deinterlacing at a higher computational cost. This paper surveys deinterlacing algorithms that apply both spatial and temporal methods, examining motion-adaptive and non-motion-adaptive approaches as well as the time complexity of their implementations.
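As a minimal illustration of the two families surveyed (spatial-only versus temporal methods), the sketch below reconstructs a full frame from a single field either by line averaging ("bob") or by copying the previous field of opposite parity ("weave"). It is an illustrative NumPy sketch, not an algorithm taken from the surveyed papers.

```python
import numpy as np

def deinterlace_spatial(field_even: np.ndarray) -> np.ndarray:
    """Fill the missing lines by averaging neighbouring lines ('bob')."""
    h, w = field_even.shape
    frame = np.zeros((2 * h, w), dtype=np.float32)
    frame[0::2] = field_even
    below = np.vstack([field_even[1:], field_even[-1:]])  # repeat last line at the border
    frame[1::2] = (field_even.astype(np.float32) + below) / 2
    return frame

def deinterlace_temporal(field_even: np.ndarray, prev_field_odd: np.ndarray) -> np.ndarray:
    """Fill the missing lines from the previous field of opposite parity ('weave')."""
    h, w = field_even.shape
    frame = np.zeros((2 * h, w), dtype=np.float32)
    frame[0::2] = field_even
    frame[1::2] = prev_field_odd  # cheap, but produces combing wherever there is motion
    return frame
```

The spatial path is what real-time deinterlacers without temporal information do; the temporal path shows why motion handling matters, since weaving static content is perfect but moving content combs.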

Sensors
2020
Vol 20 (2)
pp. 534
Author(s):
Yuan He
Shunyi Zheng
Fengbo Zhu
Xia Huang

The truncated signed distance field (TSDF) has been applied as a fast, accurate, and flexible geometric fusion method in 3D reconstruction of industrial products based on a hand-held laser line scanner. However, this method has problems with the surface reconstruction of thin products: the surface mesh collapses into the interior of the model, producing topological errors such as overlaps, intersections, or gaps. Meanwhile, existing TSDF methods ensure real-time performance through significant graphics processing unit (GPU) memory usage, which limits the scale of the reconstruction scene. In this work, we propose three improvements to existing TSDF methods: (i) a real-time thin-surface attribution judgment that solves the problem of interference between the opposite sides of a thin surface; measurements originating from different sides of a thin surface are distinguished by the angle between the surface normal and the observation line of sight; (ii) a post-processing method that automatically detects and repairs topological errors in areas where the thin-surface attribution may be misjudged; (iii) a framework that integrates central processing unit (CPU) and GPU resources to implement our 3D reconstruction approach, which ensures real-time performance and reduces GPU memory usage. Results show that the proposed method provides more accurate 3D reconstruction of thin surfaces, comparable to state-of-the-art laser line scanners with 0.02 mm accuracy. In terms of performance, the algorithm guarantees a frame rate of more than 60 frames per second (FPS) with a GPU memory footprint under 500 MB. Overall, the proposed method achieves real-time, high-precision 3D reconstruction of thin surfaces.
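The thin-surface attribution judgment in (i) hinges on the angle between the surface normal and the observation line of sight. A minimal sketch of that kind of front/back test is shown below; the function name and angular threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def same_side(surface_normal: np.ndarray,
              sensor_origin: np.ndarray,
              point: np.ndarray,
              max_angle_deg: float = 75.0) -> bool:
    """Return True if the observation ray looks at the side the normal points to.

    The angle between the outward surface normal and the direction from the
    measured point towards the sensor decides which side of a thin surface the
    measurement belongs to; the 75 degree threshold is an assumed tolerance
    for grazing views.
    """
    view_dir = sensor_origin - point
    view_dir = view_dir / np.linalg.norm(view_dir)
    n = surface_normal / np.linalg.norm(surface_normal)
    cos_angle = float(np.dot(n, view_dir))
    return cos_angle > np.cos(np.deg2rad(max_angle_deg))
```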


2015
Vol 75 (2)
Author(s):
Abdullah Bade
Ching Sue Ping
Siti Hasnah Tanalol

For the past two decades, the challenges of collision detection in cloth simulation have attracted numerous researchers. A simple mass-spring model is used to model the cloth, where the movement of the particles within the cloth is controlled by applying Newton's second law. After the modelling stage, a collision detection algorithm is applied to the cloth. The collision detection technique used is a bounding sphere hierarchy; a quadtree is used to partition the bounding spheres, and the collision search follows a top-down approach. A prototype collision detection system for cloth simulation was developed and several experiments were conducted. The time taken to execute the system is around 235.258 milliseconds, and the average frame rate is 22 frames per second, which is close to real time. The time taken for the collision detection system to traverse from the root to the nodes was 23 seconds. In conclusion, the computational cost of the bounding sphere hierarchy is higher because the bounding spheres require more vertices during generation; however, its execution time is faster than that of the AABB hierarchy.
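For context, the mass-spring step under Newton's second law (a = F/m) and the sphere-sphere overlap test used when descending a bounding sphere hierarchy each reduce to a few lines; the sketch below is illustrative and not the authors' code.

```python
import numpy as np

def step_particles(pos: np.ndarray, vel: np.ndarray,
                   forces: np.ndarray, mass: float, dt: float):
    """One semi-implicit Euler step applying Newton's second law, a = F/m."""
    acc = forces / mass          # (N, 3) spring + gravity forces per particle
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

def spheres_overlap(c1: np.ndarray, r1: float, c2: np.ndarray, r2: float) -> bool:
    """Bounding-sphere test used while descending the hierarchy top-down."""
    return float(np.linalg.norm(c1 - c2)) <= r1 + r2
```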


Author(s):  
Irina Znamenskaya ◽  
Nikolay Sysoev ◽  
Igor Doroshchenko

Digital imaging has become one of the main tools for studying unsteady flows. Modern high-speed cameras support video recording at high frame rates, which makes it possible to study extended high-speed processes. We demonstrate several animations: the evolution of a water temperature field recorded at 115 Hz, and high-speed shadowgraph visualizations of different flows, including a water jet formation process (100,000 frames/s) and the shock waves created by pulsed discharges (124,000 frames/s). As an example of plasma flow visualization, we also present nine sequential images visualizing the shock wave from a pulsed gas discharge, obtained with a high-speed CCD camera with a 100 ns delay between frames. We developed in-house software based on machine vision and machine learning techniques for automatic processing of the flow animations. Examples of automatic oblique shock detection using Canny edge detection and the Hough transform, and of thermal plume detection based on a pre-trained convolutional neural network, are provided and discussed.
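The oblique-shock detection step combines Canny edge detection with a Hough transform; a hedged sketch of that pipeline using standard OpenCV calls follows, with placeholder thresholds rather than the values used in the authors' in-house software.

```python
import cv2
import numpy as np

def detect_shock_lines(gray_frame: np.ndarray):
    """Detect straight, shock-like edges in a shadowgraph frame."""
    edges = cv2.Canny(gray_frame, 50, 150)                 # placeholder thresholds
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=5)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (x1, y1, x2, y2)
```

The slope of each detected segment gives the oblique shock angle directly, which is presumably what an automatic processing pipeline would log per frame.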


2005
Vol 05 (04)
pp. 485-490
Author(s):
IVAN CORAZZA
MATTEO BOTTEGHI
CORINNA TERENZIANI
SEBASTIANO ZANNOLI
PASQUALINO MAIETTA LATESSA
...

In some medical applications, the simultaneous acquisition of signals corresponding to physiological parameters and video recording allows a more accurate analysis of the problem and a more complete diagnosis. In rehabilitation, the correlation between parameters of interest (angles, speed, power, EMG) and images of the patient's movements is important for devising an adequate training protocol. In neurology, some pathologies need to be investigated by comparing images and EEG signals. Commonly used systems are made up of two different apparatuses, one for signal acquisition and one for video recording. Because they are separate pieces of equipment, integrating data and video is possible only at the expense of information and of the possibility of quantitative analysis. This paper describes a new digital system for the concomitant acquisition of signals and video. It is a low-cost, PC-based, easy-to-use instrument. Data and video are recorded in standard formats and can be analyzed in a post-acquisition stage. The time resolution of the system is set by the video frame rate (25 fps), although the A/D conversion system allows sampling frequencies up to 8000 Hz. The prototype was tested to verify the synchronism between data and frames, and differences smaller than the resolution (40 ms) were found. The feasibility of the system was checked in two different applications: rehabilitation training with an isotonic leg-extension machine and a daily EEG examination of a patient at the Neurological Institute of Bologna University. Both applications gave good results in terms of time resolution, synchronism, and user-friendliness.
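The 40 ms resolution is simply the frame period at 25 fps (1/25 s = 40 ms). Assuming a common start time, mapping each A/D sample (acquired at up to 8000 Hz) to the video frame covering the same instant can be sketched as follows; this is an illustration, not the system's actual synchronization code.

```python
def sample_to_frame(sample_index: int,
                    sampling_rate_hz: float = 8000.0,
                    frame_rate_fps: float = 25.0) -> int:
    """Map an A/D sample to the video frame covering the same instant.

    At 25 fps one frame lasts 1/25 s = 40 ms, which is the system's
    data/video alignment resolution.
    """
    t = sample_index / sampling_rate_hz      # sample timestamp in seconds
    return int(t * frame_rate_fps)           # index of the frame containing t
```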


2011
Vol 23 (1)
pp. 53-65
Author(s):
Yao-Dong Wang
Idaku Ishii
Takeshi Takaki
Kenji Tajima
...

This paper introduces a high-speed vision system called IDP Express, which can execute real-time image processing and high-frame-rate (HFR) video recording simultaneously. In IDP Express, 512×512-pixel images from two camera heads and the results processed on a dedicated FPGA (Field Programmable Gate Array) board are transferred to standard PC memory at a rate of 1000 fps or more. Owing to the simultaneous HFR video processing and recording, IDP Express can be used as an intelligent video-logging system for long-term analysis of high-speed phenomena. In this paper, a real-time abnormal behavior detection algorithm was implemented on IDP Express to capture HFR videos of the crucial moments of unpredictable abnormal behaviors in high-speed periodic motions. Several experiments were performed on a high-speed slider machine operating repetitively at a frequency of 15 Hz, and videos of the abnormal behaviors were automatically recorded, verifying the effectiveness of our intelligent HFR video-logging system.
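The detection algorithm itself is not detailed in this abstract; purely as an illustration of how abnormality can be flagged in periodic motion, the sketch below compares each frame with the frame one motion period earlier (about 67 frames at 1000 fps and 15 Hz) and triggers when the difference exceeds an assumed threshold.

```python
import numpy as np

FRAMES_PER_PERIOD = round(1000 / 15)   # ~67 frames at 1000 fps for 15 Hz periodic motion

def is_abnormal(frame: np.ndarray, frame_one_period_ago: np.ndarray,
                threshold: float = 12.0) -> bool:
    """Flag a frame whose mean absolute difference from the same phase of the
    previous cycle exceeds a threshold (values are illustrative only)."""
    diff = np.abs(frame.astype(np.float32) - frame_one_period_ago.astype(np.float32))
    return float(diff.mean()) > threshold
```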


2017
Vol 10 (13)
pp. 180
Author(s):
Maheswari R
Pattabiraman V
Sharmila P

Objective: The prospective need for SIMD (Single Instruction, Multiple Data) applications such as video and image processing in a single system requires greater flexibility in computation to deliver high-quality real-time data. This paper analyses an FPGA (Field Programmable Gate Array)-based, high-performance Reconfigurable OpenRISC1200 (ROR) soft-core processor for SIMD. Methods: The ROR1200 ensures performance improvement through data-level parallelism, executing SIMD instructions simultaneously in HPRC (High Performance Reconfigurable Computing) at reduced resource utilization through an RRF (Reconfigurable Register File) with multiple core functionalities. This work analyses the functionality of the reconfigurable architecture by illustrating the implementation of two image processing operations: image convolution and image quality improvement. The MAC (Multiply-Accumulate) unit of the ROR1200 is used to perform image convolution, and the execution unit with HPRC is used for image quality improvement. Result: With parallel execution in multiple cores, the proposed processor improves image quality by doubling the frame rate up to 60 fps (frames per second) with a peak power consumption of 400 mW. The processor thus achieves a computational cost of 12 ms at a refresh rate of 60 Hz, with a MAC critical-path delay of 1.29 ns. Conclusion: This FPGA-based processor is a feasible solution for portable embedded SIMD-based applications that need high performance at reduced power consumption.
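The image convolution carried out by the MAC unit is a repeated multiply-accumulate over a kernel window; a plain software reference (not the ROR1200 hardware path) looks like this.

```python
import numpy as np

def convolve2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Reference 2D convolution: each output pixel is a chain of multiply-accumulates."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1), dtype=np.float32)
    flipped = kernel[::-1, ::-1]              # true convolution flips the kernel
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * flipped)  # the MAC loop
    return out
```

A SIMD datapath accelerates exactly this inner multiply-accumulate, which is why the MAC critical path dominates the design's timing.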


2019
Vol 9 (21)
pp. 4707
Author(s):
Jungsik Park
Byung-Kuk Seo
Jong-Il Park

This paper proposes a framework that allows 3D freeform manipulation of a face in live video. Unlike existing approaches, the proposed framework provides natural 3D manipulation of a face without background distortion and interactive face editing by a user’s input, which leads to freeform manipulation without any limitation of range or shape. To achieve these features, a 3D morphable face model is fitted to a face region in a video frame and is deformed by the user’s input. The video frame is then mapped as a texture to the deformed model, and the model is rendered on the video frame. Because of the high computational cost, parallelization and acceleration schemes are also adopted for real-time performance. Performance evaluation and comparison results show that the proposed framework is promising for 3D face editing in live video.
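One concrete step of this pipeline, mapping the video frame as a texture onto the fitted model, amounts to projecting the model's vertices into the frame and normalizing the result to texture coordinates. The sketch below assumes a simple pinhole camera with intrinsics K and vertices already expressed in camera coordinates; it is an illustration, not the authors' implementation.

```python
import numpy as np

def project_to_texture_coords(vertices: np.ndarray, K: np.ndarray,
                              frame_w: int, frame_h: int) -> np.ndarray:
    """Project fitted 3D face vertices into the video frame and normalise them
    to [0, 1] texture coordinates, so the frame can be mapped back onto the
    deformed model (pinhole camera assumption)."""
    proj = (K @ vertices.T).T                  # (N, 3) homogeneous image points
    uv = proj[:, :2] / proj[:, 2:3]            # perspective divide -> pixel coordinates
    return uv / np.array([frame_w, frame_h])   # normalised texture coordinates
```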


Water
2021
Vol 13 (16)
pp. 2206
Author(s):
Evangelos Rozos
Katerina Mazi
Antonis D. Koussis

The recent technological advances in remote sensing (e.g., unmanned aerial vehicles, digital image acquisition, etc.) have vastly improved the applicability of image velocimetry in hydrological studies. Thus, image velocimetry has become an established technique with an acceptable error for practical applications (the error can be lower than 10%). The main sources of error have been attributed to incomplete intrinsic and extrinsic camera calibration, to non-constant frame rate, and to spurious low velocities due to moving objects that are irrelevant to the streamflow. Some researchers have even employed probabilistic approaches (Monte Carlo simulations) to analyze the uncertainty introduced during the camera calibration procedure. On the other hand, the endogenous uncertainty of the image velocimetry algorithms per se has received little attention. In this study, a probabilistic approach is employed to systematically analyze this uncertainty. It is argued that this analysis may not only improve the performance of image velocimetry methods but also provide information regarding the impact of the video recording conditions (e.g., low density of features, oblique camera angle, low resolution, etc.) on the accuracy of the estimated values. The suggested method has been tested in six case studies whose data have previously been made publicly available by independent researchers.
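A hedged sketch of the kind of probabilistic (Monte Carlo) resampling such an analysis involves is given below: the velocimetry estimator is rerun many times on randomly perturbed frames and the spread of the estimates is summarized. The estimator argument is a stand-in, not the authors' code.

```python
import numpy as np

def monte_carlo_uncertainty(estimate_velocity, frames, n_runs: int = 200,
                            noise_std: float = 1.0, seed: int = 0):
    """Propagate input perturbations through a velocimetry estimator.

    `estimate_velocity(frames)` is a placeholder for any image-velocimetry
    routine returning a scalar surface velocity.
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_runs):
        noisy = [f + rng.normal(0.0, noise_std, f.shape) for f in frames]
        samples.append(estimate_velocity(noisy))
    samples = np.asarray(samples)
    return samples.mean(), samples.std()  # central estimate and its spread
```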

