A new optical flow estimation method in joint EO/IR video surveillance

Author(s): Hong Man, Robert J. Holt, Jing Wang, Rainer Martini, Ravi Netravali, ...
2019, Vol 26 (2), pp. 139-157
Author(s): Sidong Wu, Gexiang Zhang, Ferrante Neri, Ming Zhu, Tao Jiang, ...
2014, pp. 112-117
Author(s): Rauf Sadykhov, Denis Lamovsky

This paper describes a new algorithm for computing the cross-correlation function. We combine a box filtering technique for computing cross-correlation coefficients with parallel processing using the MMX/SSE extensions of modern general-purpose processors. We use this algorithm for real-time optical flow estimation between frames of a video sequence. The algorithm was tested on real-world video sequences captured by the cameras of a video surveillance system.
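The key idea behind box filtering is that the windowed sums inside a cross-correlation coefficient can be computed in constant time per window using a summed-area table, independent of window size. The sketch below illustrates this in pure NumPy (the SIMD/MMX/SSE parallelization is omitted, and the function names and parameters are illustrative, not the authors' implementation): box-filtered normalized cross-correlation, and a single block-matching flow vector chosen by maximum correlation over a search range.

```python
import numpy as np

def box_sum(img, r):
    """Sum of img over a (2r+1)x(2r+1) window at each pixel, via a
    summed-area table: O(1) per window regardless of window size."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge').astype(np.float64)
    # prepend a zero row/column so sat[i, j] = sum of p[:i, :j]
    sat = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return sat[k:, k:] - sat[:-k, k:] - sat[k:, :-k] + sat[:-k, :-k]

def ncc_map(f0, f1, r=3):
    """Windowed normalized cross-correlation between two frames,
    built entirely from box sums of f0, f1, and their products."""
    n = (2 * r + 1) ** 2
    s0, s1 = box_sum(f0, r), box_sum(f1, r)
    cov = box_sum(f0 * f1, r) - s0 * s1 / n
    var0 = box_sum(f0 * f0, r) - s0 * s0 / n
    var1 = box_sum(f1 * f1, r) - s1 * s1 / n
    return cov / np.sqrt(np.maximum(var0 * var1, 1e-12))

def flow_at(f0, f1, y, x, search=4, r=3):
    """Integer optical flow vector at (y, x): the displacement whose
    shifted window in f1 best correlates with the window in f0."""
    best, vec = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(f1, -dy, 0), -dx, 1)
            c = ncc_map(f0, shifted, r)[y, x]
            if c > best:
                best, vec = c, (dy, dx)
    return vec
```

A real-time version would compute `ncc_map` once per displacement for all pixels simultaneously (amortizing the box sums over the whole frame), which is where the SIMD parallelism described in the abstract pays off.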


2021
Author(s): Guoyu Zuo, Chengwei Zhang, Jiayuan Tong, Daoxiong Gong, Mengqian You

2021
Author(s): Ali Bou Nassif, Qassim Nasir, Manar Abu Talib, Omar Mohamed Gouda

Abstract: Creating deepfake multimedia, and especially deepfake videos, has become much easier these days due to the availability of deepfake tools and the virtually unlimited number of face images found online. Research and industry communities have dedicated time and resources to developing detection methods that expose these fake videos. Although detection methods have improved over the past few years, synthesis methods have also progressed, allowing the production of deepfake videos that are increasingly difficult to distinguish from real ones. This paper proposes an improved optical flow estimation-based method to detect and expose the discrepancies between video frames. Augmentation and modification techniques are used to improve the system's overall accuracy. Furthermore, the system is trained on Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) to explore the effects and benefits of each type of hardware for deepfake detection. TPUs were found to have shorter training times than GPUs. VGG-16 is the best-performing model when used as the backbone of the system, achieving around 82.0% detection accuracy when trained on GPUs and 71.34% when trained on TPUs.
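The abstract does not specify which flow estimator feeds the CNN backbone, so the following is only an assumed illustration of the front end of such a pipeline: a dense Lucas–Kanade optical flow (a standard estimator, not necessarily the authors' choice) whose output is stacked into an image-like tensor, the kind of input a backbone such as VGG-16 would then classify as real or fake. All function names are hypothetical.

```python
import numpy as np

def window_sum(a, r):
    """Sum of a over a (2r+1)x(2r+1) window at each pixel (summed-area table)."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    s = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]

def dense_flow(f0, f1, r=2):
    """Dense optical flow between two grayscale frames: the classic
    Lucas-Kanade least-squares solution in a window around every pixel."""
    Ix = np.gradient(f0, axis=1)          # spatial derivatives
    Iy = np.gradient(f0, axis=0)
    It = f1 - f0                          # temporal derivative
    A11 = window_sum(Ix * Ix, r)
    A12 = window_sum(Ix * Iy, r)
    A22 = window_sum(Iy * Iy, r)
    b1 = -window_sum(Ix * It, r)
    b2 = -window_sum(Iy * It, r)
    det = A11 * A22 - A12 * A12
    det = np.where(np.abs(det) < 1e-9, np.inf, det)  # flat regions -> zero flow
    u = (A22 * b1 - A12 * b2) / det       # horizontal component
    v = (A11 * b2 - A12 * b1) / det       # vertical component
    return u, v

def flow_to_tensor(u, v):
    """Stack the flow into a 3-channel array (u, v, magnitude) -- an
    image-like input that a CNN backbone could then classify."""
    return np.stack([u, v, np.hypot(u, v)], axis=-1)
```

In a full detector, each consecutive frame pair would be converted this way and the resulting tensors fed to the (pretrained, fine-tuned) backbone, with the augmentation mentioned in the abstract applied to the frames before flow computation.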

