Spatio-temporal super-resolution reconstruction based on robust optical flow and Zernike moment for dynamic image sequences

Author(s):  
Meiyu Liang ◽  
Junping Du ◽  
Xiaoyong Li ◽  
Liang Xu ◽  
Honggang Liu ◽  
...  


Author(s):  
Hans-Hellmut Nagel

Many investigations of image sequences can be understood on the basis of a few concepts for which computational approaches are becoming increasingly available. The estimation of optical flow fields is discussed, exhibiting a common foundation for feature-based and differential approaches. The interpretation of optical flow fields has so far been concerned mostly with approaches that infer, from an image sequence, the 3-D structure of a rigid point configuration in 3-D space and its motion relative to the image sensor. The combination of stereo and motion provides additional incentives to evaluate image sequences, especially for the control of robots and autonomous vehicles. Advances in all these areas lead to the desire to describe the spatio-temporal development recorded by an image sequence not only at the level of geometry, but also at higher conceptual levels, for example by natural language descriptions.
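The differential approach mentioned in the survey can be made concrete with a short sketch: a minimal Lucas-Kanade-style estimator of the flow vector at a single pixel, solving the brightness-constancy constraint I_x*u + I_y*v + I_t = 0 by least squares over a small window. The window size and regularization are illustrative choices, not taken from the survey.

```python
import numpy as np

def lucas_kanade_point(frame0: np.ndarray, frame1: np.ndarray,
                       y: int, x: int, win: int = 7) -> np.ndarray:
    """Estimate the flow vector (u, v) at pixel (y, x); the pixel must
    lie at least win // 2 away from the image border."""
    # Spatial gradients from the first frame, temporal difference between frames.
    Iy, Ix = np.gradient(frame0.astype(np.float64))
    It = frame1.astype(np.float64) - frame0.astype(np.float64)

    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)  # n x 2 system
    b = -It[sl].ravel()

    # Normal equations; the small epsilon keeps A^T A invertible in flat regions.
    return np.linalg.solve(A.T @ A + 1e-6 * np.eye(2), A.T @ b)
```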


2012 ◽  
Vol 532-533 ◽  
pp. 1680-1684
Author(s):  
Meng He Li ◽  
Chuan Lin ◽  
Jing Bei Tian ◽  
Sheng Hui Pan

To address the weaknesses of conventional POCS algorithms, this paper proposes a novel spatio-temporal adaptive super-resolution reconstruction algorithm for video. The spatio-temporal adaptive mechanism, built on the POCS super-resolution reconstruction algorithm, effectively prevents the reconstructed image from being degraded by inaccurate motion information and avoids the noise amplification that arises when conventional POCS algorithms reconstruct image sequences containing dramatic motion. Experimental results show that the spatio-temporal adaptive algorithm not only effectively alleviates noise amplification but also surpasses the traditional POCS algorithms in signal-to-noise ratio.
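For context, a conventional POCS reconstruction alternates projections of the high-resolution estimate onto data-consistency sets, one per low-resolution observation. Below is a minimal sketch of a single such projection, assuming a uniform blur PSF and integer decimation; the consistency threshold delta and all parameter values are illustrative, and the paper's spatio-temporal adaptive mechanism is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pocs_projection(hr: np.ndarray, lr: np.ndarray, scale: int,
                    delta: float = 2.0) -> np.ndarray:
    """Project the HR estimate onto the set consistent with one LR frame
    (HR dimensions are assumed to be exact multiples of scale)."""
    # Simulate the observation: blur, then decimate the HR estimate.
    simulated = uniform_filter(hr, size=scale)[::scale, ::scale]

    # Only residuals outside the +/- delta consistency band trigger a correction.
    r = lr - simulated
    r = np.where(r > delta, r - delta, np.where(r < -delta, r + delta, 0.0))

    # Back-project each LR residual uniformly onto its HR support.
    return hr + np.kron(r, np.ones((scale, scale))) / (scale * scale)
```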


2013 ◽  
Vol 2013 ◽  
pp. 1-14
Author(s):  
Meiyu Liang ◽  
Junping Du ◽  
Honggang Liu

In order to improve the spatiotemporal resolution of video sequences, a novel spatiotemporal super-resolution reconstruction model (STSR) based on robust optical flow and Zernike moments is proposed in this paper, which integrates spatial and temporal resolution reconstruction into a unified framework. The model does not rely on accurate estimation of subpixel motion and is robust to noise and rotation. Moreover, it can effectively overcome the problems of holes and block artifacts. First, we propose an efficient robust optical flow motion estimation model based on preserving motion details; then we introduce a biweighted fusion strategy to implement the spatiotemporal motion compensation. Next, combining a self-adaptive region correlation judgment strategy, we construct a fast fuzzy registration scheme based on Zernike moments for better STSR with higher efficiency. The final video sequences with high spatiotemporal resolution are then obtained by fusing the complementary and redundant information between adjacent video frames, exploiting their nonlocal self-similarity. Experimental results demonstrate that the proposed method outperforms existing methods in terms of both subjective visual and objective quantitative evaluations.
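The rotation invariance that the fuzzy registration scheme relies on comes from the magnitudes of Zernike moments. Below is a minimal sketch computing |A_nm| for a square patch mapped onto the unit disk; the moment order and normalization are illustrative, and the paper's region-correlation judgment and fusion logic are not reproduced here.

```python
import numpy as np
from math import factorial

def zernike_magnitude(patch: np.ndarray, n: int, m: int) -> float:
    """|A_nm| of a square patch; requires |m| <= n and n - |m| even."""
    size = patch.shape[0]
    # Map pixel coordinates onto the unit disk.
    coords = (np.arange(size) - (size - 1) / 2) / ((size - 1) / 2)
    xx, yy = np.meshgrid(coords, coords)
    rho, theta = np.hypot(xx, yy), np.arctan2(yy, xx)
    disk = rho <= 1.0

    # Radial polynomial R_nm(rho).
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        R += c * rho ** (n - 2 * s)

    # Complex moment; its magnitude is invariant to rotation of the patch.
    V = R * np.exp(-1j * m * theta) * disk
    A = (n + 1) / np.pi * np.sum(patch * V) * (2.0 / (size - 1)) ** 2
    return abs(A)
```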


Author(s):  
Kojiro Matsushita ◽  
Toyotaro Tokimoto ◽  
Kengo Fujii ◽  
Hirotsugu Yamamoto

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3722
Author(s):  
Byeongkeun Kang ◽  
Yeejin Lee

Motion in videos refers to the pattern of apparent movement of objects, surfaces, and edges over image sequences, caused by the relative movement between a camera and a scene. Motion, like scene appearance, is an essential feature for estimating a driver's visual attention allocation in computer vision. However, while driver attention prediction models focusing on scene appearance have been well studied, the fact that motion can be a crucial factor in estimating a driver's attention has not been thoroughly studied in the literature. Therefore, in this work, we investigate the usefulness of motion information in estimating a driver's visual attention. To analyze its effectiveness, we develop a deep neural network framework that predicts attention locations and attention levels from optical flow maps, which represent the movement of content in videos. We validate the proposed motion-based prediction model by comparing its performance to that of current state-of-the-art prediction models using RGB frames. The experimental results on a real-world dataset confirm our hypothesis that motion contributes to prediction accuracy, and that there remains a margin for further accuracy improvement using motion features.
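As an illustration of the motion input described above, the sketch below produces a dense optical-flow map with OpenCV's Farneback estimator, which stands in here for whatever flow method the authors actually used; all parameter values are illustrative defaults.

```python
import cv2
import numpy as np

def flow_map(prev_bgr: np.ndarray, next_bgr: np.ndarray) -> np.ndarray:
    """Dense optical flow as an H x W x 2 array of (dx, dy) in pixels."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
```

The two flow channels (and optionally the flow magnitude) can then be stacked and fed to the network in place of, or alongside, the RGB frames.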


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Liliana Barbieri ◽  
Huw Colin-York ◽  
Kseniya Korobchevskaya ◽  
Di Li ◽  
Deanna L. Wolfson ◽  
...  

Quantifying the small, rapidly evolving forces generated by cells is a major challenge for the understanding of biomechanics and mechanobiology in health and disease. Traction force microscopy remains one of the most broadly applied force-probing technologies but is typically restricted to slow events over seconds and micron-scale displacements. Here, we improve the resolution of planar cellular force probing >2-fold spatially and >10-fold temporally compared to its related conventional modalities by combining fast two-dimensional total internal reflection fluorescence super-resolution structured illumination microscopy with traction force microscopy. This live-cell 2D TIRF-SIM-TFM methodology offers a combination of spatio-temporal resolution enhancements relevant to forces on the nano- and sub-second scales, opening up new aspects of mechanobiology to analysis.
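Traction force microscopy begins by tracking the displacement of fiducial beads in the substrate between a relaxed (reference) and a stressed image. The sketch below shows a minimal block-wise FFT cross-correlation for that tracking step; the window size is an illustrative choice, and the SIM reconstruction and the force inversion itself are out of scope here.

```python
import numpy as np

def block_displacements(ref: np.ndarray, cur: np.ndarray,
                        win: int = 32) -> np.ndarray:
    """Integer-pixel (dy, dx) per window, from FFT cross-correlation."""
    ny, nx = ref.shape[0] // win, ref.shape[1] // win
    disp = np.zeros((ny, nx, 2))
    for i in range(ny):
        for j in range(nx):
            a = ref[i * win:(i + 1) * win, j * win:(j + 1) * win]
            b = cur[i * win:(i + 1) * win, j * win:(j + 1) * win]
            # Circular cross-correlation via the FFT; the peak location
            # gives the shift of the current block relative to the reference.
            c = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
            dy, dx = np.unravel_index(np.argmax(c), c.shape)
            # Wrap shifts larger than half a window to negative values.
            disp[i, j] = (dy if dy <= win // 2 else dy - win,
                          dx if dx <= win // 2 else dx - win)
    return disp
```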


2020 ◽  
Vol 34 (07) ◽  
pp. 10713-10720
Author(s):  
Mingyu Ding ◽  
Zhe Wang ◽  
Bolei Zhou ◽  
Jianping Shi ◽  
Zhiwu Lu ◽  
...  

A major challenge for video semantic segmentation is the lack of labeled data. In most benchmark datasets, only one frame per video clip is annotated, so most supervised methods cannot exploit information from the remaining frames. To exploit the spatio-temporal information in videos, many previous works use pre-computed optical flow, which encodes temporal consistency, to improve video segmentation. However, video segmentation and optical flow estimation are still treated as two separate tasks. In this paper, we propose a novel framework for joint video semantic segmentation and optical flow estimation. Semantic segmentation supplies semantic information for handling occlusion, yielding more robust optical flow estimation, while the non-occluded optical flow provides accurate pixel-level temporal correspondences that guarantee the temporal consistency of the segmentation. Moreover, our framework can utilize both labeled and unlabeled frames in a video through joint training, while requiring no additional computation at inference. Extensive experiments show that the proposed model allows video semantic segmentation and optical flow estimation to benefit from each other, and outperforms existing methods in both tasks under the same settings.
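One way to picture the coupling the framework exploits: the previous frame's segmentation logits are warped along the optical flow and compared with the current prediction on non-occluded pixels. The PyTorch sketch below illustrates such a consistency term; the tensor layout, the flow convention (per-pixel offsets from the current frame into the previous one), and the occlusion mask are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(logits_prev: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp (B, C, H, W) logits by a (B, 2, H, W) backward flow in pixels
    (channel 0 = x offset, channel 1 = y offset)."""
    B, _, H, W = logits_prev.shape
    # Base sampling grid in pixel coordinates, then displaced by the flow.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(flow.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow                            # (B, 2, H, W)
    # Normalize to [-1, 1] as grid_sample expects: x by W, y by H.
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    return F.grid_sample(logits_prev, torch.stack((gx, gy), dim=-1),
                         align_corners=True)

def consistency_loss(logits_cur, logits_prev, flow, occ_mask):
    """L1 between current logits and flow-warped previous logits, restricted
    to non-occluded pixels (occ_mask: (B, 1, H, W), 1 = visible)."""
    warped = warp_with_flow(logits_prev, flow)
    return (occ_mask * (logits_cur - warped).abs()).mean()
```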

