Optical Flow Analysis Based on Spatio-Temporal Correlation of Dynamic Image
1990 ◽ Vol 21 (4) ◽ pp. 97-108
Author(s): Kazutoshi Koga, Hidetoshi Miike
Author(s): Yassine Benabbas, Nacim Ihaddadene, Tarek Yahiaoui, Thierry Urruty, Chabane Djeraba

2005 ◽ Vol 125 (4) ◽ pp. 673-674
Author(s): Jinwoo Kim, Kazuteru Funato, Rong-Long Wang, Kozo Okazaki

2021 ◽ Vol 436 ◽ pp. 273-282
Author(s): Youmin Yan, Xixian Guo, Jin Tang, Chenglong Li, Xin Wang

2021 ◽ Vol 13 (12) ◽ pp. 2333
Author(s): Lilu Zhu, Xiaolu Su, Yanfeng Hu, Xianqing Tai, Kun Fu

Extracting valuable information from remote sensing data and integrating those data efficiently is critically important. The multi-source, heterogeneous nature of remote sensing data makes the relationships among datasets increasingly complex, so a processing mode based on data ontology alone no longer meets requirements. At the same time, the multi-dimensional features of remote sensing data complicate data query and analysis, especially for datasets with a lot of noise. Data quality has therefore become the bottleneck of data value discovery, and a single batch query cannot support the optimal combination of global data resources. In this paper, we propose a spatio-temporal local association query algorithm for remote sensing data (STLAQ). Firstly, we design a spatio-temporal data model and a bottom-up spatio-temporal correlation network. Then, we measure the correlation between spatio-temporal correlation networks using partition-based clustering and spectral clustering. Finally, we construct a spatio-temporal index to provide joint query capabilities. We carry out local association query efficiency experiments on multi-scale datasets to verify the feasibility of STLAQ. The results show that STLAQ lowers the barriers between remote sensing data and effectively improves their application value.
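The idea of a spatio-temporal index supporting joint (space + time) range queries can be illustrated with a minimal grid-bucket sketch. The `SpatioTemporalIndex` class, its cell/slot parameters, and the bucketing scheme below are hypothetical stand-ins for illustration only; the abstract does not specify the index structure STLAQ actually uses.

```python
from collections import defaultdict


class SpatioTemporalIndex:
    """Toy grid index: buckets records by (spatial cell, time slot).

    Illustrative sketch only -- not the STLAQ index structure.
    """

    def __init__(self, cell_size=1.0, time_slot=3600):
        self.cell_size = cell_size    # spatial grid cell edge length
        self.time_slot = time_slot    # temporal bucket width
        self.buckets = defaultdict(list)

    def _key(self, x, y, t):
        # Map a point to its (cell_x, cell_y, slot) bucket key.
        return (int(x // self.cell_size),
                int(y // self.cell_size),
                int(t // self.time_slot))

    def insert(self, record_id, x, y, t):
        self.buckets[self._key(x, y, t)].append(record_id)

    def query(self, x_range, y_range, t_range):
        """Joint spatio-temporal range query: collect candidates from
        every grid bucket the query box touches."""
        x0, x1 = x_range
        y0, y1 = y_range
        t0, t1 = t_range
        hits = []
        for i in range(int(x0 // self.cell_size), int(x1 // self.cell_size) + 1):
            for j in range(int(y0 // self.cell_size), int(y1 // self.cell_size) + 1):
                for k in range(int(t0 // self.time_slot), int(t1 // self.time_slot) + 1):
                    hits.extend(self.buckets.get((i, j, k), []))
        return hits
```

Note that a real system would store exact coordinates with each record and refine this cell-level candidate set against the query box; the sketch only shows how a single index can answer spatial and temporal predicates jointly.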


2020 ◽ Vol 34 (07) ◽ pp. 10713-10720
Author(s): Mingyu Ding, Zhe Wang, Bolei Zhou, Jianping Shi, Zhiwu Lu, et al.

A major challenge for video semantic segmentation is the lack of labeled data. In most benchmark datasets, only one frame per video clip is annotated, so most supervised methods cannot exploit information from the remaining frames. To exploit the spatio-temporal information in videos, many previous works use pre-computed optical flows, which encode temporal consistency to improve video segmentation; however, video segmentation and optical flow estimation are still treated as two separate tasks. In this paper, we propose a novel framework for joint video semantic segmentation and optical flow estimation. Semantic segmentation supplies semantic information for handling occlusion, yielding more robust optical flow estimation, while the non-occluded optical flow provides accurate pixel-level temporal correspondences that guarantee the temporal consistency of the segmentation. Moreover, our framework can exploit both labeled and unlabeled frames in the video through joint training, while requiring no additional computation at inference time. Extensive experiments show that the proposed model makes video semantic segmentation and optical flow estimation benefit from each other and outperforms existing methods in both tasks under the same settings.
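The way non-occluded optical flow supplies pixel-level temporal correspondences for segmentation consistency can be sketched as follows. The `warp_labels` function, the nearest-neighbor rounding, and the boolean occlusion mask are illustrative assumptions for this sketch, not the paper's actual implementation:

```python
def warp_labels(labels, flow, occlusion):
    """Warp a per-pixel label map from frame t toward frame t+1 along
    optical flow, skipping occluded pixels (which have no reliable
    correspondence in the next frame).

    labels:    H x W nested list of class ids at frame t
    flow:      H x W nested list of (dx, dy) displacements, t -> t+1
    occlusion: H x W nested list of bools (True = occluded)
    Returns an H x W map of propagated labels (None where no label lands).
    """
    h, w = len(labels), len(labels[0])
    warped = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if occlusion[y][x]:
                continue  # occluded: flow gives no valid correspondence
            dx, dy = flow[y][x]
            # Nearest-neighbor target location in frame t+1.
            nx, ny = x + round(dx), y + round(dy)
            if 0 <= nx < w and 0 <= ny < h:
                warped[ny][nx] = labels[y][x]
    return warped
```

Comparing such a warped label map against the segmentation predicted at frame t+1, only at non-occluded pixels, is one common way to turn flow correspondences into a temporal-consistency signal.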

