Spike-FlowNet: Event-Based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks

Author(s):  
Chankyu Lee ◽  
Adarsh Kumar Kosta ◽  
Alex Zihao Zhu ◽  
Kenneth Chaney ◽  
Kostas Daniilidis ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1150
Author(s):  
Jun Nagata ◽  
Yusuke Sekikawa ◽  
Yoshimitsu Aoki

In this work, we propose a novel method for estimating optical flow from event-based cameras by matching the time surface of events. The proposed loss function measures the timestamp consistency between the time surface formed by the latest timestamp at each pixel and the same surface slightly shifted in time. This makes it possible to estimate dense optical flow with high accuracy without reconstructing luminance or relying on additional sensor information. In our experiments, we show that the gradient is more accurate and the loss landscape more stable than with the variance loss used in the motion-compensation approach. We further show that optical flow can be estimated with high accuracy by optimization with L1 smoothness regularization on publicly available datasets.
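To make the timestamp-consistency idea concrete, the following is a minimal NumPy sketch rather than the authors' implementation: it assumes events given as (x, y, t) rows, nearest-neighbour sampling, and a single constant time shift dt, with all variable names illustrative.

    import numpy as np

    def time_surface(events, shape):
        # Latest event timestamp at each pixel; zero where no event has fired.
        # `events` is an (N, 3) array of (x, y, t) rows, assumed sorted by time.
        ts = np.zeros(shape)
        for x, y, t in events:
            ts[int(y), int(x)] = t        # later events overwrite earlier ones
        return ts

    def timestamp_consistency_loss(ts, flow, dt):
        # L1 mismatch between the time surface and itself shifted back along
        # the candidate flow by dt; a correct flow makes the two surfaces agree.
        h, w = ts.shape
        ys, xs = np.mgrid[0:h, 0:w]
        xs_prev = np.clip(np.round(xs - flow[..., 0] * dt).astype(int), 0, w - 1)
        ys_prev = np.clip(np.round(ys - flow[..., 1] * dt).astype(int), 0, h - 1)
        shifted = ts[ys_prev, xs_prev] + dt   # earlier surface advanced by dt
        valid = ts > 0
        return np.abs(ts[valid] - shifted[valid]).mean()

In practice such a term would be minimized over a dense per-pixel flow field together with the L1 smoothness regularizer mentioned above.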


2019 ◽  
Vol 368 ◽  
pp. 124-132 ◽  
Author(s):  
Mingliang Zhai ◽  
Xuezhi Xiang ◽  
Rongfang Zhang ◽  
Ning Lv ◽  
Abdulmotaleb El Saddik

2021 ◽  
Vol 3 (3) ◽  
Author(s):  
Syed Tafseer Haider Shah ◽  
Xiang Xuezhi

Optical flow estimation is an essential component of many image processing techniques, and this area of computer vision research has developed rapidly in recent years. In particular, the introduction of convolutional neural networks for optical flow estimation has shifted the research paradigm from classical approaches toward deep learning. At present, state-of-the-art optical flow techniques are based on convolutional neural networks, and almost all top-performing methods incorporate deep learning architectures. This paper presents a brief analysis of optical flow estimation techniques and highlights the most recent developments in the field. A comparison of the most pertinent traditional and deep learning methodologies is undertaken, establishing in detail the respective advantages and disadvantages of the two categories. Insight is provided into the significant factors that determine the success or failure of the two classes of optical flow estimation, and, after identifying the foremost existing challenges of traditional and deep learning schemes, probable solutions are proposed.
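For context on what the "traditional" category looks like in practice, a classical dense method such as Farneback's polynomial-expansion flow can be run directly through OpenCV; the snippet below is a generic usage sketch (file names are placeholders), shown only as a contrast to the learned, CNN-based pipelines discussed above.

    import cv2

    # Two consecutive grayscale frames (placeholder file names).
    prev_gray = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
    next_gray = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

    # Classical (non-learned) dense optical flow via Farneback polynomial expansion.
    # Positional arguments after the images: initial flow, pyr_scale, levels,
    # winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # `flow` has shape (H, W, 2): the per-pixel (dx, dy) displacement in pixels.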


Author(s):  
Alex Zhu ◽  
Liangzhe Yuan ◽  
Kenneth Chaney ◽  
Kostas Daniilidis

2021 ◽  
Vol 62 (4) ◽  
Author(s):  
Jin Lu ◽  
Hua Yang ◽  
Qinghu Zhang ◽  
Zhouping Yin

2020 ◽  
Vol 34 (07) ◽  
pp. 10713-10720
Author(s):  
Mingyu Ding ◽  
Zhe Wang ◽  
Bolei Zhou ◽  
Jianping Shi ◽  
Zhiwu Lu ◽  
...  

A major challenge for video semantic segmentation is the lack of labeled data. In most benchmark datasets, only one frame per video clip is annotated, which prevents most supervised methods from utilizing information in the remaining frames. To exploit the spatio-temporal information in videos, many previous works use pre-computed optical flow, which encodes temporal consistency to improve the video segmentation. However, video segmentation and optical flow estimation are still treated as two separate tasks. In this paper, we propose a novel framework for joint video semantic segmentation and optical flow estimation. Semantic segmentation brings semantic information to handle occlusion for more robust optical flow estimation, while the non-occluded optical flow provides accurate pixel-level temporal correspondences that guarantee the temporal consistency of the segmentation. Moreover, our framework can utilize both labeled and unlabeled frames in a video through joint training, while requiring no additional computation at inference. Extensive experiments show that the proposed model allows video semantic segmentation and optical flow estimation to benefit from each other and outperforms existing methods under the same settings on both tasks.
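The flow-guided temporal-consistency idea can be sketched briefly. The PyTorch snippet below is a generic illustration, not the authors' implementation, and the function and tensor names are assumptions: the previous frame's segmentation logits are warped into the current frame with the estimated flow, and disagreement is penalized only on non-occluded pixels.

    import torch
    import torch.nn.functional as F

    def warp_with_flow(feat, flow):
        # Backward-warp `feat` (N, C, H, W) by `flow` (N, 2, H, W), bilinear sampling.
        _, _, h, w = feat.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        grid_x = (xs.to(feat) + flow[:, 0]) / (w - 1) * 2 - 1   # normalize to [-1, 1]
        grid_y = (ys.to(feat) + flow[:, 1]) / (h - 1) * 2 - 1
        grid = torch.stack((grid_x, grid_y), dim=-1)             # (N, H, W, 2)
        return F.grid_sample(feat, grid, align_corners=True)

    def temporal_consistency_loss(seg_t, seg_prev, flow, non_occluded):
        # Penalize disagreement between frame t's segmentation logits and the
        # previous frame's logits warped into frame t, on non-occluded pixels only.
        # `non_occluded` is an (N, 1, H, W) mask with 1 where the flow is valid.
        warped_prev = warp_with_flow(seg_prev, flow)
        diff = (seg_t - warped_prev).abs().mean(dim=1, keepdim=True)
        return (diff * non_occluded).sum() / non_occluded.sum().clamp(min=1)

A loss of this form only back-propagates through regions where the flow gives a reliable correspondence, which is why the occlusion handling mentioned in the abstract matters.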

