Guided Matching Based on Statistical Optical Flow for Fast and Robust Correspondence Analysis

Author(s):  
Josef Maier ◽  
Martin Humenberger ◽  
Markus Murschitz ◽  
Oliver Zendel ◽  
Markus Vincze

1992 ◽  
Author(s):  
Jeremy M. A. Beer
Keyword(s):  

2005 ◽  
Vol 44 (S 01) ◽  
pp. S46-S50 ◽  
Author(s):  
M. Dawood ◽  
N. Lang ◽  
F. Büther ◽  
M. Schäfers ◽  
O. Schober ◽  
...  

Summary: Motion in PET/CT leads to artifacts in the reconstructed PET images because positron emission tomography and computed tomography are acquired at different times. This study evaluates the effect of motion on cardiac PET/CT images and outlines a novel approach for motion correction based on optical flow methods. The Lucas-Kanade optical flow algorithm is used to calculate the motion vector field on both simulated phantom data and measured human PET data. The motion of the myocardium is corrected with non-linear registration techniques, and the results are compared to uncorrected images.
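The core of the Lucas-Kanade method is a least-squares solve of the brightness-constancy equation over an image region. The following is a minimal NumPy sketch of that idea for a single translational motion; it is an illustrative toy, not the paper's pipeline, which computes a dense motion vector field and follows it with non-linear registration.

```python
import numpy as np

def lucas_kanade_flow(img1, img2):
    """Estimate a single (u, v) translation between two images using the
    classic Lucas-Kanade least-squares formulation: solve
    Ix*u + Iy*v = -It over all pixels. Hypothetical minimal sketch."""
    # Spatial gradients (central differences) and temporal gradient
    Ix = np.gradient(img1, axis=1)
    Iy = np.gradient(img1, axis=0)
    It = img2 - img1
    # Stack per-pixel gradient equations and solve in the least-squares sense
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: a smooth blob shifted ~0.5 px to the right
y, x = np.mgrid[0:64, 0:64]
img1 = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 100.0)
img2 = np.exp(-((x - 32.5) ** 2 + (y - 32.0) ** 2) / 100.0)
u, v = lucas_kanade_flow(img1, img2)  # u near 0.5, v near 0
```

In practice the solve is done per pixel over a small window, yielding the dense vector field that the registration step then uses.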


CICTP 2020 ◽  
2020 ◽  
Author(s):  
Tao Chen ◽  
Linkun Fan ◽  
Xuchuan Li ◽  
Congshuai Guo ◽  
Miaomiao Qiao
Keyword(s):  

Author(s):  
Htay Htay Win ◽  
Aye Thida Myint ◽  
Mi Cho Cho

For years, achievements and discoveries made by researchers have been communicated through papers published in appropriate journals or conferences. Established researchers, and especially newcomers, are often caught in the predicament of choosing a suitable conference for their work. Every scientific conference and journal is inclined towards a particular field of research, and there is an extensive group of them for any given field. Choosing an appropriate venue matters because it helps reach the right audience and improves one's chances of getting the paper published. In this work, we address the problem of recommending appropriate conferences to authors to increase their chances of acceptance. We present three different approaches involving the social network of the authors and the content of the paper, in the settings of dimensionality reduction and topic modelling. In all these approaches, we apply Correspondence Analysis (CA) to obtain appropriate relationships between the entities in question, such as conferences and papers. Our models show promising results when compared with existing methods such as content-based filtering, collaborative filtering, and hybrid filtering.
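Correspondence Analysis itself reduces to an SVD of the standardized residuals of a contingency table (here, e.g., papers versus conferences), after which row and column entities can be compared in a shared low-dimensional space. A minimal sketch under that assumption; the toy table and entity names are hypothetical, not the paper's data:

```python
import numpy as np

def correspondence_analysis(N, k=2):
    """Minimal correspondence analysis of a contingency table N
    (rows = papers, columns = conferences in this toy setting).
    Returns principal coordinates of rows (F) and columns (G)."""
    P = N / N.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    # Standardized residuals: (P - r c^T) / sqrt(r c^T)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    # Principal coordinates on the first k dimensions
    F = (U[:, :k] * s[:k]) / np.sqrt(r)[:, None]
    G = (Vt.T[:, :k] * s[:k]) / np.sqrt(c)[:, None]
    return F, G

# Toy co-occurrence counts: 4 papers x 3 conferences (hypothetical data)
N = np.array([[10, 2, 1],
              [8, 3, 1],
              [1, 9, 2],
              [0, 1, 12]], dtype=float)
F, G = correspondence_analysis(N)
# Papers 0 and 1 have similar conference profiles, so they land close
# together in CA space, while paper 3 lands far from paper 0.
```

Recommendation then amounts to ranking conferences by proximity to a paper's position in this shared space.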


2009 ◽  
Vol 129 (5) ◽  
pp. 792-799
Author(s):  
Takashi Yamanaka ◽  
Masayuki Kashima ◽  
Kiminori Sato ◽  
Mutsumi Watanabe ◽  
Jun Ogata

2019 ◽  
Vol 139 (5) ◽  
pp. 603-608 ◽  
Author(s):  
Yutaka Suzuki ◽  
Kyosuke Hatsushika ◽  
Keisuke Masuyama ◽  
Osamu Sakata ◽  
Morimasa Tanimoto ◽  
...  

2021 ◽  
Author(s):  
Tobin Gevelber ◽  
Bryan E. Schmidt ◽  
Muhammad A. Mustafa ◽  
David Shekhtman ◽  
Nick J. Parziale

2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within each gesture class, low resolution, and the fact that gestures are performed with the fingers. These challenges have drawn many researchers to the area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical-flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. We calculate optical flow from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of the gesture's visual similarity in the unsegmented input stream. The CTC network finds the most probable frame sequence for a gesture class; the frame with the highest probability value is selected by max decoding. The entire network is trained end-to-end with the CTC loss for gesture recognition. We use the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms and achieves an accuracy of 86%.
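The "max decoding" step mentioned above is the standard greedy CTC decoding rule: take the most probable class per frame, collapse consecutive repeats, and drop the blank symbol. A minimal NumPy sketch of that rule; the probability table is an illustrative toy, not the paper's model output:

```python
import numpy as np

def ctc_greedy_decode(probs, blank=0):
    """Greedy (max) decoding of frame-wise class probabilities, as used
    after a CTC-trained network: argmax per frame, collapse consecutive
    repeats, drop blanks. Minimal sketch of the standard rule."""
    best_path = np.argmax(probs, axis=1)       # most probable class per frame
    decoded = []
    prev = None
    for label in best_path:
        if label != prev and label != blank:   # collapse repeats, skip blank
            decoded.append(int(label))
        prev = label
    return decoded

# Toy example: 6 frames, 3 classes (0 = CTC blank, 1 and 2 = gestures)
probs = np.array([[0.8, 0.1, 0.1],   # blank
                  [0.1, 0.8, 0.1],   # gesture 1
                  [0.1, 0.8, 0.1],   # gesture 1 (repeat, collapsed)
                  [0.8, 0.1, 0.1],   # blank
                  [0.1, 0.1, 0.8],   # gesture 2
                  [0.1, 0.1, 0.8]])  # gesture 2 (repeat, collapsed)
print(ctc_greedy_decode(probs))      # -> [1, 2]
```

Training uses the full CTC loss (a dynamic-programming sum over all alignments); greedy decoding is only the inference-time shortcut.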

