Stream implementation of the flux tensor motion flow algorithm using GStreamer and CUDA

Author(s):  
Dardo D. Kleiner ◽  
Kannappan Palaniappan ◽  
Gunasekaran Seetharaman
2005 ◽  
Vol 44 (S 01) ◽  
pp. S46-S50 ◽  
Author(s):  
M. Dawood ◽  
N. Lang ◽  
F. Büther ◽  
M. Schäfers ◽  
O. Schober ◽  
...  

Summary: Motion in PET/CT leads to artifacts in the reconstructed PET images due to the different acquisition times of positron emission tomography and computed tomography. The effect of motion on cardiac PET/CT images is evaluated in this study, and a novel approach for motion correction based on optical flow methods is outlined. The Lucas-Kanade optical flow algorithm is used to calculate the motion vector field on both simulated phantom data and measured human PET data. The motion of the myocardium is corrected by non-linear registration techniques, and the results are compared to uncorrected images.
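As an illustration of the Lucas-Kanade step named in this summary, the sketch below estimates a dense per-pixel motion vector field between two grayscale frames with NumPy/SciPy. It shows only the generic windowed least-squares formulation; the function name, window size, and degeneracy threshold are illustrative assumptions, and the PET-specific preprocessing and non-linear registration stage are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lucas_kanade_flow(frame1, frame2, window=7, eps=1e-6):
    """Minimal dense Lucas-Kanade optical flow between two grayscale frames.

    Returns per-pixel (u, v) displacement estimates. A sketch of the classic
    least-squares formulation only, not the paper's full motion-correction chain.
    """
    I1 = frame1.astype(np.float64)
    I2 = frame2.astype(np.float64)

    # Spatial and temporal gradients.
    Iy, Ix = np.gradient(I1)
    It = I2 - I1

    # Windowed averages of the structure-tensor entries (constant factor cancels).
    Ixx = uniform_filter(Ix * Ix, window)
    Iyy = uniform_filter(Iy * Iy, window)
    Ixy = uniform_filter(Ix * Iy, window)
    Ixt = uniform_filter(Ix * It, window)
    Iyt = uniform_filter(Iy * It, window)

    # Closed-form solution of the 2x2 normal equations per pixel.
    det = Ixx * Iyy - Ixy ** 2
    det = np.where(np.abs(det) < eps, np.inf, det)   # zero flow where degenerate
    u = (-Iyy * Ixt + Ixy * Iyt) / det
    v = (Ixy * Ixt - Ixx * Iyt) / det
    return u, v
```

Each pixel's displacement is the solution of the 2x2 windowed least-squares system, which is the core of the Lucas-Kanade formulation; the resulting (u, v) field is what a subsequent registration step would consume.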


2018 ◽  
Vol 12 ◽  
pp. 25-41
Author(s):  
Matthew C. FONTAINE

Among the most interesting problems in competitive programming are those involving maximum flows. However, efficient algorithms for solving these problems are often difficult for students to understand at an intuitive level. One reason for this difficulty may be a lack of suitable metaphors relating these algorithms to concepts that the students already understand. This paper introduces a novel maximum flow algorithm, Tidal Flow, that is designed to be intuitive to undergraduate and pre-university computer science students.
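The Tidal Flow algorithm itself is not reproduced here. For a concrete point of reference, the sketch below implements the classic Edmonds-Karp (BFS augmenting-path) maximum flow routine in Python, the kind of textbook algorithm the paper aims to make more intuitive; the function name and the adjacency-matrix representation are assumptions made for brevity.

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Standard BFS-augmenting-path max flow (Edmonds-Karp), NOT Tidal Flow.

    `capacity` is an n x n matrix of non-negative edge capacities.
    Returns the value of a maximum source-to-sink flow.
    """
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    max_flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:          # no augmenting path remains
            return max_flow
        # Bottleneck residual capacity along the path.
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # Push flow along the path, updating residual capacities.
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        max_flow += bottleneck
```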


2020 ◽  
Vol 64 (4) ◽  
pp. 40412-1-40412-11
Author(s):  
Kexin Bai ◽  
Qiang Li ◽  
Ching-Hsin Wang

Abstract: To address the issues of the relatively small size of brain tumor image datasets, severe class imbalance, and low precision in existing segmentation algorithms for brain tumor images, this study proposes a two-stage segmentation algorithm integrating convolutional neural networks (CNNs) and conventional methods. Four modalities of the original magnetic resonance images were first preprocessed separately. Next, preliminary segmentation was performed using an improved U-Net CNN containing deep supervision, residual structures, dense connection structures, and dense skip connections. The authors adopted a multiclass Dice loss function to deal with class imbalance and successfully prevented overfitting using data augmentation. The preliminary segmentation results subsequently served as a priori knowledge for a continuous maximum flow algorithm used for fine segmentation of target edges. Experiments revealed that the mean Dice similarity coefficients of the proposed algorithm for whole tumor, tumor core, and enhancing tumor segmentation were 0.9072, 0.8578, and 0.7837, respectively. The proposed algorithm presents higher accuracy and better stability in comparison with some of the more advanced segmentation algorithms for brain tumor images.
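To make the multiclass Dice loss mentioned above concrete, here is a short PyTorch sketch of a generic soft Dice formulation averaged over classes. It is not necessarily the authors' exact loss; the function name, tensor layout, and smoothing constant are assumptions.

```python
import torch

def multiclass_dice_loss(logits, target, smooth=1.0):
    """Generic soft multiclass Dice loss, averaged over classes.

    logits: (N, C, H, W[, D]) raw network outputs.
    target: (N, H, W[, D]) integer (long) class labels.
    """
    probs = torch.softmax(logits, dim=1)
    num_classes = logits.shape[1]

    # One-hot encode labels and move the class axis next to the batch axis.
    target_1h = torch.nn.functional.one_hot(target, num_classes)
    target_1h = target_1h.movedim(-1, 1).float()

    dims = tuple(range(2, probs.dim()))           # spatial dimensions
    intersection = (probs * target_1h).sum(dims)
    cardinality = probs.sum(dims) + target_1h.sum(dims)
    dice = (2.0 * intersection + smooth) / (cardinality + smooth)
    return 1.0 - dice.mean()                      # average over batch and classes
```

Averaging the per-class Dice terms (rather than pooling all voxels) is one common way such a loss counteracts class imbalance, since small classes contribute equally to the objective.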


2010 ◽  
Vol 256 ◽  
pp. 012006 ◽  
Author(s):  
Steven Solomon ◽  
Parimala Thulasiraman

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2407
Author(s):  
Hojun You ◽  
Dongsu Kim

Fluvial remote sensing has been used to monitor diverse riverine properties, such as river bathymetry and the visual detection of suspended sediment, algal blooms, and bed materials, more efficiently than laborious and expensive in-situ measurements. Red–green–blue (RGB) optical sensors have been widely used in traditional fluvial remote sensing. However, owing to their three confined bands, they rely on visual inspection for qualitative assessments and are limited in their ability to perform quantitative and accurate monitoring. Recent advances in hyperspectral imaging in the fluvial domain have enabled hyperspectral images with more than 150 spectral bands. Thus, various riverine properties can be quantitatively characterized using such sensors on low-altitude unmanned aerial vehicles (UAVs) with a high spatial resolution. Many efforts are ongoing to take full advantage of hyperspectral band information in fluvial research. Although geo-referenced hyperspectral images can be acquired from satellites and manned airplanes, few attempts have been made using UAVs. This is mainly because synthesizing line-scanned images on top of image registration is more difficult for UAVs, owing to the motion sensitivity of the platform and the heavy imagery produced by the dense spatial resolution. Therefore, in this study, we propose a practical technique for achieving high spatial accuracy in UAV-based fluvial hyperspectral imaging through efficient image registration using an optical flow algorithm. Template matching algorithms are the most common image registration technique in RGB-based remote sensing; however, they require many calculations and can be error-prone depending on the user, as decisions regarding various parameters are required. Furthermore, the spatial accuracy of this technique needs to be verified, as it has not been widely applied to hyperspectral imagery. The proposed technique reduced spatial errors by an average of 91.9% compared to the case where no image registration was applied, and by 78.7% compared to template matching.
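To illustrate the optical-flow-based registration step in a generic form, the sketch below warps a single band image onto a reference image using OpenCV's dense Farneback optical flow. It assumes 2D grayscale inputs and uses illustrative Farneback parameters; it is not the authors' full UAV line-scan processing chain, and the function name is hypothetical.

```python
import cv2
import numpy as np

def register_band_to_reference(reference, band):
    """Warp one image band onto a reference image via dense optical flow.

    A minimal sketch of optical-flow-based registration, not the paper's
    exact method. `reference` and `band` are 2D arrays of the same shape.
    """
    ref8 = cv2.normalize(reference, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    band8 = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Flow from the reference toward the band image: ref(y, x) ~ band(y+dy, x+dx).
    flow = cv2.calcOpticalFlowFarneback(
        ref8, band8, None,
        pyr_scale=0.5, levels=3, winsize=21,
        iterations=3, poly_n=5, poly_sigma=1.1, flags=0)

    # Resample the band along the flow so it aligns with the reference grid.
    h, w = band8.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(band.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)
```

Unlike template matching, no window sizes or search ranges have to be tuned per scene here; the dense flow field directly supplies a per-pixel correction.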


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3953
Author(s):  
Han Pu ◽  
Tianqiang Huang ◽  
Bin Weng ◽  
Feng Ye ◽  
Chenbin Zhao

Digital video forensics plays a vital role in judicial forensics, media reports, e-commerce, finance, and public security. Although many methods have been developed, there is currently no efficient solution for real-life videos with illumination noise and jitter noise. To solve this issue, we propose a detection method for video inter-frame forgery that adapts to brightness changes and jitter. For videos with severe brightness changes, we relax the brightness constancy constraint and adopt intensity normalization to propose a new optical flow algorithm. For videos with large jitter noise, we introduce motion entropy to detect the jitter and extract the stable texture-changes-fraction feature for double-checking. Experimental results show that, compared with previous algorithms, the proposed method is more accurate and robust on public benchmark datasets for videos with significant brightness variance or heavy jitter.
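The sketch below illustrates two of the ideas above in a generic form: a simple intensity normalization that relaxes the brightness-constancy assumption, and a motion-entropy measure computed from the direction histogram of a dense flow field. OpenCV's Farneback flow stands in for the authors' modified optical flow algorithm, and the function names, bin count, and normalization scheme are assumptions; the paper's decision thresholds are not shown.

```python
import cv2
import numpy as np

def normalize_intensity(frame):
    """Zero-mean, unit-variance normalization of a grayscale frame; one simple
    way to reduce sensitivity to brightness changes (scheme is an assumption)."""
    f = frame.astype(np.float32)
    return (f - f.mean()) / (f.std() + 1e-6)

def motion_entropy(prev_gray, next_gray, bins=16):
    """Shannon entropy of the optical-flow direction histogram between frames.

    A generic sketch of a 'motion entropy' measure for jitter detection,
    using Farneback flow as a stand-in for the paper's optical flow method.
    """
    prev_n = cv2.normalize(normalize_intensity(prev_gray), None, 0, 255,
                           cv2.NORM_MINMAX).astype(np.uint8)
    next_n = cv2.normalize(normalize_intensity(next_gray), None, 0, 255,
                           cv2.NORM_MINMAX).astype(np.uint8)
    flow = cv2.calcOpticalFlowFarneback(prev_n, next_n, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Histogram of per-pixel flow directions, then Shannon entropy in bits.
    angles = np.arctan2(flow[..., 1], flow[..., 0])
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```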

