On transform coding of motion-compensated difference images

1992 ◽  
Vol 139 (3) ◽  
pp. 372 ◽  
Author(s):  
R.J. Clarke
1992 ◽  
Vol 139 (3) ◽  
pp. 364
Author(s):  
L. Wang ◽  
M. Goldberg

1977 ◽  
Vol 13 (10) ◽  
pp. 277 ◽  
Author(s):  
J.B.G. Roberts ◽  
E.H. Darlington ◽  
R.D. Edwards ◽  
R.F. Simons

2021 ◽  
Vol 7 (2) ◽  
pp. 27
Author(s):  
Dieter P. Gruber ◽  
Matthias Haselmann

This paper proposes a new machine vision method to test the quality of a semi-transparent automotive illuminant component. Difference images of Frangi-filtered surface images are used to enhance defect-like image structures. To distinguish allowed structures from defective ones, morphological features are extracted and used to compute a nearest-neighbor-based anomaly score. In this way, it is demonstrated that segmentation of defects is possible on transparent illuminant parts. The method proved fast and accurate and is therefore also suited for in-production testing.
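The core scoring step described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the feature values, the choice of k, and the three-dimensional feature vectors (standing in for morphological features such as area and eccentricity extracted from difference images of Frangi-filtered surfaces) are all illustrative assumptions. The sketch only shows the nearest-neighbor anomaly score itself: each test feature vector is scored by its mean distance to the k closest features from defect-free reference parts.

```python
import numpy as np

def anomaly_scores(test_feats, train_feats, k=3):
    """Nearest-neighbor anomaly score: mean Euclidean distance of each
    test feature vector to its k closest defect-free training vectors."""
    # Pairwise distances, shape (n_test, n_train), via broadcasting.
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=2)
    d.sort(axis=1)                     # nearest neighbors first
    return d[:, :k].mean(axis=1)       # mean distance to k nearest

# Synthetic stand-ins for morphological feature vectors (illustrative only).
rng = np.random.default_rng(0)
train = rng.normal(0.0, 0.05, size=(50, 3))            # allowed structures
test = np.vstack([rng.normal(0.0, 0.05, size=(5, 3)),  # allowed
                  rng.normal(1.0, 0.05, size=(5, 3))]) # defect-like
scores = anomaly_scores(test, train)
```

Defect-like feature vectors lie far from the defect-free training set, so they receive markedly higher scores; thresholding the score then yields the accept/reject decision per detected structure.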


2021 ◽  
Vol 11 (12) ◽  
pp. 5563
Author(s):  
Jinsol Ha ◽  
Joongchol Shin ◽  
Hasil Park ◽  
Joonki Paik

Action recognition requires the accurate analysis of action elements in the form of a video clip and a properly ordered sequence of those elements. To solve these two sub-problems, it is necessary to learn both spatio-temporal information and the temporal relationship between different action elements. Existing convolutional neural network (CNN)-based action recognition methods have focused on learning only spatial or temporal information without considering the temporal relation between action elements. In this paper, we create short-term pixel-difference images from the input video and feed them to a bidirectional exponential moving average sub-network to analyze the action elements and their temporal relations. The proposed method consists of: (i) generation of RGB and differential images, (ii) extraction of deep feature maps using an image classification sub-network, (iii) weight assignment to extracted feature maps using a bidirectional exponential moving average sub-network, and (iv) late fusion with a three-dimensional convolutional (C3D) sub-network to improve the accuracy of action recognition. Experimental results show that the proposed method achieves higher performance than existing baseline methods. In addition, the proposed action recognition network takes only 0.075 seconds per action class, which enables various high-speed or real-time applications, such as abnormal action classification, human–computer interaction, and intelligent visual surveillance.
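Steps (i) and (iii) above can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's network: the deep-feature extraction (step ii) and C3D fusion (step iv) are omitted, the smoothing factor `alpha` is an assumed parameter, and the arrays stand in for frames and feature maps. The sketch shows short-term pixel-difference images between consecutive frames, and a bidirectional exponential moving average that combines a forward and a backward smoothing pass over a temporal sequence.

```python
import numpy as np

def difference_images(frames):
    """Step (i): short-term pixel-difference images between consecutive frames."""
    frames = np.asarray(frames, dtype=np.float32)
    return frames[1:] - frames[:-1]

def bidirectional_ema(feats, alpha=0.5):
    """Step (iii), sketched: smooth a temporal sequence of feature maps with a
    forward and a backward exponential moving average, then average both passes."""
    feats = np.asarray(feats, dtype=np.float32)
    T = len(feats)
    fwd = np.empty_like(feats)
    bwd = np.empty_like(feats)
    fwd[0], bwd[-1] = feats[0], feats[-1]
    for t in range(1, T):
        fwd[t] = alpha * feats[t] + (1 - alpha) * fwd[t - 1]          # forward pass
        bwd[T - 1 - t] = alpha * feats[T - 1 - t] + (1 - alpha) * bwd[T - t]  # backward pass
    return 0.5 * (fwd + bwd)

# Illustrative usage on synthetic frames (8 frames of 4x4 grayscale).
rng = np.random.default_rng(0)
frames = rng.random((8, 4, 4))
diffs = difference_images(frames)        # shape (7, 4, 4)
weighted = bidirectional_ema(diffs)      # same shape, temporally smoothed
```

Averaging the forward and backward passes lets each time step draw context from both past and future difference images, which is what allows temporal relations between action elements to influence the weighting.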


Author(s):  
Brian Chmiel ◽  
Chaim Baskin ◽  
Evgenii Zheltonozhskii ◽  
Ron Banner ◽  
Yevgeny Yermolin ◽  
...  
