Automatic classification of kinematic flap variants using ultrasound and optical flow

2020 ◽  
Vol 148 (4) ◽  
pp. 2655-2655
Author(s):  
Matthew Faytak ◽  
Connor Mayer ◽  
Jennifer Kuo ◽  
G. Teixeira ◽  
Z. L. Zhou

Author(s):  
Paul DeCosta ◽  
Kyugon Cho ◽  
Stephen Shemlon ◽  
Heesung Jun ◽  
Stanley M. Dunn

Introduction: The analysis and interpretation of electron micrographs of cells and tissues often requires the accurate extraction of structural networks, which either provide immediate 2D or 3D information or from which the desired information can be inferred. The images of these structures contain lines and/or curves whose orientations, lengths, and intersections characterize the overall network. Some examples exist of studies that analyze networks of natural structures. Sebok and Roemer determine the complexity of nerve structures in an EM slide image; the number of nodes in the image describes how dense the nerve fibers are in a particular region of the skin. Hilditch proposes a network structural analysis algorithm for the automatic classification of chromosome spreads (type, relative size, and orientation).
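The node-counting idea behind the nerve-density measure can be made concrete with a short sketch. The following is a minimal illustration, not the authors' implementation: it skeletonizes a thresholded structure and flags skeleton pixels with three or more neighbours as intersections (nodes). The function name and the use of scikit-image/SciPy are assumptions made for illustration only.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def count_network_nodes(binary_image: np.ndarray) -> int:
    """Count branch points (nodes) in a thresholded structural network.

    binary_image: 2D boolean array, True where the structure (fiber/line) is.
    """
    skeleton = skeletonize(binary_image)  # reduce curves to 1-pixel-wide lines
    # For each skeleton pixel, count its 8-connected skeleton neighbours.
    neighbour_kernel = np.array([[1, 1, 1],
                                 [1, 0, 1],
                                 [1, 1, 1]])
    neighbour_count = ndimage.convolve(skeleton.astype(int), neighbour_kernel,
                                       mode="constant", cval=0)
    # A skeleton pixel with 3+ neighbours is an intersection (node) of the network.
    nodes = skeleton & (neighbour_count >= 3)
    # Merge adjacent node pixels so each junction is counted once.
    _, num_nodes = ndimage.label(nodes)
    return num_nodes
```

A higher node count over a fixed image area would then serve as the density measure described above.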


Author(s):  
Yashpal Jitarwal ◽  
Tabrej Ahamad Khan ◽  
Pawan Mangal

In earlier times, fruits were sorted manually, which was a time-consuming and laborious task. Humans sorted fruits on the basis of shape, size, and color. Because manual sorting is slow, automatic classification of fruits was introduced to reduce sorting time and increase accuracy. To replace human inspection and reduce the time required for fruit sorting, an advanced technique has been developed that extracts information about fruits from their images; this is known as an image processing technique.
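The abstract does not specify a concrete pipeline, so the sketch below is only an illustration of how the three cues it mentions (shape, size, and color) might be extracted from an image with OpenCV. The function name, the Otsu segmentation step, and the assumption of a fruit brighter than a plain background are assumptions, not the paper's method.

```python
import cv2
import numpy as np

def fruit_features(image_path: str) -> dict:
    """Extract the simple cues a human sorter uses: color, size, and shape."""
    bgr = cv2.imread(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    # Segment the fruit with Otsu thresholding (assumes fruit brighter than background).
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    fruit = max(contours, key=cv2.contourArea)          # largest blob = the fruit

    area = cv2.contourArea(fruit)                       # size cue (pixels)
    perimeter = cv2.arcLength(fruit, True)
    roundness = 4 * np.pi * area / (perimeter ** 2)     # shape cue, 1.0 = perfect circle
    mean_hue = cv2.mean(hsv, mask=mask)[0]              # color cue

    return {"area": area, "roundness": roundness, "mean_hue": mean_hue}
```

Features like these could feed any simple classifier (rules, k-NN, SVM) to automate the sorting step.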


2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within gesture classes, low resolution, and the fact that gestures are performed with the fingers. These challenges have drawn many researchers to this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer used for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal red, green, blue, depth (RGB-D) and optical-flow data, and passes these features to a long short-term memory (LSTM) recurrent network for frame-to-frame probability generation, with a connectionist temporal classification (CTC) network for loss calculation. We compute optical flow from the RGB data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks the frame-to-frame consistency of visual similarity in the unsegmented input stream. The CTC network finds the most probable sequence of frames for a gesture class; the frame with the highest probability value is selected from the CTC network by max decoding. The entire network is trained end-to-end with the CTC loss for gesture recognition. We use the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms and achieves an accuracy of 86%.
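Since the pipeline is described only in prose, the PyTorch sketch below shows the general shape of such a system: a small 2D CNN extracts per-frame features from stacked RGB-D/optical-flow channels, an LSTM produces frame-to-frame probabilities, and the CTC loss handles alignment over the unsegmented stream. The layer sizes, channel layout, and class count are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GestureCTCNet(nn.Module):
    """Per-frame 2D CNN features -> LSTM -> per-frame class log-probabilities for CTC."""

    def __init__(self, in_channels: int = 6, num_classes: int = 20, hidden: int = 256):
        # in_channels = 6 is an assumption: e.g. RGB (3) + depth (1) + 2-channel optical flow.
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # -> (B*T, 64, 1, 1)
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes + 1)    # +1 for the CTC blank label

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, H, W)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)                         # frame-to-frame features
        return self.head(out).log_softmax(dim=-1)         # (batch, time, classes+1)

# Training step with CTC loss (shapes only; data loading omitted).
model = GestureCTCNet()
ctc = nn.CTCLoss(blank=20)                                # blank index = num_classes
frames = torch.randn(2, 30, 6, 64, 64)                    # 2 clips, 30 frames each
targets = torch.tensor([3, 7])                            # one gesture label per clip
log_probs = model(frames).permute(1, 0, 2)                # CTC expects (time, batch, classes)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 30, dtype=torch.long),
           target_lengths=torch.ones(2, dtype=torch.long))
loss.backward()
```

At inference, greedy (max) decoding over the per-frame log-probabilities, followed by collapsing repeats and removing blanks, yields the predicted gesture label.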


Author(s):  
Biswanath Saha ◽  
Parimal Kumar Purkait ◽  
Jayanta Mukherjee ◽  
Arun Kumar Majumdar ◽  
Bandana Majumdar ◽  
...  
